\def\sect#1{\section{#1}} \def\ssect#1{\subsection{#1}} \def\sssect#1{\subsubsection{#1}} \def\lt#1{\left#1} \def\rt#1{\right#1} \def\t#1{\tilde{#1}} \def\h#1{\hat{#1}} \def\b#1{\bar{#1}} \def\frc#1#2{\frac{#1}{#2}} \def\Res#1{\mbox{Res}_{#1}} \begin{document} \title{Entanglement entropy of highly degenerate states and fractal dimensions} \author{Olalla A. Castro-Alvaredo} \affiliation {Centre for Mathematical Science, City University London, Northampton Square EC1V 0HB, U.K.} \author{Benjamin Doyon} \affiliation {Department of Mathematics, King's College London, Strand WC2R 2LS, U.K.} \date{\today} \pacs{03.65.Ud, 65.40.gd, 11.25.Hf, 75.10.Pq, 75.10.Jm} \begin{abstract} We consider the bi-partite entanglement entropy of ground states of extended quantum systems with a large degeneracy. Often, as when there is a spontaneously broken global Lie group symmetry, basis elements of the lowest-energy space form a natural geometrical structure. 
For instance, the spins of a spin-1/2 representation, pointing in various directions, form a sphere. We show that for subsystems with a large number $m$ of local degrees of freedom, the entanglement entropy diverges as $\frac{d}{2}\log m$, where $d$ is the fractal dimension of the subset of basis elements with non-zero coefficients. We interpret this result by seeing $d$ as the (not necessarily integer) number of zero-energy Goldstone bosons describing the ground state. We suggest that this result holds quite generally for largely degenerate ground states, with potential applications to spin glasses and quenched disorder. \end{abstract} \maketitle The entanglement entropy is a measure of entanglement between two complementary sets of observables in a quantum system \cite{bennet}. It is defined as the von Neumann entropy of the reduced density matrix of the state $|\Psi\rangle$ with respect to a tensor factor of the Hilbert space ${\cal H}$: \begin{equation} \label{def} S = -{\rm Tr}_{\cal A}(\rho_A\,\log\rho_A)\quad \text{with} \quad \rho_A = {\rm Tr}_{\cal B}|\Psi\rangle\langle\Psi|, \end{equation} and ${\cal H} = {\cal A} \otimes {\cal B}$. A related measure is obtained from the R\'enyi entropy, $S_n = \frc1{1-n}\log {\rm Tr}_{\cal A}(\rho_A^n)$; clearly, $S = S_1=\lim_{n\to 1^+}S_n$. The entanglement and R\'enyi entropies have important applications to e.g. quantum computation and numerical simulations of quantum systems. In extended quantum systems near to critical points, the entanglement entropy has turned out to reveal fundamental properties of ground states (for reviews, see e.g. \cite{special}). An important result is the so-called area law. 
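As a concrete illustration of the definitions of $S$ and $S_n$ above (our addition, a minimal sketch assuming numpy; the helper name is ours):

```python
import numpy as np

def renyi_entropy(psi, dimA, dimB, n=1):
    """Von Neumann (n = 1) or Renyi entropy of rho_A = Tr_B |psi><psi|,
    for a pure state |psi> on C^dimA (x) C^dimB."""
    M = np.asarray(psi).reshape(dimA, dimB)   # coefficient matrix
    rho_A = M @ M.conj().T                    # reduced density matrix
    lam = np.linalg.eigvalsh(rho_A)
    lam = lam[lam > 1e-12]                    # discard numerical zeros
    if n == 1:
        return float(-np.sum(lam * np.log(lam)))
    return float(np.log(np.sum(lam**n)) / (1 - n))

# Maximally entangled two-qubit (Bell) state: S = S_n = log 2 for all n,
# while any product state gives zero entanglement entropy.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
```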
Consider a quantum system of dimensionality $D\ge 2$ with correlation length $\xi$, and a subsystem ${\cal A}$ composed of the local degrees of freedom on a $D$-dimensional region $A$ of linear extension $\ell$ (generically, the region $A$ is composed of various components of different connectivities, and $\ell$ is the overall scale of $A$). It turns out that the entanglement entropy between the subsystem and the rest diverges as $\xi$ and $\ell$ increase, the ratio $r=\ell/\xi$ being fixed, with a power law $\ell^{D-1}$, with possible logarithmic corrections for gapless systems \cite{Bombelli:1986rw,Srednicki:1993im,Wolf08,Wolf06,Gioev06,CalMinVic11}. But this area law is special in the case $D=1$. There, the divergence is always logarithmic: $\frc{qc}6\log(\ell)$, where $q$ is the number of points separating $A$ from the rest and $c$ is the central charge of the critical theory \cite{Holzhey,CalabreseCardy}. Interestingly, it is the number $c$ that appears, which essentially measures the number of degrees of freedom that are carried over from the microscopic theory to the macroscopic universal theory. Further, for $D=1$ again, after subtracting this divergence the remainder is a finite quantity which depends on $r$, saturates to a finite value at $r=\infty$, and tends to this value exponentially, in a way that is solely determined by the spectrum of masses of the corresponding perturbation of the critical point \cite{entropy,entropy2}. The spectrum of asymptotic particles characterizes the low-energy degrees of freedom of the universal theory. Moreover, in systems with a boundary, the boundary degeneracy also appears after a natural subtraction \cite{CalabreseCardy,entropy3}. This degeneracy characterizes the number of degrees of freedom carried by the boundary. 
These results point to the observation that if the entanglement entropy diverges logarithmically at large subsystem size $\ell$, then the way it diverges is controlled by some basic counting of universal degrees of freedom. All these results were established for non-degenerate ground states, or ground states with small, finite degeneracies. A question thus arises as to the behaviour of the entanglement entropy for highly degenerate ground states. Let us start by discussing an example where a symmetry group is spontaneously broken: the Heisenberg ferromagnet. This is an $N$-site lattice ${\tt L}$ with spin-$1/2$ local degrees of freedom, with Hamiltonian \begin{equation} H = J\sum_{(i,j)\;\in\; {\rm edges\ of\ }{\tt L}} \vec\sigma_i\cdot\vec\sigma_j,\quad J<0, \label{xxx} \end{equation} where $\vec\sigma_i$ is a vector of Pauli matrices acting on site $i$. This model has an $SU(2)$ global symmetry, and the states $|\psi_{\vec{v}}\rangle^{(N)}=\otimes_{i\in{\tt L}} |\psi_{\vec{v}}\rangle_i$, where all spins point in the same direction $\vec{v}$ (i.e.~$\vec\sigma_i\cdot\vec{v}|\psi_{\vec{v}}\rangle_i = |\psi_{\vec{v}}\rangle_i$, $|\vec{v}|=1$), span the lowest-energy subspace. In the usual description, we make a choice of an arbitrary direction $\vec{v}_0$. Such a state is not invariant under $SU(2)$ transformations, hence the symmetry is spontaneously broken. In the thermodynamic limit $N\to\infty$, the Hilbert space ${\cal H}_{\vec{v}_0}$ is then built from the ground state $|\psi_{\vec{v}_0}\rangle^{(N)}$ together with all finite-energy, local excitations above it. By locality of the Hamiltonian, it excludes all ground states and excited states associated to other directions, ${\cal H}_{\vec{v}}$ for $\vec{v}\neq \vec{v}_0$ (these cannot be reached by a finite number of local changes of the infinite system). But linear combinations of the $|\psi_{\vec{v}}\rangle^{(N)}$ also give lowest-energy states, and for them we will take a different description of quantum states that is more appropriate. 
The lowest-energy subspace is the $(N+1)$-dimensional subspace forming a spin-$N/2$ representation. For every $N$, there exists a set of $N+1$ points $\vec{v}_k$ such that the set of vectors $|\psi_{\vec{v}_k}\rangle^{(N)}$ forms a basis for this subspace. Further, in the limit $N\to\infty$, every point on the unit sphere is arbitrarily close to such a basis point. A good description of the resulting space of infinite-volume lowest-energy quantum states is then obtained by using the geometry induced by averages of local operators (see \cite{permutation}, Section 4). In this geometry, the distance between states in directions $\vec{v}$ and $\vec{v}'$ is smoothly related to the distance between the vectors $\vec{v}$ and $\vec{v}'$ on the unit sphere. This geometry is convenient for discussing the entanglement entropy because, as is developed in \cite{permutation} (based on earlier works \cite{entropy}), the latter can be evaluated from the average of a local permutation operator. Linear combinations could involve infinitely many directions $\vec{v}$, with appropriate integration measures on the unit sphere. This occurs, e.g., when a ground state of the infinite-length one-dimensional Heisenberg ferromagnet is reached by an adiabatic lowering of the anisotropy of the XXZ model: an integration over a great circle on the unit sphere is obtained \cite{permutation}. Although each state $|\psi_{\vec{v}}\rangle^{(\infty)}$ has zero entanglement entropy (since it is factorisable), linear combinations do not, and linear combinations involving infinitely many directions $\vec{v}$ should have growing entropy as $\ell\to\infty$. What is the $\ell\to\infty$ behaviour for such infinite linear combinations? Let ${\cal A}$ be composed of $m$ degrees of freedom and $N=\infty$. Clearly, any minimal-energy state has a symmetry under exchange of any two sites, hence the entanglement entropy depends on $m$ but not on the particular sites chosen. 
We may take the $m$ sites to form a continuum of dimension $D$, writing $m = \ell^D$. First note that a large-$m$ divergence $\frc{1}2\log m$ of the entanglement entropy was found in \cite{permutation,popkov} for the state formed by an integration over a great circle. Second, recall that when there is spontaneous symmetry breaking, there are massless excitations in the spectrum, the Goldstone bosons. In general, our idea is that linear combinations composed of all points along an arc on the unit sphere should be interpreted as representing the presence of a zero-energy Goldstone boson corresponding to the continuous motion along this arc. Moreover, every linearly independent local direction on the unit sphere corresponds to a linearly independent Goldstone boson, each of which can be seen as a universal degree of freedom. Hence, the observations above suggest a divergence of the form $\frc{d}2 \log m$, where $d$ is the number of Goldstone degrees of freedom present in the linear combinations. This number is simply the dimension of the support of the linear combination on the unit sphere. The ``number'' of Goldstone degrees of freedom $d$ is not restricted to the integers: the support of the linear combination may have a fractal dimension. Here we argue that the large-$m$ (large-$\ell$) behaviour is \begin{equation}\label{main} S_n = \frc{d}2 \log m + O(1) = \frc{dD}{2} \log \ell + O(1) \end{equation} for all $n$, where $d$ is the (fractal) dimension of the support of the linear combination, $0\leq d\leq 2$ for the spin-$1/2$ Heisenberg ferromagnet. For instance, if the great circle in the above example is replaced by the Cantor set, we would find $d=\log(2)/\log(3)$. Note that the result is independent of $n$. A simple application of this formula is to detect a possible blurring of the dynamically chosen direction $\vec{v}_0$ obtained, for instance, as the system is cooled in a fixed magnetic field. 
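The Cantor-set value $d=\log(2)/\log(3)$ quoted above can be recovered by elementary box counting on the triadic construction; a minimal sketch (our code, pure Python):

```python
import math

def cantor_boxes(k):
    """Number of boxes of side 3**-k needed to cover the level-k
    triadic Cantor construction on [0,1] (exactly 2**k)."""
    intervals = [(0.0, 1.0)]
    for _ in range(k):
        # keep the left and right thirds of every interval
        intervals = [piece for (a, b) in intervals
                     for piece in ((a, a + (b - a) / 3),
                                   (b - (b - a) / 3, b))]
    return len(intervals)

def box_dimension(k):
    """Box-counting estimate log N(eps) / log(1/eps) at eps = 3**-k."""
    return math.log(cantor_boxes(k)) / math.log(3.0**k)
```

At scale $3^{-k}$ exactly $2^k$ boxes are occupied, so the estimate $\log N/\log(1/\epsilon)$ equals $\log 2/\log 3$ at every level.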
The blurring could lead to a linear combination covering a possibly fractal small surface around $\vec{v}_0$. Standard local observables would not discern this, whereas formula (\ref{main}) shows that the entanglement entropy is extremely sensitive to it, giving $d>0$ instead of $d=0$. Beyond the Heisenberg ferromagnet, our derivation below makes it clear that (\ref{main}) should hold much more generally. There are two conditions: 1) there exists a basis for the lowest-energy subspace where the entanglement entropy of each basis element is zero or small, and 2) the basis elements are given the local-operator geometry \cite{permutation}. The dimension $d$ in (\ref{main}) is that of the support of the linear combination in this geometry, which may be an integer or not, and which essentially counts the number of Goldstone bosons in the state. For instance, for a quantum system in a ``Mexican hat'' potential, the basis set is the geometrical circle at the bottom of the hat, and subsets of this will have $0\leq d\leq 1$; similar observations hold for any system with spontaneously broken continuous symmetry. Note that some ``permutation-symmetric'' (PS) states of the above type in a Hilbert space with on-site spin $S$ were considered in \cite{popkov}, and (\ref{main}) with $d=2S$ was observed. An analysis shows that the states chosen generalize the $S=1/2$ great-circle state. Further, this is in agreement with our general arguments, which imply that all possibilities $0\leq d\leq 4S$ may occur for PS states. Our formula is in sharp contrast with the behaviours reviewed above (e.g. $\ell^{D-1}$ for $D>1$), which are related to the geometric structure of the region $A$ and arise thanks to the locality of the system's interaction. To explain this, consider the case of spontaneous symmetry breaking: local interactions only fix the lowest-energy subspace, not the ground state. 
By choosing a basis of lowest-energy states with minimal entanglement and with a local-operator geometry, we expect that we encode all local information in the basis states themselves, so that only the symmetry is probed by the degeneracy. Hence, our result, which has to do with the degeneracy, cannot measure geometrical objects in the system's real space. Rather, a logarithm occurs, whose coefficient counts the Goldstone degrees of freedom associated with the symmetry. If condition 1) above does not hold, we expect two contributions to the asymptotics of the entanglement entropy: that of the large degeneracy (\ref{main}), and that coming from locality. Examples of fractal sets of minima are found wherever random potentials occur (quenched disorder, see e.g. \cite{CarLeDou}), with possible connections to glasses. In the classical phenomenology \cite{Dotsenko}, at temperatures below the glass transition point, the free energy surface reveals finer structures in the form of new energy minima within previous valleys, displaying self-similarity and a fractal structure; the set of effective minima has a nontrivial fractal dimension. High classical degeneracy also naturally occurs in frustrated spin systems \cite{Diep}. These degeneracies may be lifted by quantum fluctuations (so-called ``order by disorder''), although the underlying classical degeneracy is known to have nontrivial quantum effects and to survive semi-classically \cite{Berg}. By our formula (\ref{main}), the entanglement entropy could provide a further indicator, at the quantum level, of this (semi-)classical degeneracy. In the rest of this paper, we provide the main lines of the derivation of (\ref{main}). A more precise proof and statement will be given in a separate work. \section{The Heisenberg ferromagnet case} The present derivation uses the replica trick, whereby $n$ is assumed to be an integer $>1$. The result, however, can be interpreted for $n\in(1,\infty)$. 
This provides the unique analytic continuation which does not diverge exponentially at large $n$; as is usual, this analytic continuation is assumed to provide $S_n$ for all real $n\geq1$. Given a point $\vec{v}$ on the unit sphere $S^2$, let us denote by $\psi_{\vec{v}}\in{\cal F}$ the quantum state corresponding to the $N\to\infty$ limit of $|\psi_{\vec{v}}\rangle^{(N)}$. As developed in \cite{permutation}, a quantum state is a linear functional on the space of finitely-supported operators, which evaluates the average; e.g. $\psi_{\vec{v}}({\cal O}) = \lim_{N\to\infty}{}^{(N)}\langle\psi_{\vec{v}}|{\cal O}|\psi_{\vec{v}}\rangle^{(N)}$. We can write $\psi_{\vec{v}}$ as a product of single-site quantum states, all acting in the same way: $ \psi_{\vec{v}} = \bigotimes_{i\in{\mathbb{Z}}} \psi_{\vec{v};i}.$ At infinite volume, vectors pointing in different directions have zero overlap: $\lim_{N\to\infty} {}^{(N)}\langle \psi_{\vec{v}}|\psi_{\vec{v'}}\rangle^{(N)} = 0$ for $\vec{v}\neq \vec{v'}$. This holds as well with insertions of finitely-supported operators, so the infinite-volume limit of linear combinations $\sum_{\vec{v}} a_{\vec{v}} |\psi_{\vec{v}}\rangle^{(N)}$ gives the quantum state \begin{equation}\label{pa} \psi_{\{a_{\vec{v}}\}} := \sum_{\vec{v}} |a_{\vec{v}}|^2 \psi_{\vec{v}},\quad \sum_{\vec{v}} |a_{\vec{v}}|^2 = 1. \end{equation} In order to evaluate the R\'enyi entanglement entropy associated to the ground state $\psi_{\{a_{\vec{v}}\}}$ we recall the approach developed in \cite{permutation}. There, the R\'enyi entropy of a region $A$ in a quantum state $\psi$ was expressed as an average on the $n^{\rm th}$ tensor power of $\psi$: \begin{equation}\label{ren} S_n =\frc1{1-n} \log\left(\psi^{\otimes n}( \mathcal{T}_A)\right). 
\end{equation} The operator averaged is $\mathcal{T}_A=\prod_{i \in A} \mathcal{T}_i$, where $\mathcal{T}_i$ are \emph{local cyclic replica permutation operators} which act on site $i$ of the quantum spin chain by cyclicly permuting the spins of the $n$ replicas of the model at that particular site. One of the results of \cite{permutation} was the closed formula \begin{equation}\label{trace} {\mathcal{T}}_i=\text{Tr}_{\text{aux}} \prod_{\alpha=1}^n \sum_{\epsilon_1,\epsilon_2} E^{\epsilon_1\epsilon_2}_{\text{aux}} E_{\alpha,i}^{\epsilon_2\epsilon_1}, \end{equation} where $ E_{V}^{\epsilon_2\epsilon_1}$ represent elementary $2\times 2$ matrices with a single non-vanishing entry at row $\epsilon_2$, column $\epsilon_1$, acting on space $V=\alpha,i$ (site $i$ tensor copy $\alpha$) or $V={\rm aux}$ (auxiliary space). For the quantum state $\psi_{\{a_{\vec{v}}\}}$, we write \[ \psi_{\{a_{\vec{v}}\}}^{\otimes n} = \sum_{\{\vec{v}_\alpha:\alpha=1,\ldots,n\}} \lt(\prod_{\alpha=1}^n |a_{\vec{v}_\alpha}|^2\rt) \bigotimes_{\alpha=1}^n \psi_{\vec{v}_\alpha}. \] From the trace expression (\ref{trace}) we find \begin{equation} \bigotimes_{\alpha} \psi_{\vec{v}_\alpha} \lt({\cal T}_A\rt) = \prod_{i\in A}\text{Tr}_{\text{aux}} \prod_{\alpha} \sum_{\epsilon_1,\epsilon_2} E_{\text{aux}}^{\epsilon_1\epsilon_2} \psi_{\vec{v}_\alpha;i}\lt(E_{i}^{\epsilon_2\epsilon_1}\rt). \end{equation} Clearly, $\psi_{\vec{v}_\alpha;i}\lt(E_{i}^{\epsilon_2\epsilon_1}\rt)$ is independent of $i$. 
Writing $|\psi_{\vec{v}}\rangle = s_{\vec{v},1}|\uparrow\rangle + s_{\vec{v},2}|\downarrow\rangle$, we find $\psi_{\vec{v}_\alpha,i}\lt(E_{\alpha,i}^{\epsilon_2\epsilon_1}\rt) = s_{\vec{v}_\alpha,\epsilon_2}^* s_{\vec{v}_\alpha,\epsilon_1}$, and tracing over the auxiliary space we obtain \[ \text{Tr}_{\text{aux}} \prod_{\alpha} \sum_{\epsilon_1,\epsilon_2} E_{\text{aux}}^{\epsilon_1\epsilon_2} \psi_{\vec{v}_\alpha;i}\lt(E_{i}^{\epsilon_2\epsilon_1}\rt) = \prod_{\alpha} \langle\psi_{\vec{v}_\alpha}|\psi_{\vec{v}_{\alpha+1}}\rangle. \] Hence, we find \begin{equation}\label{rgsa} S_n= \frc1{1-n} \log\lt(\sum_{\{\vec{v}_\alpha\}} \lt[\prod_{\alpha} |a_{\vec{v}_\alpha}|^2 \rt] \lt[\prod_{\alpha} \langle\psi_{\vec{v}_\alpha}|\psi_{\vec{v}_{\alpha+1}}\rangle\rt]^{m}\rt) \end{equation} with $\vec{v}_{n+1} := \vec{v}_1$. This saturates at large $m$ to \begin{equation}\label{saturation} \lim_{m\to\infty} S_n = \frc1{1-n}\log\lt(\sum_{\vec{v}} |a_{\vec{v}}|^{2n}\rt). \end{equation} That is, as expected, for any ground state given by a finite linear combination of basic zero-entropy states, the entanglement entropy reaches a finite maximum as the number $m$ of sites of $A$ tends to infinity. This corresponds to the case $d=0$ in (\ref{main}). More interesting behaviours are obtained from ``infinite linear combinations'' of basic states, generalising (\ref{pa}). Given a smooth, self-avoiding path $\vec\gamma:[0,1]\to S^2$ and a smooth function $f:S^2\to{\mathbb{R}}^+$ with $\int_0^1 |d\vec{\gamma}(t)|\,f(\vec{\gamma}(t)) = 1$, the following integral can be defined and is a quantum ground state: $\psi^{(1)}:= \int_0^1 |d\vec{\gamma}(t)| \,f(\vec{\gamma}(t))\, \psi_{\vec\gamma(t)}$. Similarly, let $\vec\mu:[0,1]\times [0,1] \to S^2$ be a smooth two-dimensional surface patch such that $\int_0^{1} \int_0^{1} |d^2\vec{\mu}(\lambda,\phi)| f(\vec{\mu}(\lambda,\phi))=1$ (where $|d^2\vec{\mu}(\lambda,\phi)|$ is the surface element on the unit sphere). 
Then, the following is a ground state: $ \psi^{(2)}:= \int_0^{1} \int_0^{1} |d^2\vec{\mu}(\lambda,\phi)|\, f(\vec{\mu}(\lambda,\phi))\, \psi_{\vec\mu(\lambda,\phi)} $. Generalising, we may consider the set of non-zero coefficients to be a fractal set $\mathcal{W} \subset S^2$ with fractal dimension $d$. With $d{\mathcal{H}}(\vec{v})$ the corresponding Hausdorff integration measure, we may write \[ \psi^{(d)}:=\int_{\mathcal{W}}d{\mathcal{H}}(\vec{v}) \,f(\vec{v})\, \psi_{\vec{v}} \,\, \,\text{with}\,\, \int_{\mathcal{W}}d{\mathcal{H}}(\vec{v})\, f(\vec{v})= 1. \] The Hausdorff measure is expected to occur naturally in taking the large-volume limit, if the set of vectors $\vec{v}$ such that $a_{\vec{v}}\neq0$ becomes a fractal set. Computing the R\'enyi entropy of $\psi^{(d)}$ yields a simple generalisation of (\ref{rgsa}) where the sums over the vectors $\vec{v}_\alpha$ are replaced by integrations and the coefficients $|a_{\vec{v}_\alpha}|^2$ by the functions $f(\vec{v}_\alpha)$. The logarithmic factor in (\ref{rgsa}) becomes \begin{equation}\label{rgsf} \log\lt(\int_{\mathcal{W}} \prod_{\alpha} d{\mathcal{H}}(\vec{v}_\alpha)\,f(\vec{v}_\alpha) \lt(\prod_{\alpha} \langle\psi_{\vec{v}_\alpha}|\psi_{\vec{v}_{\alpha+1}}\rangle\rt)^{m}\rt). \end{equation} The large-$m$ asymptotics of these expressions will however be radically different from (\ref{saturation}): there will be no saturation, and we will recover the behaviour highlighted in (\ref{main}). This can be shown using a saddle-point analysis, as was done in \cite{permutation} in a particular case; here it is generalised to integrals over fractal domains. For the explicit calculations, we use $ |\psi_{\vec{v}}\rangle = \frc1{\sqrt{2}} \left(\begin{array}{c} \sqrt{1+z} \\ \sqrt{1-z} \, e^{i\theta} \end{array}\right) $, where $\vec{v}=:(x,y,z)$ is a unit vector, and $\theta$ is defined by $x+iy = \sqrt{1-z^2}\, e^{i\theta}$. 
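Using this spinor parametrisation, the saturation (\ref{saturation}) for a finite linear combination can be checked directly from the overlap formula (\ref{rgsa}); a sketch (our code, assuming numpy; function names are ours):

```python
import numpy as np
from itertools import product

def spinor(v):
    """Two-component state |psi_v> for a unit vector v = (x, y, z)."""
    x, y, z = v
    theta = np.angle(x + 1j * y)
    return np.array([np.sqrt((1 + z) / 2),
                     np.sqrt((1 - z) / 2) * np.exp(1j * theta)])

def renyi_finite(a, vs, n, m):
    """Renyi entropy of sum_v a_v |psi_v> on m sites, via the sum over
    cyclic products of overlaps <psi_v|psi_v'> raised to the power m."""
    s = [spinor(v) for v in vs]
    p = np.abs(np.asarray(a, complex))**2
    total = 0.0 + 0.0j
    for idx in product(range(len(vs)), repeat=n):
        weight = np.prod([p[i] for i in idx])
        cyc = np.prod([np.vdot(s[idx[al]], s[idx[(al + 1) % n]])
                       for al in range(n)])       # cyclic overlap product
        total += weight * cyc**m
    return float(np.log(total.real) / (1 - n))
```

For two equal-weight directions the entropy grows with $m$ and saturates, for $n=2$, at $-\log\sum_{\vec v}|a_{\vec v}|^4 = \log 2$.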
From this we see that $|\langle\psi_{\vec{v}}|\psi_{\vec{w}}\rangle|\leq1$, with equality if and only if $\vec{v}=\vec{w}$. The saddle-point analysis from (\ref{rgsf}) is done by expanding the overlaps $\langle\psi_{\vec{v}_\alpha}|\psi_{\vec{v}_{\alpha+1}}\rangle$ around $\vec{v}_\alpha = \vec{v}_{\alpha+1}$, and re-writing the product $\prod_{\alpha}$ of these overlaps as an exponential. We get \begin{equation} \prod_{\alpha=1}^n \langle\psi_{\vec{v}_\alpha}| \psi_{\vec{v}_{\alpha+1}}\rangle = \exp\lt[-\frac{1}{8}\sum_{\alpha=1}^n |\vec{v}_{\alpha+1} -\vec{v}_\alpha|^2 + \ldots\rt],\label{eq1} \end{equation} where the ellipsis stands for terms that are order-2 and antisymmetric, as well as higher-order terms. The order-2 antisymmetric terms vanish when the integrations in (\ref{rgsf}) are performed. In the case of $\psi^{(1)}$ for instance, we may use the assumptions relating to $f$ and $\gamma$: both are smooth, and the curve $\gamma$ is self-avoiding. Hence, with $\vec{v}_\alpha = \vec{\gamma}(t_\alpha)$, the maximum occurs when $t_\alpha = t_{\alpha+1}$ for all $\alpha=1,\ldots,n$. In this case we find \begin{equation} \prod_{\alpha=1}^n \langle\psi_{\vec{\gamma}(t_\alpha)}| \psi_{\vec{\gamma}(t_{\alpha+1})}\rangle = \exp\lt[-\frac{|\dot{\vec{\gamma}}|^2}{8}\sum_{\alpha=1}^n (t_{\alpha+1} -t_\alpha)^2 + \ldots\rt],\label{eq2} \end{equation} where $|\dot{\vec{\gamma}}|^2$ is evaluated at $t=t_1$. The saddle-point analysis is then performed as follows. We need to raise the quantity above to the power $m$ and substitute into the integral (\ref{rgsf}). We can replace $f(\vec\gamma(t_\alpha))$ by $f(\vec\gamma(t_1))$ for all $\alpha$, since $f$ is smooth. Changing variables to $\hat{t}_i=\sqrt{m}({t_{i}-t_{1}})$, $i=2,\ldots,n$, guarantees that larger positive powers of $\h{t}_\alpha$ give lower-order contributions at large $m$. 
We obtain \begin{eqnarray} S_n&=& \frc1{1-n} \log\lt(\frac{1}{m^{\frac{n-1}{2}}}\int_{0}^{1} dt_1| \dot{\vec{\gamma}}(t_1)|^n f(\vec{\gamma}(t_1))^n \rt.\label{int}\\ && \lt. \int_{-\infty}^{\infty} d^{n-1}\hat{t} \, e^{-\frac{|\dot{\vec{\gamma}}|^2}{8}\left[\sum\limits_{\alpha=2}^{n-1} (\hat{t}_{\alpha+1} - \hat{t}_{\alpha})^2+\h{t}_2^2 + \h{t}_n^2 \right]+ O(\hat{t}^3/\sqrt{m})}\rt),\nonumber \end{eqnarray} where the integrals over $\h{t}_2,\ldots,\h{t}_n$ have been extended to $(-\infty,\infty)$ (the resulting correction terms are exponentially small). These integrals are of standard Gaussian type and can be carried out explicitly. A very similar computation can be carried out for the state $\psi^{(2)}$ instead of $\psi^{(1)}$. The final result can be expressed in both cases $d=1$ and $d=2$ as \begin{equation}\label{logm} S_n \sim \frc{d}2 \log \lt(\frc{m}{8\pi}\rt) +\frac{1}{1-n} \log\lt(n^{-\frc{d}2}\int |d^d\vec{v}|\, \,f(\vec{v})^n\rt), \end{equation} where higher-order corrections would be $O(m^{-1/2})$. This is in agreement with (\ref{main}). Note that the constant term in (\ref{logm}) is $-\log(f_{\rm max})$ as $n\to\infty$, where $f_{\rm max}$ is the maximum of $f$ on its support. At $n=1$, we have rather $d/2-\int |d^d\vec{v}|\, f(\vec v) \log f(\vec{v})$. The calculation for fractal sets follows similar lines. A crucial feature of the Hausdorff measure is its scaling property. On the plane, the Hausdorff measure ${\cal H}'$ satisfies $ s^{d} \, {\mathcal{H}}'(\mathcal{W}') ={\mathcal{H}}'(s \mathcal{W}'+\vec{u}) $ for any $\mathcal{W}'\subset{\mathbb{R}}^2$. For the measure $\mathcal{H}$ on $S^2$, this scaling covariance is replaced by an asymptotic behaviour that gives rise to the measure $\mathcal{H}'$ on the tangent plane: \begin{equation}\label{asympt} \lim_{m\to\infty} m^{d/2}\,d{\cal H}(\h{\vec{v}}_i/\sqrt{m} + \vec{v}_1) = d{\cal H}'(\h{\vec{v}}_i). 
\end{equation} Putting (\ref{eq1}) inside (\ref{rgsf}), changing variables to $\h{\vec{v}}_i = \sqrt{m}(\vec{v}_i-\vec{v}_1)$, $i=2,\dots,n$, and using (\ref{asympt}), as $m\to\infty$, \begin{eqnarray} S_n&\sim& \frc1{1-n} \log\lt(\frac{1}{m^{\frac{d(n-1)}{2}}}\int_{{\cal W}} d{\cal H}(\vec{v}_1) f(\vec{v}_1) \int_{{\cal W}_m'} \prod_{i=2}^n \rt.\label{intd}\nonumber\\ && \lt. \hspace{-1cm} d{\cal H}'(\h{\vec{v}}_i) f\lt(\frc{\h{\vec{v}}_i}{\sqrt{m}} +\vec{v}_1\rt) \, e^{-\frac{1}{8}\left[\sum\limits_{\alpha=2}^{n-1} |\hat{\vec{v}}_{\alpha+1}-\hat{\vec{v}}_{\alpha}|^2+|\hat{\vec{v}}_2|^2 + |\hat{\vec{v}}_n|^2 \right]}\rt)\nonumber \end{eqnarray} where ${\cal W}_m' = \sqrt{m}({\cal W}-\vec{v}_1)$ (projected to the tangent plane at $\vec{v}_1$). Although the integral might not exist in the large-$m$ limit, it is bounded, thanks to the exponentially decaying factor. This boundedness immediately gives rise to the leading asymptotics (\ref{main}). It would be desirable to investigate more precisely the nature of the constant corrections to this general leading behaviour; we hope to return to this in a future work. {\bf Acknowledgment:} We would like to thank J.L. Cardy for useful comments.
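As a simple numerical check of the analysis above (our addition, assuming numpy): for the uniform great-circle state at $n=2$, the spinor parametrisation gives $|\langle\psi(t)|\psi(t')\rangle|^2 = \cos^2((t-t')/2)$ on the equator, and (\ref{logm}) with $d=1$, $n=2$, $f=1/(2\pi)$ works out to $S_2 \simeq \frac12\log(\pi m)$:

```python
import numpy as np

def S2_great_circle(m, K=20000):
    """n = 2 Renyi entropy of the uniform great-circle state on m sites,
    from the overlap formula |<psi(t)|psi(t')>|^2 = cos^2((t - t')/2):
    S_2 = -log of the angular average of cos^{2m}."""
    delta = np.linspace(0.0, 2.0 * np.pi, K, endpoint=False)
    return -np.log(np.mean(np.cos(delta / 2.0)**(2 * m)))
```

Quadrupling $m$ shifts $S_2$ by $\log 2$, exhibiting the slope $d/2 = 1/2$ in $\log m$.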
\section{Introduction} Since the Euclidean Einstein action of the gravitational field is not bounded from below, the minimal Lagrangian $\sqrt{g}R/16\pi G$ has often been ``stabilized'' with a term $\sqrt{g}R^2$, which does not have sizable effects at macroscopic distances, but only on a very small scale \cite{stelle1977renormalization,rovelli2004quantum,hamber2008quantum,capozziello2011extended}. The $R^2$ term can spoil the unitarity of perturbation theory, but non-perturbative approaches have the potential to solve this problem \cite{hamber2009quantum,ambjorn2012nonperturbative,hamber2019vacuum,ambjorn2019towards}. Moreover, in the Asymptotic Safety scenario (with renormalization around a UV fixed point) terms quadratic in the curvature arise in a natural way \cite{niedermaier2006asymptotic,reuter2018quantum}. Bonanno and Reuter have recently studied the continuum quadratic theory through non-perturbative analytical methods, restricted to conformal modes, and found indications of a ``rippled'' ground state that violates translational symmetry \cite{bonanno2013modulated,bonanno2019structure}. They have suggested that the vacuum of asymptotically safe gravity theories quadratic in the curvature has the form of a kinetic condensate, similar (if confirmed in the full theory with all physical degrees of freedom) to the Savvidy vacuum of Quantum Chromodynamics \cite{lauscher2000rotation,branchina1999antiferromagnetic}. In this work we consider a dimensional reduction of the metric, employed in several classical and quantum gravitational models, where the only variable is the metric component $g_{rr}(r)\equiv A(r)$. The metric is assumed to be independent of time and of the angular variables; we also suppose that $g_{00}(r)=const$. Technically it would be possible to introduce a dependence on time and on the angles and a dynamical $g_{00}$, but the chosen reduction has the advantage of leading to a strong simplification of the action. 
The Einstein term, here denoted $S^R$ (while $S^{R^2}$ denotes the $R^2$ term), is written simply as \cite{weinberg1973gravitation,modanese2007vacuum,modanese2019metrics,modanese2020quantum} \begin{equation} S^R=\frac{\tau}{16\pi G} \int_0^\infty dr \, \sqrt{|A|} \left( \frac{rA'}{A^2}+1-\frac{1}{A} \right) \label{SR} \end{equation} The time interval $\tau$ can be interpreted as the duration of the vacuum fluctuations of the metric that we are going to simulate. These fluctuations are considered with respect to the classical vacuum solution $A(r)=1$, for which $S^R=0$. We have chosen in (\ref{SR}) natural units such that $\hbar=c=1$, and in the following we shall also fix the length unit to $L_{Pl}$ and thus $G=1$. In the numerical simulations we discretize the variable $r$ on a finite interval $(0,L)$, with the boundary condition $A(L)=1$, and we investigate the scaling in $L$. In Sect.\ \ref{discretized-action} we write the discretized version of the action $S^R+S^{R^2}$ with variables $A_h=A(h\cdot L_{Pl})$, where $h=0,1,...,N$, so that $L=NL_{Pl}$. In Sect.\ \ref{results} we report the results of simulations in which this system starts from a flat-space configuration with $A_h=1$ for all $h$; then, at each step, one of the $A_h$ is randomly chosen and varied as $A_h \to A_h \pm \varepsilon$. The variation is accepted or rejected with a standard Metropolis criterion \cite{newman1999monte}: it is always accepted if $\Delta S<0$, and otherwise accepted if $\xi<\exp(-\beta \Delta S)$, where $\xi$ is a random variable uniformly distributed in the interval $(0,1)$. In equilibrium metric configurations the variables $A_h$ oscillate quite strongly, with amplitudes which of course decrease at large $\beta$ ($\beta$ plays the role of the inverse temperature in the equivalent statistical system). 
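The Metropolis update just described can be sketched as follows (our code, not the authors' implementation; assuming numpy, with `action` any callable returning the total action of a configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_step(A, action, beta, eps):
    """One Metropolis update of a randomly chosen A_h (the last entry,
    the boundary value A(L) = 1, is held fixed): always accept if the
    action decreases, otherwise accept with probability e^{-beta dS}."""
    h = rng.integers(len(A) - 1)
    old = A[h]
    S_old = action(A)
    A[h] = old + eps * rng.choice([-1.0, 1.0])
    dS = action(A) - S_old
    if dS > 0 and rng.random() >= np.exp(-beta * dS):
        A[h] = old                    # reject the move
    return A
```

At large $\beta$ (low temperature) uphill moves are essentially never accepted and the configuration relaxes towards a minimum of the action, in line with the behaviour described in the text.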
These oscillations, however, are such that the total action remains $\ll \hbar$, thanks to the fact that the $R$ term in the action has indefinite sign, so that opposite fluctuations of the $A_h$'s can compensate each other. This feature is unique to the gravitational action, even when stabilized with the $R^2$ term, and apparently leads to the formation of vacuum fluctuations much larger than in other quantum field theories \cite{modanese2000paradox}. Numerical simulations of this effect have already been reported in \cite{modanese2019metrics,modanese2020quantum}, but without the stabilizing $R^2$ term, and were therefore less clear and reliable. The average total action scales quite accurately in proportion to $\beta^{-1}$ and $N$. This allows us to extend the validity of the simulations to large $N$, i.e., to a scale much larger than the Planck scale, and to a continuum limit, provided $\beta$ is increased, implying a lower temperature and thus a better approximation of the ground state. In taking the large-$N$ limit one must also consider the time duration $\tau$ of the fluctuations, which needs to be large enough to represent consistently the adiabatic switching on and off of the fluctuations. Otherwise, the action should be modified by including the contributions to $R$ coming from time derivatives. Among the quantities measured in the simulations, the most interesting ones are probably the averages $\langle A_h \rangle=\langle g_{rr}(hL_{Pl}) \rangle$. These show what average metric emerges spontaneously from the action without imposing a background. In long runs $\langle A_h \rangle$ turns out to be independent of $h$ (metric constant in the interval $(0,L)$) and significantly different from 1. 
Let us define $\psi_h=\langle A_h -1\rangle $, and let $\psi$ be the lattice average of $\psi_h$, namely \begin{equation} \psi=\frac{1}{N+1} \sum_{h=0}^N \psi_h \label{def-psi} \end{equation} while $\sigma_\psi$ quantifies the (small) spatial fluctuations of $\psi$: \begin{equation} \sigma^2_\psi=\frac{1}{N+1} \sum_{h=0}^N (\psi_h-\psi)^2 \end{equation} The ``order parameter'' $\psi$ turns out to be independent of $\alpha$ and $\beta$ and to scale in $N$ as $N^{-1}$ (Tab.\ \ref{table2}). The measured difference between the average metric and flat space may look tiny (after all, we are close to the Planck scale), but it is relevant, in our opinion, if correctly interpreted. In fact the scale-independent product $M=N\psi$ turns out to be equal to the Planck mass and can be regarded as twice the total virtual mass of the vacuum fluctuation described by the average metric $\psi_h$, as discussed in Sect.\ \ref{interp}. Sect.\ \ref{concl} contains our conclusions. \section{The discretized $R+R^2$ action} \label{discretized-action} Let us choose units in which not only $\hbar=c=1$ (natural units, as in \cite{modanese2020quantum}), but also $G=1$. This means that all lengths are measured as multiples of the fundamental length $L_{Pl}$. We consider metric configurations that differ from flat space in an interval of $r$ from 0 to $L$ and divide this interval into $N$ parts in the discretized calculation. In order to obtain good precision, $N$ should be as large as possible; the data and pictures of Sect.\ \ref{results} are mostly for $N=1600$, but the scaling in $N$ is also very important, and we have data up to $N\sim 10^5$. The discretization cut-off is chosen as $d=L/N=L_{Pl}=1$, so that the interval $L$ has a length equal to $N$ times the Planck length. 
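The definitions of $\psi$ and $\sigma_\psi$ above translate directly into code; the following sketch (ours, not from the published appendix) computes them from the measured averages $\langle A_h\rangle$, $h=0,\dots,N$:

```python
def order_parameter(A_avg):
    """Given the list of measured averages <A_h>, h = 0..N, return the
    lattice average psi of psi_h = <A_h> - 1 (Eq. (def-psi)) and the
    spatial fluctuation sigma_psi of the psi_h around psi."""
    psi_h = [a - 1.0 for a in A_avg]
    n = len(psi_h)  # n = N + 1 lattice sites
    psi = sum(psi_h) / n
    sigma2 = sum((p - psi) ** 2 for p in psi_h) / n
    return psi, sigma2 ** 0.5
```

For the equilibrium value $\langle A_h\rangle = 1.000625$ reported below (with $N=1600$) this gives $\psi = 6.25\cdot 10^{-4}$ and $N\psi \simeq 1$.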
For the time duration $\tau$ of our fluctuations we fix at first $\tau=1$; this is however a very short time, and if we want to increase $N$ (and therefore $L$) in a way that is consistent with our quasi-stationary approximation for the metric, we will eventually need to increase $\tau$. We shall see that this is possible thanks to the favourable scaling of the simulation in the inverse temperature $\beta$. The discretized action for time-independent metrics with spherical symmetry and $g_{00}=const.$ can be written as \begin{equation} S=\sum_{h=0}^N S_h = \tau d \sum_{h=0}^N \left( S_h^R+S_h^{R^2} \right) \label{total-action} \end{equation} where the term $S_h^R$ is given by \begin{equation} S_h^R=\sqrt{|\hat{A}_h|} \tilde{S}_h \end{equation} \begin{equation} \tilde{S}_h=\frac{1}{\hat{A}_h^2} \left( \frac{A_{h+1}-A_h}{d}\right) \left( h+\frac{1}{2} \right) d +1-\frac{1}{\hat{A}_h} \end{equation} and \begin{equation} \hat{A}_h=\frac{1}{2}(A_{h+1}+A_h) \end{equation} It is straightforward to see that $S^R$ is the discretized version of the continuum action (\ref{SR}), since $\hat{A}_h$ is the average of $A$ on the $h$-th small space interval, $(A_{h+1}-A_h)/d$ is an estimate of $A'$ on the same interval, and $r$ is discretized as $(h+1/2)d$. The contribution of the continuum term $\alpha \sqrt{g}R^2$ is obtained by taking the square of $\tilde{S}_h$ and dividing by $r^2$: \begin{equation} S_h^{R^2}=\frac{\alpha \sqrt{|\hat{A}_h|}}{d ^2 \left( h+\frac{1}{2} \right)^2} \tilde{S}_h^2 \end{equation} The resulting total action (\ref{total-action}) is relatively simple, considering that we are dealing with a gravitational theory quadratic in the curvature. In the term $S^{R^2}$, $\alpha$ is a dimensionless coupling whose amplitude will be set empirically in the simulations following certain criteria explained below. Our definition of $\alpha$ differs from that in \cite{bonanno2013modulated,bonanno2019structure} by a factor $4\pi$. 
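As a check of the formulas above, the total discretized action (\ref{total-action}) can be evaluated directly. The following is our own transcription in Python; the published appendix contains the actual simulation code, which updates only the two local terms affected by a move.

```python
def total_action(A, alpha, tau=1.0, d=1.0):
    """Discretized action S = tau * d * sum_{h=0}^{N} (S_h^R + S_h^{R^2})
    for a metric profile A[0], ..., A[N+1], where A[N+1] is the boundary
    value (fixed to 1 in the runs described in the Results section)."""
    S = 0.0
    for h in range(len(A) - 1):          # h = 0, ..., N
        Ah = 0.5 * (A[h + 1] + A[h])     # average of A on the interval
        Stilde = ((A[h + 1] - A[h]) / d) * (h + 0.5) * d / Ah**2 \
            + 1.0 - 1.0 / Ah
        SR = abs(Ah) ** 0.5 * Stilde
        SR2 = alpha * abs(Ah) ** 0.5 * Stilde**2 / (d**2 * (h + 0.5) ** 2)
        S += SR + SR2
    return tau * d * S
```

For the flat configuration $A_h=1$ the action vanishes identically, as it should.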
\subsection{Description of the Metropolis algorithm} \label{metropolis} The code of the Monte Carlo algorithm, both in Python and C, is appended at the end of the source of the arXiv preprint of this work. In the code, $m$ is the number of sub-cycles, usually 200; at the end of each sub-cycle, the algorithm displays the current value of the total action $S$, the latest variations of $S$ (in the parts $R$ and $R^2$), and the current value of the ratio between the steps with $\Delta S$ positive and negative, which indicates the thermalization status. $n$ is the number of elementary steps in a sub-cycle, typically from 20 to 80 million. At each elementary step one of the $N$ variables $A_h$ is randomly chosen and varied as $A_h\pm \varepsilon$, with $\varepsilon=10^{-6}$. The variable \texttt{sommaA} contains the sum of the values of $A_h$ used at the end to compute the average $\langle A_h \rangle$. It is updated only when $A_h$ changes, and keeps track of the number of steps without changes through the counter \texttt{contA}. On the other hand, the corresponding sums for $\exp(-\beta \Delta S)$, $S$ and $S^2$ and the counters for $\Delta S$ positive (accepted and not) and $\Delta S$ negative are updated at each step. All the averages are computed excluding an initial equilibration time corresponding to $m_0<m$ sub-cycles (normally $m_0=100$). When $A_h$ is changed, the action changes in two of its terms $S_h$; the first one involves $A_h$ and $A_{h+1}$, and for it the quantity $\tilde{S}_h$ is written as \texttt{Shtilde=(Ath2*(h+0.5)*(A[h+1]-A[h])+1-Ath1)} \noindent where we evaluate in advance \texttt{Ath=0.5*(A[h+1]+A[h])}, \ \texttt{Ath1=1/Ath}, \ \texttt{Ath2=Ath1*Ath1} The second term involves $A_h$ and $A_{h-1}$, and for it the quantity $\tilde{S}_h$ is written as \texttt{ShtildeM=(AthM2*(h-0.5)*(A[h]-A[h-1])+1-AthM1)} \noindent with \texttt{AthM=0.5*(A[h]+A[h-1])} \noindent etc. 
The two changing terms of the action are computed (including suitable factors for $S_h^R$ and $S_h^{R^2}$) with the old value of $A_h$, then with the new one, and finally one takes the difference $\Delta S$. \section{Results} \label{results} \subsection{Runs with fixed $N$ (scaling in $\beta$)} \label{fixed-N} In these runs the number of lattice spacings is kept constant at $N=1600$. The average values obtained for the action at equilibrium by changing the inverse temperature $\beta$ are summarized in Tab.\ \ref{table1} and in the plot of Fig.\ \ref{grafico-azione-beta}, where $\langle S \rangle$ is first seen to decrease as $\beta^{-1}$ and then to collapse below the noise level for large $\beta$. Fig.\ \ref{Ah-foto-media} illustrates the behavior of $A_h$, with an instant picture of a typical fluctuating configuration (\ref{Ah-foto-media}-(a)) and the behavior of $\langle A_h \rangle$ at two different temperatures (\ref{Ah-foto-media}-(b) and \ref{Ah-foto-media}-(c)). When the temperature is sufficiently low the average metric $\langle A_h \rangle$ becomes independent of $h$, i.e.\ of the radial coordinate, and stabilizes with small fluctuations at 1.000625, thus with an ``order parameter'' $\psi=6.25\cdot 10^{-4}$ with respect to flat space. In Sect.\ \ref{scaling-N} we show how this number scales in $N$, in such a way as to keep the product $N\psi$ equal, with good precision, to the Planck mass. One of the challenges posed by the numerical simulations is the choice of ``good'' values of the coupling $\alpha$ and inverse temperature $\beta$. Since for weak fields one normally has $R^2 \ll R$, it seems at first natural to choose a large value of $\alpha$ in order to have an effective stabilization. However, by analysing in several trials the random variations of the terms $R$ and $R^2$, one finds that the term $R$ becomes almost irrelevant if $\alpha$ is much larger than $\sim 10^3$, and in this case the thermalization of the algorithm is difficult to achieve. 
On the other hand, if $\alpha$ is less than $\sim 10^1$ the stabilization is not sufficient and the system may collapse as a consequence of some large negative fluctuations of the action, especially if $\beta$ is not large enough. For this reason we set $\alpha=25$ in most of the simulations. Changes in $\alpha$ do not affect the values of $\langle S \rangle$ or $\langle A_h \rangle$, but only the transition probability in the algorithm (see an example in Tab.\ \ref{table1}). \begin{figure}[h] \begin{center} \includegraphics[width=7.0cm,height=5.1cm]{beta-S.pdf} \caption{ Log-log plot of the average action as a function of $\beta$, with $\alpha=25$, $N=1600$ (see complete data in Tab.\ \ref{table1}). $\langle S \rangle$ decreases almost exactly as $\beta^{-1}$ up to $\beta \simeq 10^{11}$, then it drops abruptly, mixing with the noise. } \label{grafico-azione-beta} \end{center} \end{figure} When the algorithm has reached equilibrium, the number of accepted steps with $\Delta S>0$ is equal to a fraction $\langle e^{-\beta \Delta S} \rangle$ of all steps with $\Delta S>0$. For example, when $\beta=10^8$ we can deduce from Tab.\ \ref{table1} that out of 192 random variations of $A_h$ there will be on average 92 with $\Delta S<0$ and 100 with $\Delta S>0$, of which 92 are accepted. When the temperature drops to $\beta=10^{11}$, out of 106 variations there are 6 with $\Delta S<0$ and 6 accepted with $\Delta S>0$, etc. 
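The bookkeeping in these examples follows from the equilibrium condition that accepted uphill moves balance downhill moves on average. The following is our own consistency sketch (not part of the published code), with $f$ denoting the measured $\langle e^{-\beta\Delta S}\rangle$:

```python
def equilibrium_counts(f, total):
    """At equilibrium, every downhill move (dS < 0, always accepted) is
    balanced on average by an accepted uphill move, so out of `total`
    trial moves a fraction f/(1+f) has dS < 0 and a fraction 1/(1+f)
    has dS > 0, of which the fraction f is accepted."""
    n_down = total * f / (1.0 + f)
    n_up = total / (1.0 + f)
    n_up_accepted = f * n_up
    return n_down, n_up, n_up_accepted
```

With $f=0.92$ and 192 trials this reproduces the numbers quoted above (92 downhill, 100 uphill, 92 accepted); with $f=0.06$ and 106 trials it gives 6 and 6.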
\begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline $\beta$ & $\langle S \rangle$ & $\langle e^{-\beta \Delta S} \rangle$ & $\sigma_S/\langle S \rangle$ \\ \hline $10^8$ & $8.00\cdot 10^{-6}$ & 0.92 & 0.04 \\ $3.16\cdot 10^8$ & $2.54\cdot 10^{-6}$ & 0.86* & 0.04 \\ $10^9$ & $8.00\cdot 10^{-7}$ & 0.78 & 0.04 \\ $3.16\cdot 10^9$ & $2.53\cdot 10^{-7}$ & 0.64 & 0.04 \\ $10^{10}$ & $8.00\cdot 10^{-8}$ & 0.45 & 0.04 \\ $3.16\cdot 10^{10}$ & $2.53\cdot 10^{-8}$ & 0.25 & 0.04 \\ $10^{11}$ & $5.17\cdot 10^{-9}$ & 0.06 & 0.07 \\ $3.16\cdot 10^{11}$ & $1.9\cdot 10^{-12}$ & $2.7\cdot 10^{-4}$ & 2 \\ $10^{12}$ & $5.01\cdot 10^{-13}$ & $2.2\cdot 10^{-4}$ & 2 \\ \hline \end{tabular} \caption{Dependence on $\beta$ (inverse temperature) of the average action $\langle S \rangle$, the transition probability $\langle e^{-\beta \Delta S} \rangle$ and the ratio $\sigma_S/\langle S \rangle$. The lattice size $N$ is fixed at 1600, $\alpha=25$, and the number of simulation steps varies between $4\cdot 10^9$ and $8\cdot 10^9$. If we change $\alpha$ (coupling to $R^2$), only the transition probability changes: for example, where we have $\langle e^{-\beta \Delta S} \rangle=0.86$ with $\alpha=25$ (*), we would instead obtain 0.75 with $\alpha=100$ and 0.67 with $\alpha=200$; i.e., increasing $\alpha$ has an effect similar to a decrease in temperature, but only on the transition probability, not on the average of $S$, and also not on the average of $A_h$, which is equal for all $\alpha$ and $\beta$ to 1.000625, with fluctuations $\sim 10^{-5}$ for $\beta=10^9$, decreasing in proportion to $\sqrt{\beta}$ (see Fig.\ \ref{Ah-foto-media}). 
} \label{table1} \end{center} \end{table} \subsection{Scaling in $N$} \label{scaling-N} If we keep the temperature fixed and increase $N$, we can observe the scaling in $N$ of the average action and of the ``order parameter'' $\psi$ defined in (\ref{def-psi}) (the lattice average of $\langle A_h-1 \rangle$). Some results are summarized in Tab.\ \ref{table2}. The average action scales exactly as $N$, while $\psi$ scales as $N^{-1}$. The product $\psi N$ is equal to 1, within errors, and is independent of $\alpha$ and $\beta$. This product can be interpreted as a mass, as discussed in Sect.\ \ref{interp}, and thus it is equal to 1 Planck mass (about $2\cdot 10^{-8}$ kg). This is a remarkable coincidence which probably has some fundamental explanation, but it emerges here from long numerical simulations which proceed directly from the choice of the action (\ref{total-action}) without any other assumption. We should stress, however, that the value 1 for the product might change under different conventions on the units, becoming equal for instance to $2\pi$ or $(2\pi)^{-1}$. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline $N$ & $\psi$ & $\sigma_\psi/ \psi$ & $\psi \cdot N$ \\ \hline 1600 & $6.25\cdot 10^{-4}$ & $1\cdot 10^{-3}$ & 1.000 \\ 3200 & $3.12\cdot 10^{-4}$ & $2\cdot 10^{-3}$ & 0.999 \\ 6400 & $1.56\cdot 10^{-4}$ & $7\cdot 10^{-3}$ & 1.00 \\ 12800 & $7.8\cdot 10^{-5}$ & $2\cdot 10^{-2}$ & 1.00 \\ 25600 & $3.9\cdot 10^{-5}$ & $5\cdot 10^{-2}$ & 1.00 \\ \hline \end{tabular} \caption{Values of the order parameter $\psi$ (lattice average of $\langle A_h-1 \rangle $) as a function of the size $N$ of the lattice. } \label{table2} \end{center} \end{table} \subsection{Role of the boundary condition} \label{role} All the results above were obtained with the fixed boundary condition $A_{N+1}=1$. (For $h=0,1,...,N$ there is an initial condition $A_h=1$, but then $A_h$ is free to fluctuate.) 
We also made runs in which the boundary condition is $A_{N+1}=1+\delta$, with $-10^{-4} \leq \delta \leq 10^{-4}$. It turns out that in this case the action remains $\ll 1$ but does not drop to such small values as in Tab.\ \ref{table1}. Note that the initial value of the discretized action with the boundary condition $A_{N+1}=1+\delta$ is not zero but $\simeq N|\delta|$, entirely due to the jump between $h=N$ and $h=N+1$. What is remarkable is that $\langle A_h \rangle$ at equilibrium does not depend on $\delta$ and is always equal to the value obtained with $\delta=0$. In other words, the boundary condition does not affect the equilibrium order parameter, at least for the tested values of $\delta$. \section{Possible interpretation of the product $N\psi$ as virtual mass of the fluctuations} \label{interp} If we write the metric component $g_{rr}(r)$ for the classical Schwarzschild solution in units in which $c=\hbar=G=L_{Pl}=1$, it takes the form \begin{equation} g_{rr}^{Schw}(r)=\left( 1-\frac{r_{Schw}}{r} \right)^{-1} \label{grr-schw} \end{equation} where $r_{Schw}=2M$, and $M$ is the mass of the source. (Notice that in natural units $c=\hbar=1$ one has $r_{Schw}=2GM$, where $G$ has dimensions $l^2$ and $M$ has dimensions $l^{-1}$.) At distances $r\gg r_{Schw}$, the metric $g_{rr}^{Schw}(r)$ can be approximated as $\simeq 1+r_{Schw}/r$. This means that by measuring the distant field generated by a mass and fitting the coefficient of its $1/r$ dependence, we can in principle find the mass of the source. This mass would coincide with the ADM mass computed from a surface integral of the metric at spatial infinity. Now, consider a field configuration which has a ``far metric'' with behavior $g_{rr}\simeq 1+r_{Schw}/r$, and suppose that when we get closer to the source we observe more precisely a metric like (\ref{grr-schw}) down to a distance $L=NL_{Pl}$. 
We know that the scalar curvature of the Schwarzschild metric is zero, and thus on the interval $(L,+\infty)$ the observed metric has the same action as flat space. Next, suppose we get even closer to the source. In classical gravity we expect that ``something happens'' here, in the sense that we encounter a singularity or in any case some non-zero contribution to the action. But in the quantum theory we have found a huge set of configurations in statistical equilibrium for which the action in the interval $(0,L)$ is essentially zero and the \emph{average} metric is almost exactly constant, namely $\langle g_{rr}(r) \rangle=1+\psi$ (although the metric in each configuration has oscillations in $r$). Therefore it makes sense to impose, on average, the matching condition \begin{equation} \langle g_{rr}(L) \rangle = g_{rr}^{Schw}(L) \end{equation} or \begin{equation} 1+\psi=\left( 1-\frac{2M}{NL_{Pl}} \right)^{-1} \end{equation} Since $N \gg 1$, for $M=1/2$ this yields $\psi \simeq N^{-1}$, in agreement with the result $N\psi \simeq 1$ found in the simulations. In other words, a metric $g_{rr}$ which outside the interval $(0,NL_{Pl})$ behaves like $(1-1/r)^{-1}$ and inside the interval fluctuates as in the simulations, with average $1+\psi$, has action very close to zero and, judging from its far metric, closely resembles a particle of mass 1/2 in Planck units. We thus arrive at the conclusion that the vacuum of quantum gravity contains localized fluctuations with size a multiple of $L_{Pl}$ and virtual mass equal to half the Planck mass. This idea is quite natural and not new, but in previous work singular geometries with complex topology have been considered, like wormholes or virtual black holes \cite{hawking1996virtual,preparata2000gas,garattini2002spacetime,hooft2018virtual}. Here we have provided new numerical evidence using simpler formal ingredients. 
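The matching argument can also be checked numerically; the following trivial sketch (ours, in Planck units) confirms that $M=1/2$ reproduces $N\psi\simeq 1$ for the lattice sizes used in the simulations:

```python
def psi_from_matching(M, N):
    """Order parameter implied by matching the constant average metric
    1 + psi to the Schwarzschild value at r = N*L_Pl (Planck units):
    1 + psi = (1 - 2M/N)^(-1)."""
    return 1.0 / (1.0 - 2.0 * M / N) - 1.0
```

For $M=1/2$ and $N=1600$ this gives $\psi = 1/(N-1) \simeq 6.25\cdot 10^{-4}$, i.e.\ $N\psi \simeq 1.0006$, matching the simulation value within errors.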
It should be added that in our simulations the size of the fluctuations can be a large multiple $N$ of the Planck length (depending also on their duration $\tau$), but the associated virtual mass is always equal to half the Planck mass, at least under the present assumption of stationary metrics with spherical symmetry and constant $g_{00}$. \begin{figure}[h] \begin{subfigure}{.5\textwidth} \includegraphics[width=7.0cm,height=5.1cm]{Ah-foto-beta-1e9-Q4-bis.pdf} \caption{} \end{subfigure} \begin{subfigure}{.5\textwidth} \includegraphics[width=7.0cm,height=5.1cm]{Ah-medie-beta-1e9-Q4.pdf} \caption{} \end{subfigure} \begin{subfigure}{.5\textwidth} \includegraphics[width=7.0cm,height=5.1cm]{prova-A-A1-R2-FREDDO.pdf} \caption{} \end{subfigure} \caption{{\bf (a)} An example of instant values of $A_h$ at equilibrium with $\beta=10^9$, $N=1600$, $\alpha=25$. {\bf (b)} Average values $\langle A_h \rangle$ with $\beta=10^9$, $N=1600$, $\alpha=25$, 200 cycles of $10^7$ steps. $\langle A_h \rangle$ is almost independent of $h$ and equal to 1.000625, with fluctuations of the order of $10^{-6}$. {\bf (c)} Average values $\langle A_h \rangle$ with $\beta=3.16\cdot 10^{11}$, $N=1600$, $\alpha=25$, 200 cycles of $10^7$ steps. Fluctuations are smaller than in (b). The metric is practically constant and different from flat space. } \label{Ah-foto-media} \end{figure} \begin{figure}[h] \begin{subfigure}{.5\textwidth} \includegraphics[width=7.0cm,height=5.1cm]{valori-S-R2-19-marzo.pdf} \caption{} \end{subfigure} \begin{subfigure}{.5\textwidth} \includegraphics[width=7.0cm,height=5.1cm]{azione-rumore-Q3.pdf} \caption{} \end{subfigure} \caption{{\bf (a)} Example of values of the action at the end of 200 sub-cycles of $2\cdot 10^7$ steps each, at middle temperature ($\beta= 10^{9}$), $N=1600$, $\alpha=25$. The noise is less than 5\% of $S$. 
{\bf (b)} Example of values of the action at the end of 200 sub-cycles of $2\cdot 10^7$ steps each, at low temperature ($\beta= 3.16\cdot 10^{11}$), $N=1600$, $\alpha=25$. One has $\langle S \rangle \simeq 2\cdot 10^{-12}$, $\sigma_S \simeq S$, with some fluctuations reaching $\sim 10^{-11}$. } \label{azione-varie-temp} \end{figure} \begin{figure}[h] \includegraphics[width=7.0cm,height=5.1cm]{foto-Sh-19-marzo.pdf} \caption{The contributions $S_h$ to the action (\ref{total-action}) on each lattice interval, for the metric fluctuation of Fig.\ \ref{Ah-foto-media}-(a). The phenomenon of curvature polarization at small scale is evident: the single contributions are of the order of $\sim 10^{-2}$, but due to their randomly alternated sign they give a total action of $\sim 10^{-6}$ (Fig.\ \ref{azione-varie-temp}-(a) and Tab.\ \ref{table1}). The amplitude of the fluctuations of $S_h$ is seen to increase in proportion to $h^2$. This is probably due to the volume factor $r^2$ in the action integral. } \label{foto-contributi-Sh} \end{figure} \section{Conclusion} \label{concl} In this work we have proven with reliable non-perturbative lattice simulations the existence of a class of gravitational vacuum fluctuations for which there was until now only partial numerical and analytical evidence. In spite of the restrictions on the dynamical degrees of freedom, this result concerns the true, physical quantum gravity in 3+1 dimensions, not a toy model. The $R+R^2$ action employed is stable in the Euclidean formulation and in line with the current approaches to quantum gravity based on asymptotic safety. One element that is missing, in comparison to more general techniques, is the explicit implementation of the diffeomorphism invariance \cite{hamber2008quantum,ambjorn2012nonperturbative}. 
We are essentially working in a fixed gauge, and the lattice spacing does not correspond to the physical distance; the latter could be re-obtained using the metric, either in single configurations or in an average sense. The physical interpretation of the results is intriguing. The quantity $\psi$ that we called the ``order parameter'', equal to the lattice average (over $h$) of the statistical average $\langle A_h-1 \rangle = \langle g_{rr}(hL_{Pl})-1 \rangle$, is clearly non-zero, with small fluctuations, and such that its product $N\psi$ with the size of the lattice is equal to 1 in Planck units with a precision of $10^{-3}$ to $10^{-2}$. We have discussed the physical interpretation of $N\psi$ as the virtual mass of the fluctuations, which in this sense turn out to be exactly quantized. It is conceivable that more complex fluctuations, e.g.\ with variable $g_{00}$ or without spherical symmetry, may have a mass that is a multiple of the minimum mass. Their numerical simulation requires a non-trivial extension of the algorithm, but appears to be feasible and will be the subject of future work. \bibliographystyle{unsrt}
\section{Introduction} Let $\Omega \subset \mathbb{R}^2$ be a polygonal domain and let $\gamma$ be a line segment in $\Omega$. Consider the elliptic boundary value problem \begin{equation} \label{eq:Possion} \left\{ \begin{aligned} -\Delta u =\delta_{\gamma} & \quad \text{in } \Omega, \\ u =0 & \quad \text{on } \partial \Omega, \end{aligned} \right. \end{equation} where the source term $\delta_\gamma$ is the line Dirac measure on $\gamma$, namely, \[ \langle \delta_\gamma, v \rangle = \int_\gamma v(s)ds, \qquad \forall\ v \in L^2(\gamma). \] Such equations occur in many mathematical models, including monophasic flows in porous media, tissue perfusion or drug delivery by a network of blood vessels \cite{DAngelo12}, and elliptic optimal control problems with controls acting on a lower dimensional manifold \cite{Gong14}. Note that the line Dirac measure $\delta_\gamma$ is not an $L^2$ function. Although the solution tends to be smooth in a large part of the domain, it can become singular in the region close to the one-dimensional (1D) fracture $\gamma$ and in the region close to the vertices of the domain, where corner singularities are expected to arise. Since the corner singularity associated with equation (\ref{eq:Possion}) is understood fairly well in the literature, we shall focus on the regularity of the solution near $\gamma$ and on the efficacy of the numerical approximation. Finite element approximations for second order elliptic equations with singular source terms have attracted considerable attention, and many studies have focused on point singular measures. 
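To make the action of $\delta_\gamma$ concrete, the pairing $\langle\delta_\gamma, v\rangle = \int_\gamma v\,ds$ can be evaluated by simple quadrature. The sketch below is our own illustration (not part of the numerical method developed later), with $\gamma$ taken to be the unit segment on the $x$-axis, as assumed in Section \ref{sec-3}:

```python
def line_dirac_pairing(v, n=1000):
    """Approximate <delta_gamma, v> = int_gamma v ds for the segment
    gamma = {(x, 0): 0 < x < 1}, using the midpoint rule with n
    subintervals; v is a function of (x, y)."""
    h = 1.0 / n
    return h * sum(v((k + 0.5) * h, 0.0) for k in range(n))
```

For $v\equiv 1$ this returns the length of $\gamma$, consistent with the fact that $\delta_\gamma$ is a measure supported on the segment rather than an $L^2$ function.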
Babu\v{s}ka \cite{Babuska71}, Scott \cite{Scott73, Scott76}, and Casas \cite{Casas85} studied the convergence in the $L^2$ (or $H^\epsilon$ with small $\epsilon$) norm for Dirac measures centered at some points in 2D; a review of the convergence rates can be found in \cite{Koppl14}, in which the authors considered Dirac measures centered at points in both 2D and 3D and showed that, on a family of quasi-uniform meshes, $P_1$ finite elements yield quasi-optimal order and higher order finite elements yield optimal order a priori estimates in the $L^2$-norm on a subdomain that excludes the locations of the delta source terms. For a Dirac measure centered at a point in an $N$-dimensional domain with $N\geq 2$, locally refined meshes around the singular point were used in \cite{Eriksson85} to improve the convergence rate. Graded meshes were used in \cite{Apel11} to study the convergence rate of the finite element approximation for a point Dirac measure in 2D, and an $L^2$ error estimate of order $h^2|\ln h |^\frac{3}{2}$ was obtained for approximations based on $P_1$ polynomials. More recently, 1D singular source terms have also attracted some attention. By assuming the regularity of an elliptic equation in 3D with a Dirac measure concentrated on a 1D fracture in a weighted Sobolev space, optimal finite element convergence rates were obtained in \cite{DAngelo08, DAngelo12} by using graded meshes. The authors in \cite{Ariche16} then derived the 3D regularity for the simplified equation in \cite{DAngelo08, DAngelo12} when the Dirac measure is concentrated on a line or segment fracture. In this paper, we derive regularity estimates and propose optimal finite element algorithms for equation (\ref{eq:Possion}). In particular, we investigate the solution regularity in a class of Kondratiev-type weighted spaces. 
Note that the smoothness of the solution varies in different parts of the domain: the region close to the vertices, the neighborhood of the fracture $\gamma$, and the rest of the domain (Remark \ref{rk31}). By studying the local problem that inherits the line Dirac measure from equation (\ref{eq:Possion}), we obtain a ``full-regularity'' estimate in these weighted spaces in the neighborhood of $\gamma$. The key idea is to exploit the connection between the line Dirac measure and proper elliptic transmission problems in these weighted spaces. Based on the new regularity results and the existing regularity estimates on corner singularities, we in turn propose graded mesh refinement algorithms, such that the associated finite element methods of any order recover the optimal convergence rate in the energy norm even when the solution is singular. We study the model problem (\ref{eq:Possion}) with a simple line fracture to simplify the exposition and avoid nonessential complications in the analysis. These results can be extended to more general cases, including the case where the single line fracture is replaced by multiple line fractures, whether intersecting or non-intersecting. With proper modifications, we also expect these analytical tools to be useful in the case when $\gamma$ is a smooth curve and when the source term $\delta_{\gamma}$ is replaced by $q\delta_{\gamma}$ for $q\in L^2({\gamma})$. The rest of the paper is organized as follows. In Section \ref{sec-2}, we discuss the well-posedness and global regularity of equation \eqref{eq:Possion} in Sobolev spaces. In Section \ref{sec-3}, we introduce the weighted spaces and derive the regularity estimates for the solution in the neighborhood of $\gamma$. The main regularity results, summarized in Theorem \ref{ureg}, imply that in addition to the lack of regularity in the direction across $\gamma$, the solution also possesses isotropic singularities at the endpoints of the line fracture. 
In Section \ref{sec-4}, we propose the finite element approximation of equation (\ref{eq:Possion}) based on a simple and explicit construction of graded meshes (Algorithm \ref{graded} and Remark \ref{rkgraded}). We further show that the proposed numerical methods achieve the optimal convergence rate by local interpolation error analysis in weighted spaces. We present various numerical test results in Section \ref{sec-5} to validate the theory. Throughout the text below, we denote by $ab$ the line segment with endpoints $a$ and $b$. The generic constant $C>0$ in our estimates may be different at different occurrences. It will depend on the computational domain, but not on the functions involved or the mesh level in the finite element algorithms. \section{Well-posedness and regularity in Sobolev spaces} \label{sec-2} \subsection{Well-posedness of the solution} Denote by $H^m(\Omega)$, $m\geq 0$, the Sobolev space that consists of functions whose $i$th ($0\leq i\leq m$) derivatives are square integrable. Let $L^2(\Omega):=H^0(\Omega)$. Denote by $H^1_0(\Omega)\subset H^1(\Omega)$ the subspace consisting of functions with zero trace on the boundary $\partial\Omega$. The variational formulation for equation (\ref{eq:Possion}) is \begin{eqnarray}\label{eqn.weak} a(u, v):=\int_\Omega\nabla u\cdot \nabla v dx=\langle \delta_\gamma, v \rangle, \quad \forall\ v\in H^1_0(\Omega). \end{eqnarray} According to the trace estimate \cite{LionsMaganesVolI}, $v|_\gamma$ is well defined in $L^2(\gamma)$ for $v\in H^1(\Omega)$. Therefore, it is clear that there exists a unique solution $u\in H^1_0(\Omega)$ defined by (\ref{eqn.weak}). However, the solution has limited regularity because the singular source term $\delta_\gamma\notin L^2(\Omega)$. In the rest of this section, we present the global regularity estimates for the solution in the domain. \subsection{Regularity in Sobolev spaces} We begin with the regularity estimates of problem (\ref{eq:Possion}) in Sobolev spaces $H^m$. 
We first have the following result regarding the line Dirac measure $\delta_\gamma$. \begin{lem} \label{lemma2-1} Let $\Omega \subset \mathbb{R}^2$ be a bounded domain. Then $\delta_\gamma \in H^{-\frac{1}{2}-\epsilon}(\Omega)$ for any $\epsilon > 0$. \end{lem} \begin{proof} The proof is based on the duality pairing (cf. \cite{LionsMaganesVolI}). Given $\epsilon > 0$ and $v \in H^{\frac{1}{2}+\epsilon}(\Omega)$, by H\"older's inequality and the trace estimate \cite{Kesavan89, LionsMaganesVolI}, we have \[ \langle \delta_\gamma, v \rangle = \int_\gamma v(s)ds \leq C \|v\|_{L^2(\gamma)} \leq C \|v\|_{H^{\frac{1}{2}+\epsilon}(\Omega)}. \] Therefore, by the standard definition, we have \[ \|\delta_{\gamma}\|_{H^{-\frac{1}{2}-\epsilon}(\Omega)} := \sup \{\langle \delta_\gamma, v \rangle \ : \ \|v\|_{H^{\frac{1}{2}+\epsilon}(\Omega)} = 1\} \leq C, \] which completes the proof. \end{proof} Consequently, we have the following global regularity estimate for the solution. \begin{lem} \label{thm2-2} Given $\epsilon>0$, the solution of equation \eqref{eq:Possion} satisfies $u \in H^{\frac{3}{2}-\epsilon}(\Omega) \cap H^1_0(\Omega)$. \end{lem} \begin{proof} From Lemma \ref{lemma2-1}, it follows that $\delta_\gamma \in H^{-\frac{1}{2}-\epsilon}(\Omega)$. Then the standard elliptic regularity theory \cite{Alinhac07} leads to the conclusion. \end{proof} Thus, by Lemma \ref{thm2-2} and the Sobolev embedding theorem \cite{Ciarlet74}, we obtain \begin{coro}\label{co1} The solution $u$ of equation \eqref{eq:Possion} is H\"{o}lder continuous, $u\in C^{0,1/2-\epsilon}(\Omega)$, for any small $\epsilon>0$. In particular, we have $u\in C^0({\Omega})$. \end{coro} Based on Lemma \ref{thm2-2} and Corollary \ref{co1}, the solution is merely in $H^{\frac{3}{2}-\epsilon}(\Omega)$ for $\epsilon>0$. The lack of regularity is largely due to the singular line Dirac measure $\delta_\gamma$ in the source term. However, regularity is a local property. 
Such a solution singularity occurs only in the neighborhood of $\gamma$. In a large part of the domain, the solution is reasonably smooth. Hence, we shall study the regularity of equation (\ref{eq:Possion}) in weighted Sobolev spaces that can accurately characterize the local behavior of the solution. \section{Regularity estimates in weighted spaces} \label{sec-3} Recall the domain $\Omega$ and the line segment $\gamma$ in equation (\ref{eq:Possion}). Without loss of generality, we assume $\gamma=\{(x,0), \ 0< x<1\}$ with the endpoints $Q_1=(0,0)$ and $Q_2=(1,0)$, as shown in Figure \ref{fig:Omega}. Let $\mathcal{V}$ be the singular set, which is the collection of $Q_1$, $Q_2$, and all the vertices of $\Omega$. In this section, we first study an auxiliary transmission problem in Subsections \ref{31} and \ref{32}. Then, we obtain the regularity estimates for equation (\ref{eq:Possion}) in Subsection \ref{33}. \subsection{The transmission problem}\label{31} Consider the equation \begin{equation} \label{eq:2d} \left\{ \begin{aligned} -\Delta w=0 &\quad \mbox{ in }\Omega \setminus \gamma,\\ w_y^+=w_y^--1 &\quad \mbox{ on } \gamma,\\ w^+=w^- &\quad \mbox{ on } \gamma,\\ w=0 &\quad \mbox{ on } \partial \Omega, \end{aligned} \right. \end{equation} where $w_y=\partial_yw$. Here, for a function $v$, $ v^\pm:=\lim_{\epsilon\rightarrow 0}v(x,y\pm\epsilon). $ It is clear that equation (\ref{eq:2d}) has a unique weak solution $$w\in H^1(\Omega\setminus \gamma)\cap \{w|_{\partial\Omega}=0\}.$$ \begin{rem}\label{rk31}We define different regions of the domain as follows for further local regularity estimates. Denote by $\mathbb{H}^+$ and $\mathbb{H}^-$ the upper and lower half planes, respectively. Define $\gamma_0=\{(x,0): d \leq x \leq 1-d\} \subset \gamma$ for some small $d>0$. 
Then we choose two open subsets $\Omega^+\subset \Omega \cap {\mathbb{H}^+}$ and $\Omega^-\subset \Omega \cap {\mathbb{H}^-}$, each of which has a smooth boundary and is away from $\partial\Omega$, such that $\gamma_0 = \overline{\Omega^+}\cap \overline{\Omega^-}$. Let $B(x_0, r)$ be the ball centered at $x_0$ with radius $r$. Denote by $B_i=B(Q_i,2d)$, $i=1, 2$, the neighborhoods around the endpoints of $\gamma$. See Figure \ref{fig:decom_L_R}. We assume $d$ is sufficiently small such that $B_1\cap B_2=\emptyset$ and $(B_1\cup B_2)\cap \partial \Omega=\emptyset$. Therefore, the domain $\Omega$ is divided into three regions: (i) the interior region $R_1={\Omega^+} \cup {\Omega^-}$ away from the set $\mathcal V$, (ii) the region $R_2=B_1\cup B_2$ consisting of the neighborhoods of the endpoints of $\gamma$, and (iii) the region $R_3=\Omega\setminus (\bar R_1\cup \bar R_2)$ close to the boundary $\partial \Omega$. \end{rem} \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.2] \draw[thick] (-18,-11) -- (-15,6) -- (16,10) -- (18,-11) -- (-18,-11); \draw[thick] (-10,-2) node {$\bullet$} node[anchor = north] {$Q_1$} -- (10,-2) node {$\bullet$} node[anchor = north] {$Q_2$}; \draw (12,6) node {$\Omega$}; \draw(0,-1) node{$\gamma$}; \end{tikzpicture} \end{center} \vspace*{-15pt} \caption{Domain $\Omega$ containing a line fracture $\gamma$.} \label{fig:Omega} \end{figure} \begin{rem}\label{r3} In region $R_3$, the regularity of the solution of (\ref{eq:2d}) is determined by the geometry of the domain. In particular, the solution can possess singularities near the non-smooth points (vertices) of the boundary. The regularity estimates in this region are well understood in the literature. See for example \cite{Apel99, Dauge88, Grisvard85, Kondratiev67, Li10} and references therein. Therefore, we shall concentrate on the regularity analysis in regions $R_1$ and $R_2$ for equation (\ref{eq:2d}).
\end{rem} \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.22] \draw[thick] (-18,-11) -- (-15,6) -- (16,10) -- (18,-11) -- (-18,-11); \draw (-10,-2) -- (10,-2); \filldraw[color=red!60, fill=red!15] (-8,-2) .. controls (-10,-3) and (-10,-5) .. (-8,-6) .. controls (-4, -8) and (4,-8) .. (8,-6) .. controls (10, -5) and (10,-3) .. (8,-2); \filldraw[color=red!60, fill=red!15] (-8,-2) .. controls (-10,-1) and (-10,1) .. (-8,2) .. controls (-4, 4) and (4,4) .. (8,2) .. controls (10, 1) and (10,-1) .. (8,-2) ; \draw[thick] (-8,-2) node {$\boldsymbol{\cdot}$} node[anchor = north] {$d$} -- (8,-2) node {$\boldsymbol{\cdot}$} node [anchor = north] {$1-d$}; \draw[ultra thick] (-10,-2) node {$\boldsymbol{\cdot}$}; \draw[ultra thick] (10,-2) node {$\boldsymbol{\cdot}$}; \draw (-11.5,-2.5) node {$Q_1$}; \draw (11.5,-2.5) node {$Q_2$}; \draw (-10,-2) circle (4); \draw (10,-2) circle (4); \draw (12,6) node {$\Omega$}; \draw (0,2) node {$\Omega^+$}; \draw (0,-6) node {$\Omega^-$}; \draw(0,-3) node{$\gamma_0$}; \draw(-11,1) node{$B_1$}; \draw(11,1) node{$B_2$}; \end{tikzpicture} \end{center} \vspace*{-15pt} \caption{Decomposition around the singular line: $\Omega^+, \Omega^-, B_1$ and $B_2$.} \label{fig:decom_L_R} \end{figure} We now introduce a class of Kondratiev-type weighted spaces for the analysis of equation (\ref{eq:2d}). \begin{definition} \label{wss} (Weighted Sobolev spaces) Recall the set $\mathcal V$ that consists of the endpoints of $\gamma$ and all the vertices of the domain $\Omega$. Let $r_i(x,Q_i)$ be the distance from $x$ to $Q_i \in \mathcal{V}$ and let \begin{eqnarray}\label{eqn.rho} \rho(x)=\prod_{Q_i\in \mathcal{V}} r_i(x,Q_i).
\end{eqnarray} For $a\in\mathbb R$, $m\geq 0$, and $G\subset \Omega$, we define the weighted Sobolev space $$ {\mathcal K}_{a}^m(G) := \{v:\ \rho^{|\alpha|-a}\partial^\alpha v\in L^2(G), \ \forall\ |\alpha|\leq m \}, $$ where the multi-index $\alpha=(\alpha_1,\alpha_2)\in\mathbb Z^2_{\geq 0}$, $|\alpha|=\alpha_1+\alpha_2$, and $\partial^\alpha=\partial_x^{\alpha_1}\partial_y^{\alpha_2}$. The ${\mathcal K}_{a}^m(G)$ norm for $v$ is defined by $$ \|v\|_{{\mathcal K}_{a}^m(G)}=\big(\sum_{|\alpha|\leq m}\iint_{G} |\rho^{|\alpha|-a}\partial^\alpha v|^2dxdy\big)^{\frac{1}{2}}. $$ \end{definition} \begin{rem} According to Definition \ref{wss}, in the region that is away from the set $\mathcal V$, the weighted space ${\mathcal K}^m_a$ is equivalent to the Sobolev space $H^m$. In the region $R_3$ (see Remark \ref{rk31}) that is close to the vertices of the domain, the space ${\mathcal K}^m_a$ coincides with the Kondratiev space used for analyzing corner singularities \cite{Dauge88,Grisvard85,Kondratiev67}. In contrast to the Kondratiev space, where the weight is the distance function to the vertex set, the weight in the space ${\mathcal K}^m_a$ also involves the distances to the endpoints of $\gamma$. In particular, for $i=1,2$, in the neighborhood $B_i$ (Figure \ref{fig:decom_L_R}) of an endpoint $Q_i$ of $\gamma$, the weighted space can be written as \begin{equation*} \label{def_weighted_Sobolev} {\mathcal K}_a^m(B_i) = \{v:\ r_i^{|\alpha|-a}\partial^\alpha v\in L^2(B_i), \ \forall\ |\alpha|\leq m \}. \end{equation*} \end{rem} In each $B_i$, we further define a cutoff function $\chi_i \in C_0^\infty(B_i)$ satisfying \begin{equation*} \label{WS} \chi_i=\left\{ \begin{aligned} 1& \quad \mbox{ in $B(Q_i,d)$}, \\ 0& \quad \mbox{ on } \partial B_i. \end{aligned} \right. \end{equation*} Note that supp$(\chi_1)\cap$supp$(\chi_2)=\emptyset$. In addition, we denote by \begin{eqnarray}\label{w}W= {\rm{span}}\{\chi_i\}, \quad i=1, 2, \end{eqnarray} the linear span of these two functions.
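As a side illustration, the weight $\rho$ in (\ref{eqn.rho}) and the norm in Definition \ref{wss} are straightforward to evaluate numerically. The following Python sketch (the helper names are ours; sampled derivatives and quadrature weights are assumed to be supplied by the caller) shows the structure of the computation:

```python
import numpy as np

def rho(points, singular_set):
    """Weight rho(x) = product over Q in the singular set V of |x - Q|."""
    points = np.asarray(points, dtype=float)
    d = np.ones(len(points))
    for Q in singular_set:
        d *= np.linalg.norm(points - np.asarray(Q, dtype=float), axis=1)
    return d

def weighted_norm(points, quad_weights, derivs, a, singular_set):
    """Quadrature approximation of the K^m_a(G) norm: each sampled partial
    derivative of order k = |alpha| is weighted by rho^(k - a)."""
    w = rho(points, singular_set)
    total = 0.0
    for order, partials in derivs.items():
        for dv in partials:
            total += np.sum(quad_weights * (w ** (order - a) * np.asarray(dv)) ** 2)
    return np.sqrt(total)
```

In particular, a negative exponent $|\alpha|-a$ on low-order terms shows how the norm tolerates growth of derivatives near the singular points.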
\subsection{Regularity estimates for equation (\ref{eq:2d})}\label{32} We now proceed to carry out the regularity analysis for the transmission problem (\ref{eq:2d}). Recall the interior region $R_1={\Omega^+} \cup {\Omega^-}$ in Remark \ref{rk31}. We start with the regularity analysis for the solution in $R_1$. \begin{lem}\label{waway} The solution of equation \eqref{eq:2d} is smooth in both ${\Omega^+}$ and ${\Omega^-}$. Namely, for any $m\geq 1$, $w\in H^{m+1}({\Omega^+})$ and $w\in H^{m+1}({\Omega^-})$. \end{lem} \begin{proof} Recall that ${\Omega^+}$ and ${\Omega^-}$ are regions with a smooth boundary. Therefore, by the trace estimate, for $m\geq 1$, we can find two functions $w_U\in H^{m+1}(\Omega^+)$ and $w_D\in H^{m+1}(\Omega^-)$ such that $w_U=w_D$ and $\frac{\partial w_U}{\partial y}=\frac{\partial w_D}{\partial y}-1$ on $\gamma_0=\overline{{\Omega^+}} \cap \overline{{\Omega^-}}\subset\gamma$. Define \begin{equation} w_0= \left\{ \begin{aligned} w_U \quad & \mbox{ in } \Omega^+, \\ w_D \quad & \mbox{ in } \Omega^-. \end{aligned} \right. \end{equation} Then $w-w_0$ satisfies the standard transmission problem with a smooth interface \begin{equation} \left\{\begin{aligned} -\Delta(w-w_0)=\Delta w_0& \quad \mbox{ in } (\Omega^+\cup \Omega^-), \\ (w-w_0)_y^+=(w-w_0)_y^- &\quad \mbox{ on } \gamma_0,\\ (w-w_0)^+=(w-w_0)^- & \quad \mbox{ on } \gamma_0. \end{aligned}\right. \end{equation} Therefore, by the regularity results in \cite{Nicaise92, Li10}, we have $w-w_0\in H^{m+1}(\Omega^+)$ and $w-w_0\in H^{m+1}(\Omega^-)$, which leads to the desired result. \end{proof} We now concentrate on the solution behavior in the neighborhood $B_i$, $i=1,2$, of an endpoint of $\gamma$ (see Remark \ref{rk31}).
We first consider the following problem with a simpler transmission condition on $\gamma$, \begin{equation}\label{Lisimiliar} \left\{ \begin{aligned} -\Delta z =f & \quad \text{in } B_i\setminus \gamma, \\ {z}_y^+={z}_y^- & \quad \text{on } \gamma \cap B_i, \\ {z}^+={z}^- & \quad \text{on } \gamma \cap B_i, \\ z=0 & \quad \text{on } \partial B_i. \end{aligned} \right. \end{equation} We recall a regularity result in \cite{Li10} regarding $z$ in the neighborhood of $Q_i$. \begin{lem}\label{singular} For equation (\ref{Lisimiliar}), there exists $b_{Q_i}>0$ such that the following statement holds. Let $0<a<b_{Q_i}$ and $m \geq 1$. Assume $f \in {\mathcal K}_{a-1}^{m-1}(B_i\setminus \gamma)$. Recall the finite dimensional space $W$ in (\ref{w}). Then, there exists a unique decomposition $z=z_{reg}+z_s$, such that $z_{reg} \in {\mathcal K}_{a+1}^{m+1}(B(Q_i, d)\setminus\gamma)$ and $z_s \in W$. Moreover, we have the estimate \begin{equation} \|z_{reg}\|_{{\mathcal K}_{a+1}^{m+1}(B(Q_i, d)\setminus\gamma)}+\|z_s\|_{L^\infty(B_i)} \leq C \|f\|_{{\mathcal K}_{a-1}^{m-1}(B_i\setminus\gamma)}, \end{equation} where the constant $C>0$ is independent of $f$. \end{lem} \begin{rem}Based on the calculation in \cite{Li10}, the constant $b_{Q_i}$ is determined by the smallest positive eigenvalue of the operator $-\partial_\theta^2$ in $(0,2\pi)$ with the periodic boundary condition. Note that these eigenvalues are $k^2$ for $k\in \mathbb{Z}_{\geq 0}$. Thus, it follows that $b_{Q_i}=1$. \end{rem} Recall the solution $w$ of the transmission problem (\ref{eq:2d}) and the space $W$ in (\ref{w}). Then, in the neighborhood $B_i$ of $Q_i$, $i=1,2$, we have the following regularity result. \begin{thm}\label{lemma3-4} Let $B_{d, i}:=B(Q_i, d)\subset B_i$, $i=1, 2$. Then, in $B_{d,i}$, the solution $w$ of equation \eqref{eq:2d} admits a decomposition $$ w=w_{reg}+w_s, $$ where $w_s \in W$ and $w_{reg} \in {\mathcal K}_{a+1}^{m+1}(B_{d,i}\setminus\gamma)$ for $0<a<1$ and $m\geq 1$.
Moreover, we have \begin{equation}\label{regularity} \|w_{reg}\|_{{\mathcal K}_{a+1}^{m+1}(B_{d, i}\setminus\gamma)}+\|w_s\|_{L^\infty(B_{i})} \leq C. \end{equation} \end{thm} \begin{proof} We shall derive the theorem in $B_{d,1}$. The proof in $B_{d,2}$ can be carried out in a similar manner. Let $(r, \theta)$ be the local polar coordinates in $B_1$ for which $Q_1$ is at the origin and $\theta=0$ corresponds to the positive $x$-axis. We shall use a localization argument to obtain the estimate. In the rest of the proof, we simplify the notation for $B_{d,1}$ by letting $B_d=B_{d, 1}$. {\bf Step 1.} Let $\eta\in C^\infty_0(B_1)$ be a cutoff function such that $\eta=1$ in $B_d$, $\eta=0$ for $r>3d/2$, and $\eta_\theta:=\partial_\theta\eta=0$. Define $q:=\eta w$. Note that on $\gamma$ ($\theta=0, 2\pi$), we have \begin{eqnarray*} q_y^+&=&(\sin\theta)^{+} q_r^+ +\frac{(\cos\theta)^+}{r}q_\theta^+=\frac{1}{r}q_\theta^+\\ &=&\frac{1}{r}\eta w_\theta^+=\eta\left((\sin\theta)^{+} w_r^+ +\frac{(\cos\theta)^+}{r}w_\theta^+\right)=\eta w_y^+, \end{eqnarray*} where for a function $v(r, \theta)$, $ v^\pm:=\lim_{\epsilon\rightarrow 0^+}v(r,\theta\pm\epsilon). $ With a similar calculation, we have $q_y^-=\eta w_y^-$ on $\gamma$. Then, according to the transmission condition in equation (\ref{eq:2d}), we have $$ q_y^+=\eta w^+_y=\eta(w^-_y-1)=q_y^--\eta, \qquad {\rm{on}}\ \gamma. $$ Consequently, $q$ satisfies the following equation \begin{equation} \label{eq:polaro} \left\{\begin{aligned} -\Delta q=-\Delta(w\eta) & && \mbox{ in } B_1\setminus\gamma,\\ q_y^+=q_y^--\eta & && \mbox{ on } \gamma,\\ q^+=q^- & && \mbox{ on }\gamma,\\ q=0 & &&\mbox{ on }\partial B_1. \end{aligned}\right. \end{equation} Note that, by the definition of $\eta$ and since $\Delta w=0$ in $B_1\setminus\gamma$, we have $-\Delta(w\eta)=-2\nabla w\cdot\nabla \eta-w\Delta \eta$ in $B_1\setminus \gamma$, and $-\Delta(w\eta)=0$ in $B_d\setminus \gamma$.
{\bf Step 2.} Define $p(r,\theta)=-\eta r \sin \frac{\theta}{2}$ for $0\leq\theta\leq2\pi$, where $\eta$ is defined in Step 1. Then $p\in H^1(B_1)$ satisfies \begin{equation} \label{eq:polarv} \left\{\begin{aligned} -\Delta p= \Delta\left(\eta r\sin\frac{\theta}{2}\right) & && \mbox{ in } B_1\setminus\gamma,\\ p_y^+=(\sin\theta)^+ p_r^+ +\frac{(\cos\theta)^+}{r} p_\theta^+=-\frac{1}{2}\eta & && \mbox{ on } \theta=0,\\ p_y^-=(\sin\theta)^- p_r^- +\frac{(\cos\theta)^-}{r} p_\theta^-=\frac{1}{2}\eta & && \mbox{ on } \theta=2\pi,\\ p=0 & && \mbox{ on } \partial B_1. \end{aligned}\right. \end{equation} It is worth noting that $p \not \in H^2(B_1)$. However, by a straightforward calculation, it is clear that $p\in {\mathcal K}_{a+1}^{m+1}(B_1)$ and $\Delta(\eta r\sin\frac{\theta}{2}) \in {\mathcal K}_{a-1}^{m-1}(B_1)$ for any $m\geq1$ and $0<a<1$. {\bf Step 3.} Let $z=p-q$. Then, based on equations (\ref{eq:2d}), (\ref{eq:polaro}), and (\ref{eq:polarv}), $z$ satisfies \begin{equation} \label{eq:zz} \left\{\begin{aligned} -\Delta z= f & \quad \text{in } B_1\setminus \gamma, \\ z_y^+=z_y^- & \quad \text{on } \gamma , \\ z^+=z^- & \quad \text{on } \gamma , \\ z=0& \quad \text{on } \partial B_1, \end{aligned}\right. \end{equation} where $f=\Delta(w\eta)+\Delta(\eta r\sin\frac{\theta}{2})$, since $-\Delta z=-\Delta p+\Delta q$. Note that by the fact that $\Delta(w\eta)=0$ in $B_d\setminus \gamma$ and by Lemma \ref{waway}, $f\in {\mathcal K}_{a-1}^{m-1}(B_1)$ for any $m\geq1$ and $0<a<1$. Applying Lemma \ref{singular} to equation (\ref{eq:zz}), we conclude that there exists a unique decomposition $z=z_{reg}+z_s$, with $z_{reg} \in {\mathcal K}_{a+1}^{m+1}(B_d\setminus\gamma)$ and $z_s \in W$, satisfying \begin{equation}\label{eq.zzz} \|z_{reg}\|_{{\mathcal K}_{a+1}^{m+1}(B_d\setminus\gamma)}+\|z_s\|_{L^\infty(B_1)} \leq C \|f\|_{{\mathcal K}_{a-1}^{m-1}(B_1\setminus\gamma)}.
\end{equation} Since $\eta w=q=p-z$, by the estimate (\ref{eq.zzz}) and by the definition of $p$ in Step 2, we obtain the decomposition of $w$ in $B_d\setminus \gamma$: $$ w=w_{reg}+w_s, $$ where $w_{reg}=p-z_{reg}$ and $w_s=-z_s$, such that for any $m\geq1$ and $0<a<1$, \begin{equation*} \|w_{reg}\|_{{\mathcal K}_{a+1}^{m+1}(B_d\setminus\gamma)}+\|w_s\|_{L^\infty(B_1)} \leq C \big(\|f\|_{{\mathcal K}_{a-1}^{m-1}(B_1)}+\|p\|_{{\mathcal K}_{a+1}^{m+1}(B_d\setminus\gamma)}\big)\leq C, \end{equation*} which completes the proof. \end{proof} \subsection{Regularity estimates for equation (\ref{eq:Possion})}\label{33} Recall that $\mathcal V$ consists of the endpoints of $\gamma$ and all the vertices of $\Omega$. Recall $B_{d, i}:=B(Q_i, d)$ in Theorem \ref{lemma3-4}, and the regions $\Omega^+$, $\Omega^-$, $R_3$ in Remark \ref{rk31}. We are now ready to derive the regularity estimate for the solution of equation \eqref{eq:Possion} with the line Dirac measure. \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.22] \draw[thick] (-18,-11) -- (-15,6) -- (16,10) -- (18,-11) -- (-18,-11); \filldraw[color=red!60, fill=red!15] (-10.5,-2.5) -- (-10.5,-1.5) -- (10.5,-1.5) -- (10.5,-2.5) -- (-10.5,-2.5); \draw[thick] (-10,-2) -- (10,-2); \draw (0,-3.8) node {$R_{\epsilon}$}; \draw (12,6) node {$\Omega$}; \end{tikzpicture} \end{center} \vspace*{-15pt} \caption{A small neighborhood $R_\epsilon$ of the line fracture $\gamma$.} \label{fig:R_epsilon} \end{figure} \begin{thm}\label{ureg} The solution $u$ of equation (\ref{eq:Possion}) is smooth in the region away from the set $\mathcal V$, namely, for $m\geq 1$, $u\in H^{m+1}(\Omega^+)$ and $u\in H^{m+1}(\Omega^-)$. In the neighborhood of each endpoint of $\gamma$, $u$ admits a decomposition $$ u=u_{{reg}}+u_s, \qquad u_s\in W, $$ such that for any $m\geq 1$ and $0<a<1$, \begin{equation*} \|u_{reg}\|_{{\mathcal K}_{a+1}^{m+1}(B_{d,i}\setminus\gamma)}+\|u_s\|_{L^\infty(B_i)} \leq C.
\end{equation*} In the region $R_3$ away from $\gamma$ and close to the boundary, $u\in{\mathcal K}^{m+1}_{a+1}(R_3)$ for $m\geq 1$ and $0<a<\frac{\pi}{\omega}$, where $\omega$ is the largest interior angle among all the vertices of the domain $\Omega$. \end{thm} \begin{proof} Recall the solution $w$ of the transmission problem (\ref{eq:2d}). We shall show $u=w$. We first extend $w$ to $\Omega$ by defining \begin{equation}\label{u_alter} w:= \left\{ \begin{aligned} w\quad & \text{in } \Omega\backslash \gamma, \\ w^+(=w^-)\quad & \text{on } \gamma. \end{aligned} \right. \end{equation} For $\epsilon>0$ small, define $R_{\epsilon}:=\{(-\epsilon, 1+\epsilon) \times (-\epsilon, \epsilon)\}$ to be a small neighborhood of $\gamma$. Let $\mathbf{n}_\epsilon$ be the unit outward normal vector to $\partial R_{\epsilon}$. See Figure \ref{fig:R_epsilon}. Let $\tilde{u}=u-w$. Then for any $\phi \in C_0^\infty(\Omega)$, it follows \begin{equation} \begin{aligned}\label{uwdiff} -\iint_\Omega \Delta \tilde{u} \phi dxdy = & -\iint_\Omega \Delta u \phi dxdy +\iint_\Omega \Delta w \phi dxdy \\ = & \iint_\Omega \delta_\gamma \phi dxdy + \iint_{\Omega\backslash R_{\epsilon}} \Delta w \phi dxdy+ \iint_{R_{\epsilon}} \Delta w \phi dxdy \\ = & \int_\gamma \phi ds + \iint_{\Omega\backslash R_{\epsilon}} \Delta w \phi dxdy - \iint_{R_{\epsilon}} \nabla w \cdot \nabla \phi dxdy + \int_{\partial R_{\epsilon}} \nabla w\cdot \mathbf{n}_\epsilon \phi ds. \end{aligned} \end{equation} For each term on the right hand side of (\ref{uwdiff}), we have the following estimates. In particular, $$ \int_{\partial R_{\epsilon}} \nabla w\cdot \mathbf{n}_\epsilon \phi ds =\int_{-\epsilon}^{1+\epsilon} (w_y(x,\epsilon)-w_y(x,-\epsilon))\phi dx + \int_{-\epsilon}^{\epsilon} (w_x(1+\epsilon,y)-w_x(-\epsilon,y))\phi dy. $$ By (\ref{eq:2d}) we have \begin{equation} \begin{aligned} \iint_{\Omega\backslash R_{\epsilon}} \Delta w \phi dxdy=0. 
\nonumber \end{aligned} \end{equation} As $\epsilon \rightarrow 0$, since $\nabla w\in L^2(\Omega\setminus\gamma)$ and $|R_\epsilon|\rightarrow 0$, it follows from the Cauchy--Schwarz inequality that $$ \iint_{R_{\epsilon}} \nabla w \cdot \nabla \phi dxdy \rightarrow 0; $$ and by the transmission condition in (\ref{eq:2d}), we further have $$ \int_{\partial R_{\epsilon}} \nabla w\cdot \mathbf{n}_\epsilon \phi ds \rightarrow \int_0^1 (w_y(x,0+)-w_y(x,0-))\phi dx = -\int_\gamma \phi ds. $$ Incorporating the above estimates into equation (\ref{uwdiff}), we have $$ -\iint_\Omega \Delta \tilde{u} \phi dxdy = 0, \quad \forall\ \phi \in C_0^\infty(\Omega). $$ We then conclude that $$ -\Delta \tilde{u} = 0 \quad \text{in } \Omega. $$ Note that $\tilde{u}=u-w=0$ on $\partial \Omega$; it then follows that $\tilde{u}=0$ in $\Omega$, namely, $u=w$ in $\Omega$. Therefore, the regularity estimates for $u$ in $\Omega^+$, $\Omega^-$, and in $B_{d, i}$, $i=1,2$, can be derived from the corresponding estimates for $w$ in Lemma \ref{waway} and in Theorem \ref{lemma3-4}. The regularity estimates for $u$ in $R_3$ follow from the results in \cite{Kondratiev67, Grisvard85} for elliptic Dirichlet problems in polygonal domains. \end{proof} \section{Optimal finite element methods}\label{sec-4} According to Lemma \ref{thm2-2}, the solution of equation \eqref{eq:Possion} is merely in $H^{\frac{3}{2}-\epsilon}(\Omega)$ for any $\epsilon>0$. The singularities in the solution can severely slow down the convergence of the usual finite element method associated with a quasi-uniform mesh. In this section, we propose new finite element algorithms to approximate the solution of equation \eqref{eq:Possion} that converge at the optimal rate. \subsection{The finite element method} Let $\mathcal{T}=\{T_i\}$ be a triangulation of $\Omega$ into triangles.
For $m\geq 1$, we denote the Lagrange finite element space by \begin{equation}\label{femspace} S(\mathcal{T},m)=\{v\in C^0(\Omega) \cap H_0^1(\Omega):v|_T\in P_m(T), \ \forall \ T \in \mathcal{T}\}, \end{equation} where $P_m(T)$ is the space of polynomials with degree no more than $m$ on $T$. Following the variational form (\ref{eqn.weak}), we define the finite element solution $u_h\in S(\mathcal{T},m)$ of equation \eqref{eq:Possion} by \begin{equation} \label{eq:FEM} \int_\Omega\nabla u_h\cdot\nabla v_h\,dxdy=\int_\gamma v_h\,ds, \quad \forall\ v_h\in S(\mathcal{T},m). \end{equation} Suppose that the mesh $\mathcal T$ consists of quasi-uniform triangles with size $h$. Because of the lack of regularity in the solution ($u\in H^{\frac{3}{2}-\epsilon}(\Omega)$), the standard error estimate \cite{Ciarlet74} yields only a sub-optimal convergence rate \begin{equation} \|u-u_h\|_{H^1(\Omega)}\leq C h^{\frac{1}{2}-\epsilon},\qquad {\rm{for\ any}}\ \epsilon>0. \end{equation} This is highly inefficient, since the optimal convergence rate with degree-$m$ polynomials, attained when the solution is smooth, is \begin{equation*} \|u-u_h\|_{H^1(\Omega)}\leq C h^{m}. \end{equation*} We now propose new finite element methods to solve equation \eqref{eq:Possion} based on special graded refinements of the triangles. Recall that the singular set $\mathcal V$ includes the endpoints of $\gamma$ and all the vertices of $\Omega$. We call the points in $\mathcal V$ the \textit{singular points}.
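As an aside, the only nonstandard ingredient in assembling (\ref{eq:FEM}) is the load vector $\int_\gamma v_h\,ds$. For piecewise linear elements on a mesh conforming to $\gamma$, a hat function restricted to $\gamma$ is a one-dimensional hat, so the entry at a node on $\gamma$ equals half the total length of its adjacent edges on $\gamma$. A minimal Python sketch (the helper name is ours; the nodes on $\gamma$ are assumed to be listed in order along the segment):

```python
import numpy as np

def line_source_load(gamma_nodes):
    """Entries b_i = integral over gamma of phi_i ds for P1 hat functions
    with nodes on gamma. Restricted to gamma, phi_i is a 1D hat, so b_i is
    half the total length of the edges of gamma adjacent to node i."""
    x = np.asarray(gamma_nodes, dtype=float)
    h = np.linalg.norm(np.diff(x, axis=0), axis=1)  # edge lengths along gamma
    b = np.zeros(len(x))
    b[:-1] += 0.5 * h
    b[1:] += 0.5 * h
    return b
```

By construction the entries sum to the length of $\gamma$, which is consistent with $\sum_i \int_\gamma \phi_i\,ds = |\gamma|$ for a partition of unity along $\gamma$.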
\begin{figure} \includegraphics[scale=0.34]{figures/line4.png}\hspace{3cm}\includegraphics[scale=0.34]{figures/line3.png} \caption{The new node of an edge $pq$ (left -- right): no singular vertices (midpoint); $p$ is a singular point ($|pr|=\kappa_p|pq|$, $\kappa_p<0.5$).}\label{fig.2} \end{figure} \begin{figure} \includegraphics[scale=0.34]{figures/t.png}\hspace{0.5cm} \includegraphics[scale=0.34]{figures/t1.png}\\\hspace{0cm}\includegraphics[scale=0.34]{figures/t22.png}\hspace{.5cm}\includegraphics[scale=0.34]{figures/3.png} \caption{Refinement of a triangle $\triangle x_0x_1x_2$. First row (left -- right): the initial triangle and the midpoint refinement; second row: two consecutive graded refinements toward $x_0$ ($\kappa_{x_0}<0.5$).}\label{fig.333}\end{figure} \begin{alg} \label{graded} (Graded refinements) Suppose each singular point is a vertex in the triangulation $\mathcal T$ and each triangle in $\mathcal T$ contains at most one singular point. We also suppose $\mathcal T$ conforms to $\gamma$. Namely, $\gamma$ is the union of some edges in $\mathcal T$ and does not cross triangles in $\mathcal T$. Let ${pq}$ be an edge in the triangulation $\mathcal T$ with $p$ and $q$ as the endpoints. Then, in a graded refinement, a new node $r$ on $pq$ is produced according to the following conditions: \begin{itemize} \item[1.] (Neither $p$ nor $q$ is a singular point.) We choose $r$ as the midpoint ($|pr|=|qr|$). \item[2.] ($p$ is a singular point.) We choose $r$ such that $|pr|=\kappa_p|pq|$, where $\kappa_p\in (0, 0.5)$ is a parameter that will be specified later. See Figure \ref{fig.2} for example. \end{itemize} Then, the graded refinement, denoted by $\kappa(\mathcal T)$, proceeds as follows. For each triangle $T$ in $\mathcal T$, a new node is generated on each edge as described above. Then, $T$ is decomposed into four small triangles by connecting these new nodes (Figure \ref{fig.333}).
Given an initial mesh $\mathcal T_0$ satisfying the condition above, the associated family of graded meshes $\{\mathcal T_n,\ n\geq0\}$ is defined recursively by $\mathcal T_{n+1}=\kappa(\mathcal T_{n})$. \end{alg} \begin{rem}\label{rkgraded} In Algorithm \ref{graded}, we choose the parameter $\kappa_p$ for each $p\in \mathcal V$ as follows. Recall $m$ is the degree of polynomials in the finite element space $S(\mathcal T_n, m)$. Then, if $p$ is an endpoint of $\gamma$, we choose $\kappa_p=2^{-\frac{m}{a}}$ for some fixed $0<a<1$, and if $p$ is a vertex of the domain $\Omega$, we choose $\kappa_p<2^{-\frac{m\omega}{\pi}}$, where $\omega$ is the largest interior angle of the domain. \end{rem} Let $S_n:=S(\mathcal{T}_n,m)$ be the finite element space of degree $m$ associated with the graded meshes defined in Algorithm \ref{graded} and Remark \ref{rkgraded}. Then, we define the finite element solution $u_n\in S_n$ as \begin{equation} \label{eq:FEM1} a(u_n, v_n)=\int_\Omega\nabla u_n\cdot\nabla v_n\,dxdy=\int_\gamma v_n\,ds, \quad \forall\ v_n\in S_n. \end{equation} Note that the bilinear form $a(\cdot, \cdot)$ is coercive and continuous on $S_n$. Thus, by C\'ea's Theorem, we have \begin{eqnarray}\label{cea} \|u-u_n\|_{H^1(\Omega)}\leq C\inf_{v\in S_n}\|u-v\|_{H^1(\Omega)}. \end{eqnarray} In the rest of this section, we shall show that the proposed numerical solution $u_n$ converges to the solution $u$ of \eqref{eq:Possion} at the optimal rate. \subsection{Interpolation error estimates} Recall the three regions $R_1$, $R_2$ and $R_3$ of the domain $\Omega$ in Remark \ref{rk31}. $R_1$ is the region that is away from the singular set $\mathcal V$, $R_2$ is the region close to the endpoints of $\gamma$, and $R_3$ is the region close to the boundary of the domain. According to the regularity analysis in Section \ref{sec-3}, the solution of equation (\ref{eq:Possion}) behaves differently in these three regions.
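Returning briefly to the mesh construction, the node-placement rule of Algorithm \ref{graded} together with the parameter choices of Remark \ref{rkgraded} can be sketched in a few lines (a Python illustration; the function names are ours, and for a domain vertex we simply return the stated upper bound $2^{-m\omega/\pi}$):

```python
import numpy as np

def new_node(p, q, kappa_p=None, kappa_q=None):
    """New node on edge pq: midpoint if neither endpoint is singular,
    otherwise graded toward the singular endpoint with ratio kappa < 0.5."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    if kappa_p is not None:      # p is a singular point
        return p + kappa_p * (q - p)
    if kappa_q is not None:      # q is a singular point
        return q + kappa_q * (p - q)
    return 0.5 * (p + q)         # regular edge: midpoint

def kappa_fracture_tip(m, a):
    """kappa_Q = 2^(-m/a) for an endpoint of gamma, with fixed 0 < a < 1."""
    return 2.0 ** (-m / a)

def kappa_corner_bound(m, omega):
    """Upper bound 2^(-m*omega/pi) for a domain vertex of interior angle omega."""
    return 2.0 ** (-m * omega / np.pi)
```

For instance, with linear elements ($m=1$) and $a=1/2$, the mesh is graded toward a fracture tip with ratio $\kappa_Q=2^{-2}=1/4$.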
We therefore focus on the local interpolation error analysis in different regions. \subsubsection{Interpolation error estimates in $R_1$ and $R_3$.} \begin{lem}\label{r1r3} Recall the triangulation $\mathcal T_n$ in Algorithm \ref{graded} and Remark \ref{rkgraded}. Let $T_{(0)}\in\mathcal T_{0}$ be an initial triangle and let $u_I$ be the nodal interpolation of $u$ associated with $\mathcal T_n$. If $\bar T_{(0)}$ does not contain an endpoint of $\gamma$, then \begin{equation*} \|u-u_I\|_{H^1(T_{(0)})}\leq Ch^m, \end{equation*} where $h:=2^{-n}$. \end{lem} \begin{proof} Note that if $\bar T_{(0)}$ does not contain an endpoint of $\gamma$, then $\bar T_{(0)}\cap\mathcal V= \emptyset$ or $\bar T_{(0)}$ contains a vertex of the domain $\Omega$. If $\bar T_{(0)}\cap\mathcal V=\emptyset$, we have $u\in H^{m+1}(T_{(0)})$ (Theorem \ref{ureg}) and the mesh on $T_{(0)}$ is quasi-uniform (Algorithm \ref{graded}) with size $O(2^{-n})$. Therefore, based on the standard interpolation error estimate, we have \begin{eqnarray}\label{1.1} \|u-u_I\|_{H^1(T_{(0)})}\leq Ch^m\|u\|_{H^{m+1}(T_{(0)})}\leq Ch^m. \end{eqnarray} In the case that $\bar T_{(0)}$ contains a vertex of the domain, the solution may be singular in the neighborhood of a corner. Based on the results in \cite{BNZ1}, the solution $u\in \mathcal K^{m+1}_{a+1}(T_{(0)})$ for $0<a<\frac{\pi}{\omega}$ and $m\geq 1$, where $\omega$ is the largest interior angle of the domain. Note that the graded mesh on $T_{(0)}$ with the parameter in Remark \ref{rkgraded} is the same mesh defined in \cite{BNZ1, Li10}, which can recover the optimal convergence rate in the finite element method even when the solution has corner singularities: \begin{eqnarray}\label{1.2} \|u-u_I\|_{H^1(T_{(0)})}\leq Ch^m. \end{eqnarray} The proof is hence completed by (\ref{1.1}) and (\ref{1.2}). \end{proof} \subsubsection{Interpolation error estimates in $R_2$.} We now study the interpolation error in the neighborhood of an endpoint $Q$ of $\gamma$.
In the rest of this subsection, we assume $T_{(0)}\in\mathcal T_0$ is an initial triangle such that $Q$ is a vertex of $T_{(0)}$. According to Remark \ref{rkgraded}, the mesh on $T_{(0)}$ is graded toward $Q$ with $\kappa_Q=2^{-\frac{m}{a}}$ for some fixed $0<a<1$. We first define mesh layers on $T_{(0)}$, which are collections of triangles in $\mathcal T_n$. \begin{definition} (Mesh layers) Let $T_{(i)}\subset T_{(0)}$ be the triangle in $\mathcal T_i$, $0\leq i\leq n$, that is attached to the singular vertex $Q$ of $T_{(0)}$. For $0\leq i<n$, we define the $i$th mesh layer of $\mathcal T_n$ on $T_{(0)}$ to be the region $L_{i}:=T_{(i)}\setminus T_{(i+1)}$; and for $i=n$, the $n$th layer is $L_{n}:=T_{(n)}$. See Figure \ref{fig.layer} for example. \end{definition} \begin{figure} \includegraphics[scale=0.34]{figures/t0.png}\hspace{0.5cm} \includegraphics[scale=0.34]{figures/l1.png}\hspace{0.5cm}\includegraphics[scale=0.34]{figures/l2.png} \caption{Mesh layers (left -- right): the initial triangle $T_{(0)}$ with a vertex $Q$; two layers after one refinement; three layers after two refinements.}\label{fig.layer}\end{figure} \begin{rem} The triangles in $\mathcal T_n$ constitute $n+1$ mesh layers on $T_{(0)}$. According to Algorithm \ref{graded} and the choice of grading parameters in Remark \ref{rkgraded}, the mesh size in the $i$th layer $L_i$ is \begin{equation}\label{eqn.size}O(\kappa_Q^i2^{i-n}). \end{equation} Meanwhile, the weight function $\rho$ in (\ref{eqn.rho}) satisfies \begin{eqnarray}\label{eqn.dist} \rho=O(\kappa_Q^i) \ \ \ {\rm{in\ }} L_i\ (0\leq i< n) \qquad {\rm{and}} \qquad \rho \leq C\kappa_Q^n \ \ \ {\rm{in\ }} L_n. \end{eqnarray} Although the mesh size varies in different layers, the triangles in $\mathcal T_n$ are shape regular.
In addition, using the local Cartesian coordinates such that $Q$ is the origin, the mapping \begin{eqnarray}\label{eqn.map} \mathbf B_{i}= \begin{pmatrix} \kappa_Q^{-i} & 0 \\ 0 & \kappa_Q^{-i} \\ \end{pmatrix},\qquad 0\leq i\leq n \end{eqnarray} is a bijection between $L_i$ and $L_0$ for $0\leq i<n$ and a bijection between $L_n$ and $T_{(0)}$. \end{rem} We then derive the interpolation error estimate in each layer. \begin{lem}\label{TNtri} Recall $\kappa_Q = 2^{-\frac{m}{a}}$ for the graded mesh on $T_{(0)}$, with $m\geq 1$ and $0<a<1$. Let $u_I$ be the nodal interpolation of $u$ in the $i$th layer $L_i$ on $T_{(0)}$, $0\leq i<n$. Then, for $h:=2^{-n}$, we have $$ |u-u_{I}|_{H^1(L_i)} \leq Ch^m. $$ \end{lem} \begin{proof} Based on Theorem \ref{ureg}, the solution can be decomposed into two parts on $T_{(0)}$, $u=u_{reg}+u_s$, where for $m\geq 1$ and $0<a<1$, \begin{equation*} \|u_{reg}\|_{{\mathcal K}_{a+1}^{m+1}(T_{(0)})}+\|u_s\|_{L^\infty(T_{(0)})} \leq C. \end{equation*} Since $u_s$ belongs to the finite dimensional space $W$, all norms of $u_s$ are equivalent. Thus, we have \begin{equation}\label{eqn.decomp} \|u_{reg}\|_{{\mathcal K}_{a+1}^{m+1}(T_{(0)})}+\|u_s\|_{H^{m+1}(T_{(0)})} \leq C. \end{equation} Note that in each $L_i$, $i<n$, the space ${\mathcal K}_{a+1}^{m+1}$ is equivalent to $H^{m+1}$. Therefore, both $u_{reg}$ and $u_s$ are continuous functions in $L_i$. Let $u_{reg, I}$ and $u_{s,I}$ be the nodal interpolations of $u_{reg}$ and $u_{s}$, respectively. Then, it is clear that $u_I=u_{reg, I}+u_{s,I}$. Thus, we have \begin{eqnarray}\label{eqn.deom1} |u-u_{I}|_{H^1(L_i)}\leq |u_{reg}-u_{reg, I}|_{H^1(L_i)}+|u_{s}-u_{s, I}|_{H^1(L_i)}. \end{eqnarray} We shall obtain the estimate for each term on the right hand side of (\ref{eqn.deom1}). Recall the mapping $\mathbf B_i$ in (\ref{eqn.map}). For any point $(x, y)\in L_i$, let $(\hat x, \hat y)=\mathbf B_i(x,y)\in L_0$. Then, for a function $v(x, y)$ in $L_i$, define $\hat v(\hat x, \hat y):=v(x, y)$ in $L_0$.
Using the standard interpolation error estimate, the scaling argument, the estimate in (\ref{eqn.size}) and the mapping in (\ref{eqn.map}), we have \begin{eqnarray*} |u_{reg}-u_{reg, I}|_{H^1(L_i)}&=& |\hat u_{reg}-\hat u_{reg, \hat I}|_{H^1(L_0)}\leq C2^{(i-n)m}|\hat u_{reg}|_{H^{m+1}(L_0)}\\ &\leq& C 2^{(i-n)m}\kappa_Q^{mi}|u_{reg}|_{H^{m+1}(L_i)}=C h^m(2\kappa_Q)^{mi}|u_{reg}|_{H^{m+1}(L_i)}. \end{eqnarray*} Recall $\kappa_Q=2^{-\frac{m}{a}}$ with $0<a<1$ and recall the estimate in (\ref{eqn.dist}). Then, continuing the estimate above, we obtain \begin{eqnarray} |u_{reg}-u_{reg, I}|^2_{H^1(L_i)}&\leq&C h^{2m}\sum_{|\alpha|=m+1}|\rho^{-1-a}\rho^{m+1}\partial^\alpha u_{reg}|^2_{L^2(L_i)}\nonumber\\&\leq& Ch^{2m} \|u_{reg}\|_{{\mathcal K}_{a+1}^{m+1}(L_i)}^2,\label{eqn.d1} \end{eqnarray} where the last step is based on the definition of the weighted space. For $|u_{s}-u_{s, I}|_{H^1(L_i)}$, by the fact that $\kappa_Q<0.5$, we similarly have \begin{eqnarray} |u_{s}-u_{s, I}|_{H^1(L_i)}&=& |\hat u_{s}-\hat u_{s, \hat I}|_{H^1(L_0)}\leq C2^{(i-n)m}|\hat u_{s}|_{H^{m+1}(L_0)}\nonumber\\ &\leq& C 2^{(i-n)m}\kappa_Q^{mi}|u_{s}|_{H^{m+1}(L_i)}=C h^m(2\kappa_Q)^{mi}|u_{s}|_{H^{m+1}(L_i)}\leq C h^m|u_{s}|_{H^{m+1}(L_i)}.\label{eqn.d2} \end{eqnarray} Then, the proof is completed by combining (\ref{eqn.deom1}), (\ref{eqn.d1}), (\ref{eqn.d2}), and (\ref{eqn.decomp}). \end{proof} We now derive the interpolation error estimate in the last layer $L_n$ on $T_{(0)}$. \begin{lem}\label{TNtri2} Recall $\kappa_Q = 2^{-\frac{m}{a}}$ for the graded mesh on $T_{(0)}$, $m\geq 1$ and $0<a<1$. Let $u_I$ be the nodal interpolation of $u$ in the $n$th layer $L_n$ on $T_{(0)}$ for $n$ sufficiently large. Then, for $h:=2^{-n}$, we have $$ |u-u_{I}|_{H^1(L_n)} \leq Ch^m. $$ \end{lem} \begin{proof} Recall from Theorem \ref{ureg} that on $T_{(0)}$, $u=u_{reg}+u_s\in \mathcal K_{a+1}^{m+1}+W$ (see also (\ref{eqn.decomp})). Let $u_{reg, I}$ and $u_{s,I}$ be the nodal interpolations of $u_{reg}$ and $u_{s}$, respectively.
Recall $u_s$ is a constant in the $n$th layer $L_n$ when $n$ is sufficiently large, and therefore $(u_s-u_{s, I})|_{L_n}=0$. Thus, it is sufficient to estimate $|u_{reg}-u_{reg, I}|_{H^1(L_n)}$. Recall the mapping $\mathbf B_n$ in (\ref{eqn.map}). For any point $(x, y)\in L_n$, let $(\hat x, \hat y)=\mathbf B_n(x,y)\in T_{(0)}$. Then, for a function $v(x, y)$ in $L_n$, define $\hat v(\hat x, \hat y):=v(x, y)$ in $T_{(0)}$. Let $\psi: T_{(0)} \rightarrow [0, 1]$ be a smooth function that is equal to $0$ in a neighborhood of $Q$, but is equal to 1 at all the other nodal points in $\mathcal T_0$. Then, we let $w=\psi \hat u_{reg}$ in $T_{(0)}$. Consequently, we have for $l\geq 0$ \begin{equation}\label{eqn.aux111} \|w\|^2_{{\mathcal K}^{l}_{1}(T_{(0)})}=\|\psi \hat u_{reg}\|^2_{{\mathcal K}^{l}_{1}(T_{(0)})} \leq C \|\hat u_{reg}\|^2_{{\mathcal K}^{l}_{1}(T_{(0)})}, \end{equation} where $C$ depends on $l$ and the smooth function $\psi$. Moreover, the condition $u_{reg}\in {\mathcal K}_{a+1}^{m+1}(T_{(0)})$ implies $u_{reg}(Q)=0$. Let $w_{\hat I}$ be the nodal interpolation of $w$ associated with the mesh $\mathcal T_0$ on $T_{(0)}$. Therefore, by the definition of $w$, we have \begin{eqnarray}\label{wi} w_{\hat I}=\hat u_{reg, \hat I} = \widehat{u_{reg, I}} \quad {\rm{in}}\ T_{(0)}.\end{eqnarray} Note that the ${\mathcal K}^{l}_{1}$ norm and the $H^l$ norm are equivalent for $w$ on $T_{(0)}$, since $w=0$ in the neighborhood of the vertex $Q$. Let $r$ be the distance to $Q$.
Then, by the definition of the weighted space, the scaling argument, (\ref{eqn.aux111}), (\ref{wi}), and (\ref{eqn.dist}), we have \begin{eqnarray*} |u_{reg}-u_{reg, I}|_{H^1(L_{n})}^2 &\leq& C\|u_{reg}-u_{reg, I}\|_{{\mathcal K}^1_{{1}}(L_{n})}^2 \leq C\sum_{|\alpha|\leq 1}\|r(x,y)^{|\alpha|-1}\partial^\alpha (u_{reg}- u_{reg,I})\|_{L^2(L_{n})}^2\\ &=&C\sum_{|\alpha|\leq 1}\|r(\hat{x},\hat{y})^{|\alpha|-1}\partial^\alpha (\hat u_{reg}- \widehat{u_{reg,I}})\|_{L^2(T_{(0)})}^2\leq C\|\hat u_{reg}- w+w-\widehat{u_{reg,I}}\|_{{\mathcal K}^1_{1}( T_{(0)})}^2 \\ & \leq &C\big( \|\hat u_{reg}-w\|^2_{{\mathcal K}^1_{1}(T_{(0)})} + \|w-\widehat{u_{reg,I}}\|^2_{{\mathcal K}^1_{1}( T_{(0)} )}\big) \\ & = & C\big( \|\hat u_{reg}-w\|^2_{{\mathcal K}^1_{1}(T_{(0)})} + \|w-w_{\hat I}\|^2_{{\mathcal K}^1_{1}( T_{(0)} )}\big) \\ & \leq & C\big( \|\hat u_{reg}\|^2_{{\mathcal K}^1_{1}(T_{(0)} )} + \|w\|^2_{{\mathcal K}^{m+1}_{1}( T_{(0)} )}\big) \leq C\big( \|\hat u_{reg}\|^2_{{\mathcal K}^1_{1}(T_{(0)})} + \|\hat u_{reg}\|^2_{{\mathcal K}^{m+1}_{1}( T_{(0)} )}\big)\\ & \leq & C\big( \|u_{reg}\|^2_{{\mathcal K}^1_{1}(L_n)} + \|u_{reg}\|^2_{{\mathcal K}^{m+1}_{1}( L_n)}\big)\leq C \kappa_Q^{2na}\|u_{reg}\|_{{\mathcal K}^{m+1}_{{a}+1}(L_{n})}^2\\ & \leq & C 2^{-2nm}\|u_{reg}\|_{{\mathcal K}^{m+1}_{{a}+1}(L_{n})}^2\leq C h^{2m}. \end{eqnarray*} This completes the proof. \end{proof} Therefore, for the finite element method solving equation (\ref{eq:Possion}) defined in Algorithm \ref{graded} and Remark \ref{rkgraded}, we obtain the optimal convergence rate. \begin{thm}\label{thm.optimal} Let $S_n$ be the finite element space associated with the graded triangulation $\mathcal T_n$ defined in Algorithm \ref{graded} and Remark \ref{rkgraded}. Let $u_n\in S_n$ be the finite element solution of equation (\ref{eq:Possion}) defined in (\ref{eq:FEM1}). Then, $$ \|u-u_n\|_{H^1(\Omega)}\leq C{\rm{dim}}(S_n)^{-\frac{m}{2}}, $$ where dim$(S_n)$ is the dimension of $S_n$.
\end{thm} \begin{proof} By C\'ea's Theorem (see (\ref{cea})), $$ \|u-u_n\|^2_{H^1(\Omega)}\leq C\|u-u_I\|^2_{H^1(\Omega)}=C\sum_{T_{(0)}\in\mathcal T_0}\|u-u_I\|^2_{H^1(T_{(0)})}. $$ Based on the Poincar\'e inequality and Lemmas \ref{TNtri} and \ref{TNtri2}, if the initial triangle $T_{(0)}$ has an endpoint of $\gamma$ as a vertex, we have $$ \|u-u_I\|^2_{H^1(T_{(0)})}\leq Ch^{2m}=C2^{-2mn}. $$ Summing up this estimate and the estimates in Lemma \ref{r1r3}, and noting that, based on Algorithm \ref{graded}, dim$(S_n)=O(4^n)$, we obtain $$ \|u-u_n\|^2_{H^1(\Omega)}\leq Ch^{2m}\leq C{\rm{dim}}(S_n)^{-m}, $$ which completes the proof. \end{proof} \begin{rem} The solution of equation (\ref{eq:Possion}) may possess singularities across the line segment $\gamma$, near the vertices of the domain, and near the endpoints of $\gamma$. We have derived regularity results in weighted Sobolev spaces and proposed numerical methods that solve equation (\ref{eq:Possion}) at the optimal convergence rate. These results can be extended to more general cases, for example, the case where the line fracture is replaced by multiple line fractures, whether intersecting or non-intersecting. With proper modifications, we also expect the analytical tools to be useful when $\gamma$ is a smooth curve and when the source term $\delta_{\gamma}$ is replaced by $q\delta_{\gamma}$ for $q\in L^2({\gamma})$. \end{rem} \section{Numerical examples} \label{sec-5} In this section, we present numerical test results to validate our theoretical predictions for the proposed finite element method solving equation (\ref{eq:Possion}). Since the exact solution $u$ is unknown, we use the following numerical convergence rate \begin{eqnarray}\label{rate} e=\log_2\frac{|u_j-u_{j-1}|_{H^1(\Omega)}}{|u_{j+1}-u_j|_{H^1(\Omega)}}, \end{eqnarray} where $u_j$ is the finite element solution on the mesh $\mathcal T_j$ obtained after $j$ refinements of the initial triangulation $\mathcal T_0$.
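Since the estimator (\ref{rate}) only requires the $H^1$ seminorms of differences between successive discrete solutions, it is straightforward to tabulate. The following minimal Python sketch computes $e$ from such a sequence; the norm values used here are illustrative placeholders halving at each refinement (rate $\approx 1$), not data from this paper:

```python
import math

def convergence_rate(diffs):
    """Observed rates e_j = log2(d_j / d_{j+1}) for a sequence of
    successive difference seminorms d_j = |u_j - u_{j-1}|_{H^1}."""
    return [math.log2(diffs[k] / diffs[k + 1]) for k in range(len(diffs) - 1)]

# Placeholder seminorm differences, halving with each refinement:
diffs = [0.08, 0.04, 0.02, 0.01]
rates = convergence_rate(diffs)
print(rates)  # each entry close to 1.0, i.e. the optimal P1 rate
```

In practice the $u_j$ on nested meshes are compared after interpolating the coarser solution onto the finer mesh.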
According to Theorem \ref{thm.optimal}, when the optimal convergence rate is obtained, the value of $e$ shall be close to $m$, where $m$ is the degree of the polynomial used in the numerical method. This desired rate can be achieved when the grading parameter near the endpoint $Q$ of $\gamma$ satisfies $\kappa_Q=2^{-\frac{m}{a}}$ for any $0<a<1$ and the grading parameter near a vertex $p$ of the domain satisfies $\kappa_p<2^{-\frac{m\omega}{\pi}}$, where $\omega$ is the largest interior angle among all the vertices of $\Omega$. For Examples \ref{P1h} and \ref{P1ex2}, we consider the finite element method based on $P_1$ polynomials for problem (\ref{eq:Possion}) in a square domain $\Omega=(0,1)^2$. \begin{example}\label{P1h} (Union-Jack meshes and graded meshes) In this example, the line fracture $\gamma=Q_1Q_2$ has two vertices $Q_1=(0.25,0.5)$ and $Q_2=(0.75,0.5)$. We use finite element methods on two types of triangular meshes: the Union-Jack mesh with elements across the line fracture $\gamma$; and the graded meshes conforming to $\gamma$ defined in Algorithm \ref{graded} with different values of the grading parameter. The initial triangulations are given in (a) and (c) of Figure \ref{Mesh_Init}, respectively, where the Union-Jack mesh has $128$ elements and the graded mesh has $64$ elements. To refine the Union-Jack mesh, each triangle is divided into four equal triangles. Note that in the square domain, the vertices of the domain do not lead to corner singularities in $H^2$. Therefore, we use quasi-uniform meshes near the corners, which shall not affect the global convergence rate. However, in the region across $\gamma$, the solution merely belongs to $H^{\frac{3}{2}-\epsilon}$ for any $\epsilon>0$. The Union-Jack mesh does not resolve the singularity across the fracture $\gamma$. Thus, on the Union-Jack mesh, the convergence rate (\ref{rate}) of the numerical solution shall be about $0.5$.
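As a quick numerical check of the refinement conditions stated above, the following sketch evaluates the endpoint grading parameter $\kappa_Q=2^{-m/a}$ and the corner bound $2^{-m\omega/\pi}$ for sample values of $m$ and $a$ (the particular $a$ values are illustrative choices, not prescribed by the paper):

```python
import math

def kappa_endpoint(m, a):
    """Grading parameter near a fracture endpoint: kappa_Q = 2^(-m/a), 0<a<1."""
    return 2.0 ** (-m / a)

def kappa_corner_bound(m, omega):
    """Upper bound for the grading parameter near a corner with interior
    angle omega: kappa_p < 2^(-m*omega/pi)."""
    return 2.0 ** (-m * omega / math.pi)

# P1 elements (m=1): any 0<a<1 gives kappa_Q < 0.5
print(kappa_endpoint(1, 0.5))   # 0.25
# P2 elements (m=2): kappa_Q < 0.25, consistent with the P2 example below
print(kappa_endpoint(2, 0.8))   # about 0.177
```

Note that as $a\to 1$, $\kappa_Q\to 2^{-m}$, recovering the quasi-uniform limit $\kappa=0.5$ for $m=1$.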
The graded mesh conforms to $\gamma$ and therefore resolves the solution singularity across $\gamma$. Based on Theorem \ref{thm.optimal}, when the grading parameter for the endpoints of $\gamma$ satisfies $\kappa:=\kappa_{Q_1}=\kappa_{Q_2}=2^{-\frac{1}{a}}<0.5$, the singular solution near $Q_1$ and $Q_2$ shall be well approximated, which yields the optimal convergence rate in the numerical approximation. \begin{figure} \centering \subfigure[]{\includegraphics[width=0.222\textwidth]{figure/uj0.png}}\hspace{0.19cm} \subfigure[]{\includegraphics[width=0.22\textwidth]{figure/uj3.png}}\hspace{0.3cm} \subfigure[]{\includegraphics[width=0.22\textwidth]{figure/gm0.png}}\hspace{0.3cm} \subfigure[]{\includegraphics[width=0.22\textwidth]{figure/gm2.png}} \caption{Graded mesh and Union-Jack mesh. (a) and (b): the initial Union-Jack mesh and the mesh after one refinement. (c) and (d): the initial graded mesh and the mesh after one refinement, $\kappa=\kappa_{Q_1}=\kappa_{Q_2}=0.2$. }\label{Mesh_Init} \end{figure} The convergence rates (\ref{rate}) associated with these two types of meshes are reported in Table \ref{TabConRate}. The first five rows are the rates on graded meshes, and the last row contains data on the Union-Jack mesh. Here $j$ is the number of refinements from the initial mesh. It is clear that the rate on a sequence of Union-Jack meshes is suboptimal with $e=0.5$. For graded meshes, when $\kappa<0.5$, the convergence rate is optimal, with $e=1$; the convergence is not optimal when $\kappa=0.5$. These results are closely aligned with our aforementioned theoretical prediction.
\begin{table}[!htbp]\tabcolsep0.08in \centering \caption{Convergence history of the numerical solution in Example \ref{P1h} with mesh refinements.} \begin{tabular}{|l|c|c|c|c|} \hline $\kappa \backslash j$ & $j=2$ & $j=3$ & $j=4$ & $j=5$ \\ \hline $\kappa=0.1$ & 0.99 & 0.94 & 0.97 & 0.99 \\ \hline $\kappa=0.2$ & 0.97 & 0.99 & 0.99 & 1.00 \\ \hline $\kappa=0.3$ & 0.87 & 0.96 & 0.99 & 1.00 \\ \hline $\kappa=0.4$ & 0.86 & 0.91 & 0.94 & 0.98 \\ \hline $\kappa=0.5$ & 0.84 & 0.87 & 0.89 & 0.91 \\ \hline Union-Jack & 0.46 & 0.47 & 0.49 & 0.49 \\ \hline \end{tabular}\label{TabConRate} \end{table} \end{example} \begin{example}\label{P1ex2} (Graded meshes for different fractures) This example tests the convergence rate on a sequence of graded meshes for problem (\ref{eq:Possion}) with the line fracture(s) at different locations. We shall use the linear finite element method and the same square domain as in Example \ref{P1h} for all the numerical tests in this example. \noindent\textbf{Test 1.} Suppose we have a longer line fracture $\gamma=Q_1Q_2$ with two vertices $Q_1=(0.1, 0.5)$, $Q_2=(0.9,0.5)$. See Figure \ref{Mesh_Init2} for the initial mesh and the graded mesh with $\kappa=0.2$ after four refinements. The convergence rates associated with different values of $\kappa=\kappa_{Q_1}=\kappa_{Q_2}$ are reported in the Test 1 columns of Table \ref{TabConRate2}. Similar to the numerical tests in Example \ref{P1h}, these results show that the convergence rate is suboptimal with $e=0.93$ on the quasi-uniform mesh ($\kappa=0.5$), but becomes optimal ($e=1$) on graded meshes for $\kappa<0.5$.
\begin{figure} \centering \subfigure[]{\includegraphics[width=0.26\textwidth]{figure/gmi1.png}}\hspace{0.8cm} \subfigure[]{\includegraphics[width=0.26\textwidth]{figure/gmf1.png}}\hspace{0.8cm} \subfigure[]{\includegraphics[width=0.3\textwidth]{figure/gmc2.png}} \caption{Graded meshes with line fracture $\gamma=Q_1Q_2$, $Q_1=(0.1,0.5)$, $Q_2=(0.9,0.5)$. (a) the initial mesh; (b) the mesh after four refinements, $\kappa=\kappa_{Q_1}=\kappa_{Q_2}=0.2$; (c) the numerical solution.}\label{Mesh_Init2} \end{figure} \begin{table}[!htbp]\tabcolsep0.08in \caption{Convergence history in Tests 1 \& 2 of Example \ref{P1ex2} on graded meshes.} \label{TabConRate2} \centering \begin{tabular}{|l|cccc|cccc|} \hline & \multicolumn{4}{c|}{Test 1} & \multicolumn{4}{c|}{Test 2} \\ \hline $\kappa \backslash j$ & $j=4$ & $j=5$ & $j=6$ & $j=7$ & $j=4$ & $j=5$ & $j=6$ & $j=7$ \\ \hline $\kappa=0.1$ & 0.97 & 0.98 & 0.99 & 1.00 & 0.97 & 0.99 & 0.99 & 1.00 \\ \hline $\kappa=0.2$ & 0.98 & 0.99 & 1.00 & 1.00 & 0.97 & 0.99 & 1.00 & 1.00 \\ \hline $\kappa=0.3$ & 0.99 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\ \hline $\kappa=0.4$ & 0.95 & 0.97 & 0.98 & 0.99 & 0.96 & 0.98 & 0.99 & 0.99 \\ \hline $\kappa=0.5$ & 0.91 & 0.92 & 0.93 & 0.93 & 0.93 & 0.93 & 0.94 & 0.94 \\ \hline \end{tabular} \end{table} \noindent\textbf{Test 2.} We consider a line fracture $\gamma=Q_1Q_2$ with the two vertices $Q_1=(0.2, 0.2)$, $Q_2=(0.8,0.8)$. Here we solve the problem (\ref{eq:Possion}) on graded meshes with the initial triangulation given in Figure \ref{Mesh_Init3}. The convergence rate is reported in the Test 2 columns of Table \ref{TabConRate2}.
We observe that the convergence rate is suboptimal with $e=0.94$ on the quasi-uniform mesh ($\kappa=0.5$), but it is optimal ($e=1$) on graded meshes for $\kappa<0.5$. The results in Table \ref{TabConRate2}, both from Test 1 and Test 2, are well predicted by the theory as discussed above. \begin{figure} \centering \subfigure[]{\includegraphics[width=0.26\textwidth]{figure/gmi2.png}}\hspace{0.8cm} \subfigure[]{\includegraphics[width=0.26\textwidth]{figure/gmf2.png}}\hspace{0.8cm} \subfigure[]{\includegraphics[width=0.31\textwidth]{figure/gmc3.png}} \caption{Graded meshes with line fracture $\gamma=Q_1Q_2$, $Q_1=(0.2,0.2)$, $Q_2=(0.8,0.8)$. (a) the initial mesh; (b) the mesh after four refinements, $\kappa=\kappa_{Q_1}=\kappa_{Q_2}=0.2$; (c) the numerical solution.}\label{Mesh_Init3} \end{figure} \noindent\textbf{Test 3.} In this test, we consider two line fractures $\gamma_1 = Q_1Q_2$ and $\gamma_2=Q_3Q_4$ in equation (\ref{eq:Possion}). Here the vertices are $Q_1=(0.3,0.1)$, $Q_2=(0.3, 0.9)$, $Q_3=(0.6,0.1)$ and $Q_4=(0.9, 0.9)$. The initial mesh is given in Figure \ref{Mesh_Init4}. Although two line fractures are imposed, we observe similar convergence rates: the suboptimal rate $e=0.94$ on quasi-uniform meshes ($\kappa=0.5$), and the optimal rate ($e=1$) on graded meshes when $\kappa:=\kappa_{Q_1}=\kappa_{Q_2}=\kappa_{Q_3}=\kappa_{Q_4}<0.5$. \begin{figure} \centering \subfigure[]{\includegraphics[width=0.26\textwidth]{figure/gmi3.png}}\hspace{0.8cm} \subfigure[]{\includegraphics[width=0.26\textwidth]{figure/gmf3.png}}\hspace{0.8cm} \subfigure[]{\includegraphics[width=0.305\textwidth]{figure/gmc4.png}} \caption{Graded meshes with two line fractures $\gamma_1=Q_1Q_2$ and $\gamma_2=Q_3Q_4$.
(a) the initial mesh; (b) the mesh after four refinements, $\kappa=\kappa_{Q_1}=\kappa_{Q_2}=\kappa_{Q_3}=\kappa_{Q_4}=0.2$; (c) the numerical solution.}\label{Mesh_Init4} \end{figure} \begin{table}[!htbp]\tabcolsep0.08in \caption{Convergence history in Test 3 of Example \ref{P1ex2} on graded meshes.} \label{TabConRate4} \centering \begin{tabular}{|l|c|c|c|c|} \hline $\kappa \backslash j$ & $j=4$ & $j=5$ & $j=6$ & $j=7$ \\ \hline $\kappa=0.1$ & 0.98 & 0.99 & 1.00 & 1.00 \\ \hline $\kappa=0.2$ & 1.00 & 1.00 & 1.00 & 1.00 \\ \hline $\kappa=0.3$ & 0.99 & 1.00 & 1.00 & 1.00 \\ \hline $\kappa=0.4$ & 0.96 & 1.00 & 1.00 & 1.00 \\ \hline $\kappa=0.5$ & 0.92 & 0.93 & 0.93 & 0.94 \\ \hline \end{tabular} \end{table} In these tests, we have implemented the linear finite element method proposed in Algorithm \ref{graded}. These numerical test results are in strong support of the estimate in Theorem \ref{thm.optimal}. We chose the square domain to avoid the possible corner singularity due to the non-smoothness of the domain, so that we can concentrate on the singular solution in the neighborhood of the line fracture. For general polygonal domains, the corner singularities should be taken into account. A proper refinement algorithm near these corners is also given in Remark \ref{rkgraded} and Theorem \ref{thm.optimal}. \end{example} \begin{example}\label{ex.3} ($P_2$ finite element methods) In this example, we consider the finite element method based on $P_2$ polynomials for equation (\ref{eq:Possion}). To minimize the effect of potential corner singularities, we solve the equation in the triangular domain $\Omega=\Delta ABC$ with $A=(0,0), B=(1,0)$ and $C=(0.5,1)$ and the line fracture $\gamma=Q_1Q_2$ with the two vertices $Q_1=(0.3, 0.25)$, $Q_2=(0.7,0.25)$.
Since all the interior angles of $\Omega$ are less than $\frac{\pi}{2}$, the solution is in $H^3$ except for the region that contains $\gamma$. See Figure \ref{Mesh_InitP2} for the initial triangulation that conforms to the fracture. Based on Theorem \ref{thm.optimal}, to achieve the optimal convergence rate in the numerical approximation, it is sufficient to use quasi-uniform meshes near the vertices of the domain and graded meshes with the grading parameter $\kappa:=\kappa_{Q_1}=\kappa_{Q_2}=2^{-\frac{2}{a}}<0.25$, since $0<a<1$. \begin{figure} \centering \subfigure[]{\includegraphics[width=0.26\textwidth]{figure/gmip2.png}}\hspace{0.8cm} \subfigure[]{\includegraphics[width=0.26\textwidth]{figure/gmfp2.png}}\hspace{0.8cm} \subfigure[]{\includegraphics[width=0.31\textwidth]{figure/gmcp2.png}} \caption{Quadratic finite element methods on graded meshes with the line fracture $\gamma=Q_1Q_2$, $Q_1=(0.3,0.25)$, $Q_2=(0.7,0.25)$. (a) the initial mesh; (b) the mesh after four refinements, $\kappa=\kappa_{Q_1}=\kappa_{Q_2}=0.2$; (c) the numerical solution.}\label{Mesh_InitP2} \end{figure} \begin{table}[!htbp]\tabcolsep0.08in \caption{Convergence history of the $P_2$ elements in Example \ref{ex.3} on graded meshes.} \label{TabConRateP2} \centering \begin{tabular}{|l|c|c|c|c|} \hline $\kappa \backslash j$ & $j=4$ & $j=5$ & $j=6$ & $j=7$ \\ \hline $\kappa=0.1$ & 1.74 & 1.86 & 1.94 & 1.97 \\ \hline $\kappa=0.2$ & 1.81 & 1.88 & 1.93 & 1.97 \\ \hline $\kappa=0.3$ & 1.65 & 1.68 & 1.70 & 1.71 \\ \hline $\kappa=0.4$ & 1.32 & 1.32 & 1.32 & 1.32 \\ \hline $\kappa=0.5$ & 1.00 & 1.00 & 1.00 & 1.00 \\ \hline \end{tabular} \end{table} The convergence rate (\ref{rate}) of the numerical solution in this example is reported in Table \ref{TabConRateP2}.
We observe that the convergence rate is suboptimal on graded meshes with $\kappa>0.25$. In particular, $e=1$ on quasi-uniform meshes ($\kappa=0.5$) and $1<e<2$ on graded meshes with $\kappa=0.3, 0.4$. It is clear that the optimal convergence rate $e=2$ is obtained on graded meshes when $\kappa<0.25$. These numerical results are clearly consistent with the theory developed in this paper. \end{example} \section*{Acknowledgments} This research was supported in part by the National Science Foundation Grant DMS-1819041 and by the Wayne State University Faculty Competition for Postdoctoral Fellows Award. \bigskip \def\cprime{$'$} \def\ocirc#1{\ifmmode\setbox0=\hbox{$#1$}\dimen0=\ht0 \advance\dimen0 by1pt\rlap{\hbox to\wd0{\hss\raise\dimen0 \hbox{\hskip.2em$\scriptscriptstyle\circ$}\hss}}#1\else {\accent"17 #1}\fi}
\section{Introduction} Even galaxies with little star formation activity continue to evolve, as evidenced by the substantial increase of their cosmic stellar mass density over the past 7 billion years \citep{bell04b, faber07, brown07}. This must be related to the decreasing star formation activity over the same period \citep[e.g.,][]{lefloch05}, and the production of such quiescent galaxies through the truncation of star formation \citep[e.g.,][]{faber07, bell07}; the color scatter among quiescent galaxies and its evolution are in precise agreement with such a scenario \citep{ruhland09}. There are, however, quiescent galaxies at all redshifts $z\lesssim 1.3$ that are more massive than the most massive star-forming galaxies. This implies that star formation in the most massive galaxies was truncated even earlier, and/or that mergers play an important role in producing massive galaxies. Evidence for the early formation of massive galaxies is provided by their old stellar populations. However, we need to bear in mind that there can be a large difference between the age of the stellar population and the assembly age, especially if mergers are important, as is the case in a hierarchical framework for galaxy formation \citep{delucia07}. Hence, the number density evolution of galaxies is important in constraining their assembly history. Measuring this is difficult because of its sensitivity to the luminosity evolution correction, especially for massive galaxies at the exponential cut-off of the mass function. As a result, there is no consensus among the currently available measurements \citep{cimatti06, wake06, brown07, cool08}. Given these difficulties, other observations have been used to either directly or indirectly constrain the assembly of galaxies. 
Merging activity among the massive galaxy population is observed \citep[e.g.,][]{vandokkum99, vandokkum05, bell06a, bell06b, lin08}, and has been shown to produce a color-magnitude relation that is in agreement with observations \citep{skelton09}. However, its cosmological relevance has always been difficult to determine, given the uncertainties in converting observed merger fractions to merger rates and the associated growth in mass. An independent and indirect indication that massive galaxies undergo continuous evolution is provided by the recent result that high-redshift quiescent galaxies are substantially smaller than local galaxies with the same mass \citep[see,][and references therein]{vanderwel08c}. This strongly suggests that mergers are important \citep[see, e.g.,][]{vanderwel09a}, and that the assembly of massive galaxies is continuing up until the present day. Another indirect, yet powerful, constraint is provided by the evolution in the clustering and halo occupation distribution of red galaxies \citep{white07, conroy07, brown08}: the evolution in the clustering strength of red galaxies is slower than expected in the absence of merging. In this Letter we address the question whether major merging is the dominant mechanism for the production of very massive, quiescent galaxies. The argument that we invoke is simply that major merging generally leads to rounder galaxies. An analysis of the shape distribution of quiescent galaxies can therefore constrain the importance of merging. Since merging among galaxies with mass ratios of $\lesssim 3$ is the only known mechanism to produce round galaxies (see Section 3 for further discussion), this is a powerful test. The disadvantage of this method, compared to those mentioned above, is that no information about the time scale and epoch of galaxy assembly can be inferred. 
\citet{vincent05} and \citet{padilla08} were the first to systematically study the axial ratio distribution, $p(b/a)$, of a large number of galaxies, selected from the Sloan Digital Sky Survey (SDSS). Through a detailed analysis, they infer the intrinsic shape distribution and the effect of extinction. Both divide the sample into 'elliptical' and 'spiral' galaxies, and confirm that luminous 'elliptical' galaxies are, on average, rounder and tri-axial, compared to low-luminosity 'ellipticals', which are more elongated and oblate \citep{davies83, franx91}, and display disky isophotes \citep{jorgensen94}. This phenomenon is not recent: \citet{holden09a} showed that this trend persists at least out to $z\sim 1$. Here we present a complementary, modified analysis, focusing on $p(b/a)$ as a function of stellar mass for quiescent, i.e., non-star-forming, galaxies. Because mass-to-light ratios are well constrained by broad-band colors for quiescent galaxies, stellar mass estimates are robust. This is essential for our purposes, as we are interested in the most massive objects, i.e., those that populate the exponential tail of the mass function. Furthermore, as opposed to previous studies, we pre-select galaxies independent of their photometric properties. Our shape-independent, spectroscopic selection criteria circumvent the biases that are potentially introduced by selecting galaxies by their 'morphological' properties, or some pre-defined surface brightness profile. With this sample, for which we have determined axial ratios from our own fits to two-dimensional light distributions, we address the following specific questions. Are high-mass, quiescent galaxies rounder than low-mass quiescent galaxies? If so, is there a mass limit at which $p(b/a)$ distinctly changes, and above which disk-dominated galaxies are completely absent? Such evidence would imply that the only evolutionary path to such masses is a disk-destroying mechanism, i.e., major merging.
\section{The Sample} We select a sample of 17,480 quiescent galaxies from Data Release 6 of the SDSS \citep{adelman08}. Our sample includes galaxies at redshifts $0.04<z<0.08$ without detectable $[\rm{OII}]$ and $\rm{H}\alpha$ emission lines. The selection criteria are described and motivated in full by \citet{graves09b}; but as opposed to that work, we do not exclude galaxies with a low concentration index and galaxies that are fit better by an exponential profile than by a \citet{devaucouleurs48} profile, because this may exclude quiescent, yet disk-like galaxies, which are obviously relevant for quantifying $p(b/a)$ of quiescent galaxies. As a consequence, our sample may include galaxies with star formation in an extended disk outside the SDSS spectroscopic fiber. This effect, however, does not affect our main conclusion that quiescent massive galaxies with prominent disks are extremely rare (see Section \ref{res}). Rather, such a bias works in the opposite direction in the sense that it would lead to the mistaken inclusion of galaxies with large disks. The exclusion of all galaxies with emission lines also excludes quiescent galaxies with active galactic nuclei. Their number, however, is small: they make up a fraction of the population \citep[e.g.,][]{pasquali09a} that is negligible for our purposes. The axial ratios were obtained as described by \citet{vanderwel08c}. Briefly, GALFIT \citep{peng02} is used to determine from the $r$-band the radii, axial ratios, position angles, and total magnitudes, assuming a \citet{devaucouleurs48} surface brightness profile. We have verified that adopting surface brightness models with a free S\'ersic index does not lead to a significantly different $p(b/a)$. The stellar masses are derived with the simple conversion from color to mass-to-light ratio \citep{bell03}, but are normalized to correspond to the \citet{kroupa01} stellar initial mass function.
The assumed cosmology is $(\Omega_{\rm{M}},~\Omega_{\Lambda},~h) = (0.3,~0.7,~0.7)$. The sample is complete over the entire redshift range $0.04 < z < 0.08$ down to $M_* \sim 4\times 10^{10}M_{\odot}$, set by the spectroscopic magnitude limit of the SDSS ($r=17.7$). The SDSS may be incomplete for low-luminosity, low-surface brightness galaxies \citep{blanton05c}, which could, in addition, depend on their orientation \citep[see, e.g.,][]{odewahn97}. However, since we are concerned with the high-mass end of the galaxy population, this does not play a role. Moreover, simulations of images with even lower signal-to-noise ratio than those of the massive galaxies analyzed here demonstrate that axial ratio measurements from GALFIT are robust and accurate \citep{holden09a}. In summary, the lack of galaxies with small $b/a$, reported in the following section, is not in any way compromised by selection effects or measurement errors. \section{Results and Discussion}\label{res} In Figure \ref{M_q}(a) we show $p(b/a)$ of the 17,480 spectroscopically selected, quiescent galaxies as a function of stellar mass. $p(b/a)$ is shown in gray scale, with the percentiles of the cumulative $b/a$ distribution shown as (red) lines. Figure \ref{M_q}(a) immediately demonstrates that for quiescent galaxies, the projected axial ratio distribution is a strong function of stellar mass. In the narrow mass range $8\times 10^{10} \lesssim M_*/M_{\odot} \lesssim 2\times 10^{11}$ there is a rapid decrease in the number of galaxies with small axial ratios. As further illustrated by Figure \ref{M_q}(b), above $M_*\sim 2\times 10^{11}~M_{\odot}$ quiescent galaxies with $b/a<0.6$ are essentially absent. This result shows that evolutionary paths that lead to quiescent galaxies with stellar mass $M_* \gtrsim 2\times 10^{11}~M_{\odot}$ all but exclude the existence, or the survival, of highly flattened, disk-like stellar components.
As highly flattened stellar systems are quite common at lower masses, among the plausible progenitors of high-mass galaxies, this result implies the destruction of the flattened component in whatever process causes growth beyond $M_*\sim 2\times 10^{11}~M_{\odot}$. Therefore, our result that essentially all quiescent galaxies with masses larger than $M_*\sim 2\times 10^{11}~M_{\odot}$ are round strongly suggests that for such galaxies major mergers are the dominant, perhaps even unique, formation channel. The destruction of a stellar disk requires a major merger, i.e., a merger involving progenitors with a relatively small mass ratio of at most $\sim 3$; mergers with a larger mass ratio leave stellar disks intact \citep[see, e.g.,][]{bekki98, bournaud04}. Moreover, most likely, the progenitors are not very gas rich, as this would produce a disky remnant \citep[e.g.,][]{naab06b}. It has been suggested that cold flows are responsible for the formation of massive, classical bulges at high redshift \citep{dekel09, ceverino09}. In this scenario, intensely star-forming 'knots' merge, forming a massive bulge \citep[see also][]{noguchi99}. However, a substantial fraction of the mass ($\sim 50\%$) is still predicted to reside in a disk. Even at later stages, when the gas disk has become stable against fragmentation and collapse \citep[``morphological quenching'';][]{martig09}, the stellar disk remains intact and contains a non-negligible fraction of the total mass. In short, although cold flows plausibly produce quiescent galaxies, the end-products will not be uniquely round. Only in the case of sufficient merger activity would galaxies become spheroidal.
In passing, we note that the sharp decrease in the fraction of very round galaxies ($b/a > 0.8$) at the very highest masses ($M>3\times 10^{11}~M_{\odot}$) signifies that such high mass galaxies are typically brightest group/cluster galaxies, which tend to be slightly more elongated than 'normal' massive elliptical galaxies \citep[see,][]{bernardi08}. As already noted above, at masses lower than $M_*\sim 10^{11}~M_{\odot}$, quiescent galaxies display a large range in axial ratios, which implies that star formation truncation mechanisms below $10^{11}~M_{\odot}$ are often not associated with the destruction of the disk. It remains to be tested whether $p(b/a)$ of low-mass quiescent galaxies is similar to or different from $p(b/a)$ of star-forming galaxies in the same mass range. Such an analysis, which is non-trivial because of the effects of extinction and color gradients, will constrain the degree to which mergers or bulge growth regulate star formation at these lower masses. Another open question concerns the number and properties of massive, star-forming galaxies. Morphological studies suggest that a large fraction ($20\%-40\%$) of all galaxies more massive than $M\sim 10^{11}~M_{\odot}$ are late-type galaxies \citep{vanderwel08a, bamford09}, and the high masses of at least some of these objects are confirmed by their rotational velocities \citep[e.g.,][]{courteau07}. Yet, the degree to which such galaxies are disk-dominated and should be considered actively star forming remains to be determined. It would, therefore, be premature to conclude that merging is the only way to produce a massive galaxy in general; we therefore restrict this proposition to the formation of massive, quiescent galaxies.
The picture sketched by the axial ratio distribution of quiescent galaxies is in agreement with the strong correlation between structure and mass for galaxies in general \citep[e.g.,][]{kauffmann03b, vanderwel08a} and early-type galaxies in particular \citep[e.g.,][]{caon93, graham03}: high-mass galaxies are more concentrated and have higher S\'ersic indices than low-mass galaxies. These trends are an indirect indication of a decreasing importance of disks for galaxies with higher masses, although part of this trend is caused by the increase in S\'ersic index with galaxy mass among spheroidal galaxies. In our sample we see a similar trend: in the mass range $10^{10} < M_*/M_{\odot} < 2\times 10^{10}$, 41\% of the galaxies have S\'ersic indices $n<3$, whereas at higher masses, $M_*>2\times 10^{11}M_{\odot}$, only 3\% have such low S\'ersic indices. We postpone a full exploration of the joint behavior of shape and structure as a function of galaxy mass until a future paper, but it is encouraging that the apparent absence of prominent disks in high-mass, quiescent galaxies is reflected in both the S\'ersic index and $p(b/a)$. \acknowledgements{The authors thank the referee for positive feedback. A.v.d.W. thanks Marijn Franx for helpful discussion. H.W.R. thanks Simon White for his insistence on eventually 'publishing the last thesis chapter'. This Letter makes use of the Sloan Digital Sky Survey (www.sdss.org).} \bibliographystyle{apj}
\section{Introduction} A current trend in cryptography is the design of primitives with a low-cost implementation. \textit{Lightweight cryptography}~\cite{Eisenbarth2007} is interested in metrics such as circuit area, energy consumption or code size. The Girault, Poupard and Stern authentication protocol (GPS) is one of the oldest lightweight primitives; it has been recommended by the European project NESSIE~\cite{Baudron2001} and appears in the ISO/IEC~9798-5 standard~\cite{iso}. GPS is a reference in the domain of lightweight authentication, and existing implementations aim to reduce the area as much as possible. In this paper, we explore the trade-off between speed and area in the implementation of GPS. Implementing GPS consists of designing an adder and a multiplier. The latter is the most critical part of a GPS core. Three typical strategies can be followed to implement this core: \textit{serial}, \textit{parallel}, and a trade-off between the previous two that we designate as \textit{hybrid}. The serial implementation based on \textit{shift-and-add} has already been covered in~\cite{Mcloone2007} and is the baseline for area. The other two strategies have not yet been studied. The parallel implementation is the traditional method to achieve the best speed, at the drawback of a large area. The hybrid implementation in our paper is in fact a serialization of the parallel approach that takes advantage of hardware repetition. It provides a trade-off between the serial and parallel approaches by executing chunks of the parallel multiplication. To implement a parallel version of GPS, we face a major challenge: implementing a parallel multiplier with large variable operands is out of the question. To overcome this limitation, we have specialized our parallel and hybrid implementations for a specific key. The complexity of our multiplier is therefore reduced to the implementation of a \textit{multiplier by a constant}~\cite{Wirthlin2004}.
We attempt to integrate our different implementations into the network stack of the wireless sensor platform {PowWow}. This platform embeds an Actel {IGLOO} FPGA with an 8~MHz clock. Our simulation results for a security level of 128 bits show that the parallel implementation of GPS with a fixed secret takes 1~$\mu s$, against 42~$\mu s$ for the existing serial implementation at 8~MHz. However, it does not fit within the FPGA. The integration of GPS into our platform is only possible for the hybrid and serial implementations. We note that the basic GPS protocol requires a 320~$\mu s$ response time, but McLoone and Robshaw circumvent this difficulty by using a different protocol in their article~\cite{Mcloone2007}, with an 18~ms latency. Our implementations fulfil the 320~$\mu s$ constraint for all architectures at 8~MHz, and would help reach this goal at a lower frequency for the parallel and hybrid implementations. The rest of this paper is organized as follows. The GPS protocol is recalled in Section~\ref{sec:gps}. Section~\ref{sec:serial} describes the existing serial implementation of GPS. The main contributions of the paper are given in Sections~\ref{sec:parallel} and~\ref{sec:hybrid} with the description of the parallel and hybrid implementations. The performances are compared and analyzed in Section~\ref{sec:comp}. \section{GPS protocol}\label{sec:gps} The GPS authentication protocol~\cite{Baudron2001} is an interactive zero-knowledge authentication protocol initially proposed by Girault, Poupard, and Stern~\cite{Girault2006}. It provides provable security based on the composite discrete logarithm problem. It also combines short transmissions and minimal on-line computation, using precomputed ``coupons''. This protocol has been selected in the NESSIE portfolio~\cite{nessie} and is mentioned in the ISO/IEC~9798-5 Clause~8~\cite{iso} as a reference. Throughout the paper, we implicitly refer to GPS as this variant ``with coupons''.
\paragraph{Description} The parameters used in this protocol are the following: \begin{itemize} \item $S$, $C$, $D$ are public integers, where $|S| \approx 180$ bits\footnote{Throughout the paper we use the notation $|X|$ for {\em the size in bits of the number $X$}}, $|C| = 32$ and $|D| = |S| + |C| + 80$, \item $n = p \times q$ is a public composite modulus, where $p$ and $q$ are secret primes, $|n| = 1024$, $|p| = |q| = 512$, \item $g$ is an element of $\mathbb{Z}_n^*$, \item $\Phi = (C-1) \times (S-1)$, \item $s \in [0,S[$ and $I = g^{-s} \mod n$, \item a coupon $i$ is a pair $(r_i, x_i = g^{r_i} \mod n)$, where $r_i \in [0, D[$ is a random number. \end{itemize} At the beginning, the prover $P$ has a unique identifier $Id_P$, a unique pair of keys (the private $s$ and the public $I$) and a set of coupons $c$ computed by a trusted entity. The verifier $V$ knows the prover's identifier and public key. \begin{figure}[htb] \centering \framebox[0.5\textwidth]{ \begin{tabular}{p{4cm}cp{4cm}} \multicolumn{1}{c}{\textbf{Verifier $V$}} & & \multicolumn{1}{c}{\textbf{Prover $P$}}\\ \multicolumn{1}{c}{$Id_V, I$} & & \multicolumn{1}{c}{$Id_P, s, I, c$}\\ \\ & $\xleftarrow{\ \ \ \ Id_P,\ x_i\ \ \ \ \ }$ & $^{(1)}$\\ \multicolumn{1}{r}{$^{(2)}$} & $\xrightarrow{\ \ \ \ \ \ \ n_V\ \ \ \ \ \ \ }$ & \\ \multicolumn{1}{r}{$_{(4)}$}& $\xleftarrow{\ \ \ \ \ \ \ \ y\ \ \ \ \ \ \ \ }$ & $^{(3)}$\\ \end{tabular}} \caption{The GPS protocol.}\label{fig:gps} \end{figure} GPS, depicted in Fig.~\ref{fig:gps}, works as follows. \begin{enumerate} \item[(1)] The prover $P$ chooses a coupon $(r_i, x_i)$, and sends its identifier $Id_P$ and $x_i$ to the verifier $V$. \item[(2)] The verifier answers with a challenge $n_V$ randomly chosen in the interval $[0, C[$. \item[(3)] The prover computes $y = r_i + n_V \times s$, and sends $y$ to the verifier. \item[(4)] The verifier checks that: \begin{itemize} \item $g^y \times I^{n_V} \mod n = x_i$, \item $y \in [0, D + \Phi[$.
\end{itemize} \end{enumerate} The arguments supporting the security of GPS can be found in~\cite{Baudron2001,Girault2006}. They are not repeated here, because the security of GPS is not affected by our results. In the remainder of the paper, $s$ is named \textit{the secret}, $n_V$ \textit{the challenge} and $r_i$ \textit{the commitment} (see~\cite{Baudron2001}). \paragraph{Existing implementation} GPS authentication has been designed for constrained embedded systems such as smart cards, RFID tags or sensor networks. The critical part for the implementation is on the prover side, assuming that the verifier (an RFID reader or a base station) suffers from fewer restrictions. Two steps are critical for the prover: the computation of $x_i$ (Step~(1)) and of $y$ (Step~(3)). The computation of $x_i$ is the most expensive one (an exponentiation), but the prover has nothing to compute online thanks to the coupons (pre-computation). Therefore, the last remaining cost is Step~(3), which consists of the multiplication by $s$ and the addition of $r_i$. The multiplication by $s$ is the most complex operation due to the size of the multiplicands. GPS was first designed for smart cards~\cite{Girault2006}. McLoone and Robshaw proposed in~\cite{Mcloone2007,McLoone2007b} the first hardware implementation of GPS. Their solution is based on the shift-and-add algorithm for multiplication. It offers a very small hardware footprint. This implementation is described in Section~\ref{sec:serial} and serves as the reference in our comparison. Girault and Lefranc~\cite{Girault2004} proposed a variant of GPS which exploits secrets $s$ of low Hamming weight. The multiplication is transformed into an addition when the secret is chosen properly. This variant can significantly reduce the cost of GPS. However, this solution was subsequently attacked in~\cite{Coron2005}, and it is not considered in this work. In the following sections, the different architectures for GPS are explored.
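As a sanity check of Steps (1)--(4), the full round can be replayed in software. The sketch below uses toy parameters (a tiny modulus and key, far below the real sizes $|n|=1024$, $|S|\approx 180$, $|C|=32$) purely for illustration; it relies on Python's three-argument \texttt{pow}, which computes modular inverses for negative exponents (Python 3.8+).

```python
# Toy-parameter replay of one GPS round (Steps (1)-(4)); illustrative only.
import random

p, q = 1009, 1013              # secret primes (toy sizes, no security)
n = p * q                      # public composite modulus
g = 2                          # public base in Z_n^*

s = 12345                      # prover's private key
I = pow(pow(g, s, n), -1, n)   # public key I = g^{-s} mod n

# (1) prover picks a precomputed coupon (r_i, x_i = g^{r_i} mod n)
r_i = random.randrange(1, n)
x_i = pow(g, r_i, n)

# (2) verifier sends a random challenge n_V
n_V = random.randrange(0, 2**8)

# (3) prover's only online computation: y = r_i + n_V * s
y = r_i + n_V * s

# (4) verifier checks g^y * I^{n_V} mod n == x_i
#     (g^y * g^{-s*n_V} = g^{r_i} mod n, so the check always passes)
assert pow(g, y, n) * pow(I, n_V, n) % n == x_i
```

Step (3) is the only online arithmetic on the prover, which is why the rest of the paper focuses on implementing the multiplication $n_V \times s$ and the final addition.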
Two criteria are examined: \textit{area} and \textit{speed}. A cryptographic design can be tuned for a specific key or support any value for the key. Throughout this paper, fixed-key and variable-key implementations are considered. \section{Serial implementation}\label{sec:serial} \begin{figure}[h] \centering $\begin{array}{l c l r} &&& 101001 \\ && \times & 110 \\ \cline{3-4} 1 \times 101001 \times 100 & \rightarrow & + & 101001\hspace{0.08cm}.\hspace{0.08cm}. \\ 1 \times 101001 \times 10 & \rightarrow & + & 101001\hspace{0.08cm}. \\ 0 \times 101001 & \rightarrow & & 000000 \\ \cline{3-4} &&& 11110110 \end{array}$ \caption{Shift-and-add classical binary multiplication.}\label{fig:school_bin_product} \end{figure} McLoone and Robshaw have proposed in~\cite{Mcloone2007} the reference serial implementation of GPS. This architecture is based on the classical shift-and-add multiplication~(Fig.~\ref{fig:school_bin_product}). The core of the design is a 16-bit adder. The product $n_V\times s$ is decomposed into a succession of 16-bit additions. The same 16-bit adder is re-used to perform the final addition $n_V\times s+r_i$ in 16-bit chunks. \begin{figure}[!ht] \centering \includegraphics[width=.5\textwidth]{GPS_serial.pdf} \caption{GPS serial implementation for a 128-bit secret and a 16-bit adder (from \cite{Mcloone2007}).} \label{fig:GPS_serial} \end{figure} The architecture of McLoone and Robshaw is presented in Fig.~\ref{fig:GPS_serial}. The control logic block processes the challenge $n_V$ bitwise and drives the multiplexers. The first multiplexer selects 16 bits of the secret $s$ or $0$, depending on the challenge bit. The adder sums the multiplexer data with the previous result stored in the shift register. The final addition of $r_i$ with the result is controlled by the second multiplexer. The same hardware is re-used for this addition. The data is processed sequentially by 16-bit blocks to obtain a serial architecture.
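Functionally (though not cycle-accurately, since the 16-bit chunked adder is abstracted into a single addition), the shift-and-add datapath above computes the following; the function name and bit-width parameter are chosen here for illustration:

```python
def serial_gps_response(s: int, n_v: int, r_i: int, nv_bits: int = 32) -> int:
    """Bit-serial shift-and-add: per challenge bit, a multiplexer
    selects s or 0, the accumulator is shifted and added; the same
    adder is then reused for the final addition of r_i."""
    acc = 0
    # process the challenge bits from most to least significant
    for k in reversed(range(nv_bits)):
        acc <<= 1                  # shift the previous partial result
        if (n_v >> k) & 1:         # multiplexer: select s or 0
            acc += s
    return acc + r_i               # final addition, adder reused

# agrees with a direct computation of y = r_i + n_V * s
assert serial_gps_response(0xDEADBEEF, 0x12345678, 999) == 999 + 0x12345678 * 0xDEADBEEF
```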
The balance between occupied area and execution time is set by the data bus size. Doubling the data bus size halves the number of required cycles, while increasing the adder and multiplexer size. We have implemented the serial architecture of McLoone and Robshaw independently, and we obtain area results close to those of the original article for the different security parameters (see Table~\ref{tab:GPSvsarticle}). However, McLoone and Robshaw's results only take into account the computing hardware. We give results for a complete implementation of the serial multiplier, including memories for $n_V$, $r_i$, $s$ and $y$. \section{Parallel implementation}\label{sec:parallel} A multiplier can be implemented using lookup tables (ROM). Let us consider two variable operands $n_V$ and $s$, of size $|n_V|$ and $|s|$. The product $n_V\times s$ is $|n_V|+|s|$ bits wide. A lookup table approach requires storing: $$2^{|n_V|+|s|}\times (|n_V|+|s|) \text{ bits.}$$ While attractive for small values of $n_V$ and $s$, lookup tables are clearly not practicable for GPS, \textit{i.e.} $|n_V|\geq 32$. A first attempt to reduce this cost consists of fixing $s$. This hypothesis implies that our implementation is dedicated to a given key. In most cryptographic implementations, the key can be updated. To our knowledge, there are two exceptions. FPGA vendors provide encryption schemes with a fixed key to protect the user's bitstream. White-box cryptography~\cite{Wyseur2011} also uses fixed-key implementations, but to obfuscate the key into the code. Our goal is to show that fixed-key implementations can provide benefits in terms of throughput. Using this assumption, the cost of a table lookup multiplier can be reduced to: $$2^{|n_V|}\times (|n_V|+|s|) \text{ bits.}$$ This approach is still not reasonable for GPS. To further reduce the memory size, Chapman proposed in~\cite{Chapman1996} the KCM method (for {\em k constant multiplier}) to decompose the computation into partial products.
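To make the two memory-size formulas above concrete, the small calculator below (a sketch; the function names are ours) evaluates them at GPS sizes and shows why even the fixed-key lookup table remains unreasonable:

```python
# Memory needed by a pure lookup-table multiplier, in bits, for the two
# cases discussed above: both operands variable, or secret s fixed.
def lut_bits_variable(nv_bits: int, s_bits: int) -> int:
    # 2^(|n_V|+|s|) entries of |n_V|+|s| bits each
    return 2 ** (nv_bits + s_bits) * (nv_bits + s_bits)

def lut_bits_fixed_key(nv_bits: int, s_bits: int) -> int:
    # fixing s leaves 2^|n_V| entries of |n_V|+|s| bits each
    return 2 ** nv_bits * (nv_bits + s_bits)

# GPS-sized operands: |n_V| = 32, |s| = 128
print(lut_bits_fixed_key(32, 128))   # about 6.9e11 bits, still far too large
```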
Let us explain Chapman's KCM method in decimal basis (Fig.~\ref{fig:constant_product}) for the multiplication $953 \times x$. If one can store in a look-up table (LUT) the nine values $1\times 953$, $2\times 953$,\ldots, $9\times 953$, then one can compute the whole product by simply adding shifted stored values, as indicated in Fig.~\ref{fig:constant_product} for $953 \times 482$. \begin{figure}[h] \centering $\begin{array}{l c l r} &&& 953 \\ && \times & 482 \\ \cline{3-4} 4 \times 953 \times 100 & \rightarrow & + & 3812\hspace{0.08cm}.\hspace{0.08cm}. \\ 8 \times 953 \times 10 & \rightarrow & + & 7624\hspace{0.08cm}. \\ 2 \times 953 & \rightarrow & & 1906 \\ \cline{3-4} &&& 459346 \end{array}$ \caption{Chapman KCM principle for multiplication by 953 in decimal basis.}\label{fig:constant_product} \end{figure} For a hardware design, KCM uses base $2^{\ell}$ rather than base 10. The choice of $\ell$ depends on the trade-off between the cost (memory and adders) and the technology characteristics (number of inputs per lookup table). For example, a classic Xilinx Virtex 4 has 4-input LUTs, and our Actel IGLOO has 3-input LUTs. First, $2^\ell$ values need to be stored in each table, hence a total memory size of: $$\frac{|n_V|}{\ell} \times 2^{\ell}\times (|s|+\ell) \text{ bits.}$$ The results obtained from these tables are combined by $\frac{|n_V|}{\ell}-1$ adders of $|s|+\ell$ bits. \begin{figure}[!ht] \centering \input{./fig/KCM_archi.pdf_tex} \caption{KCM constant multiplier (adapted from~\cite{Wirthlin2004}).}\label{fig:KCM} \end{figure} This architecture is illustrated in Fig.~\ref{fig:KCM} for $|s|=8$, $|n_V|=12$ and $\ell=4$. Each lookup table takes one part of $n_V$ as an input. The partial results are combined using two adders to produce the final result. Note that the left shifts are done by positioning the partial results and have no gate cost.
Note also that the result is obtained in a single cycle, with a very long critical path that could slow down the clock of the circuit. To cope with this problem, the designer can pipeline the circuit, increasing latency but maintaining a throughput of 1 result per clock cycle. For our GPS implementation of the parallel approach, we used a KCM operator generated by the {FloPoCo} library\footnote{\url{http://flopoco.gforge.inria.fr}}. This library is an open source project for non-standard arithmetic operators. In particular, it provides several integer constant multipliers~\cite{Brisebarre2008}. All operators are pipelined to run at high frequency (default setting of 300 MHz on Virtex FPGAs). \begin{figure}[!ht] \centering \includegraphics[width=.4\textwidth]{GPS_parallel.pdf} \caption{Parallel implementation of GPS (128 bits).}\label{fig:GPS_parallel} \end{figure} Fig.~\ref{fig:GPS_parallel} illustrates the parallel implementation of GPS used in our design. The notation $KCM_{32,4}(s)$ stands for {\em Chapman constant multiplier of a 32-bit number with the constant $s$, using 4-input LUTs}. The product is computed by the constant multiplier core generated with {FloPoCo}, with the secret $s$ as a constant. This architecture is the fastest; it exploits the fact that the secret $s$ is constant to reduce the size of the implementation. The performance results of the parallel approach are presented later in the paper, but the design could not fit on a small embedded FPGA such as the {IGLOO} AGL250 used in {PowWow}, hence the need for a trade-off between the serial and parallel approaches.
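Functionally, the KCM decomposition reduces to a digit-by-digit table lookup with shifted accumulation; the sketch below (function name and parameters are ours, and pipelining is abstracted away) evaluates the same loop sequentially, which is also, in essence, what the hybrid implementation of Section~\ref{sec:hybrid} does with a single table row:

```python
# Sketch of Chapman's KCM: split the multiplier x into l-bit digits,
# look up each digit*constant product in a precomputed 2^l-entry table,
# and combine the shifted partial products.
def kcm_multiply(const: int, x: int, x_bits: int, l: int = 4) -> int:
    table = [d * const for d in range(2 ** l)]   # the 2^l-entry LUT
    result = 0
    for chunk in range(x_bits // l):
        digit = (x >> (chunk * l)) & (2 ** l - 1)
        # the left shift is wiring in hardware: no gate cost
        result += table[digit] << (chunk * l)
    return result

# decimal example from the text, redone in base 2^4
assert kcm_multiply(953, 482, 12) == 953 * 482
```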
\section{Hybrid implementation}\label{sec:hybrid} \begin{figure}[h] \centering \includegraphics[width=.5\textwidth]{GPS_hybrid.pdf} \caption{Hybrid KCM architecture (128 bits).}\label{fig:hybrid_kcm} \end{figure} The idea of the hybrid implementation is to serialize the KCM architecture of Fig.~\ref{fig:KCM} {\em horizontally}. Indeed, each horizontal row of LUTs is the same: it contains all the values of $s$ multiplied by $0,1,\ldots, 2^{\ell}-1$. Hence, if we sequentialize the addition of one row with the others, we may use a single LUT of size $2^{\ell}\times (|s|+\ell)$ bits. The challenge $n_V$ is sent in blocks of $\ell$ bits to drive the LUT; the output of the LUT is added to the accumulation register, which is then shifted by $\ell$ bits for the next addition. The architecture, illustrated in Fig.~\ref{fig:hybrid_kcm}, uses one KCM table of $2^{\ell}\times (|s|+\ell)$ bits to compute the product between $\ell$ bits of the challenge and the constant $s$. The result is accumulated and shifted by $\ell$ bits each cycle. The adder is reused to sum the result of the multiplication with $r_i$. All the computation is done using a parallel adder. To further reduce the critical path, the adder could also be sequentialized into a sequence of smaller adders, as was done for the serial approach in Section~\ref{sec:serial}. \section{Experimental platform}\label{sec:exp} We performed our implementation on the wireless sensor platform {PowWow}\footnote{\url{http://powwow.gforge.inria.fr}}. This platform is specifically designed to be energy efficient~\cite{Berder2010}, and offers a constrained hardware platform in terms of area and time. It uses the Actel {IGLOO} AGL250 FPGA for control and a Texas Instruments CC2420 chip for RF communications. The Actel {IGLOO} is a low-power FPGA based on non-volatile flash technology. We integrated the GPS authentication into the network stack of the platform.
The physical layer is based on the 802.15.4 standard with CSMA/CA access provided by the CC2420 and controlled by the FPGA. It supports a simple point-to-point link with the verifier in order to process the whole GPS authentication (see Fig.~\ref{fig:gps} for details). In our set-up, the verifier is a computer interfaced with a Senslab\footnote{\url{http://www.senslab.info}} node for telecommunications. The implementation was verified by authenticating the prover to the verifier 8 times in a row, and with different keys on separate syntheses, using vectors generated by GMP\footnote{\url{http://gmplib.org}} as reference. The serial architecture was evaluated with several challenge and key sizes ($|s|=\{128,256,512\} , |n_V| = \{16,20,32\}$). The hybrid architecture could be implemented on the platform for a key size up to 256 bits. The experimental set-up is illustrated in Fig.~\ref{fig:demonstrator}. \begin{figure}[ht] \centering \input{./fig/demonstrator.pdf_tex} \caption{{PowWow} running a wireless authentication.}\label{fig:demonstrator} \end{figure} In order to validate our work, we have independently implemented the serial approach of GPS to check that our results were compatible with those of McLoone and Robshaw in~\cite{Mcloone2007}. The synthesis was done with the Cadence ATL Compiler and normalized to NAND-gate equivalents to be consistent with~\cite{Mcloone2007}. Table~\ref{tab:GPSvsarticle} compares our area results with the existing ones. The difference is below 5~\% for a serial implementation with an 8-bit adder and below 10~\% with a 16-bit adder. We consider these differences acceptable: our serial implementation can be used as a benchmark to compare the different architectures.
\begin{table}[!ht] \centering \caption{Area comparison between our serial implementation and the one of McLoone and Robshaw~\cite{Mcloone2007}.}\label{tab:GPSvsarticle} \begin{tabular}{c c c c c} Secret & Challenge & \multicolumn{2}{c}{Area (NAND)} & Difference \\ (bits) & (bits) & \cite{Mcloone2007} & our work & \\ \hline 8-bit adders \\ 160 & 32 & 1541 & 1505 & 2.31\% \\ 128 & 32 & 1327 & 1320 & 0.54\% \\ 160 & 20 & 1486 & 1413 & 4.89\% \\ 128 & 20 & 1286 & 1231 & 4.28\% \\ 160 & 8 & 1371 & 1341 & 2.21\% \\ 128 & 8 & 1167 & 1163 & 0.37\% \\ \hline 16-bit adders \\ 160 & 32 & 1642 & 1594 & 2.90\% \\ 128 & 32 & 1411 & 1403 & 0.54\% \\ 160 & 20 & 1642 & 1502 & 8.53\% \\ 128 & 20 & 1411 & 1314 & 6.90\% \\ 160 & 8 & 1511 & 1395 & 7.65\% \\ 128 & 8 & 1298 & 1205 & 7.16\% \end{tabular} \end{table} \section{Implementation results}\label{sec:comp} From Section~\ref{sec:serial} to~\ref{sec:hybrid}, three different approaches to implement GPS have been described, aiming at a low area footprint, a high throughput, or a trade-off between them. We now aim to verify these expectations, compare our implementations quantitatively and determine the influence of the secret size. The three approaches discussed above were written in VHDL and several syntheses were performed. All the results are given for a synthesis and a mapping performed using Actel tools and targeting the Actel {IGLOO} AGL250. The targeted clock frequency was fixed for all implementations at 8~MHz, the frequency used on the {PowWow} platform. During our experiments, we systematically aligned the bus size with the adder size. Moreover, we evaluated the overall footprint, including all the required memories. This is an important point, as it turns out that memories occupy much more space than the computing parts. It also illustrates the gains obtained by using a constant multiplier, which includes the constant memory.
\begin{figure}[!ht] \centering \input{fig/graph.pdf_t} \caption{Area vs throughput for a 32-bit challenge (log scale) and different secret sizes ($|s|=\{128,256,512\}$).}\label{fig:plot} \end{figure} We plotted the area and throughput of the three implementations in Fig.~\ref{fig:plot}. This illustrates the three different balances between area and throughput achieved by the three implementations. Moreover, these balances are maintained across the different levels of security, and both the area and the throughput can be represented by a linear approximation as a function of the secret size ($y=a\,|s|+b$). We observe that the area increases with the secret size for the serial ($a=5.6$), the parallel ($a=89.8$) and the hybrid ($a=11.3$) implementations, with the serial approach scaling best in terms of area. In terms of throughput, the serial implementation has a negative slope ($a=-46.3\times 10^{-6}$), contrary to the parallel ($a=125\times 10^{-3}$) and hybrid ($a=61.9\times 10^{-6}$) implementations, which achieve a better throughput with a larger secret $|s|$.
\begin{table}[!ht] \centering \caption{Area, latency and throughput comparison between implementations for a 32-bit challenge and different secret sizes.}\label{tab:ratio} \begin{tabular}{c@{\hspace{.8cm}} c@{\hspace{.8cm}} c@{\hspace{.8cm}} c@{\hspace{.8cm}}} Secret & \multicolumn{3}{c}{Implementation}\\ (bits) & Serial & Parallel & Hybrid \\ \hline & \multicolumn{3}{c}{Area (core cells)} \\ \cline{2-4} 128 & 1546 & 10676 & 2243 \\ 256 & 2253 & 21171 & 3467 \\ 512 & 3698 & 44978 & 6553 \\ \hline & \multicolumn{3}{c}{Latency (cycles)} \\ \cline{2-4} 128 & 339 & 8 & 48 \\ 256 & 603 & 12 & 72 \\ 512 & 1131 & 20 & 120 \\ \hline & \multicolumn{3}{c}{Throughput (cycles/byte)} \\ \cline{2-4} 128 & 0.088 & 30 & 0.625 \\ 256 & 0.076 & 46 & 0.639 \\ 512 & 0.069 & 76 & 0.650 \\ \end{tabular} \end{table} Table~\ref{tab:ratio} compares the three implementations in terms of hardware area, run cycles and throughput (in cycles per byte of result). We use the serial implementation as the reference for our comparison. The parallel implementation has an area 10 times larger than the serial implementation. For that cost, its pipeline structure provides a new result each cycle and a latency 40 times smaller when using multi-authentication. The hybrid implementation offers a middle ground, with less than twice the area and a latency about 8 times smaller. We also explored the impact of the adder size on the area of the serial implementation in Table~\ref{tab:GPS_bus_size}. To reduce the area, it would seem logical to choose the smallest adder. However, the adder size also impacts the bus size and how the memory is addressed. Therefore, choosing the smallest adder is not necessarily the best solution, because it can imply an addressing overhead. The gain on the adder can be offset by the memory cost. This effect is highly dependent on the underlying technology. In our case, the 16-bit adder yields a lower area than the 8-bit one.
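As a cross-check of the linear approximation quoted earlier, the serial-area slope can be recovered from the figures of Table~\ref{tab:ratio} with a least-squares fit (a sketch; variable names are ours):

```python
# Least-squares slope of area vs. secret size for the serial design,
# using the (|s|, area) pairs of Table 2; it reproduces a = 5.6.
xs = [128, 256, 512]            # secret size |s| in bits
ys = [1546, 2253, 3698]         # serial area in core cells
mx, my = sum(xs) / 3, sum(ys) / 3
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
print(round(slope, 1))          # -> 5.6
```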
\begin{table}[!ht] \centering \caption{Impact of the adder on the serial implementation area for a 32-bit challenge.}\label{tab:GPS_bus_size} \begin{tabular}{c c c c} Adder & \multicolumn{3}{c}{Secret (bits)} \\ (bits) & 128 & 256 & 512 \\ \hline & \multicolumn{3}{c}{Area (core cells)} \\ \hline 8 & 1542 & 2270 & 3745 \\ 16 & 1546 & 2253 & 3698 \\ 32 & 1934 & 2632 & 4034 \\ \end{tabular} \end{table} To be complete regarding the implementation footprint, we should also account for the coupon storage. By using a PRNG, as suggested in the McLoone and Robshaw article~\cite{Mcloone2007}, one can reduce the footprint to 1000 NAND equivalents for storing 20 coupons, plus another 1000 NAND for the PRNG. This would require about 2300 core cells, and is compatible with the serial and hybrid implementations for 128-bit and 256-bit secret sizes. \section{Conclusion} Several architectures for GPS are presented in this work, offering different balances between area and throughput. With respect to this goal, our contribution is three-fold. First, we have shown that using a fixed key can enable new implementation possibilities. Second, we have serialized KCM to reduce its area cost. To our knowledge, this is the first time that such a trade-off has been proposed for KCM. Third, we have integrated our GPS cores into a real platform ({PowWow}) to enable node authentication in a wireless sensor network. By including the memory cost, we obtain a better view of the overall area needed for GPS. Two of our cores are compatible with the platform constraints. We have shown the benefits of having a fixed key for the implementation of GPS. In return, an adversary may exploit these features to mount \textit{side-channel attacks} (SPA, DPA\dots). While not considered in this work, it will be interesting to analyze how resistant our implementations are to these attacks. We focused on GPS, but our results impact the implementation of other cryptosystems.
In terms of hardware architecture, GPS is very similar to WIPR~\cite{Oren2009} and BlueJay~\cite{Saarinen2012}. These two cryptosystems are based on an early paper by Shamir~\cite{Shamir1994}, which introduced the \textit{randomized multiplication} cryptosystem. As for GPS, the cores of these proposals are an adder and a multiplier. Our solutions can be adapted to implement WIPR, BlueJay and Shamir's scheme. In future work, we will compare the implementations of these different schemes. \section*{Acknowledgments} The authors want to thank Florent de Dinechin from the Aric INRIA team for his help on constant multiplication and Romain Fontaine from the CAIRN INRIA team for his help on the RF communication of the {PowWow} platform. \bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:intro} \noindent In this paper we consider an $N$--person differential game driven by a stochastic system of differential equations \begin{equation}\label{eq:sde} dX^i_t=(AX^i_t-\alpha^i_t)dt+\sigma dW^i_t\,, \qquad X_0^i=x^i\in\mathbb{R}^d\,,\qquad i=1,\ldots,N\,, \end{equation} where $A,\sigma$ are given $d\times d$ matrices, with $\det(\sigma)\neq 0$, $(W^1_t,\ldots,W^N_t)$ are $N$ independent Brownian motions and each $\alpha^i_t\colon [0,+\infty[\to\mathbb{R}^d$ is a bounded process adapted to $W^i_t$ which represents the control of the $i$--th player. Each player wants to minimize on the infinite time horizon a discounted quadratic cost functional given by \begin{equation}\label{eq:cost_disc} J^i(X,\alpha^1,\ldots,\alpha^N):=\mathbb{E}\left[\int_0^{+\infty}e^{-\ell t}\left(\,{(\alpha_t^i)^TR\alpha_t^i\over 2}\,+f^i(X^i_t\,;m^1,\ldots,m^N) \right)\,dt\right] \end{equation} where $X=(x^1,\ldots,x^N)\in\mathbb{R}^{N d}$ is the initial position of the dynamics, $\mathbb{E}$ denotes the expected value, $\ell$ is a positive discount factor, $R$ is a positive definite symmetric $d\times d$ matrix, and we set \begin{equation}\label{eq:average} f^i(x;m^1,\ldots,m^N):= \int_{\mathbb{R}^{d(N-1)}} F^i(\xi^1,\ldots,\xi^{i-1},x,\xi^{i+1},\ldots,\xi^N)\prod_{j\neq i} dm^j(\xi^j)\,, \end{equation} with \begin{align}\label{eq:quad_cost} F^i(X^1,\ldots, X^N)&:= (X-\overline{X_i})^T Q^i(X-\overline{X_i}) =\sum_{j,k=1}^N (X^j-\overline{X_i}^j)^T Q^i_{jk}(X^k-\overline{X_i}^k)\,, \end{align} for suitable $Nd\times Nd$ symmetric matrices $Q^i$ and suitable reference positions $\overline{X_i}\in\mathbb{R}^{N d}$. The notation $Q^i_{jk}$ ($j,k\in\{1,\ldots,N\}$) is used for the $d\times d$ block matrices of $Q^i$. In~\eqref{eq:cost_disc} and~\eqref{eq:average}, we denote by $m^1,\ldots, m^N$ the invariant measures associated with the processes $X^1_t,\ldots, X^N_t$.
In other words, we are assuming that the cost $J^i$ depends directly on the state of the $i$--th player only, while the other players only influence the cost through their asymptotic distribution in the environment, since $f^i$ represents an average of the quadratic cost $F^i$ w.r.t. the invariant measures of other players. The standing assumptions on the game~\eqref{eq:sde}--\eqref{eq:cost_disc} are summed up in the following conditions. \begin{description} \item{\bf (H1)} The matrix $\sigma$ in~\eqref{eq:sde} is invertible, the matrix $R$ is symmetric and positive definite and the matrices $Q^i$ in~\eqref{eq:quad_cost} are symmetric. \item{\bf (H2)} There exist matrices $Q,B$, $C_1,\ldots,C_N$, $D_1,\ldots,D_N$ and vectors $\Delta, H$ such that block matrices and reference states in~\eqref{eq:quad_cost} satisfy for all $i$ $$ Q^i_{ii}= Q\in \mathrm{Sym_d^+}\,, \qquad \overline{X_i}^i=H\,, $$ $$ Q^i_{ij}=\,{B\over 2}\,, \qquad Q^i_{jj}=C_i\,, \qquad \overline{X_i}^j=\Delta\,, \qquad \qquad \forall~j\neq i\,, $$ $$ Q^i_{jk}=D_i\,, \qquad \qquad \forall~j,k\neq i\,,j\neq k\,. $$ \item{\bf (H3)} The matrix $A$ is symmetric and there exist constants $r,k>0$ such that $R=r\,\mathrm{I}_d$ and $\nu:=\,{\sigma^T\sigma\over 2}=k\,\mathrm{I}_d$. \end{description} In~\cite{Bardi,BardiPriuli} games satisfying {\bf (H2)} were referred to as games with ``{\em nearly identical players}''. Notice that for such games we can rewrite~\eqref{eq:quad_cost} as \begin{align*} F^i(X^1,\ldots, X^N)&=(X^i-H)^T\,Q\,(X^i-H)\\ &+\sum_{j\neq i}\left[ (X^i-H)^T\,{B\over 2}\,(X^j-\Delta)+ (X^j-\Delta)^T\,{B\over 2}\,(X^i-H)\right]\\ &+\sum_{j\neq i}^N (X^j-\Delta)^T C_i(X^j-\Delta)+\sum_{j,k\neq i}^N (X^j-\Delta)^T D_i(X^k-\Delta)\,, \end{align*} which, in particular, means that each player cannot distinguish among other players and tries to reach his happy state $H$ while pushing all competitors towards a common state $\Delta$. 
For games of the form~\eqref{eq:sde}--\eqref{eq:cost_disc}, we study in this paper the existence of Nash equilibria through the solutions of an associated system of $2N$ Hamilton--Jacobi--Bellman (HJB, in the following) and Kolmogorov--Fokker--Planck (KFP) equations \begin{equation}\label{eq:hjkfp_disc_intro} \left\{ \begin{array}{l} -k\Delta v^i+\displaystyle\,{1\over 2r}\,|\nabla v^i|^2-(\nabla v^i)^TAx+\ell v^i=f^i(x;m^1,\ldots,m^N)\\ -k\Delta m^i-\displaystyle\mathrm{div}\left(m^i \cdot\Big({\nabla v^i\over r}\,-A x\Big)\right)=0\\ \int_{\mathbb{R}^d}m^i(x)\,dx=1\,,\qquad m^i>0 \end{array} \right. \qquad\qquad i=1,\ldots, N \end{equation} where the unknown $v^i,m^i$ represent respectively the value function for the $i$--th player and its invariant measure (with a slight abuse of notations, we denote with $m^i$ a measure as well as its density), and $\mathrm{div}$ is the divergence operator. In view of the Linear-Quadratic structure of the game, we look for solutions of the HJB--KFP system in the class of quadratic value functions and multivariate Gaussian distributions. This produces Nash equilibria for~\eqref{eq:sde}--\eqref{eq:cost_disc} in the form of affine feedbacks. Our result for these games is that, for small values of the discount factor $\ell>0$, there exists a unique Quadratic--Gaussian (abbreviated QG later on) solution to~\eqref{eq:hjkfp_disc_intro} and thus a unique affine Nash equilibrium strategy. Moreover, we rigorously prove that, as the number of players $N$ tends to infinity, QG solutions of~\eqref{eq:hjkfp_disc_intro} converge to solutions of the Mean Field PDE system $$ \left\{ \begin{array}{l} -k\Delta v+\displaystyle\,{1\over 2r}\,|\nabla v|^2-\nabla v^TAx+\ell v=\hat V[m](x)\\ -k\Delta m-\displaystyle\mathrm{div}\left(m \cdot\Big({\nabla v\over r}\,-A x\Big)\right)=0\\ \int_{\mathbb{R}^d}m(x)\,dx=1\,,\qquad m>0 \end{array} \right. 
$$ for a suitable integral operator $\hat V$ mapping probability measures into quadratic polynomials of the variable $x$. This latter result perfectly matches the ones obtained by Lasry \& Lions in their seminal papers~\cite{LL1,LL2,LL3} about differential games on the torus $\mathbb{T}^d$, and the ones on ergodic LQ games in $\mathbb{R}$ and $\mathbb{R}^d$ (see~\cite{Bardi} and~\cite{BardiPriuli}, respectively). \noindent Then, we investigate the relation between the games with discounted cost~\eqref{eq:cost_disc} and the ones with long--time--average cost functional studied in~\cite{BardiPriuli}. Namely, using the same notations as above, we consider the ergodic cost \begin{equation}\label{eq:cost_ergodic} J^i(X,\alpha^1,\ldots,\alpha^N):= \liminf_{T\to\infty}\,{1\over T}\,\mathbb{E}\left[ \int_0^T \,{(\alpha_t^i)^TR\,\alpha_t^i\over 2}\,+F^i(X^1,\ldots, X^N)\,dt \right]\,, \end{equation} whose affine Nash equilibria were characterized in~\cite{BardiPriuli} through the study of the corresponding HJB and KFP equations. Here, we prove that the QG solutions giving Nash equilibria for the game~\eqref{eq:sde}--\eqref{eq:cost_disc} converge as $\ell\to 0^+$ to the corresponding QG solutions for the game~\eqref{eq:sde}--\eqref{eq:cost_ergodic}, as in the case of classical differential games. Moreover, we prove that the limit procedures as $\ell\to 0^+$ and as $N\to+\infty$ commute. \noindent Finally, we investigate other singular limits procedures, and prove that the deterministic ($\nu\to 0$) and cheap cost ($R\to 0$) limits for the games with ergodic cost~\eqref{eq:sde}--\eqref{eq:cost_ergodic} do commute with the mean field limit ($N\to+\infty$). Linear--Quadratic differential games have a large literature, see the books~\cite{BO,Engw} and the references therein. 
The Lasry--Lions approach to MFG, originally introduced in~\cite{LL1,LL2,LL3}, has found application in several different contexts, ranging from numerical methods~\cite{AcCD} to discrete games~\cite{Gomes} and financial problems~\cite{GLL}. Large population limits for multi--agent systems were also studied independently by Huang, Caines and Malhame~\cite{HCM03,HCM06,HCM07ieee}. They introduced a method named ``Nash certainty equivalence principle'' that produces a feedback from a mean--field equation, and showed that such a control gives an approximate Nash equilibrium for the $N$--person game if $N$ is large enough. We cannot review here all the papers inspired by their approach, but let us cite~\cite{BSYY,LZ} for LQ problems,~\cite{NCMH} for recent progress on nonlinear systems,~\cite{KTY} on the rate of convergence as $N\to\infty$, and the references therein. In particular, we mention that~\cite{HCM07ieee,LZ} deal with discounted infinite horizon games such as the ones we are considering here. There are some differences between our results and the ones in the cited papers, though. In~\cite{HCM07ieee,LZ} more general costs $J^i$ are allowed, explicitly depending on the other players' states $X^j$, but only the existence of approximate Nash equilibria is established. Here, we trade off the generality of the cost to prove the existence of \emph{exact} Nash equilibria for the game, and to prove the relation between $N$--players games and their mean field limit, as $N\to+\infty$. More details will be discussed in section~\ref{sec:compare_caines}. The paper is organized as follows. In section~\ref{sec:prelim} we recall some preliminary facts about symmetric matrices, algebraic Riccati equations and the LQ games~\eqref{eq:sde}--\eqref{eq:cost_ergodic}. Section~\ref{sec:discounted} is devoted to the existence of Nash equilibria for infinite horizon differential games with discounted cost~\eqref{eq:sde}--\eqref{eq:cost_disc}.
Section~\ref{sec:singular} contains the results about singular limits as $\nu\to 0$ (deterministic limit), $R\to 0$ (cheap control) and $\ell\to 0^+$ (vanishing discount). Finally, section~\ref{sec:tech_proof} contains the proofs of the results and section~\ref{sec:conclusion} discusses extensions and open problems. \section{Notations and preliminaries}\label{sec:prelim} \subsection{Matrices and eigenvalues} In the following, we will use the notation $\mathrm{Mat}_{m\times n}(\mathbb{R})$ for the linear space of real $m\times n$ matrices, $\mathrm{I}_d\in\mathrm{Mat}_{d\times d}(\mathbb{R})$ for the identity $d\times d$ matrix and $\mathrm{spec}(A)$ for the spectrum of a matrix $A$. The linear subspace of real symmetric $d\times d$ matrices will be denoted by $\mathrm{Sym_d}$ and, for $M\in\mathrm{Sym_d}$, we say that $M$ is positive semidefinite (resp. positive definite) if for all $x\in\mathbb{R}^d$ there holds $x^TMx\geq 0$ (resp. if for all $x\in\mathbb{R}^d\setminus\{0\}$ there holds $x^TMx> 0$). The notation $\mathrm{Sym_d^+}$ will be used for the set of real symmetric and positive definite $d\times d$ matrices. Recall that for matrices $M\in\mathrm{Sym_d}$, the expression \begin{equation}\label{eq:max_spec_norm} \|M\|:=\max\,\{|\lambda|~;~\lambda\in\mathrm{spec}(M)\}\,, \end{equation} defines a norm. In particular, $\|M\|=\max\mathrm{spec}(M)$ whenever $M$ is positive semidefinite. Also, the eigenvalues of a matrix depend continuously on its coefficients (see e.g.~\cite{SerreMat}) so that, for instance, given a sequence of symmetric matrices $A_n\to A$, the sequences of the minimal and maximal eigenvalues of $A_n$ converge, respectively, to $\min\,\mathrm{spec}(A)$ and $\max\,\mathrm{spec}(A)$. We conclude with a property that will be used in the rest of the paper (cf. again~\cite{SerreMat}). \begin{prop}\label{prop:matrices} Let $H\in\mathrm{Sym_d}$ and $K\in\mathrm{Sym_d^+}$. Then, $HK$ is diagonalizable with real eigenvalues and the number of positive (resp.
negative) eigenvalues of $HK$ is equal to the number of positive (resp. negative) eigenvalues of $H$. The same holds for $KH$. \end{prop} \subsection{Admissible strategies and Nash equilibria} \begin{defn}\label{defn:admiss_strategy} A strategy $\alpha^i$ is said to be \emph{admissible} (for the $i$--th player) if it is a bounded process adapted to $W^i_t$ such that the corresponding solution $X^i_t$ to~\eqref{eq:sde} satisfies \begin{itemize} \item $\mathbb{E}[X^i_t]$ and $\mathbb{E}[(X^i_t)(X^i_t)^T]$ are both bounded on $[0,T]$ for every $T$; \item $X^i_t$ is \emph{ergodic} in the following sense: there exists a probability measure $m^i=m^i(\alpha^i)$ on $\mathbb{R}^d$ such that $$ \int_{\mathbb{R}^d}|x|\,dm^i(x)<\infty \qquad\qquad \int_{\mathbb{R}^d}|x|^2\,dm^i(x)<\infty $$ and $$ \lim_{T\to+\infty}\,{1\over T}~\mathbb{E}\left[\int_0^Tg(X^i_t)\,dt\right]=\int_{\mathbb{R}^d}g(x)\,dm^i(x)\,, $$ locally uniformly w.r.t. the initial state $X^i_0$, for all functions $g$ which are polynomials of degree at most $2$. \end{itemize} \end{defn} \noindent In~\cite{BardiPriuli} it was shown that all affine strategies $\alpha^i(x)= K^ix+c^i$ with $K^i\in\mathrm{Mat}_{d\times d}(\mathbb{R})$ such that the matrix $A-K^i$ has only eigenvalues with negative real part, and $c^i\in\mathbb{R}^d$, are admissible. Namely, considering $\alpha^i_t:= \alpha^i(X^i_t)$ with $X^i_t$ solution of \begin{equation}\label{eq:csde} dX^i_t=[(A-K^i)X^i_t-c^i]dt+\sigma^idW^i_t\,, \end{equation} $\alpha^i_t$ is admissible and $X^i_t$ has a unique invariant measure $m^i$ given by a multivariate Gaussian. 
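The Gaussian invariant measure of the closed-loop dynamics~\eqref{eq:csde} can be computed explicitly: the stationary mean $\mu$ solves $(A-K^i)\mu=c^i$, and the stationary covariance $P$ solves the Lyapunov equation $(A-K^i)P+P(A-K^i)^T+\sigma\sigma^T=0$. A minimal numerical sketch (all matrices below are illustrative assumptions, chosen so that $A-K$ is Hurwitz):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

d = 2
# illustrative data -- assumptions for the sketch
A = np.array([[0.3, 0.1], [0.0, 0.2]])
K = 1.5 * np.eye(d)                      # affine feedback gain
c = np.array([0.4, -0.2])
sigma = 0.5 * np.eye(d)

M = A - K                                # closed-loop drift matrix
assert np.all(np.linalg.eigvals(M).real < 0)   # admissibility of the feedback

mu = np.linalg.solve(M, c)               # stationary mean: (A - K) mu = c
# stationary covariance: M P + P M^T + sigma sigma^T = 0
P = solve_continuous_lyapunov(M, -sigma @ sigma.T)
```

Since $\sigma$ has full rank and $M$ is Hurwitz, $P$ is symmetric positive definite, so the invariant measure is the nondegenerate Gaussian ${\cal N}(\mu, P)$.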
\begin{defn} A vector of admissible strategies $\overline{\alpha}=(\overline{\alpha}^1,\ldots,\overline{\alpha}^N)$ is a \emph{Nash equilibrium strategy} for the $N$--person game with dynamics~\eqref{eq:sde} and cost $J^i$ given by either~\eqref{eq:cost_ergodic} or~\eqref{eq:cost_disc}, if for every index $i\in\{1,\ldots,N\}$ and for every admissible strategy $\alpha^i$ for the $i$--th player there holds $$ J^i(X,\overline{\alpha}^1,\ldots,\overline{\alpha}^N)\leq J^i(X,\overline{\alpha}^1,\ldots,\overline{\alpha}^{i-1},\alpha^i,\overline{\alpha}^{i+1},\ldots\overline{\alpha}^N)\,. $$ The Nash equilibrium is said to be \emph{symmetric} if all the players adopt the same strategy. \end{defn} \subsection{Algebraic Riccati equations} We recall here some basic facts about algebraic Riccati equations (ARE in the following). \begin{prop}\label{prop:ARE} Consider the ARE \begin{equation}\label{eq:ARE} Y {\cal R} Y + Y{\cal A}+{\cal A}^T Y - {\cal Q}=0 \end{equation} with ${\cal R}\in\mathrm{Sym_d^+}$, ${\cal Q}\in\mathrm{Sym_d}$ and ${\cal A}$ any $d\times d$ matrix, and introduce the following notations $$ \Xi_S:= \left[ \begin{array}{c} \mathrm{I}_d\\ S \end{array} \right]\in\mathrm{Mat}_{2d\times d}(\mathbb{R})\,, \qquad\qquad {\cal H}:= \left( \begin{array}{cc} {\cal A} & {\cal R}\\ {\cal Q} & -{\cal A}^T \end{array} \right)\in\mathrm{Mat}_{2d\times 2d}(\mathbb{R})\,, $$ where $S$ is any element of $\mathrm{Mat}_{d\times d}(\mathbb{R})$, and we write $\mathrm{Im}\,\Xi_S$ for the $d$--dimensional linear subspace of $\mathbb{R}^{2d}$ spanned by the columns of $\Xi_S$. Then the following facts hold. \begin{description} \item{\it (i)} $Y$ is a solution of~\eqref{eq:ARE} if and only if $\mathrm{Im}\,\Xi_Y$ is ${\cal H}$--invariant, i.e. if and only if ${\cal H}\xi\in\mathrm{Im}\,\Xi_Y$ for all $\xi\in\mathrm{Im}\,\Xi_Y$. \item{\it (ii)} If the matrix ${\cal H}$ has no purely imaginary nonzero eigenvalues, then equation~\eqref{eq:ARE} has solutions $Y$ such that $Y=Y^T$.
\item{\it (iii)} If~\eqref{eq:ARE} has symmetric solutions, then there exists a unique symmetric solution $Y$ with $$ \big\{\lambda\in\mathrm{spec}({\cal A}+{\cal R}Y)~;~\mathrm{Re}\,\lambda\neq 0\big\} = \mathrm{spec}({\cal H})\cap \big\{z\in\mathbb{C}~;~\mathrm{Re}\, z>0\big\}\,. $$ In particular, if ${\cal H}$ has only real nonzero eigenvalues, then there exists a unique symmetric solution $Y$ such that \begin{equation}\label{eq:spec_ARE} \mathrm{spec}({\cal A}+{\cal R}Y) = \mathrm{spec}({\cal H})\cap (0,+\infty)\,. \end{equation} \end{description} \end{prop} \noindent The proof follows from standard arguments about Riccati equations that can be found in~\cite{Engw,LR}. We give here some explicit references for the sake of completeness. Part {\it (i)} is contained in Proposition 7.1.1 of~\cite{LR}. Part {\it (ii)} is a particular case of Theorem 8.1.7 in~\cite{LR}. Finally, part {\it (iii)} is proved in Theorem 8.3.2 of~\cite{LR}. \subsection{Results for LQ games with ergodic cost}\label{sec:LQG} \noindent In view of the study of the singular limits, we review the results obtained in~\cite{BardiPriuli} for LQ differential games with ergodic costs. We start by noticing that, for the games~\eqref{eq:sde}--\eqref{eq:cost_ergodic}, all players share the same Hamiltonian given by \begin{align*} H(x,p)&:=\max_{\omega}\left\{-\omega^T \,{R\over 2}\,\omega - p^T\big(A\,x-\omega\big)\right\}=-p^TA\,x+\max_{\omega}\left\{-\omega^T \,{R\over 2}\,\omega + p^T\omega\right\}\,. \end{align*} Since the maximum is attained at $\omega=R^{-1} p$, we conclude $H(x,p)=p^T\,{R^{-1}\over 2}\,p - p^TA\,x$.
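The closed form of the Hamiltonian can be cross-checked numerically: the strictly concave function $\omega\mapsto-\omega^T(R/2)\,\omega-p^T(Ax-\omega)$ attains its maximum at $\omega=R^{-1}p$, and the maximal value equals $p^T(R^{-1}/2)\,p-p^TAx$. A minimal sketch with illustrative data (all numerical values are assumptions):

```python
import numpy as np

# illustrative data (assumptions): R symmetric positive definite
R = np.array([[2.0, 0.3], [0.3, 1.0]])
A = np.array([[0.5, 0.0], [0.1, -0.2]])
x = np.array([1.0, -1.0])
p = np.array([0.3, 0.7])

def inner(w):
    """The function maximised in the definition of the Hamiltonian."""
    return -0.5 * w @ R @ w - p @ (A @ x - w)

w_star = np.linalg.solve(R, p)                     # candidate maximiser R^{-1} p
H_closed = 0.5 * p @ np.linalg.solve(R, p) - p @ A @ x

# the candidate beats random perturbations (strict concavity of `inner`)
rng = np.random.default_rng(0)
assert all(inner(w_star) >= inner(w_star + rng.normal(size=2))
           for _ in range(100))
assert abs(inner(w_star) - H_closed) < 1e-12       # matches the closed form
```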
Therefore, the system of HJB--KFP equations associated to the game is given by \begin{equation}\label{eq:hjkfp} \left\{ \begin{array}{l} -\mathrm{tr}(\nu\,\mathrm{D}^2 v^i)+\displaystyle\,{1\over 2}\,(\nabla v^i)^T R^{-1}\nabla v^i-(\nabla v^i)^TAx+\lambda^i=f^i(x;m^1,\ldots,m^N)\\ -\mathrm{tr}(\nu\,\mathrm{D}^2 m^i)-\displaystyle\mathrm{div}\Big(m^i \cdot(R^{-1}\nabla v^i-A x)\Big)=0\qquad\qquad\qquad\qquad\qquad i=1,\ldots, N\\ \int_{\mathbb{R}^d}m^i(x)\,dx=1\,,\qquad m^i>0 \end{array} \right. \end{equation} where the unknown $v^i,m^i$ represent respectively the value function for the $i$--th player and its invariant measure, and $\lambda^i$ is a real number representing the outcome of the game for the $i$--th player. Here $\mathrm{tr}$ and $\mathrm{div}$ are respectively the trace of a matrix and the divergence operator. In order to formulate the algebraic conditions which characterize the existence of Quadratic--Gaussian (QG in the rest of the paper) solutions to~\eqref{eq:hjkfp}, we need the following definition. \begin{defn}\label{def:RSprop} Given matrices ${\bf A}\in\mathrm{Sym_d}$ and ${\bf N}, {\bf R}, {\bf Q}\in\mathrm{Sym_d^+}$, we say that $({\bf A}, {\bf N}, {\bf R}, {\bf Q})$ satisfy the \emph{Riccati--Sylvester property} if every symmetric and positive definite solution $Y$ of the ARE \begin{equation}\label{eq:riccati} Y\,{{\bf N} {\bf R}{\bf N}\over 2}\,Y=\,{{\bf A}^T {\bf R}{\bf A}\over 2}\,+{\bf Q}\,, \end{equation} is also a solution of the Sylvester equation \begin{equation}\label{eq:sylvester} Y {\bf N}{\bf R}-{\bf R}{\bf N} Y= {\bf R}{\bf A}-{\bf A}^T{\bf R}\,. 
\end{equation} \end{defn} The first result for $N$--players games~\eqref{eq:sde}--\eqref{eq:cost_ergodic} satisfying {\bf (H1)} and {\bf (H2)} was the following (cf Theorem~2 in~\cite{BardiPriuli}): The system of $2N$ HJB--KFP equations~\eqref{eq:hjkfp} admits a unique solution $(v^i,m^i,\lambda^i)$ of the form \begin{equation}\label{eq:ansatz_nearly_id} v^i(x) =x^T\,{ \Lambda\over 2}\, x+\rho x\,, \qquad\qquad m^i(x)= {\cal N}(\mu,\Sigma^{-1})\,, \qquad\qquad \lambda^i\in\mathbb{R}\,, \end{equation} for suitable symmetric matrices $\Lambda,\Sigma$, with $\Sigma$ positive definite, and suitable vectors $\mu,\rho$, which are in common for all the players, if and only if $(A,\nu,R,Q)$ satisfy the Riccati--Sylvester property in the sense of Definition~\ref{def:RSprop} and the matrix ${\cal B}:= Q+\,{A^TRA\over 2}\,+(N-1)~{B\over 2}$ is invertible. Moreover, the affine feedbacks $\overline{\alpha}^i=\overline{\alpha}:= R^{-1}\nabla v$, for $i=1,\ldots,N$, provide a symmetric Nash equilibrium strategy for all initial positions $X\in\mathbb{R}^{Nd}$ and $J^i(X,\overline{\alpha})=\lambda^i$ for all $X$ and all $i$. \noindent In particular, by going through the proof of this Theorem in~\cite{BardiPriuli}, one sees that the coefficients $\Lambda,\Sigma,\rho,\mu$ are determined by solving the following algebraic relations \begin{equation}\label{eq:KFP_matrix} \Sigma\,{\nu R\nu\over 2}\,\Sigma-\,{A^T RA\over 2}\,-Q=0\,, \qquad {\cal B} \mu=P\,, \qquad \Lambda=R\big(\nu\Sigma+A\big)\,, \qquad \rho=-R\,\nu\,\Sigma\mu\,. \end{equation} with $P:=Q H+(N-1)~{B\over 2}\,\Delta$, and $\lambda^i=F^i(\Sigma,\mu)+\mathrm{tr}(\nu R\nu\Sigma+\nu R A)-\mu^T\,{\Sigma \nu R\nu \Sigma \over 2}\, \mu$ with \begin{align}\label{eq:v1} F^i(\Sigma,\mu)&:= H^TQH-(N-1)\,H^T\,{B\over 2}\,(\mu-\Delta) -(N-1)(\mu-\Delta)^T\,{B\over 2}\,H+(N-1)\mathrm{tr}(C_i \Sigma^{-1})\nonumber\\ &~~~ + (N-1)(\mu-\Delta)^T C_i(\mu-\Delta) +(N-1)(N-2)(\mu-\Delta)^T D_i(\mu-\Delta)\,. 
\end{align} \medskip In order to study the behavior of QG solutions of~\eqref{eq:hjkfp} as $N\to+\infty$, we assume for simplicity that the control system, the costs of the control and the reference positions are always the same, i.e. that $A,\sigma,R,H$ and $\Delta$ are all independent from the number of players $N$. We also denote with $$ Q^N\,, \qquad B^N\,, \qquad C_i^N\,, \qquad D_i^N\,, $$ the primary and secondary costs of displacement, respectively, which are assumed to depend on $N$. Concerning these quantities, we require that they tend to suitable matrices $\hat Q,\hat B,\hat C,\hat D$ with their natural scaling, i.e., that as $N\to+\infty$ there hold \begin{equation}\label{eq:scale} Q^N\to \hat Q\,, \qquad B^N(N-1)\to \hat B\,, \qquad C_i^N(N-1)\to \hat C\,, \qquad D_i^N(N-1)^2\to \hat D\,, \qquad \forall~i\,. \end{equation} If we define an operator on probability measures of $\mathbb{R}^d$ by setting for all measures ${\frak m}\in\mathscr{P}(\mathbb{R}^d)$ \begin{align*} \hat V[{\frak m}](X)&:=(X-H)^T \hat Q (X-H) +\!\!\int_{\mathbb{R}^d}\!\!\left((X-H)^T \,{\hat B\over 2}\, (\xi-\Delta)+(\xi-\Delta)^T\,{\hat B\over 2}\,(X-H)\right)d{\frak m}(\xi)\\ &~~~~~+\int_{\mathbb{R}^d} (\xi-\Delta)^T \hat C (\xi-\Delta)\,d{\frak m}(\xi) +\left(\int_{\mathbb{R}^d} (\xi-\Delta)\,d{\frak m}(\xi)\right)^T\hat D\left(\int_{\mathbb{R}^d} (\xi-\Delta)\,d{\frak m}(\xi)\right) \end{align*} then it is possible to verify that, as $N\to+\infty$, the solutions $v^i_N$, $m^i_N$ and $\lambda^i_N$ of~\eqref{eq:hjkfp} tend to solutions of the system of mean field equations \begin{equation}\label{eq:mfpde} \left\{ \begin{array}{l} -\mathrm{tr}(\nu\mathrm{D}^2 v)+\displaystyle\,{1\over 2}\,\nabla v^T R^{-1}\nabla v-\nabla v^TAx+\lambda=\hat V[m](x)\\ -\mathrm{tr}(\nu\mathrm{D}^2 m)-\displaystyle\mathrm{div}\Big(m \cdot(R^{-1}\nabla v-A x)\Big)=0\\ \int_{\mathbb{R}^d}m(x)\,dx=1\,,\qquad m>0 \end{array} \right. \end{equation} like in~\cite{Bardi,LL1,LL3}. 
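For a Gaussian measure $m$, the integrals defining $\hat V[m]$ depend only on the first two moments of $m$, which makes it transparent that $\hat V$ maps probability measures into quadratic polynomials of the variable $x$. A numerical sketch of this moment reduction, cross-checked against Monte Carlo sampling (all data below are illustrative assumptions, with $\hat B$ taken symmetric):

```python
import numpy as np

d = 2
# illustrative data (assumptions); hat-B is taken symmetric here
Qh = np.array([[2.0, 0.3], [0.3, 1.0]])
Bh = 0.4 * np.eye(d)
Ch = 0.5 * np.eye(d)
Dh = 0.2 * np.eye(d)
H, Delta = np.array([1.0, 0.0]), np.array([0.0, 1.0])
mu = np.array([0.5, 0.5])                     # mean of m
Cov = np.array([[0.3, 0.1], [0.1, 0.2]])      # covariance of m

def V_hat(X):
    """hat-V[m](X) via the first two moments of m = N(mu, Cov)."""
    e = mu - Delta
    return ((X - H) @ Qh @ (X - H)
            + (X - H) @ Bh @ e                 # the two B/2 cross terms combined
            + np.trace(Ch @ Cov) + e @ Ch @ e  # E[(xi-Delta)^T C (xi-Delta)]
            + e @ Dh @ e)

# Monte Carlo cross-check of the moment formulas
rng = np.random.default_rng(0)
xi = rng.multivariate_normal(mu, Cov, size=200_000)
X = np.array([0.0, 0.0])
mc = ((X - H) @ Qh @ (X - H)
      + np.mean((xi - Delta) @ Bh @ (X - H))
      + np.mean(np.sum((xi - Delta) @ Ch * (xi - Delta), axis=1))
      + (np.mean(xi, axis=0) - Delta) @ Dh @ (np.mean(xi, axis=0) - Delta))
```

With $2\times 10^5$ samples the two evaluations agree to a few parts in a thousand.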
Namely, if we assume that \begin{equation}\label{eq:hyp_limit} \nu\in\mathrm{Sym_d^+}\,, \qquad\qquad R\in\mathrm{Sym_d^+}\,, \qquad\qquad \hat Q\in\mathrm{Sym_d^+}\,, \end{equation} then the following facts hold (cf. Theorem~3 in~\cite{BardiPriuli}). First of all, the system~\eqref{eq:mfpde} admits a unique solution $(v,m,\lambda)$ of the form \begin{equation}\label{eq:ansatz_limit} v(x) =x^T\,{ \Lambda\over 2}\, x+\rho x\,, \qquad\qquad m(x)= {\cal N}(\mu,\Sigma^{-1})\,, \qquad\qquad \lambda\in\mathbb{R}\,, \end{equation} for suitable symmetric matrices $\Lambda,\Sigma$, with $\Sigma$ positive definite, and suitable vectors $\mu,\rho$, if and only if $(A,\nu,R,\hat Q)$ satisfy the Riccati--Sylvester property in the sense of Definition~\ref{def:RSprop} and the matrix ${\cal B}^\infty:= \hat Q+\,{A^TRA\over 2}\,+\,{\hat B\over 2}$ is invertible. If in addition $\hat B\geq 0$, then the solution $(v,m,\lambda)$ of the form~\eqref{eq:ansatz_limit} is the unique solution of~\eqref{eq:mfpde} such that $v(0)=0$. Finally, assume we are given a sequence of $N$--players differential games of the form~\eqref{eq:sde}--\eqref{eq:cost_ergodic} which satisfy {\bf (H1)} and {\bf (H2)} and admit solutions of the form~\eqref{eq:ansatz_nearly_id} for all $N$. Then, if the limit system~\eqref{eq:mfpde} admits a unique solution of the form~\eqref{eq:ansatz_limit}, we have that the QG solutions $(v^i_N,m^i_N,\lambda_N^i)$ of the $N$--person game converge as $N\to+\infty$ to the QG solution $(v,m,\lambda)$ of~\eqref{eq:mfpde} in the following sense: for all $i=1,\ldots,N$, $v^i_N\to v$ in $C^1_{loc}(\mathbb{R}^d)$ with second derivative converging uniformly in $\mathbb{R}^d$, $m^i_N\to m$ in $C^k(\mathbb{R}^d)$ for all $k$, and $\lambda_N^i\to \lambda$ in $\mathbb{R}$.
\noindent For later use, we also remark that the coefficients $\Lambda,\Sigma,\rho,\mu$ in~\eqref{eq:ansatz_limit} are determined by solving the following algebraic relations \begin{equation}\label{eq:KFP_matrix_limit} \Sigma\,{\nu R\nu\over 2}\,\Sigma-\,{A^T RA\over 2}\,-\hat Q=0\,, \qquad {\cal B}^\infty \mu=P^\infty\,, \qquad \Lambda=R\big(\nu\Sigma+A\big)\,, \qquad \rho=-R\,\nu\,\Sigma\mu\,, \end{equation} with $P^\infty:=\hat Q H+\,{\hat B\over 2}\,\Delta$, and $\lambda=\hat F(\Sigma,\mu)+\mathrm{tr}(\nu R\nu\Sigma+\nu R A)-\mu^T\,{\Sigma \nu R\nu \Sigma \over 2}\, \mu$, with \begin{align}\label{eq:v1_limit} \hat F(\Sigma,\mu)&:= H^T\hat Q H-\left(H^T{\hat B\over 2}\,(\mu-\Delta)+(\mu-\Delta)^T {\hat B\over 2}\,H\right)\nonumber\\ &~~~+\mathrm{tr}(\hat C \Sigma^{-1}) +(\mu-\Delta)^T(\hat C+\hat D)(\mu-\Delta)\,. \end{align} \section{Discounted problems} \label{sec:discounted} In this section, we extend the analysis of~\cite{BardiPriuli} to the case of infinite horizon $N$--person games \begin{equation}\label{eq:sde2} dX^i_t=(AX^i_t-\alpha^i_t)dt+\sigma dW^i_t\,, \qquad X_0^i=x^i\in\mathbb{R}^d\,,\qquad i=1,\ldots,N\,, \end{equation} with discounted costs \begin{equation}\label{eq:cost_disc2} J^i(X,\alpha^1,\ldots,\alpha^N):=\mathbb{E}\left[\int_0^{+\infty}e^{-\ell t}\left(\,{r\,|\alpha_t^i|^2\over 2}\,+f^i(X^i_t\,;m^1,\ldots,m^N) \right)\,dt\right]\,, \end{equation} which satisfy {\bf (H1)}--{\bf (H3)}. In this case, the associated system of $2N$ HJB--KFP equations takes the form \begin{equation}\label{eq:hjkfp_disc} \left\{ \begin{array}{l} -k\Delta v^i+\displaystyle\,{1\over 2r}\,|\nabla v^i|^2-(\nabla v^i)^TAx+\ell v^i=f^i(x;m^1,\ldots,m^N)\\ -k\Delta m^i-\displaystyle\mathrm{div}\left(m^i \cdot\Big({\nabla v^i\over r}\,-A x\Big)\right)=0\\ \int_{\mathbb{R}^d}m^i(x)\,dx=1\,,\qquad m^i>0 \end{array} \right. \qquad\qquad i=1,\ldots, N\,. \end{equation} \begin{rem} Observe that if {\bf (H3)} holds, then $(A,\nu,R,Q)$ satisfy the Riccati--Sylvester property.
Indeed, equation~\eqref{eq:sylvester} reduces to $k\,r\,(Y-Y)=r(A-A^T)=0$, which is identically satisfied for all $Y\in\mathrm{Sym_d}$. \end{rem} \begin{thm}\label{thm:disc_Nplay} Assume {\bf (H1)}--{\bf (H3)}. Then, there exists $\bar\ell>0$ such that for $\ell<\bar\ell$ the system of HJB--KFP equations~\eqref{eq:hjkfp_disc} admits a unique solution $(v^i_\ell,m^i_\ell)$ satisfying $$ v^i_\ell(x)=x^T\,{\Lambda_\ell\over 2}\, x+\rho_\ell x+c_\ell^i\,,\qquad\qquad m^i_\ell(x)= {\cal N}(\mu_\ell,\Sigma_\ell^{-1})\,, $$ for suitable symmetric matrices $\Lambda_\ell,\Sigma_\ell$, with $\Sigma_\ell$ positive definite, vectors $\mu_\ell,\rho_\ell$ and numbers $c^1_\ell,\ldots,c^N_\ell\in\mathbb{R}$, if and only if the matrix ${\cal B}_\ell:= Q+r\,{A^2\over 2}\,-\ell\,r\,{A\over 2}\,+(N-1)~{B\over 2}$ is invertible. \\ Moreover, the affine feedbacks $\overline{\alpha}^i(x)=\,{\nabla v^i_\ell(x)\over r}$, for $x\in\mathbb{R}^d$ and $i=1,\ldots,N$, provide a symmetric Nash equilibrium strategy for~\eqref{eq:sde2}--\eqref{eq:cost_disc2}, for all initial positions $X\in\mathbb{R}^{Nd}$. \end{thm} \noindent The proof is quite technical and it is deferred to section~\ref{sec:tech_proof}. 
Here we mention that, similarly to the results in section~\ref{sec:LQG}, the coefficients $\Lambda_\ell,\Sigma_\ell,\mu_\ell,\rho_\ell,c^i_\ell$ are characterized by \begin{equation}\label{eq:KFP_matrix_discount} \Lambda_\ell=r\big(k\Sigma_\ell+A\big)\,,\qquad\qquad \rho_\ell=-r\,k\,\Sigma_\ell\mu_\ell\,, \end{equation} \begin{align}\label{eq:v0_disc} \ell c^i_\ell&=F^i(\Sigma_\ell,\mu_\ell)+kr\,\mathrm{tr}(k\Sigma_\ell+A)-\,{k^2r\over 2}\,(\mu_\ell)^T\,\Sigma_\ell^2\, \mu_\ell \end{align} where $F^i$ is the function defined by~\eqref{eq:v1}, and $\Sigma_\ell$ and $\mu_\ell$ solve respectively \begin{equation}\label{eq:discount1} \Sigma_\ell{\cal R}\Sigma_\ell+{\cal A}_{\ell}^T\Sigma_\ell+\Sigma_\ell{\cal A}_{\ell}-{\cal Q}_\ell=0\,, \qquad\qquad {\cal B}_\ell \mu_\ell=QH+(N-1)\,{B\over 2}\,\Delta\,, \end{equation} with ${\cal R}:=\,{k^2 r\over 2}\,\mathrm{I}_d$, ${\cal A}_\ell:=\ell\,{k r\over 4}\,\mathrm{I}_d$ and ${\cal Q}_\ell:=Q+r\,{A^2\over 2}\,-\ell\,r\,{A\over 2}$. \begin{rem}\label{rem:disc_are} Observe that the conclusion of Theorem~\ref{thm:disc_Nplay} fails when $\ell$ is not small enough. Indeed, the ARE in~\eqref{eq:discount1} may fail to have solutions in $\mathrm{Sym_d^+}$, which in turn would give no Gaussian solution $m^i$ for the KFP equation in~\eqref{eq:hjkfp_disc}. To see this, recall that Proposition~\ref{prop:ARE}{\it (iii)} ensures the existence of a unique solution $Y_\ell\in\mathrm{Sym_d}$ such that ${\cal A}_\ell+{\cal R}Y_\ell = \,{k^2 r\over 2}\,\left({\ell\over 2 k}\,\mathrm{I}_d+Y_\ell\right)$ has eigenvalues which coincide with the ones with positive real part of \begin{equation}\label{eq:discount_are_matrix} {\cal H}_\ell:=\left( \begin{array}{cc} {\cal A}_\ell & {\cal R}\\ {\cal Q}_\ell & -{\cal A}_\ell^T \end{array} \right)\,. 
\end{equation} Since $\lambda\in\mathbb{R}$ is an eigenvalue of ${\cal A}_\ell+{\cal R}Y_\ell$ if and only if $\left({2\over k^2 r}\lambda-\,{\ell\over 2 k}\right)$ is an eigenvalue of $Y_\ell$, it is clear that ${\cal A}_\ell+{\cal R}Y_\ell$ has real eigenvalues and that, fixing $\ell>0$ such that $kr>(4\hat\lambda/\ell)$ for some eigenvalue $\hat\lambda> 0$ of ${\cal A}_\ell+{\cal R}Y_\ell$, the matrix $Y_\ell$ has a negative eigenvalue and does not belong to $\mathrm{Sym_d^+}$. No other positive definite solution can exist, because any such $Z_\ell$ would give positive spectrum to ${\cal A}_\ell+{\cal R}Z_\ell$, violating the uniqueness of $Y_\ell$. This means that the game~\eqref{eq:sde2}--\eqref{eq:cost_disc2} corresponding to this value of $\ell$ admits no Quadratic--Gaussian solutions to~\eqref{eq:hjkfp_disc}. \end{rem} To study the convergence of Nash equilibria as $N\to+\infty$, assume again that the coefficients $A,\sigma,R,H$ and $\Delta$ are all independent of the number of players $N$. Also, assume that the discount factor $\ell$ does not depend on $N$ and that~\eqref{eq:scale} holds for the cost coefficients $Q^N,B^N,C_i^N,D_i^N$. By denoting with $(v^i_N,m^i_N)$ the solutions found in Theorem~\ref{thm:disc_Nplay}, we expect that they converge, as for games with ergodic costs~\cite{Bardi,BardiPriuli,LL1,LL3}, to solutions of the system of two mean field equations \begin{equation}\label{eq:mfpde_disc} \left\{ \begin{array}{l} -k\Delta v+\displaystyle\,{1\over 2r}\,|\nabla v|^2-\nabla v^TAx+\ell v=\hat V[m](x)\\ -k\Delta m-\displaystyle\mathrm{div}\left(m \cdot\Big({\nabla v\over r}\,-A x\Big)\right)=0\\ \int_{\mathbb{R}^d}m(x)\,dx=1\,,\qquad m>0 \end{array} \right. \end{equation} Along the lines of Theorem~3 in~\cite{BardiPriuli} (see also section~\ref{sec:LQG}), our main result for this system is the following, whose proof is given in section~\ref{sec:tech_proof}.
\begin{thm}\label{thm:disc_LIMplay} Assume that $r,k>0$ in~\eqref{eq:mfpde_disc} and that the matrix $\hat Q$ in~\eqref{eq:scale} satisfies $\hat Q\in\mathrm{Sym_d^+}$. Then, the following facts hold. \begin{description} \item{\it (a)}~{\bf [Solutions to MFPDE]} There exists $\hat\ell>0$ such that for $\ell<\hat\ell$ the system~\eqref{eq:mfpde_disc} admits a unique solution $(v,m)$ satisfying \begin{equation}\label{eq:disc_ansatz_limit} v(x)=x^T\,{\Lambda\over 2}\, x+\rho x+c\,,\qquad\qquad m(x)= {\cal N}(\mu,\Sigma^{-1})\,, \end{equation} for suitable symmetric matrices $\Lambda,\Sigma$, with $\Sigma$ positive definite, vectors $\mu,\rho$ and $c\in\mathbb{R}$ if and only if the matrix ${\cal B}_\ell^\infty:= \hat Q+r\,{A^2\over 2}\,-\ell\,r\,{A\over 2}\,+\,{\hat B\over 2}$ is invertible. \item{\it (b)}~{\bf [Uniqueness]} If in addition $\hat B\geq 0$, the solution $(v,m)$ of the form~\eqref{eq:disc_ansatz_limit} is the unique solution of~\eqref{eq:mfpde_disc} such that $v(0)=c$. \item{\it (c)}~{\bf [Convergence as $N\to\infty$]} Assume $\ell<\hat\ell$, where $\hat\ell>0$ is the value found in {\it (a)}. For all $N\in\mathbb{N}$ consider $N$--players differential games of the form~\eqref{eq:sde2}--\eqref{eq:cost_disc2} such that {\bf (H1)}--{\bf (H3)} hold. Assume that~\eqref{eq:scale} is verified as $N\to+\infty$, and that the Mean-Field system~\eqref{eq:mfpde_disc} admits a unique Quadratic--Gaussian solution. Then, the solutions $(v^i_N,m^i_N)$ found in Theorem~\ref{thm:disc_Nplay} converge to a solution $(v,m)$ of~\eqref{eq:mfpde_disc} as $N\to+\infty$ in the following sense: for all $i=1,\ldots,N$, $v^i_N\to v$ in $C^1_{loc}(\mathbb{R}^d)$ with second derivative converging uniformly in $\mathbb{R}^d$ and $m^i_N\to m$ in $C^k(\mathbb{R}^d)$ for all $k$.\\ Moreover such a solution is the unique one given in {\it (a)}, with $(v,m)$ of the form~\eqref{eq:disc_ansatz_limit}.
\end{description} \end{thm} \section{Singular limits}\label{sec:singular} We collect in this section some results on singular limit processes for the LQG $N$--person games and mean field games. We start with the result on the vanishing discount limit, which shows the relation between the solutions found in Theorems~\ref{thm:disc_Nplay} and~\ref{thm:disc_LIMplay}, and their limits as the discount factor $\ell$ tends to $0$. We prove that the limit procedures as $\ell\to 0^+$ and as $N\to+\infty$ commute and that both tend to the solution of the mean field equation for the problem with ergodic cost described in section~\ref{sec:LQG}. \begin{thm}\label{thm:vd} For $N\in\mathbb{N}$, consider $N$--players games of the form~\eqref{eq:sde2}--\eqref{eq:cost_disc2} such that {\bf (H1)}--{\bf (H3)} hold. Assume that~\eqref{eq:scale} holds as $N\to+\infty$ and that ${\cal B}^\infty= \hat Q+\,{A^TRA\over 2}\,+\,{\hat B\over 2}$ is invertible. \noindent Then, the vanishing discount limit as $\ell\to 0^+$ and the mean field limit as $N\to+\infty$ commute. Namely, denoting with $(v^i_{\ell,N}, m^i_{\ell,N})$ the solutions to the $N$--players game with discount factor $\ell>0$, there hold \begin{equation}\label{eq:commute_disc1} \lim_{\ell\to0^+}\lim_{N\to+\infty}\Big[v^i_{\ell,N}-v^i_{\ell,N}(0)\Big] = \lim_{N\to+\infty}\lim_{\ell\to0^+}\Big[v^i_{\ell,N}-v^i_{\ell,N}(0)\Big] = v \end{equation} in $C^1_{loc}(\mathbb{R}^d)$ with second derivative converging uniformly in $\mathbb{R}^d$, \begin{equation}\label{eq:commute_disc2} \lim_{\ell\to0^+}\lim_{N\to+\infty}m^i_{\ell,N} = \lim_{N\to+\infty}\lim_{\ell\to0^+}m^i_{\ell,N} = m \qquad\qquad \mbox{in $C^k(\mathbb{R}^d)$ for all $k$,} \end{equation} \begin{equation}\label{eq:commute_disc3} \lim_{\ell\to0^+}\lim_{N\to+\infty}\ell v^i_{\ell,N} = \lim_{N\to+\infty}\lim_{\ell\to0^+}\ell v^i_{\ell,N} = \lambda \qquad\qquad \mbox{uniformly in $\mathbb{R}^d$,} \end{equation} where $(v,m,\lambda)$ is the QG solution to~\eqref{eq:mfpde}.
\end{thm} \begin{rem} As a byproduct of the proof, we have also shown that as $\ell\to0^+$ the solutions of the HJB--KFP system for $N$--players games with discounted cost~\eqref{eq:sde2}--\eqref{eq:cost_disc2} converge to the solutions of the corresponding system for $N$--players games with ergodic cost~\eqref{eq:sde}--\eqref{eq:cost_ergodic}. The same holds for solutions of the Mean Field systems of PDE. \end{rem} Next we consider the deterministic limit as $k\to 0^+$ (and hence as the noise matrix $\sigma\to 0$) and we prove that such a limit and the limit to the Mean Field PDE as $N\to+\infty$ do commute. \begin{thm}\label{thm:vv} For $N\in\mathbb{N}$, consider $N$--players games of the form~\eqref{eq:sde}--\eqref{eq:cost_ergodic} such that {\bf (H1)}--{\bf (H3)} hold. Assume that~\eqref{eq:scale} holds as $N\to+\infty$ and that ${\cal B}^\infty= \hat Q+\,{A^TRA\over 2}\,+\,{\hat B\over 2}$ is invertible. \noindent Then, the deterministic limit as $k\to 0^+$ and the mean field limit as $N\to+\infty$ commute. Namely, denoting with $(v^i_{k,N}, m^i_{k,N},\lambda^i_{k,N})$ the solutions to the $N$--players game with viscosity $\nu=k\,\mathrm{I}_d$, there hold \begin{equation*} \lim_{k\to0^+}\lim_{N\to+\infty}v^i_{k,N}=\lim_{N\to+\infty}\lim_{k\to0^+}v^i_{k,N} = v \end{equation*} in $C^1_{loc}(\mathbb{R}^d)$ with second derivative converging uniformly in $\mathbb{R}^d$, \begin{equation*} ~~\lim_{k\to0^+}\lim_{N\to+\infty}m^i_{k,N}=\lim_{N\to+\infty}\lim_{k\to0^+}m^i_{k,N} = m \qquad\qquad \mbox{in distributional sense,} \end{equation*} \begin{equation*}
\lim_{k\to0^+}\lim_{N\to+\infty} \lambda^i_{k,N}=\lim_{N\to+\infty}\lim_{k\to0^+} \lambda^i_{k,N} = \lambda \qquad\qquad ~~\mbox{ in $\mathbb{R}$,} \end{equation*} where $(v,m,\lambda)$ are given by \begin{equation}\label{eq:vv_limit} v(x)=\sqrt{r} \hat V(x-\hat \mu)+rAx\,, \qquad\qquad m=\delta_{\hat \mu}\,, \qquad\qquad \lambda = \hat F(0,\hat \mu)-\hat \mu^T\,{\hat V^2\over 2}\, \hat \mu\,, \end{equation} for $\hat V:=\sqrt{2\hat Q+r A^2}$, $\hat\mu:=(\hat V^2+\hat B)^{-1}\left(2\hat QH+\hat B\,\Delta\right)$ and $\hat F$ defined as in~\eqref{eq:v1_limit}. \end{thm} Finally, we study the limit when the cost for the control $r\to 0^+$, and thus large control can be chosen at cheap cost. Even if equations in~\eqref{eq:hjkfp} become singular when $r$ tends to zero, we can still use the formulas we found in the previous section to study the limit behavior. \begin{thm}\label{thm:cc} For $N\in\mathbb{N}$, consider $N$--players games of the form~\eqref{eq:sde}--\eqref{eq:cost_ergodic} such that {\bf (H1)}--{\bf (H3)} hold. Assume that~\eqref{eq:scale} holds as $N\to+\infty$ and that $\overline{\cal B}^\infty:=\hat Q+\,{\hat B\over 2}$ is invertible. \noindent Then, the cheap control limit as $r\to 0^+$ and the mean field limit as $N\to+\infty$ commute. Namely, denoting with $(v^i_{r,N}, m^i_{r,N},\lambda^i_{r,N})$ the solutions to the $N$--players game with control cost $R=r\,\mathrm{I}_d$, there hold \begin{equation*} \lim_{r\to0^+}\lim_{N\to+\infty}v^i_{r,N}=\lim_{N\to+\infty}\lim_{r\to0^+}v^i_{r,N} = v \end{equation*} in $C^1_{loc}(\mathbb{R}^d)$ with second derivative converging uniformly in $\mathbb{R}^d$, \begin{equation*} ~~\lim_{r\to0^+}\lim_{N\to+\infty}m^i_{r,N}=\lim_{N\to+\infty}\lim_{r\to0^+}m^i_{r,N} = m \qquad\qquad \mbox{in distributional sense,} \end{equation*} \begin{equation*} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\lim_{r\to0^+}\lim_{N\to+\infty} \lambda^i_{r,N}=\lim_{N\to+\infty}\lim_{r\to0^+} \lambda^i_{r,N} = \lambda \qquad\qquad ~~\mbox{ in $\mathbb{R}$,} \end{equation*} where $(v,m,\lambda)$ are given by \begin{equation}\label{eq:cc_limit} v(x)\equiv 0\,, \qquad\qquad m=\delta_{\hat \mu}\,, \qquad\qquad \lambda = \hat F(0,\hat \mu)-\hat \mu^T\,{\hat V^2\over 2}\, \hat \mu\,, \end{equation} for $\hat V:=\sqrt{2\hat Q}$, $\hat\mu:=(\hat V^2+\hat B)^{-1}\left(2\hat QH+\hat B\,\Delta\right)$ and $\hat F$ defined as in~\eqref{eq:v1_limit}. \end{thm} \section{Technical proofs}\label{sec:tech_proof} \noindent{\bf Proof of Theorem~\ref{thm:disc_Nplay}.} {\it Step 1.} By simply inserting the expressions of $v^i$ and $m^i$ into~\eqref{eq:hjkfp_disc}, one can transform the system of $2N$ equations into a system of equalities between quadratic forms to be satisfied for all $x\in\mathbb{R}^d$. Thus, by equating the coefficients of these quadratic forms, \eqref{eq:hjkfp_disc} reduces to the algebraic relations~\eqref{eq:KFP_matrix_discount}--\eqref{eq:discount1} among the coefficients of $v^i$ and $m^i$. \noindent It is now clear that if we show that there exists a unique solution in $\mathrm{Sym_d^+}$ to the ARE in~\eqref{eq:discount1} for small $\ell$, then the existence and uniqueness part of the theorem will be proved. Indeed, the invertibility of ${\cal B}_\ell$ is equivalent to the existence and uniqueness of solutions for the linear system~\eqref{eq:discount1}, and once $\Sigma_\ell$ and $\mu_\ell$ are uniquely determined, conditions~\eqref{eq:KFP_matrix_discount} and~\eqref{eq:v0_disc} also give unique choices for $\Lambda_\ell,\rho_\ell, c^i_\ell$. \noindent We therefore focus our attention on the ARE in~\eqref{eq:discount1}. By Proposition~\ref{prop:ARE}, solutions to~\eqref{eq:discount1} can be found as the $d$--dimensional invariant graph subspaces of the $2d\times 2d$ matrix ${\cal H}_\ell$ introduced in~\eqref{eq:discount_are_matrix}.
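Before taking the limit, it is worth checking numerically the spectral dichotomy used in the next paragraph, namely that a block matrix of the form ${\cal H}=\big(\begin{smallmatrix} {\bf 0} & {\cal R}\\ {\cal Q} & {\bf 0}\end{smallmatrix}\big)$ with ${\cal R},{\cal Q}$ positive definite has $d$ strictly positive and $d$ strictly negative real eigenvalues, equal to $\pm\sqrt{\mathrm{spec}({\cal RQ})}$. A rough sketch in Python (with random positive definite matrices standing in for ${\cal R}$ and ${\cal Q}$; not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

def random_spd(d):
    # generic symmetric positive definite matrix
    M = rng.standard_normal((d, d))
    return M @ M.T + d * np.eye(d)

R_cal = random_spd(d)  # stands in for the (positive definite) matrix R
Q_cal = random_spd(d)  # stands in for the (positive definite) matrix Q

# block Hamiltonian-type matrix H = [[0, R], [Q, 0]]
H = np.block([[np.zeros((d, d)), R_cal],
              [Q_cal, np.zeros((d, d))]])

eig_H = np.linalg.eigvals(H)
# RQ is similar to the SPD matrix R^{1/2} Q R^{1/2}, so spec(RQ) > 0
eig_RQ = np.sort(np.linalg.eigvals(R_cal @ Q_cal).real)

pos = np.sort(eig_H[eig_H.real > 0].real)
neg = np.sort(-eig_H[eig_H.real < 0].real)

# d positive and d negative eigenvalues, equal to +/- sqrt(spec(RQ))
assert len(pos) == d and len(neg) == d
assert np.allclose(pos, np.sqrt(eig_RQ)) and np.allclose(neg, np.sqrt(eig_RQ))
```

The check follows from the observation that ${\cal H}(x,y)^T=\lambda(x,y)^T$ forces ${\cal RQ}x=\lambda^2 x$, exactly the equivalence invoked below.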
Noticing that we have ${\cal A}_\ell\to 0$ and ${\cal Q}_\ell\to {\cal Q}=Q+r\,{A^2\over 2}$, as $\ell\to 0^+$, it is immediate to see that $$ {\cal H}_\ell \qquad\longrightarrow\qquad {\cal H}:=\left( \begin{array}{cc} {\bf 0} & {\cal R}\\ {\cal Q} & {\bf 0} \end{array} \right)\, $$ and that ${\cal H}$ has $d$ strictly positive and $d$ strictly negative eigenvalues. This latter property follows from the fact that $\lambda\in\mathrm{spec}({\cal H})$ if and only if $\lambda^2\in\mathrm{spec}({\cal RQ})$ and that Proposition~\ref{prop:matrices} implies $\mathrm{spec}({\cal RQ})\subset(0,+\infty)$ because both ${\cal R}$ and ${\cal Q}$ are positive definite. Therefore, all (possibly complex) eigenvalues of ${\cal H}_\ell$ will converge to some eigenvalue of ${\cal H}$, and there exists $\bar\ell>0$ small enough so that ${\cal H}_\ell$ has no non--zero purely imaginary eigenvalues when $\ell<\bar\ell$. Proposition~\ref{prop:ARE}{\it (ii)} allows us to conclude that the ARE~\eqref{eq:discount1} admits symmetric solutions for $\ell<\bar\ell$. \noindent Owing to Proposition~\ref{prop:ARE}{\it (iii)}, we also deduce that~\eqref{eq:discount1} has a unique symmetric solution $Y_\ell$ such that the eigenvalues of ${\cal A}_\ell+{\cal R}Y_\ell$ with non--zero real part are exactly the eigenvalues of ${\cal H}_\ell$ with positive real part. But $$ {\cal A}_\ell+{\cal R}Y_\ell={k^2 r\over 2}\,\left(\,{\ell\over 2 k}\,\mathrm{I}_d+Y_\ell\right)\, $$ is symmetric, so its eigenvalues are real and so are the ones of ${\cal H}_\ell$. Using~\eqref{eq:spec_ARE}, we obtain $$ \mathrm{spec}({\cal A}_\ell+{\cal R}Y_\ell) = \mathrm{spec}({\cal H}_\ell)\cap (0,+\infty)\,. $$ By setting $\delta:=\min\,\big\{\mathrm{spec}({\cal H})\cap (0,+\infty)\big\}>0$ and possibly reducing $\bar\ell$, we have for $\ell<\bar\ell$ $$ \min\,\big\{\mathrm{spec}({\cal H}_\ell)\cap (0,+\infty)\big\}~>\,{\delta\over 2}\,, \qquad\qquad \ell \,k\,r<\delta\,.
$$ Hence, \begin{align*} \min\,\mathrm{spec}(Y_\ell)~&=~{2\over k^2 r}\,\min\,\mathrm{spec}({\cal A}_\ell+{\cal R}Y_\ell)~-\,{\ell\over 2 k}\, >\,{\delta\over 2k^2 r}\,>0\,, \end{align*} which implies $Y_\ell\in\mathrm{Sym_d^+}$ for $\ell<\bar\ell$. We claim that such a solution is also unique. Indeed, if any solution $Z_\ell\in\mathrm{Sym_d^+}$ exists with $Z_\ell\neq Y_\ell$, then $\mathrm{spec}({\cal A}_\ell+{\cal R}Z_\ell)=\,{k^2 r\over 2}\,\big(\mathrm{spec}(Z_\ell)+\,{\ell\over2 k}\big)\subseteq (0,+\infty)$ and this contradicts the characterization of $Y_\ell$ via~\eqref{eq:spec_ARE}. \smallskip \noindent{\it Step 2.} It remains to verify that the affine feedback strategies $\overline{\alpha}^i(x)=R^{-1}\,\nabla v^i(x)$ give a Nash equilibrium for the game~\eqref{eq:sde2}--\eqref{eq:cost_disc2}. Indeed, by applying Dynkin's formula, \begin{align*} \mathbb{E}&\Big[e^{-\ell T}v^i(X^i_T)-v^i(X^i_0)\Big]= \mathbb{E} \Bigg[\int_0^T\!\!e^{-\ell s}\Big(\!-\ell v^i+\mathrm{tr}(\nu\mathrm{D}^2 v^i)+(\nabla v^i)^T\!A\,x-(\nabla v^i)^T\alpha^i_s\Big)(X^i_s)ds \Bigg]\\ &\geq\mathbb{E}\Bigg[\int_0^T\!\!e^{-\ell s}\!\left(\!\!\Big(\!-\ell v^i+\mathrm{tr}(\nu\mathrm{D}^2 v^i)+(\nabla v^i)^T\!A\,x- {(\nabla v^i)^T R^{-1}\nabla v^i\over 2}\,\Big)(X^i_s)- \,{(\alpha^i_s)^TR\alpha^i_s\over 2}\,\right)\!ds\Bigg]\\ &=-\mathbb{E} \Bigg[\int_0^T\!\!e^{-\ell s}\Big(f^i (X^i_s)+(\alpha^i_s)^T \,{R\over 2}\,\alpha^i_s\Big)ds \Bigg] \end{align*} with equality holding if $\alpha^i=\overline{\alpha}^i$.
Since $\mathbb{E}\big[e^{-\ell T}v^i(X^i_T)\big]\to0$ as $T\to+\infty$, because the value function is quadratic and the strategies are admissible, we get \begin{align}\label{eq:fin_est} v^i(X^i_0)&\leq \lim_{T\to+\infty}\mathbb{E}\left[\int_0^Te^{-\ell s}\Big(f^i (X^i_s)+(\alpha^i_s)^T \,{R\over 2}\,\alpha^i_s\Big)\,ds\right]\nonumber\\ &=~\mathbb{E}\left[\int_0^\infty e^{-\ell s}\Big(f^i (X^i_s)+(\alpha^i_s)^T \,{R\over 2}\,\alpha^i_s\Big)\,ds\right] \end{align} where we have used the Lebesgue dominated convergence theorem in the last equality. Noticing that equality holds only for $\alpha^i=\overline{\alpha}^i$, we can conclude that the cost corresponding to any unilateral change of strategy $\alpha^i$ (the r.h.s. of~\eqref{eq:fin_est}) is larger than the cost corresponding to $\overline{\alpha}^i$, and we have proved that $(\overline{\alpha}^1,\ldots,\overline{\alpha}^N)$ is a Nash equilibrium strategy.~~$\diamond$ \medskip \noindent{\bf Proof of Theorem~\ref{thm:disc_LIMplay}}. {\it Step 1.} Proceeding as in the proof of Theorem~\ref{thm:disc_Nplay}, by imposing the expressions~\eqref{eq:disc_ansatz_limit} in~\eqref{eq:mfpde_disc}, we find that the coefficients $\Lambda,\Sigma,\rho,\mu$ satisfy the conditions~\eqref{eq:KFP_matrix_discount}--\eqref{eq:discount1} with $Q$ replaced by $\hat Q$ and with ${\cal B}_\ell$ replaced by ${\cal B}^\infty_\ell$. We can therefore repeat the arguments of the previous proof to show part {\it (a)}. In particular, we can assume that $\hat \ell>0$ is small enough to ensure that for $\ell<\hat\ell$ there hold $\mathrm{spec}({\cal H}_\ell)\subset\mathbb{R}$ and, setting $\varepsilon^2:= \min\,\mathrm{spec}\left(\hat Q + r\,{A^2\over 2}\,\right) >0$, \begin{equation}\label{eq:helper_disc} \min\,\mathrm{spec}\left(\hat Q + r\,{A^2\over 2}\,-\,{\ell r\over2}\,A\right)>\,{\varepsilon^2\over 2}\,, \qquad\qquad \ell\, k\, r< 2\varepsilon\,.
\end{equation} \smallskip \noindent{\it Step 2.} Proceeding as in Theorem~4 in~\cite{BardiPriuli}, it is easy to prove that $\hat B\geq 0$ is equivalent to the monotonicity of the operator $\hat V[m]$. Hence, we can repeat the arguments from~\cite{LL1,LL3} to show the uniqueness property {\it (b)}. \smallskip \noindent{\it Step 3.} As a preliminary step towards {\it (c)}, observe that $Q^N\to\hat Q$ as $N\to+\infty$ implies $$ \min\,\mathrm{spec}\left(Q^N + r\,{A^2\over 2}\,-\,{\ell r\over2}\,A\right)>\,{1\over 2}\,\min\,\mathrm{spec}\left(\hat Q + r\,{A^2\over 2}\,-\,{\ell r\over2}\,A\right)> \,{\varepsilon^2\over 4}\,>0\,, $$ for $N$ large enough, where $\varepsilon$ is the value introduced in step~1 and we have used~\eqref{eq:helper_disc} thanks to $\ell<\hat\ell$. With the notations $Q^N_\ell:= Q^N + r\,{A^2\over 2}\,-\,{\ell r\over2}\,A$ and ${\cal H}^N_\ell:=\left( \begin{array}{cc} {\ell r k\over 4}\,\mathrm{I}_d & \,{r k^2\over 2}\,\mathrm{I}_d \\ Q^N_\ell & -\,{\ell r k\over 4}\,\mathrm{I}_d \end{array} \right) $, we conclude that the matrix $Y^N_\ell\in\mathrm{Sym_d}$, solving the ARE corresponding to ${\cal H}^N_\ell$, satisfies \begin{align*} \min\,\mathrm{spec}(Y^N_\ell)&=\,{2\over k^2 r}~\min\,\{\mathrm{spec}({\cal H}^N_\ell)\cap(0,+\infty)\}-\,{\ell\over 2k}\,\\ &=\,{2\over k^2 r}~\sqrt{\min\,\mathrm{spec} \left({r k^2\over 2}\, Q^N_\ell\right)+\,{\ell^2 k^2 r^2\over 16}}-\,{\ell\over 2k}>\sqrt{2\over k^2 r}~\left(\,{\varepsilon\over 2}\,-\,{\ell k r\over 4}\right)>0\,, \end{align*} where the second equality follows from the explicit expression of ${\cal H}^N_\ell$, and the final bound from~\eqref{eq:helper_disc}. In particular, $Y^N_\ell$ is positive definite. Observing that the invertibility of ${\cal B}_\ell^\infty$ also implies the invertibility of ${\cal B}_\ell$ for $N$ large enough, we conclude that~\eqref{eq:hjkfp_disc} admits a unique QG solution $(v^i_{\ell,N},m^i_{\ell,N})$ for large $N$.
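Under {\bf (H3)} the second--order coefficient of this ARE is scalar, so $Y^N_\ell$ can also be obtained by completing the square, $\big(Y+{\ell\over 2k}\,\mathrm{I}_d\big)^2={2\over rk^2}\,Q^N_\ell+{\ell^2\over 4k^2}\,\mathrm{I}_d$. The following sketch (hypothetical Python with random coefficients; the rearrangement is ours and only illustrates the eigenvalue bound of step~3, it is not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, r, ell = 3, 0.7, 0.5, 0.05

def spd_sqrt(M):
    # principal square root of a symmetric positive definite matrix
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(w)) @ U.T

G = rng.standard_normal((d, d))
Q = G @ G.T + d * np.eye(d)                         # generic Q^N in Sym_d^+
A = rng.standard_normal((d, d)); A = (A + A.T) / 2  # symmetric A, as in (H3)
Q_ell = Q + r * A @ A / 2 - ell * r * A / 2         # the matrix Q^N_ell

# completed square: (Y + ell/(2k) I)^2 = (2/(r k^2)) Q_ell + ell^2/(4k^2) I
I = np.eye(d)
Y = spd_sqrt(2 / (r * k**2) * Q_ell + ell**2 / (4 * k**2) * I) - ell / (2 * k) * I

# Y solves the ARE  (r k^2/2) Y^2 + (ell r k/2) Y - Q_ell = 0 ...
residual = r * k**2 / 2 * Y @ Y + ell * r * k / 2 * Y - Q_ell
assert np.allclose(residual, 0)

# ... and its smallest eigenvalue matches the expression used in step 3
q = np.linalg.eigvalsh(Q_ell).min()
bound = 2 / (k**2 * r) * np.sqrt(r * k**2 / 2 * q + ell**2 * k**2 * r**2 / 16) \
    - ell / (2 * k)
assert np.isclose(np.linalg.eigvalsh(Y).min(), bound)
assert np.linalg.eigvalsh(Y).min() > 0
```

For small $\ell$ the subtracted shift ${\ell\over 2k}$ is dominated by the square root term, which is how positive definiteness survives in the limit.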
\smallskip \noindent{\it Step 4.} To pass to the limit as $N\to+\infty$, and complete the proof of part {\it (c)}, let us concentrate first on the sequence of the AREs in~\eqref{eq:discount1} as $N$ varies in $\mathbb{N}$. We can observe that ${\cal H}^N_\ell\to {\cal H}_\ell$ in~\eqref{eq:discount_are_matrix}, and that the eigenvalues of ${\cal H}_\ell$ are real by our choice of $\ell$. Thus, the sequence of matrices $\Sigma^N_\ell$ solving~\eqref{eq:discount1} is bounded w.r.t. the norm $\|\cdot\|$ of the largest eigenvalue, defined in~\eqref{eq:max_spec_norm}, because $$ \left\|\Sigma^N_\ell\right\|\,\leq\,{2\over k^2 r}\,\max\,\big\{\mathrm{spec}({\cal H}^N_\ell)\cap (0,+\infty)\big\}\, + \,{\ell\over 2k} \leq\,{2\over k^2 r}\,\max\,\big\{\mathrm{spec}({\cal H})\cap (0,+\infty)\big\} +1 + \,{\ell\over 2k}\,, $$ when $N$ is large enough. It follows that $\Sigma^N_\ell$ has a converging subsequence $\Sigma^{N_m}_\ell$ whose limit $\overline{\Sigma}_\ell\in\mathrm{Sym_d}$ solves $$ \,{r k^2\over 2}\,X^2+{\ell r k\over 2}\,X-\hat Q-r\,{A^2\over 2}\,+\ell\,r\,{A\over 2}\,=0\,, $$ which is analogous to~\eqref{eq:discount1}, except for having $\hat Q$ in place of $Q$. If we could prove that $\overline{\Sigma}_\ell\in\mathrm{Sym_d^+}$, then we would have, by uniqueness in $\mathrm{Sym_d^+}$ of solutions to this limit ARE (which follows from {\bf (H1)} and Proposition~\ref{prop:ARE}{\it (iii)}), that $\overline{\Sigma}_\ell$ coincides with the matrix $\Sigma$ found in part {\it (a)} for the measure in~\eqref{eq:disc_ansatz_limit}.
This additional property on $\overline{\Sigma}_\ell$ follows again from the continuity of the eigenvalues: we have seen in step~3 that for $N_m$ large enough we had $$ \min\,\mathrm{spec}\left(\Sigma^{N_m}_\ell\right)\,>\sqrt{2\over k^2 r}~\left(\,{\varepsilon\over 2}\,-\,{\ell k r\over 4}\right)>0\,, $$ and this implies, as $N_m\to+\infty$, $\min\,\mathrm{spec}\big(\overline{\Sigma}_\ell\big)>0$, so that $\overline{\Sigma}_\ell\in\mathrm{Sym_d^+}$ and $\overline{\Sigma}_\ell=\Sigma$. \noindent Now, we can pass to the limit $N\to +\infty$ also in the equation~\eqref{eq:discount1} for the average vector $\mu^N_\ell$: since ${\cal B}_\ell^\infty$ is invertible, we must have ${\cal B}_\ell$ invertible as well for $N$ large enough, so that \begin{equation} \mu^N_\ell={\cal B}_\ell^{-1}\Big(QH+(N-1)\,{B\over 2}\,\Delta\Big) \qquad\longrightarrow\qquad \mu=({\cal B}_\ell^\infty)^{-1}\Big(\hat QH+\,{\hat B\over 2}\,\Delta\Big)\,, \end{equation} i.e., $\mu^N_\ell$ converges to the average vector $\mu$ found in part {\it (a)}. We conclude by observing that the previous convergence results for $\Sigma^N_\ell$ and $\mu^N_\ell$ allow us to pass to the limit in~\eqref{eq:KFP_matrix_discount} and~\eqref{eq:v0_disc} as well, so as to obtain the convergence of the value function.~~$\diamond$ \medskip \noindent{\bf Proof of Theorem~\ref{thm:vd}.} {\it Step 1.} We start by proving that, when passing to the limit as $\ell\to 0^+$ in the discounted $N$--person game, the QG solution given in Theorem~\ref{thm:disc_Nplay} converges to the QG solution of the $N$--person game~\eqref{eq:sde}--\eqref{eq:cost_ergodic} given in section~\ref{sec:LQG}. \noindent Let $\bar N\in\mathbb{N}$ be fixed large enough so that the matrix ${\cal B}^N= Q+\,{A^TRA\over 2}\,+(N-1)~{B\over 2}$ is invertible for $N\geq \bar N$ (compared to section~\ref{sec:LQG}, we added a superscript $N$ in the notation to stress its dependence on the number of players).
For any $N\geq \bar N$, let us consider the discounted $N$--players game~\eqref{eq:sde2}--\eqref{eq:cost_disc2} and let $\bar\ell>0$ be the value found in Theorem~\ref{thm:disc_Nplay}. Since ${\cal B}^N_\ell:= Q+r\,{A^2\over 2}\,-\ell\,r\,{A\over 2}\,+(N-1)~{B\over 2}$ converges to ${\cal B}^N$ as $\ell\to 0^+$, it is not restrictive to assume that $\bar\ell$ is small enough to have ${\cal B}^N_\ell$ invertible for $\ell<\bar\ell$. \noindent First, we focus our attention on the ARE in~\eqref{eq:discount1} and we fix any sequence $\ell_n\to 0^+$ with $\ell_n<\bar\ell$. By proceeding as in the proof of Theorem~\ref{thm:disc_LIMplay} above, we obtain that the sequence $\Sigma_{\ell_n}$ of solutions of~\eqref{eq:discount1} in $\mathrm{Sym_d^+}$ is bounded, and that any convergent subsequence has a limit belonging to $\mathrm{Sym_d^+}$ and solving the ARE in~\eqref{eq:KFP_matrix}. Therefore, by uniqueness, we conclude that $\Sigma_{\ell_n}$ converges to the solution $\Sigma$ of~\eqref{eq:KFP_matrix} found in Theorem~2 of~\cite{BardiPriuli}. \noindent By passing to the limit $\ell\to 0^+$ also in the equation for the average vector $\mu_\ell$ in~\eqref{eq:discount1}, we obtain \begin{equation} \mu_\ell=({\cal B}_\ell^N)^{-1}\Big(QH+(N-1)\,{B\over 2}\,\Delta\Big) \qquad\longrightarrow\qquad \mu=({\cal B}^N)^{-1}\Big(QH+(N-1)\,{B\over 2}\,\Delta\Big)\,, \end{equation} i.e., $\mu_\ell$ converges to the average vector $\mu$ found in~\eqref{eq:KFP_matrix}. In turn, the convergences $\Sigma_\ell\to\Sigma$ and $\mu_\ell\to\mu$, together with~\eqref{eq:KFP_matrix_discount}, imply that $\Lambda_\ell$ and $\rho_\ell$ converge to the coefficients $\Lambda,\rho$ in~\eqref{eq:KFP_matrix}.
\noindent Finally, from~\eqref{eq:v0_disc} we easily deduce that $c^i\to+\infty$, but also $\ell\, c^i\to\lambda^i$ and $\ell v^i_\ell(x)\to\lambda^i$ with $\lambda^i=F^i(\Sigma,\mu)+\mathrm{tr}(\nu R\nu\Sigma+\nu R A)-\mu^T\,{\Sigma \nu R\nu \Sigma \over 2}\, \mu$ and $F^i$ given by~\eqref{eq:v1}, as in section~\ref{sec:LQG}. Thus, we conclude $$ v^i_\ell(x)-v^i_\ell(0)=x^T\,{\Lambda_\ell\over 2}\,x+\rho_\ell x~~ \longrightarrow ~~ x^T\,{\Lambda\over 2}\,x+\rho x=v^i(x)\,, $$ recovering the expected value function of the problem with ergodic cost. \smallskip \noindent{\it Step 2.} Now we study the limit as $\ell\to 0^+$ of the mean field system~\eqref{eq:mfpde_disc}, and we fix $\tilde\ell>0$ small enough that $\ell<\tilde\ell$ implies the invertibility of the matrix ${\cal B}_\ell^\infty$, defined in Theorem~\ref{thm:disc_LIMplay}. \noindent For games with $\ell<\tilde\ell$, the part of step~1 about solutions of the ARE can be repeated, provided we replace $Q$ with $\hat Q$ in the various formulas derived from~\eqref{eq:mfpde_disc}. Namely, we can prove that the positive definite solutions $\hat\Sigma_\ell$ converge, as $\ell\to 0^+$, to the matrix $\Sigma\in\mathrm{Sym_d^+}$ which solves the ARE in~\eqref{eq:KFP_matrix_limit}. Then, by passing to the limit in the equation for the average $\mu_\ell$ in~\eqref{eq:KFP_matrix_limit}, we obtain \begin{equation} \mu_\ell=({\cal B}_\ell^\infty)^{-1}\Big(\hat QH+\,{\hat B\over 2}\,\Delta\Big) \qquad\longrightarrow\qquad \mu=({\cal B}^\infty)^{-1}\Big(\hat QH+\,{\hat B\over 2}\,\Delta\Big)\,. \end{equation} The remaining coefficients converge as in step~1. In particular, $\ell v_\ell\to\lambda=\hat F(\Sigma,\mu)+\mathrm{tr}(\nu R\nu\Sigma+\nu R A)-\mu^T\,{\Sigma \nu R\nu \Sigma \over 2}\, \mu$, with $\hat F$ given by~\eqref{eq:v1_limit}, as in section~\ref{sec:LQG}.
\smallskip \noindent{\it Step 3.} By combining step~1 with the result on the mean field system~\eqref{eq:mfpde} in section~\ref{sec:LQG}, we obtain that $$ \lim_{N\to+\infty}\lim_{\ell\to0^+}\Big[v^i_{\ell,N}-v^i_{\ell,N}(0)\Big]=v\,, \qquad \lim_{N\to+\infty}\lim_{\ell\to0^+}m^i_{\ell,N}=m\,, \qquad \lim_{N\to+\infty}\lim_{\ell\to0^+}\ell v^i_{\ell,N}=\lambda\,, $$ in the appropriate topologies. Now, if we denote by $\hat\ell>0$ the minimum between the value found in Theorem~\ref{thm:disc_LIMplay}{\it (a)} and the value $\tilde \ell$ in step~2, for $\ell<\hat\ell$ the $N$--players game~\eqref{eq:sde2}--\eqref{eq:cost_disc2} has QG solutions, for all $N\in\mathbb{N}$. Moreover, taking $N$ large enough so that ${\cal B}^N_\ell$ is invertible (because it converges to the invertible matrix ${\cal B}_\ell^\infty$, as $N\to+\infty$), the QG solution is unique and it converges as $N\to+\infty$ to the solution of~\eqref{eq:mfpde_disc}, by Theorem~\ref{thm:disc_LIMplay}{\it (c)}. Thus, owing to step~2, $$ \lim_{\ell\to0^+}\lim_{N\to+\infty}\Big[v^i_{\ell,N}-v^i_{\ell,N}(0)\Big]=v\,, \qquad \lim_{\ell\to0^+}\lim_{N\to+\infty}m^i_{\ell,N}=m\,, \qquad \lim_{\ell\to0^+}\lim_{N\to+\infty}\ell v^i_{\ell,N}=\lambda\,, $$ so that~\eqref{eq:commute_disc1}--\eqref{eq:commute_disc3} hold and this concludes the proof.~~$\diamond$ \medskip \noindent {\bf Proof of Theorem~\ref{thm:vv}.} {\it Step 1.} Using again the notation ${\cal B}^N= Q+\,{A^TRA\over 2}\,+(N-1)~{B\over 2}$, the convergence ${\cal B}^N\to{\cal B}^\infty$ as $N\to+\infty$ implies that there exists $\bar N\in\mathbb{N}$ such that ${\cal B}^N$ is invertible for $N\geq\bar N$. We then fix $k>0$, $N\geq \bar N$ and consider an $N$--players game satisfying assumptions {\bf (H1)}--{\bf (H3)} with $\nu=k\,\mathrm{I}_d$.
Instead of QG solutions of the form~\eqref{eq:ansatz_nearly_id}, we look for solutions to the HJB--KFP system~\eqref{eq:hjkfp} satisfying \begin{equation}\label{eq:ansatz_h3} v^i(x)=x^T\,{\Lambda\over 2}\, x+\rho x\,,\qquad\qquad m^i(x)=\gamma\exp\left\{-\,{1\over 2} (x-\mu)^T\,{V\over k\,\sqrt{r}}(x-\mu)\right\}\,, \end{equation} for suitable matrices $V\in\mathrm{Sym_d^+}$, $\Lambda\in\mathrm{Mat}_{d\times d}(\mathbb{R})$ and vectors $\mu,\rho\in\mathbb{R}^d$, which are the same for all the players. Here, $\gamma$ is a normalization constant explicitly given by $(2\pi)^{-d/2}\sqrt{\mathrm{det}\big({V\over k\sqrt{r}}\big)}$. \noindent By plugging these expressions into system~\eqref{eq:hjkfp}, or by setting $V= k\,\sqrt{r}\,\Sigma$ in the proof of Theorem~2 of~\cite{BardiPriuli}, one finds that the coefficients $\Lambda,V,\rho,\mu$ must satisfy \begin{equation}\label{eq:diff_var1} V^2=2Q^N+r\,A^2\,, \quad (V^2+(N-1)B^N)\,\mu=P\,, \quad \Lambda=\sqrt{r}\big(V+\sqrt{r}\,A\big)\,, \quad \rho=-\sqrt{r}\,V\mu\,, \end{equation} where $P:=\left(2Q^NH+(N-1)B^N\Delta\right)$, and \begin{equation}\label{eq:diff_var4} \lambda^i=F^i\left({V\over k\sqrt{r}},\mu\right)-\mu^T\,{V^2\over 2}\, \mu+k \sqrt{r}\,\mathrm{tr}(V+\sqrt{r}A)\,, \end{equation} with $F^i$ as in~\eqref{eq:v1}. It is immediate to check that, under our assumptions, the first two equations in~\eqref{eq:diff_var1} admit a unique solution in $\mathrm{Sym_d^+}$ and $\mathbb{R}^d$, respectively, given by \begin{equation}\label{eq:Vmu_vanish_visc} V=\sqrt{2Q^N+r A^2}\,, \qquad\qquad \mu=(V^2+(N-1)B^N)^{-1}\left(2Q^NH+(N-1)B^N\Delta\right)\,. \end{equation} Since the equations in~\eqref{eq:diff_var1} do not depend on $k$, the same is true for the value function $v^i$ and for the mean vector $\mu$. Only the value $\lambda^i$ in~\eqref{eq:diff_var4} is modified by a change of $k$.
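The explicit formulas~\eqref{eq:Vmu_vanish_visc} are easy to validate numerically. A rough Python sketch (with random coefficient matrices respecting the sign conditions of {\bf (H1)}--{\bf (H3)}; the vectors named `H_ref` and `Delta` stand for the data $H$ and $\Delta$ of the cost):

```python
import numpy as np

rng = np.random.default_rng(2)
d, N, r = 3, 8, 0.4

def spd_sqrt(M):
    # principal square root of a symmetric positive definite matrix
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(w)) @ U.T

G = rng.standard_normal((d, d)); Q = G @ G.T + np.eye(d)   # Q^N in Sym_d^+
A = rng.standard_normal((d, d)); A = (A + A.T) / 2         # symmetric, as in (H3)
Gb = rng.standard_normal((d, d)); B = Gb @ Gb.T            # B^N >= 0
H_ref = rng.standard_normal(d)                             # reference vector H
Delta = rng.standard_normal(d)                             # reference vector Delta

# explicit solutions of the first two equations in (diff_var1)
V = spd_sqrt(2 * Q + r * A @ A)
P = 2 * Q @ H_ref + (N - 1) * B @ Delta
mu = np.linalg.solve(V @ V + (N - 1) * B, P)

assert np.allclose(V @ V, 2 * Q + r * A @ A)       # V^2 = 2 Q^N + r A^2
assert np.linalg.eigvalsh(V).min() > 0             # V in Sym_d^+
assert np.allclose((V @ V + (N - 1) * B) @ mu, P)  # (V^2 + (N-1) B^N) mu = P

# remaining coefficients; note that none of them depends on k
Lam = np.sqrt(r) * (V + np.sqrt(r) * A)
rho = -np.sqrt(r) * V @ mu
assert np.allclose(Lam, Lam.T)                     # Lambda is symmetric here
```

Note that $V^2+(N-1)B^N$ is automatically invertible, being the sum of a positive definite and a positive semidefinite matrix.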
Passing finally to the limit as $k\to 0^+$ in~\eqref{eq:diff_var1}--\eqref{eq:diff_var4}, we conclude $$ v^i_{k,N}(x)\to\sqrt{r} V(x-\mu)+rAx\,, \quad m^i_{k,N}={\cal N}\left(\mu,{V\over k\sqrt{r}}\right)\to \delta_\mu\,, \quad \lambda^i_{k,N}\to F^i(0,\mu)-\mu^T\,{V^2\over 2}\, \mu\,, $$ in the correct topologies, with $V$ and $\mu$ given by~\eqref{eq:Vmu_vanish_visc}. \smallskip \noindent{\it Step 2.} Analogous computations can be performed for the mean field equations~\eqref{eq:mfpde}. In this case, the expression~\eqref{eq:vv_limit} for the value function remains valid as $k\to 0^+$, and it is easy to verify that $m={\cal N}\left(\hat \mu, {\hat V\over k\sqrt{r}}\right)\to \delta_{\hat\mu}$ in distributional sense. Since we also have $$ \lambda=\hat F\left({\hat V\over k\sqrt{r}},\hat \mu\right)-\hat \mu^T\,{\hat V^2\over 2}\, \hat \mu+k \sqrt{r}\,\mathrm{tr}(\hat V+\sqrt{r}A) ~\longrightarrow~ \hat F(0,\hat \mu)-\hat \mu^T\,{\hat V^2\over 2}\, \hat \mu\,, $$ it is enough to pass to the limit as $N\to\infty$ in the formulas~\eqref{eq:Vmu_vanish_visc} to complete the proof. ~~$\diamond$ \medskip \begin{rem}\label{rem:extenstion} Since~\eqref{eq:diff_var1} is independent of the matrix $\nu$, the assumption {\bf (H3)} in Theorem~\ref{thm:vv} can be replaced by the following, slightly more general, one: \begin{description} \item{{\bf (H3$'$)}} The matrix $A$ is symmetric, there exists a constant $r>0$ such that $R=r\,\mathrm{I}_d$, and there holds $\nu = k\,\bar\nu$ for a constant $k>0$ and a matrix $\bar\nu$ such that $\bar\nu \sqrt{2Q+r A^2}\in\mathrm{Sym_d}$. \end{description} \end{rem} \medskip \noindent {\bf Proof of Theorem~\ref{thm:cc}.} {\it Step 1.} The convergence $\overline{\cal B}^N:=Q^N+(N-1)\,{B^N\over 2}\to\overline{\cal B}^\infty$ as $N\to+\infty$ implies that there exists $\bar N\in\mathbb{N}$ such that $\overline{\cal B}^N$ is invertible for $N\geq\bar N$.
Having fixed $N\geq \bar N$, we consider the $N$--players game~\eqref{eq:sde}--\eqref{eq:cost_ergodic} and let $r>0$ be small enough that ${\cal B}^N:=Q^N+r\,{A^2\over 2}+(N-1)\,{B^N\over 2}$ is invertible too. \noindent By looking for solutions of the form~\eqref{eq:ansatz_h3} in the HJB--KFP system with cost $R=r\,\mathrm{I}_d$, we find that matrices $\Lambda,V$ and vectors $\rho,\mu$ satisfy again~\eqref{eq:diff_var1}--\eqref{eq:diff_var4}. \noindent We claim that, as $r\to 0^+$, the value functions $v^i\to 0$ uniformly on compact sets. Indeed, for fixed $r>0$, the ARE in~\eqref{eq:diff_var1} admits a unique solution $V_r\in\mathrm{Sym_d^+}$, given by $V_r:=\sqrt{2Q^N+r A^2}$. As $r\to 0^+$, we thus have $V_r\to\overline{V}=\sqrt{2Q^N}$. Similarly, by passing to the limit in the equation~\eqref{eq:diff_var1} for $\mu$, we obtain $$ \mu_r:=(V_r^2+(N-1)B^N)^{-1}P ~~\longrightarrow~~ \overline{\mu}:=(\overline{V}^2+(N-1)B^N)^{-1}P\,, $$ which in turn implies $\Lambda\to 0$ and $\rho\to 0$. It is also simple to verify that the measures $m^i$ converge in distributional sense to a Dirac delta $\delta_{\overline{\mu}}$, centered at $\overline{\mu}$, and that $\lambda^i_{r,N} \longrightarrow F^i(0,\overline{\mu})-\overline{\mu}^T\,{\overline{V}^2\over 2}\, \overline{\mu}$, as in the deterministic limit. \smallskip \noindent {\it Step 2.} Having fixed $r>0$ small enough to ensure the invertibility of the matrix ${\cal B}^\infty:=\hat Q+r\,{A^2\over 2}+\,{\hat B\over 2}$, we repeat the argument used in step~1 for the mean field equations~\eqref{eq:mfpde}. In this case, we get $$ v_r(x) = \sqrt{r} \hat V_r(x-\hat\mu_r)+rAx\,, \qquad\qquad m_r= {\cal N}\left(\hat\mu_r,{\hat V_r\over k\sqrt{r}}\right)\,, $$ with $\hat V_r:=\sqrt{2\hat Q+r A^2}$ and $\hat\mu_r:=(\hat V_r^2+\hat B)^{-1}\left(2\hat QH+\hat B\,\Delta\right)$. Then, as $r\to 0^+$ we obtain the convergence of $v_r$ and $m_r$ to the value function and the measure in~\eqref{eq:cc_limit}.
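The convergence of the coefficient matrices driving this limit can be observed numerically. A rough Python sketch (random coefficients; it checks that $\hat V_r=\sqrt{2\hat Q+rA^2}\to\sqrt{2\hat Q}$ and $\hat\mu_r\to\hat\mu$, with error shrinking like $r$; the vector named `P` stands for the data $2\hat QH+\hat B\,\Delta$):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 3

def spd_sqrt(M):
    # principal square root of a symmetric positive definite matrix
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(w)) @ U.T

G = rng.standard_normal((d, d)); Q = G @ G.T + np.eye(d)   # hat Q in Sym_d^+
A = rng.standard_normal((d, d)); A = (A + A.T) / 2         # symmetric, as in (H3)
Gb = rng.standard_normal((d, d)); B = Gb @ Gb.T            # hat B >= 0
P = rng.standard_normal(d)          # stands for 2 * hatQ * H + hatB * Delta

V_hat = spd_sqrt(2 * Q)                                    # limit matrix as r -> 0+
mu_hat = np.linalg.solve(V_hat @ V_hat + B, P)

errs = []
for r in [1e-1, 1e-2, 1e-3]:
    V_r = spd_sqrt(2 * Q + r * A @ A)
    mu_r = np.linalg.solve(V_r @ V_r + B, P)
    errs.append(np.linalg.norm(V_r - V_hat) + np.linalg.norm(mu_r - mu_hat))

# hat V_r -> sqrt(2 hat Q) and hat mu_r -> hat mu as r -> 0+
assert errs[0] > errs[1] > errs[2] > 0
```

The same computation with $(N-1)B^N$ in place of $\hat B$ illustrates the convergence $\overline{V}$, $\overline{\mu}$ of step~1.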
From $$ \lambda_r=\hat F\left({\hat V_r\over k\sqrt{r}},\hat \mu_r\right)-\hat \mu_r^T\,{\hat V_r^2\over 2}\, \hat \mu_r+k \sqrt{r}\,\mathrm{tr}(\hat V_r+\sqrt{r}A) ~\longrightarrow~ \hat F(0,\hat \mu)-\hat \mu^T\,{\hat V^2\over 2}\, \hat \mu\,, $$ the convergence of $\lambda_r$ also follows. Finally, by passing to the limit as $N\to+\infty$ in the formulas in step~1, it is immediate to prove $\overline{V}\to\hat V$ and $\overline{\mu}\to\hat\mu$, whence the conclusion follows. ~~$\diamond$ \section{Extensions and open problems}\label{sec:conclusion} \subsection{Games with $N$ different players} For the sake of notational simplicity, in this work we have focused our attention on games satisfying {\bf (H2)}, i.e. games with nearly identical players (see also Definition~4.1 in~\cite{BardiPriuli}). However, some of the results we have presented admit a straightforward generalization to games in which the cost for the player's state~\eqref{eq:quad_cost} involves more general matrix coefficients $Q^i_{jk}$. The most interesting extension is probably the characterization of affine Nash equilibrium strategies for general games with discounted cost~\eqref{eq:cost_disc}. We replace assumptions {\bf (H1)}--{\bf (H3)} with the following \begin{description} \item{\bf (H)} The matrix $\sigma$ is invertible and the matrix $R$ belongs to $\mathrm{Sym_d^+}$. Moreover, for all $i\in\{1,\ldots,N\}$, let us assume that the matrices $Q^i$ are symmetric, that the block $Q^i_{ii}\in\mathrm{Sym_d^+}$ and that $(A,\nu,R,Q^i_{ii})$ satisfy the Riccati--Sylvester property in the sense of Definition~\ref{def:RSprop}. \end{description} In this case system~\eqref{eq:hjkfp_disc_intro} takes the same form as~\eqref{eq:hjkfp}, but with $\ell v^i$ in place of $\lambda^i$ in the first equation for each player. Then, we can prove the following result.
\begin{thm}\label{thm:disc_Nplay2} Under assumption {\bf (H)}, there exists $\bar\ell>0$ such that for $\ell<\bar\ell$ the system of HJB--KFP equations for the game admits a unique solution $(v^i,m^i)$ of the form $$ v^i(x)=x^T\,{\Lambda^i\over 2}\, x+\rho^i x+c^i\,,\qquad\qquad m^i(x)= {\cal N}(\mu^i,(\Sigma^i)^{-1})\,, $$ for suitable symmetric matrices $\Lambda^i,\Sigma^i$, with $\Sigma^i$ positive definite, vectors $\mu^i,\rho^i$ and numbers $c^i$, if and only if the $Nd \times Nd$ matrix $$ \widetilde{\cal B}:=\big(\widetilde{\cal B}_{\alpha\beta}\big)_{\alpha,\beta=1,\ldots,N} \qquad\qquad \widetilde{\cal B}_{\alpha\beta}:= Q^\alpha_{\alpha\beta}+\delta_{\alpha\beta}\left(\,{A^T R A\over 2}\,-\ell\,{R A\over 2}\right)\,\in\mathrm{Mat}_{d\times d}(\mathbb{R})\, $$ is invertible, $\delta_{\alpha\beta}$ being the Kronecker delta. Moreover, the affine feedbacks $\overline{\alpha}^i(x)=R^{-1}\nabla v^i(x)$, for $x\in\mathbb{R}^d$ and $i=1,\ldots,N$, provide a Nash equilibrium strategy for the game~\eqref{eq:sde}--\eqref{eq:cost_disc}, for all initial states $X\in\mathbb{R}^{Nd}$. \end{thm} \noindent The proof proceeds along the same lines as the one for Theorem~\ref{thm:disc_Nplay}, and it is therefore omitted. Further extensions to games having matrices $A^i,\sigma^i,R^i$ and discount factors $\ell^i$ also depending on the players just require changes in the corresponding notations. \subsection{Games not satisfying {\bf (H3)}} Comparing the results presented in this paper with the ones in~\cite{BardiPriuli}, one might easily wonder why the assumption {\bf (H3)} is imposed here on some matrix coefficients in the dynamics and the cost. The answer is related to the algebraic Riccati equations whose solutions give the (inverse of the) covariance matrix of the desired Gaussian measure.
Indeed, in the case of deterministic and cheap control limits, a large part of the manipulations done on the system~\eqref{eq:hjkfp} of HJB--KFP equations can still be repeated for games not satisfying {\bf (H3)}. By searching for solutions of the form \begin{equation}\label{eq:ansatz_gen} v^i(x)=x^T\,{\Lambda\over 2}\, x+\rho x\,,\qquad\qquad m^i(x)=\gamma\exp\left\{-\,{1\over 2} (x-\mu)^T\,\nu^{-1}R^{-1/2}V\,(x-\mu)\right\}\,, \end{equation} one finds relations similar to~\eqref{eq:diff_var1} and, in particular, we have that $V$ must solve $V^TV=2Q+A^TRA$. However, in this context we are no longer searching for solutions $V\in\mathrm{Sym_d}$, but for a $V$ which makes $\Sigma:=\nu^{-1}R^{-1/2}V\in\mathrm{Sym_d^+}$. For any fixed choice of the matrices $\nu$ and $R$, the existence and uniqueness result in Theorem~2 of~\cite{BardiPriuli} (see also section~\ref{sec:LQG}) allows us to prove that a unique $V$ with the required properties exists, and thus that a unique QG solution to~\eqref{eq:hjkfp} of the form~\eqref{eq:ansatz_gen} exists, at least when $(A,\nu,R,Q)$ satisfy the Riccati--Sylvester property in the sense of Definition~\ref{def:RSprop} and ${\cal B}= Q+\,{A^TRA\over 2}\,+(N-1)~{B\over 2}$ is invertible. The problems for these more general games arise when we try to pass to the limit: indeed, except for the simple extension mentioned in Remark~\ref{rem:extenstion}, it is not clear whether the sequences of solutions converge, either as $\nu\to 0$ or as $R\to 0$, to a specific limit matrix $\overline{V}$ among the many solutions of the limit ARE, which is, respectively, $$ \overline{V}^T\overline{V}=2Q+A^TRA\,, \qquad\qquad \overline{V}^T\overline{V}=2Q\,. $$ Analogous issues are found when studying the limits of mean field equations~\eqref{eq:mfpde}.
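The non-uniqueness at the root of these difficulties is easy to see: if $M$ is positive definite, then $V=O\sqrt{M}$ solves $V^TV=M$ for every orthogonal matrix $O$, and only the choice $O=\mathrm{I}_d$ gives the symmetric root. A minimal Python sketch (with a generic SPD matrix $M$ standing in for $2Q+A^TRA$):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3

G = rng.standard_normal((d, d))
M = G @ G.T + np.eye(d)      # stands in for 2Q + A^T R A, positive definite

w, U = np.linalg.eigh(M)
S = U @ np.diag(np.sqrt(w)) @ U.T    # the unique SPD square root of M

# any orthogonal O gives another solution V = O S of V^T V = M
O, _ = np.linalg.qr(rng.standard_normal((d, d)))
V = O @ S
assert np.allclose(V.T @ V, M)
assert not np.allclose(V, S)   # a genuinely different, non-symmetric solution
```

Which of these solutions is selected in the limit depends on the requirement that $\nu^{-1}R^{-1/2}V$ be symmetric positive definite, and this selection is what fails to be stable without {\bf (H3)}.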
For the vanishing discount limit there is an additional difficulty, because it is not clear whether symmetric positive definite solutions to the ARE in~\eqref{eq:discount1} exist when {\bf (H3)} is not satisfied. Indeed, the matrix ${\cal A}_\ell+{\cal R}Y_\ell=\,{\nu R\over 2}\,\left({\ell\over 2}\,\mathrm{I}_d+\nu Y_\ell\right)$, with $Y_\ell\in\mathrm{Sym_d}$ given by Proposition~\ref{prop:ARE}{\it (iii)}, might not be symmetric and might have complex eigenvalues if we do not assume {\bf (H3)}. In this case, it does not seem possible to deduce $Y_\ell>0$ in general from the estimates on the real parts of the eigenvalues of ${\cal A}_\ell+{\cal R}Y_\ell$. In our opinion, new results on algebraic Riccati equations would be necessary to extend our analysis to more general games, but such extensions are beyond the scope of this work. \subsection{Comparison with previous works on infinite horizon games with discounted cost}\label{sec:compare_caines} There is a rich literature on $N$--person infinite horizon games with discounted costs (see~\cite{HCM07ieee,LZ} and references therein).
Typically, the games considered have a dynamics \begin{equation}\label{eq:caines_sde} dX^i_t=\left({\bf A}\,X^i_t+{\bf B}\,\alpha^i_t\right)dt+{\bf D}\,dW^i_t\,, \qquad\qquad i=1,\ldots,N\,, \end{equation} and cost \begin{equation}\label{eq:caines_cost} J^i(X,\alpha^1,\ldots,\alpha^N):= \mathbb{E}\left[\int_0^{+\infty}e^{-\ell t}\left(\,{(\alpha_t^i)^T\,{\bf R}\,\alpha_t^i\over 2}\,+(X^i_t-\Xi_t)^T {\bf Q}\,(X^i_t-\Xi_t)\right)\,dt\right]\,, \end{equation} where ${\bf A,B,D,R,Q}\in\mathrm{Mat}_{d\times d}(\mathbb{R})$ are suitable matrices, ${\bf R}>0$ and ${\bf Q}\geq 0$, and where $$ \Xi_t := \Gamma \cdot\left(\,{1\over N}\,\sum_{j=1}^N X^j_t\right) +\eta\,\in\mathbb{R}^d \,, \qquad\qquad \Gamma\in\mathrm{Mat}_{d\times d}(\mathbb{R})\,,~\eta\in\mathbb{R}^d\,, $$ with the term $1/N\sum_{j=1}^N X^j_t$ representing a sort of average position among the agents (referred to as the ``mean field term'' of the game). The typical result for these games is that the solution of a suitable ``mean field system'' of ODEs, obtained by formally passing to the limit as $N\to+\infty$ in the HJB equation for~\eqref{eq:caines_sde}--\eqref{eq:caines_cost} and replacing the mean field term in the cost with a suitable deterministic function, provides an approximate Nash equilibrium for the game~\eqref{eq:caines_sde}--\eqref{eq:caines_cost}. Namely, the feedback strategy corresponding to such a solution is an $\varepsilon$--Nash equilibrium strategy with $\varepsilon={\cal O}\left({1\over N}\right)$.
\smallskip Now observe that the second term in the cost~\eqref{eq:caines_cost} can be rewritten in the form~\eqref{eq:quad_cost}, by choosing $\overline{X}^i_i=\eta$, $\overline{X}^j_i=0$ for $j\neq i$, and $$ Q^i_{ii}=\left(\mathrm{I}_d-\,{\Gamma^T\over N}\right){\bf Q}\left(\mathrm{I}_d-\,{\Gamma\over N}\right)\,, \qquad Q^i_{ij}=-\left(\mathrm{I}_d-\,{\Gamma^T\over N}\right){\bf Q}\,{\Gamma\over N}\,, \qquad Q^i_{jk}=\,{\Gamma^T\over N}\,{\bf Q}\,{\Gamma\over N}\,, $$ and that {\bf (H1)} is satisfied whenever ${\bf Q}>0$, since $\mathrm{I}_d-\,{\Gamma^T\over N}$ is always invertible for $N$ large enough. Therefore, games~\eqref{eq:caines_sde}--\eqref{eq:caines_cost} are very similar to the ones we considered in Theorem~\ref{thm:disc_Nplay}. The main difference is that the cost $J^i$ in~\eqref{eq:caines_cost} depends on the other players directly through their state $X^j$, while in~\eqref{eq:cost_disc} the dependence is present only through their asymptotic distribution $m^j$ in the environment. The novelty in our results is that, thanks to the particular form of the cost, we are able to characterize \emph{exact} Nash equilibria for the discounted game (at least for small values of the discount factor $\ell$) and not only $\varepsilon$--approximate ones. Moreover, we rigorously prove the convergence of such Nash equilibria to the solutions of the mean field game. The analogous study in the case of games with cost~\eqref{eq:caines_cost} is still an open problem, to our knowledge.
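The block rewriting of the state cost can be verified directly. A minimal Python sketch (with $\eta=0$ for simplicity, and assuming the off-diagonal blocks are completed by $Q^i_{ji}:=(Q^i_{ij})^T$, consistently with the symmetry of $Q^i$):

```python
import numpy as np

rng = np.random.default_rng(5)
d, N, i = 2, 5, 0        # dimension, number of players, index of player i

G = rng.standard_normal((d, d))
Qb = G @ G.T + np.eye(d)                 # the (positive definite) matrix Q
Gamma = rng.standard_normal((d, d))
X = rng.standard_normal((N, d))          # states X^1, ..., X^N

# direct form: (X^i - Xi)^T Q (X^i - Xi), with Xi = Gamma * (mean of states)
Xi = Gamma @ X.mean(axis=0)
direct = (X[i] - Xi) @ Qb @ (X[i] - Xi)

# block form with the coefficients Q^i_{jk} displayed in the text
E = np.eye(d) - Gamma.T / N
Qii = E @ Qb @ E.T                       # (I - Gamma^T/N) Q (I - Gamma/N)
Qij = -E @ Qb @ (Gamma / N)              # -(I - Gamma^T/N) Q Gamma/N
Qjk = (Gamma.T / N) @ Qb @ (Gamma / N)   # Gamma^T/N Q Gamma/N, for j, k != i

block = X[i] @ Qii @ X[i]
for j in range(N):
    if j == i:
        continue
    block += 2 * X[i] @ Qij @ X[j]       # uses Q^i_{ji} = (Q^i_{ij})^T
    for k in range(N):
        if k != i:
            block += X[j] @ Qjk @ X[k]

assert np.isclose(direct, block)
```

With $\eta\neq 0$ the same identity holds after shifting by the reference states $\overline{X}^j_i$ chosen in the text.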
\section{Introduction} \label{sec:intro} A major goal of numerical relativity is to simulate the coalescence of an orbiting pair of black holes. In studying such systems, we will be interested in determining many quantities: the energy and momentum radiated, the associated waveforms, the total angular momentum of the system, etc. In addition to these common physical quantities, we will also want to understand the causal structure of the spacetime. Not only will this give us a more complete picture of the dynamics, but it also seems that tracking the causal structure may prove to be a crucial step in successfully evolving black-hole spacetimes\cite{seidel_suen92}. Knowing which events are inside black holes may allow them to be excised from the computational domain, thereby avoiding numerical difficulties that have plagued black hole evolutions. Ideally, we would like to be able to track all of the event horizons in a given spacetime. However, this is not possible {\em during} the evolution: event horizons can only be reconstructed after the evolution is complete. Instead, {\em apparent horizons} can be located on each individual space-like hypersurface during an evolution. Since apparent horizons must lie inside event horizons and asymptote towards them as the system settles down, they provide much of the desired causal information. They can also be used to define regions that can be excised from a computation. Various methods exist for locating apparent horizons. In practice, one searches for marginally outer-trapped surfaces (MOTS)\cite{hawkingellis}. The apparent horizon is the outermost such surface. For axisymmetric problems, shooting methods\cite{cadez74,bishop82,bishop84,st92} have been the most widely used for locating apparent horizons, although decomposition into orthogonal polynomials\cite{eppley77}, the solution of elliptic boundary-value problems\cite{cook90,cookabrahams92}, and the use of curvature flows\cite{tod91} have been used. 
Unfortunately, shooting methods do not generalize to three-dimensional spatial slices. The first general apparent-horizon finders were based on a spherical-harmonic decomposition of the MOTS\cite{nakamura84,nakamura85,bishop91}. In this approach, each coefficient in the spherical-harmonic expansion is determined, iteratively, by performing an integral over a complicated function that characterizes the MOTS. The MOTS equation can also be posed as an elliptic equation for a function that parametrically specifies the location of the MOTS\cite{huq96}. Curvature flow methods are also certainly applicable in the general case of a three-dimensional spatial hypersurface. Recently, a variant of the spherical-harmonic decomposition method has been proposed by Libson {\it et al}.\cite{libson95}. This approach is conceptually appealing, having two particularly nice features. First, the coefficients in the spherical-harmonic expansion are determined by a minimization procedure, eliminating the need to perform surface integrals. Second, Libson {\it et al}. have proposed the use of symmetric trace-free (STF) tensors for parametrically representing the MOTS. This latter feature is particularly appealing when Cartesian coordinates are used on the three-dimensional hypersurface. In this paper, we will review the method proposed by Libson {\it et al}. and describe how we generalize the method by extending the expansion in STF tensors to arbitrary order. Because one is always using a truncated expansion, it is important to understand clearly the behavior of the apparent-horizon finder when the maximum order of the expansion is varied. As we will see, the number of points where the MOTS is determined will also affect the behavior of the apparent-horizon finder. We have examined both of these effects in detail. The paper is organized as follows: In \S~\ref{sec:methods} we outline the method and basic equations, and in \S~\ref{sec:numerics} we explain our numerical implementation. 
In \S~\ref{sec:tests} we carefully discuss results from various test calculations, and in \S~\ref{sec:summary} we briefly summarize the most important results. All technical details are provided in the appendices. Appendix~\ref{app:expressions} contains a number of useful equations relating to STF tensors. Appendix~\ref{app:storage} describes the storage for arbitrary rank tensors. In Appendix~\ref{app:init_recur} we derive recurrence relations for STF tensors. Finally, in Appendix~\ref{app:area_elem} we derive an expression for the area element on the apparent horizon. \section{Method and basic equations} \label{sec:methods} A MOTS is a closed two-surface embedded in a three-dimensional spatial hypersurface and, therefore, can be defined as a level surface of some scalar function $\tau$. If we use Cartesian coordinates on the hypersurface then we can parametrically define the level surface as \begin{equation} \tau(x,y,z) = \sqrt{\delta_{ij}(x^i - C^i)(x^j - C^j)} - f(\theta,\phi) = 0. \end{equation} Here, $x^i$ are Cartesian coordinates, $C^i$ is a location {\em inside} the $\tau = 0$ surface, and $\theta$ and $\phi$ are polar coordinates centered on $C^i$. The function $f(\theta,\phi)$ then measures the {\em coordinate} distance between $C^i$ and the $\tau=0$ surface in the direction $(\theta,\phi)$. The outward pointing unit normal on the $\tau=0$ surface is \begin{equation} S_i = \lambda \partial_i \tau, \end{equation} where $\lambda$ is the normalization factor \begin{equation} \lambda \equiv \left[ g^{ij} (\partial_i \tau)(\partial_j \tau ) \right]^{-\frac{1}{2}} \end{equation} and $g_{ij}$ is the metric on the spatial hypersurface. The expansion $\Theta$ of an outgoing null-bundle can now be written \begin{equation} \Theta = D_i S^i + K_{ij} S^i S^j - K^i_i, \end{equation} where $K_{ij}$ is the extrinsic curvature and $D_i$ is the covariant derivative operator associated with $g_{ij}$. 
Note that the term $D_i S^i$ involves both first and second derivatives of $\tau$ and hence of $f(\theta,\phi)$. Writing these out explicitly yields \begin{equation} \label{theta} \Theta = (g^{ij} - S^i S^j) \left( \frac{\lambda}{f} (\delta_{ij} - n_i n_j) - \lambda \partial_i \partial_j f - S_k \Gamma^k_{ij} - K_{ij} \right). \end{equation} Here $\delta_{ij}$ is the Kronecker delta, $n^i$ is the unit vector in the $(\theta,\phi)$ direction, and $\Gamma^k_{ij}$ are the connection coefficients associated with $g_{ij}$. One must be careful in deriving this equation since there are effectively two metrics being used: the spatial metric $g_{ij}$ and the Kronecker delta $\delta_{ij}$ used in computing coordinate distances. In particular, we note that \begin{eqnarray} n^i &=& \frac{x^i - C^i}{\sqrt{\delta_{jk}(x^j - C^j)(x^k - C^k)}} \\ n_i &\equiv& \delta_{ij}n^j \ \ (\delta_{ij}n^in^j = 1) \\ S_i &=& \lambda(n_i - \partial_if) \\ S^i &\equiv& g^{ij}S_j \ \ (g_{ij}S^iS^j = 1). \end{eqnarray} Our goal is now to find a function $f(\theta,\phi)$ such that the $\tau = 0$ surface is a MOTS, i.e.\ that the expansion~(\ref{theta}) vanishes on that surface. In practice, instead of making the expansion vanish, we can evaluate it at a number $N_\Theta$ of points on the surface $\tau = 0$ and look for a function $f$ such that \begin{equation} \label{AHsum} {\cal S}(N_\Theta) \equiv \sum_{\alpha = 1}^{N_\Theta}W_\alpha \Theta_\alpha^2 \end{equation} vanishes. If (\ref{AHsum}) vanishes for arbitrary weights $W_\alpha$, then in the limit $N_\Theta\rightarrow\infty$ (so as to completely cover the surface) we are guaranteed to have located a MOTS. Our strategy will be to expand $f(\theta,\phi)$ in terms of multipole moments and to search for a minimum in ${\cal S}$. The sum~(\ref{AHsum}) then depends on the corresponding expansion coefficients, which can be varied until the sum assumes a minimum. If this minimum comes arbitrarily close to zero, an apparent horizon has been found.
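As a minimal sanity check of~(\ref{theta}) (an illustration, not the authors' implementation): in flat space with $K_{ij}=0$ and vanishing connection coefficients, a coordinate sphere of radius $r$ has $f=r$, $\partial_i f = 0$, $\lambda = 1$ and $S_i = n_i$, so the expansion collapses to $\Theta = (\delta^{ij} - n^i n^j)(\delta_{ij} - n_i n_j)/r = 2/r$:

```python
import numpy as np

# Flat metric, K_ij = 0, coordinate sphere of radius r: f = r, derivatives of f vanish,
# lambda = 1 and S_i = n_i, so eq. (theta) reduces to a double contraction of the
# projector P_ij = delta_ij - n_i n_j, giving Theta = tr(P P)/r = 2/r.
rng = np.random.default_rng(1)
r = 2.5                                              # arbitrary radius
n = rng.standard_normal(3); n /= np.linalg.norm(n)   # random direction on the sphere
P = np.eye(3) - np.outer(n, n)                       # projector appearing twice
Theta = np.einsum('ij,ij->', P, P / r)
assert np.isclose(Theta, 2.0 / r)
```

This is the expected mean-curvature expansion of a round sphere in Euclidean space.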
Thus, the problem has been reduced to a multidimensional minimization, for which standard methods can be used. An obvious choice of basis functions for the expansion of $f(\theta,\phi)$ is the set of spherical harmonics, \begin{equation} \label{sph} f(\theta,\phi) = \sum_{\ell=0}^{L} \sum_{m=-\ell}^{\ell} F^{\ell m} Y^{\ell m}(\theta,\phi), \end{equation} where the expansion is truncated at order $L$. However, since we have to take up to second derivatives with respect to Cartesian coordinates, an expansion in terms of STF tensors \begin{equation} \label{stf} f(\theta,\phi) = \sum_{\ell=0}^L {\cal F}_{K_\ell} N_{K_\ell} \end{equation} turns out to be a better choice. In the following we adopt the notation of~\cite{thorne80}, where additional details of this formalism can be found. In particular, repeated indices will always be summed over. The subscript $K_\ell$ denotes a multi-index of length $\ell$, and $N_{K_\ell}$ is the product of $\ell$ unit-vector components $n_i$: \begin{equation} \label{nkl} N_{K_\ell}= n_{k_1} n_{k_2} \cdots n_{k_\ell}. \end{equation} In~(\ref{stf}), these are contracted with the STF tensors ${\cal F}_{K_\ell}$ (of rank $\ell$). These are the {\em location-independent} expansion coefficients equivalent to the $F^{\ell m}$ in~(\ref{sph}). Note that an STF tensor of rank $\ell$ has $2\ell+1$ independent components, just like the spherical harmonics. The relationship between~(\ref{sph}) and~(\ref{stf}) can be seen even more clearly by choosing the ${\cal Y}^{\ell m}_{K_\ell}$ basis for the STF tensors as defined in Ref.~\cite{thorne80} (see Appendix~\ref{app:expressions}). In terms of these, ${\cal F}_{K_\ell}$ can be written \begin{equation} \label{sum} {\cal F}_{K_\ell} = \sum_{m=-\ell}^{\ell} F^{\ell m} {\cal Y}^{\ell m}_{K_\ell}, \end{equation} where the $F^{\ell m}$ are the same as in~(\ref{sph}).
The ${\cal Y}^{\ell m}_{K_\ell}$ also provide a relation between the spherical harmonics and the $N_{K_\ell}$ \begin{equation} \label{ylm} Y^{\ell m}(\theta,\phi) = {\cal Y}^{\ell m}_{K_\ell} N_{K_\ell}(\theta,\phi). \end{equation} Inserting this into~(\ref{sph}) and using~(\ref{sum}) immediately yields~(\ref{stf}). Note that because the coefficients ${\cal F}_{K_\ell}$ are location independent, derivatives of $f$ can be calculated from derivatives of $N_{K_\ell}$ \begin{equation} \partial_i f(\theta,\phi) = \sum_{\ell=0}^L {\cal F}_{K_\ell} \partial_i N_{K_\ell} \end{equation} and similarly for second derivatives. \section{Numerical Implementation} \label{sec:numerics} Our code is designed in such a way that it can find a MOTS to arbitrary order $L$ in the multipole expansion. On input we therefore have to specify the order $L$. Also, we have to specify the number of points $N_\Theta$ on the surface at which the expansion~(\ref{theta}) (and, of course the sum for ${\cal S}$) are evaluated. These points must be distributed somehow over the surface. Currently, they are distributed equally in $\phi$ and $\cos\theta$ on the unit sphere, but different choices could easily be made. Results for different values of $L$ and $N_\Theta$ will be presented in the next section. Next the tensors $N_{K_\ell}$, their first and second derivatives, as well as the basis STF tensors ${\cal Y}^{\ell m}_{K_\ell}$ have to be initialized. The latter are independent of location, so that we need to calculate them only once. The $N_{K_\ell}$ do, however, depend on the direction $(\theta,\phi)$, and therefore have to be calculated once for every point on the surface. In the code, we define arrays of length $N_\Theta$ to store $N_{K_\ell}$ and its derivatives. Since all of the STF tensors are completely symmetric, we can store the independent components in a very elegant and efficient way. This is explained in detail in Appendix~\ref{app:storage}. 
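As an illustrative check (not part of the paper) that the analytic derivatives $\partial_i N_{K_\ell}$ behave as expected, the $\ell = 2$ case $N_{jk} = n_j n_k$ can be differentiated using $\partial_i n_j = (\delta_{ij} - n_i n_j)/\rho$, where $\rho$ is the coordinate distance from $C^i$, and compared against central finite differences; the centre and evaluation point below are arbitrary:

```python
import numpy as np

C = np.array([0.3, -0.2, 0.1])                 # expansion centre (arbitrary)
x = np.array([1.1, 0.7, -0.4])                 # an evaluation point (arbitrary)

def N2(x):
    """N_{K_2} = n_j n_k for the direction from C to x."""
    v = x - C
    n = v / np.linalg.norm(v)
    return np.outer(n, n)

# Analytic derivative: d n_j / d x_i = (delta_ij - n_i n_j) / rho
v = x - C
rho = np.linalg.norm(v)
n = v / rho
dn = (np.eye(3) - np.outer(n, n)) / rho        # dn[i, j] = d n_j / d x_i
# Product rule: d(n_j n_k)/dx_i = (d n_j/dx_i) n_k + n_j (d n_k/dx_i)
dN = np.einsum('ij,k->ijk', dn, n) + np.einsum('ik,j->ijk', dn, n)

# Central finite-difference comparison
eps = 1e-6
dN_fd = np.empty_like(dN)
for i in range(3):
    e = np.zeros(3); e[i] = eps
    dN_fd[i] = (N2(x + e) - N2(x - e)) / (2 * eps)
assert np.allclose(dN, dN_fd, atol=1e-8)
```

The same product-rule structure extends to any rank $\ell$, which is what makes the initialization by recurrence efficient.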
In Appendix~\ref{app:init_recur} we present recurrence relations that allow for a very efficient initialization of these objects. The MOTS search is started with a set of trial expansion coefficients $F^{\ell m}$. These are then contracted with the basis STF tensors ${\cal Y}^{\ell m}_{K_\ell}$, which yields the ${\cal F}_{K_\ell}$. Since these quantities are independent of location, this needs to be done only once per iteration step. The ${\cal F}_{K_\ell}$ are then contracted with $N_{K_\ell}$ and its derivatives to find $f$, $\partial_i f$ and $\partial_i\partial_jf$ for each direction $(\theta,\phi)$. Once $f$ is known, we can construct the coordinate location \begin{equation} x^i = f n^i + C^i \end{equation} of the trial surface ($\tau = 0$) at each of the $N_\Theta$ points on the surface. For each point, we read in $g_{ij}$, $K_{ij}$ and $\Gamma^k_{ij}$. These can either be numerically evolved quantities or, for the test purposes in this paper, analytical values. Eq.~(\ref{theta}) now yields the expansion $\Theta$ for this location on the trial surface. Repeating these steps for each of the $N_\Theta$ points, we can finally construct the sum ${\cal S}$. Currently, we choose the weights $W_\alpha$ in (\ref{AHsum}) based on the proper area element (\ref{area_elem}) defined in Appendix~\ref{app:area_elem}. Thus, equation~(\ref{AHsum}) is an approximation to the mean square of the expansion: \begin{equation} \label{AHint} {\cal S} = \oint\Theta^2\,d^2\sigma. \end{equation} Any multidimensional minimization routine can now be used to vary the $F^{\ell m}$ until ${\cal S}$ has assumed a minimum. So far we have found the best results with Powell's method\cite{numrec_c}, although it is likely that a method that uses derivatives with respect to the expansion coefficients $F^{\ell m}$ will be significantly faster, especially when the initial guess is already close to the final answer. We hope to explore this in the future.
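To make the minimization step concrete, here is a toy sketch in Python: the residual is simply the mismatch between a trial surface and a known target surface (a stand-in for the true expansion $\Theta$, which would require metric data), the basis is a small hand-built set of real multipoles up to $\ell = 2$, and SciPy's Powell implementation plays the role of the routine from~\cite{numrec_c}; all sizes and coefficients are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
theta = np.arccos(rng.uniform(-1, 1, 60))      # surface sample points
phi = rng.uniform(0, 2 * np.pi, 60)

def basis(theta, phi):
    """A small real multipole basis: l = 0, the three l = 1 modes, and the l = 2, m = 0 mode."""
    return np.stack([np.ones_like(theta), np.cos(theta),
                     np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi),
                     3 * np.cos(theta) ** 2 - 1], axis=1)

B = basis(theta, phi)
coef_true = np.array([2.0, 0.1, -0.05, 0.08, 0.3])   # invented target surface
f_target = B @ coef_true
W = np.ones_like(f_target)                           # uniform weights for the toy

def S(coef):
    """Analogue of eq. (AHsum): weighted sum of squared residuals over the points."""
    return np.sum(W * (B @ coef - f_target) ** 2)

res = minimize(S, np.zeros(5), method='Powell',
               options={'xtol': 1e-12, 'ftol': 1e-14})
assert np.allclose(res.x, coef_true, atol=1e-4)
```

The true problem differs only in the residual: $\Theta$ at each point depends on the coefficients through $f$, $\partial_i f$ and $\partial_i\partial_j f$ as described above.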
Once a minimum has been found we shift the center of the black hole $C^i$ according to the dipole moment (i.e.\ we choose $C^i$ so that the $\ell=1$ moments vanish). We then repeat the MOTS search until the $\ell=1$ moments stay below a predetermined maximum value. This procedure enables us to locate apparent horizons even when the initial guess is very poor (see next section) and should allow us to follow black holes that move through a numerical grid. \section{Tests} \label{sec:tests} \subsection{Schwarzschild} \label{sec:schwtest} An obvious test for the apparent-horizon finder is the Schwarzschild spacetime. Since the MOTS is spherically symmetric, it can be described with the monopole term alone. This test strongly demonstrated how well the shifting of the center $C^i$ according to the $\ell=1$ moments works. The code was able to locate the MOTS accurately even when the initial guess was completely disjoint from the true horizon. The code worked equally well when we located the black hole away from the origin of the coordinate system. In all cases the sum ${\cal S}$, as well as all expansion coefficients $F^{\ell m}$ with $\ell > 0$, vanished to whatever tolerance we specified. \subsection{Two black holes} \label{sec:2_bh} A spacetime containing two black holes has multiple MOTS, some of which can be highly distorted. Such a spacetime provides a much stronger test for the apparent-horizon finder. A metric for two time-symmetric black holes can be written in the conformally-flat form \begin{equation} ds^2 = \psi^4(dx^2 + dy^2 + dz^2), \end{equation} where the conformal factor $\psi$ is given by \begin{equation} \label{confact} \psi = 1 + \frac{M}{2r_1} + \frac{M}{2r_2} \end{equation} and $r_1$ and $r_2$ are \begin{eqnarray} r_1 & = & \left( x^2 + y^2 + (z + z_0)^2 \right)^{1/2} \nonumber\\ r_2 & = & \left( x^2 + y^2 + (z - z_0)^2 \right)^{1/2}. 
\end{eqnarray} Here $M$ is the mass of the individual black holes, and $z_0$ is their coordinate distance from the origin of the coordinate system. Note that the singularities in~(\ref{confact}) can be removed by adding matter sources (see Ref.~\cite{st92}). Since this is advantageous in a numerical application and does not change the external metric, we have implemented this form of the equations. The causal structure of this spacetime has been investigated in detail by Bishop\cite{bishop82}. The MOTS can be found by using a shooting method to solve a set of coupled differential equations to high accuracy. This provides us with a solution that we can check the STF-based apparent-horizon finder against. In the following we will refer to these solutions as the ``true horizons''. In general there will be MOTS around the individual holes. If the holes are close enough, a pair of encompassing MOTS will also appear (see Ref.~\cite{bishop82} for a careful discussion). According to \v{C}ade\v{z}~\cite{cadez74} these encompassing MOTS first appear at a separation of $z_0 = 0.765$. For separations close to the critical separation (i.e.\ $z_0 \lesssim 0.765$) they will be strongly distorted. Note that this situation is quite similar to what we expect in a binary black-hole evolution. Having that application in mind, it provides a strong test for our apparent-horizon finder and it can help us decide to which order $L$ we need to expand and at how many points $N_\Theta$ we need to evaluate the expansion $\Theta$ in order to accurately locate an encompassing MOTS. In Fig.~\ref{fig:location} we plot the estimated location of the MOTS based on expansions to order $L = 2$, $4$, $6$ and $8$ for the case $z_0 = 0.74$. (By symmetry, only even $L$ can contribute since $C^i\rightarrow0$). The ``true horizon'' is fairly distorted, causing the lower order expansions to perform very poorly.
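A limiting case can be checked in a few lines (illustrative only, not the authors' shooting code): for $z_0 \to 0$ the two punctures coincide and~(\ref{confact}) reduces to a single Schwarzschild hole of mass $m = 2M$ in isotropic coordinates, $\psi = 1 + m/2r$. Using the standard expression $\Theta(r) = 2/(r\psi^2) + 4\psi'/\psi^3$ for the expansion of a coordinate sphere in conformally flat, time-symmetric data (not derived in this paper), the root of $\Theta$ must sit at the known horizon radius $r = m/2$:

```python
# Locate the apparent horizon of isotropic Schwarzschild data by bisecting Theta(r).
m = 1.7                                  # arbitrary total mass

def psi(r):  return 1.0 + m / (2.0 * r)  # conformal factor
def dpsi(r): return -m / (2.0 * r ** 2)  # its radial derivative

def Theta(r):
    """Expansion of a coordinate sphere in conformally flat time-symmetric data."""
    return 2.0 / (r * psi(r) ** 2) + 4.0 * dpsi(r) / psi(r) ** 3

lo, hi = 0.1 * m, 10.0 * m               # Theta < 0 inside, > 0 outside
for _ in range(200):                     # plain bisection
    mid = 0.5 * (lo + hi)
    if Theta(mid) < 0.0:
        lo = mid
    else:
        hi = mid
r_ah = 0.5 * (lo + hi)
assert abs(r_ah - m / 2) < 1e-8          # horizon at r = m/2 in isotropic coordinates
```

This reproduces the analytic isotropic-coordinate horizon radius and mirrors the Schwarzschild test of \S~\ref{sec:schwtest}.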
\begin{figure} \epsfxsize=7in\epsffile{location.ps} \caption{The estimated location of the MOTS found with expansions taken to order $L = 2$, $4$, $6$ and $8$ for $z_0 = 0.74$ (using $N_\Theta = 25\times11$ points). The ``true location'' was found independently by solving a set of coupled differential equations (see Ref.~\protect\cite{st92}). The dot marks the coordinate center of one of the two black holes.} \label{fig:location} \end{figure} Note that all the lower order expansions find a location for the MOTS outside of the true horizon. This behavior could have severe consequences in numerical evolution codes that use ``apparent horizon boundary conditions''\cite{seidel_suen92} and that ignore the causally disconnected region inside a MOTS. If we were to use one of the lower-order MOTS solutions for this purpose we could be ignoring a region that is {\em not} causally disconnected. This test clearly demonstrates that high order expansion is absolutely necessary for the detection of highly distorted horizons. On the other hand it also demonstrates that high order expansion is very expensive: increasing the order of the expansion by 2 increases the CPU time by roughly a factor of 3, ranging from several seconds for $L = 2$ to several minutes for $L = 8$ (on a serial computer). Details will depend on the particular numerical implementation as well as parameters associated with a given minimization routine. However, since we are searching for minima in an $(L+1)^2$-dimensional space (see eq.~(\ref{flm})) the required CPU time will always be a steep function of $L$. Another important factor for both the accuracy and the CPU time is the number of points $N_\Theta$ at which the expansion $\Theta$ is evaluated. In Fig.~\ref{fig:errors} we show results for $z_0 = 0.6$ using different numbers of points $N_\Theta = n_{\theta} \times n_{\phi}$, where $n_{\theta}$ is the number of points in the $\theta$-direction and $n_{\phi}$ in the $\phi$-direction. 
Since the $F^{\ell m}$ up to order $L$ have $(L+1)^2$ independent components, we will need at least $(L+1)^2$ points. From Fig.~\ref{fig:errors} it is obvious that $8\times8$ points are not enough for an expansion to order $L=8$: the result is worse than for a lower order expansion. However, it can also be seen that increasing the number of points beyond this minimum can drastically increase the accuracy. \begin{figure} \begin{picture}(504,504) \put(0,268){\epsfxsize=3.5in\epsffile{error_8x8.ps}} \put(252,268){\epsfxsize=3.5in\epsffile{error_9x9.ps}} \put(0,16){\epsfxsize=3.5in\epsffile{error_15x15.ps}} \put(252,16){\epsfxsize=3.5in\epsffile{error_25x11.ps}} \end{picture} \caption{Relative errors $\Delta r / r$ as a function of $\cos \theta$ for expansion to order $L = 2$, $4$, $6$ and $8$ and for different values of $N_\Theta = n_{\theta} \times n_{\phi}$. ($z_0 = 0.6$).} \label{fig:errors} \end{figure} As a next test we plot in Fig.~\ref{fig:expansion} the integral (\ref{AHint}) as a function of $z_0$ for different expansion orders. For all these calculations we start with an initial guess close to where we expect a common MOTS. For $z_0 < 0.765$, when the two black holes have a common MOTS, we would therefore expect this sum to vanish, if we could resolve the MOTS arbitrarily well. For values of $z_0$ larger than $0.765$ the minimization routine will find a nonzero minimum -- however we expect these minima to go to zero as we approach $z_0 = 0.765$. \begin{figure} \epsfxsize=7in\epsffile{exp.ps} \caption{Integrals over the expansion $\Theta$ (eq.~(\protect\ref{AHint})) as a function of $z_0$ for expansions to order $L = 2$, $4$, $6$ and $8$ (using $N_\Theta = 25 \times 11$ points). For $z_0 < 0.765$, i.e. left of the vertical line, the two black holes have an encompassing MOTS.} \label{fig:expansion} \end{figure} Since we use an expansion to finite order we cannot resolve the MOTS arbitrarily well. 
This means that, while we will find a minimum, it will typically be different from zero even for $z_0 < 0.765$. The value will, again, depend on a number of parameters, but primarily on the order of the expansion $L$. This can be seen very clearly in Fig.~\ref{fig:expansion}. In particular, for the lower-order expansions the expected drop at $z_0 = 0.765$ cannot be detected at all -- a significant decrease can only be seen for $L = 8$. This demonstrates again that an early detection of a common MOTS will only be possible with a high order expansion. Also, this suggests that it is hardly possible to decide on the basis of the value of the sum ${\cal S}$ whether a MOTS has been found. As a better test, we suggest checking whether $\Theta$ is negative everywhere on a surface just inside the approximate MOTS. This surface ``just inside'' can be determined very easily by reducing the monopole term $F^{00}$ by a small fraction. \section{Summary} \label{sec:summary} We have developed an apparent-horizon finder based on a multipole expansion to arbitrary order $L$. The primary application we have in mind is the numerical evolution of a binary black-hole system. In order to check the performance of the MOTS finder in a spacetime of similar structure we have performed careful tests using initial data for two time-symmetric black holes. From these tests it is evident that a reliable search for highly distorted MOTS requires high order expansion. On the other hand, using a high order expansion is very expensive and it is questionable whether this will be affordable during a dynamical evolution. However, in the evolution of a binary black-hole system, for example, it is desirable to detect a common MOTS as early as possible since the region interior to this surface no longer needs to be evolved. It may, therefore, be worthwhile searching for this common MOTS using a high order expansion.
As a compromise, it is possible to use a low order expansion for nearly spherical MOTS (as will be the case for the MOTS around individual black holes or the common MOTS in the later phases of an evolution) and a high order expansion for highly distorted MOTS (as in the early phase of the common MOTS). Unfortunately, ``nearly spherical'' is a coordinate-dependent concept in this context, and it is not clear how well the coordinates will behave in a binary black-hole evolution code. An obvious optimization of the code is to change to a minimization scheme that uses derivatives. This is a fairly tedious, though straightforward, task. \acknowledgments We would like to thank Peter Anninos and Edward Seidel for helpful discussions. This work was supported by NSF Grants AST 91-19475 and PHY 94-08378, NASA Grant NAG-2809, and by the Grand Challenge grant NSF PHY 93-18152 / ASC 93-18152 (ARPA supplemented). Computations were performed at the Cornell Center for Theory and Simulation in Science and Engineering, which is supported in part by the National Science Foundation, IBM Corporation, New York State, and the Cornell Research Institute.
\section{Introduction} The star WASP-14 (\textit{V}\,=\,9.75 mag) was found to have a transiting planet by \citet{2009MNRAS.392.1532J}. The planet orbits the host star with a period of 2.24\,d causing transits with a depth of 11 millimag (mmag) and a duration of 2.8\,h. High-accuracy photometric and spectroscopic follow-up observations allowed one to determine the mass of the planet $M_\mathrm{P}\,=\,7.34\,\pm\,0.50\,M_{\mathrm{Jup}}$, the planet radius $R_\mathrm{P}\,=\,1.28\,\pm\,0.08\,R_{\mathrm{Jup}}$, and the orbital semimajor axis $a\,=\,0.036\,\pm\,0.001$\,au. These findings show that WASP-14\,b is one of the densest exoplanets with an orbital period of less than three days. The stellar density, the effective temperature, the rotation rate, and the high lithium-abundance indicate a very young age between 0.5 and 1.0\,Gyr. All system parameters known from literature are summarized in Table~\ref{Werte_WASP14}. \begin{table} \centering \caption{Physical and orbital properties of the WASP-14\,b system summarized from literature.} \label{Werte_WASP14} \begin{tabular}{cr@{\,$\pm$\,}lc} \hline \hline Parameter & \multicolumn{2}{c}{Value} & Ref \\ \hline Epoch zero transit time $T_{0}$ [d] & \multicolumn{2}{c}{2454963.93676} & [1] \\ & & 0.00025 & [1] \\ Orbital period $P$ [d] & 2.2437704 & 0.0000028 & [1] \\ Semimajor axis $a$ [au] & 0.036 & 0.001 & [2] \\ Inclination $i$ [$^{\circ}$] & 84.32 & 0.60 & [2] \\ Eccentricity $e$ & 0.087 & 0.002 & [3] \\ Argument of pericentre $\omega$ [$^{\circ}$] & -107.1 & 0.5 & [3] \\ Mass star $M_{\mathrm{A}}$ [M$_{\odot}$] & 1.211 & 0.125 & [2] \\ Radius star $R_{\mathrm{A}}$ [R$_{\odot}$] & 1.306 & 0.070 & [2] \\ Effective temperature $T_{\mathrm{eff}}$ [K] & 6475 & 100 & [2] \\ Surface gravity star log$\,g_{\mathrm{A}}$ & 4.29 & 0.04 & [2] \\ Metallicity $\left[ \frac{M}{H}\right] $ & 0.0 & 0.2 & [2] \\ Mass planet $M_{\mathrm{b}}$ [$M_{\mathrm{Jup}}$] & 7.341 & 0.500 & [2] \\ Radius planet $R_{\mathrm{b}}$ [$R_{\mathrm{Jup}}$] 
& 1.281 & 0.079 & [2] \\ Distance $d$ [pc] & 160 & 20 & [3] \\ Age [Gyr] & \multicolumn{2}{c}{$\sim$\,0.5\,-\,1.0} & [2] \\ Spectral type & \multicolumn{2}{c}{F5V} & [2] \\ \hline \hline \end{tabular} \\ References: [1] \citet{2009PASP..121.1104J}, [2] \citet{2009MNRAS.392.1532J}, and [3] \citet{2013ApJ...779....5B} \end{table} \\A very interesting feature of WASP-14\,b reported by \citet{2009MNRAS.392.1532J} and confirmed by \citet{2011MNRAS.413.2500H} is its high orbital eccentricity ($e=0.087\,\pm\,0.002$) for its small orbital distance. At this distance, tidal interactions with the star are expected to circularize the orbit of the transiting planet \citep{1977A&A....57..383Z}. The nearly circular orbits of most close-orbiting exoplanets agree with tidal circularization time-scales significantly shorter than the system age. \citet{2011MNRAS.413.2500H} determined a circularization time-scale for WASP-14\,b to be $5\times10^{7}$ yr, which is significantly shorter than its age. The high eccentricity of WASP-14\,b may thus indicate either a tidal circularization time-scale comparable or longer to the system age or the presence of an additional perturbing body in the system which may significantly delay the process of circularization \citep{2007MNRAS.382.1768M}. It is also possible that the planet may have arrived on its current orbit recently.\\ \citet{2009PASP..121.1104J} have found evidence for a spin--orbit-misalignment ($\lambda\,=\,-33.1^{\circ}\,\pm\,7.4^{\circ}$) in the WASP-14 system by measuring the Rossiter--McLaughlin effect.\\ WASP-14\,b belongs to the class of highly irradiated hot Jupiters. Observations of the thermal emission during three secondary eclipses with \textit{Spitzer} by \citet{2013ApJ...779....5B} gave first indications about the atmospheric composition and thermal structure of the planet. The observations neither indicate a temperature inversion nor an effective heat exchange between day and night side. 
The chemical composition is consistent with the solar abundances.\\ WASP-14\,b is a very massive transiting planet. Over 50\% of these massive hot Jupiters show a significant eccentricity, while intermediate-mass planets tend to be in circular orbits. Although this finding could be biased because the circularization time-scale depends on the planet mass \citep[][the circularization time-scale is longer for massive planets]{2007MNRAS.382.1768M}, it may suggest that the formation and evolution of these planets differ from the scenarios for lower mass planets. Therefore, the observation of WASP-14 is particularly interesting to constrain theories of planet formation, migration, and planet--star interaction.\\ Although the non-zero eccentricity and the spin--orbit-misalignment may indicate that there could be additional bodies in the system, WASP-14\,b has never been a target of photometric follow-up observations or observing campaigns dedicated to detecting and characterizing signals of transit timing variation (TTV). \citet{2012MNRAS.426.1291S} re-analysed the data sets of \citet{2009MNRAS.392.1532J} and \citet{2009PASP..121.1104J} and found a significantly different set of physical properties of WASP-14 compared to previous studies. For these reasons, WASP-14\,b was selected as a target of our TTV campaign. \section{Observations, data reduction and photometry} \begin{table*} \caption{Observatories and instruments which observed transits of WASP-14.} \label{CCD_Kameras} \begin{tabular}{cccccccc} \hline \hline Observatory & Long. (E) & Lat.
(N) & Elevation & Telescope $\diameter$ & Camera & \# Pixel & Pixel scale \\ & ($^{\circ}$) & ($^{\circ}$) & (m) & (m) & & & ($''$/pixel) \\ \hline University Observatory Jena & 11.48 & 50.93 & 370 & 0.25 & CTK$^{a}$ & 1024\,x\,1024 & 2.23 \\ & & & & 0.60 & STK$^{b}$ & 2048\,x\,2048 & 1.55 \\ & & & & 0.25 & CTK-II & 1056\,x\,1027 & 1.19 \\ Calar Alto Observatory & 357.45 & 37.22 & 2168 & 2.20 & CAFOS$^{c}$ & 2048\,x\,2048 & 0.47 \\ Star\'{a} Lesn\'{a} Observatory & 20.29 & 49.15 & 785 & 0.50 & ST10XE & 2184\,x\,1472 & 0.56 \\ Observatorio de Sierra Nevada & 356.62 & 30.06 & 2896 & 1.50 & VersArray: & & \\ (OSN) & & & & & 2048B & 2048\,x\,2048 & 0.23 \\ T\"{U}BITAK National Observatory & 30.34 & 36.82 & 2485 & 1.00 & SI 1100 Cryo & 4096\,x\,4097 & 0.31 \\ \hline \hline \end{tabular} \\ $^{a}$\citet{2009AN....330..419M}, $^{b}$\citet{2010AN....331..449M}, $^{c}$\citet{1994S&W....33..516M} \end{table*} \begin{table*} \caption{Summary of the WASP-14\,b observations in the period from 2009 April to 2013 April: $N_{\mathrm{exp}}$ -- number of exposures, $T_{\mathrm{exp}}$ -- exposure times, $\Gamma$ -- median number of exposures per minute, pnr -- photometric noise rate.} \label{Beobachtungslog_WASP14} \begin{tabular}{lcccccccc} \hline \hline Date & Epoch$^{a}$ & Observatory$^{b}$ & Filter & $N_{\mathrm{exp}}$ & $T_{\mathrm{exp}}$ (s) & $rms$ (mmag) & $\Gamma$ & pnr \\\hline 2009 Apr. 01 & 205 & Jena-CTK & \textit{I} &324 & 60, 50, 40 & 5.16 & 0.87 & 5.54 \\ 2009 Apr. 19 & 213 & Jena-CTK & \textit{I} &320 & 30 & 5.16 & 1.02 & 5.11 \\ 2009 May 07 & 221 & Jena-CTK & \textit{R} & 320 & 25, 20 & 10.03 & 1.22 & 9.08 \\ 2011 Feb. 12 & 509 & OSN & \textit{R} & 1215 & 10 & 3.56 & 0.71$^{c}$ & 4.23 \\ 2011 Mar. 02 & 517 & Jena-STK* & \textit{R} & 395 & 30 & 3.27 & 1.39 & 2.77 \\ & & Jena-CTK-II* & \textit{R} & 366 & 40 & 5.00 & 1.39 & 4.24 \\ & & Calar Alto* & \textit{B} &144 & 30 & 6.07 & 1.09 & 5.83 \\ 2011 Mar. 
11 & 521 & Jena-STK & \textit{R} & 330 & 45 & 2.46 & 1.10 & 2.34 \\ 2011 Mar. 20 & 525 & Jena-STK & \textit{R} & 360 & 30 & 1.87 & 1.39 & 1.59 \\ & & Jena-CTK-II & \textit{V} & 535 & 30, 25 & 4.65 & 2.17 & 3.16 \\ & & Calar Alto & \textit{B} &322 & 30 & 10.36 & 1.10 & 9.87 \\ 2011 Mar. 29 & 529 & Jena-STK & \textit{R} & 401 & 30 & 1.25 & 1.39 & 1.06 \\ & & Star\'{a} Lesn\'{a} & \textit{R} & 696 & 30 & 3.19 & 3.02 & 1.84 \\ 2011 Apr. 07 & 533 & OSN & \textit{R} & 570 & 30 & 4.10 & 0.34$^{c}$ & 7.08 \\ 2011 Apr. 16 & 537 & Calar Alto & \textit{V} & 404 & 25 & 9.99 & 1.20 & 9.13 \\ 2012 Feb. 06 & 669 & Jena-STK* & \textit{R} & 310 & 30 & 4.21 & 1.39 & 3.57 \\ & & Jena-CTK-II* & \textit{V} & 208 & 60 & 6.58 & 0.95 & 6.75 \\ 2012 Feb. 24 & 677 & Calar Alto & \textit{V} & 350 & 30 & 0.90 & 0.39$^{c}$ & 1.44 \\ 2013 Apr. 30 & 869 & T\"{U}BITAK & \textit{R} & 294 & 30 & 3.34 & 0.75 & 3.85 \\ \hline \hline \end{tabular} \\ $^{\ast}$partial transit or gaps in the LCs. \\ $^{a}$calculated using the ephemeris in \citet{2009MNRAS.392.1532J}.\\ $^{b}$for a description see Table~\ref{CCD_Kameras}. For the University Observatory Jena also the used camera is listed to avoid confusion. \\ $^{c}$for the final binned LC. \end{table*} First observations of WASP-14 were already carried out in 2009 at the University Observatory Jena. Since 2011 February WASP-14\,b has been a target of our TTV campaign. Altogether we collected 19 light curves (LCs) of 13 individual transit events. We used six telescopes (with apertures ranging from 0.25\,--\,2.2\,m) located in five observatories distributed in Europe and Asia. The participating observatories with their telescopes and instruments are summarized in Table~\ref{CCD_Kameras}.\\ The photometric data were reduced by the standard procedures including subtraction of bias and dark and dividing by a sky flat-field. 
We calibrated the CCD images using the \begin{scriptsize}IRAF\end{scriptsize}\footnote{\begin{scriptsize}IRAF\end{scriptsize} is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} routines \textit{darkcombine}, \textit{flatcombine}, and \textit{ccdproc}.\\ Aperture photometry was performed with a modified version of the standard \begin{scriptsize}IRAF\end{scriptsize} routine \textit{phot}. Differential magnitudes were calculated using the method of the optimized artificial comparison star developed by \citet{2005AN....326..134B}. All available field stars are combined into an optimized artificial comparison star by taking a weighted average. Very faint or variable stars get a low weight, while bright and constant stars dominate the artificial comparison star. The final LC is produced by comparing WASP-14 with this artificial standard star. \\ Ten different aperture radii were tested. The aperture that produced LCs with the smallest scatter (smallest root mean square, rms) was finally chosen. \\ In preparation for the LC analysis, we fitted the LCs with the \begin{scriptsize}JKTEBOP\end{scriptsize} code \citep{2004MNRAS.349..547S,2004MNRAS.351.1277S}, which is based on the \begin{scriptsize}EBOP\end{scriptsize} program \citep{1981psbs.conf..111E,1981AJ.....86..102P}. It allows us to remove photometric trends (caused by a colour difference between target and comparison star, differential extinction, or changes in airmass during the observations) by fitting polynomials of up to fifth order simultaneously with the transit modelling. Throughout this work the LCs were detrended by fitting a second-order polynomial. \\ Finally, the differential magnitudes were transformed into fluxes and divided by the average out-of-transit value in order to normalize the LCs.
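The optimized artificial comparison star lends itself to a compact numerical sketch. The inverse-variance weighting below is an illustrative assumption standing in for the weighting scheme of \citet{2005AN....326..134B}: bright, constant stars dominate the average, while noisy or variable stars are suppressed.

```python
import numpy as np

def artificial_comparison(fluxes):
    """Combine field-star light curves (n_stars x n_frames) into one artificial
    comparison star.  Inverse-variance weighting is an illustrative choice:
    constant stars dominate, noisy or variable stars get a low weight."""
    norm = fluxes / np.median(fluxes, axis=1, keepdims=True)  # normalize each star
    weights = 1.0 / np.var(norm, axis=1)
    weights /= weights.sum()
    return weights @ norm, weights                            # weighted average per frame

def differential_lc(target_flux, comparison):
    """Normalized differential light curve of the target."""
    lc = target_flux / comparison
    return lc / np.median(lc)

# toy field: two well-behaved comparison stars and one very noisy one
rng = np.random.default_rng(1)
stars = np.vstack([1.0 + 0.002 * rng.standard_normal(200),
                   1.0 + 0.002 * rng.standard_normal(200),
                   1.0 + 0.050 * rng.standard_normal(200)])
comp, w = artificial_comparison(stars)
lc = differential_lc(stars[0], comp)
```

In this toy example, the noisy third star receives a negligible weight, so the artificial standard star is effectively built from the two constant stars.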
All 19 LCs of WASP-14 are shown at the end of the paper. To give an estimate of the varying quality of the observed LCs, we computed the photometric noise rate \citep[pnr;][]{2011AJ....142...84F}. The pnr is calculated by dividing the rms of each LC, which is a result of the LC fitting with \begin{scriptsize}JKTEBOP\end{scriptsize}, by the square root of the median number of exposures per minute ($\Gamma$), \begin{equation} \mathrm{pnr}=\frac{\mathrm{rms}}{\sqrt{\Gamma}}. \end{equation} A summary of all observations is given in Table~\ref{Beobachtungslog_WASP14}. \subsection{University Observatory Jena photometry} Most observations were carried out at the University Observatory Jena, which is located close to the village Gro{\ss}schwabhausen, 10\,km west of the city of Jena. There we have three telescopes (90, 25, and 20\,cm) on the same mount, each equipped with an optical CCD camera. For our transit observations we used the 90\,cm Schmidt telescope (60\,cm free aperture in Schmidt-mode) with the CCD-camera STK \citep{2010AN....331..449M} and the 25\,cm Cassegrain auxiliary telescope with the CCD-camera CTK \citep{2009AN....330..419M}. In 2010 August, the CTK was replaced with a new CCD camera called CTK-II (\textit{Cassegrain Teleskop Kamera-II}). The properties of the camera are given in Table~\ref{CTKII}. \\ Between 2009 March and 2012 March, we observed 11 LCs of eight individual transits of WASP-14 at the University Observatory Jena. Six of these eight transits are fully covered. The remaining two transits show gaps in the data due to passing clouds. For the transit from 2012 February 6, only a part of the flat bottom is missing, so that the shape of the transit is still identifiable. The situation is different for the transit from 2011 March 2. In this case the whole egress was lost due to clouds, which makes it difficult to determine a precise transit time. \\ Observations were performed in Johnson $V$, Cousins $R$, or $I$ filters.
The optics were defocused slightly for the observations with the STK. The exposure times varied from 20 to 60\,s with the 25\,cm Cassegrain telescope and from 30 to 45\,s with the 90/60\,cm Schmidt telescope, depending on weather conditions, airmass, and telescope defocusing. \begin{table} \caption{Properties of the CTK-II camera} \label{CTKII} \begin{tabular}{lr}\hline \hline Parameter & Value \\ \hline Detector: & E2V CCD47-10 \\ Pixel: & 1056\,$\times$\,1027 \\ Pixel scale: & (1.1956 $\pm$ 0.0001)\,arcsec\,pixel$^{-1}$ \\ Field of view: & 21.0\,arcmin\,$\times$\,20.4\,arcmin \\ Filter: & Bessell $B, V, R, I$, Gunn $z$ \\ Focus: & Cassegrain \\ \hline \hline \end{tabular} \end{table} \subsection{TTV$@$YETI photometry} The observation of a large number of individual transits of a planet in front of its host star by a single observatory is difficult. Due to the weather conditions in Central Europe and the orbital period of the transiting planet, it is almost impossible to observe consecutive transits.\\ In 2009, we launched an international observation campaign dedicated to the detection and characterization of TTVs for carefully selected transiting planets. The programme is realized through observations with globally distributed telescopes at different longitudes and is based on cooperation within the framework of the `Young Exoplanet Transit Initiative' \citep[YETI,][]{2011AN....332..547N}. A description of `TTV$@$YETI' is available at the website of the project\footnote{http://www.home.umk.pl/~gmac/TTV}. \subsubsection{Calar Alto 2.2\,m telescope} Between 2011 February and April, we were awarded 2.5 nights (5\,x\,0.5 nights, project F11-2.2-007) with the Calar Alto Faint Object Spectrograph (CAFOS) at the Calar Alto Observatory. Due to extremely bad weather in winter 2011, two out of five transits were lost completely and one was observed only partially. The remaining two events could be observed, but under poor conditions such as thin clouds, fog, and full moon.
Therefore, the data have a relatively low quality, which is not sufficient for measuring precise transit times. One additional LC was observed as a back-up for project F12-2.2-009 on 2012 February 24.\\ For the observations we used CAFOS in imaging mode and 2\,$\times$\,2 binning. We windowed the field of view to $7.9$\,arcmin\,$\times$\,$5.2$\,arcmin to shorten the read-out time. The observed field included WASP-14 and three suitable comparison stars. Because WASP-14 is a relatively bright star, the observations were carried out in the Johnson $B$ or $V$ band. It turned out that the best LCs could be observed in the $V$ band. To minimize random and flat-fielding errors, we defocused the telescope significantly. The telescope was autoguided during the observations. Depending on the atmospheric transparency and telescope defocusing, we used exposure times of 25 or 30\,s. For the observations from 2012 February 24, we combined three measurements into one data point to obtain a binned light curve. \subsubsection{Observatorio de Sierra Nevada} Two complete transit LCs (2011 February 12 and 2011 April 7) were obtained at the Observatorio de Sierra Nevada using the 1.5\,m reflector. With a VersArray:2048B CCD camera with 2048\,$\times$\,2048 pixels and a pixel scale of 0.23\,arcsec\,pixel$^{-1}$, we could observe a field of view of $7.85$\,arcmin\,$\times$\,$7.85$\,arcmin. The exposure times of the $R$-band observations were chosen between 10 and 30\,s. While the first LC was obtained under good weather conditions, the second transit suffered from passing clouds in the first half of the observation. To reduce the scatter, we binned the LCs by averaging every five data points. \subsubsection{T\"{U}BITAK National Observatory} The transit that occurred on 2013 April 30 was observed using the Spectral Instruments SI1100 series 4096$\times$4096 CCD camera mounted on the 1.0\,m telescope (T100) at the T\"{U}BITAK National Observatory (TUG) in Turkey.
The images were acquired in the $R$ band with exposure times of 15\,--\,30\,s under good weather conditions. \subsubsection{Star\'{a} Lesn\'{a} Observatory} One additional LC of WASP-14 was observed at the Star\'{a} Lesn\'{a} Observatory in Slovakia. The observation on the night of 2011 March 29 was performed with a 50\,cm Newtonian telescope equipped with an SBIG ST10 MXE CCD camera. The CCD-chip consists of 2184\,$\times$\,1472 6.8\,$\mathrm{\mu m}$ pixels and has a pixel scale of 0.56\,arcsec\,pixel$^{-1}$, corresponding to a field of view of about 24\,$\times$\,16\,arcmin. The LC was obtained in the Cousins $R$ band with an exposure time of 20\,s. \\ Unlike all other observations used in this study, the data reduction and photometry for this transit were already carried out at the Star\'{a} Lesn\'{a} Observatory. The standard correction procedure (bias, dark, and flat-field correction) and the subsequent aperture photometry were performed with the \begin{scriptsize}C-MUNIPACK\end{scriptsize} package\footnote{http://c-munipack.sourceforge.net/}. Since the \begin{scriptsize}C-MUNIPACK\end{scriptsize} package is, like \begin{scriptsize}IRAF\end{scriptsize}, based on the \begin{scriptsize}DAOPHOT\end{scriptsize} program \citep{1987PASP...99..191S}, the results of both routines are comparable. To generate an artificial comparison star, at least 20\,--\,30\,\% of the stars with the lowest LC scatter were selected iteratively from the field stars brighter than 2.5\,--\,3\,mag below the saturation level. To measure instrumental magnitudes, various aperture radii were used. The aperture that produced the LC with the smallest scatter was used to generate the final LC. Due to problems with the flat-field and because the observations were acquired without autoguiding, the LC contains correlated noise.
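The photometric noise rate defined above, which is used in Table~\ref{Beobachtungslog_WASP14} to rank the light curves, is simple to evaluate. A minimal sketch follows; here the rms is taken directly from the scatter of synthetic residuals, whereas in the paper it comes from the \begin{scriptsize}JKTEBOP\end{scriptsize} fit.

```python
import numpy as np

def photometric_noise_rate(residuals_mag, exposures_per_min):
    """pnr = rms / sqrt(Gamma): rms of the LC residuals in mmag divided by
    the square root of the median number of exposures per minute, Gamma."""
    rms = np.std(residuals_mag) * 1e3          # mag -> mmag
    return rms / np.sqrt(exposures_per_min)

# toy example: ~3 mmag scatter at one exposure every 43 s (Gamma ~ 1.4 per min)
rng = np.random.default_rng(0)
residuals = 3e-3 * rng.standard_normal(300)
pnr = photometric_noise_rate(residuals, 60.0 / 43.0)
```

A lower pnr means a light curve constrains the transit shape (and hence the mid-transit time) better at a given cadence.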
\section{Light-curve analysis} \label{lc_analysis} \begin{table} \centering \caption{System parameters resulting from the simultaneous wavelet-based red noise MCMC analysis of the four best-quality LCs.} \label{tbl:TAPmcmc1} \renewcommand{\arraystretch}{1.1} \begin{tabular}{lc} \hline \hline Parameter & Value \\ \hline Period (d) & 2.2437704*\\ Inclination ($^{\circ}$) & 85.3 $^{+1.8}_{-1.1}$\\ $a$/$R_{\mathrm{A}}$ & 5.98 $^{+0.42}_{-0.32}$\\ $R_{\mathrm{b}}$/$R_{\mathrm{A}}$ & 0.0965 $^{+0.0021}_{-0.0027}$\\ Linear LD (\textit{R} band) & 0.39 $^{+0.29}_{-0.24}$ \\ Quad LD (\textit{R} band) & 0.18 $^{+0.38}_{-0.45}$ \\ Linear LD (\textit{V} band) & 0.53 $^{+0.28}_{-0.30}$\\ Quad LD (\textit{V} band) & 0.12 $^{+0.41}_{-0.40}$\\ Eccentricity & 0.087*\\ \hline \hline \end{tabular} \\ $^{\ast}$Value fixed in MCMC analysis. \end{table} To refine the parameters of the system and to determine precise mid-transit times, it is necessary to model the individual LCs. For this purpose, we used the Transit Analysis Package\footnote{http://ifa.hawaii.edu/users/zgazak/IfA/TAP.html} \citep[\begin{scriptsize}TAP\end{scriptsize} v2.1;][]{2012AdAst2012E..30G}, which employs Markov Chain Monte Carlo (MCMC) techniques to fit transit LCs using the \citet{2002ApJ...580L.171M} model. To calculate the model, \begin{scriptsize}TAP\end{scriptsize} uses the fast exoplanetary fitting code \begin{scriptsize}EXOFAST\end{scriptsize}, which was developed by \citet{2013PASP..125...83E}. \begin{scriptsize}TAP\end{scriptsize} has some major advantages. First, it incorporates the wavelet-based technique of \citet{2009ApJ...704...51C}, which allows one to estimate more robust parameter uncertainties than classic $\chi^{2}$ methods because it parameterizes uncorrelated as well as time-correlated noise. Secondly, it is able to simultaneously analyse multiple transits observed in different conditions (instrument, filter, weather, etc.).
Finally, the \begin{scriptsize}TAP\end{scriptsize} code employs the quadratic limb darkening (LD) law, which is a better choice to represent the observations than a simple linear law, as shown, for example, in \citet{2014MNRAS.444.1351R}. The theoretical LD coefficients that were used as initial values for the fitting were bilinearly interpolated (in effective temperature and surface gravity) from the tables by \citet{2000A&A...363.1081C} using the stellar parameters in Table~\ref{Werte_WASP14}. \\ The LC analysis was carried out as explained in \citet[][a]{2013A&A...551A.108M} and \citet[][b]{2013AJ....146..147M}. We selected the best-quality LCs sorted by their pnr for a simultaneous fit using 10 MCMC chains with $10^{5}$ steps each. The orbital inclination $i$, the semimajor axis scaled by the stellar radius $\frac{a}{R_{\mathrm{A}}}$, and the planet-to-star radius ratio $\frac{R_{\mathrm{b}}}{R_{\mathrm{A}}}$ were linked together for all LCs. We also linked the LD coefficients $u$ and $v$, but only for the LCs that were observed in the same filter. The orbital period $P$, the eccentricity $e$, and the argument of periastron $\omega$ were kept fixed, while the mid-transit times $T_{\mathrm{c}}$, the airmass slopes, and the flux offsets were allowed to vary separately. The selection of the best LCs was done iteratively. The fitting procedure started with a few LCs with small pnr and was repeated several times after adding more LCs with higher pnr. The final selection consists of the four best LCs with a pnr\,$<$\,2\,mmag: three LCs in the \textit{R} band and one in the \textit{V} band. Including transits with a higher pnr degraded the quality of the fit. A phase-folded LC of these four transits, including the best-fitting model that corresponds to the resulting system parameters given in Table~\ref{tbl:TAPmcmc1}, is shown in Fig.~\ref{alltransit_phased_lc}.
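The bilinear interpolation of the theoretical LD coefficients in ($T_{\mathrm{eff}}$, $\log g$) can be sketched as follows. The grid nodes, coefficient values, and stellar parameters below are placeholders chosen for illustration, not the actual entries of the \citet{2000A&A...363.1081C} tables or of Table~\ref{Werte_WASP14}.

```python
import numpy as np

def bilinear(x, y, x_grid, y_grid, table):
    """Bilinearly interpolate table[i, j], tabulated at x_grid[i], y_grid[j],
    to the point (x, y) lying inside the grid."""
    i = np.searchsorted(x_grid, x) - 1
    j = np.searchsorted(y_grid, y) - 1
    x0, x1 = x_grid[i], x_grid[i + 1]
    y0, y1 = y_grid[j], y_grid[j + 1]
    tx = (x - x0) / (x1 - x0)          # fractional position in x
    ty = (y - y0) / (y1 - y0)          # fractional position in y
    return ((1 - tx) * (1 - ty) * table[i, j] + tx * (1 - ty) * table[i + 1, j]
            + (1 - tx) * ty * table[i, j + 1] + tx * ty * table[i + 1, j + 1])

# placeholder grid in (Teff, log g); the node values are illustrative only
teff_grid = np.array([6250.0, 6500.0])
logg_grid = np.array([4.0, 4.5])
u_table = np.array([[0.40, 0.38],      # linear LD coefficient u at the nodes
                    [0.37, 0.35]])
u = bilinear(6475.0, 4.3, teff_grid, logg_grid, u_table)
```

The interpolated coefficient then serves as the initial value (with a broad prior) in the MCMC fit.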
\begin{figure} \centering \includegraphics[width=0.33\textwidth,angle=270]{Transit_Template.eps} \caption{Phase-folded LC of the four best LCs with a pnr\,$<$\,2\,mmag that were used to create the template shown as a solid black line. The dark grey points show the same fluxes binned in phase, with a bin size of 0.005\,d.} \label{alltransit_phased_lc} \end{figure} \section{Physical properties} The results given in Table~\ref{tbl:TAPmcmc1} allow us to calculate stellar, planetary, and geometrical parameters. The mean stellar density $\rho_{\mathrm{A}}$ can be derived directly from the parameters obtained from the light-curve modelling using \begin{equation} \label{density} \rho_{\mathrm{A}}=\frac{3\mathrm{\pi}}{GP^{2}}\left( \frac{a}{R_{\mathrm{A}}}\right)^{3} \end{equation} \citep{2010exop.book...55W}, where $G$ is the gravitational constant. To determine the stellar mass, we used the $T_{\mathrm{eff}}$ from Table~\ref{Werte_WASP14} and the stellar density to place WASP-14 in a modified version of the Hertzsprung--Russell diagram (HRD), together with PARSEC isochrones \citep[version 1.2S;][]{2012MNRAS.427..127B}. The result for $\left[ \frac{M}{H}\right]=0.0$ is shown in Fig.~\ref{HRD}. WASP-14 is in an area of the HRD with overlapping isochrones of very young ($\sim$\,20\,Myr) and young ($\sim$\,1\,Gyr) ages. Since \citet{2009MNRAS.392.1532J} estimated an age range of 0.5\,--\,1.0\,Gyr based on the lithium abundance, the rotation velocity, and a comparison with the models of \citet{2007ApJ...659.1661F}, we excluded the very young ages. Taking also the uncertainty in the metallicity into account, the stellar evolutionary models yielded a stellar mass of $M_{\mathrm{A}}=1.30\pm0.06\,\mathrm{M_{\odot}}$, which is consistent with the mass given in \citet{2009MNRAS.392.1532J}.
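Equation~(\ref{density}) translates directly into a few lines of code; plugging in the fitted $P$ and $a/R_{\mathrm{A}}$ from Table~\ref{tbl:TAPmcmc1} recovers the mean stellar density listed in Table~\ref{phys_prop_WASP14} (SI constants assumed):

```python
import math

G = 6.674e-11                  # gravitational constant (m^3 kg^-1 s^-2)
M_sun, R_sun = 1.989e30, 6.957e8

P = 2.2437704 * 86400.0        # orbital period (s)
a_over_RA = 5.98               # scaled semimajor axis from the LC fit

# equation (2): rho_A = 3 pi / (G P^2) * (a / R_A)^3
rho_A = 3.0 * math.pi / (G * P**2) * a_over_RA**3        # kg m^-3
rho_sun = M_sun / (4.0 / 3.0 * math.pi * R_sun**3)
rho_rel = rho_A / rho_sun      # ~0.57 solar densities
```

Note that only quantities measured from the light curve enter, which is what makes $\rho_{\mathrm{A}}$ such a useful anchor for the isochrone comparison.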
The improved period (see Section~\ref{Transit_times}), the stellar mass, the orbital inclination $i$, the eccentricity $e$, and the amplitude of the star's radial velocity $K_{\mathrm{A}}$ taken from \citet{2013ApJ...779....5B} allowed us to redetermine the planetary mass $M_{\mathrm{b}}$. To calculate the semimajor axis $a$, we inserted the orbital period and the masses of star and planet into Kepler's third law. By resolving $a$/$R_{\mathrm{A}}$ and $R_{\mathrm{b}}$/$R_{\mathrm{A}}$ using the already determined value of $a$, we deduced values for $R_{\mathrm{A}}$ and $R_{\mathrm{b}}$. The planetary radius as well as the mass were used to calculate the planetary density $\rho_{\mathrm{b}}$. The impact parameter $b$, which depends on the eccentricity $e$, was calculated using \begin{equation} \label{Impakt_exz} b=\frac{a\,\mathrm{cos}\,i}{R_{\mathrm{A}}}\frac{1-e^{2}}{1+e\,\mathrm{cos}\,\omega}. \end{equation} The surface gravities of star and planet, $g_{\mathrm{A}}$ and $g_{\mathrm{b}}$, were calculated using the formulae \begin{equation} \label{gb} g_{\mathrm{b}}=\left( \frac{2\mathrm{\pi}}{P}\right) \frac{(1-e^{2})^{1/2}}{\mathrm{sin}\,i}\frac{a^{2}K_{\mathrm{A}}}{R_{\mathrm{b}}^{2}} \end{equation} and \begin{equation} \label{gA} g_{\mathrm{A}}=\left( \frac{2\mathrm{\pi}}{P}\right) \frac{(1-e^{2})^{1/2}}{\mathrm{sin}\,i}\frac{a^{2}K_{\mathrm{b}}}{R_{\mathrm{A}}^{2}} \end{equation} with \begin{equation} \label{Kb} K_{\mathrm{b}}=\frac{2\mathrm{\pi} a M_{\mathrm{A}}\mathrm{sin}\,i}{(M_{\mathrm{A}}+M_{\mathrm{b}})P\sqrt{1-e^{2}}} \end{equation} \citep{2009MNRAS.394..272S}, where $K_{\mathrm{A}}$ and $K_{\mathrm{b}}$ are the amplitudes of the star's and planet's radial velocity, respectively.
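The chain of derived quantities described above can be checked numerically. The sketch below reproduces $a$, $R_{\mathrm{A}}$, $R_{\mathrm{b}}$, and the Safronov number of Table~\ref{phys_prop_WASP14} from the fitted values (SI constants assumed):

```python
import math

G = 6.674e-11                          # m^3 kg^-1 s^-2
M_sun, R_sun = 1.989e30, 6.957e8
M_jup, R_jup = 1.898e27, 7.1492e7
au = 1.496e11

P = 2.2437655 * 86400.0                # refined orbital period (s)
M_A = 1.30 * M_sun                     # stellar mass from the isochrones
M_b = 7.59 * M_jup                     # redetermined planetary mass
a_over_RA, Rb_over_RA = 5.98, 0.0965   # from the LC fit

# Kepler's third law: a^3 = G (M_A + M_b) P^2 / (4 pi^2)
a = (G * (M_A + M_b) * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
R_A = a / a_over_RA                    # resolving a/R_A for the stellar radius
R_b = Rb_over_RA * R_A                 # ... and the planetary radius

# Safronov number: Theta = (1/2)(v_esc/v_orb)^2 = (a/R_b)(M_b/M_A)
theta = (a / R_b) * (M_b / M_A)
```

The results ($a\simeq0.037$\,au, $R_{\mathrm{A}}\simeq1.32\,R_{\odot}$, $R_{\mathrm{b}}\simeq1.24\,R_{\mathrm{Jup}}$, $\Theta\simeq0.345$) match the tabulated values to within rounding.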
In addition, we calculated the equilibrium temperature of the planet $T_{\mathrm{eq}}$ \citep[assuming a Bond albedo\,=\,0 and little energy redistribution across the surface of the planet;][]{2007ApJ...671..861H} and the Safronov number $\Theta$ \citep{1972epcf.book.....S}, half the square of the ratio of the escape velocity of the planet $v_{\mathrm{esc}}$ to its orbital velocity $v_{\mathrm{orb}}$: \begin{equation} \label{Safronov} \Theta=\frac{1}{2}\left(\frac{v_{\mathrm{esc}}}{v_{\mathrm{orb}}}\right)^{2}=\frac{a}{R_{\mathrm{b}}}\frac{M_{\mathrm{b}}}{M_{\mathrm{A}}}. \end{equation} The results of the calculations are summarized in Table~\ref{phys_prop_WASP14}. Our results are consistent with the values in \citet{2009MNRAS.392.1532J} but have larger error bars. This is not surprising, given the quality of our ground-based LCs. The physical properties given in \citet{2012MNRAS.426.1291S}, however, are significantly different from our findings. The reason for this discrepancy lies in the interpretation of the results of the LC analysis. \citet{2012MNRAS.426.1291S} modelled two available LCs and obtained a set of solutions (for different LD laws and for fitting either one or two LD coefficients), which was averaged into a final solution. Since both LCs yielded different results, the final averaged properties deviate from the previously published values. \begin{figure} \centering \includegraphics[width=0.33\textwidth,angle=270]{Wasp14_Z0.0147.ps} \caption{Position of WASP-14 in the $\rho_{\mathrm{A}}^{-1/3}$--$T_{\mathrm{eff}}$ plane. The PARSEC isochrones of solar metallicity for log(age)\,=\,7.24\,--\,7.30 with steps of 0.01 and log(age)\,=\,8.95\,--\,9.40 with steps of 0.05 for the very young age and the young age, respectively, are also shown.} \label{HRD} \end{figure} \begin{table} \centering \caption{Physical properties of the WASP-14 system derived from LC modelling.
Values derived by \citet[][J09]{2009MNRAS.392.1532J} and \citet[][S12]{2012MNRAS.426.1291S} are given for comparison.} \label{phys_prop_WASP14} \renewcommand{\arraystretch}{1.1} \begin{tabular}{lr@{\,$\pm$\,}lr@{\,$\pm$\,}lr@{\,$\pm$\,}l} \hline \hline Parameter & \multicolumn{2}{c}{This work} & \multicolumn{2}{c}{J09} & \multicolumn{2}{c}{S12} \\ \hline \hline & \multicolumn{6}{c}{Planetary parameters} \\ \hline $R_{\mathrm{b}}$ [R$_{\mathrm{Jup}}$] & 1.240 & $^{0.116}_{0.103}$ & 1.281 & $^{0.075}_{0.082}$ & 1.633 & 0.092 \\ $M_{\mathrm{b}}$ [M$_{\mathrm{Jup}}$] & 7.59 & $^{0.24}_{0.23}$ & 7.34 & 0.50 & 7.90 & 0.46 \\ $\rho_{\mathrm{b}}$ [$\mathrm{\rho}_{\mathrm{Jup}}$] & 3.73 & $^{1.05}_{0.93}$ & 3.50 & $^{0.64}_{0.50}$ & 1.69 & 0.25 \\ log\,$g_{\mathrm{b}}$ & 4.090 & $^{0.080}_{0.071}$ & 4.010 & $^{0.049}_{0.042}$ & 3.866 & 0.042 \\ $T_{\mathrm{eq}}$ [K] & 1872 & $^{29}_{29}$ & 1866 & $^{37}_{42}$ & 2090 & 59 \\ $\Theta$ & 0.345 & $^{0.037}_{0.037}$ & \multicolumn{2}{c}{} & 0.265 & 0.015 \\ \hline & \multicolumn{6}{c}{Stellar parameters} \\ \hline $R_{\mathrm{A}}$ [R$_{\mathrm{\odot}}$] & 1.318 & $^{0.095}_{0.073}$ & 1.306 & $^{0.066}_{0.073}$ & 1.666 & 0.097 \\ $M_{\mathrm{A}}$ [M$_{\mathrm{\odot}}$] & 1.300 & 0.060 & 1.211 & $^{0.127}_{0.122}$ & 1.350 & 0.120 \\ $\rho_{\mathrm{A}}$ [$\mathrm{\rho}_{\mathrm{\odot}}$] & 0.570 & $^{0.120}_{0.092}$ & 0.542 & $^{0.079}_{0.060}$ & 0.293 & 0.042 \\ log\,$g_{\mathrm{A}}$ & 4.312 & $^{0.061}_{0.047}$ & 4.287 & $^{0.043}_{0.038}$ & 4.126 & 0.042 \\ $L_{\mathrm{A}}$ [L$_{\mathrm{\odot}}$] & 0.435 & 0.085 & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} \\\hline & \multicolumn{6}{c}{Geometrical parameters} \\ \hline $a$ [au] & 0.037 & 0.001 & 0.036 & 0.001 & 0.037 & 0.001 \\ $i$ [$^{\circ}$] & 85.30 & $^{+1.80}_{-1.10}$ & 84.32 & $^{0.67}_{0.57}$ & 81.1 & 1.5 \\ $b$ & 0.499 & $^{0.194}_{0.120}$ & 0.535 & $^{0.031}_{0.041}$ & 0.752 & 0.133* \\ \hline \hline \end{tabular} \\ $^{\ast}$This parameter is not given in 
\citet{2012MNRAS.426.1291S} but was calculated from $R_{\mathrm{A}}$, $a$, and $i$. \end{table} \section{Transit timing} \label{Transit_times} The mid-transit times were determined by applying the transit model created in Section~\ref{lc_analysis} to the individual LCs using \begin{scriptsize}TAP\end{scriptsize}. The initial parameters for the fitting were the ones given in Table~\ref{tbl:TAPmcmc1}. The mid-transit time, as well as the flux slope and intercept, were always free parameters, while the orbital period $P$ and the eccentricity $e$ were kept fixed. The best-model parameters and their uncertainties were set as Gaussian priors, and $i$, $a$/$R_{\mathrm{A}}$, $R_{\mathrm{b}}$/$R_{\mathrm{A}}$, and the LD coefficients of each transit were allowed to vary within this range. With this approach we ensured that the best-model uncertainties are included in the error bars of the mid-transit time. Several LCs were observed in a filter that did not contribute to the template LC. In those cases, the theoretical value $\pm0.5$ was used as a Gaussian prior for the LD coefficients. Ten chains of $10^{5}$ steps each were used for the MCMC analysis of each LC. Four transits were observed with more than one telescope. These LCs were fitted simultaneously to increase the timing precision. The times have been converted into the barycentric Julian Date based on the barycentric dynamical time (BJD$_{\mathrm{TDB}}$) using the online converter\footnote{http://astroutils.astronomy.ohio-state.edu/time/utc2bjd.html} by \citet{2010PASP..122..935E}. \begin{figure} \begin{minipage}[]{0.45\textwidth} \centering \includegraphics[height=0.32\textheight, angle=270]{Histogramm_11_03_29.eps} \caption{Distribution of the transit times obtained from the analysis of 185 LCs that were created as a transit time robustness check (see text for details) for the transit at epoch 529 (2011 March 29).
The thick solid line gives the result of TAP only for the initial LC; the dashed lines give the TAP error bars. The width of the transit time distribution, shown here as a grey shaded area, is nicely reproduced by the TAP uncertainties (ratio between distribution width and TAP error bars = 0.8).} \label{Histogramm_11_03_29} \end{minipage} \begin{minipage}[]{0.45\textwidth} \centering \includegraphics[height=0.34\textheight, angle=270]{Histogramm_11_03_11.eps} \caption{Same as Fig.~\ref{Histogramm_11_03_29} but for the transit at epoch 521 (2011 March 11). Given the smaller number of data points in the initial LC, only 100 LCs could be created and analysed. The TAP error bars clearly underestimate the width of the transit time distribution (ratio between distribution width and TAP error bars = 1.6).} \label{Histogramm_11_03_11} \end{minipage} \end{figure} \subsection{Timing error estimates} The observations at the University Observatory Jena and the YETI network yielded 13 transit times. Looking at the 19 individual LCs revealed some problems with several of the observations.\\ The transit from 2011 March 2 was observed with three different telescopes, but each of them yielded only a partial LC due to bad weather and technical problems. Since the egress is missing in all three cases, the shape of the transit cannot be determined properly. We excluded this data point from any further timing analysis. \\ The data point from 2011 March 20 consists of three individual LCs that were modelled simultaneously and therefore has small error bars. However, looking at the individual LCs reveals that the transit time from the Calar Alto observation differs by $\sim$10\,min from that of the other two LCs. Since the reason for this difference remained unclear (most likely the poor conditions, such as thin clouds, fog, and full moon, during the observation), we excluded the Calar Alto LC and modelled the remaining two simultaneously.
\\ The LCs from 2011 February 12 and 2011 March 11 suffer from outliers and bad weather conditions in the egress, respectively. In both cases we noticed significant changes in the resulting transit times for different LC treatments, such as removing outliers or binning. Since these effects should be accounted for in the error budget, we ran a test to check the robustness of the transit times. From the original (raw) LC we first created a set of 100\,--\,200 LCs (depending on the initial number of points in the LCs) by removing data points (every second, third, etc.), sigma clipping, using different binning factors, using different artificial comparison stars, and removing trends. Then we modelled all LCs, determined the transit times, and compiled a histogram. The width of the transit time distribution (represented by a Gaussian function) gave the final timing error, which was then compared to the outcome of \begin{scriptsize}TAP\end{scriptsize}. In most cases we could reproduce the uncertainties from the wavelet-based MCMC analysis (ratio between distribution width and \begin{scriptsize}TAP\end{scriptsize} error bars $\sim$\,0.8\,--\,1.2), meaning that the MCMC produces robust error bars even for LCs of very low quality. One example of a good agreement between the \begin{scriptsize}TAP\end{scriptsize} error bars and the distribution width is given in Fig.~\ref{Histogramm_11_03_29}. Only for two LCs (2011 March 11 and 2011 April 7) are the \begin{scriptsize}TAP\end{scriptsize} error bars smaller than the distribution width by a factor of 1.5\,--\,2, as shown in Fig.~\ref{Histogramm_11_03_11}. In both cases either the ingress or the egress is affected by the weather conditions, which makes it difficult to recover the transit shape. \\ As final uncertainties for the transit times, we always adopted the maximum of the \begin{scriptsize}TAP\end{scriptsize} error bars and the distribution width.
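Schematically, the robustness check amounts to refitting many perturbed versions of a light curve and comparing the spread of the recovered mid-transit times with the formal error. The sketch below uses a trivial flux-weighted centroid in place of the full transit fit, and random subsets in place of the full set of LC treatments, so it only illustrates the bookkeeping:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_midtime(t, f):
    """Stand-in for the full transit fit: flux-weighted centre of the dip.
    (Illustrative only -- the real analysis refits the complete transit model.)"""
    depth = 1.0 - f
    return np.sum(t * depth) / np.sum(depth)

# synthetic transit: 1 per cent deep, mid-transit at t = 0
t = np.linspace(-0.1, 0.1, 400)                              # days
f = 1.0 - 0.01 * (np.abs(t) < 0.04) + 1e-3 * rng.standard_normal(t.size)

# re-fit many perturbed versions of the LC (here: random 80 per cent subsets;
# the paper also uses decimation, sigma clipping, binning, detrending, ...)
midtimes = [fit_midtime(t[keep], f[keep])
            for keep in (rng.random(t.size) > 0.2 for _ in range(100))]

width = np.std(midtimes)   # compare this width with the formal fit uncertainty
```

If the distribution width exceeds the formal error bar, the larger value is the more honest timing uncertainty, which is exactly the rule adopted above.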
\subsection{Transit ephemeris} To extend the observational baseline, we included the data point of \citet{2009MNRAS.392.1532J} in our timing analysis. Note that this is not a single transit observation but the original published mid-transit time at epoch zero, computed from many individual transits. Another transit, observed with the University of Hawaii 2.2\,m (UH~2.2\,m) telescope on Mauna Kea, is published in \citet{2009PASP..121.1104J}. The mid-transit time for this observation was determined in the same way as for our own measurements, as described in Section~\ref{Transit_times}. With these altogether 14 mid-transit times (the partial transit at epoch 517 was excluded), we recalculated the transit ephemeris using an error-weighted linear fit. The result is given in equation (\ref{Elemente_WASP14}), where $E$ denotes the epoch (reduced $\chi^{2}$\,=\,0.64): \begin{equation} \label{Elemente_WASP14} \begin{array}{r@{.}lcr@{.}l} T_{\mathrm{c[BJD_{TDB}]}}(E)=(2454463 & 57688 & + & E\cdot 2 & 2437655)\,\mathrm{d} \\ \pm0 & 00047 & & \pm0 & 0000010 \end{array} \end{equation} The orbital period $P$ is 1.2\,s longer and 10 times more precise than the one given in \citet{2009MNRAS.392.1532J}, but in agreement with \citet{2012MNRAS.426.1291S} and \citet{2013ApJ...779....5B}. \subsection{Transit timing variations} If we subtract the predicted transit times (calculated with the refined ephemeris) from the observed values, we obtain the O--C values. The results for the mid-transit times and the O--C are listed in Table~\ref{WASP14_Transit_Times}. Fig.~\ref{O_C_2014} shows the O--C diagram, where the black solid line represents the refined ephemeris. To search for a periodicity in the O--C diagram, we computed the generalized Lomb--Scargle periodogram \citep[\begin{scriptsize}GLS\end{scriptsize};][]{2009A&A...496..577Z}.
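The error-weighted linear fit and the resulting O--C values can be reproduced with a subset of the well-covered timings from Table~\ref{WASP14_Transit_Times} (a sketch; the published ephemeris uses all 14 epochs):

```python
import numpy as np

# (epoch, T_c [BJD_TDB], sigma [d]) -- well-covered timings from the table
data = np.array([
    [0.0,   2454463.57657, 0.00053],
    [223.0, 2454963.93776, 0.00070],
    [525.0, 2455641.54831, 0.00055],
    [529.0, 2455650.52899, 0.00058],
    [677.0, 2455982.60621, 0.00090],
    [869.0, 2456413.40914, 0.00163],
])
E, Tc, sig = data.T

# error-weighted linear fit of the ephemeris T_c(E) = T0 + E * P
P_fit, T0_fit = np.polyfit(E, Tc, 1, w=1.0 / sig)

# timing residuals (O - C) with respect to the fitted ephemeris, in minutes
oc_min = (Tc - (T0_fit + E * P_fit)) * 24.0 * 60.0
```

Even with this subset, the fitted period agrees with equation~(\ref{Elemente_WASP14}) to within a few times $10^{-7}$\,d, and the residuals stay at the level of a few minutes.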
The periodogram is shown in Fig.~\ref{GLS_plot}, where the highest peak ($P_{\mathrm{TTV}}$\,=\,26.33\,$\pm$\,0.09\,epochs, power of 0.66) corresponds to a false-alarm probability (FAP) of 44.7\%. Within our data set we could not detect any evidence for TTVs.\\ Residuals in transit timing allow us to place constraints on the properties of any hypothetical additional planet in the system. A set of synthetic O--C diagrams for WASP-14\,b in the presence of a fictitious perturbing planet was generated with the \begin{scriptsize}MERCURY\end{scriptsize} 6 package \citep{1999MNRAS.304..793C} and the implemented Bulirsch--Stoer integrator. The mass of this second planet was set at 0.5, 1, 5, 10, 50, 100, and 500 $M_{\mathrm{Earth}}$ (Earth masses). The initial semimajor axis was iterated from 0.01 to 0.12 au with a step of $2\times10^{-6}$ au. The system was assumed to be coplanar. For WASP-14\,b, the initial argument of periastron, $\omega_{\mathrm{b}}$, and orbital eccentricity, $e_{\mathrm{b}}$, were taken from \citet{2013ApJ...779....5B}, and the mean anomaly was set at $0^{\circ}$. For the fictitious planet, the argument of periastron was set equal to $\omega_{\mathrm{b}}$ at the beginning of each simulation. Calculations were done for three values of the orbital eccentricity of the fictitious planet, $e_{\mathrm{p}}$, set to 0.0, 0.1, and 0.2. The initial mean anomaly was shifted by $180^{\circ}$. The integration for each planetary configuration covered 2250\,d (i.e. 1000 orbits of WASP-14\,b, the time span of the transit timing observations). We calculated the rms of the signal at $P_{\mathrm{TTV}}$ as rms$_{\mathrm{TTV}}=A_{\mathrm{max}} / \sqrt{2}$, where $A_{\mathrm{max}}$ is the amplitude of this signal. We derived rms$_{\mathrm{TTV}}=37\,\mathrm{s}$. Analogously, the rms of the residuals from a linear ephemeris was calculated for each set of simulated observations.
Then, for each orbital distance, we determined the range of planet masses in which the value of rms$_{\mathrm{TTV}}$ fell. The upper mass of the fictitious planet at the detection limit was found by linear interpolation for masses below 500 $M_{\mathrm{Earth}}$. If the TTV signal was found to be generated by a more massive body, the limiting mass was extrapolated using a linear trend as fitted to 100 and 500 $M_{\mathrm{Earth}}$. \\ Representative results for $e_{\mathrm{p}}=0.1$ are illustrated in Fig.~\ref{fig-limit}. The timing technique allows us to probe the Earth-mass regime close to mean-motion resonances (MMRs). Most orbits located between the inner 1:2 and outer 2:1 orbital period commensurabilities were found to be highly unstable, and planetary close encounters or planet ejections occurred during the relatively short integration time. Thus, planetary configurations located in this region are highly unlikely to exist. \begin{table*} \centering \caption{Transit times for all observed transits of WASP-14\,b, including the publicly available transits. The O--C was calculated with the ephemeris given in equation (\ref{Elemente_WASP14}).} \label{WASP14_Transit_Times} \renewcommand{\arraystretch}{1.1} \begin{tabular}{ccr@{\,$\pm$\,}lr@{\,$\pm$\,}lc} \hline \hline Date & Epoch & \multicolumn{2}{c}{$T_{\mathrm{c}}$ (BJD$_{\mathrm{TDB}}$)} & \multicolumn{2}{c}{O--C (min)} & Ref. \\ \hline \hline & 0 & 2454463.57657 & 0.00053 & -0.44 & 0.76 & \citet{2009MNRAS.392.1532J} \\ 2009 Apr. 1 & 205 & 2454923.54564 & $^{0.00310}_{0.00310}$ & -4.55 & $^{4.46}_{4.46}$ & This work \\ 2009 Apr. 19 & 213 & 2454941.50043 & $^{0.00545}_{0.00545}$ & 2.16 & $^{7.85}_{7.85}$ & This work \\ 2009 May 7 & 221 & 2454959.45027 & $^{0.01000}_{0.00760}$ & 1.75 & $^{14.40}_{10.94}$ & This work \\ & 223$^{a}$ & 2454963.93776 & $^{0.00069}_{0.00070}$ & 1.69 & $^{0.99}_{1.01}$ & \citet{2009PASP..121.1104J} \\ 2011 Feb. 12 & 509 & 2455605.65997 & $^{0.00330}_{0.00310}$ & 4.83 & $^{4.75}_{4.46}$ & This work \\ (2011 Mar.
2 & 517$^{b}$ & 2455623.59980 & $^{0.00240}_{0.00200}$ & -5.53 & $^{3.46}_{2.88}$ & This work)* \\ 2011 Mar. 11 & 521 & 2455632.57247 & $^{0.00306}_{0.00306}$ & -3.01 & $^{4.40}_{4.40}$ & This work \\ 2011 Mar. 20 & 525$^{b}$ & 2455641.54831 & $^{0.00055}_{0.00055}$ & -0.86 & $^{0.79}_{0.79}$ & This work \\ 2011 Mar. 29 & 529$^{b}$ & 2455650.52899 & $^{0.00058}_{0.00058}$ & 0.24 & $^{0.84}_{0.84}$ & This work \\ 2011 Apr. 7 & 533 & 2455659.50506 & $^{0.00528}_{0.00528}$ & 1.69 & $^{7.60}_{7.60}$ & This work \\ 2011 Apr. 16 & 537 & 2455668.47584 & $^{0.00503}_{0.00503}$ & -4.48 & $^{7.24}_{7.24}$ & This work \\ 2012 Feb. 6 & 669$^{b}$ & 2455964.65659 & $^{0.00140}_{0.00140}$ & 0.86 & $^{2.02}_{2.02}$ & This work \\ 2012 Feb. 24 & 677 & 2455982.60621 & $^{0.00090}_{0.00090}$ & 0.13 & $^{1.29}_{1.29}$ & This work \\ 2013 Apr. 30 & 869 & 2456413.40914 & $^{0.00163}_{0.00163}$ & 0.07 & $^{2.35}_{2.35}$ & This work \\ \hline \hline \end{tabular} \\ $^{\ast}$Data point was excluded from timing analysis.\\ $^{a}$Transit was re-analysed with \begin{scriptsize}TAP\end{scriptsize}.\\ $^{b}$The LCs have been combined during the MCMC analysis to improve timing precision. \end{table*} \begin{figure} \centering \includegraphics[width=0.33\textwidth,angle=270]{O_C_2014.eps} \caption{The O--C-diagram of WASP-14\,b. The black filled and open (with dashed error bars) symbols denote the complete and the partial transits, respectively. Note that the transit at epoch 669 (2012 February 6) is considered here as `complete' since the ingress as well as the egress, and therefore the transit shape, are intact. The data points from \citet{2009MNRAS.392.1532J} and \citet{2009PASP..121.1104J} are shown in grey. The solid line represents the updated ephemeris given in equation (\ref{Elemente_WASP14}). 
} \label{O_C_2014} \end{figure} \begin{figure} \centering \includegraphics[width=0.32\textwidth,angle=270]{GLS.ps} \caption{Generalized Lomb--Scargle periodogram (top panel) and window function (bottom panel) for the O--C diagram of WASP-14\,b. The highest peak with a period of $P_{\mathrm{TTV}}$\,=\,26.33\,$\pm$\,0.09\,epochs at a power of 0.66 shows a FAP of 44.7\%.} \label{GLS_plot} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{fig-limit.eps} \caption{Upper mass limit for a fictitious additional planet in the WASP-14 system, based on the timing data set, as a function of the orbital period of that planet, $P_{\mathrm{p}}$. The greyed area marks configurations that lie below the detection threshold of the timing data set and thus remain unexplored.} \label{fig-limit} \end{figure} \section{Conclusions} \begin{figure} \centering \includegraphics[width=0.33\textwidth,angle=270]{Transit_period_mass_err.ps} \caption{The period-mass diagram for close-in exoplanets ($P\,<\,10$\,d and $M\,<\,15\,M_{\mathrm{Jup}}$). Only 12 planets are more massive than 6.5\,$M_{\mathrm{Jup}}$. No planets were found in the region between 4.5 and 6.5\,$M_{\mathrm{Jup}}$ (the only exception is WASP-33\,b, for which only an upper mass limit of $M\,<\,4.59\,M_{\mathrm{Jup}}$ is available; upper limits are marked with an arrow).} \label{Transit_period_mass.ps} \end{figure} WASP-14\,b is, for several reasons, a very interesting target. First, because of its very high mass it is one of the densest exoplanets with an orbital period shorter than three days. Furthermore, despite its close-in orbit, WASP-14\,b has a rather high eccentricity. Interestingly, massive planets on close-in orbits show a strong tendency to have eccentric orbits.
Approximately 58\% (7 out of 12, exoplanet.eu, 2014 November 12) of transiting exoplanets with masses greater than $M\,=\,5\,M_{\mathrm{Jup}}$ and periods less than 10\,d have $e\,\neq$\,0, while only 20\% (53 out of 265, exoplanet.eu, 2014 November 12) of the less massive planets show a significant eccentricity. This may indicate that there are two distinct types of exoplanets. Another hint is the distribution in the period-mass diagram for close-in exoplanets ($P\,<\,10$\,d and $M\,<\,15\,M_{\mathrm{Jup}}$), which is shown in Fig.~\ref{Transit_period_mass.ps}. Two distinct areas are clearly identified. Most of the planets have a mass below 5\,$M_{\mathrm{Jup}}$. So far, no planets were discovered in the mass range between 4.5 and 6.5\,$M_{\mathrm{Jup}}$ (the only point in this range corresponds to the upper mass limit of WASP-33\,b, $M\,<\,4.59\,M_{\mathrm{Jup}}$), while several objects were found with masses $\geq$\,6.5\,$M_{\mathrm{Jup}}$, which are often referred to as hot super-Jupiters. This clear distinction could indicate a physical difference in planet formation and evolution. Since only a handful of hot super-Jupiters are known so far, this mass range is insufficiently explored to make general statements. \\ We observed 19 LCs of WASP-14\,b with six telescopes at five different observatories within the TTV$@$YETI collaboration. Owing to the weather conditions and the use of small telescopes, the LCs are of highly variable quality. All transits are shown in Figs~\ref{LC_Wasp14a} and \ref{LC_Wasp14b}, grouped according to their quality into LCs with rms$\,>$\,4\,mmag and rms$\,<$\,4\,mmag, respectively. \\ From the simultaneous LC modelling of our four best LCs we could determine the planetary, stellar and geometrical properties of the system. Our values are in agreement with the values in \citet{2009MNRAS.392.1532J} but differ significantly from the physical properties given in \citet{2012MNRAS.426.1291S}.
This discrepancy, however, is not physical, since it is only caused by the interpretation of the final results by \citet[][averaging of two different sets of best-fitting parameters for two individual LCs]{2012MNRAS.426.1291S}. Since the two high precision transit LCs available in the literature were found to be inconsistent with each other, our findings are of great importance for the determination of the system parameters.\\ Including the two publicly available data points, altogether 14 mid-transit times (the partial transit at epoch 517 was excluded) were used in the transit timing analysis. To investigate the error budget we ran a test to check the robustness of the transit times. We found that if the ingress or egress, and hence the shape of the transit, is missing, or shows outliers and/or large scatter, even the uncertainties determined with a wavelet-based MCMC may be underestimated. \\ We found no significant periodic signal in the O--C diagram. The strongest period at $\sim$26\,epochs has a FAP of 44.7\%. Hence, there is no evidence for TTVs in the system. From our three-body simulations, we can exclude even Earth-mass perturbers in some resonant orbital configurations. \\ Measurements of the Rossiter--McLaughlin effect found a misalignment between the stellar spin axis and the orbital axis of the planet which cannot be explained by planet-disc migration theories \citep[e.g.][]{2007ApJ...655..550G}. Another possible mechanism that may be responsible for such massive close-in planets is gravitational scattering by larger planets \citep{1996Natur.384..619W}. This planet--planet scattering scenario is one way to explain the properties of WASP-14\,b. \\ The significant eccentricity indicates either that tides have not had sufficient time to influence the planetary orbit, which would support the planet--planet scattering theory, or that tidal effects in planetary systems are weaker than expected.
Long-term follow-up studies of WASP-14 will add stricter constraints on these theories. \begin{figure*} \centering \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_09_04_01_CTK_paper.eps} \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_09_04_19_CTK_paper.eps} \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_09_05_07_CTK_paper.eps} \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_11_03_02_CTKII_paper.eps} \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_11_03_02_Cafos_paper.eps} \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_11_03_20_CTKII_paper.eps} \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_11_03_20_Cafos_paper.eps} \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_11_04_16_Cafos_paper.eps} \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_12_02_06_STK_paper.eps} \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_12_02_06_CTKII_paper.eps} \caption{LCs of WASP-14\,b with an rms$\,>$\,4\,mmag. The date of observation, observatory, filter, pnr, and the rms of the fit are indicated in each individual panel} \label{LC_Wasp14a} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_11_02_12_OSN_paper.eps} \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_11_03_02_STK_paper.eps} \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_11_03_11_STK_paper.eps} \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_11_03_20_STK_paper.eps} \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_11_03_29_STK_paper.eps} \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_11_03_29_Slovakia_paper.eps} \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_11_04_07_OSN_paper.eps} \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_12_02_24_Cafos_paper.eps} \includegraphics[width=0.21\textwidth, angle=270]{Wasp14_13_04_30_Tubitak_paper.eps} \caption{The same as Fig. 
\ref{LC_Wasp14a} but for the higher quality LCs with an rms$\,<$\,4\,mmag. The four best quality LCs (in terms of pnr) that were used to create the template are marked with an asterisk.} \label{LC_Wasp14b} \end{figure*} \section*{Acknowledgements} We would like to thank H. Gilbert, S. Fiedler, I. H\"{a}usler, A. Reithe, and W. Rammo for participating in some of the observations at the University Observatory Jena.\\ SR is currently a Research Fellow at ESA/ESTEC. SR would like to thank DFG for support in the Priority Programme SPP 1385 on the `First Ten Million Years of the Solar system' in projects NE 515/33-1 and -2. MM acknowledges DFG for support in programme MU2695/13-1. AB would like to thank DFG for support in project NE 515/32-1. TE and LT would like to thank the DFG for support from the SFB-TR 7. CM acknowledges support from the DFG through grant SCHR665/7-1. MV would like to thank the projects APVV-0158-11 and VEGA 2/0143/14. GM and MV would like to thank the European Union in the Framework Programme FP6 Marie Curie Transfer of Knowledge project MTKD-CT-2006-042514 for support. TG acknowledges support from Bilim Akademisi -- The Science Academy, Turkey under the BAGEP programme. TG has been supported in part by Istanbul University: Project number 39742. T100 observations were performed under the project 12CT100-388. We would like to acknowledge financial support from the Thuringian government (B 515-07010) for the STK CCD camera used in this project. \bibliographystyle{mn2e}
\section{Introduction} \label{sec:0} A numerical semigroup is a subset $S$ of $\N$ (here $\N$ denotes the set of non-negative integers) closed under addition, containing zero and such that $\N\backslash S$ is finite. Numerical semigroups were first considered while studying the set of nonnegative solutions of Diophantine equations, and their study is closely related to the analysis of monomial curves (see \cite{delorme}). For these reasons, the theory of numerical semigroups has attracted a number of researchers from the algebraic community. For instance, some terminology from algebraic geometry has been exported to this field, such as the multiplicity (the smallest positive integer belonging to the semigroup), the genus (the number of nonnegative integers not belonging to the semigroup), or the embedding dimension (the cardinality of the minimal system of generators of the semigroup). Further details about the theory of numerical semigroups can be found in the recent monograph by Rosales and Garc\'ia-S\'anchez \cite{springer}. A numerical semigroup is said to be irreducible if it cannot be expressed as an intersection of two numerical semigroups containing it properly. This notion was introduced in \cite{rosales03}, where it is also shown that the family of irreducible numerical semigroups is the union of two families of numerical semigroups of special importance in this theory: symmetric and pseudo-symmetric numerical semigroups. The Frobenius number of a numerical semigroup is the largest integer not belonging to the semigroup. Symmetric (resp. pseudo-symmetric) numerical semigroups are those irreducible numerical semigroups with odd (resp. even) Frobenius number (see \cite{barucci97,froberg87}). The irreducibility of a numerical semigroup has been widely studied in the literature (see \cite{branco-nuno07,rosales02,rosales02b,rosales03,rosales03b,rosales04}). Furthermore, apart from the theory of semigroups, this notion is connected with commutative ring theory.
In fact, let $S$ be a numerical semigroup, $\K$ a field, and $\K[[t]]$ the ring of formal power series over $\K$. It is well-known that $\K[[S]]=\{\dsum_{s\in S} a_s t^s: a_s \in \K\}$ is a subring of $\K[[t]]$, called the semigroup ring associated to $S$ (see, for instance, \cite{barucci97}). Properties of the numerical semigroup $S$ translate into properties of its associated ring. Actually, it is well-known that if the numerical semigroup is symmetric, the ring associated to it is a Gorenstein ring (see \cite{kunz1}), and if the semigroup is pseudo-symmetric, the ring is a Kunz ring (see \cite{barucci-froberg}). In \cite{ijac11}, the notion of $m$-irreducibility is introduced, which extends the concept of irreducibility when the multiplicity is fixed. A numerical semigroup with multiplicity $m$ is said to be $m$-irreducible if it cannot be expressed as an intersection of two numerical semigroups with multiplicity $m$ containing it properly. Apart from introducing this notion, \cite{ijac11} characterizes the set of $m$-irreducible numerical semigroups in terms of their special gaps and also by their genus and Frobenius number. An interesting problem when treating the irreducibility of a numerical semigroup is to minimally decompose a numerical semigroup into ($m$-)irreducible ones, in the sense that the decomposition involves the minimum number of semigroups. An algorithm to compute such a decomposition is also given in \cite{ijac11}. A different approach, applying integer programming tools, is proposed in \cite{siam11} to compute minimal decompositions of numerical semigroups with multiplicity $m$ into $m$-irreducible numerical semigroups more efficiently (in polynomial time). In that approach, the notion of Kunz-coordinates vector is used to translate the considered problem into the problem of finding certain integer optimal solutions, with respect to appropriate objective functions, in a Kunz polytope.
The encoding of a numerical semigroup as an integer vector was first considered in \cite{kunz} and \cite{london02} and is based on the Ap\'ery set codification of a semigroup with respect to its multiplicity. This useful tool has also been applied to compute the number of numerical semigroups with a given genus \cite{counting}. Here, we analyze the irreducibility of the family of numerical semigroups with multiplicities $3$ and $4$. The characterization of this set of numerical semigroups in terms of the Frobenius number, the genus or the ratio is studied in \cite{rosales05}. Note that the case when the multiplicity of the semigroup is $2$ is trivial, since any numerical semigroup with multiplicity $2$ is symmetric (see \cite{rosales05}) and hence irreducible. Although in general the notions of irreducibility and $m$-irreducibility are different, we prove that these notions coincide when $m$ is three or four (and obviously for $m=2$). Furthermore, we give explicit and simple conditions on the Kunz-coordinates vector of a numerical semigroup for it to be irreducible in these cases. Then, for a given numerical semigroup with multiplicity $3$ (resp. $4$), we describe minimal decompositions into $3$-irreducible (resp. $4$-irreducible) numerical semigroups. We apply this approach to analyze some subfamilies of numerical semigroups with multiplicities $3$ or $4$: $3$- and $4$-symmetric numerical semigroups, $3$- and $4$-pseudosymmetric numerical semigroups, and semigroups generated by generalized arithmetic sequences. In Section \ref{sec:1} we recall the main definitions and results needed throughout this paper. Section \ref{sec:2} is devoted to the analysis of the family of numerical semigroups with multiplicity $3$. In that section we characterize the set of $3$-irreducible numerical semigroups in terms of their Kunz-coordinates vectors, and we explicitly describe the minimal decomposition of any numerical semigroup with multiplicity $3$ into irreducible numerical semigroups.
An analogous analysis is done in Section \ref{sec:3} for the case when the multiplicity is four. \section{Preliminaries} \label{sec:1} A numerical semigroup is a subset $S$ of $\N$ closed under addition, containing zero and such that $\N\backslash S$ is finite. The reader is referred to the recent monograph by Rosales and Garc\'ia-S\'anchez \cite{springer} for further details about the theory of numerical semigroups. The multiplicity of a numerical semigroup $S$ is the smallest nonzero element belonging to it, and it is usually denoted by $\m(S)$. A numerical semigroup $S$ is said to be irreducible (resp. $m$-irreducible) if it cannot be expressed as an intersection of two numerical semigroups (resp. numerical semigroups with multiplicity $m$) containing it properly. In \cite{ijac11} the authors characterize the set of $m$-irreducible numerical semigroups for any positive integer $m$. For the sake of completeness, we recall some of these characterizations, which will be useful for the developments in this paper. For any numerical semigroup $S$, the Frobenius number of $S$, $\F(S)$, is the largest integer not belonging to $S$, and the genus of $S$, $\g(S)$, is the number of nonnegative integers that do not belong to $S$. The following result completely determines the set of $m$-irreducible numerical semigroups. Here $\lceil q \rceil$ stands for the ceiling integer part of any $q \in \Q$. \begin{lemma}[Proposition 6 in \cite{ijac11}] \label{lemma:1} A numerical semigroup, $S$, with multiplicity $m$ is $m$-irreducible if and only if one of the following conditions holds: \begin{enumerate} \item $S=\{x \in \N: x\ge m\} \cup \{0\}$. \item $S=\{x \in \N: x\ge m, x\neq \F(S)\} \cup \{0\}$. \item $S$ is an irreducible numerical semigroup. \end{enumerate} \end{lemma} From the above result it is easy to obtain the next lemma. \begin{lemma} \label{lemma:2} Let $S$ be a numerical semigroup with multiplicity $m$.
Then, $S$ is $m$-irreducible if and only if $\g(S) \in \left\{m-1, m, \left\lceil \dfrac{\F(S)+1}{2} \right\rceil\right\}$. \end{lemma} Another useful set that appears when one analyzes irreducible numerical semigroups is the set of special gaps of a semigroup. Let $S$ be a numerical semigroup; the set of special gaps of $S$ is: $$ \SG(S)=\{h \in \N\backslash S: S \cup \{h\} \text{ is a numerical semigroup}\}. $$ If $\m(S)=m$, we denote by $\SG_m(S)=\{ h \in \SG(S): h > m\}$ the set of special gaps of $S$ larger than the multiplicity. Note that when $h \in \SG_m(S)$, $S'=S \cup \{h\}$ is a numerical semigroup with multiplicity $m$, and that $\F(S) \in \SG_m(S)$ if $S \neq \{0, m, \rightarrow\}$. Then, an $m$-irreducible numerical semigroup can be detected by counting the special gaps larger than $m$. Here we denote $\{n_1, \ldots, n_k, \rightarrow\} = \{n_1, \ldots, n_k\} \cup \{n \in \N: n \ge n_k+1\}$ for any $n_1, \ldots, n_k \in \N$. \begin{lemma}[\cite{ijac11}] \label{lemma:3} Let $S$ be a numerical semigroup with multiplicity $m$. Then, $S$ is $m$-irreducible if and only if $\#\SG_m(S) \leq 1$. Furthermore, $\SG_m(S)=\emptyset$ if and only if $S=\{0, m, \rightarrow\}$. \end{lemma} Once the set of $m$-irreducible numerical semigroups is characterized, one may be interested in decomposing a numerical semigroup as an intersection of $m$-irreducible numerical semigroups. Actually, any numerical semigroup with multiplicity $m$ can be decomposed into $m$-irreducible numerical semigroups (\cite[Proposition 1]{ijac11}). That paper also gives an algorithm for obtaining a minimal (in the sense of the minimum number of elements involved in the intersection) decomposition into $m$-irreducible numerical semigroups by using the Ap\'ery set of the numerical semigroup. In \cite{siam11}, algorithms for obtaining such a minimal decomposition are given by formulating the problem as an integer programming model and by using the notion of Kunz-coordinates vector of a numerical semigroup.
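As an illustration of Lemma \ref{lemma:3}, the set $\SG(S)$ can be computed by brute force from a finite membership table: a gap $h$ is special precisely when $2h\in S$ and $h+s\in S$ for every nonzero $s\in S$ (for $s$ larger than the Frobenius number this holds automatically). The following Python sketch is ours, not part of the cited algorithms; it assumes the given generators have greatest common divisor $1$ and do not contain $1$:

```python
def semigroup_membership(generators, bound):
    """member[n] = True iff n belongs to <generators>, for n = 0..bound
    (a coin-problem style dynamic programme)."""
    member = [False] * (bound + 1)
    member[0] = True
    for n in range(1, bound + 1):
        member[n] = any(n >= g and member[n - g] for g in generators)
    return member

def special_gaps(generators):
    """SG(S) for S = <generators>: gaps h such that S together with h is
    still a numerical semigroup, i.e. 2h in S and h + s in S for every
    nonzero s in S."""
    # a crude bound safely beyond the Frobenius number
    probe = semigroup_membership(generators, 2 * max(generators) ** 2)
    frob = max(n for n, ok in enumerate(probe) if not ok)  # Frobenius number
    member = semigroup_membership(generators, 2 * frob + 2)
    gaps = [h for h in range(1, frob + 1) if not member[h]]
    small = [s for s in range(1, frob + 1) if member[s]]  # s > frob is automatic
    return [h for h in gaps
            if member[2 * h] and all(member[h + s] for s in small)]
```

For instance, $\SG(\langle 3,5,7\rangle)=\{4\}$, so $\langle 3,5,7\rangle$ is $3$-irreducible, while $\SG(\langle 3,7,8\rangle)=\{4,5\}$, so $\langle 3,7,8\rangle$ is not.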
\begin{defi}[Ap\'ery set and Kunz-coordinates] \label{def:5} Let $S$ be a numerical semigroup and $s \in S$. \begin{enumerate} \item The \emph{Ap\'ery set} of $S$ with respect to $s \in S$ is the set $\Ap(S,s) = \{w_0=0, w_1, \ldots, w_{s-1}\}$, where $w_i$ is the least element in $S$ congruent with $i$ modulo $s$, for $i=1, \ldots, s-1$. \item The \emph{Kunz-coordinates vector} of $S$ is the integer vector $x \in \N^{m-1}$ whose components are $x_i=\frac{w_i-i}{m}$, where $m=\m(S)$ and $\Ap(S, m)=\{w_0=0, w_1, \ldots, w_{m-1}\}$. \end{enumerate} \end{defi} The Kunz-coordinates vector was implicitly used in \cite{london02} and in \cite{counting} for counting numerical semigroups of a given genus. The importance of the Kunz-coordinates vector for representing a numerical semigroup is also shown in the following result. \begin{lemma}[Theorem 11 in \cite{london02}] \label{lemma:6} Each numerical semigroup is one-to-one identified with its Kunz-coordinates vector. Furthermore, the set of Kunz-coordinates vectors of the numerical semigroups with multiplicity $m$ is the set of solutions of the following system of diophantine inequalities: \begin{align} x_i \geqslant&1 & \mbox{for all $i \in \{1, \ldots, m-1\}$,}\nonumber\\ x_i+x_j-x_{i+j} \geqslant& 0 & \mbox{for all $1 \leqslant i \leqslant j \leqslant m-1$, $i+j \leqslant m-1$,}\label{kunz}\\ x_i+x_j-x_{i+j-m} \geqslant& -1 &\mbox{for all $1 \leqslant i \leqslant j \leqslant m-1$, $i+j > m$},\nonumber\\ x_i \in \N & &\mbox{for all $i \in \{1, \ldots, m-1\}$.}\nonumber \end{align} \end{lemma} The bijective correspondence given in the above result between Kunz-coordinates vectors and numerical semigroups with multiplicity $m$ is given by $(x_1, \ldots, x_{m-1}) \mapsto \langle m, mx_1+1, \ldots, mx_{m-1}+m-1\rangle$. An integer vector $x \in \N^{m-1}$ is said to be a Kunz-coordinates vector if it is the Kunz-coordinates vector of some numerical semigroup with multiplicity $m$.
Thus, being a Kunz-coordinates vector is equivalent to being a solution of the diophantine system of inequalities \eqref{kunz}. The Frobenius number, the genus and the special gaps larger than the multiplicity of a numerical semigroup can be computed by manipulating its Kunz-coordinates vector (see \cite{siam11}): let $S$ be a numerical semigroup with multiplicity $m$ and $x\in \N^{m-1}$ its Kunz-coordinates vector. Then, by Selmer's formulas \cite{selmer77}, $\g(S)=\sum_{i=1}^{m-1} x_i$, $\F(S)=\max_i\{mx_i+i\}-m$ and \begin{equation} \label{sg} \begin{array}{lll} \SG_m(S)&=\{h_i=m(x_i-1)+i :& x_i+x_j > x_{i+j} \text{ for $j$ such that } i+j<m,\\ & & x_i+x_j > x_{i+j-m}-1 \text{ for $j$ such that } i+j>m,\\ & &\text{and } 2h_i \ge mx_{2h_i \pmod m} + {2h_i \pmod m},\\ & &i=1, \ldots, m-1\}. \end{array} \end{equation} Hence, by Lemmas \ref{lemma:6} and \ref{lemma:2} and the expression of the genus and the Frobenius number of a numerical semigroup in terms of its Kunz-coordinates vector, the set of Kunz-coordinates vectors of all the $m$-irreducible numerical semigroups is the entire set of solutions of system \eqref{kunz} when adding the constraint $\dsum_{i=1}^{m-1} x_i \in \left\{m-1, m, \left\lceil \frac{\max_i\{mx_i+i\}- m+1}{2}\right\rceil\right\}$. We are interested in decomposing a numerical semigroup $S$ with multiplicity $m$ into $m$-irreducible numerical semigroups.
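Both the Kunz-coordinates vector and Selmer's formulas are easy to check computationally on small examples. A possible Python sketch (the helper names are ours; generators are assumed to have greatest common divisor $1$) is:

```python
def apery_kunz(generators):
    """Apery set of S = <generators> with respect to the multiplicity
    m = min(generators), and the Kunz-coordinates vector x_i = (w_i - i)/m."""
    m = min(generators)
    bound = 2 * max(generators) ** 2  # crude bound beyond the Frobenius number
    member = [False] * (bound + 1)
    member[0] = True
    for n in range(1, bound + 1):
        member[n] = any(n >= g and member[n - g] for g in generators)
    # w_i = least element of S congruent with i modulo m
    apery = [next(n for n in range(bound + 1) if member[n] and n % m == i)
             for i in range(m)]
    kunz = [(apery[i] - i) // m for i in range(1, m)]
    return apery, kunz

def genus_frobenius(kunz, m):
    """Selmer's formulas: g(S) = sum_i x_i and F(S) = max_i (m x_i + i) - m."""
    genus = sum(kunz)
    frob = max(m * x + i for i, x in enumerate(kunz, start=1)) - m
    return genus, frob
```

For $S=\langle 3,5,7\rangle$ this gives $\Ap(S,3)=\{0,7,5\}$, Kunz-coordinates $x=(2,1)$, $\g(S)=3$ and $\F(S)=4$, in agreement with a direct count of the gaps $\{1,2,4\}$.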
Then, the $m$-irreducible numerical semigroups involved in such a decomposition can be found among the set of oversemigroups of $S$ with multiplicity $m$, that is, the set $$ \mathcal{O}_m(S) = \{S' \text{ numerical semigroup}: S \subseteq S' \text{ and } \m(S')=m\}. $$ If $x \in \N^{m-1}$ is the Kunz-coordinates vector of $S$, the set $\mathcal{O}_m(S)$ is one-to-one identified, in terms of Kunz-coordinates vectors, with the set of \emph{undercoordinates} of $x$, that is, with the set $$ \mathcal{U}_m(x) = \{x' \in \N^{m-1}: x' \text{ is a Kunz-coordinates vector and } x' \leq x\}, $$ where $\leq$ stands for the component-wise order in $\N^{m-1}$ (see \cite{ijac11}). Thus, the undercoordinates of a Kunz-coordinates vector are of the form $x-y$ with $y\in\N^{m-1}$ and such that $x-y$ is a Kunz-coordinates vector. Since we are interested in decomposing a numerical semigroup $S$ whose Kunz-coordinates vector is $x\in \N^{m-1}$ into $m$-irreducible numerical semigroups, by applying \eqref{kunz} to $x-y$, the conditions on $y$ for $x-y$ to be an $m$-irreducible Kunz-coordinates vector are: \begin{align} y_i \leqslant x_i - 1 & \mbox{ for all $i \in \{1, \ldots, m-1\}$,}\nonumber\\ y_i + y_j - y_{i+j} \leqslant x_i + x_j - x_{i+j} & \mbox{ for all $1 \leqslant i \leqslant j \leqslant m-1$, $i+j \leqslant m-1$,}\nonumber\\ y_i + y_j - y_{i+j-m} \leqslant x_i + x_j - x_{i+j-m} + 1 &\mbox{ for all $1 \leqslant i \leqslant j \leqslant m-1$, $i+j > m$},\label{polytope:x}\tag{${\rm P}^m(x)$}\\ \dsum_{i=1}^{m-1} y_i \in M(x, y),\label{eq:disj}\\ y \in \N^{m-1}_+,\nonumber \end{align} where $M(x, y) = \{\dsum_{i=1}^{m-1} x_i -m, \dsum_{i=1}^{m-1} x_i -m + 1,$ $\dsum_{i=1}^{m-1} x_i - $ $\left\lceil \frac{\max_i\{m(x_i-y_i) + i\} - m +1}{2} \right\rceil\}$. If condition \eqref{eq:disj} reads $\dsum_{i=1}^{m-1} y_i= \dsum_{i=1}^{m-1} x_i -m + 1$, the unique solution is $x-y=(1, \ldots, 1)$, whose associated numerical semigroup is $S_m=\{0,m,\rightarrow\}$.
$S_m$ is the maximal element in the set of numerical semigroups with multiplicity $m$. Hence, $S_m$ appears only in its own decomposition and in no other one. By solving the above system of diophantine inequalities, we obtain a decomposition of $S$ into $m$-irreducible numerical semigroups, in terms of its Kunz-coordinates vectors. To get a minimal decomposition, the entire set of solutions of \eqref{polytope:x} must be suitably filtered to avoid redundant solutions. For designing such a filter we use the following result: \begin{lemma}[\cite{ijac11}] \label{lemma:7} Let $S$ be a numerical semigroup with multiplicity $m$ and $S_1, \ldots,$ $S_n \in \mathcal{O}_m(S)$. Then $S=S_{1} \cap \cdots \cap S_{n}$ if and only if $\SG_m(S) \cap \left(\G(S_{1}) \cup \cdots \cup \G(S_{n})\right) = \SG_m(S)$. \end{lemma} From the above result, the problem of minimally decomposing a numerical semigroup $S$ with multiplicity $m$ into $m$-irreducible numerical semigroups is translated into the problem of finding a set of oversemigroups of $S$ that minimally covers the special gaps of $S$ larger than $m$. To check if a solution of \eqref{polytope:x} contains a specific special gap $h \in \SG_m(S)$, the following result, which analyzes the structure of the system in terms of the elements in $\SG_m(S)$, is proven in \cite{siam11}. \begin{lemma} \label{lemma:8} Let $S$ be a numerical semigroup with multiplicity $m$ and Kunz-coordinates vector $x\in \N^{m-1}$. Then, there exists a minimal decomposition of $S$ into $m$-irreducible numerical semigroups $S=S_1 \cap \cdots \cap S_k$ with the following properties: \begin{enumerate} \item $h_i = \F(S_i) \in \SG_m(S)$. \item If $x^i=x-y^i$ is the Kunz-coordinates vector of $S_i$, then $y^i_{h_{i} \pmod m}=0$, for $i=1, \ldots, k$.
\item $\dsum_{j=1}^{m-1} y^i_j = \left\{\begin{array}{rl} \dsum_{i=1}^{m-1} x_i - \left\lceil \dfrac{h_i+1}{2} \right\rceil & \mbox{if $h_i>2m$,}\\ \dsum_{i=1}^{m-1} x_i -m & \mbox{if $h_i<2m$,} \end{array}\right.$ for $i=1, \ldots, k$. \end{enumerate} \end{lemma} Finally, we recall the following useful result, which also appears in \cite{siam11}: \begin{prop} \label{prop:9} Let $x \in \N^{m-1}_+$ be a Kunz-coordinates vector, $y \in \N^{m-1}_+$ and $h \in \SG_m(x)$. If $x-y$ is an undercoordinate of $x$, then $h \in \G(x-y)$ if and only if $y_{h \pmod m} =0$. Furthermore, $\F(x-y)$ is the unique element in $\{h \in \SG_m(x): h \pmod m = \max\{i \in \{1, \ldots, m-1\}: y_i=0\}\}$. \end{prop} \section{3-irreducible numerical semigroups} \label{sec:2} In this section we analyze the set of $3$-irreducible numerical semigroups, that is, the set of numerical semigroups with multiplicity $3$ that cannot be expressed as an intersection of numerical semigroups with multiplicity $3$. Once this set is described, we give an explicit decomposition into $3$-irreducible numerical semigroups for any numerical semigroup with multiplicity three. It is clear that every irreducible numerical semigroup with multiplicity $m$ is also $m$-irreducible. However, in general, the converse is not true. First we prove that both notions are equivalent when the multiplicity is three. \begin{lemma} \label{lemma:10} Every $3$-irreducible numerical semigroup is irreducible. \end{lemma} \begin{proof} Let $S$ be a $3$-irreducible numerical semigroup. By Lemma \ref{lemma:1} one of the following conditions must be satisfied: \begin{enumerate} \item $S=\{x \in \N: x\ge 3\} \cup \{0\}$. In this case $S=\langle 3,4,5 \rangle$, which is irreducible ($2=\g(S) = \left\lceil \frac{\F(S)+1}{2}\right\rceil = \left\lceil \frac{2+1}{2}\right\rceil = 2$). \item $S=\{x \in \N: x\ge 3, x\neq \F(S)\} \cup \{0\}$. In this case, either $S=\langle 3, 4\rangle$ or $S=\langle 3, 5, 7\rangle$.
Both numerical semigroups are irreducible. \item $S$ is an irreducible numerical semigroup. In this case we are done. \end{enumerate} \end{proof} By the above lemma, the conditions for a numerical semigroup with multiplicity $3$ to be $3$-irreducible are also valid to check whether the numerical semigroup is irreducible. Furthermore, as a consequence of Proposition 1 in \cite{ijac11} and the above result, any numerical semigroup with multiplicity $3$ can be minimally decomposed into irreducible numerical semigroups with multiplicity $3$. Let $S$ be a numerical semigroup with multiplicity $3$. Its Kunz-coordinates vector is a positive integer vector with two components $x=(x_1, x_2) \in \N^2$. Then, $S$ is $3$-irreducible if $(x_1, x_2)$ is a solution of system \eqref{kunz} with the added constraint $x_1+x_2 \in \left\{2, 3, \left\lceil \frac{\max\{3x_1+1,\, 3x_2+2\}- 2}{2}\right\rceil\right\}$. Also, by Lemma \ref{lemma:3}, $S$ is $3$-irreducible if its set of special gaps larger than the multiplicity has $0$ or $1$ elements. In the following result we explicitly describe the set of special gaps of $S$ greater than $3$. \begin{prop} \label{prop:11} Let $S$ be a numerical semigroup with multiplicity $3$ and Kunz-coordinates vector $x=(x_1, x_2)$. Then, the set of special gaps larger than $3$ is: $$ \SG_3(S) = \left\{\begin{array}{cl} \emptyset & \mbox{if $x=(1,1)$},\\ \{3x_1-2\} & \mbox{if $2x_1 \geq x_2+2$, $x_1\geq 2$ and $2x_2\leq x_1$},\\ \{3x_2-1\} & \mbox{if $2x_2\geq x_1+1$, $x_2\geq 2$ and $2x_1 \leq x_2+1$,}\\ \{3x_1-2, 3x_2-1\} & \mbox{if $2x_1 \geq x_2+2$, $2x_2 \geq x_1+1$ and $x_1, x_2 \geq 2$}.\end{array}\right. $$ \end{prop} \begin{proof} By the description \eqref{sg} of $\SG_3(S)$ in terms of the Kunz-coordinates vector, we only need to check whether $3(x_1-1)+1 = 3x_1-2$ and $3(x_2-1)+2=3x_2-1$ are special gaps larger than the multiplicity. First, those elements must be greater than $3$, i.e., $x_1 \geq 2$ and $x_2\geq 2$.
\begin{itemize} \item For $h_1=3x_1-2$, $h_1\in \SG_3(S)$ if and only if $x_1 + x_1 > x_2$ (first condition in \eqref{sg}) and $2\left(3(x_1-1)+1\right) \ge 3x_2 +2$, since $2h_1 \equiv 2 \pmod 3$. Equivalently, if $2x_1 > x_2$ and $2x_1 \ge x_2+2$. Clearly, these conditions imply $x_1\geq 2$. \item For $h_2=3x_2-1$, $h_2\in \SG_3(S)$ if and only if $x_2 + x_2 > x_1-1$ and $2\left(3(x_2-1)+2\right) \ge 3x_1 +1$, since $2h_2 \equiv 1 \pmod 3$. Equivalently, if $2x_2 > x_1-1$ and $2x_2 \ge x_1+1$. We also have that $x_2\geq 2$. \end{itemize} By combining both possibilities we obtain the result. \end{proof} From the above result, by applying Lemma \ref{lemma:3}, we can completely characterize, in terms of their Kunz-coordinates vectors, those numerical semigroups with multiplicity $3$ that are $3$-irreducible. \begin{theo} \label{theo:10} Let $S$ be a numerical semigroup with multiplicity $3$ and Kunz-coordinates vector $x=(x_1, x_2) \in \N^2$. Then, $S$ is irreducible if and only if one of the following conditions holds: \begin{enumerate} \item $x_1=x_2=1$, \item $2x_1 \geq x_2+2$ and $2x_2\leq x_1$. \item $2x_2 \geq x_1+1$ and $2x_1\leq x_2+1$. \end{enumerate} \end{theo} \begin{ex}[Numerical semigroups generated by a generalized arithmetic sequence] \label{ex:11} For positive integers $h, d, k$ such that $k \leq 2$ and $\gcd(d, 3)=1$, the numerical semigroup with multiplicity $3$, $S=\langle 3, 3h+d, 3h+2d, \ldots, 3h+kd\rangle$, is said to be generated by a generalized arithmetic sequence. In \cite{ramirezalfonsin} it is proved that $\Ap(S, 3)=\{0, 3h\left\lceil\dfrac{1}{k}\right\rceil + d, 3h\left\lceil\dfrac{2}{k}\right\rceil + 2d\}$. Then, if $d=3D+1$ ($d\equiv 1 \pmod 3$), the Kunz-coordinates vector of $S$ is $x=(h\left\lceil \frac{1}{k}\right\rceil + D, h\left\lceil \frac{2}{k}\right\rceil + 2D)$, and if $d = 3D+2$ ($d \equiv 2 \pmod 3$), $x=(h\left\lceil \frac{2}{k}\right\rceil + 2D+1, h\left\lceil \frac{1}{k}\right\rceil + D)$.
Then, $S$ is irreducible, by applying Theorem \ref{theo:10}, if and only if one of the following conditions holds: \begin{itemize} \item $k=1$ ($S=\langle 3, 3h+d\rangle$). \item $k= 2$ and $h=1$ (in this case, $S=\langle 3, 3+d, 3+2d \rangle$; in particular, for $d=1$, $S=\{0, 3, \rightarrow\}$). \end{itemize} Furthermore, if $k=1$, by Selmer's formulas $\F(S)=6h+2d-3$, which is odd, so $S=\langle 3, 3h+d\rangle$ is symmetric. If $k=2$ and $h=1$, $\F(S)=2d$, so $S$ is always pseudosymmetric in this case. This result has been previously proven by Matthews in \cite{mathews04} and partially by Estrada and L\'opez in \cite{estrada94}. \end{ex} In what follows we show how to decompose a numerical semigroup with multiplicity $3$ that is not irreducible into irreducible numerical semigroups with multiplicity three (the decomposition of an irreducible numerical semigroup into irreducible numerical semigroups is trivial). Assume that $S$ is a numerical semigroup that is not irreducible. By Lemma \ref{lemma:3}, $\#\SG_3(S)=2$, and then, if $x=(x_1, x_2) \in \N^2$ is the Kunz-coordinates vector of $S$, by Proposition \ref{prop:9}, $\SG_3(S)=\{3x_1-2, 3x_2-1\}$ with $2x_1 \geq x_2+2$, $2x_2 \geq x_1+1$ and $x_1, x_2 \geq 2$. First, we characterize the set of $3$-irreducible oversemigroups of $S$ in terms of their Kunz-coordinates vectors. We denote here by $\mathcal{I}_3(S)$ the set of undercoordinates of the Kunz-coordinates vector of $S$ that are $3$-irreducible. \begin{theo} \label{theo:12} Let $S$ be a numerical semigroup with multiplicity $3$ and such that $S$ is not irreducible. 
If $x=(x_1, x_2) \in \N^2$ is the Kunz-coordinates vector of $S$, then the set of irreducible undercoordinates of $S$ with multiplicity $3$ is: $$ \mathcal{I}_3(S) = \widehat{\mathcal{I}}_3(S) \cup \left\{\begin{array}{rl} \{(1,1), (x_1, \frac{x_1-1}{2}), (\frac{x_2}{2}, x_2)\} & \mbox{ if $x_1$ is odd and $x_2$ is even,}\\ \{(1,1), (x_1, \frac{x_1-1}{2}), (\frac{x_2+1}{2}, x_2)\} & \mbox{ if $x_1, x_2$ are odd},\\ \{(1,1), (x_1, \frac{x_1}{2}), (\frac{x_2}{2}, x_2)\} & \mbox{ if $x_1, x_2$ are even,}\\ \{(1,1), (x_1, \frac{x_1}{2}), (\frac{x_2+1}{2}, x_2)\} & \mbox{ if $x_1$ is even and $x_2$ is odd,} \end{array}\right. $$ where $\widehat{\mathcal{I}}_3(S) = \left\{\begin{array}{rl} \{(2,1)\} & \mbox{if $x_1 \geq 2$, $x_2=1$,}\\ \{(1,2)\} & \mbox{if $x_2 \geq 2$, $x_1=1$,}\\ \{(1,2), (2,1)\} & \mbox{if $x_1, x_2 \geq 2$.} \end{array}\right.$ \end{theo} \begin{proof} Let $S'$ be an irreducible oversemigroup of $S$. Then, it has Kunz-coordinates vector $(x_1', x_2') =(x_1, x_2)-(y_1, y_2)$ verifying the Diophantine inequalities in \eqref{polytope:x}. For $m=3$, this system is: \begin{align*} 2y_1-y_2 &\leq 2x_1-x_2\\ 2y_2-y_1 &\leq 2x_2-x_1+1 \end{align*} Let $S'$ be an irreducible oversemigroup of $S$ with multiplicity $3$. It is clear that at least one special gap in $\SG_3(S)$ does not belong to $S'$; otherwise $S'$ would have the same special gaps larger than $3$ as $S$, and hence $S'=S$. We can distinguish here two cases: \begin{enumerate} \item $h_1=3x_1-2 \not\in S'$. By Proposition \ref{prop:9}, $h_1 \not\in S'$ if and only if $y_1=0$, and since $S'$ is $3$-irreducible, $y_1+y_2 = x_1+x_2 - \left\lceil\frac{3x_1-1}{2}\right\rceil$, that is, $y_2 = x_1+x_2 - \left\lceil\frac{3x_1-1}{2}\right\rceil$. \item $h_2=3x_2-1 \not\in S'$. By Proposition \ref{prop:9}, $h_2 \not\in S'$ if and only if $y_2=0$, and since $S'$ is $3$-irreducible, $y_1+y_2 = x_1+x_2 - \left\lceil\frac{3x_2}{2}\right\rceil$, that is, $y_1 = x_1+x_2 - \left\lceil\frac{3x_2}{2}\right\rceil$. 
\end{enumerate} Also, if $x_1 \geq 2$, $(2,1) \leq (x_1, x_2)$ is an irreducible undercoordinate of $S$, and if $x_2 \geq 2$, $(1,2) \leq (x_1,x_2)$ is in $\mathcal{I}_3(S)$. Then, the set of irreducible oversemigroups of $S$ is given by the Kunz-coordinates vectors $(x_1, \left\lceil\frac{3x_1-1}{2}\right\rceil - x_1)$, $(\left\lceil\frac{3x_2}{2}\right\rceil - x_2, x_2)$, $(1,2)$, and $(2,1)$. The result follows by expanding the ceiling part of these vectors and by adding the Kunz-coordinates vector $(1,1)$, which corresponds to an irreducible oversemigroup of any numerical semigroup with multiplicity $3$. \end{proof} We illustrate the usage of the above result in the following example. \begin{ex} Let $S = \langle 3, 10, 14 \rangle$. $S$ is a numerical semigroup with multiplicity $3$ and $\Ap(S, 3)=\{0, 10, 14\}$. Then, its Kunz-coordinates vector is $x=(\frac{10-1}{3}, \frac{14-2}{3})=(3,4)$. Since $3$ is odd, $4$ is even, and $3, 4 \geq 2$, the set of irreducible undercoordinates of $S$ is $\{(1,2), (2,1)\} \cup \{(1,1), (3, \frac{3-1}{2}), (\frac{4}{2}, 4)\}=\{(1,1), (1,2), (2,1), (3,1), (2,4)\}$. Consequently, the set of irreducible oversemigroups of $S$ with multiplicity $3$ is: $$ \{\langle 3, 4, 5\rangle, \langle 3, 4 \rangle, \langle 3, 5, 7 \rangle, \langle 3, 5 \rangle, \langle 3, 7 \rangle\} $$ \end{ex} As a direct consequence of Theorem \ref{theo:12} and the identification between Kunz-coordinates vectors and numerical semigroups, we are able to describe minimal decompositions into $3$-irreducible numerical semigroups for a numerical semigroup with multiplicity $3$. \begin{cor} \label{cor:13} Let $S$ be a numerical semigroup with multiplicity $3$ and $(x_1, x_2) \in \N^2$ its Kunz-coordinates vector. 
Then, either $S$ is irreducible or it can be (minimally) decomposed into $3$-irreducible numerical semigroups as: $$ S= \left\{\begin{array}{rl} \langle 3, \frac{3x_1+1}{2} \rangle \cap \langle 3, \frac{3x_2+2}{2}\rangle & \mbox{ if $x_1$ is odd and $x_2$ is even,}\\ \langle 3, \frac{3x_1+1}{2} \rangle \cap \langle 3, \frac{3x_2+5}{2}, 3x_2+2 \rangle & \mbox{ if $x_1, x_2$ are odd,}\\ \langle 3, 3x_1+1, \frac{3x_1+4}{2} \rangle \cap \langle 3, \frac{3x_2+2}{2} \rangle & \mbox{ if $x_1, x_2$ are even,}\\ \langle 3, 3x_1+1, \frac{3x_1+4}{2} \rangle \cap \langle 3, \frac{3x_2+5}{2}, 3x_2+2 \rangle & \mbox{ if $x_1$ is even and $x_2$ is odd,} \end{array}\right. $$ \end{cor} \begin{cor} \label{cor:14} Let $S$ be a numerical semigroup with multiplicity $3$. The decompositions of $S$ into irreducible numerical semigroups given in Corollary \ref{cor:13} are unique. \end{cor} \begin{proof} The proof follows by noting that $\SG_3(S)=\SG(S)$ when $S \neq \{0, 3, \rightarrow\}$, and then, if a numerical semigroup is not irreducible (in which case $\#\SG(S)=2$), a decomposition of $S$ into irreducible numerical semigroups must consist of two numerical semigroups whose Frobenius numbers are the two special gaps in $\SG_3(S)$. Furthermore, since those semigroups must be irreducible, their genus is also fixed. By Corollary 4 in \cite{rosales05}, there is only one numerical semigroup with fixed genus and Frobenius number. Hence, the decomposition is unique. \end{proof} \begin{ex} \label{ex:15} Let $S=\langle 3, 23, 40 \rangle$. $S$ is a numerical semigroup with multiplicity $3$ and $\Ap(S, 3)=\{0, 40, 23\}$. Hence, its Kunz-coordinates vector is $x=(\frac{40-1}{3}, \frac{23-2}{3})=(13, 7)$. Since $x$ does not verify any of the conditions of Theorem \ref{theo:10}, $S$ is not irreducible. 
Then, since $13$ and $7$ are both odd, the minimal decomposition of $S$ into irreducible numerical semigroups is $$ S= \langle 3, \frac{3\times 13 +1}{2} \rangle \cap \langle 3, \frac{3\times 7+5}{2}, 3\times 7 +2 \rangle = \langle 3, 20\rangle \cap \langle 3, 13, 23 \rangle $$ \end{ex} The notions of $m$-symmetry and $m$-pseudosymmetry of a numerical semigroup with multiplicity $m$ are also defined in \cite{ijac11}, extending the previous notions of symmetry and pseudosymmetry (see \cite{springer}). A numerical semigroup $S$ with multiplicity $m$ is $m$\textit{-symmetric} if $S$ is $m$-irreducible and $\F(S)$ is odd. On the other hand, $S$ is $m$\textit{-pseudosymmetric} if $S$ is $m$-irreducible and $\F(S)$ is even. Another well-known family is that of the numerical semigroups that can be decomposed into $m$-symmetric numerical semigroups (ISYM-semigroups). For the case when $m=3$, if $S$ is a numerical semigroup with multiplicity $3$ we can distinguish two cases: either $S$ is a $3$-irreducible numerical semigroup or it is not. In the first case, $S$ is an ISYM-semigroup if $S$ is $3$-symmetric; in the second case, $S$ is an ISYM-semigroup if the two $3$-irreducible numerical semigroups in the decomposition (Corollary \ref{cor:13}) are $3$-symmetric. Since the Frobenius numbers of both $3$-irreducible numerical semigroups in the decomposition are the elements in $\SG_3(S)$, $S$ is an ISYM-semigroup if and only if all the elements in $\SG_3(S)$ are odd. This results in the following corollaries. \begin{cor} \label{cor:16} Let $S$ be a numerical semigroup with multiplicity $3$ and Kunz-coordinates vector $x=(x_1, x_2) \in \N^2$. Then, $S$ is decomposable as an intersection of symmetric numerical semigroups with multiplicity $3$ if and only if one of the following conditions holds: \begin{enumerate} \item $S$ is a $3$-symmetric numerical semigroup. \item $S$ is not $3$-irreducible, $x_1$ is odd and $x_2$ is even. 
\end{enumerate} \end{cor} An analogous treatment can be done to analyze those numerical semigroups with multiplicity $3$ that can be decomposed as an intersection of $3$-pseudosymmetric numerical semigroups. \begin{cor} \label{cor:17} Let $S$ be a numerical semigroup with multiplicity $3$ and Kunz-coordinates vector $x=(x_1, x_2) \in \N^2$. Then, $S$ is decomposable as an intersection of pseudosymmetric numerical semigroups with multiplicity $3$ if and only if one of the following conditions holds: \begin{enumerate} \item $S$ is a $3$-pseudosymmetric numerical semigroup. \item $S$ is not $3$-irreducible, $x_1$ is even and $x_2$ is odd. \end{enumerate} \end{cor} \section{4-irreducible numerical semigroups} \label{sec:3} In this section we study the set of irreducible numerical semigroups with multiplicity four. In this case, we have the following result. \begin{lemma} \label{lemma:18} Every $4$-irreducible numerical semigroup different from $\{0, 4, \rightarrow\}$ is irreducible. \end{lemma} \begin{proof} Let $S$ be a $4$-irreducible numerical semigroup. By Lemma \ref{lemma:1} one of the following conditions holds: \begin{enumerate} \item $S=\{x \in \N: x\ge m, x\neq \F(S)\} \cup \{0\}$. In this case, either $S=\langle 4, 5, 6\rangle$, $S=\langle 4, 5, 7\rangle$, or $S=\langle 4, 6, 7\rangle$. All of them are irreducible. \item $S$ is an irreducible numerical semigroup. In this case we are done. \end{enumerate} The semigroup $\{0, 4, \rightarrow\}$ is $4$-irreducible but it is not irreducible since $3 = \g(S) \neq \left\lceil\frac{\F(S)+1}{2}\right\rceil = 2$. \end{proof} Although the complete set of $4$-irreducible numerical semigroups does not coincide with the set of irreducible numerical semigroups with multiplicity $4$, as happens in the case when the multiplicity is $3$, the difference is only one element, $\widehat{S}=\{0, 4, \rightarrow\}$, which is closed under decompositions, i.e., it only appears in its own decomposition into $4$-irreducible numerical semigroups. 
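The case analysis above can be checked computationally. The following Python sketch (the helper names and the brute-force bound are ours, not from the paper) uses the standard characterization that a numerical semigroup $S$ is irreducible if and only if $\g(S) = \left\lceil\frac{\F(S)+1}{2}\right\rceil$, the criterion invoked for $\{0, 4, \rightarrow\}$:

```python
# Sketch: verify irreducibility of small numerical semigroups by brute force,
# via the characterization g(S) = ceil((F(S)+1)/2).  Names are illustrative.
from math import ceil, gcd
from functools import reduce

def semigroup(gens, bound=200):
    """Elements of <gens> up to `bound`, assuming gcd(gens) = 1."""
    assert reduce(gcd, gens) == 1
    reachable = [False] * (bound + 1)
    reachable[0] = True
    for n in range(1, bound + 1):
        reachable[n] = any(n >= g and reachable[n - g] for g in gens)
    return [n for n in range(bound + 1) if reachable[n]]

def frobenius_and_genus(gens, bound=200):
    elems = set(semigroup(gens, bound))
    gaps = [n for n in range(bound + 1) if n not in elems]
    return max(gaps), len(gaps)   # (F(S), g(S)), valid if bound > F(S)

def is_irreducible(gens):
    F, g = frobenius_and_genus(gens)
    return g == ceil((F + 1) / 2)

# The three semigroups in the proof above are irreducible:
for gens in ([4, 5, 6], [4, 5, 7], [4, 6, 7]):
    print(gens, is_irreducible(gens))    # True in all three cases
# {0, 4, ->} = <4, 5, 6, 7> is 4-irreducible but not irreducible:
print(is_irreducible([4, 5, 6, 7]))      # False
```

The same check can be reused for any of the small semigroups appearing in the examples of this section.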
Throughout this section we assume that the numerical semigroups with multiplicity $4$ considered are different from $\widehat{S}$. Note also that the above result is no longer true when the multiplicity is greater than $4$. For the case when $m=5$, $\langle 5,6,8,9 \rangle$ is $5$-irreducible but it is not irreducible. In the following lemma we describe the set of special gaps larger than $4$ of a numerical semigroup with multiplicity $4$. \begin{lemma} \label{lemma:19} Let $S$ be a numerical semigroup with multiplicity $4$ and Kunz-coordinates vector $x=(x_1, x_2, x_3) \in \N^3$. Then, the set of special gaps larger than $4$ is: {\scriptsize $$ \SG_4(S) = \left\{\begin{array}{cl} \{\} & \mbox{if $x_1=x_2=x_3=1$},\\ \{4x_1-3\} & \mbox{ if $x_1+x_2 \geq x_3+1$, $2x_1\geq x_2+2$, and $x_2+x_3 \leq x_1-1$,}\\ \{4x_1-3\} & \mbox{ if $x_1+x_2 \geq x_3+1$, $2x_1\geq x_2+2$, $x_2=1$ and $2x_3 \leq x_2$,}\\ \{4x_1-3\} & \mbox{ if $x_1+x_2 \geq x_3+1$, $2x_1\geq x_2+2$, $x_2=1$ and $x_2+x_3 \leq x_1-1$,}\\ \{4x_1-3\} & \mbox{ if $x_1=2$, $x_2=x_3=1$,}\\ \{4x_2-2\} & \mbox{ if $x_1+x_2 \geq x_3+1$, $x_2+x_3\geq x_1$, $2x_1\leq x_2+1$ and $2x_3\leq x_2$,}\\ \{4x_3-1\} & \mbox{ if $x_2+x_3 \geq x_1$, $2x_3\geq x_2+1$, and $x_1+x_2 \leq x_3$,}\\ \{4x_1-3, 4x_2-2\} & \mbox{ if $x_1+x_2 \geq x_3+1$, $2x_1\geq x_2+2$, $x_2+x_3\geq x_1$, $2x_3\geq x_2+1$, $x_1\geq 2$, and $x_2\geq 2$}\\ \{4x_1-3, 4x_3-1\} & \mbox{ if $x_1+x_2 \geq x_3+1$, $2x_1\leq x_2+1$, $x_2+x_3\geq x_1$, $2x_3\geq x_2+1$, $x_1\geq 2$, and $x_2\geq 2$}\\ \{4x_1-3, 4x_3-1\} & \mbox{ if $x_2=1$, $x_3 \leq x_1$, $x_3 \geq x_1-1$, $x_1\geq 2$, and $x_3\geq 2$,}\\ \{4x_2-2, 4x_3-1\} & \mbox{ if $x_1+x_2 \geq x_3+1$, $x_2+x_3\geq x_1$, $2x_3\geq x_2+1$, $2x_1\leq x_2+1$, $x_2\geq 2$, and $x_3\geq 2$}\\ \{4x_1-3, 4x_2-2, 4x_3-1\} & \mbox{ if $x_1+x_2 \geq x_3+1$, $2x_1\geq x_2+2$, $x_2+x_3\geq x_1$, $2x_3\geq x_2+1$, $x_2 \geq 2$, and $x_3 \geq 2$.} \end{array}\right. 
$$} \end{lemma} \begin{proof} The result follows by applying \eqref{sg} to compute the set $\SG_4(S)$ in terms of the Kunz-coordinates vector. \end{proof} Also, analogously to the case with multiplicity $3$, we have the following result concerning the irreducibility of a numerical semigroup with multiplicity $4$. \begin{cor} \label{cor:20} Let $S$ be a numerical semigroup with multiplicity $4$ and $\Ap(S, 4)=\{0, 4x_1+1, 4x_2+2, 4x_3+3\}$. Then, $S$ is $4$-irreducible if and only if one of the following conditions holds: \begin{enumerate} \item $x_1=x_2=x_3=1$, \item $x_1+x_2 \geq x_3+1$, $2x_1\geq x_2+2$, and $x_2+x_3 \leq x_1-1$, \item $x_1+x_2 \geq x_3+1$, $2x_1\geq x_2+2$, $x_2=1$ and $2x_3 \leq x_2$, \item $x_1+x_2 \geq x_3+1$, $2x_1\geq x_2+2$, $x_2=1$ and $x_2+x_3 \leq x_1-1$, \item $x_1=2$, $x_2=x_3=1$, \item $x_1+x_2 \geq x_3+1$, $x_2+x_3\geq x_1$, $2x_1\leq x_2+1$ and $2x_3\leq x_2$, \item $x_2+x_3 \geq x_1$, $2x_3\geq x_2+1$, and $x_1+x_2 \leq x_3$. \end{enumerate} \end{cor} In the following example we analyze irreducible numerical semigroups with multiplicity four that are generated by generalized arithmetic sequences, by applying the above corollary. \begin{ex} \label{ex:27} For $h, d$ positive integers and $k \in \{1,2,3\}$ such that $\gcd(d, 4)=1$, the numerical semigroup $S=\langle 4, 4h+d, 4h+2d,\ldots, 4h+kd\rangle$ is said to be generated by a generalized arithmetic sequence. In \cite{ramirezalfonsin} it is proved that $\Ap(S, 4)=\{0, 4h\left\lceil\dfrac{1}{k}\right\rceil + d, 4h\left\lceil\dfrac{2}{k}\right\rceil + 2d, 4h\left\lceil\dfrac{3}{k}\right\rceil + 3d\}$. 
Distinguishing the possible values of $k$ and $d \pmod 4$, the Kunz-coordinates vector of $S$ is: $$ \left\{ \begin{array}{rl} (h+D, 2h+2D, 3h+3D) & \mbox{if $k=1$ and $d \equiv 1 \pmod 4$,}\\ (3h+3D+2, 2h+2D+1, h+D) & \mbox{if $k=1$ and $d \equiv 3 \pmod 4$,}\\ (h+D, h+2D, 2h+3D) & \mbox{if $k=2$ and $d \equiv 1 \pmod 4$,}\\ (2h+3D+2, h+2D+1, h+D) & \mbox{if $k=2$ and $d \equiv 3 \pmod 4$,}\\ (h+D, h+2D, h+3D) & \mbox{if $k=3$ and $d \equiv 1 \pmod 4$,}\\ (h+3D+2, h+2D+1, h+D) & \mbox{if $k=3$ and $d \equiv 3 \pmod 4$,}\\ \end{array}\right. $$ where $D = \frac{d - d \pmod 4}{4}$. Corollary \ref{cor:20} allows us to decide the irreducibility of those numerical semigroups just by checking whether some inequalities are satisfied by the above Kunz-coordinates vectors. By writing down the inequalities, it is easy to check that when $k=1$ or $k=2$, $S$ is always irreducible, while for $k=3$ the inequalities never hold, so $S$ is not irreducible. Furthermore, by Selmer's formulas, $\F(S)=4(3h-1) + 3d$ if $k=1$ and $\F(S)=4(2h-1) + 3d$ if $k=2$. Since $d$ is odd, in these cases $\F(S)$ is always odd, and then $S$ is symmetric. This result was also proved by Matthews in \cite{mathews04}. \end{ex} In case the numerical semigroup is not $4$-irreducible, that is, if none of the conditions of Corollary \ref{cor:20} holds, we can find a decomposition into $4$-irreducible numerical semigroups involving at least two $4$-irreducible numerical semigroups. If $\#\SG_4(S)=2$, then a minimal decomposition will be given as an intersection of two $4$-irreducible numerical semigroups, while if $\#\SG_4(S)=3$, such a decomposition may consist of two or three $4$-irreducible numerical semigroups. First we analyze the case when the number of special gaps larger than $4$ is two. Then, it is enough to look for two $4$-irreducible numerical semigroups such that each of them has one of the special gaps as its Frobenius number. 
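The computations in this and the next section all start from the Apéry set and the Kunz-coordinates vector of $S$, both of which can be obtained by brute force from a generating set. A minimal Python sketch (helper names and the search bound are ours, not from the paper):

```python
# Sketch: Apery set and Kunz-coordinates vector of a numerical semigroup,
# computed by brute force over an initial segment of the naturals.
def elements(gens, bound):
    reachable = [False] * (bound + 1)
    reachable[0] = True
    for n in range(1, bound + 1):
        reachable[n] = any(n >= g and reachable[n - g] for g in gens)
    return reachable

def apery(gens, m, bound=500):
    """Ap(S, m): for each residue class mod m, the least element of S in it."""
    reach = elements(gens, bound)
    return [next(n for n in range(i, bound + 1, m) if reach[n]) for i in range(m)]

def kunz(gens, m, bound=500):
    """Kunz coordinates x with Ap(S, m) = {0, m*x_1 + 1, ..., m*x_{m-1} + (m-1)}."""
    ap = apery(gens, m, bound)
    return tuple((ap[i] - i) // m for i in range(1, m))

print(apery([4, 31, 53], 4))   # [0, 53, 62, 31]
print(kunz([4, 31, 53], 4))    # (13, 15, 7)
print(kunz([3, 23, 40], 3))    # (13, 7)
```

The printed values match the Kunz-coordinates vectors computed by hand in the examples of this paper for $S=\langle 4, 31, 53\rangle$ and $S=\langle 3, 23, 40\rangle$.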
When the cardinality of $\SG_4(S)$ is two, to decompose $S$ into $4$-irreducible numerical semigroups we have to search for irreducible oversemigroups of $S$ with Frobenius number each of the two special gaps of $S$. \begin{theo} \label{theo:21} Let $S$ be a numerical semigroup with multiplicity $4$ with Kunz-coordinates vector $x = (x_1, x_2, x_3) \in \N^3\backslash\{(1,1,1)\}$, and let $S'$ be an irreducible oversemigroup of $S$ with Frobenius number $h_i=4(x_i-1)+i$. Then, the Kunz-coordinates vector of $S'$ is of the form $x-y \in \N^3$, with $y \in \N^3$ such that: $$ \begin{array}{rl} y_1=0, y_2 \in [ x_2-x_1, x_2-1] \mbox{ and } y_3=-x_1+x_2+x_3+1-y_2 & \mbox{if $i=1$,}\\ y_1 \in [x_1-x_3+1,x_1-\frac{x_3}{3}], y_2=0, y_3=x_1-x_2+x_3-y_1 & \mbox{if $i=2$,}\\ y_1\in [x_1-x_3+1,x_1-\frac{x_3}{3}], y_2=x_1+x_2-x_3-y_1, y_3=0 & \mbox{if $i=3$.} \end{array} $$ \end{theo} \begin{proof} Any oversemigroup of $S$ has Kunz-coordinates vector of the form $x^k=x-y^k$, where $y^k \in \N^3$ verifies the inequalities in \eqref{polytope:x} for $m=4$, that is, \begin{align*} 2y_1^k - y_2^k &\leq 2x_1-x_2\\ y_1^k + y_2^k - y_3^k &\leq x_1+x_2-x_3\\ y_2^k + y_3^k - y_1^k& \leq x_2+x_3-x_1+1\\ 2y_3^k - y_2^k &\leq 2x_3-x_2 +1 \end{align*} and such that those oversemigroups are irreducible with $\F(x^k)=4(x_k-1)+k$, that is, $y^k_k =0$ and $$ y_1^k+y_2^k+y_3^k = x_1+x_2+x_3 - \left\lceil \frac{4(x_k-1)+k+1}{2} \right\rceil $$ In what follows we analyze each $y^k$, for $k=1,2,3$: \begin{enumerate} \item For $k=1$ the conditions are, by fixing $y_1^1=0$ and $y_3^1 = -x_1+x_2+x_3+1-y_2^1$: \begin{align} y^1_2 &\leq x_2-1\label{1.1}\\ y^1_2 &\geq -x_1+x_2+2\label{1.2}\\ y^1_2 &\geq x_2-2x_1\label{1.3}\\ y^1_2 &\leq x_2\label{1.4}\\ y^1_2 &\geq x_2 - \frac{2x_1+1}{2}\label{1.5}\\ y^1_2 \in \N\label{1.6} \end{align} Moreover, constraint \eqref{1.3} is redundant with constraint \eqref{1.5}, constraint \eqref{1.2} is redundant when imposing \eqref{1.1} and \eqref{1.5}, and also \eqref{1.4} by 
\eqref{1.1}, so finally the lattice is $$ y_2 \in [ x_2-\frac{2x_1+1}{2}, x_2-1] \cap \N = [ x_2-x_1, x_2-1] \cap \N $$ \item For $k=2$, $y^2_2=0$ and $y_3^2 = x_1-x_2+x_3-y_1^2$, so all the constraints can be written in terms of $y_1^2$. Then, the conditions are: \begin{align} y^2_1 &\leq x_1-1\label{2.1}\\ y^2_1 &\geq x_1-x_2+1\label{2.2}\\ y^2_1 &\leq x_1-\frac{x_2}{2}\label{2.3}\\ y^2_1 &\leq x_1\label{2.4}\\ y^2_1 &\geq x_1 - x_2\label{2.5}\\ y^2_1 &\geq x_1 - \frac{x_2+1}{2}\label{2.6}\\ y^2_1 \in \N\label{2.7} \end{align} By an analogous discarding procedure, the above constraints can be written as the integer points inside the interval $[x_1-\frac{x_2+1}{2}, x_1-\frac{x_2}{2}]$. Then, $h_1$ is in $\G(x-y)$ if and only if $y_1=0$. We can choose $y_1=0$ in the above system if $0 \geq x_1-\frac{x_2+1}{2}$ (note that $0\geq x_1-\frac{x_2}{2}$ is always true by the conditions of being a Kunz-coordinates vector). Then, the condition is $2x_1 \leq x_2+1$. On the other hand, $h_3 \in \G(x-y)$ if $y_3=0$ is an eligible choice, that is, if $x_1-x_2+x_3-y_1 = 0$ is a solution. This is equivalent to $x_1-x_2+x_3 \leq x_1-\frac{x_2}{2}$, that is, to $x_2 \geq 2x_3$. \item Finally, for $h_3=4(x_3-1)+3$, the constraints can be written, by fixing $y_3=0$ and $y_2=x_1+x_2-x_3-y_1$, as: \begin{align} y_1 &\leq x_1-1\label{3.1}\\ y_1 &\geq x_1-x_3+1\label{3.2}\\ y_1 &\leq x_1-\frac{x_3}{3}\label{3.3}\\ y_1 &\geq x_1-x_3\label{3.4}\\ y_1 &\leq x_1+x_3+1\label{3.5}\\ y_1 \in \N\label{3.6} \end{align} The set of solutions for $y_1$ is $[x_1-x_3+1,x_1-\frac{x_3}{3}] \cap \N$. Then $h_1 \in \G(x-y)$ if and only if $0 \geq x_1-x_3 +1$, that is, when $x_3 \geq x_1 + 1$. And $h_2 \in \G(x-y)$ if and only if $x_1+x_2-x_3 \leq x_1-\frac{x_3}{3}$, that is, when $2x_3 \geq 3x_2$. 
\end{enumerate} \end{proof} As a consequence of the above theorem, minimal decompositions into irreducible numerical semigroups with multiplicity $4$ can be described when the number of special gaps larger than $4$ is two. \begin{cor} \label{cor:22} Let $S$ be a numerical semigroup with multiplicity $4$ and Kunz-coordinates vector $x=(x_1, x_2, x_3) \in \N^3\backslash\{(1,1,1)\}$. Then, if $\SG_4(S)=\{4(x_i-1)+i, 4(x_j-1)+j\}$, a minimal decomposition of $S$ into irreducible numerical semigroups with multiplicity four is {\small $$ S = \left\{\begin{array}{rl} \langle 4, 4x_1+ 1, 4(x_2-y_2)+2, 4(x_1-x_2-1+y_2) + 3 \rangle \cap \langle 4, 4(x_1-y_1)+1, 4x_2+2, 4(x_2-x_1+y_1)+3 \rangle & \mbox{ if $i=1$, $j=2$}\\ \langle 4, 4x_1+ 1, 4(x_2-y_2)+2, 4(x_1-x_2-1+y_2) + 3 \rangle \cap \langle 4, 4(x_1-y_3)+1, 4(x_3-x_1+y_3)+2, 4x_3+3 \rangle & \mbox{ if $i=1$, $j=3$}\\ \langle 4, 4(x_1-y_1)+1, 4x_2+2, 4(x_2-x_1+y_1)+3 \rangle \cap \langle 4, 4(x_1-y_3)+1, 4(x_3-x_1+y_3)+2, 4x_3+3 \rangle & \mbox{ if $i=2$, $j=3$} \end{array}\right. $$} with $y_1 \in [x_1-x_3+1,x_1-\frac{x_3}{3}]\cap \N$, $y_2 \in [ x_2-x_1, x_2-1]\cap \N$ and $y_3\in [x_1-x_3+1,x_1-\frac{x_3}{3}] \cap \N$. \end{cor} \begin{proof} It follows directly by applying Theorem \ref{theo:21} and Lemma \ref{lemma:7}. \end{proof} \begin{ex} \label{ex:22} Let $S = \langle 4, 31, 53 \rangle$. $S$ is a numerical semigroup with multiplicity $4$ and $\Ap(S,4)=\{0,53,62,31 \}$. Then, its Kunz-coordinates vector is $x=(\frac{53-1}{4},\frac{62-2}{4}, \frac{31-3}{4}) = (13, 15, 7)$. By Lemma \ref{lemma:19}, the set $\SG_4(S)=\{4\times 13 - 3, 4\times 15 - 2\} = \{49, 58\}$. By Theorem \ref{theo:21}, the decompositions of $S$ into irreducible numerical semigroups with multiplicity $4$ are of the form: \begin{align*} S &=\langle 4, 53, 62 - 4y_2, -9 + 4y_2 \rangle \cap \langle 4, 53 - 4y_1, 62, 11 + 4y_1 \rangle \end{align*} with $y_1 \in [7, 10] \cap \N$ and $y_2 \in [2, 14] \cap \N$. 
For example, taking $y_1=8$ and $y_2=6$, a minimal decomposition of $S$ into irreducible numerical semigroups is given by: $$ S= \langle 4, 53, 38, 15 \rangle \cap \langle 4, 21, 62, 43 \rangle = \langle 4, 15 \rangle \cap \langle 4, 21, 43 \rangle $$ \end{ex} A direct consequence of Theorem \ref{theo:21} is the following result, which states the number of minimal decompositions into irreducible numerical semigroups for a numerical semigroup with multiplicity $4$. \begin{cor} \label{cor:23} Let $S$ be a numerical semigroup with multiplicity $4$, with Kunz-coordinates vector $x=(x_1, x_2, x_3) \in \N^3$ and $\SG_4(S)=\{4(x_i-1)+i, 4(x_j-1)+j\}$. Then, the number of minimal decompositions of $S$ into irreducible numerical semigroups with multiplicity $4$ is the following: $$ \left\{\begin{array}{rl} \left\lfloor \frac{2x_3}{3} \right\rfloor x_1 & \mbox{if $i=1$ and $j=2$}\\ \left\lfloor \frac{2x_3}{3} \right\rfloor x_1 & \mbox{if $i=1$ and $j=3$}\\ \left\lfloor \frac{2x_3}{3} \right\rfloor^2 & \mbox{if $i=2$ and $j=3$}\\ \end{array}\right. $$ where $\lfloor q \rfloor$ is the floor of $q$, for any $q \in \Q$. \end{cor} \begin{proof} The result follows by counting the integer points in $[x_1-x_3+1,x_1-\frac{x_3}{3}]$, $[ x_2-x_1, x_2-1]$ and $[x_1-x_3+1,x_1-\frac{x_3}{3}]$, which are the intervals where the $y$-variables take values in Theorem \ref{theo:21}. \end{proof} \begin{ex} \label{ex:24} For $S= \langle 4, 31, 53 \rangle$ in Example \ref{ex:22}, since the Kunz-coordinates vector of $S$ is $(13,15,7)$ and $\SG_4(S)=\{49, 58\}$, the number of minimal decompositions of $S$ into irreducible numerical semigroups with multiplicity $4$ is $\left\lfloor \frac{2\times 7}{3} \right\rfloor \times 13 = 4 \times 13 = 52$, which is the number of integer points inside $[7,10] \times [2, 14]$. \end{ex} In what follows, we analyze those numerical semigroups with multiplicity $4$ with three special gaps greater than $4$. 
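The count in Example \ref{ex:24} can be reproduced directly by counting lattice points in the intervals of Theorem \ref{theo:21} (a sketch; `integer_points` is our own helper name):

```python
# Sketch: count the integer points parameterizing minimal decompositions
# of S = <4, 31, 53>, with Kunz-coordinates vector (13, 15, 7).
from math import floor, ceil

def integer_points(lo, hi):
    """Number of integers in the real interval [lo, hi]."""
    return max(0, floor(hi) - ceil(lo) + 1)

x1, x2, x3 = 13, 15, 7
# y_1 ranges over [x1 - x3 + 1, x1 - x3/3], y_2 over [x2 - x1, x2 - 1]:
n = integer_points(x1 - x3 + 1, x1 - x3 / 3) * integer_points(x2 - x1, x2 - 1)
print(n)                        # 52
print((2 * x3 // 3) * x1)       # 52, the closed form floor(2*x_3/3) * x_1
```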
In this case one could find that the number of numerical semigroups involved in a minimal decomposition into $4$-irreducible numerical semigroups is $2$ or $3$. The following result states when such a numerical semigroup is decomposable into $2$ irreducible numerical semigroups with multiplicity $4$. \begin{theo} \label{theo:25} Let $S$ be a numerical semigroup with multiplicity $4$, with Kunz-coordinates vector $x=(x_1,x_2,x_3) \in \N^3$ and $\#\SG_4(S)=3$. Then, $S$ can be decomposed into two $4$-irreducible numerical semigroups if and only if one of the following conditions holds: \begin{enumerate} \item $x_2\leq x_1$, and $S=\langle 4, 4x_1+ 1, 4x_2+2, 4(x_1-x_2-1) + 3 \rangle \cap \langle 4, 4(x_1-y_3)+1, 4(x_3-x_1+y_3)+2, 4x_3+3 \rangle$, \item $x_1 \geq x_3+2$, and $S=\langle 4, 4x_1+ 1, 4(x_1-x_3-1)+2, 4x_3 + 3 \rangle \cap \langle 4, 4(x_1-y_1)+1, 4x_2+2, 4(x_2-x_1+y_1)+3 \rangle$, \item $2x_1\leq x_2+1$, and $S=\langle 4, 4x_1+1, 4x_2+2, 4(x_2-x_1)+3 \rangle \cap \langle 4, 4(x_1-y_3)+1, 4(x_3-x_1+y_3)+2, 4x_3+3 \rangle$, \item $2x_3 \leq x_2$, and $S=\langle 4, 4x_1+ 1, 4(x_2-y_2)+2, 4(x_1-x_2-1+y_2) + 3 \rangle \cap \langle 4, 4(x_1-y_1)+1, 4x_2+2, 4(x_2-x_1+y_1)+3 \rangle$, \item $x_3 \geq x_1+1$, and $S=\langle 4, 4(x_1-y_1)+1, 4x_2+2, 4(x_2-x_1+y_1)+3 \rangle \cap \langle 4, 4(x_1-y_3)+1, 4(x_3-x_1+y_3)+2, 4x_3+3 \rangle $, \item $3x_2\leq 2x_3$, and $S=\langle 4, 4x_1+ 1, 4(x_2-y_2)+2, 4(x_1-x_2-1+y_2) + 3 \rangle \cap \langle 4, 4(x_1-y_3)+1, 4(x_3-x_1+y_3)+2, 4x_3+3 \rangle$. \end{enumerate} \end{theo} \begin{proof} Since $\#\SG_4(S)=3$, $\SG_4(S)=\{4(x_1-1)+1, 4(x_2-1)+2, 4(x_3-1)+3\}$. Let us analyze each one of those special gaps: \begin{enumerate} \item $h_1=4(x_1-1)+1$. Then, the semigroup $x-y$ whose Frobenius number is $h_1$ also covers $h_2$ if $0 \in [ x_2-x_1, x_2-1]$, so $y_2=0$ can be chosen; that is, when $0 \geq x_2-x_1$, or equivalently, when $x_2 \leq x_1$. 
Then, a minimal decomposition is obtained by choosing $y_2=0$ in Corollary \ref{cor:22} when $h_1$ and $h_3$ are the special gaps to be covered. However, $x-y$ may not cover $h_2$ but cover $h_3$, which is equivalent to $y_3=0$; this is possible when $y_3 = -x_1+x_2+x_3+1-y_2 = 0$ can be chosen, or equivalently, when $-x_1+x_2+x_3+1 \in [ x_2-x_1, x_2-1] \cap \N$, which is the same as $x_1 \geq x_3+2$. The decomposition is given by fixing $y_3=0$ (or equivalently $y_2 = -x_1+x_2+x_3+1$) in Corollary \ref{cor:22} when $h_1$ and $h_2$ are the special gaps. \item $h_2=4(x_2-1)+2$. $h_1$ is in $\G(x-y)$ if and only if $y_1=0$. We can choose $y_1=0$ in the interval in Theorem \ref{theo:21} if $0 \geq x_1-\frac{x_2+1}{2}$ (note that $0\geq x_1-\frac{x_2}{2}$ is always true by the conditions of being a Kunz-coordinates vector). The minimal decomposition follows by fixing $y_1=0$ in Corollary \ref{cor:22} when $h_2$ and $h_3$ are the special gaps. Then, the condition is $2x_1 \leq x_2+1$. On the other hand, $h_3 \in \G(x-y)$ if $y_3=0$ is an eligible choice, that is, if $x_1-x_2+x_3-y_1 = 0$ is a solution. This is equivalent to $x_1-x_2+x_3 \leq x_1-\frac{x_2}{2}$, that is, to $x_2 \geq 2x_3$. Then, the decomposition is obtained by applying Corollary \ref{cor:22} with $y_1=x_1-x_2+x_3$ in the case when $h_1$ and $h_2$ are the special gaps to be covered. \item Finally, for $h_3=4(x_3-1)+3$: $h_1 \in \G(x-y)$ if and only if, again by Theorem \ref{theo:21}, $0 \geq x_1-x_3 +1$, that is, when $x_3 \geq x_1 + 1$. The decomposition is again obtained by applying Corollary \ref{cor:22} when $h_3$ and $h_2$ are the special gaps. And $h_2 \in \G(x-y)$ if and only if $x_1+x_2-x_3 \leq x_1-\frac{x_3}{3}$, that is, when $2x_3 \geq 3x_2$. By Corollary \ref{cor:22}, the minimal decomposition coincides with the one obtained when $y_1=x_1+x_2-x_3$ in the case when $h_1$ and $h_3$ are the special gaps. 
\end{enumerate} \end{proof} If none of the conditions of Theorem \ref{theo:25} holds, a minimal decomposition into irreducible numerical semigroups with multiplicity $4$ consists of the intersection of three $4$-irreducible numerical semigroups. Those three semigroups can be described by choosing Kunz-coordinates vectors such that each of them has one of the three special gaps as its Frobenius number. These choices are described as the integer points inside the above intervals. We summarize in the following result such a methodology to construct the decomposition. \begin{theo} \label{theo:26} Let $S$ be a numerical semigroup with multiplicity $4$, Kunz-coordinates vector $x=(x_1, x_2, x_3) \in \N^3$, and such that $\#\SG_4(S)=3$. Then, if none of the conditions of Theorem \ref{theo:25} holds, the minimal decompositions of $S$ into irreducible numerical semigroups with multiplicity $4$ are the following: $$ S= \begin{array}{c}\langle 4, 4(x_1-y_1) +1, 4x_2+ 2, 4(x_2-x_1+y_1)+3 \rangle\\ \cap\\ \langle 4, 4x_1 +1, 4(x_2-y_2)+ 2, 4(x_1-x_2-1+y_2)+3 \rangle\\ \cap\\ \langle 4, 4(x_3-x_2+y_2) +1, 4(x_2-y_3)+ 2, 4x_3+3 \rangle \end{array}$$ where $y_1 \in [x_1-\frac{x_2+1}{2}, x_1-\frac{x_2}{2}] \cap \N$, $y_2 \in [ x_2-x_1, x_2-1] \cap \N$ and $y_3 \in [x_1-x_3+1,x_1-\frac{x_3}{3}] \cap \N$. \end{theo} The following example illustrates the usage of the above result. \begin{ex} Let $S = \langle 4,21,18,23 \rangle$. $S$ is a numerical semigroup with multiplicity $4$ and its Kunz-coordinates vector is $x=(5,4,5)$. The set of special gaps greater than four of $S$ is $\SG_4(S)=\{17, 14, 19\}$. It is easy to check that $x$ does not verify any of the conditions of Theorem \ref{theo:25}, and then a minimal decomposition of $S$ into $4$-irreducible numerical semigroups involves $3$ semigroups. 
To compute one of those minimal decompositions, we apply Theorem \ref{theo:26}, which gives us directly the decomposition as: $$ S= \langle 4, 21-y_1, 18, 4y_1-1 \rangle \cap \langle 4, 21, 18 - 4y_2, 4y_2+3 \rangle \cap \langle 4, 5+4y_2, 18-4y_3, 23 \rangle $$ for any $y_1 \in [3,3] \cap \N$, $y_2 \in [ 0, 3] \cap \N$ and $y_3 \in [1,3] \cap \N$. For instance, for $y_1=3$, $y_2=2$ and $y_3=3$ we have the decomposition $$ S= \langle 4, 18, 18, 11 \rangle \cap \langle 4, 21, 10, 11\rangle \cap \langle 4, 13, 6, 23 \rangle = \langle 4, 11, 18 \rangle \cap \langle 4, 10, 11\rangle \cap \langle 4, 6, 13\rangle $$ \end{ex} Finally, we use our approach to analyze the $4$-symmetry and $4$-pseudosymmetry of a numerical semigroup with multiplicity $4$. Let $S \neq \{0, 4 , \rightarrow\}$ be a numerical semigroup with $\m(S)=4$ and Kunz-coordinates vector $x=(x_1, x_2, x_3) \in \N^3$. If $S$ is irreducible, then $\SG_4(S)=\{4(x_i-1)+i\}$ for some $i \in \{1,2,3\}$, so $\F(S)=4(x_i-1)+i$. Hence, $S$ is symmetric if and only if $i$ is odd, that is, if $i =1, 3$, and $S$ is pseudosymmetric if and only if $i=2$. If $S$ is not irreducible and $\SG_4(S)=\{4(x_i-1)+i, 4(x_j-1)+j\}$, $S$ is decomposable into symmetric numerical semigroups with multiplicity $4$ if and only if $i=1$ and $j=3$, while $S$ is never decomposable as an intersection of pseudosymmetric numerical semigroups. If $\#\SG_4(S)=3$, then $S$ is decomposable into symmetric numerical semigroups if condition \emph{ 1.} or \emph{ 6.} in Theorem \ref{theo:25} is satisfied. Note that these conditions hold if $S$ is decomposed into one numerical semigroup with Frobenius number $4(x_1-1)+1$ and another with Frobenius number $4(x_3-1)+3$, with one of them covering the even special gap $4(x_2-1)+2$. Clearly, in this case $S$ can never be decomposed into pseudosymmetric numerical semigroups.
\section{Introduction} \label{S:intro} Rotating stratified flows are particularly important in the understanding of the dynamics of our planet and the Sun. Several of the key concepts needed in order to progress in predictions of the weather and in the global evolution of the climate depend crucially on a fundamental understanding of these flows. At different scales, different physical regimes become salient, and yet all scales interact. The nonlinear advection produces steepening and fronts, albeit slowly in the presence of strong waves. These fronts and turbulent eddies lead to enhanced dissipation and dispersion of particles and tracers, affecting the global energetic behavior of the atmosphere and climate systems, for example for atmospheric synoptic scales, and for oceanic currents, in the latter case modifying the meridional circulation. In the atmosphere, such effects on energetics can in turn impair assessments of whether a given super-cell can spawn a tornado, and they affect both the evaluation of hurricane intensity and of climate variability. Rotating stratified turbulence (RST hereafter) thus plays a crucial role in the dynamics of the atmosphere and oceans, with nonlinear interactions--responsible for the complexity of turbulent flows--having to compete with the waves due to rotation and stratification. All of this takes place in the presence of a variety of other phenomena, including reactive chemical transport, biological or hydrological processes, as well as large-scale shear and boundary layers. One common approach is to tackle the problem in its entirety and construct a succession of models with increasing degrees of complexity. Conversely, one can take the simplest problem with what may be the most essential ingredients and examine the dynamics of such flows from a fundamental point of view, an approach taken in this paper. 
One of the inherent difficulties is the fact that such flows are represented, in the dry Boussinesq framework, by four independent dimensionless parameters, the Reynolds, Froude, Rossby and Prandtl numbers defined as: \begin{equation} Re=\frac{U_0L_0}{\nu}, \ Fr=\frac{U_0}{L_0N}, \ Ro=\frac{U_0}{L_0f}, \ Pr=\frac{\nu}{\kappa} \ , \label{PARAM} \end{equation} where $U_0$ and $L_0$ are, respectively, a characteristic velocity and length scale, $\nu$ and $\kappa$ are the kinematic viscosity and scalar diffusivity (taken to be equal, $Pr=1$), $N$ is the Brunt-V\"ais\"al\"a frequency, and finally $f=2\Omega$ with $\Omega$ the rotation frequency. Other dimensionless parameters, combinations or variants of these basic ones, are commonly defined as well (see \S \ref{ss:param1}). A number of studies have shown, at least in the absence of rotation, that the buoyancy Reynolds number $R_B=ReFr^2$ needs to be large enough for vigorous turbulence to develop in the small scales (see for example the review in \cite{ivey_08} and references therein). Indeed, at $R_B=1$, the Ozmidov scale \begin{equation} \ell_{OZ}=2\pi \sqrt{\varepsilon_V/N^3}, \label{OZM} \end{equation} at which isotropy recovers in a purely stratified flow, is comparable to the dissipation (or Kolmogorov) scale, $\ell_\eta=2\pi (\nu^3/\varepsilon_V)^{1/4}$, where $\varepsilon_V=|dE_V/dt|$ is the rate of dissipation of kinetic energy (note these length scales are written for a domain with dimensionless length of $2\pi$, such that $k=2\pi /\ell$ is the wavenumber). For $R_B\gg 1$, a Kolmogorov range, typical of isotropic and homogeneous turbulence, develops before dissipation can become effective. One can similarly define the Zeman scale, $\ell_{\Omega}=2\pi \sqrt{\varepsilon_V/f^3}$, for recovery of isotropy in a purely rotating flow, as shown in \cite{3072}.
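The interplay between $R_B$ and the scale separation can be made concrete: using the isotropic estimate $\varepsilon_V \approx U_0^3/L_0$, one finds $\ell_{OZ}/\ell_\eta = R_B^{3/4}$, so that the two scales coincide exactly at $R_B=1$. A minimal numerical check of this relation (the parameter values below are illustrative only, not those of any particular run):

```python
import math

def stratified_scales(U0, L0, nu, N):
    """Buoyancy Reynolds number and the Ozmidov/Kolmogorov scales,
    using the isotropic estimate eps = U0^3/L0 for the dissipation."""
    eps = U0**3 / L0
    Re = U0 * L0 / nu                             # Reynolds number
    Fr = U0 / (L0 * N)                            # Froude number
    R_B = Re * Fr**2                              # buoyancy Reynolds number
    l_oz = 2 * math.pi * math.sqrt(eps / N**3)    # Ozmidov scale
    l_eta = 2 * math.pi * (nu**3 / eps) ** 0.25   # Kolmogorov scale
    return R_B, l_oz, l_eta

# At R_B = 1 the Ozmidov and dissipation scales coincide;
# more generally their ratio is R_B^(3/4).
R_B, l_oz, l_eta = stratified_scales(U0=1.0, L0=1.0, nu=0.01, N=10.0)
```

For these illustrative values `l_oz / l_eta` equals 1 at `R_B = 1`, and it scales as `R_B**0.75` when the parameters are varied.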
According to the relative values of these parameters, several ranges can co-exist, with one effect overcoming others in each range (say, nonlinearities over wave motions or vice-versa). Thus, such flows support multi-scale interactions that have to be explicitly resolved. The interaction between oscillatory waves and steepening nonlinear interactions can also result, e.g., in the development of strong and localized vertical velocity fields \cite{rorai_14}. Different spectra are also observed in the purely stratified case; for example, a spectrum shallower than $k^{-1}$ is obtained in \cite{kimura_12}, whereas spectra steeper than $k^{-3}$ are observed in several other studies (see \cite{polzin} for a recent review of oceanic observations and analytical models). In both cases, non-local interactions between widely separated scales may well be dominant \cite{lvov_12}. Thus, large scale separations have to be achieved in order to be able to unravel the different competing phenomena. A high-resolution direct numerical simulation (DNS) of homogeneous isotropic turbulence on a grid of $4096^3$ points, with Taylor Reynolds numbers of up to 1200 was performed a decade ago \cite{kaneda, kaneda_rev} (for the case of passive tracers and Lagrangian particles, see \cite{sawford_11, sawford_13b}). For purely stratified flows, runs with a slightly smaller resolution were presented recently in \cite{almalkie_12}, with grids up to $4096^2 \times 2048$ points at the largest buoyancy Reynolds number, and $4096^2 \times 512$ for the more strongly stratified flow. In these simulations, energy cascades are found both in the vertical and the horizontal directions, with 1/3 of the dissipation coming from the former as in three-dimensional (3D) homogeneous isotropic turbulence, and with a Kolmogorov spectrum in terms of the horizontal wavenumber at scales both larger and smaller than the Ozmidov scale.
Other DNSs of purely stratified flows at linear resolutions of up to 2048 points, at least in one direction, focus on how resolving (or not) either the buoyancy scale characteristic of the thickness of the vertical layers, \begin{equation} L_B= 2\pi U_0/N \ , \label{LB} \end{equation} or the Ozmidov scale $\ell_{OZ}$ at which isotropy recovers, influences the resulting dynamics and energy distribution among scales \cite{waite2011,augier_12,bartello_13}. Part of the difficulty in determining spectral distribution among scales resides in the well-known fact \cite{cambon_89} that the dynamics is anisotropic, and thus the isotropic spectrum should be replaced by an axisymmetric two-dimensional spectrum, or by anisotropic correlation functions. Similar characteristic length scales can be defined based on the rotation rate $f$, and in fact, when both rotation and stratification are present, other scales can be defined (see equations (\ref{defBO}), (\ref{LBmod})). It should be noted that numerical simulations are quite complementary to laboratory experiments. In the latter case, the Reynolds number can be quite high, reaching in some cases geophysical values of $10^5$ or $10^6$, although Froude numbers often remain close to (but less than) unity \cite{ivey_91,barry_01}. This means that the buoyancy Reynolds numbers $R_B$ are high as well in these cases, although the stratification is not so strongly felt. By contrast, DNSs can only be performed at still modest values of Reynolds numbers (up to $\approx 10^4$, unless some parametrization scheme for the unresolved small-scales is used), but the Froude numbers can be taken as low as $10^{-2}$ or even $10^{-3}$ (for laboratory flows at small buoyancy Reynolds number, see the recent review in \cite{waite_14}).
\begin{figure*} \begin{minipage}{0.45\textwidth} \large \begin{lpic}[]{fig1a(0.4,0.4)} \lbl[l]{34,190;(a)} \normalsize \end{lpic} \end{minipage} \hspace{-1cm} \begin{minipage}{0.45\textwidth} \large \begin{lpic}[]{fig1b(0.4,0.4)} \lbl[l]{34,190;(b)} \end{lpic} \normalsize \end{minipage} \caption{Temporal variations of (a) kinetic energy dissipation rate and (b) the ratio of kinetic to potential energy. In (a) is displayed with a dashed line (red) the run using the $3072^3$ grid which evolved until $t=6.7$. The green squares represent the run performed on the grid of $4096^3$ points, evolved for $5\le t \le 5.88$ (i.e., for a duration of $\approx 77$ gravity wave periods), and the black triangles indicate the early-time run on a grid of $1536^3$ points. All runs have the same physical parameters and time step.} \label{compaenergy} \end{figure*} While these results were obtained for purely stratified flows, the role of rotation on stratified turbulence has been investigated by a number of authors. Besides the energy, rotating stratified flows also conserve the pointwise potential vorticity which can be defined as $P_V=f\partial_z \rho - N \omega_z + \omega \cdot \nabla \rho$, with $\rho$ the density (or temperature) fluctuations, and $\omega=\nabla \times {\bf u}$ the vorticity, ${\bf u}$ being the velocity. Because of the nonlinear term $\omega \cdot \nabla \rho$ in the expression of $P_V$, its ${\cal L}_2$ norm is quartic and thus it is not conserved by each triadic interaction in a truncated ensemble of modes. The extent to which this is relevant to the dynamical evolution of the flow is not entirely known, but several studies for shallow water \cite{warn_86} or the Boussinesq equations \cite{bartello_95, aluie_11, waite_13} assess the relative importance of the different contributions to $P_V$, with the general assumption that the high-order terms can be neglected when the waves are strong enough, i.e., at small Froude and/or Rossby numbers. 
In contrast, for the particular case of stable stratification, it was hypothesized in \cite{waite_13} that when $R_B$ is large enough the nonlinear term in $P_V$ affects the dynamics, becoming important at the same time as Kelvin-Helmholtz instabilities develop in the flow. Since in many cases of geophysical interest the ratio of the stratification to rotation frequencies $N/f$ is quite high (of the order of 100), most studies of RST consider the case of weak rotation. In reduced models relevant for geophysical flows, the geostrophic balance that results (between pressure gradients, Coriolis force and gravity) and the quasi-geostrophic (QG) regime, are central tenets of large-scale behavior and have been studied extensively over the years \cite{rhines_79,julien_12, klein_rev_10, vanneste_13}, including their breaking down through, for example, fronto-genesis \cite{molemaker_10a}. \begin{figure*} \begin{minipage}{0.45\textwidth} \large \begin{lpic}[]{fig2a(0.4,0.4)} \lbl[l]{38,192;(a)} \normalsize \end{lpic} \end{minipage} \hspace{-1cm} \begin{minipage}{0.45\textwidth} \large \begin{lpic}[]{fig2b(0.4,0.4)} \lbl[l]{38,192;(b)} \end{lpic} \normalsize \end{minipage} \caption{Temporal evolution of (a) the ratio of the volume averaged vertical to horizontal kinetic energy, $\left<w^2\right> / \left<u^2+v^2\right>$, and (b) the vertical length scale $\ell_z$ defined in \eq{LZ}, which is characteristic of vertical shear layers. The integral scale $L_{int}$ is also provided in order to compare with $\ell_z$. } \label{fig_time} \end{figure*} In the Boussinesq framework, a number of pioneering analyses of RST were performed in \cite{billant_01, lindborg2005, liechtenstein_05, liechtenstein_06b, waite_06, hanazaki_02, smith_02}. The role played by the ratio $N/f$ in these flows is relevant although, in some ways, poorly understood.
In \cite{billant_01} it was shown that, while stratification in the absence of rotation determines the vertical length scale $L_\parallel$ (basically, the buoyancy scale $L_B$ associated with the thickness of vertical layers, with a Froude number based on this vertical length scale of order unity) independently of the horizontal scale, $L_\perp$, in RST this scale has a more complex dependence on the buoyancy scale and on $N/f$, in which Rossby number is the chief discriminating factor. However, specifically in the quasi-geostrophic limit, it is found \cite{waite_06} that $L_\parallel \propto f L_\perp/N$, with the proportionality indeed consisting of a function of Rossby number, as suggested in \cite{billant_01}. We use this finding to help explain spectral features in our DNS. In \cite{lindborg2005}, elongated boxes were considered to study the emergence of a direct energy cascade in RST with a Kolmogorov spectrum in the horizontal direction, and it was shown that such is the case provided the Rossby number is greater than a critical value of $\approx 0.1$. The case of large $N/f$ ($\gtrsim 45$) was also considered, and the runs were performed using hyper-viscosity. The aspect ratio of the computational domain seems to play an important role in these studies, and to influence the dynamics especially at unit Burger number $Bu=NL_\perp/fL_\parallel$. The linear regime of potential vorticity at $Bu=1$ was analyzed in \cite{kurien_12} (see also \cite{remmel_10}), and it was found that vortical modes dominate over waves at large scales, and that the parameter $\Gamma=f k_\parallel/(Nk_\perp)$ is relevant as a measure of the relative importance of terms in the linear part of the expression for potential vorticity: the two sources of dispersion become comparable when $f k_\parallel \sim N k_\perp$. 
A more recent work on RST \cite{marino} deals with the emergence of helicity (vorticity-velocity correlations) in such flows, helicity being measured to be relatively strong in tornadoes and hurricanes \cite{moli1}, and also being an important ingredient in the origin of large-scale magnetic fields in astrophysics. Finally, besides DNS, rapid distortion theory for RST was considered in \cite{hanazaki_02} where it was shown that $N/f$ governs the final distribution of energy among the horizontal and vertical kinetic energy components and potential modes, as well as the normalized vertical flux $\left<\rho w\right>$, where $w$ is the vertical velocity, together with the root mean square vertical vorticity, whereas stratification dominates the unsteadiness of these flows. As already mentioned, $N/f$ is rather large in many applications. However, the case of RST with $N/f$ of order unity (or slightly larger) is also of interest for geophysical flows. One example is the abyssal southern ocean at mid latitude \cite{nikurashin_12}, which serves as a motivation for the present study and for which $N/f$ is estimated to be between roughly 5 and 10. Flows with $N/f$ ranging from $0.1$ to 10 were analyzed in \cite{liechtenstein_05, liechtenstein_06b}; all runs were spin-down with initial conditions at $k_0\approx 10$. These authors stressed the importance of computing for long times compared to both the inertial and stratified periods of the waves, because of what are called slow modes, i.e., modes with zero wave frequency, as already emphasized in \cite{smith_02} (see also \cite{herbert_14}). In \cite{smith_02}, it was also noted that energy builds up with time at small scales, the flow being strongly intermittent. Previous studies in the regime of moderate $N/f$ also showed that the inverse cascade of energy to large scales is more efficient in the range $1/2\le N/f \le 2$ \cite{EPL}, when wave resonances disappear \cite{smith_02}.
Moreover, when forcing RST at small scales, it can be shown that there is a clear tendency towards a $-5/3$ spectrum for the inverse cascade, as the Reynolds number increases for fixed parameters, together with the existence of a dual energy cascade: to small scales with a positive and constant energy flux, and to large scales with again a constant but negative energy flux \cite{pouquet_13b}. Noticing the scarcity of high-resolution DNS for turbulence in the presence of both rotation and stratification to date, and considering the geophysical relevance of flows with moderate values of $N/f$, we thus now analyze results stemming from one such run with a numerical resolution using up to $4096^3$ grid points at the peak of dissipation. In the next section are given the equations, the numerical procedure and the overall parameters. Sections \S \ref{S:temp} and \S \ref{S:spec} provide, respectively, the temporal and spectral dynamics of the flow, \S \ref{S:struct} describes the physical structures that develop, and finally, \S \ref{S:conclu} offers a brief discussion and our conclusions. \begin{figure} \begin{minipage}{0.45\textwidth} \large \begin{lpic}[]{fig3a(0.3,0.3)} \lbl[l]{34,190;(a)} \normalsize \end{lpic} \large \begin{lpic}[]{fig3c(0.3,0.3)} \lbl[l]{34,190;(c)} \end{lpic} \end{minipage} \hspace{-1cm} \begin{minipage}{0.45\textwidth} \large \begin{lpic}[]{fig3b(0.3,0.3)} \lbl[l]{34,190;(b)} \end{lpic} \large \begin{lpic}[]{fig3d(0.3,0.3)} \lbl[l]{34,190;(d)} \end{lpic} \normalsize \end{minipage} \caption{ (a) High-resolution isotropic spectrum of the total energy, averaged over the time interval $t\in [5.3,5.7]$ corresponding to the peak in enstrophy, and compensated by a Kolmogorov 5/3 law. Note the break in the slope for $k\approx 12$. (b) Kinetic (solid line) and potential (dashed line) energy spectra compensated by $k^{11/5}$ and $k^{7/5}$, respectively, with the same temporal averaging. 
(c) Plot of total energy flux, and, separately, the kinetic and potential energy fluxes, as well as the buoyancy flux term obtained from \eq{buoyflux}. All fluxes are averaged over the same time interval. Note the negative total flux at large scale, indicative of the effect of rotation. (d) Ratio of kinetic to potential energy spectra averaged over the same time interval; note again a transition around $k\approx 12$, and a scaling close to $k^{-4/5}$. } \label{compaspec} \end{figure} \section{Numerical set-up} \label{S:num} \subsection{Equations } \label{SS:old} The Boussinesq equations in the presence of solid body rotation, for a fluid with velocity ${\bf u}$, vertical velocity component $w$, and density (or temperature) fluctuations $\rho$, are: \begin{eqnarray} \frac{\partial {\bf u}}{\partial t} + \mbox{\boldmath $\omega$} \times {\bf u} + 2 \mbox{\boldmath $\Omega$} \times {\bf u} &=& -N \rho \hat e_z - \nabla {\cal P} + \nu \nabla^2 {\bf u} \ \ , \\ \frac{\partial\rho}{\partial t} + {\bf u} \cdot \nabla \rho &=& Nw + \kappa \nabla^2 \rho \ , \label{eq:momentum} \end{eqnarray} together with $\nabla \cdot {\bf u} =0$ assuming incompressibility. ${\cal P}$ is the total pressure and $\hat e_z$ is the unit vector in the vertical direction which is in the direction of the imposed rotation and opposed to the imposed gravity; therefore, $\mbox{\boldmath $\Omega$} = \Omega \hat{z}$. The initial conditions for the velocity are centered on the large scales, with excited wavenumbers $k_0\in [2,3]$ and isotropic with random phases. In the absence of dissipation ($\nu=\kappa=0$), the total energy $E_T=E_V+E_P$ is conserved, with $E_V=\frac{1}{2}\left< |{\bf u}|^2 \right>$ and $E_P=\frac{1}{2}\left< \rho^2 \right>$ respectively the kinetic and potential energies; the point-wise potential vorticity is also conserved. Lastly, $E_P=0$ initially.
\begin{figure} \begin{minipage}{0.45\textwidth} \large \begin{lpic}[]{fig4a(0.3,0.3)} \lbl[l]{34,190;(a)} \normalsize \end{lpic} \large \begin{lpic}[]{fig4c(0.3,0.3)} \lbl[l]{34,190;(c)} \end{lpic} \end{minipage} \hspace{-1cm} \begin{minipage}{0.45\textwidth} \large \begin{lpic}[]{fig4b(0.3,0.3)} \lbl[l]{34,190;(b)} \end{lpic} \large \begin{lpic}[]{fig4d(0.3,0.3)} \lbl[l]{34,190;(d)} \end{lpic} \normalsize \end{minipage} \caption{ Helicity dynamics using the data from the $4096^3$ run. (a) Relative helicity spectrum $|H_V(k)|/[kE_V(k)]$, which is seen as rather flat at large scale and decaying faster than $1/k$ at small scale. (b) Perpendicular spectrum of the helicity compensated with $k_\perp^{2}$. Note the region of excess helicity for small wavenumbers followed, for $k>k_c$ with $k_c \approx 12$, by a drop in the amplitude of the compensated spectrum, and with fluctuations associated with rapid changes in sign of the helicity. For $k>300$, a sharp drop is observed. (c) Temporal evolution of the volume integrated helicity. (d) Probability distribution function of the relative helicity (cosine of the angle between velocity and vorticity) at the peak of dissipation, $t=5.54$. Alignment and anti-alignment of ${\bf u}$ and $\omega$ are equally likely, as in homogeneous isotropic turbulence. } \label{helicity} \end{figure} When linearizing the above equations in the absence of dissipation, one obtains inertia-gravity waves of frequency \begin{equation} \omega_k= k^{-1} \sqrt{N^2k_\perp^2+f^2k_\parallel^2} \, , \label{dispersion} \end{equation} with $k=\sqrt{k_\perp^2+k_\parallel^2}$, $k_\perp=\sqrt{k_x^2+k_y^2}$, and $k_\parallel = k_z$, respectively the total, horizontal (or perpendicular), and vertical (or parallel) wavenumbers (see, e.g., \cite{bartello_95, sagaut_cambon_08}).
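The two limits of the dispersion relation (\ref{dispersion}) are easily checked: purely horizontal wavevectors ($k_\parallel=0$) oscillate at the buoyancy frequency $N$ and purely vertical ones ($k_\perp=0$) at the inertial frequency $f$, with $\omega_k$ lying between $f$ and $N$ otherwise (for $N>f$). A short sketch, using the values of $N$ and $f$ of the run analyzed in this paper:

```python
import math

def omega_k(k_perp, k_par, N=13.2, f=2.66):
    """Inertia-gravity wave frequency of the linearized, inviscid
    Boussinesq equations (N and f are those of the present run)."""
    k = math.hypot(k_perp, k_par)
    return math.sqrt((N * k_perp) ** 2 + (f * k_par) ** 2) / k

# Pure gravity waves, pure inertial waves, and a mixed wavevector:
assert abs(omega_k(10.0, 0.0) - 13.2) < 1e-9   # omega -> N
assert abs(omega_k(0.0, 10.0) - 2.66) < 1e-9   # omega -> f
assert 2.66 < omega_k(3.0, 4.0) < 13.2
```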
Fourier spectra will be built up from their axisymmetric counterparts defined from the two-point one-time velocity covariance $U({\bf k})$ (see, e.g., \cite{3072}) \begin{eqnarray} e_V(|{\bf k}_{\perp}|,k_{\parallel})= \sum_{\substack{ k_{\perp}\le |{\bf k}\times \hat {\bf z}| < k_{\perp}+1 \\ k_{\parallel}\le k_z < k_{\parallel}+1}} U({\bf k}) & = \int U({\bf k}) |{\bf k}| \sin \theta d \phi = e(|{\bf k}|, \theta) = e(k, \theta) \ ; \label{etheta} \end{eqnarray} here $\phi$ is the longitude with respect to the $k_x$ axis and $\theta$ the co-latitude in Fourier space with respect to the vertical axis. The function $e_V({\bf k}_\perp,k_\parallel=0)$ may be regarded as the spectrum of two-dimensional (2D) modes, having no vertical variation. Note that for an isotropic flow, at a given point ${\mathbf k}$ in wavenumber space, the ratio of the axisymmetric spectrum $e_V(|{\bf k}_{\perp}|,k_{\parallel})$ to the isotropic spectrum is $\sim 1/|{\bf k}|$ because the size of the volume element in the isotropic case contains an additional (integrating) factor of $|{\bf k}|$ compared to the axisymmetric case. Hence, if the axisymmetric spectrum behaves as $k_\perp^{-\alpha}$, then the corresponding isotropic scaling will be $k^{-\alpha+1}$. The spectrum $e_V(|{\bf k}_{\perp}|,k_{\parallel})$ can also be decomposed into the kinetic energy spectrum of the horizontal components (velocity components $u$ and $v$), and of the vertical kinetic energy (velocity component $w$): \begin{equation} e_V(|{\bf k}_{\perp}|,k_{\parallel}) = e_\perp(|{\bf k}_{\perp}|,k_{\parallel}) + e_\parallel(|{\bf k}_{\perp}|,k_{\parallel}) \, .
\label{eee} \end{equation} In the following we will also consider the reduced perpendicular spectrum \cite{sen2} \begin{equation} E_V(k_\perp) = \Sigma_{k_\parallel} e_V({\bf k}_\perp,k_\parallel)\, , \label{ekperp} \end{equation} the reduced parallel spectrum $E_V(k_\parallel)$ (which has a sum over $k_\perp$), and the spectrum representing the perpendicular energy of the strictly three-dimensional (3D) modes: \begin{equation} E_{3D}(k_\perp) = E_V(k_\perp) - e_V({\bf k}_\perp,k_\parallel=0) \, . \label{ek3dperp} \end{equation} Similar definitions hold for the helicity and potential energy spectra, $h_V({\bf k}_\perp,k_\parallel)$ and $e_P({\bf k}_\perp,k_\parallel)$, their reduced forms, $H_V({\bf k}_\perp)$ and $E_P({\bf k}_\perp)$, as well as their 3D expressions (i.e., the perpendicular spectra of the 3D modes), $H_{V, 3D}({\bf k}_\perp)$ and $E_{P, 3D}({\bf k}_\perp)$. They will be analyzed in the following sections. \begin{figure} \begin{minipage}{0.5\textwidth} \large \begin{lpic}[]{fig5a(0.4,0.4)} \lbl[l]{35,190;(a)} \normalsize \end{lpic} \end{minipage} \hspace{-1cm} \begin{minipage}{0.5\textwidth} \large \begin{lpic}[]{fig5b(0.4,0.4)} \lbl[l]{35,190;(b)} \end{lpic} \normalsize \end{minipage} \caption{ (a) Ratio $E_{3D}(k_\perp)/e(k_\perp,k_\parallel=0)$ of spectral energy in 3D modes {\it versus} that in 2D modes. Note again the transitions for $k\approx 12$ and $k\approx 300$. (b) Parallel spectrum of horizontal kinetic energy $e_\perp(k_\perp=0,k_\parallel)$ (solid line, see equation (\ref{eee})) and parallel spectrum of potential energy $e_P(k_\perp=0,k_\parallel)$ (dashed line), both compensated by $k_\parallel^{-3}$. Power laws are indicated as references. All spectra are averaged over the peak of enstrophy, $t \in [5.3,5.7]$. Note the small flat range at large scales in $e_\perp$, both ending with equipartition at $k\approx 12$. 
} \label{other_spec} \end{figure} \subsection{Specific numerical procedure} \label{SS:proc} \begin{figure*} \includegraphics[width=12.0cm]{fig6} \caption{ Angular total energy spectra $e(|k|,\theta)$ (\eq{etheta}) for various co-latitudes $\theta$ (in degrees) averaged over the peak of enstrophy, $t \in [5.3,5.7]$, and compensated by $k(\theta)^{-16/5}$: $\theta=10^\circ$ (black circles), $\theta=20^\circ$ (red crosses), $\theta=40^\circ$ (blue asterisk), $\theta=60^\circ$ (magenta squares), and finally $\theta=80^\circ$ (green triangles). The compensating slope corresponds to an (uncompensated) isotropic (BO) scaling of $k^{-11/5}$. } \label{ang_spec} \end{figure*} \begin{figure} \hspace{-3cm} \begin{minipage}{0.45\textwidth} \large \begin{lpic}[]{fig7a(0.4,0.4)} \lbl[l]{80,200;(a)} \normalsize \end{lpic} \end{minipage} \hspace{-1cm} \begin{minipage}{0.45\textwidth} \large \begin{lpic}[]{fig7b(0.4,0.4)} \lbl[l]{80,200;(b)} \end{lpic} \normalsize \end{minipage} \vspace{-1cm} \caption{ Perspective volume renderings of a thin y-z sub-volume of size $0.4\times0.7$ times the compute box size at $t=5.54$ (close to the peak of enstrophy). The y-axis is directed horizontally, and the z-axis, vertically. Presented are (a) perpendicular and (b) vertical velocity with identical color mapping. Note that the perpendicular velocity is dominant in magnitude. The slab thickness in the x (depth) direction is $0.04$ times the box size. All renderings were made using the VAPOR visualization system \cite{clyne07}. } \label{pvr1} \end{figure} The code used in this paper is the Geophysical High Order Suite for Turbulence (GHOST), which is fully parallelized using a hybrid methodology \cite{hybrid2011}. It uses parallel multidimensional FFTs in a pseudo-spectral method for 2D and 3D domains on regular structured grids, and can solve a variety of neutral-fluid partial differential equations, as well as several that include a magnetic field.
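From the spectral fields produced by such a code, the axisymmetric spectrum of Eq. (\ref{etheta}) and the reduced spectra of Eqs. (\ref{ekperp}) and (\ref{ek3dperp}) can be computed by binning; the following numpy sketch uses our own binning and normalization conventions (not necessarily those of GHOST):

```python
import numpy as np

def axisymmetric_spectrum(u_hat):
    """Bin spectral energy over integer (k_perp, k_par) shells for a
    cubic n^3 grid; a minimal sketch of the axisymmetric spectrum."""
    n = u_hat.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)                 # signed integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kperp = np.rint(np.sqrt(kx**2 + ky**2)).astype(int)
    kpar = np.abs(kz).astype(int)
    E = 0.5 * np.abs(u_hat) ** 2 / n**6              # Parseval normalization
    e = np.zeros((n, n))
    np.add.at(e, (kperp.ravel(), kpar.ravel()), E.ravel())
    return e

def reduced_spectra(e):
    """Reduced perpendicular spectrum and spectrum of the strictly 3D
    (k_par != 0) modes."""
    E_perp = e.sum(axis=1)       # sum over k_par
    E_3D = E_perp - e[:, 0]      # subtract the 2D (k_par = 0) modes
    return E_perp, E_3D
```

A single spectral mode at $(k_x,k_y,k_z)=(2,0,1)$ then lands in the bin $(k_\perp,k_\parallel)=(2,1)$ and, having $k_\parallel\neq 0$, contributes entirely to $E_{3D}$.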
Boundary conditions are periodic, and the time-integration is performed using a Runge-Kutta algorithm up to 4th-order with double precision arithmetic. The code uses a ``slab'' (1D) domain decomposition among MPI tasks, and OpenMP threads provide a second level of parallelization within each slab or MPI task. The code demonstrates good parallelization to more than $100,000$ compute cores. In order to achieve a high resolution at the peak of dissipation when gradients of variables are the strongest, we have implemented a ``bootstrapping'' procedure in which we start the simulation at a lower resolution until the {\it dynamic range} of the energy spectrum decreases to some fiducial value. Here, by dynamic range we refer to the ratio of the energy at the peak of the spectrum, to the energy at the largest available wavenumber at a given resolution. When the lower threshold is reached, we increase the resolution and continue running until the dynamic range of the DNS at the new resolution decreases again to the fiducial value, repeating the process. Bootstrapping requires that a field at a reduced resolution be ``padded'' spectrally with zeros from its largest allowed wavenumber to the larger wavenumber allowed at the next (higher) resolution. This is handled in a processing step before the next highest resolution DNS is computed. This bootstrapping procedure was recently implemented, tested and used in the context of ideal magnetohydrodynamics \cite{brachet_13}. We thus began with a $1536^3$ run up to $t=2$, then doubled the resolution on a grid of $3072^3$ grid points up to $t=5$, and then completed the run on the grid with $4096^3$ points. The maximum resolved wavenumber using a classical 2/3 de-aliasing rule is $k_{max}=n/3=1365$ (with $n=4096$ the number of grid points in each direction), with the length of the box corresponding to wavenumber $k_{min}=1$.
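The spectral padding at the heart of the bootstrapping can be illustrated with numpy's FFT layout (a sketch only; the routine and normalization used in the actual code may differ):

```python
import numpy as np

def pad_spectral(u_lo, n_hi):
    """Zero-pad an n_lo^3 spectral field onto an n_hi^3 grid, keeping
    each Fourier mode at the slot of the same signed wavenumber."""
    n_lo = u_lo.shape[0]
    h = n_lo // 2
    src = np.r_[0:h, n_lo - h:n_lo]     # all modes of the coarse grid
    dst = np.r_[0:h, n_hi - h:n_hi]     # same signed wavenumbers, fine grid
    u_hi = np.zeros((n_hi,) * 3, dtype=complex)
    u_hi[np.ix_(dst, dst, dst)] = u_lo[np.ix_(src, src, src)]
    return u_hi
```

With numpy's unnormalized `fftn`/`ifftn` pair, the physical field recovered on the fine grid must be rescaled by $(n_{hi}/n_{lo})^3$; on the coarse-grid collocation points it then reproduces the original field exactly.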
The viscosity and scalar diffusivity were chosen to be the same for these three successive runs, each run representing the evolution of the same physical problem at earlier times. The time step for each was chosen on the basis of the highest resolution considered in order to minimize time stepping errors at lower resolution. The first bootstrapping was done during the inviscid phase before the small scale structures that can dissipate energy develop. The run on the intermediate grid of $3072^3$ points was also pursued to later times ($t_{max}=6.7$); this enabled us to inspect the convergence of the overall statistics at the same evolutionary times. Figure \ref{compaenergy}(a-b) displays the time evolution of the kinetic energy dissipation rate (proportional to the kinetic enstrophy $\left< |\omega|^2 \right>$), and the ratio of the kinetic to potential energy, to illustrate the three distinct intervals with bootstrapping and the overall evolution of the system. \subsection{Other dimensionless parameters} \label{ss:param1} As mentioned in the introduction, a variety of dimensionless combinations of relevant physical parameters can be defined for rotating stratified turbulence, beyond those written in \eq{PARAM}. One of the central limitations to a better understanding of such flows is the need to unravel what the key parameters are that govern the dynamics. Beyond the Reynolds, Froude, Rossby and Prandtl numbers, one also considers the ratio $N/f$, as well as the Froude number based on a characteristic vertical length scale, $$F_z=U_0/(\ell_z N) \ .$$ Moreover, the combined effect of turbulent eddies and waves can be encompassed in the buoyancy and rotational Reynolds numbers, mentioned previously and respectively defined as \begin{equation} R_B=ReFr^2 , \ R_\Omega=ReRo^2 \ . \label{RB} \end{equation} When $R_B\ge 1$ in a stratified flow, isotropy recovers beyond the so-called Ozmidov scale.
Similarly, in a purely rotating flow, isotropy recovers beyond the Zeman scale for $R_\Omega\ge 1$ \cite{3072}. The partition of energy between kinetic and potential modes can be measured by their ratio, $E_V/E_P$, which is one possible definition of the Richardson number. Another definition is simply to measure the relative strength of the buoyancy to the inertial forces, or $$Ri=1/Fr^2 \ .$$ However, in order to emphasize the role of the development of small scales in mixing, one can also define a (local) Richardson number based on velocity gradients, $Ri_g$, as: \begin{equation} Ri_g= N(N-\partial_z \rho)/ (\partial_z u_\perp)^2 \ . \label{eq:Ri} \end{equation} This definition shows that a sufficiently large vertical gradient of the density fluctuations ($\partial_z \rho > N$) locally leads to negative values of $Ri_g$, which is consistent with the intuitive picture of overturning when a denser parcel of fluid lies atop a less dense parcel. \subsection{Run parameters and general characterization} \label{ss:param11} \begin{figure} \large \begin{lpic}[]{fig8a(0.27,0.27)} \lbl[l]{70,240;(a)} \normalsize \end{lpic} \vspace{-2.5cm} \large \begin{lpic}[]{fig8b(0.27,0.27)} \lbl[l]{70,240;(b)} \normalsize \end{lpic} \vspace{-1cm} \large \begin{lpic}[]{fig8c(0.27,0.27)} \lbl[l]{59,240;(c)} \end{lpic} \normalsize \caption{ Perspective volume renderings of a thin x-z sub-volume of size $0.12\times0.1$ times the compute box size at $t=5.54$ (close to the peak of dissipation); the slab thickness in the y-direction is 0.01 times the box size. The x-axis is directed horizontally, and the z-axis, vertically. Presented are (a) vorticity magnitude, (b) temperature fluctuations, and (c) local Richardson number $Ri_g$ defined in equation (\ref{eq:Ri}). The color bar of vorticity illustrates the relatively intense vortices that are generated, and note the slanted Kelvin-Helmholtz layer. } \label{pvr22} \end{figure} We use $N/f=4.95$ with $N=13.2$ and $\Omega=f/2=1.33$ (thus, $f=2.66$).
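A minimal discretization of the gradient Richardson number of Eq. (\ref{eq:Ri}) can be sketched as follows; the reading of $(\partial_z u_\perp)^2$ as $(\partial_z u)^2+(\partial_z v)^2$ and the finite-difference choices are ours:

```python
import numpy as np

def gradient_richardson(u, v, rho, dz, N):
    """Local Ri_g = N (N - d rho/dz) / |d u_perp/dz|^2 on a grid whose
    first axis is the vertical (a sketch, not the diagnostic of the
    actual code)."""
    du = np.gradient(u, dz, axis=0)
    dv = np.gradient(v, dz, axis=0)
    drho = np.gradient(rho, dz, axis=0)
    return N * (N - drho) / (du**2 + dv**2)

def overturning_fraction(Ri_g, threshold=0.25):
    """Fraction of points with |Ri_g| <= 1/4, the classical criterion
    for shear (overturning) instability."""
    return np.mean(np.abs(Ri_g) <= threshold)
```

For a uniform shear $u=Sz$ with no density fluctuations, this returns $Ri_g=N^2/S^2$ everywhere, as expected.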
The viscosity is chosen to have the simulation well resolved: $\nu= 4\times 10^{-5}$. In dimensionless units, the resulting overall energetics of the flow lead to several scales that are of interest, and to a characterization of the flow in terms of the dimensionless parameters. Considered at the peak of enstrophy, the characteristic velocity is $U_0\approx 0.83$ and the integral length scale, computed from $L_{int} = 2\pi \int E_V(k)dk / \int k E_V(k)dk$, is $\approx 2.6$, very close as expected to the scale at which the energy spectrum initially peaks, namely $L_0=2\pi/k_0\approx 2.5$. The dissipation rate of kinetic energy is taken from a computation of kinetic enstrophy at the peak of dissipation: $\varepsilon_V =\nu\left<|\omega|^2 \right> \approx 0.0124$ (see \figp{compaenergy}{a}). Note that in the isotropic case, $\varepsilon_V=\epsilon_{K41} = U_0^3/L_{int} \approx 0.22$, but this relation does not hold in the highly anisotropic system we are investigating. Rather, we can take an estimate coming from weak turbulence, namely $\epsilon_{K41}\,Fr\approx 0.005$, within a factor of two of the measured rate of energy dissipation. The Kolmogorov dissipation wavenumber is computed at the peak of dissipation to be $k_\eta \approx 660$. The Zeman and Ozmidov wavenumbers are therefore found to be, respectively, $k_{\Omega} \approx 39$ and $k_{OZ} \approx 431$. The buoyancy wavenumber is $k_B = 2\pi/L_B \approx 16$; the lack of scale separation between $k_{\Omega}$ and $k_B$ suggests that it will be difficult to distinguish as separate effects those due to rotation and those due to stratification. The Reynolds number is thus found to be $Re\approx5.4\times 10^4$, the Froude number $Fr \approx 0.0242$, and the Rossby number $Ro \approx 0.12$. Consequently, the buoyancy and rotational Reynolds numbers are $R_B \approx 32$, and $R_\Omega \approx 775$. The Richardson number is determined to be $Ri \approx 1700$, so the flow is, indeed, found to be strongly stratified.
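These quoted values are mutually consistent, as a quick check of the definitions shows (all numbers copied from this subsection; wavenumbers follow the $k=2\pi/\ell$ convention of the introduction):

```python
import math

# Values quoted for the run at the peak of dissipation
U0, L_int = 0.83, 2.6
nu = 4e-5
N, f = 13.2, 2.66
eps_V = 0.0124

Re = U0 * L_int / nu                  # ~ 5.4e4
Fr = U0 / (L_int * N)                 # ~ 0.0242
Ro = U0 / (L_int * f)                 # ~ 0.12
R_B, R_Om = Re * Fr**2, Re * Ro**2    # ~ 32 and ~ 775

k_eta = (eps_V / nu**3) ** 0.25       # Kolmogorov wavenumber, ~ 660
k_oz = math.sqrt(N**3 / eps_V)        # Ozmidov wavenumber,    ~ 431
k_zeman = math.sqrt(f**3 / eps_V)     # Zeman wavenumber,      ~ 39
k_B = N / U0                          # buoyancy wavenumber,   ~ 16
```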
Finally, we can define a Taylor Reynolds number as $R_{\lambda}=U_0\lambda/\nu$, with $\lambda = 2\pi [\int E_V(k)dk / \int k^2 E_V(k)dk]^{1/2}$ the Taylor scale. In classical homogeneous isotropic turbulence (HIT) $R_{\lambda}$ measures the degree of development of small scales. At peak of dissipation, $\lambda\approx 0.31$, leading to a rather large $R_\lambda \approx 6400$, quite high compared to similar computations in HIT (e.g., $R_{\lambda}\approx 1200$ in a HIT run at similar grid resolution \cite{kaneda, kaneda_rev}). This is linked to the fact that, in the presence of strong waves, the transport of energy to small scales is hindered and not as efficient, and the energy spectrum becomes steeper at least at large scales, resulting in a larger Taylor scale for the same viscosity. It is worth noticing that in the atmosphere the Taylor Reynolds number is estimated to be $R_\lambda \approx 20000$, and realistic simulations of stratified and rotating atmospheric turbulence may thus be feasible in the near future as a result of this effect. Note also that the value of $R_\lambda$ puts the present computation above the different thresholds in $R_{\lambda}$ identified in \cite{laval_03} for various instabilities to develop, as, e.g., for the growth of vertical shear and the growth of vertical energy. \begin{figure*} \includegraphics[width=7.5cm]{fig9} \caption{ Probability distribution function of the gradient Richardson number defined in \eq{eq:Ri}, at the latest time in the simulation. The (red) crosses indicate where $|Ri_g| \le 0.25$, the classical criterion for overturning instability \cite{miles_61,howard_61}.} \label{richardson} \end{figure*} When the dimensionless numbers obtained in the simulation at peak of dissipation given above are now dimensionalized using the characteristic length and velocity of the abyssal southern ocean at mid latitudes, i.e.
with $L_0=1000$ m (corresponding to the peak of energy input in the ocean from bathymetry \cite{scott_11}) and $U_0=0.024$ m s$^{-1}$, as measured for example in the Drake passage \cite{nikurashin_12}, we obtain a kinematic viscosity and scalar diffusivity of $\nu=\kappa=4.5 \times 10^{-4}$ m$^2$ s$^{-1}$, too large by roughly two orders of magnitude. The corresponding overall effective energy dissipation rate would be $\epsilon \sim U_0^3/L_0 \approx 1.4 \times 10^{-8}$ m$^2$ s$^{-3}$; this value corresponds to the enhanced dissipation measured in the southern ocean \cite{naveira_04}. As a comparison, measurements in the atmosphere indicate $\epsilon \approx 10^{-6}$ m$^2$ s$^{-3}$ at intermediate altitude and at scales between 3 and $600$ km \cite{heas_12}. With a rotation frequency of $\Omega = 10^{-4} \ s^{-1}$, our choice of parameters leads to a Brunt-V\"ais\"al\"a frequency of $N \approx 10^{-3}$ s$^{-1}$, and $Fr\approx 0.024$, corresponding to the parameters of the run described above. The buoyancy scale is then $150$ m, the Ozmidov scale is $4$ m, and the Kolmogorov dissipation scale is around $0.15$ m. This last value is too large, because the viscosity is too large and the numerical resolution is still insufficient. Note also that another element lacking in our simulation is the interaction with a larger-scale (mean) flow, say at the scale of several hundred kilometers, together with proper boundary conditions in the vertical. \section{Overall temporal dynamics} \label{S:temp} We now examine in more detail the overall temporal evolution of large-scale features. Figures \ref{compaenergy}(a) and (b) display, respectively, the kinetic energy dissipation, $\nu \left< \omega^2 \right>$, and the ratio of kinetic to potential energy.
Easily identifiable oscillations due to the waves prevail at early times; these oscillations, stronger and thus more visible at large scale in the evolution of the energy, are due to inertia-gravity waves, and their irregularity is linked with nonlinear coupling which, at this Reynolds number, is sizable. However, after the initial phase, the ratio of kinetic to potential energy remains relatively constant on average throughout the run, at a value close to 3. This initial phase is essential since, even though our initial conditions have $E_P=0$ (and random phases for the velocity at large scale), the gravity waves provide a source of organized potential energy for the next temporal phase, when nonlinearities arise and constant-flux self-similar spectral scaling develops (see \S \ref{S:spec}). The kinetic energy (not shown) starts to decay rather slowly once small scales have formed. By the end of the run, the dissipation has reached a plateau and the flow is fully developed. When examining the temporal evolution of the energy and dissipation for the flows computed on $3072^3$ and $4096^3$ points, no differences are visible, indicative of a converged simulation and of a well-resolved flow. At the peak, $\varepsilon_V\approx 0.0124$, and the dissipation of potential energy is $\varepsilon_P= \kappa \left< |\nabla \rho|^2 \right> \approx 0.0077$ (not shown). In \fig{fig_time} are shown the temporal evolution of the ratio of the ${\cal L}_2$ norms (volume averages) of the vertical to horizontal kinetic energy, as well as that of a characteristic vertical length scale defined as \begin{equation} \ell_z=[\left< u_\perp^2 \right> / \left< (\partial_z u_\perp)^2 \right>]^{1/2} \ . \label{LZ} \end{equation} Note that $\ell_z$ can be viewed as a vertical Taylor scale, since it is based on vertical gradients of the velocity. As expected, the horizontal energy dominates over the vertical at all times, by a factor close to 4, and increasingly so after the peak of enstrophy.
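The diagnostic $\ell_z$ is straightforward to evaluate from a gridded velocity field; a minimal sketch, assuming a periodic domain and a spectral $z$-derivative, is:

```python
import numpy as np

def vertical_taylor_scale(u_perp, axis=0, L=2*np.pi):
    """Sketch of ell_z = [<u_perp^2>/<(dz u_perp)^2>]^(1/2), using a
    spectral z-derivative; a periodic domain of size L is assumed."""
    n = u_perp.shape[axis]
    kz = 2*np.pi*np.fft.fftfreq(n, d=L/n)      # angular wavenumbers along z
    shape = [1]*u_perp.ndim
    shape[axis] = n
    du = np.fft.ifft(1j*kz.reshape(shape)*np.fft.fft(u_perp, axis=axis),
                     axis=axis).real           # dz u_perp
    return np.sqrt(np.mean(u_perp**2) / np.mean(du**2))

# Single-mode check: u = sin(m z) has ell_z = 1/m exactly.
z = np.linspace(0, 2*np.pi, 64, endpoint=False)
ell_z = vertical_taylor_scale(np.sin(4*z))     # -> 0.25
```

The single-mode test recovers the exact answer, which is a useful sanity check before applying the routine to full 3D data.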
The vertical length scale, of order unity to start with, undergoes a steady decrease and stabilizes as the peak of enstrophy is approached; it is one order of magnitude smaller at the peak of dissipation when compared with its initial value. Considering now the vertical Froude number based on this vertical shearing length, $F_z=U_0/(N\ell_z)$, we find $F_z \approx 0.9 \lesssim 1$ at the latest time of the run. This value of $F_z$ is predicted for strongly stratified flows by the self-similarity analysis of \cite{billant_01}, if $\ell_z$ is taken to be the vertical scale of the dynamics, since, in this case, it is shown that $\ell_z \sim U_0/N$. Contrasting the anisotropy arising from rotation and stratification, one can say that the flow is fully turbulent but in an anisotropic manner \cite{marino_aniso}, although it still feels the effect of rotation, as can be seen in \figp{compaspec}{c}, with a negative energy flux at large scale. \section{Spectral behavior} \label{S:spec} \subsection{Evidence for a large-scale Bolgiano-Obukhov scaling} \label{s:BO} In \fig{compaspec} we show several isotropic spectra, all averaged around the peak of dissipation in the interval $t\in [5.3,5.7]$ (see \figp{compaenergy}{a}). The total isotropic energy spectrum is compensated by a classical Kolmogorov $k^{-5/3}$ law. Such a law is compatible with the scaling of the spectrum observed at smaller scales, for $k_c \le k \le 100$ with $k_c\approx 12$; note that this value is close to the buoyancy wavenumber $k_B\approx 16$ but may nevertheless differ from it (see below). At larger scales, a steeper spectrum is observed, with a spectral slope close to $-11/5$, a value of 2.2 being computed from a least-squares fit on the interval $k\in[2,14]$.
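The least-squares estimate of a spectral slope amounts to a linear fit in log-log coordinates. The sketch below uses a synthetic, exact $k^{-11/5}$ power law (not the simulation data) purely to illustrate the procedure:

```python
import numpy as np

# Log-log least-squares fit of a spectral slope on k in [2,14],
# applied here to a synthetic k^(-11/5) power law for illustration.
k = np.arange(2, 15, dtype=float)
E = 3.0 * k**(-11.0/5.0)            # arbitrary amplitude, exact BO-like slope
slope = np.polyfit(np.log(k), np.log(E), 1)[0]   # -> -2.2
```

On real spectra the fitted value depends mildly on the chosen wavenumber interval, which is why the fitting range $k\in[2,14]$ is stated explicitly in the text.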
Note that spectra with a power-law index close to $-2$ were found in \cite{kurien_14} for $N/f$ varying from 4 to 32, and observations in the ocean also indicate values that are similar and in fact closer to $2.5$ \cite{arbic_13}. One can invoke a dimensional argument to explain the large-scale spectral distribution, namely the Bolgiano-Obukhov scaling (\cite{bolgiano_59, obukhov_59}; BO hereafter) derived for purely and stably stratified turbulence. This scaling is obtained under the assumption that the source of energy at large scale is contained in the buoyancy, or in the potential modes, with a nonlinear transfer rate $\varepsilon_P=|dE_P|/dt$, assumed constant, and with a negligible advection term in the momentum equation. Since $\rho$ in the primitive equations written in \eq{eq:momentum} has the dimension of a velocity, we have to re-introduce the physical dimension of the buoyancy flux in terms of length and time, i.e., $L^2 T^{-5}$; to this end one can use $\varepsilon_P N^2$ for the constant flux. This then leads to (see \cite{lohse_10} for a review): \begin{equation} E_V(k)\sim \varepsilon_P^{2/5} k^{-11/5} \ \ , \ \ E_P(k)\sim \varepsilon_P^{4/5} k^{-7/5} \ \ . \label{eq:BO} \end{equation} In the BO phenomenology, the scalar actively modifies the velocity field. Note that the Coriolis force does not contribute to the energy balance but only to an angular redistribution of energy favoring a negative flux to large scales, and thus does not perturb the dynamics leading to the BO scaling. The phenomenology derives from the idea that, at large scales, the nonlinear advection term is not strong enough in the direct cascade to small scales, and the only available source of energy is therefore that coming from the scalar fluctuations. Requiring that the kinetic and potential energy spectra depend only on the dimensional buoyancy flux, $\varepsilon_P$, and wavenumber, $k$, leads to the above spectra.
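The exponent of the kinetic-energy spectrum in \eq{eq:BO} follows from a two-equation dimensional balance. Writing $E_V(k)\sim \chi^a k^b$ with $\chi=\varepsilon_P N^2$ of dimension $L^2T^{-5}$, $[E_V(k)]=L^3T^{-2}$ and $[k]=L^{-1}$, the time balance fixes $a$ and the length balance then fixes $b$:

```python
from fractions import Fraction as F

# Dimensional balance for E_V(k) ~ chi^a k^b with [chi] = L^2 T^-5,
# [E_V(k)] = L^3 T^-2, [k] = L^-1:
#   time:    -5a     = -2   =>  a = 2/5
#   length:  2a - b  =  3   =>  b = 2a - 3
a = F(2, 5)
b = 2*a - 3        # -> -11/5, the Bolgiano-Obukhov spectral slope
```

Exact rational arithmetic makes the exponent bookkeeping unambiguous.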
There are indications that the BO scaling has been observed in stably stratified flows in the atmosphere \cite{lovejoy_09}, as well as at the bottom boundary of convectively unstable cells, using temporal structure functions conditionally averaged on local values of the thermal dissipation rate \cite{ching_13}. A recent three-dimensional DNS analysis of Rayleigh-B\'enard convection shows such a scaling as well \cite{kumar_14}. BO scaling has been associated with a bi-dimensionalization of the flow due to stratification and with the growth of the mixing layer leading to a confined dynamics \cite{chertkov_03, boffetta_12}. In the case of the present computation, we note that the quasi-2D large-scale dynamics is reinforced by the presence of rotation, as observed in the kinetic energy flux, which is negative, corresponding to an inverse transfer (see below). We show in \figp{compaspec}{b} the kinetic and potential energy spectra averaged over the time interval corresponding to the peak of enstrophy and compensated by the BO scaling. This scaling seems to hold at large scales, up to $k\approx 12$ for the velocity, and on a shorter range for the temperature field. In \figp{compaspec}{d} is shown the ratio of the kinetic to potential energy spectra, each averaged over time; this ratio is consistent with a $k^{-4/5}$ law at large scale, as predicted by \eq{eq:BO} to within constants of order unity, whereas in the next regime, close to a Kolmogorov law, the ratio is close to equipartition in these units. \figp{compaspec}{c} displays several fluxes. The (forward) flux of total energy (solid line) is approximately constant, at a level of $\approx 0.022$, in these two identified ranges, indicative of a classical turbulent cascade.
Note also that it becomes negative (reaching $\approx -0.0085$) at scales larger than the scale of the initial conditions; it can be expected, therefore, that, in the presence of forcing, a small inverse cascade may develop, as observed in \cite{aluie_11} and as it does when the forcing is placed at smaller scale (see, e.g., \cite{smith_96,EPL, pouquet_13b}). We also show in \figp{compaspec}{c} the energy flux decomposed into its kinetic (dashed) and potential (dash-dotted) components, $\Pi_{V,P}$, as well as the buoyancy flux, $\Pi_{w\rho}$ (dotted line), defined in wavenumber space as: \begin{equation} \Pi_{w\rho}(k) = \sum_{k'=0}^{k'=k} \sum_{\, k'< |k''|<k'+1} \Re( \hat{w}(\mathbf{k}'') \hat{\rho}(\mathbf{k}'')^* )\,\, , \label{buoyflux} \end{equation} where $\hat{w}(\mathbf{k})$ and $\hat{\rho}(\mathbf{k})$ are the Fourier coefficients of the vertical velocity and the scalar, respectively. The first two fluxes, $\Pi_{V,P}$, correspond to a scale-by-scale analysis of the two nonlinear flux terms, $\rho {\bf u} \cdot \nabla \rho$ and ${\bf u} \cdot [{\bf u} \cdot \nabla] {\bf u}$, whereas the buoyancy flux concerns the energetic exchanges between the velocity and density fluctuations. The sum of the kinetic energy dissipation at its peak (see \figp{compaenergy}{a}) and the kinetic energy flux at the gravest mode, $\Pi_V(k=1)\approx -0.01$, is $\approx 0.0024$, in excellent agreement with the nearly constant value of $\Pi_V$ in the region $k\in[4,20]$ seen in this figure. Furthermore, it can be seen that, as hypothesized in the BO phenomenology, the potential flux to small scales is dominant, constant and positive over a wide range of scales. The kinetic flux has a strong peak at wavenumbers smaller than $k_0$. It is in fact negative throughout the wavenumber range around the peak of enstrophy; this is likely due to the fact that the buoyancy flux acts as a source of energy for the velocity in a wide range of scales.
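A minimal sketch of how a shell-summed cross-spectrum such as \eq{buoyflux} can be evaluated on a periodic grid is given below. The shell binning rounds $|\mathbf{k}|$ to the nearest integer, a common but not unique convention, and the fields are random stand-ins for $w$ and $\rho$, not simulation data:

```python
import numpy as np

def buoyancy_flux(w, rho):
    """Cumulative shell sums of Re(w_hat rho_hat^*), in the spirit of the
    buoyancy-flux definition; integer-shell binning is assumed."""
    n = w.shape[0]
    wh = np.fft.fftn(w) / w.size                 # normalized Fourier coefficients
    rh = np.fft.fftn(rho) / rho.size
    k1 = np.fft.fftfreq(n, d=1.0/n)              # integer wavenumbers
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing='ij')
    shell = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int)
    cospec = np.real(wh * np.conj(rh))           # co-spectrum of w and rho
    spectrum = np.bincount(shell.ravel(), weights=cospec.ravel())
    return np.cumsum(spectrum)                   # cumulative flux Pi_{w rho}(k)

# Stand-in fields; by Parseval's theorem the last entry of the cumulative
# sum equals the volume average <w rho>.
rng = np.random.default_rng(1)
w, rho = rng.standard_normal((2, 16, 16, 16))
Pi = buoyancy_flux(w, rho)
```

The Parseval identity provides a built-in consistency check: summing the co-spectrum over all shells must reproduce the volume-averaged pointwise product.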
We present the time average of $\Pi_{w\rho}$ in \fig{compaspec} (c; dotted curve), where it is seen that it is, in fact, comparable to the total energy flux, and can thus potentially serve as a kinetic energy source. We note that large temporal fluctuations in the buoyancy flux are observed; they correspond to gravity waves directly affecting vertical motions. Finally, within the framework of the BO scaling, we can evaluate the wavenumber, $K_{BO}$, at which the transition to a Kolmogorov spectrum $E_V(k)\sim \varepsilon_V^{2/3}k^{-5/3}$ takes place, by equating the two spectra at that scale. This leads immediately to \begin{equation} K_{BO}\sim \varepsilon_P^{3/4} \varepsilon_V^{-5/4} \ . \label{defBO} \end{equation} The value for $\varepsilon_P$ is taken to be that obtained in the large scales, corresponding to the broad flat region in $\Pi_P$ observed in \figp{compaspec}{c}; thus, $\varepsilon_P \approx 0.023$. For $\varepsilon_V$, we must be careful: this should be the value that {\it would} be seen if we were able to resolve the Kolmogorov spectrum beyond the Ozmidov scale; however, this scale is barely resolved in this DNS. Hence, we select the value of the kinetic energy flux at the largest wavenumber in the calculation, $|\varepsilon_V|\approx0.015$. Using these values for the rates, we find $K_{BO}\approx 11$, quite close to the observed value of $k_c\approx 12$. The excellent agreement of the spectral scalings, as well as the compatibility between the $K_{BO}$ computed from measured data and the observed $k_c$, offers compelling evidence of BO scaling in this decaying, strongly stratified, weakly rotating DNS. The problem remains, however, that there is little scale separation for $k<K_{BO}$ before a different dynamics dominates at larger wavenumbers.
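Equating the two spectra, $\varepsilon_P^{2/5}K^{-11/5} = \varepsilon_V^{2/3}K^{-5/3}$, and solving for $K$ reproduces \eq{defBO}; with the measured flux values, a quick evaluation gives:

```python
# K_BO from equating the BO and Kolmogorov spectra (Eq. defBO),
# using the flux values measured in the run.
eps_P, eps_V = 0.023, 0.015
K_BO = eps_P**0.75 * eps_V**(-1.25)   # ~ 11

# At K_BO the two spectra coincide by construction.
bo  = eps_P**0.4 * K_BO**(-2.2)       # BO spectrum at the transition
k41 = eps_V**(2.0/3.0) * K_BO**(-5.0/3.0)   # Kolmogorov spectrum there
```

The result, $K_{BO}\approx 11$, matches the value quoted in the text.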
A parametric study at high Reynolds number, achieved by varying the buoyancy force, may help to determine the likelihood of such scaling laws in unbounded stratified turbulence; conditional averaging \cite{ching_13} may be effective for such a study. However, while shear is not imposed in our run, strong shear layers develop in the vertical in stably stratified flows, even in the presence of rotation (in which case they are slanted; see \figs{pvr1}{pvr22}). Shear is created locally and leads to strong instabilities (see \figs{pvr22}{richardson} below), so we must consider its effect on the spectral behavior. A shear scaling leads to the following spectra: $$ E_V(k)\sim \epsilon_V^{1/3} S k^{-7/3} \ \ , \ \ E_P(k)\sim \epsilon_P \epsilon_V^{-1/6} S^{-1/2} k^{-4/3} \ \ , $$ where $S$ is the shear rate (which can also be expressed in terms of a shear length scale) \cite{lohse_10}. In this case, the scalar is passive, and the ratio of the two spectra varies as $k^{-1}$, so the spectral indices are close to those that we find in our results. However, we have argued, in part by considering $\Pi_{w\rho}$ (\eq{buoyflux}) and its magnitude relative to the total energy flux, that the scalar field is not passive. Furthermore, the excellent agreement of the observed spectral indices, and the accord between the observed break in the spectra at $k_c$ and the computed $K_{BO}$, suggest that BO scaling is more likely; this may be a first instance of such a scaling in a DNS of strongly stably stratified unbounded flows at relatively high Reynolds number (although, see \cite{kimura_96} and \cite{kimura_12}). \subsection{The lack of isotropy} \label{SS:aniso} The transition in the spectral slope at $k_c\approx 12$ is not visible in the total energy flux; this was already noticed in \cite{3072} in the purely rotating case: even though characteristic time scales and nonlinear dynamics change with wavenumber, the flow of energy across scales is smooth.
However, the wavenumber $k_c$ marks a clear transition in the character of the spectra, exhibiting also a sharp decrease of the ratio of kinetic to potential energy at large scales (see \figp{compaspec}{d}), followed by a quasi-equipartition between the two energies for $k\ge k_c$ all the way to the dissipative scale (although with a slight variation with wavenumber). This change of behavior in the ratio of kinetic to potential energy at $k\approx k_c$ clearly indicates that wavenumbers $k\ge k_c$ correspond to scales dominated by energetic exchanges between nonlinear eddies and wave modes, eventually leading to the quasi-equipartition between kinetic and potential energy expected for strongly stratified flows \cite{billant_01}, while wavenumbers $k< k_c$ are sensitive to the effect of both buoyancy and rotation. Lastly, at the smallest scales of the flow, dominated by dissipation processes, there is a broad decrease of kinetic energy compared to potential energy, which is likely a manifestation of overturning resolved in the small scales and leading to dissipative events and mixing (see also \fig{pvr22} below). Moreover, in the presence of rotation and stratification, the flow loses its mirror symmetry. A measure of the departure from mirror symmetry can be obtained from the examination of the relative helicity spectrum, defined here in absolute value terms as: \begin{equation} \sigma_V(k)=|H_V(k)|/[kE_V(k)] \ , \label{sigma} \end{equation} with $\sigma_V(k) \le 1 \ \forall k$ through a Schwarz inequality; $\sigma_V(k)$ is shown in \figp{helicity}{a}. In HIT, $E(k)\sim k^{-e}$ and $H(k)\sim k^{-h}$ with $e=h=5/3$, so that $\sigma_V(k)\sim 1/k$, indicating a (slow) return to mirror symmetry in the small scales. In our case, the evolution is different: $\sigma_V(k)$ is rather flat for small wavenumbers, and decays as $\sim k^{-3/2}$ for wavenumbers larger than $k_c$.
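To illustrate the HIT argument quoted above, a short check with power-law spectra, assuming $e=h=5/3$ and maximal helicity at $k=1$:

```python
import numpy as np

# Relative helicity sigma_V(k) = |H(k)|/(k E(k)) for HIT-like power laws,
# E ~ k^-5/3 and H ~ k^-5/3 (maximally helical at k = 1).
k = np.arange(1.0, 65.0)
E = k**(-5.0/3.0)
H = k**(-5.0/3.0)
sigma = np.abs(H) / (k * E)        # -> 1/k: slow return to mirror symmetry
```

Any common spectral index cancels in the ratio, so $\sigma_V(k)\sim 1/k$ whenever $e=h$, as stated in the text.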
In the purely rotating case, it can be shown using dimensional arguments \cite{pouquet_10} that $e+h=4$, on the basis of a small-scale flux dominated by helicity, which is an ideal invariant in that case (though not here). Assuming that the large-scale flow is dominated by rotation in a quasi-geostrophic regime, this leads to $e\approx 5/2$, close (but not identical) to the value found here for $k<k_c$, namely $e\approx 11/5$. It should be noted that this regime with $e=5/2$ corresponds to a fully helical flow ($\sigma_V(k)=1 \ \forall k$), a state which is known to be unstable \cite{podvigina_94}, and therefore an energy spectrum slightly shallower than $k^{-5/2}$ should be expected instead. This energy spectrum (together with the flat spectrum of helicity) ends at a wavenumber $\approx k_c$, beyond which one enters a rapid decrease of the helicity with wavenumber, slightly steeper than $1/k$, and with strong fluctuations likely corresponding to rapid changes of sign of the helicity at various scales. In \figp{helicity}{b} is presented the helicity spectrum $H(k_\perp)$ compensated by $k_\perp^{2}$. Note the region of excess helicity at small wavenumbers followed, for $k>k_c$ with $k_c \approx 12$, by a drop in the amplitude of the compensated spectrum, with fluctuations associated with rapid changes in sign of the helicity. For $k>300$, a sharp drop is observed. Indeed, for wavenumbers $k \lesssim k_c$ the compensated spectrum concentrates most of the helicity, which then decreases abruptly. This excess helicity at intermediate scales may derive from the alignment of the vortical structures produced by the rotation with vertical motions caused by buoyancy due to strong stratification, and may represent the physical mechanism for the generation of helicity proposed in \cite{Moffatt92,hide_76} and seen in direct numerical simulations in \cite{marino}. In \figp{helicity}{c} we also show the temporal behavior of the volume-averaged helicity.
The flow starts with some residual positive helicity (resulting from the random initial conditions), but for $t\gtrsim 4$ the helicity fluctuates around zero. The lack of preference for anti-alignment or alignment of velocity and vorticity can also be seen in \figp{helicity}{d}, which displays an average of PDFs of the cosine of the angle between velocity and vorticity. Note that instantaneous PDFs (not shown) can display some slight excess at $\pm 1$, corresponding to the fluctuations in the global helicity given in \figp{helicity}{c}. In the presence of rotation and stratification, the flow also loses its isotropy. In \figp{other_spec}{a}, we show the ratio $E_{3D}(k_\perp)/e(k_\perp,k_\parallel=0)$, as defined in Eqs.~(\ref{etheta}) and (\ref{ek3dperp}). Both the numerator and denominator are averaged about the peak of dissipation on the time interval $t\in[5.3,5.7]$. This plot shows that at very large scales there is a roughly constant and small amount of energy in the 3D modes compared with that in the 2D modes. Rotation seems to play a role at these scales, mediating the distribution of kinetic energy between 2D and 3D modes, and accumulating more energy in 2D modes \cite{EPL}. As larger $k_\perp$ wavenumbers are considered, this distribution changes rapidly until it reaches a local maximum around $k_B$. After a small decrease, the amount of energy in 3D modes far exceeds that in 2D modes, as expected in strongly stratified flows, as energy is transferred to large $k_\perp$ and potential modes are excited.
In other words, the ratio $E_{3D}(k_\perp)/e(k_\perp,k_\parallel=0)$ is consistent with a scenario in which rotation, effective at large scales (presumably for $k<k_\Omega$), controls the anisotropy, while at smaller scales, as the system becomes dominated by stratification at the buoyancy scale, $k_B\approx 16$, the energy is transferred towards modes with small $k_\perp$ but with $k_\parallel \ne 0$, resulting in most of the energy being in 3D modes. According to \cite{billant_01}, under conditions of strong stratification ($Fr\to0$), the equations describing the flow become self-similar. With rotation, self-similarity still holds, but the buoyancy scale \eq{LB} is suggested to take the modified form \begin{equation} \tilde{L}_B = U_0 \mathcal{F}(Ro)/N \ = \ L_B \mathcal{F}(Ro) \, , \label{LBmod} \end{equation} where $\mathcal{F}(Ro) \to 1$ when $Ro \to \infty$, and $\mathcal{F}(Ro) \to Ro^{-1}$ when $Ro \to 0$. In other words, under the effect of increasing rotation at fixed stratification, the scale at which the effective Froude number in the vertical is of order unity increases as well, meaning that the large scales are more unstable. In the quasi-geostrophic (QG) limit, for strong rotation and strong stratification, one can write $N L_v/f= L_\perp$, a relationship that can be obtained simply, for example, by equating in the dispersion relation the terms due to rotation and to stratification. This therefore defines a scale at which rotation and stratification balance each other. Taking $L_\perp$ to be the integral scale, $\approx 2.6$, we find for the wavenumber at which a change of behavior occurs, between a rotation-dominated and a stratification-dominated regime, $\tilde{k}_B \approx 12$, a value in good agreement with $k_c$, the break-point identified on several of the spectra presented here.
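The quasi-geostrophic balance estimate amounts to simple arithmetic; with $N/f = 4.95$ and $L_\perp = L_{int} \approx 2.6$:

```python
import math

# QG balance N L_v / f = L_perp  =>  L_v = L_perp / (N/f),
# and the corresponding wavenumber k_B_tilde = 2 pi / L_v.
L_perp = 2.6
N_over_f = 4.95
L_v = L_perp / N_over_f             # ~ 0.53
k_B_tilde = 2 * math.pi / L_v       # ~ 12
```

This recovers the value $\tilde{k}_B \approx 12$ quoted in the text.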
To reconcile this with the evaluation of $K_{BO}$ given earlier, we could conjecture that the energetics of the flow at large scale are dominated by the buoyancy, but that the precise scale distribution of the energy is governed by the rotation, as in the QG limit. In \figp{other_spec}{b}, we also show plots of both $e_\perp(k_\perp=0,k_\parallel)$ and the spectrum of potential energy, both compensated by $k_\parallel^{-3}$ and shown at the peak of enstrophy. It has been predicted \cite{billant_01} that $e_\perp(k_\perp=0,k_\parallel) \propto k_\parallel^{-3}$, and similarly that the spectrum of the temperature fluctuations should also scale as $e_P \propto k_\parallel^{-3}$. The figure shows evidence of this prediction in the kinetic energy, but if such a range exists in the potential energy, it is rather narrow. Both spectra seem to develop shallower power laws (other power laws are indicated in \figp{other_spec}{b} as references). For $k>k_B$ the temperature and horizontal kinetic energy in these spectra are in approximate equipartition, which is expected for a self-similar range corresponding, in the primitive equations, to a balance between nonlinearity and wave dynamics. Note that a $k_\parallel^{-3}$ spectrum is often observed in the ocean, where it is called the saturation spectrum; it is the regime in which, at least in the purely stratified case, intermittency of the vertical velocity is expected \cite{rorai_14}. Lastly, in \fig{ang_spec} are shown the angular spectra of the {\it total} energy (cf. \eq{etheta} for the kinetic energy) for several values of the co-latitude, $\theta$, i.e., the angle between the wave-vector ${\bf k}$ and the vertical. All spectra are averaged evenly around the peak of dissipation using ten temporal snapshots, and are compensated by $k_\perp^{-16/5}$, which is equivalent to compensating the isotropic spectra by $k^{-11/5}$ (see the discussion after \eq{etheta}).
The angular spectra are computed by interpolating the time-averaged 2D axisymmetric spectra along the line at a given co-latitude using a cubic interpolating polynomial. All scales are anisotropic, except close to the dissipative range; this is expected since, in this simulation, $k_{OZ}\approx 431$ and $k_{\eta}\approx 660$ (see \S \ref{ss:param11}). Due to the dispersion relation, \eq{dispersion}, as $\theta\to 0$ inertial waves will dominate gravity waves, and as $\theta \to \pi/2$ the reverse will occur; the angular spectra reflect roughly a continuum in this behavior. The apparent tendency at small co-latitude for the spectrum to become very steep at large scales suggests a quasi-two-dimensionalization due to strong rotational effects \cite{smith_02}. At $\theta=20^\circ$, the steep range governed by strong rotation at the largest scales gives way to a BO scaling at around $k\sim 10$, and the BO scaling range seems to spread to larger scales as $\theta$ approaches intermediate values. But as the perpendicular direction is reached, multiple spectral ranges emerge after the BO scaling ends at the break-point $k=k_c\sim 12$ identified above. In fact, a new characteristic scale seems to materialize at $k\sim 45$ for the largest co-latitudes, which may serve to separate distinct dynamical balances, as illustrated by the reference slopes. \section{Structures } \label{S:struct} The salient physical structures that develop in this flow are relatively large, slanted layers, as can be seen in \fig{pvr1}, displaying the horizontal and vertical velocity. The plots are perspective volume renderings of a thin y-z slab, and the dimensions of the areas shown are $0.4\times0.7$ times the box size, comparable to the integral scale. The variation in the vertical direction is seen in these plots to be large, varying from filamentary-like thickness to structures at the integral scale, which is comparable to the domain size.
Additionally, in \fig{pvr22} are presented several renderings of a thin x-z slab, zooming in on an area of $0.12 \times 0.1$ times the box size, comparable to the vertical Taylor scale. Note that $\ell_{OZ}$ is about $\frac{1}{3}$ of this slab size. These visualizations show scales at which overturning can occur and demonstrate the clear onset of Kelvin-Helmholtz instabilities due to shear layers. In both Figs. \ref{pvr1} and \ref{pvr22}, the thickness of the layers being visualized is $0.01$ in terms of the box size, roughly 1/6th of the Kolmogorov (dissipation) length. The velocity is dominated by its perpendicular component, as already noted in \figp{fig_time}{a}. As expected, the vorticity displays more small-scale variation (see \fig{pvr22}, left). A few large-scale vortices can be observed in the flow as well, but they are not visible in this sub-volume; they can be related to the role played by rotation, as already noted when examining the energy flux. The aspect ratio of the vortices has been found to depend on the global value of $N/f$ through, for example, the variation of correlation length scales \cite{lindborg2005, sukhatme_08}. It also depends on local values, as determined, for example, by the local rotation of the vortex \cite{aubert_12}. In \fig{pvr22}, a clear vortex street appears at that time in the vorticity (left), the density (middle) and the gradient Richardson number (right) defined in \eq{eq:Ri}, showing that the flow can be locally unstable to overturning. Note the strong correlation between vorticity and temperature fluctuations, and the fact that the most unstable regions of the flow at this time are not strongly linked to the vortex street; rather, other layers are being destabilized. Note also the intermingling of stable and unstable structures at these scales.
As mentioned earlier, the Richardson number based on velocity gradients (which can be defined in terms of $\ell_z$) can be considered as an overall index of the potential instability of the flow. A decrease in $\ell_z$ can thus be interpreted as leading to a smaller (possibly negative) gradient Richardson number, indicative of an evolution towards a flow more prone to overturning instability. Indeed, the probability distribution function of $Ri_g$ shown in \fig{richardson} indicates a strong probability of the flow meeting the classical criterion for overturning. It was found in \cite{laval_03} that $Ri$ can become negative above $R_{\lambda} \approx 900$, with the change in sign coming from the change in sign of the vertical gradient of density. These results indicate that instabilities are triggered at various locations in the flow. In fact, actual bumps in the energy spectra have been observed in \cite{laval_03} at times of minima in the Richardson number for sufficiently high $R_{\lambda}$, corresponding to Kelvin-Helmholtz instabilities directly feeding the small scales. \section{Conclusion} \label{S:conclu} We have analyzed in this paper the results obtained from a high Reynolds number run of rotating stratified turbulence with $N/f=4.95$, characteristic of the abyssal southern ocean at mid latitudes. With a Froude number of $\approx 0.024$ and $Re\approx 5.5 \times10^4$, this run is not realistic in terms of Reynolds number for geophysical fluid dynamics, and we have chosen to emphasize an examination of scales that are still dominated by the waves, with a barely resolved isotropic Kolmogorov range at small scales. To unravel the role played by different phenomena, we have examined the partition of several fields among scales.
We conclude that the largest scales (for $k < k_0$) are dominated by rotation, with a negative energy flux, and that, for wavenumbers $k_0< k < k_c$, the constant-flux range is one where the source of the energy is the potential energy stored in the large-scale gravity waves. We have presented evidence that this energy source potentially leads to a Bolgiano-Obukhov scaling (\eq{eq:BO}). We have also demonstrated that this scaling is not necessarily inconsistent with the self-similarity argument of \cite{billant_01}. The steep power law observed at large scale is consistent with many oceanic observations, as analyzed for example in \cite{scott_05, arbic_13}. The tendency for energy to pile up in the large scales, even in the spin-down case, was already noted in \cite{metais_96}, where the inverse transfer was attributed to the geostrophic modes, whereas the wave modes undergo a direct energy cascade (for a high-resolution forced case using hyper-viscosity, see \cite{kitamura_06}). At smaller scales, a Kolmogorov spectrum, in terms of horizontal wavenumbers, obtains before isotropy is recovered, as already found in several studies of stratified flows. In addition to the conspicuous Kelvin-Helmholtz instabilities observed at small scale, strong mixing at small scale is clearly favored, as indicated both by an overall Froude number based on a vertical length scale being of order unity, and by a PDF of the gradient Richardson number that shows directly the significant likelihood of overturning instability. The regime with small Froude number and yet large buoyancy Reynolds number and moderate rotation, characteristic of many flows in geophysical fluid dynamics, remains a computational challenge, in particular when assessing highly non-local interactions between large scales, fed by the inverse cascade of energy in the presence of (even weak) rotation, and small scales, fed by the direct cascade of energy.
Non-local interactions have been identified in such flows, for example in purely rotating flows \cite{alexrot}, in the context of the zig-zag instability \cite{deloncle_08}, and in rotating stratified turbulence \cite{aluie_11}. This clearly points to the need to resolve the large-scale as well as the small-scale dynamics. In this regard, fundamental and idealized studies such as the one presented in this paper will remain valuable for some time to come, if only because they might lead to improved anisotropic and multi-scale parametrizations of such flows. Many issues remain unexplored, and one should analyze in detail, for example, the distribution of energy among the normal modes of the flow (see, e.g., \cite{bartello_95, sukhatme_08}), the small-scale behavior of the flow, and the role that helical coherent structures can play in mixing, transport and intermittency in RST flows. Indeed, helicity, or the velocity-vorticity correlation, is an ideal ($\nu=0$) invariant of the homogeneous isotropic case (as well as in the presence of solid body rotation), but when stratification is added, it can be created--as evidenced here--by quasi-geostrophic large-scale flows as a consequence of thermal winds \cite{hide_76, marino}. It is known that, for HIT in the presence of helical coherent structures, mixing is modified. There are already sub-grid scale models of turbulence showing that, when helicity is taken into account, the modeling capability is enhanced in a measurable fashion \cite{yokoi_93, baerenzung_11}, and thus the present study at high resolution may provide a useful database for testing a variety of parametrization schemes. \begin{acknowledgments} This work was supported by CMG/NSF grant 1025183, and used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
Computer time was provided through a DOE INCITE award, number ENP008, and an NSF XSEDE allocation award, number TG-PHY110044. Additional computer time through an ASD allocation at NCAR is also gratefully acknowledged. PDM is a member of the Carrera del Investigador Cient\'{\i}fico of CONICET. Support for AP, from LASP and Bob Ergun, is gratefully acknowledged. \end{acknowledgments}
\section{Introduction} Recently, the LIGO~\citep{advanced-ligo} and Virgo~\citep{advanced-virgo} scientific collaborations (LVC) reported the detection of one of the most enigmatic gravitational wave (GW) mergers to date~\citep{Abbott:2020khf}. This event, named GW190814, has been associated with a compact-object binary with mass ratio $q = 0.112^{+0.008}_{-0.009}$, and primary and secondary masses $m_1 = 23.2^{+1.1}_{-1.0} M_\odot$ and $m_2 = 2.59^{+0.08}_{-0.09} M_\odot$, respectively. Since an electromagnetic (EM) counterpart has not been found for this particular event and the tidal deformability has not been measurable from the GW signal, the secondary component might well be the lightest BH ever found. However, EM emission is expected to be observed for only a fraction of NS binaries, and tidal deformabilities are known to be small for massive NSs; hence the secondary in this case cannot be ruled out as a NS. In the latter scenario, it would become the heaviest NS observed in a binary system, given its well-constrained mass. Either hypothesis deserves a deep study owing to its far-reaching implications for the formation channels of such objects and the nature of the densest form of matter in the universe. Discoveries of massive pulsars in past decades have severely constrained the EoS of supranuclear matter inside their cores~\citep{Demorest:2010bx,Antoniadis:2013pzd,Fonseca:2016tux,Arzoumanian:2017puf,Cromartie:2019kug}. These observations provided a very strong lower bound of $\sim 2 M_\odot$ on the maximum mass of nonrotating NSs that all the competing EoS models from nuclear physics must satisfy. Furthermore, GW170817~\citep{TheLIGOScientific:2017qsa} has prompted several studies predicting an upper bound of $\sim 2.2-2.3 M_\odot$ on $M_{\rm max}$ of nonrotating NSs, based on the mass ejecta, kilonova signal and absence of a prompt collapse \citep{Shibata:2017xdx, Margalit:2017dij, Ruiz:2017due, Rezzolla:2017aly, Shibata:2019ctb, PhysRevD.101.063029}. 
While the simultaneous mass-radius measurement of PSR J0030+0451 by the NICER collaboration \citep{Riley:2019yda,Miller:2019cac} indicates a tilt towards slightly stiffer EoSs \citep{Raaijmakers:2019dks, Landry_2020PhRvD.101l3007L,Biswas_arXiv_2008.01582B}, the distribution of $m_2$ would require an even higher $M_{\rm max}$. Possible formation channels of GW190814-type binaries have also been studied in some recent works \citep{Zevin:2020gma,Safarzadeh_2020ApJ...899L..15S,Kinugawa_2020arXiv200713343K}. While there is a general consensus that the fallback of a significant amount of bound supernova ejecta on the secondary compact remnant leads to its formation in the lower mass-gap region, whether it was a BH or a NS at the time of the merger remains unclear. Nevertheless, GW190814 has motivated experts to reevaluate the knowledge of dense matter and stellar structure to determine the possible scenarios in which one can construct such configurations of NSs while satisfying relevant constraints \citep{Most_2020MNRAS.499L..82M, Zhang_2020ApJ...902...38Z, Fattoyev_2020PhRvC.102f5805F, Tsokaros_2020ApJ...905...48T, Tews_2021ApJ...908L...1T, Lim_2020arXiv200706526L, Dexheimer_arXiv_2007.08493D, Sedrakian_2020PhRvD.102d1301S, Godzieba_arxiv_2020.10999, Huang_2020ApJ...904...39H, Demircik:2020jkc,LI2020135812}. Most of these works invoke rapid uniform rotation, with or without exotic matter such as hyperons or quark matter, exploiting the caveat that the spin of $m_2$ is unconstrained. Other possibilities, such as $m_2$ being a primordial BH~\citep{Vattis:2020iuz,Jedamzik_2021PhRvL.126e1302J,Clesse_arXiv_2007.06481C}, an anisotropic object~\citep{Roupas_2021Ap&SS.366....9R} [see also~\citep{Biswas:2019gkw} for a detailed study on anisotropic objects] or a NS in scalar-tensor gravity~\citep{Rosca-Mead_2020Symm...12.1384R}, have also been considered. 
In this article, we investigate the possibility of GW190814's secondary being a NS within a hybrid nuclear+PP EoS parameterization~\citep{Biswas_arXiv_2008.01582B}, and study its related properties under the assumptions that it is either slowly or rapidly rotating. We also constrain its spin using a universal relation developed by \citet{Breu:2016ufb}. \section{A brief review of our previous work} \label{previuos-work} In a previous work~\citep{Biswas_arXiv_2008.01582B}, we employed Bayesian statistics to constrain the EoS of NSs by combining multiple astrophysical observations. We formulated a hybrid nuclear+PP EoS model which uses a parabolic-expansion-based nuclear empirical parameterization around the nuclear saturation density ($\rho_0$) and a 3-segment PP parameterization at higher densities. Within the parabolic expansion, the energy per nucleon $e (\rho, \delta)$ of asymmetric nuclear matter can be expressed as: \begin{equation} e(\rho,\delta) \approx e_0(\rho) + e_{\rm sym}(\rho)\delta^2, \end{equation} where $e_0(\rho)$ is the energy of symmetric nuclear matter, which has equal numbers of neutrons and protons. $e_{\rm sym}(\rho)$ is the symmetry energy, which characterizes the energy cost of asymmetry in the neutron to proton ratio, and $\delta=(\rho_n-\rho_p)/\rho$ is the asymmetry parameter. $e_0(\rho)$ and $e_{\rm sym}(\rho)$ can be further expanded in a Taylor series around $\rho_0$: \begin{eqnarray} e_0(\rho) &=& e_0(\rho_0) + \frac{ K_0}{2}\chi^2 \label{eq:e0} +\,...,\\ e_{\rm sym}(\rho) &=& e_{\rm sym}(\rho_0) + L\chi + \frac{ K_{\rm sym}}{2}\chi^2 + ..., \label{eq:esym} \end{eqnarray} where $\chi \equiv (\rho-\rho_0)/3\rho_0$. At higher densities the EoS of nuclear matter is essentially unknown, which is why we choose a generic 3-segment PP parameterization above $1.25 \rho_0$. This particular transition density is motivated by a Bayesian evidence calculation, which is detailed in~\cite{Biswas_arXiv_2008.01582B}. 
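The truncated expansions above can be sketched in a few lines of Python. The empirical coefficients below are illustrative placeholders of typical magnitude, not the posterior values inferred in this work:

```python
# Sketch of the parabolic nuclear expansion, truncated at second
# order in chi = (rho - rho0) / (3 rho0). All empirical coefficients
# below are illustrative placeholders (in MeV), not inferred values.
E0_SAT = -16.0   # e_0(rho_0): saturation energy of symmetric matter
K0 = 240.0       # incompressibility
ESYM_SAT = 32.0  # e_sym(rho_0): symmetry energy at saturation
L_SYM = 60.0     # symmetry-energy slope
KSYM = -100.0    # symmetry incompressibility

def energy_per_nucleon(rho, delta, rho0=0.16):
    """e(rho, delta) ~ e_0(rho) + e_sym(rho) * delta^2, in MeV;
    rho in fm^-3, delta = (rho_n - rho_p) / rho."""
    chi = (rho - rho0) / (3.0 * rho0)
    e0 = E0_SAT + 0.5 * K0 * chi ** 2
    esym = ESYM_SAT + L_SYM * chi + 0.5 * KSYM * chi ** 2
    return e0 + esym * delta ** 2
```

At saturation the expansion reduces to $e_0(\rho_0)$ for symmetric matter ($\delta=0$) and to $e_0(\rho_0)+e_{\rm sym}(\rho_0)$ for pure neutron matter ($\delta=1$).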
Then, we construct the posterior of the EoS parameters using Bayesian statistics based on this hybrid nuclear+PP model by combining astrophysical data from the radio observation of PSR J0740+6620, GW170817, GW190425, and NICER observations: \begin{equation} P(\theta | {d}) = \frac{P ({d} | \theta) \times P(\theta)}{P(d)}\, = \frac{\Pi_i P ({d_i} | \theta) \times P(\theta)}{P(d)}\,, \label{bayes theorem} \end{equation} where $\theta$ is the set of EoS parameters in the model, and $d = (d_{\rm GW}, d_{\rm X-ray}, d_{\rm Radio})$ is the set of data from the three different types of observations that are used to construct the likelihood. The mathematical expressions to compute each of the individual likelihoods are given in Eqs. 5, 6, and 7 of~\cite{Biswas_arXiv_2008.01582B}, respectively. In the present paper, we make use of the methodology built in our previous work and investigate the properties of the secondary object of GW190814 under a variety of assumptions. \section{Lightest BH or heaviest NS?} The mass of the secondary object in GW190814 measured by the LVC falls into the so-called ``mass gap" region~\citep{Bailyn_1998,_zel_2010} and, therefore, demands a careful inspection of its properties before it can be ruled out as a BH or NS. The uninformative measurements of the tidal deformability and spin of the secondary, together with the absence of an EM counterpart associated with this event, make it difficult to make a robust statement about the nature of this object. We begin by examining whether the GW mass measurement along with the hybrid nuclear+PP model alone can rule it out as a NS. In Fig.~\ref{fig:BHNS-prob} the posterior distribution of the secondary mass $m_2$ is plotted, in blue, by using publicly available LVC posterior samples~\footnote{LVK collaboration,~\href{https://dcc.ligo.org/LIGO-P2000183/public}{https://dcc.ligo.org/LIGO-P2000183/public}}. 
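Since the datasets are independent, the joint likelihood in the Bayes formula above factorizes, so log-likelihoods simply add. A minimal sketch, with toy Gaussian likelihoods standing in for the GW, X-ray, and radio terms (all numbers illustrative):

```python
import math

# Toy factorized posterior: sum of per-dataset log-likelihoods plus
# a log-prior, up to the (theta-independent) evidence term.
def log_gauss(x, mu, sigma):
    """Log-density of a 1D Gaussian; a stand-in for one dataset's
    likelihood term."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))

def log_posterior(theta, datasets, log_prior):
    # sum_i log P(d_i | theta) + log P(theta)
    return sum(log_gauss(theta, mu, s) for mu, s in datasets) + log_prior(theta)
```

With a flat prior and two unit-width datasets centred at the same value, moving one sigma away from the centre lowers the log-posterior by exactly 1.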
In orange, the posterior distribution of $M_{\rm max}$ is overlaid from the hybrid nuclear+PP model analysis by~\citet{Biswas_arXiv_2008.01582B} using PSR J0740+6620~\citep{Cromartie:2019kug}, combined GW170817~\footnote{LVK collaboration,~\href{https://dcc.ligo.org/LIGO-P1800115/public}{https://dcc.ligo.org/LIGO-P1800115/public}} and GW190425~\footnote{LVK collaboration,~\href{https://dcc.ligo.org/LIGO-P2000026/public}{https://dcc.ligo.org/LIGO-P2000026/public}}, and NICER~\footnote{PSR J0030+0451 mass-radius samples released by ~\citet{Miller:2019cac},~\href{https://zenodo.org/record/3473466\#.XrOt1nWlxBc}{https://zenodo.org/record/3473466\#.XrOt1nWlxBc}} data. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{BHNS_prob.pdf} \caption{The probability distribution of $M_{\rm max}$ of NSs, obtained from~\citet{Biswas_arXiv_2008.01582B}, is shown in orange. The distribution shown in green is obtained with the same EoS samples as for the orange one, but considering uniform NS rotation at 716 Hz. These two distributions are compared with the probability distribution of the secondary's mass $m_2$ (in blue) deduced from the GW190814 posterior samples in ~\citet{Abbott:2020khf}. } \label{fig:BHNS-prob} \end{figure} Given these two distributions -- both for nonrotating stars -- we calculate the probability of $m_2$ being greater than $M_{\rm max}$, i.e., $P(m_2 > M_{\rm max}) = P(m_2 - M_{\rm max} > 0)$. This probability can be easily obtained by calculating the convolution of the $m_2$ and $-M_{\rm max}$ probability distributions, which yields $P(m_2 > M_{\rm max}) = 0.99$. Therefore, the mass measurement implies that the probability that the secondary object in GW190814 is a NS is $\sim 1\%$. However, this type of analysis is highly sensitive to the choice of EoS parameterization as well as to the implementation of the maximum-mass constraint obtained from the heaviest pulsar observations. 
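The convolution step above is equivalent to a simple Monte Carlo estimate over paired posterior draws. A sketch with toy Gaussian stand-ins for the two posteriors (the medians and widths below are illustrative, not the actual samples used here):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-ins for the posterior samples of m2 (GW190814) and of the
# nonrotating M_max; medians/widths are illustrative placeholders.
m2_samples = rng.normal(2.59, 0.05, 100_000)
mmax_samples = rng.normal(2.21, 0.12, 100_000)

# P(m2 > M_max) = P(m2 - M_max > 0): the convolution of the m2 and
# -M_max densities, estimated by pairing independent draws.
p_exceeds = np.mean(m2_samples - mmax_samples > 0.0)
```

For these toy inputs the probability of exceeding $M_{\rm max}$ is close to unity, mirroring the $\sim 1\%$ NS probability quoted above.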
The LVC analysis~\citep{Abbott:2020khf}, which is based on the spectral EoS parameterization~\citep{Lindblom:2010bb}, obtained a~$\sim 3\%$ probability for the secondary to be a NS using GW170817-informed EoS samples from~\citet{Abbott:2018exr}. The addition of NICER data might increase this probability. \citet{Essick_2020ApJ...904...80E} added NICER data in their analysis of GW observations based on a nonparametric EoS and also examined the impact of different assumptions about the compact object mass distribution. The $P(m_2 > M_{\rm max})$ probabilities technically depend on the mass prior assumed for the secondary, but \citet{Essick-2020arXiv} showed that, regardless of the assumed population model, there is a less than $\sim 6\%$ probability for the GW190814 secondary to be a NS. In the discovery paper, the LVC also reported an EoS-independent result using the pulsar mass distribution, following~\cite{Farr2020}, which suggests that there is less than a $\sim 29\%$ probability that the secondary is a NS. Despite the differences inherent to these studies, they all suggest that there is a small but finite probability that the secondary object in GW190814 is a NS. It is also important to note that they all assumed the NS to be either nonrotating or slowly rotating ($\chi < 0.05$). Another possibility is that the secondary object is a rapidly rotating NS~\citep{Most_2020MNRAS.499L..82M,Tsokaros_2020ApJ...905...48T}. It is known that uniform rotation can increase the maximum mass of a NS by $\sim 20\%$~\citep{1987ApJ...314..594F,Cook-a,Cook-b}. Therefore, rapid rotation may improve the chances that the GW190814 data are consistent with a NS. From pulsar observations, we know that NSs with spin frequencies as high as $\nu^{\rm obs}_{\rm max} = 716$ Hz exist in nature~\citep{Hessels:2006ze}. Using this value for the spin frequency and the EoS samples of~\citet{Biswas_arXiv_2008.01582B}, we can deduce the maximum improvement in the probability that the GW190814 secondary is a NS. 
We used this spin frequency in the {\tt RNS} code~\citep{Stergioulas:1994ea} and obtained the corresponding distribution of the maximum mass, denoted $M_{\rm max}^{716 \rm Hz}$. The superscript ``716 Hz" emphasizes that all configurations here are computed at that fixed spin frequency. In Fig.~\ref{fig:BHNS-prob}, the distribution of $M_{\rm max}^{716 \rm Hz}$ is shown in green. From the overlap of this distribution with $P (m_2)$, we find a $\sim 8 \%$ probability that $m_2$ is a rapidly rotating NS. Alternatively, if GW190814's secondary were indeed a NS, then the LVC mass measurement sets a lower limit on the maximum NS mass for any spin at least up to $\nu^{\rm obs}_{\rm max}$. We next relax this constraint by considering all theoretically allowed values of the spin frequency, which for some masses and EoSs may exceed the maximum observed value. In the next two sections, we investigate the properties of NSs -- for various rotational frequencies -- using a Bayesian approach based on the hybrid nuclear+PP EoS parameterization. \section{Properties assuming a slowly rotating NS} For slowly rotating NSs, a Bayesian methodology was already developed in~\citet{Biswas_arXiv_2008.01582B} (also briefly described in Sec.~\ref{previuos-work}) by combining multiple observations based on the hybrid nuclear+PP EoS parameterization. In this paper, instead of marginalizing over the mass of PSR J0740+6620 taking into account its measurement uncertainties (as described in~\citet{Biswas_arXiv_2008.01582B}), we consider the $m_2$ distribution of GW190814 as the heaviest pulsar mass measurement. We use a Gaussian kernel-density estimate to approximate the posterior distribution of $m_2$. The resulting posteriors of radius ($R_{1.4}$) and tidal deformability ($\Lambda_{1.4}$) obtained from this analysis are plotted in Fig.~\ref{fig:nonrotating-prop}. 
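The kernel-density step admits a compact sketch. The samples below are toy stand-ins for the LVC $m_2$ posterior, and the estimator is a minimal one-point version of what e.g. {\tt scipy.stats.gaussian\_kde} provides:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-in for the LVC m2 posterior samples (illustrative values).
m2_samples = rng.normal(2.59, 0.05, 20_000)

def kde_pdf(x, samples, bw):
    """Gaussian kernel-density estimate of the sample density at x,
    with kernel bandwidth bw; a minimal stand-in for a KDE library."""
    z = (x - samples) / bw
    return np.mean(np.exp(-0.5 * z * z)) / (bw * np.sqrt(2.0 * np.pi))
```

The resulting smooth density can then be evaluated anywhere, as required when using $m_2$ as the heaviest-pulsar mass likelihood.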
We find that $R_{1.4}=13.3^{+0.5}_{-0.6}$ km and $\Lambda_{1.4}=795^{+151}_{-194}$, at $90 \%$ CI, which are in good agreement with previous studies~\citep{Abbott:2020khf,Essick_2020ApJ...904...80E,Tews_2021ApJ...908L...1T}. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{GW190814_non-rot_r14_l14_dist.pdf} \caption{Posterior distributions of $R_{1.4}$ (left panel) and $\Lambda_{1.4}$ (middle panel), as well as the pressure as a function of energy density (right panel) are plotted assuming that the secondary companion of GW190814 is a nonrotating NS. Median and $90\%$ CI are shown by solid and dashed lines, respectively.} \label{fig:nonrotating-prop} \end{figure*} The addition of GW190814 makes the EoS stiffer, especially in the high-density region, since now only a very small subspace of the EoS family can support a $\sim 2.6M_{\odot}$ NS. In the right panel of Fig.~\ref{fig:nonrotating-prop}, the $90 \%$ CI of the pressure posterior inside the NS is plotted as a function of energy density as a blue shaded band, and the corresponding $90 \%$ CI of the prior is shown by the black dotted lines. This plot clearly shows that the addition of GW190814 places a very tight constraint on the high-density part of the EoS. \section{Properties assuming a rapidly rotating NS} \label{rapid-rotation} In this article, for the first time, we develop a Bayesian formalism to constrain the EoS of NSs that allows for rapid rotation. We use a universal relation found by~\citet{Breu:2016ufb} which relates the maximum mass of a uniformly rotating star ($M_{\rm max}^{\rm rot}$) with the maximum mass of a nonrotating star ($M_{\rm max}^{\rm TOV}$) for the same EoS, \begin{equation} M_{\rm max}^{\rm rot} = M_{\rm max}^{\rm TOV} \left[1+a_1\left(\frac{\chi}{\chi_{\rm kep}}\right)^2 +a_2\left(\frac{\chi}{\chi_{\rm kep}}\right)^4\right]\, , \label{breu and rezolla:universal relation} \end{equation} where $a_1=0.132$ and $a_2=0.071$. 
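A minimal numerical sketch of this universal relation; the uniform $\pm 2\%$ factor models the accuracy of the fit quoted by \citet{Breu:2016ufb}, and all input masses are illustrative:

```python
import numpy as np

A1, A2 = 0.132, 0.071  # coefficients of the universal relation above

def mmax_rot(mmax_tov, chi_ratio):
    """Maximum mass of a uniformly rotating NS given the nonrotating
    (TOV) maximum mass; chi_ratio = chi / chi_kep in [0, 1]."""
    return mmax_tov * (1.0 + A1 * chi_ratio ** 2 + A2 * chi_ratio ** 4)

# Marginalizing the ~2% accuracy of the fit: perturb the prediction by
# a uniform [-2%, +2%] factor (illustrative sampling of the error).
rng = np.random.default_rng(2)
pred = mmax_rot(2.2, 1.0) * (1.0 + rng.uniform(-0.02, 0.02, 50_000))
```

At the mass-shedding limit the boost is $1 + a_1 + a_2 \approx 1.203$, consistent with the $\sim 20\%$ rotational increase quoted earlier.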
$\chi$ is the dimensionless spin magnitude of a uniformly rotating star and $\chi_{\rm kep}$ is the maximum allowed dimensionless spin magnitude at the mass-shedding limit. Given a $\chi/\chi_{\rm kep}$ value, we calculate $M_{\rm max}^{\rm rot}$ using this universal relation. Its use makes our computation much faster but can cause up to $\sim 2\%$ deviation from the exact result, as noted by~\citet{Breu:2016ufb}. We assume that the error is constant throughout the parameter space; we take it to be distributed uniformly in $[-2\%,2\%]$ and marginalize over it to get an unbiased estimate of the properties of the object. We combine data from PSR J0740+6620, two binary neutron star events, namely GW170817 and GW190425, as well as NICER data, assuming nonrotating NSs, following~\citet{Biswas_arXiv_2008.01582B}. Then, the $m_2$ distribution of GW190814 is used for the maximum-mass threshold of a uniformly rotating star, i.e., $M_{\rm max}^{\rm rot}$. We use a nested-sampling algorithm implemented in {\tt Pymultinest}~\citep{Buchner:2014nha} to simultaneously sample the EoS parameters and $\chi/\chi_{\rm kep}$. These posterior samples are then used in the {\tt RNS} code~\citep{Stergioulas:1994ea} to calculate several properties of the secondary object associated with GW190814. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{m2_prop.pdf} \caption{Posterior distributions of various properties of the secondary companion of GW190814 are shown assuming a rapidly rotating NS: equatorial radius $R_e$ (upper left), ellipticity $e$ (upper middle), dimensionless spin magnitude $\chi$ (upper right), rotational frequency $f$ in Hz (lower left), moment of inertia $I$ (lower middle) and quadrupole moment $Q$ (lower right). 
Median and $90\%$ CI are shown by solid and dashed lines, respectively.} \label{fig:rotating-prop} \end{figure*} In the upper left and middle panels of Fig.~\ref{fig:rotating-prop}, the posterior distributions of the equatorial radius ($R_e$) and ellipticity ($e$) are plotted, respectively. Within the $90\%$ CI we find $R_e = 14.1^{+1.5}_{-2.0}$ km and $e = 0.60^{+0.07}_{-0.23}$. Such high values of the equatorial radius and ellipticity imply a considerable deviation from a spherically symmetric static configuration. From the distribution of $\chi$ shown in the upper right panel of Fig.~\ref{fig:rotating-prop} we find its value to be $\chi = 0.57^{+0.09}_{-0.26}$. \citet{Most_2020MNRAS.499L..82M} have also obtained a similar bound on $\chi$ with simpler arguments. In this paper, we provide a distribution for $\chi$ employing a Bayesian framework as well as place a more robust bound on this parameter. In the lower left panel of Fig.~\ref{fig:rotating-prop}, the posterior distribution of the rotational frequency is plotted in Hz. We find its value to be $f=1170^{+389}_{-495}$ Hz. As noted above, to date PSR J1748$-$2446ad~\citep{Hessels:2006ze} is the fastest rotating pulsar known, with a rotational frequency of 716 Hz. {\em Therefore, if the secondary of GW190814 is indeed a rapidly rotating NS, it would definitely be the fastest rotating NS observed so far.} In the lower-middle and right panels, the posterior distributions of the moment of inertia and quadrupole moment of the secondary are shown, respectively. \subsection{Maximum spin frequencies and rotational instabilities} EoS constraints derived from the observation of nonrotating NSs also provide an upper bound on the maximum spin of a NS. 
The maximum spin frequency is given empirically as $f_{\rm lim} \simeq \frac{1}{2 \pi} (0.468 + 0.378 \chi_{s}) \sqrt{\frac{G M_{\rm max}}{R_{\rm max}^3}}$ \citep{1996ApJ...456..300L,Paschalidis:2016vmz}, where $\chi_{s} = \frac{2 G M_{\rm max}}{R_{\rm max} c^2}$, with $M_{\rm max}$ and $R_{\rm max}$ being the maximum mass of a nonrotating NS and its corresponding radius, respectively. We use the $M_{\rm max}-R_{\rm max}$ posterior samples that were deduced in~\citet{Biswas_arXiv_2008.01582B} by using PSR J0740+6620, combined GWs, and NICER data to calculate $f_{\rm lim}$. In the left panel of Fig.~\ref{fig:rot-NS-prob}, its distribution is shown by the shaded region. We overlay that distribution with the frequency distributions of the secondary object of GW190814 and those of a few hypothetical rotating NSs of various masses, all Gaussian distributed, with medians of 2.4 $M_{\odot}$, 2.8 $M_{\odot}$ and 3.0 $M_{\odot}$, respectively, and each having a measurement uncertainty of $0.1 M_{\odot}$. We also assume the primary component of GW190425 to be a rapidly rotating NS, since by using a high-spin prior the LVC determined its mass to be $1.61 M_{\odot}-2.52 M_{\odot}$. In our calculations, for GW190425 we used the publicly available high-spin posterior of $m_1$ obtained by using the PhenomPNRT waveform. We find that observations like $m_1$ of GW190425 and simulations like $\mathcal{N} (2.4 M_{\odot},0.1 M_{\odot})$ correspond to rotational-frequency posteriors that lie comparatively lower than the limiting values. However, as the mass increases, the frequency posterior eventually almost coincides with $f_{\rm lim}$. Therefore, if the secondary of GW190814 were a rapidly rotating NS, it would have to be rotating rather close to the limiting frequency. \begin{figure*} \centering \includegraphics[width=\textwidth]{multiple_source_dist.pdf} \caption{In the left panel, the probability distribution of $f_{\rm lim}$ is shown by the shaded region. 
Overlaid are the corresponding distributions for three simulated rapidly rotating NSs whose mass measurements are Gaussian distributed with medians 2.4 $M_{\odot}$, 2.8 $M_{\odot}$ and 3.0 $M_{\odot}$, respectively, each having a measurement uncertainty of $0.1 M_{\odot}$, as well as for the secondary component of GW190814 and the primary of GW190425. In the right panel, the corresponding ratio of rotational to gravitational potential energy, $T/|W|$, is shown. } \label{fig:rot-NS-prob} \end{figure*} Any rotating star is generically unstable through the Chandrasekhar-Friedman-Schutz (CFS) mechanism~\citep{Chandrasekhar:1992pr,Friedman:1978hf}. This instability occurs when a mode that is retrograde in the rotating frame becomes prograde in the inertial frame. For example, the $f$-modes of a rotating NS can always be made unstable for a sufficiently large mode number $m$ (not to be confused with the component masses $m_{1,2}$) even for low spin frequencies, but the instability timescale increases rapidly with $m$. Numerical calculations have shown~\citep{Stergioulas:1997ja,Morsink_1999} that for maximum-mass stars the $m=2$ mode changes from retrograde to prograde at $T/|W| \sim 0.06$, where $T$ is the rotational energy and $W$ the gravitational potential energy of the NS. We computed this ratio for all the cases considered in this section and plot the distributions in the right panel of Fig.~\ref{fig:rot-NS-prob}. From this analysis we find that the secondary of GW190814 should be $f$-mode unstable, since for most of the allowed EoSs $T/|W|$ is significantly larger than $0.06$. The CFS instability is even more effective for $r$-modes~\citep{Lindblom:1998wf,1999ApJ...510..846A}, as they are generically unstable for all values of the spin frequency. However, an instability can develop only if its growth timescale is shorter than the timescale of the strongest damping mechanism affecting it. 
A multitude of damping mechanisms, such as shear viscosity, bulk viscosity, the viscous boundary layer, crustal resonances and superfluid mutual friction (each having its own temperature dependence), have been investigated (see \citep{2016EPJA...52...38K,Paschalidis:2016vmz,Andersson:2019yve,2021ApJ...910...62Z} and references therein). The spin distribution of millisecond pulsars in accreting systems \citep{Papitto:2014yia} can be explained if the $r$-mode instability is effectively damped up to spin frequencies of $\sim 700\,$Hz \citep{2011PhRvL.107j1101H} and becomes operative at higher spin rates. This would not allow the secondary in GW190814 to be a rapidly rotating NS at the limiting spin frequency. On the other hand, if the secondary of GW190814 {\it were} a rapidly rotating NS at the limiting frequency, then the $f$-mode and $r$-mode instabilities must be effectively damped both during the spin-up phase in a low-mass X-ray binary, where it acquires rapid rotation, as well as during its subsequent lifetime up to the moment of merger. This might be possible if both the $f$-mode and the $r$-mode instabilities are damped by a particularly strong mutual friction of superfluid vortices below the superfluid transition temperature of $\sim 10^9\,$K (see \cite{ 2000PhRvD..61j4003L, 2011PhRvL.107j1102G} and in particular the case of an intermediate drag parameter ${\cal R}\sim 1$ in \citet{10.1111/j.1365-2966.2009.14963.x}). If this is the case, then the limiting frequency observed in the spin distribution of millisecond pulsars must be explained by other mechanisms; see \citet{Gittins:2018cdw}. The possible presence of rapidly rotating NSs in merging binaries would thus have strong implications for the physics of superfluidity in NS matter (in particular constraining the drag parameter $\cal R$ of mutual friction) and for the astrophysics of accreting systems. 
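For reference, the empirical limiting-frequency formula used earlier in this section can be evaluated directly; the constants are in SI units and the mass and radius inputs below are illustrative, not posterior values:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8      # speed of light, m s^-1
MSUN = 1.989e30  # solar mass, kg

def f_lim(mmax_msun, rmax_km):
    """Empirical limiting spin frequency in Hz, following the fit
    quoted in the text, from the nonrotating maximum mass (in solar
    masses) and its corresponding radius (in km)."""
    m = mmax_msun * MSUN
    r = rmax_km * 1.0e3
    chi_s = 2.0 * G * m / (r * C ** 2)  # compactness parameter
    return (0.468 + 0.378 * chi_s) * math.sqrt(G * m / r ** 3) / (2.0 * math.pi)
```

For fixed mass, a larger radius lowers the limiting frequency, so stiffer EoSs (larger $R_{\rm max}$) give slower mass-shedding limits.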
\section{Constraining the NS EoS assuming that the GW190814 secondary is a BH} So far, we have analyzed the impact on NS EoS properties arising from the hypothesis that the secondary object in GW190814 is a NS. On the other hand, if that secondary object is a BH, then again novel information about the NS EoS can be obtained, since it will set an upper bound on the NS maximum mass, provided one assumes that the NS and BH mass distributions do not overlap. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{mmax_GW190814_BBH.pdf} \caption{The probability distribution of the NS $M_{\rm max}$ is plotted in blue, under the hypothesis that the GW190814 secondary is a BH. Overlaid in orange is the LVC posterior of the primary in GW190425, for the high-spin prior.} \label{fig:mmax_GW190814_BBH} \end{figure} In our analysis, we take this upper bound to be $2.5 M_{\odot}$, which is the lowest possible value of the secondary's mass within the $90 \%$ CI. Then, using Bayesian inference for nonrotating stars, we combine PSR J0740+6620, GWs, and NICER data to place further constraints on the NS EoS. In Fig.~\ref{fig:mmax_GW190814_BBH}, the distribution of the maximum mass for nonrotating NSs is shown in blue using the EoS samples obtained from this analysis. Within the $90 \%$ CI we find $M_{\rm max}=2.22^{+0.19}_{-0.21} M_{\odot}$, which is the most conservative bound on the NS maximum mass obtained so far in this work. Assuming a high-spin prior, the mass of the primary component of GW190425 is constrained between $1.61-2.52 M_{\odot}$. In Fig.~\ref{fig:mmax_GW190814_BBH}, its distribution is overplotted in orange. From the overlap of the newly obtained $M_{\rm max}$ distribution with the $m_1$ distribution of GW190425, we find that there is a $\sim 40 \%$ probability that the primary of GW190425 is a BH. 
\section{Conclusion} Based on the maximum mass samples obtained from~\citet{Biswas_arXiv_2008.01582B}, we find that there is a $\sim 1 \%$ probability that the secondary object associated with GW190814 is a nonrotating NS. However, such an estimation depends on the choice of EoS parameterization and the maximum mass threshold. Nevertheless, the possibility of the secondary being a nonrotating NS is not inconsistent with the data. Based on our hybrid nuclear+PP EoS parameterization, we find that the addition of GW190814 as a nonrotating star provides a very stringent constraint on the EoS, especially in the high-density region. We also discussed the alternative that the secondary is a rapidly rotating NS. We find that in order to satisfy the secondary mass estimate of GW190814, its spin has to be close to the limiting spin frequency for uniform rotation. In fact, it would be the fastest rotating NS ever observed. However, this could be the case only if gravitational-wave instabilities are effectively damped for rapidly rotating stars, which opens the possibility of constraining physical mechanisms, such as mutual friction in a superfluid interior. \section*{Note added in proof} Recently the mass of PSR J0740+6620 was revised slightly downwards -- to $2.08^{+0.07}_{-0.07} M_{\odot}$, at $68 \%$ CI~\citep{2021arXiv210400880F}. This has a marginal effect on the EoS posterior, potentially making some slightly softer EoSs viable~\citep{Biswas:2021yge}. Consequently, under the rapidly rotating scenario, the spin of the secondary would become slightly higher than what is reported in this paper. \section*{Acknowledgements} We thank Philippe Landry and Toni Font for carefully reading the manuscript and making several useful suggestions. We gratefully acknowledge the use of the high performance super-computing cluster Pegasus at IUCAA for this work. P.C. is supported by the Fonds de la Recherche Scientifique-FNRS, Belgium, under grant No. 4.4503.19. 
This research has made use of data, software and/or web tools obtained from the Gravitational Wave Open Science Center (\href{https://www.gw-openscience.org}{https://www.gw-openscience.org}), a service of LIGO Laboratory, the LIGO Scientific Collaboration and the Virgo Collaboration. LIGO is funded by the U.S. National Science Foundation. The authors gratefully acknowledge the Italian Istituto Nazionale di Fisica Nucleare (INFN), the French Centre National de la Recherche Scientifique (CNRS) and the Netherlands Organization for Scientific Research, for the construction and operation of the Virgo detector and the creation and support of the EGO consortium. We would like to thank all of the essential workers who put their health at risk during the COVID-19 pandemic, without whom we would not have been able to complete this work. \bibliographystyle{mnras}
\section{Introduction} \label{sec:Intro} The LHC experiments are now searching for new physics, for which supersymmetry (SUSY) is one of the most attractive candidates. SUSY signals have not yet been observed, although the Higgs-like events are almost confirmed~\cite{Higgs}. The lower bounds on the superparticle masses are gradually increasing. The squark and gluino masses are expected to be larger than $1$ TeV~\cite{squarkmass}. On the other hand, the LHCb collaboration has reported new data on the CP violation of $B$ mesons and the branching ratios of rare $B$ decays~\cite{Bediaga:2012py,Aaij:2012ct,Lambert:2012gf}. New physics is also expected to be found indirectly in $B$ meson decays. For many years the CP violation in the $K$ and $B^0$ mesons has been successfully understood within the framework of the standard model (SM), the so-called Kobayashi-Maskawa (KM) model~\cite{Kobayashi:1973fv}, where the source of the CP violation is the KM phase in the quark sector with three families. On the other hand, there are new sources of CP violation if the SM is extended to SUSY models. The soft squark mass matrices contain CP-violating phases, which contribute to the flavor changing neutral current (FCNC) with CP violation. Therefore, we expect SUSY contributions to appear in CP-violating phenomena. However, no clear deviation from the prediction of the SM has been observed yet in the LHCb experiment~\cite{Bediaga:2012py,Aaij:2012ct,Lambert:2012gf}. In our previous works~\cite{Hayakawa:2012ua,Shimizu:2012ru}, we studied the SUSY contribution which comes from the gluino-squark mediated flavor changing process~\cite{King:2010np}-\cite{Ishimori:2011nv}. We used only the experimental data on the $b\to s\gamma $ decay to constrain the mass insertion (MI) parameters of squarks. We then predicted the CP violations of a few $b\to s$ transition processes. 
In the present paper, we present a systematic study of the effect of the gluino-squark mediated flavor changing process on the CP violation of the $b\to s$ and $b\to d$ transitions. In order to obtain more precise numerical results, we take account of the QCD corrections to the SUSY contribution. Moreover, in order to constrain the MI parameters, we also input the recent experimental data on the time-dependent CP asymmetries of non-leptonic $B^0$ decays, in addition to the experimental data on the $b\to s\gamma $ decay. The LHCb collaboration reported the time-dependent CP asymmetry $S_{J/\psi \phi}$ in the non-leptonic $B_s$ decay, which constrains the SUSY contribution to the $b\to s$ transition. The CP asymmetry of $B_s \to \phi \phi $ is expected to be observed in the near future at LHCb~\cite{Lambert:2012gf}. If squark flavor mixing contributes to the FCNC, we expect to observe a sizeable time-dependent CP asymmetry in this process, for which the SM prediction is very small. The typical process of the $b\to s$ transition is the $b\to s\gamma $ decay, for which the experimental data on the branching ratio, the direct CP violation, and the time-dependent CP asymmetry $S_{K^*\gamma }$ have been reported. The SUSY contribution is also constrained by the data on the time-dependent CP asymmetries in the $B^0\to \phi K_S$ and $B^0\to \eta ' K^0$ decays~\cite{PDG,Amhis:2012bh}. On the other hand, the $b\to d$ transition also becomes available for investigating the SUSY contribution quantitatively, taking into account the recent experimental branching ratio of the $b\to d\gamma $ decay~\cite{delAmoSanchez:2010ae,Crivellin:2011ba}. In this transition, the time-dependent CP asymmetry of the $B^0 \to K^0 \bar K^0$ decay is an attractive observable in which to search for the SUSY effect, because the penguin amplitude dominates this process. We also predict the time-dependent CP asymmetry of $B^0 \to \rho \gamma $, $S_{\rho \gamma }$. 
The dominant SUSY contribution to the $B$ meson decays discussed in this work is the gluino-squark mediated flavor changing process. We present the constraints on the MI parameters $(\delta _d^{LR})_{23}$ and $(\delta _d^{LR})_{13}$ imposed by the experimental data, taking the squark and gluino masses at the TeV scale. By using these MI parameters, we predict the CP violation in $B$ meson decays, among which the most interesting one is the $B_s\to \phi \phi $ decay; the CP violation in this decay will be measured at LHCb in the near future. In section 2, we present the formulation of the gluino-squark contribution to the CP violation of $B$ mesons in our framework. In section 3, we discuss the $b\to s$ transition, and present numerical predictions for the direct CP violation and the time dependent CP asymmetries in the $B^0\to K^*\gamma $, $B_s\to \phi \phi $, and $B_s\to \eta' \phi$ decays. In section 4, we discuss the $b\to d$ transition, and present numerical predictions for the direct CP violation and the time dependent CP asymmetries in the $B^0\to \rho \gamma $ and $B^0\to K^0\bar K^0$ decays. Section 5 is devoted to the summary. \section{Squark flavor mixing in CP violation of $B$ mesons} \label{sec:Deviation} Let us present the framework of the calculations for the contribution of the squark flavor mixing, which arises from the coupling among down-type quarks, down-type squarks, and the gluino. The effective Hamiltonian for the $\Delta B=1$ process is given as \begin{equation} H_{eff}=\frac{4G_F}{\sqrt{2}}\left [\sum _{q'=u,c}V_{q'b}V_{q'q}^* \sum _{i=1,2}C_iO_i^{(q')}-V_{tb}V_{tq}^* \sum _{i=3-6,7\gamma ,8G}\left (C_iO_i+\widetilde C_i\widetilde O_i\right )\right ], \end{equation} where $q=s,d$. 
The local operators are given as \begin{align} &O_1^{(q')}=(\bar q_\alpha\gamma _\mu P_Lq_\beta') (\bar q_\beta'\gamma ^\mu P_Lb_\alpha), \qquad O_2^{(q')}=(\bar q_\alpha\gamma _\mu P_Lq_\alpha') (\bar q_\beta'\gamma ^\mu P_Lb_\beta), \nonumber \\ &O_3=(\bar q_\alpha\gamma _\mu P_Lb_\alpha)\sum _Q(\bar Q_\beta\gamma ^\mu P_LQ_\beta), \quad O_4=(\bar q_\alpha\gamma _\mu P_Lb_\beta)\sum _Q(\bar Q_\beta\gamma ^\mu P_LQ_\alpha), \nonumber \\ &O_5=(\bar q_\alpha\gamma _\mu P_Lb_\alpha)\sum _Q(\bar Q_\beta\gamma ^\mu P_RQ_\beta), \quad O_6=(\bar q_\alpha\gamma _\mu P_Lb_\beta)\sum _Q(\bar Q_\beta\gamma ^\mu P_RQ_\alpha), \nonumber \\ &O_{7\gamma }=\frac{e}{16\pi ^2}m_b\bar q_\alpha\sigma ^{\mu \nu }P_Rb_\alpha F_{\mu \nu }, \qquad O_{8G}=\frac{g_s}{16\pi ^2}m_b\bar q_\alpha\sigma ^{\mu \nu } P_RT_{\alpha\beta}^ab_\beta G_{\mu \nu }^a, \end{align} where $P_R=(1+\gamma _5)/2$, $P_L=(1-\gamma _5)/2$, $\alpha $ and $\beta $ are color indices, and $Q$ runs over the $u,d,s,c$ quarks. Here, the $C_i$'s and $\widetilde C_i$'s are the Wilson coefficients, and the $\widetilde O_i$'s are the operators obtained from the $O_i$'s by replacing $L(R)$ with $R(L)$. In this paper, $C_i$ includes both the SM contribution and the gluino one, that is, $C_i=C_i^{\rm SM}+C_i^{\tilde g}$, where the $C_i^{\text{SM}}$'s are given in Ref.~\cite{Buchalla:1995vs}. In order to estimate the SUSY contribution $C_i^{\tilde g}$, we take the most popular ansatz, a degenerate SUSY breaking mass spectrum, for the flavor structure of the squarks. 
In the super-CKM basis, we can parametrize the soft scalar masses squared of the down-type squarks, $M^2_{\tilde d_{LL}}$, $M^2_{\tilde d_{RR}}$, $M^2_{\tilde d_{LR}}$, and $M^2_{\tilde d_{RL}}$, as follows: \begin{align} M^2_{\tilde d_{LL}}&=m_{\tilde q}^2 \begin{pmatrix} 1+(\delta _d^{LL})_{11} & (\delta _d^{LL})_{12} & (\delta _d^{LL})_{13} \\ (\delta _d^{LL})_{12}^* & 1+(\delta _d^{LL})_{22} & (\delta _d^{LL})_{23} \\ (\delta _d^{LL})_{13}^* & (\delta _d^{LL})_{23}^* & 1+(\delta _d^{LL})_{33} \end{pmatrix}, \nonumber \\ M^2_{\tilde d_{RR}}&=m_{\tilde q}^2 \begin{pmatrix} 1+(\delta _d^{RR})_{11} & (\delta _d^{RR})_{12} & (\delta _d^{RR})_{13} \\ (\delta _d^{RR})_{12}^* & 1+(\delta _d^{RR})_{22} & (\delta _d^{RR})_{23} \\ (\delta _d^{RR})_{13}^* & (\delta _d^{RR})_{23}^* & 1+(\delta _d^{RR})_{33} \end{pmatrix}, \nonumber \\ M^2_{\tilde d_{LR}}&=(M_{\tilde d_{RL}}^2)^\dagger =m_{\tilde q}^2 \begin{pmatrix} (\delta _d^{LR})_{11} & (\delta _d^{LR})_{12} & (\delta _d^{LR})_{13} \\ (\delta _d^{LR})_{21} & (\delta _d^{LR})_{22} & (\delta _d^{LR})_{23} \\ (\delta _d^{LR})_{31} & (\delta _d^{LR})_{32} & (\delta _d^{LR})_{33} \end{pmatrix}, \end{align} where $m_{\tilde q}$ is the average squark mass, and $(\delta _d^{LL})_{ij}$, $(\delta _d^{RR})_{ij}$, $(\delta _d^{LR})_{ij}$, and $(\delta _d^{RL})_{ij}$ are called the mass insertion (MI) parameters. 
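As a cross-check of this parametrization, the following short numerical sketch (with purely illustrative MI values, not the fit results of this paper) builds the hermitian $LL$ block and verifies that its eigenvalues, the squared masses, stay nearly degenerate for small insertions:

```python
import numpy as np

# Illustrative sketch: build the LL soft-mass-squared block of the down-type
# squarks from mass-insertion (MI) parameters in the super-CKM basis.
m_sq = 1.5e3  # average squark mass in GeV (the value used later in the paper)

# Hypothetical MI parameters (delta_d^LL)_{ij}; off-diagonals are complex.
d11, d22, d33 = 0.0, 0.0, 0.0
d12 = 1e-3 * np.exp(1j * 0.3)
d13 = 5e-3 * np.exp(1j * 1.1)
d23 = 1e-2 * np.exp(1j * 0.7)

M2_LL = m_sq**2 * np.array([
    [1 + d11,       d12,          d13],
    [np.conj(d12),  1 + d22,      d23],
    [np.conj(d13),  np.conj(d23), 1 + d33],
])

# The LL (and RR) blocks are hermitian by construction, so the eigenvalues
# are real squared masses, nearly degenerate for small MI parameters.
masses2 = np.linalg.eigvalsh(M2_LL)
```

The degeneracy of `masses2` up to corrections of order the MI parameters is precisely what justifies the mass-insertion expansion used in the Wilson coefficients below.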
The Wilson coefficients of the gluino contribution $C_i^{\tilde g}$ are given as follows~\cite{Endo:2004fx}: \begin{align} C_3^{\tilde g}(m_{\tilde g})&\simeq \frac{\sqrt{2}\alpha _s^2}{4G_FV_{tb}V_{tq}^*m_{\tilde q}^2}(\delta _d^{LL})_{k3} \left [-\frac{1}{9}B_1(x)-\frac{5}{9}B_2(x)-\frac{1}{18}P_1(x)-\frac{1}{2}P_2(x)\right ], \nonumber \\ C_4^{\tilde g}(m_{\tilde g})&\simeq \frac{\sqrt{2}\alpha _s^2}{4G_FV_{tb}V_{tq}^*m_{\tilde q}^2}(\delta _d^{LL})_{k3} \left [-\frac{7}{3}B_1(x)+\frac{1}{3}B_2(x)+\frac{1}{6}P_1(x)+\frac{3}{2}P_2(x)\right ], \nonumber \\ C_5^{\tilde g}(m_{\tilde g})&\simeq \frac{\sqrt{2}\alpha _s^2}{4G_FV_{tb}V_{tq}^*m_{\tilde q}^2}(\delta _d^{LL})_{k3} \left [\frac{10}{9}B_1(x)+\frac{1}{18}B_2(x)-\frac{1}{18}P_1(x)-\frac{1}{2}P_2(x)\right ], \nonumber \\ C_6^{\tilde g}(m_{\tilde g})&\simeq \frac{\sqrt{2}\alpha _s^2}{4G_FV_{tb}V_{tq}^*m_{\tilde q}^2}(\delta _d^{LL})_{k3} \left [-\frac{2}{3}B_1(x)+\frac{7}{6}B_2(x)+\frac{1}{6}P_1(x)+\frac{3}{2}P_2(x)\right ], \nonumber \\ C_{7\gamma }^{\tilde g}(m_{\tilde g})&\simeq -\frac{\sqrt{2}\alpha _s\pi }{6G_FV_{tb}V_{tq}^*m_{\tilde q}^2} \Bigg [(\delta _d^{LL})_{k3}\left (\frac{8}{3}M_3(x)- \mu \tan \beta \frac{m_{\tilde g}}{m_{\tilde q}^2}\frac{8}{3}M_a(x)\right ) +(\delta _d^{LR})_{k3}\frac{m_{\tilde g}}{m_b}\frac{8}{3}M_1(x)\Bigg ], \nonumber \\ C_{8G}^{\tilde g}(m_{\tilde g})&\simeq -\frac{\sqrt{2}\alpha _s\pi }{2G_FV_{tb}V_{tq}^*m_{\tilde q}^2} \Bigg [(\delta _d^{LL})_{k3}\Bigg \{ \left (\frac{1}{3}M_3(x)+3M_4(x)\right ) \nonumber \\ &-\mu \tan \beta \frac{m_{\tilde g}}{m_{\tilde q}^2}\left (\frac{1}{3}M_a(x)+3M_b(x)\right )\Bigg \} +(\delta _d^{LR})_{k3}\frac{m_{\tilde g}}{m_b}\left (\frac{1}{3}M_1(x)+3M_2(x)\right )\Bigg ], \label{Coeff} \end{align} where $k=2, 1$ correspond to $b\to q$ ($q=s,d$) transitions, respectively. Here the double mass insertion is included in $C_{7\gamma }^{\tilde g}(m_{\tilde g})$ and $C_{8G}^{\tilde g}(m_{\tilde g})$. 
The Wilson coefficients $\widetilde C_i^{\tilde g}(m_{\tilde g})$ are obtained by replacing $L(R)$ with $R(L)$ in the $C_i^{\tilde g}(m_{\tilde g})$. The loop functions in Eq.~(\ref{Coeff}) are presented in our previous work~\cite{Hayakawa:2012ua}. In our calculations, $C_{7\gamma}$ and $C_{8G}$ give the dominant contributions to the CP violation in the $b\to s$ and $b\to d$ transitions. The effective Wilson coefficients $C_{7\gamma}(m_b)$ and $C_{8G}(m_b)$ are given at leading order in QCD as follows~\cite{Buchalla:1995vs}: \begin{equation} \begin{split} C_{7\gamma}^{\tilde g}(m_b) &= \zeta C_{7\gamma}^{\tilde g}(m_{\tilde g}) +\frac{8}{3}(\eta-\zeta) C_{8G}^{\tilde g}(m_{\tilde g}), \cr C_{8G}^{\tilde g}(m_b) &=\eta C_{8G}^{\tilde g}(m_{\tilde g}), \end{split} \end{equation} where \begin{equation} \zeta=\left ( \frac{\alpha_s(m_{\tilde g})}{\alpha_s(m_t)} \right )^{\frac{16}{21}} \left ( \frac{\alpha_s(m_t)}{\alpha_s(m_b)} \right )^{\frac{16}{23}} \ , \qquad \eta=\left ( \frac{\alpha_s(m_{\tilde g})}{\alpha_s(m_t)} \right )^{\frac{14}{21}} \left ( \frac{\alpha_s(m_t)}{\alpha_s(m_b)} \right )^{\frac{14}{23}} \ . \end{equation} Let us discuss the time dependent CP asymmetries of $B^0$ and $B_s$ mesons decaying into a final state $f$, which are defined as~\cite{Aushev:2010bq} \begin{equation} S_f=\frac{2\text{Im}\lambda _{f}}{1+|\lambda_{f}|^2}\ , \qquad C_f=\frac{1-|\lambda_{f}|^2}{1+|\lambda_{f}|^2}\ , \label{sf} \end{equation} where \begin{equation} \lambda_{f}=\frac{q}{p} \bar \rho\ , \qquad \frac{q}{p}\simeq \sqrt{\frac{M_{12}^{q*}}{M_{12}^{q}}}, \qquad \bar \rho \equiv \frac{\bar A(\bar B_q^0\to f)}{A(B_q^0\to f)}. \label{lambdaf} \end{equation} Here $M_{12}^q$ $(q=s,d)$ are the dispersive parts of the $B_q$-$\bar B_q$ mixing, to which the quark-squark-gluino interaction contributes in addition to the SM one. 
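The definitions above can be checked numerically; the sketch below (with rough, illustrative $\alpha_s$ values rather than the inputs used in this paper) evaluates $S_f$ and $C_f$ for a pure-phase $\lambda_f$, together with the leading-order running factors $\zeta$ and $\eta$:

```python
import numpy as np

def cp_asymmetries(lam):
    """Time dependent CP asymmetries S_f, C_f of Eq. (sf) from lambda_f."""
    Sf = 2 * np.imag(lam) / (1 + abs(lam)**2)
    Cf = (1 - abs(lam)**2) / (1 + abs(lam)**2)
    return Sf, Cf

# For a pure phase lambda_f = -exp(-i phi_d), as in tree-level B0 -> J/psi K_S,
# one finds S_f = sin(phi_d) and C_f = 0.
phi_d = np.arcsin(0.679)  # phase taken from the data quoted below
S, C = cp_asymmetries(-np.exp(-1j * phi_d))

# Leading-order QCD running factors zeta and eta; the alpha_s values here are
# rough illustrative numbers at m_gluino, m_t, and m_b, not the paper's inputs.
a_mgluino, a_mt, a_mb = 0.09, 0.108, 0.22
zeta = (a_mgluino / a_mt)**(16/21) * (a_mt / a_mb)**(16/23)
eta  = (a_mgluino / a_mt)**(14/21) * (a_mt / a_mb)**(14/23)
```

Both running factors come out between 0 and 1, so the gluino-induced coefficients are diluted, not enhanced, by the evolution down to the $m_b$ scale.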
The MI parameters $(\delta _d^{LL})_{k3}$ and $(\delta _d^{RR})_{k3}$ $(k=2,1)$ are constrained by the CP violation in the $\Delta B=2$ transition, as discussed in our previous works~\cite{Hayakawa:2012ua,Shimizu:2012ru}. On the other hand, $(\delta _d^{LR})_{k3}$ and $(\delta _d^{RL})_{k3}$ $(k=2,1)$ are constrained by the $\Delta B=1$ transition. For the $B^0\to J/\psi K_S$ and $B_s\to J/\psi \phi$ decays, we write $\lambda_{J/\psi K_S}$ and $\lambda_{J/\psi \phi}$ in terms of phase factors, respectively: \begin{equation} \lambda_{J/\psi K_S}\equiv -e^{-i\phi _d}, \qquad \lambda _{J/\psi \phi } \equiv e^{-i\phi _s}. \label{new} \end{equation} The recent experimental data for these phases are \cite{Amhis:2012bh,ICHEP2012} \begin{equation} \sin \phi _d=0.679\pm 0.020\ , \qquad \phi _s=-0.002\pm 0.083\pm 0.027 \ . \label{phasedata} \end{equation} We expect the SUSY contribution to be included in these observed values. Since the $B^0\to J/\psi K_S$ process occurs at tree level in the SM, its CP asymmetry mainly originates from $M_{12}^d$. Although the $B^0\to \phi K_S$ and $B^0\to\eta 'K^0$ decays are penguin dominated, their CP asymmetries also come from $M_{12}^d$ in the SM. Then, the CP asymmetries of the $B^0\to J/\psi K_S$, $B^0\to \phi K_S$, and $B^0\to \eta 'K^0$ decays are expected to have the same magnitude. On the other hand, if the squark flavor mixing contributes to the decay at the one-loop level, its magnitude could be comparable to the SM penguin one in the $B^0\to \phi K_S$ and $B^0\to \eta 'K^0$ decays, while the squark flavor mixing contribution is tiny in the $B^0\to J/\psi K_S$ decay, because this process occurs at tree level in the SM. Therefore, there is a possibility of finding the SUSY contribution by observing different CP asymmetries among those processes~\cite{Endo:2004dc}. The time dependent CP asymmetry $S_{ J/\psi K_S}$ has been precisely measured. 
On the other hand, PDG~\cite{PDG} and the Heavy Flavor Averaging Group (HFAG)~\cite{Amhis:2012bh} present considerably different values for $S_{\phi K_S}$, while almost the same value for $S_{\eta 'K^0}$. Each of the HFAG values is consistent with the SM prediction. In order to obtain conservative constraints, we take the HFAG data for these time dependent CP asymmetries~\cite{Amhis:2012bh}, which are \begin{equation} S_{ J/\psi K_S}=0.679\pm 0.020 \ , \qquad S_{\phi K_S}= 0.74^{+0.11}_{-0.13}\ , \qquad S_{\eta 'K^0}= 0.59\pm 0.07\ . \label{Sfdata} \end{equation} These values may be regarded as equal within the experimental error bars. Thus, the experimental values are consistent with the SM prediction. In other words, these data severely constrain the MI parameters $(\delta _d^{LR})_{23}$ and $(\delta _d^{RL})_{23}$ in our following analyses. \section{The $b \to s $ transition} \label{sec:bstransitions} We first discuss the contributions of the squark flavor mixing to the $b\to s$ transition, which are given in terms of the MI parameters $(\delta _d^{LL})_{23}$, $(\delta _d^{RR})_{23}$, $(\delta _d^{LR})_{23}$, and $(\delta _d^{RL})_{23}$. These MI parameters are constrained by the experimental data on $B$ meson decays. Let us show the formulation of the $b\to s$ transition. The CP asymmetries $S_f$ of Eq.~(\ref{sf}) in the $b\to ss{\bar s}$ transition are among the most important observables in the search for new physics. 
The CP asymmetries $S_f$ for $B^0\to \phi K_S$ and $B^0\to \eta 'K^0$ are given in terms of $\lambda_f$ in Eq.~(\ref{lambdaf}): \begin{align} \lambda _{\phi K_S,~\eta 'K^0}&=-e^{-i\phi _d}\frac{\displaystyle \sum _{i=3-6,7\gamma ,8G} \left (C_i^\text{SM}\langle O_i \rangle +C_i^{\tilde g}\langle O_i \rangle + \widetilde C_i^{\tilde g}\langle \widetilde O_i \rangle \right )} {\displaystyle \sum _{i=3-6,7\gamma ,8G}\left (C_i^{\text{SM}*}\langle O_i \rangle +C_i^{{\tilde g}*}\langle O_i \rangle +\widetilde C_i^{{\tilde g}*}\langle \widetilde O_i \rangle \right )}~, \label{asymBd} \end{align} where $\langle O_i \rangle $ is the abbreviation of $\langle f|O_i|B^0\rangle $. Note that $\langle \phi K_S|O_i|B^0\rangle =\langle \phi K_S|\widetilde O_i|B^0\rangle $ and $\langle \eta 'K^0|O_i|B^0\rangle =-\langle \eta 'K^0|\widetilde O_i|B^0\rangle $, because these final states have different parities~\cite{Endo:2004dc,Khalil:2003bi}. Since the dominant term comes from the gluon penguin $C_{8G}^{\tilde g}$, the decay amplitudes for $f=\phi K_S$ and $f=\eta 'K^0$ are given as follows: \begin{align} \bar A(\bar B^0 \to \phi K_S) & \propto C_{8G}(m_b) + {\tilde C}_{8G}(m_b), \nonumber \\ \bar A(\bar B^0 \to \eta '\bar K^0) & \propto C_{8G}(m_b) - {\tilde C}_{8G}(m_b). \end{align} Since ${\tilde C}_{8G}(m_b)$ is suppressed compared with $C_{8G}(m_b)$ in the SM, the magnitudes of the time dependent CP asymmetries $S_f \ (f=J/\psi K_S, \ \phi K_S,\ \eta 'K^0)$ are almost the same in the SM prediction. However, the squark flavor mixing gives an unsuppressed ${\tilde C}_{8G}(m_b)$, and then the CP asymmetries in these decays are expected to deviate from one another. Therefore, the experimental data give us a tight constraint on $C_{8G}(m_b)$ and ${\tilde C_{8G}}(m_b)$. 
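The parity argument above can be illustrated numerically. In the gluon-penguin-dominated limit, $\lambda_f$ reduces to a pure phase built from $C_{8G}\pm\tilde C_{8G}$; the sketch below (with a hypothetical value of $\tilde C_{8G}$, not a fit result) shows how an unsuppressed $\tilde C_{8G}$ splits the asymmetries that coincide in the SM:

```python
import numpy as np

def S_f(C8G, C8Gt, phi_d, parity):
    """S_f in the gluon-penguin-dominated limit; parity = +1 for phi K_S
    (where <O_i> = <O~_i>) and -1 for eta' K0 (opposite relative sign)."""
    Abar = C8G + parity * C8Gt
    # lambda_f = -exp(-i phi_d) * Abar / conj(Abar) for real matrix elements
    lam = -np.exp(-1j * phi_d) * Abar / np.conj(Abar)
    return 2 * np.imag(lam) / (1 + abs(lam)**2)

phi_d = np.arcsin(0.679)   # from Eq. (phasedata)

# SM-like case: a suppressed tilde-C_8G collapses both asymmetries to
# sin(phi_d), as stated in the text.
assert np.isclose(S_f(1.0, 0.0, phi_d, +1), np.sin(phi_d))
assert np.isclose(S_f(1.0, 0.0, phi_d, -1), np.sin(phi_d))

# Hypothetical SUSY case: an unsuppressed complex tilde-C_8G splits
# S_{phi K_S} from S_{eta' K0}.
C8Gt = 0.3 * np.exp(1j * 1.0)   # illustrative value only
split = S_f(1.0, C8Gt, phi_d, +1) - S_f(1.0, C8Gt, phi_d, -1)
```

The nonzero `split` in the second case is the signature the text proposes to search for experimentally.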
Similarly, $\lambda _{f}$ for $B_s\to \phi \phi $ and $B_s\to \phi \eta '$ is given as follows: \begin{align} \lambda _{\phi \phi ,\phi \eta '}&=e^{-i\phi _s}\frac{\displaystyle \sum _{i=3-6,7\gamma ,8G} \left (C_i^\text{SM}\langle O_i \rangle +C_i^{\tilde g}\langle O_i \rangle + \widetilde C_i^{\tilde g}\langle \widetilde O_i \rangle \right )} {\displaystyle \sum _{i=3-6,7\gamma ,8G} \left (C_i^{\text{SM}*}\langle O_i \rangle +C_i^{{\tilde g}*} \langle O_i \rangle +\widetilde C_i^{{\tilde g}*}\langle \widetilde O_i \rangle \right )}~, \label{asymBs} \end{align} with $\langle \phi \phi |O_i|B_s\rangle =-\langle \phi \phi |\widetilde O_i|B_s\rangle $ and $\langle \phi \eta '|O_i|B_s\rangle =\langle \phi \eta '|\widetilde O_i|B_s\rangle $. The decay amplitudes for $f=\phi \phi $ and $f=\phi \eta '$ are given as follows: \begin{align} \bar A(\bar B_s \to \phi \phi ) & \propto C_{8G}(m_b) - {\tilde C}_{8G}(m_b), \nonumber \\ \bar A(\bar B_s \to \phi \eta ') & \propto C_{8G}(m_b) + {\tilde C}_{8G}(m_b). \end{align} Since $C_{8G}\langle O_{8G}\rangle $ and $\tilde C_{8G}\langle \tilde O_{8G}\rangle $ dominate these amplitudes, our numerical results are insensitive to the hadronic matrix elements. In order to obtain precise results, we also take into account the small contributions from the other Wilson coefficients $C_i~(i=3,4,5,6)$ and $\tilde C_i~(i=3,4,5,6)$ in our calculations. 
We estimate each hadronic matrix element by using the factorization relations in Ref.~\cite{Harnik:2002vs}: \begin{equation*} \langle O_{3} \rangle =\langle O_{4} \rangle =\left( 1+\frac{1}{N_c} \right) \langle O_{5} \rangle, \quad \langle O_{6} \rangle =\frac{1}{N_c}\langle O_{5} \rangle, \end{equation*} \begin{equation} \langle O_{8G} \rangle =\frac{\alpha _s(m_b)}{8 \pi } \left( -\frac{2 m_b}{ \sqrt{\langle q^2 \rangle }}\right ) \left( \langle O_4 \rangle +\langle O_6 \rangle -\frac{1}{N_c}(\langle O_3 \rangle +\langle O_5 \rangle )\right ), \end{equation} where $\langle q^2 \rangle =6.3~{\rm GeV}^2$ and $N_c=3$ is the number of colors. One may worry about the reliability of these naive factorization relations. However, this approximation has been justified numerically in the relevant $b\to s$ transition, as seen in the PQCD calculation~\cite{Mishima:2003wm}. Let us discuss the contribution of the MI parameters to $C_{8G}^{\tilde g}$ in Eq.~(\ref{Coeff}). Since the loop functions are of the same order and $m_{\tilde q}\simeq m_{\tilde g}$, the ratio of the $LL$ component to the $LR$ one is $(\delta _d^{LL})_{23}\times \mu \tan \beta /m_{\tilde q}$ to $(\delta _d^{LR})_{23}\times m_{\tilde q}/m_b$. If ${\cal O}(\mu \tan \beta )\simeq {\cal O}(m_{\tilde q})$ and $m_{\tilde q}\gtrsim 1$ TeV, the $LR$ component may contribute significantly to $C_{8G}^{\tilde g}$ due to the enhancement factor $m_{\tilde q}/m_b={\cal O}(10^2)$. For example, in the case of $(\delta _d^{LL})_{23}=10^{-2}$ and $(\delta _d^{LR})_{23}=10^{-3}$, the $LR$ component dominates $C_{8G}^{\tilde g}$, while it is minor in $M_{12}^{s}$~\cite{Hayakawa:2012ua,Shimizu:2012ru}. Actually, the magnitude of $(\delta _d^{LL})_{23}$ is at most $10^{-2}$, as estimated in our previous works~\cite{Hayakawa:2012ua,Shimizu:2012ru}. In our following calculations, we take $|(\delta _d^{LL})_{23}|\lesssim 10^{-2}$. We can also constrain the SUSY contribution from the $b\to s\gamma $ decay. 
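The order-of-magnitude argument for the $LR$ dominance can be made explicit; the following sketch uses the mass values adopted later in the paper but ignores the (comparable) loop functions, so it is a scaling estimate rather than a full loop computation:

```python
# Rough scaling check of why the LR insertion can dominate C_8G^{gluino}
# despite its smaller magnitude. All masses in GeV; the b-quark mass is an
# approximate reference number.
m_sq = 1500.0     # average squark mass, value used later in the paper
mu_tanb = 1000.0  # mu * tan(beta), value used later in the paper
m_b = 4.2         # approximate b-quark mass

delta_LL = 1e-2   # example values from the text
delta_LR = 1e-3

# Up to loop functions of comparable size, the two contributions scale as:
LL_weight = delta_LL * mu_tanb / m_sq   # double-mass-insertion term
LR_weight = delta_LR * m_sq / m_b       # chirality-flip enhancement m_sq/m_b

ratio = LR_weight / LL_weight  # LR wins by more than an order of magnitude
```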
Here we discuss three observables: the branching ratio $\text{BR}(b\to s\gamma )$, the direct CP asymmetry $A_\text{CP}^{b\to s\gamma }$, and the time dependent CP asymmetry of $B^0 \to K^* \gamma $, $S_{K^* \gamma }$. The branching ratio BR$(b\to q\gamma )$ ($q=s,d$) is a typical probe of new physics. It is given as~\cite{Buras:1998raa} \begin{equation} \frac{\text{BR}(b\to q\gamma )} {\text{BR}(b\to ce\bar {\nu _e})} = \frac{|V_{tq}^*V_{tb}|^2} {|V_{cb}|^2} \frac{6 \alpha }{\pi f(z)} (|C_{7\gamma }(m_b)|^2+|{\tilde C}_{7\gamma }(m_b)|^2), \label{Brbqgamma} \end{equation} where \begin{equation} f(z) = 1-8z+8z^3-z^4-12z^2 \ln z~,\qquad z = \frac{m_{c,\text{pole}}^2}{m_{b,\text{pole}}^2}. \end{equation} Here $C_{7\gamma }(m_b)$ and $\tilde{C}_{7\gamma }(m_b)$ include both the SM contribution and that of the gluino-squark mediated flavor changing process at the $m_b$ scale. As seen in Eq.~(\ref{Coeff}), the MI parameters $(\delta _d^{LR})_{k3}$ dominate both $C_{7\gamma }^{\tilde g}$ and $C_{8G}^{\tilde g}$. Therefore, we focus on the contribution from $(\delta _d^{LR})_{k3}$ in our numerical calculations. We can also estimate the direct CP violation $A_{\text{CP}}^{b\to q\gamma }$ in the $b\to q\gamma $ decay ($q=s,d$), which is given as~\cite{Kagan:1998bh} \begin{align} A_{\text{CP}}^{b\to q\gamma } &= \left . 
\frac{\Gamma (\bar {B}\to X_q\gamma ) - \Gamma (B\to X_{\bar q} \gamma )} {\Gamma (\bar B\to X_q\gamma ) + \Gamma (B\to X_{\bar q}\gamma )} \right |_{E_{\gamma } > (1-\delta ) E_{\gamma }^{\text{max}}} \nonumber \\ &= \frac{\alpha _s(m_b)} {|C_{7\gamma }|^2+|{\tilde C}_{7\gamma }|^2} \Bigg [ \frac{40}{81} \text{Im}\left [C_2 C_{7\gamma }^*\right ] - \frac{8 z}{9}\left [v(z)+b(z,\delta )\right ] \text{Im}\Big [\left (1+\frac{V_{uq}^* V_{ub}}{V_{tq}^* V_{tb}}\right ) C_2 C_{7\gamma }^*\Big ] \nonumber \\ & -\frac{4}{9} \text{Im}\left [ C_{8G} C_{7\gamma }^* + {\tilde C}_{8G} {\tilde C}_{7\gamma }^*\right ] + \frac{8z}{27} b(z,\delta ) \text{Im}\Big [\left (1+\frac{V_{uq}^* V_{ub}}{V_{tq}^* V_{tb}} \right ) C_2 C_{8G}^*\Big ]\Bigg ], \label{directbqgamma} \end{align} where $v(z)$ and $b(z,\delta )$ are explicitly given in Ref.~\cite{Kagan:1998bh}, and $C_i$, ${\tilde C}_i$ ($i=7\gamma,8G$) include both the SM and SUSY contributions at the $m_b$ scale. The time dependent CP asymmetry $S_{K^* \gamma}$ in the $B^0 \to K^*\gamma $ decay is also an important measure of CP violation: \begin{equation} S_{K^*\gamma } = \frac{2 {\rm Im}(-e^{-i\phi _d} {\tilde C}_{7\gamma}(m_b)/C_{7\gamma }(m_b))} {|{\tilde C}_{7\gamma }(m_b)/C_{7\gamma }(m_b)|^2+1}. \label{Kstargamma} \end{equation} This CP violation comes from the interference between $C_{7\gamma }(m_b)$ and ${\tilde C}_{7\gamma }(m_b)$~\cite{Endo:2004fx,Atwood:1997zr}. In the SM, ${\tilde C}_{7\gamma }^\text{SM}(m_b)/C_{7\gamma }^\text{SM}(m_b)\propto m_s/m_b$ for this process; therefore, $S_{K^*\gamma }$ is suppressed~\cite{Atwood:1997zr}. However, $S_{K^*\gamma }$ could be enhanced owing to the squark flavor mixing. The setup of our calculations is as follows. We take $\mu \tan \beta $ to be $1$~TeV, and set $|(\delta _d^{LL})_{23}|\simeq |(\delta _d^{RR})_{23}|\lesssim 10^{-2}$ following our previous works~\cite{Hayakawa:2012ua,Shimizu:2012ru}. 
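As a numerical illustration of the branching-ratio formula and of $S_{K^*\gamma}$ (using a rough ratio $m_s/m_b\sim 0.02$, not a precise input of this paper), one can check the limits quoted in the text:

```python
import numpy as np

def f(z):
    """Phase-space function in BR(b -> q gamma)/BR(b -> c e nu)."""
    return 1 - 8*z + 8*z**3 - z**4 - 12*z**2*np.log(z)

def S_Kstar_gamma(r, phi_d):
    """Eq. (Kstargamma), with r = tilde-C_7gamma(m_b)/C_7gamma(m_b)."""
    return 2 * np.imag(-np.exp(-1j * phi_d) * r) / (abs(r)**2 + 1)

# f vanishes for degenerate masses (z = 1) and tends to 1 as z -> 0,
# as a phase-space factor should.
assert abs(f(1.0)) < 1e-12

# With the SM-like real ratio r ~ m_s/m_b, S_{K* gamma} ~ 2 (m_s/m_b) sin(phi_d),
# a few times 10^-2, matching the suppression quoted in the text.
phi_d = np.arcsin(0.679)
S_SM = S_Kstar_gamma(0.02, phi_d)   # m_s/m_b ~ 0.02 is a rough number
```

An enhanced, complex $\tilde C_{7\gamma}/C_{7\gamma}$ from squark flavor mixing would lift `S_SM` toward the ${\cal O}(0.1)$ values predicted later in this section.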
Then, the contribution of these MI parameters to $C_{7\gamma }^{\tilde g}$ and $C_{8G}^{\tilde g}$ is minor. On the other hand, $(\delta _d^{LR})_{23}$ and $(\delta _d^{RL})_{23}$ are severely constrained by the magnitudes of $C_{7\gamma }$ and $C_{8G}$. In addition, we suppose $|(\delta _d^{LR})_{23}|=|(\delta _d^{RL})_{23}|$. Then, we can parametrize the MI parameters as follows: \begin{equation} (\delta _d^{LR})_{23}=|(\delta _d^{LR})_{23}|e^{2i\theta _{23}^{LR}}, \qquad (\delta _d^{RL})_{23}=|(\delta _d^{LR})_{23}|e^{2i\theta _{23}^{RL}}. \label{MILR} \end{equation} We now present the numerical analysis in this setup. In the following numerical calculations, we fix the squark and gluino masses as \begin{equation} m_{\tilde q}=1.5~\text{TeV},\qquad m_{\tilde g}=1.5~\text{TeV}, \label{SUSYmass} \end{equation} which are consistent with the recent lower bounds on these masses at the LHC~\cite{squarkmass}. In our analysis, the present experimental data on $\text{BR}(b\to s\gamma )$, $S_{J/\psi K_S}$, $S_{\phi K_S}$, and $S_{\eta 'K^0}$ give tight constraints on the MI parameters. We use the experimental data \cite{PDG} \begin{equation} \text{BR}(b\to s\gamma )({\rm exp})=(3.53 \pm 0.24)\times 10^{-4}, \end{equation} while the SM prediction is \cite{Misiak:2006zs} \begin{equation} \text{BR}(b\to s\gamma )({\rm SM})=(3.15 \pm 0.23)\times 10^{-4} . \end{equation} Therefore, there is room for the contribution of the gluino-squark mediated flavor changing process.\footnote{In our analysis, we do not take into account the contributions of the charged Higgs and the chargino in $b\to s \gamma $.} For $S_{J/\psi K_S}$, $S_{\phi K_S}$, and $S_{\eta 'K^0}$, we use the data in Eq.~(\ref{Sfdata}). In the SM, the magnitudes of these $S_f$ agree with one another. 
First, in Fig.~\ref{fig:MIphase23} (a), we show the allowed region in the plane of the absolute value $|(\delta _d^{LR})_{23}|$ and the phase $\theta_{23}^{LR}$, where only the experimental constraint of $\text{BR}(b\to s\gamma )$ is imposed. The magnitude of the MI parameter $(\delta _d^{LR})_{23}$ is allowed up to $|(\delta _d^{LR})_{23}|\simeq 9\times 10^{-2}$. We note that the SUSY contribution to $\text{BR}(b\to s\gamma )$ becomes larger than the SM one in the region of $|(\delta _d^{LR})_{23}|\gtrsim 4\times 10^{-2}$. It is also noted that $(\delta_d^{LR})_{23}$ is almost real near the upper bound $9\times 10^{-2}$, that is, at $2\theta _{23}^{LR}=0$ or $2\pi $. The experimental constraints of $S_{J/\psi K_S}$, $S_{\phi K_S}$, and $S_{\eta 'K^0}$ give a severe cut, as seen in Fig.~\ref{fig:MIphase23} (b), where these experimental data are imposed in addition to $\text{BR}(b\to s\gamma )$. In this figure, any value of the phase is allowed for $|(\delta _d^{LR})_{23}|\lesssim 5\times 10^{-3}$. On the other hand, a larger region of $|(\delta _d^{LR})_{23}|$, up to $2\times 10^{-2}$, is allowed around the specific values $\theta _{23}^{LR}=\pi /4$ and $3\pi /4$.\footnote{There still remains a very small allowed region around $|(\delta _d^{LR})_{23}|=9\times 10^{-2}$, where $(\delta _d^{LR})_{23}$ is almost real, since this region cannot be excluded by the time dependent CP asymmetries. We omit this region hereafter; it is uninteresting because there the SUSY contribution is much larger than the SM one in $b\to s \gamma$.} The obtained bound $|(\delta _d^{LR})_{23}|\lesssim 2\times 10^{-2}$ depends on the gluino and squark masses. If they increase, the upper bound is rescaled approximately as $|(\delta _d^{LR})_{23}|\times m_{\tilde q}/(1.5~{\rm TeV})$. By using this allowed region of $(\delta _d^{LR})_{23}$, we predict $A_{\text{CP}}^{b\to s\gamma }$, $S_{K^* \gamma }$, $S_{\phi \phi }$, and $S_{\phi \eta '}$. 
In Fig.~\ref{fig:MIdirectbs}, we show the predicted direct CP asymmetry $A_{\text{CP}}^{b\to s\gamma }$ versus $|(\delta _d^{LR})_{23}|$. Here the value at $|(\delta _d^{LR})_{23}|=0$ is the SM one, $A_{\text{CP}}^{b\to s\gamma }({\rm SM})\simeq 4\times 10^{-3}$~\cite{Kagan:1998bh}. We predict $-3\times 10^{-2}\lesssim A_{\text{CP}}^{b\to s\gamma }\lesssim 3\times 10^{-2}$ owing to the squark flavor mixing. The recent experimental data are still consistent with our prediction because of the large error, as seen in $A_{\text{CP}}^{b\to s\gamma }(\text{exp}) = -0.008 \pm 0.029$~\cite{PDG}. More precise data will give an additional constraint on the MI parameters in the future. In Fig.~\ref{fig:MISKstargamma}, we show the predicted CP asymmetry $S_{K^* \gamma }$. The predicted value in the SM is $S_{K^* \gamma }(\text{SM})\simeq (2m_s/m_b)\sin \phi _d \simeq 4\times 10^{-2}$~\cite{Atwood:1997zr}, while the experimental result is $S_{K^* \gamma }(\text{exp})= -0.15 \pm 0.22$~\cite{PDG}. Our prediction is $-0.4\lesssim S_{K^*\gamma }\lesssim 0.2$, which is still consistent with the experimental data. We expect precise data in the near future to test this prediction. Although the experimental data on the time dependent CP asymmetries $S_{\phi K_S}$ and $S_{\eta 'K^0}$ are taken as inputs in our analysis, the calculated values do not cover the whole experimentally allowed region, due to the constraint from $\text{BR}(b\to s\gamma )$. The allowed regions are shown in Fig.~\ref{fig:SphiKSetapK}. The SM prediction is $S_{J/\psi K_S}\text{(SM)}=S_{\phi K_S}\text{(SM)}=S_{\eta 'K^0}\text{(SM)}$, while the present data on these time dependent CP asymmetries are given in Eq.~(\ref{Sfdata}). The lower-right corner region of the figure is excluded; this is testable in future experiments. In Fig.~\ref{fig:SphiphiSphietap}, we predict the time dependent CP asymmetries $S_{\phi \phi }$ and $S_{\phi \eta '}$. 
These CP asymmetries must be equal to $S_{J/\psi \phi }$ in the SM. In our calculations, we use the experimental result for $S_{J/\psi \phi }$ for the phase $\phi _s$, which is given in Eq.~(\ref{phasedata}). The small green line in the figure denotes the SM value $S_{J/\psi \phi }(\text{SM})=-0.0363^{+0.0016}_{-0.0015}$~\cite{Charles:2011va}. In conclusion, we predict $-0.2\lesssim S_{\phi \phi }\lesssim 0.4$ and $-0.5\lesssim S_{\phi \eta '}\lesssim 0.4$. Since the phase $\phi _s$ still has a large experimental error bar, our prediction will be improved once precise experimental data on $S_{J/\psi \phi }$ become available at LHCb. Since the time dependent CP asymmetry $S_{\phi \phi }$ will be measured at LHCb, our prediction will be tested soon. \begin{figure}[h!] \begin{minipage}[]{0.45\linewidth} \hspace{4cm}(a) \includegraphics[width=7.5cm]{fig1a.eps} \end{minipage} \hspace{1cm} \begin{minipage}[]{0.45\linewidth} \hspace{4cm}(b) \includegraphics[width=7.5cm]{fig1b.eps} \end{minipage} \caption{The predicted region of $(\delta _d^{LR})_{23}$. In both figures (a) and (b), the horizontal and vertical axes denote the absolute value and the phase of $(\delta _d^{LR})_{23}$, respectively. In figure (a), only the experimental constraint of BR$(b\to s\gamma )$ is taken into account. In figure (b), the experimental constraints of BR$(b\to s\gamma )$, $S_{\phi K_S}$, and $S_{\eta 'K^0}$ are taken into account.} \label{fig:MIphase23} \end{figure} \begin{figure}[h!] \begin{minipage}[]{0.45\linewidth} \vspace{3mm} \includegraphics[width=7.5cm]{fig2.eps} \caption{The predicted direct CP asymmetry $A_{\text{CP}}^{b\to s\gamma }$ of $b\to s\gamma $ versus $|(\delta _d^{LR})_{23}|$. 
The red solid line and the two red dotted lines denote the best fit value and the upper and lower bounds of the experimental data at $90\%$ C.L., respectively.} \label{fig:MIdirectbs} \end{minipage} \hspace{1cm} \begin{minipage}[]{0.45\linewidth} \includegraphics[width=7.5cm]{fig3.eps} \caption{The predicted CP asymmetry $S_{K^* \gamma }$ of $B^0\to K^* \gamma $ versus $|(\delta _d^{LR})_{23}|$, where the red solid line and the two red dotted lines denote the best fit value and the upper and lower bounds of the experimental data at $90\%$ C.L., respectively.} \label{fig:MISKstargamma} \end{minipage} \end{figure} \begin{figure}[h!] \begin{minipage}[]{0.45\linewidth} \vspace{3mm} \includegraphics[width=7.5cm]{fig4.eps} \caption{The allowed region of the time dependent CP asymmetries in the $S_{\phi K_S}$--$S_{\eta 'K^0}$ plane. The SM prediction $S_{J/\psi K_S}=S_{\phi K_S}=S_{\eta 'K^0}$ is plotted as the green slanted line. The experimental data with error bars are plotted by the red solid lines at $90\%$ C.L.} \label{fig:SphiKSetapK} \end{minipage} \hspace{1cm} \begin{minipage}[]{0.45\linewidth} \vspace{-3mm} \includegraphics[width=7.5cm]{fig5.eps} \caption{The predicted time dependent CP asymmetries in the $S_{\phi \phi }$--$S_{\phi \eta '}$ plane. The small green line denotes the SM prediction from the experimental data of $S_{J/\psi \phi }$.} \label{fig:SphiphiSphietap} \end{minipage} \end{figure} \newpage \section{The $b\to d$ transition} \label{sec:bdtransitions} In this section, we discuss the $b\to d$ transition in the same way as the $b\to s$ one. The SUSY contribution is given in terms of the MI parameters $(\delta _d^{LL})_{13}$, $(\delta _d^{RR})_{13}$, $(\delta _d^{LR})_{13}$, and $(\delta _d^{RL})_{13}$. The typical $b\to d$ transition is the $b\to d\gamma $ decay, whose experimental branching ratio gives a constraint on these MI parameters. 
By using these MI parameters, we calculate the SUSY contributions to the direct CP violation of the $b\to d\gamma $ decay and the time dependent CP asymmetry in the $B^0\to \rho \gamma $ decay. We also predict the time dependent CP asymmetry of the $B^0\to K^0\bar K^0$ decay. In order to constrain the MI parameters, we input the experimental data on the branching ratio of $b\to d\gamma $~\cite{delAmoSanchez:2010ae,Crivellin:2011ba}, \begin{equation} \text{BR}(b\to d\gamma )({\rm exp})=(1.41\pm 0.57)\times 10^{-5}, \label{BRbdgamma} \end{equation} while the SM prediction is~\cite{Crivellin:2011ba} \begin{equation} \text{BR}(b\to d\gamma )({\rm SM})=(1.54_{-0.31}^{+0.26})\times 10^{-5}. \end{equation} Next we present the formulations of the time dependent CP asymmetries and the direct CP violation, including the SUSY contributions. The branching ratio and the direct CP violation in the $b\to d\gamma $ decay are given in Eqs.~(\ref{Brbqgamma}) and (\ref{directbqgamma}), respectively. The time dependent CP asymmetry $S_{\rho \gamma }$ in the $B^0 \to \rho \gamma $ decay is an important observable in the search for new physics, and is given as \begin{equation} S_{\rho \gamma } = \frac{2 {\rm Im}(-e^{-i\phi _d} {\tilde C}_{7\gamma }(m_b)/C_{7\gamma }(m_b))} {|{\tilde C}_{7\gamma }(m_b)/C_{7\gamma }(m_b)|^2+1}. \end{equation} Since ${\tilde C}_{7\gamma }^\text{SM}(m_b)/C_{7\gamma }^\text{SM}(m_b)\propto m_d/m_b$ in the SM, $S_{\rho \gamma }$ is expected to be quite suppressed~\cite{Atwood:1997zr}. However, $S_{\rho \gamma }$ could also be enhanced owing to the gluino-squark mediated flavor changing process. The time dependent CP asymmetries $S_{K^0\bar K^0}$ and $C_{K^0\bar K^0}$ in the $B^0\to K^0\bar K^0$ decay are also interesting observables in the search for new physics, since there is no SM tree-level process in the $B^0\to K^0\bar K^0$ decay~\cite{Giri:2004af,Fleischer:2004vu}. 
These CP asymmetries are given in Eq.~(\ref{sf}) as \begin{equation} S_{K^0\bar K^0}=\frac{2\text{Im}\lambda _{K^0\bar K^0}} {1+|\lambda _{K^0\bar K^0}|^2}~, \qquad C_{K^0\bar K^0}=\frac{1-|\lambda _{K^0\bar K^0}|^2}{1+|\lambda _{K^0\bar K^0}|^2}~, \end{equation} where \begin{equation} \lambda _{K^0\bar K^0}=\frac{q}{p} \bar \rho ~, \qquad \frac{q}{p}\simeq \sqrt{\frac{M_{12}^{d*}}{M_{12}^{d}}}, \qquad \bar \rho \equiv \frac{\bar A(\bar B^0 \to K^0 \bar K^0)}{A(B^0\to K^0 \bar K^0)}. \end{equation} The amplitude $\bar A(\bar B^0\to K^0\bar K^0)$ is given in Ref.~\cite{Giri:2004af},\footnote{The $\bar A(\bar B^0\to K^0\bar K^0)$ amplitude is explicitly presented in Refs.~\cite{Giri:2004af,Muta:2000ti}. In our calculation, we neglect $C_i$ $(i=8-10)$ since these Wilson coefficients are too small to contribute to the amplitude of $\bar B^0\to K^0\bar K^0$ in our model.} in which the QCD corrections are important for the hadronic matrix elements~\cite{Muta:2000ti}, as \begin{equation} \bar A(\bar B^0\to K^0\bar K^0) \simeq \frac{4G_F}{\sqrt{2}}\sum _{q=u,c}V_{qb}V_{qd}^* \left [a_4^q(m_b)+r_\chi a_6^q(m_b)\right ]X. \end{equation} Here $X$ is the factorized matrix element (see Ref.~\cite{Giri:2004af}): \begin{equation} X=-if_KF_0(m_K^2)(m_B^2-m_K^2), \end{equation} where $f_K$ and $F_0(m_K^2)$ denote the decay constant of the $K$ meson and the form factor, respectively, and $r_\chi=2m_K^2/((m_b-m_s)(m_s+m_d))$ denotes the chiral enhancement factor. 
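For orientation, the factorized matrix element $X$ and the chiral enhancement factor $r_\chi$ can be evaluated with reference meson parameters (the form-factor value below is a rough illustrative number, not this paper's input):

```python
# Illustrative evaluation of X and r_chi for B0 -> K0 K0bar.
# Masses in GeV; f_K and F_0(m_K^2) are rough reference numbers.
m_B, m_K = 5.2796, 0.4976        # B0 and K0 masses
m_b, m_s, m_d = 4.2, 0.095, 0.005  # approximate quark masses
f_K = 0.156                       # kaon decay constant in GeV
F0_mK2 = 0.33                     # hypothetical form-factor value at q^2 = m_K^2

# Chiral enhancement factor multiplying a_6^q in the amplitude.
r_chi = 2 * m_K**2 / ((m_b - m_s) * (m_s + m_d))

# Factorized matrix element; purely imaginary by construction.
X = -1j * f_K * F0_mK2 * (m_B**2 - m_K**2)
```

With these inputs $r_\chi$ comes out of order one, confirming that the $a_6^q$ term is not negligible relative to $a_4^q$ in the amplitude above.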
The coefficients $a_i^q$'s are given as~\cite{Giri:2004af,Muta:2000ti} \begin{align} a_4^q(m_b)&=(C_4-\tilde C_4)+\frac{(C_3-\tilde C_3)}{N_c}+\frac{\alpha _s(m_b)}{4\pi }\frac{C_F}{N_c} \Bigg [(C_3-\tilde C_3)\left [F_K+G_K(s_d)+G_K(s_b)\right ] \nonumber \\ &\hspace{1cm}+C_2G_K(s_q)+\left [ (C_4-\tilde C_4)+(C_6-\tilde C_6)\right ] \sum _{f=u}^bG_K(s_f)+(C_{8G}-\tilde C_{8G})G_{K,g}\Bigg ], \nonumber \\ a_6^q(m_b)&=(C_6-\tilde C_6)+\frac{(C_5-\tilde C_5)}{N_c}+\frac{\alpha _s(m_b)}{4\pi }\frac{C_F}{N_c} \Bigg [(C_3-\tilde C_3)\left [G_K'(s_d)+G_K'(s_b)\right ] \nonumber \\ &\hspace{1cm}+C_2G_K'(s_q)+\left [(C_4-\tilde C_4)+(C_6-\tilde C_6)\right ] \sum _{f=u}^bG_K'(s_f)+(C_{8G}-\tilde C_{8G})G_{K,g}'\Bigg ], \label{coefficients-BKK} \end{align} where $q$ runs over the $u$ and $c$ quarks, $C_F=(N_c^2-1)/(2N_c)$, and the loop functions $F_K$, $G_K$, $G_{K,g}$, $G_K'$, and $G_{K,g}'$ are given in Refs.~\cite{Giri:2004af,Muta:2000ti}. The internal quark mass in the penguin diagrams enters as $s_f=m_f^2/m_b^2$.\footnote{The $C_i^{\tilde g}~(i=3-6,8G)$ in Eq.~(\ref{coefficients-BKK}) should be taken with the replacement $C_i^{\tilde g}\rightarrow [(V_{tb}V_{td}^*)/(V_{qb}V_{qd}^*) ]C_i^{\tilde g}$ in Eq.~(\ref{Coeff}).} The minus sign in front of $\tilde C_i~(i=3-6,8G)$ comes from the parity of the final state, as discussed in the previous section. Using the above formulas, we estimate the SUSY contributions in the $b\to d$ transition. In our calculations, we take $\mu \tan \beta $ to be $1$~TeV and we set the MI parameters $|(\delta _d^{LL})_{13}| = |(\delta _d^{RR})_{13}|\lesssim 10^{-2}$ from our previous works~\cite{Hayakawa:2012ua,Shimizu:2012ru}. We also assume that the magnitudes of the MI parameters $(\delta _d^{LR})_{13}$ and $(\delta _d^{RL})_{13}$ are the same, while their phases are different.
Thus, we parameterize the MI parameters as follows: \begin{equation} (\delta _d^{LR})_{13}=|(\delta _d^{LR})_{13}|e^{2i\theta _{13}^{LR}},\quad (\delta _d^{RL})_{13}=|(\delta _d^{LR})_{13}|e^{2i\theta _{13}^{RL}}. \end{equation} Let us now discuss the numerical analysis. In our calculations, we use the squark mass and the gluino mass given in Eq.~(\ref{SUSYmass}). The present experimental data of BR$(b\to d\gamma )$ in Eq.~(\ref{BRbdgamma}) constrain the MI parameters, as seen in Fig.~\ref{fig:MIBRbdgamma}. The SM contribution is larger than the SUSY one up to $|(\delta _d^{LR})_{13}|\simeq 7\times 10^{-3}$, while the SUSY contribution dominates the $b\to d\gamma $ decay in the region of $|(\delta _d^{LR})_{13}|\gtrsim 7\times 10^{-3}$. Note that the branching ratio has a lower bound of around $5\times 10^{-6}$. In Fig.~\ref{fig:MIphase13}, we show the allowed region of $(\delta _d^{LR})_{13}$ within $90\% $ C.L. of BR$(b\to d\gamma )$. It is found that any value of the phase is allowed for $|(\delta _d^{LR})_{13}|\lesssim 5\times 10^{-3}$. The upper bound of the MI parameter is $|(\delta _d^{LR})_{13}|\simeq 2\times 10^{-2}$, reached around $\theta _{13}^{LR}\simeq \pi /2$. By using this allowed region of $(\delta _d^{LR})_{13}$, we can predict the direct CP asymmetry $A_\text{CP}^{b\to d\gamma }$ and the time dependent CP asymmetries $S_{\rho \gamma }$, $S_{K^0\bar K^0}$, and $C_{K^0\bar K^0}$. In Fig.~\ref{fig:MIdirectbd}, we show the predicted direct CP asymmetry $A_\text{CP}^{b\to d\gamma }$ versus $|(\delta _d^{LR})_{13}|$. Here the value at $|(\delta _d^{LR})_{13}|=0$ is the SM one, $A_\text{CP}^{b\to d\gamma }(\text{SM})\simeq -0.09$. Our prediction is $-0.16\lesssim A_\text{CP}^{b\to d\gamma }\lesssim 0.06$. If $A_\text{CP}^{b\to d\gamma }$ is measured in the future, we obtain an additional constraint on the MI parameters.
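The lower bound on BR$(b\to d\gamma)$ noted above can be illustrated with a toy amplitude model. All numbers below (the effective SM coefficient and the loop factor $k$) are hypothetical placeholders, not the full NLO expressions of Eq.~(\ref{Brbqgamma}); the point is only that the primed-operator contribution induced by $(\delta_d^{RL})_{13}$ adds incoherently to $C_{7\gamma}$, so the rate cannot be tuned to zero.

```python
import cmath, math

# Toy model (not the full formula of the text): the dLR insertion
# interferes with the SM C7, while the dRL insertion feeds the primed
# coefficient C7', which adds incoherently.
BR_SM = 1.54e-5
C7_SM = -0.3   # hypothetical effective SM Wilson coefficient
k = 30.0       # hypothetical gluino-loop factor linking dLR to C7

def br(dLR, thL, thR):
    c7 = C7_SM + k * dLR * cmath.exp(2j * thL)
    c7p = k * dLR * cmath.exp(2j * thR)
    return BR_SM * (abs(c7) ** 2 + abs(c7p) ** 2) / C7_SM ** 2

grid = [i * math.pi / 40 for i in range(40)]
floor = min(br(7e-3, tL, tR) for tL in grid for tR in grid)
print(floor)  # stays well above zero: |C7'|^2 sets a floor on the rate
```

Minimizing over both phases at fixed $|(\delta_d^{LR})_{13}|$ leaves a nonzero floor, qualitatively reproducing the behavior seen in Fig.~\ref{fig:MIBRbdgamma}.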
In Fig.~\ref{fig:MISrhogamma}, we show the prediction of $S_{\rho \gamma }$ depending on $|(\delta _d^{LR})_{13}|$. The SM prediction is $S_{\rho \gamma }(\text{SM})\simeq (2m_d/m_b) \sin \phi _d\simeq 2.0\times 10^{-3}$~\cite{Atwood:1997zr}, while the experimental value is $S_{\rho \gamma }(\text{exp})=-0.8\pm0.7$~\cite{PDG}. In our prediction, $S_{\rho \gamma }$ can reach $\pm 1$ at $|(\delta _d^{LR})_{13}|\gtrsim 7\times 10^{-3}$. Therefore, $S_{\rho \gamma }$ is expected to be much larger than the SM prediction in the case of $|(\delta _d^{LR})_{13}|= {\cal O}(10^{-3})$. Precise data in the future will test this prediction. In Figs.~\ref{fig:MISKK} and \ref{fig:MICKK}, we show the predictions of the time dependent CP asymmetries $S_{K^0\bar K^0}$ and $C_{K^0\bar K^0}$ depending on $|(\delta _d^{LR})_{13}|$, respectively. In the SM, one predicts $0.02 \le S_{K^0\bar K^0}(\text{SM})\le0.13$ and $-0.17 \le C_{K^0\bar K^0}(\text{SM})\le -0.15$~\cite{Giri:2004af}, while the experimental data are given as $S_{K^0\bar K^0}(\text{exp})=-0.8\pm 0.5$ and $C_{K^0\bar K^0}(\text{exp})=0.0\pm 0.4$~\cite{PDG}, respectively. The present experimental bounds do not add further constraints to Fig.~\ref{fig:MIphase13}. However, more precise experimental data will provide stringent constraints on the MI parameters. \begin{figure}[h!] \begin{minipage}[]{0.45\linewidth} \includegraphics[width=7.5cm]{fig6.eps} \caption{The predicted region on the $|(\delta _d^{LR})_{13}|$--$\text{BR}(b\to d\gamma )$ plane. The red solid and two red dotted lines denote the best fit value, upper and lower bounds of the experimental data with $90\% $~C.L., respectively.} \label{fig:MIBRbdgamma} \end{minipage} \hspace{1cm} \begin{minipage}[]{0.45\linewidth} \vspace{-1.7cm} \includegraphics[width=7.5cm]{fig7.eps} \caption{The predicted region on the $(\delta _d^{LR})_{13}$--$\theta _{13}^{LR}$ plane.
The experimental constraint of BR$(b \to d \gamma)$ is taken into account.} \label{fig:MIphase13} \end{minipage} \end{figure} \begin{figure}[h!] \begin{minipage}[]{0.45\linewidth} \includegraphics[width=7.5cm]{fig8.eps} \caption{The predicted direct CP asymmetry $A_{\text{CP}}^{b \to d\gamma }$ versus $|(\delta _d^{LR})_{13}|$.} \label{fig:MIdirectbd} \end{minipage} \hspace{1cm} \begin{minipage}[]{0.45\linewidth} \vspace{-3mm} \includegraphics[width=7.5cm]{fig9.eps} \caption{The predicted time dependent CP asymmetry $S_{\rho \gamma }$ versus $|(\delta _d^{LR})_{13}|$.} \label{fig:MISrhogamma} \end{minipage} \end{figure} \begin{figure}[h!] \begin{minipage}[]{0.45\linewidth} \vspace{-5mm} \includegraphics[width=7.5cm]{fig10.eps} \caption{The predicted time dependent CP asymmetry $S_{K^0\bar K^0}$ versus $|(\delta _d^{LR})_{13}|$. The red solid and red dotted lines denote the best fit value and the experimental data with $90\% $~C.L., respectively.} \label{fig:MISKK} \end{minipage} \hspace{1cm} \begin{minipage}[]{0.45\linewidth} \includegraphics[width=7.5cm]{fig11.eps} \caption{The predicted time dependent CP asymmetry $C_{K^0\bar K^0}$ versus $|(\delta _d^{LR})_{13}|$. The red solid and two red dotted lines denote the best fit value, upper and lower bounds of the experimental data with $90\% $~C.L., respectively.} \label{fig:MICKK} \end{minipage} \end{figure} \begin{table}[t] \begin{center} \begin{tabular}{|c||c|c|c|} \hline & Exp.
& SM & our prediction \\ \hline \hline BR$(b \to s \gamma)$ & $(3.53\pm 0.24)\times 10^{-4}$ \cite{PDG}& $(3.15 \pm 0.23)\times 10^{-4} $~\cite{Misiak:2006zs} & constraint \\ BR$(b \to d \gamma )$ & $(1.41\pm 0.57)\times 10^{-5}$~\cite{delAmoSanchez:2010ae,Crivellin:2011ba} & $(1.54_{-0.31}^{+0.26})\times 10^{-5}$~\cite{Crivellin:2011ba} & constraint \\ \hline $A_\text{CP}^{b \to s \gamma}$ & $-0.008 \pm 0.029$ \cite{PDG} & $4\times 10^{-3}$~\cite{Kagan:1998bh} & $-0.03\sim 0.03$ \\ $A_\text{CP}^{b \to d \gamma}$ & ------ & $-0.09$ & $-0.16\sim 0.06$ \\ \hline $S_{J/\psi K_S}$ & $0.679\pm0.020$~\cite{Amhis:2012bh} & input & constraint \\ $S_{\phi K_S}$ & $0.74^{+0.11}_{-0.13}$~\cite{Amhis:2012bh} & $=S_{J/\psi K_S}$ & constraint \\ $S_{\eta' K^0}$ & $0.59\pm{0.07}$~\cite{Amhis:2012bh} & $=S_{J/\psi K_S}$ & constraint \\ $\phi_s (S_{J/\psi \phi }=\sin \phi _s)$ & $-0.004\pm 0.166\pm 0.054$~\cite{ICHEP2012} & $-0.0363^{+0.0016}_{-0.0015}$ \cite{Charles:2011va} & constraint \\ $S_{\phi \phi }$ & ------ & $=S_{J/\psi \phi }$ & $-0.2\sim 0.4$\\ $S_{\phi \eta '}$ & ------ & $=S_{J/\psi \phi }$ & $-0.5\sim 0.4$ \\ $S_{K^* \gamma}$ & $-0.15 \pm 0.22$~\cite{PDG} & $0.04$~\cite{Atwood:1997zr} & $-0.4\sim 0.2$ \\ $S_{\rho \gamma }$ & $-0.8\pm 0.7$~\cite{PDG} & $0.002$~\cite{Atwood:1997zr} & $-1\sim 1$ \\ $S_{K^0\bar K^0}$ & $-0.8\pm 0.5$~\cite{PDG} & $0.02\sim 0.13$~\cite{Giri:2004af} & $-1\sim 1$ \\ $C_{K^0\bar K^0}$ & $-0.0\pm 0.4$~\cite{PDG} & $-0.17\sim -0.15$~\cite{Giri:2004af} & $-1\sim 1$ \\ \hline \end{tabular} \end{center} \caption{Summary of the SM predictions, experimental values, and our predictions.} \label{tab:summary} \end{table} \section{Summary} We have discussed the contribution of the gluino-squark mediated flavor changing process to the CP violation in $b\to s$ and $b\to d$ transitions taking account of recent experimental data. 
We have presented the allowed region of the MI parameters $(\delta _d^{LR})_{23}$ and $(\delta _d^{LR})_{13}$, which are constrained by the branching ratios of the $b\to s\gamma$ and $b\to d\gamma$ decays. In addition, the time dependent CP asymmetries of the $B^0\to J/\psi K_S$, $B^0\to \phi K_S$, and $B^0\to \eta ' K^0$ decays severely restrict the allowed region of the MI parameter $(\delta _d^{LR})_{23}$. These MI parameters $(\delta _d^{LR})_{23}$ and $(\delta _d^{LR})_{13}$ are still allowed up to $2\times 10^{-2}$ for squark and gluino masses of $1.5$~TeV. If $m_{\tilde q}\simeq m_{\tilde g}$ increases, the bound on $(\delta _d^{LR})_{k3}~(k=2,1)$ is approximately rescaled as $(\delta _d^{LR})_{k3}\times m_{\tilde q}/(1.5~\text{TeV})$. By using these constraints, we predict the CP asymmetries of the $B_s\to \phi \phi$, $B_s\to \eta '\phi $, and $B^0\to K^0 \bar K^0$ decays, as well as the CP asymmetries in the $b\to s\gamma $ and $b\to d\gamma $ decays. We have summarized our results in Table~\ref{tab:summary}. Notably, the CP violation of the $B_s\to \phi \phi$ decay is expected to be large owing to the squark flavor mixing. This prediction will be tested soon at LHCb. \vspace{0.5 cm} \noindent {\bf Acknowledgment} We thank S. Mishima for useful discussions. We also thank A. Hayakawa and J. Kumagai for their help. M.T. is supported by JSPS Grant-in-Aid for Scientific Research, 21340055 and 24654062.
\section{Introduction} Recently, a Bose-Einstein condensate (BEC) strongly coupled to the quantized photon field in an optical cavity has been realized \cite{Colombe,Brennecke}. This paves the way to study the interplay of atomic interactions and atom-photon interactions. For example, a novel quantum phase transition of a condensate coupled to a cavity has been demonstrated \cite{Baumann}. Strong atom-photon coupling is useful for quantum communications \cite{Duan}, such as the light-matter interface \cite{Colombe,Brennecke}. Alternatively, strong coupling of ultracold atoms to a superconducting resonator has recently been proposed \cite{Henschel}. The two long-lived hyperfine states $|e\rangle=|F=2,m_F=1\rangle$ and $|g\rangle=|F=1,m_F=-1\rangle$ of ${}^{87}$Rb \cite{Matthews} are considered to be magnetically coupled to the microwave field via their magnetic dipoles \cite{Henschel,Imamoglu}. Since a high-Q superconducting resonator can be fabricated with a small mode volume \cite{Wang} and the coupling strength can be greatly increased by collective enhancement \cite{Duan}, strong coupling of ultracold atoms in the microwave regime can be achieved \cite{Henschel}. In this paper, we study a two-component BEC in a double-well potential \cite{Ng1}, where all atoms are equally coupled to a single mode of the microwave field inside a superconducting resonator. Two weakly linked condensates can be created in a magnetic double-well potential on an atom-chip \cite{Schumm,Maussang} or in an optical double-well potential \cite{Shin}. In fact, the tunneling dynamics of atoms between the two wells has recently been observed \cite{Albiez,Folling,Trotzky}. A double-well BEC coupled to an optical cavity has also been discussed in the literature \cite{Zhang,Chen,Larson}.
However, the spontaneous emission rate of the excited states used for optical transitions in experiments \cite{Colombe,Brennecke} is much higher than the tunneling rate of the atoms between the two wells \cite{Albiez,Folling,Trotzky}. Here we consider the two hyperfine states $|e\rangle$ and $|g\rangle$ of ${}^{87}$Rb with the transition frequency $2\pi\times{6.8}$~GHz \cite{Matthews}. The coherence times \cite{Harber,Treutlein} of these hyperfine spin states ($|e\rangle$ and $|g\rangle$) are much longer than both the timescales of tunneling and atom-photon interactions. Therefore, this system offers possibilities for studying how the tunnel coupling between the two spatially separated condensates affects the atom-photon dynamics. We focus our investigation on the system in the limits of strong and weak tunnel coupling, respectively. We find that the system has different dark-state subspaces \cite{Fleischhauer} in these two tunneling regimes. In the weak-tunneling regime, the system has a family of dark states which can be used for producing quantum entanglement between the condensates. Here we propose to efficiently generate steady-state entanglement between the two spatially separated condensates by letting the system evolve into a mixture of dark states through the dissipation of the photon field \cite{Yang,Plenio,Joshi}. Note that our scheme does not require any adjustment of the tunneling strength. This differs from other methods \cite{Ng1}, which depend on the strength of the tunnel coupling to generate entanglement. In addition, the entanglement generated between the two condensates can be used for the implementation of quantum state transfer \cite{Bose}. This may be useful for quantum information processing with atom-chip devices \cite{Treutlein}.
\begin{figure}[ht] \centering \includegraphics[height=3.0cm]{DWResonator6_1} \caption{ \label{fig1} (Color online) Schematic of a two-component BEC coupled to a single mode of the photon field inside a superconducting resonator. A two-component condensate is trapped in a double-well potential, and it is placed close to the surface of the superconducting resonator. The atoms are coupled to the magnetic field via their magnetic dipoles. The parameters $L$ and $w$ are the length and width of the superconducting resonator, respectively. } \end{figure} This paper is organized as follows: In Sec.~II, we introduce the system of a two-component condensate in a double-well potential, in which the two-level atoms are coupled to a superconducting resonator. In Sec.~III, we derive the two effective Hamiltonians in the strong- and weak-tunneling regimes, respectively. In Sec.~IV, we investigate the dark-state subspaces and the atom-photon dynamics in the two tunneling limits. In Sec.~V, we provide a method to produce steady-state entanglement between the two condensates in a double well. A summary is given in Sec.~VI. In Appendix A, we discuss the validity of the effective Hamiltonian in the strong-tunneling regime. \section{System} We consider a two-component BEC trapped in a double-well potential \cite{Ng1}, with the condensate placed near the surface of a superconducting resonator, as shown in Fig.~\ref{fig1}. The atoms, with two internal states $|e\rangle$ and $|g\rangle$, are coupled to a single mode of the photon field via their magnetic dipoles.
\subsection{A two-component condensate trapped in a double-well potential} We first introduce the system of a two-component condensate in a one-dimensional (1D) double-well potential which can be described by the Hamiltonian as \begin{eqnarray} H_0\!&=&\!\sum_{\alpha}\!\!\int\!{dx}\Psi^\dag_{\alpha}(x)\!\Big[\!-\frac{\hbar^2}{2m_{\alpha}}\frac{\partial^2}{\partial{x^2}}+V_{{\rm DW}}(x) +{\tilde{U}_{\alpha}}\Psi^\dag_{\alpha}(x)\nonumber\\ &&\times\Psi_{\alpha}(x)\Big]\Psi_{\alpha}(x)\!+\!2\tilde{U}_{eg}\!\!\int\!\!{dx}\Psi^\dag_e(x)\Psi^\dag_g(x)\Psi_g(x)\Psi_e(x),\nonumber\\ \end{eqnarray} where $\Psi_{\alpha}(x)$ is the field operator of the atoms for the internal state $|\alpha\rangle$ at the position $x$, and the indices $\alpha=g,e$ represent the ground and the excited states, respectively. Here $m_{\alpha}$ is the mass of the atom in the state $|\alpha\rangle$ and $V_{{\rm DW}}(x)$ is the 1D double-well potential which is given by \cite{Maussang} \begin{equation} V_{\rm DW}(x)=V_{d}\Big[1-\Big(\frac{x}{x_0}\Big)^2\Big]^2, \end{equation} where $V_d$ is the barrier height and $x_0$ is the distance between the two separate potential wells. The atoms are transversely confined in the $y$- and $z$-directions with the trap frequencies $\omega_{\perp}$. The size of the ground-state wave function in the transverse motion is $a_{\perp}=\sqrt{\hbar/{m_{\alpha}\omega_{\perp}}}$ \cite{Olshanii,Pflanzer}, where $m_{e}$ and $m_{g}$ are nearly equal. Since the transverse frequencies are much larger than the trap frequency in the $x$-direction, the transverse motions of the atoms are frozen out. 
The parameters $\tilde{U}_{\alpha}$ and $\tilde{U}_{eg}$ are the effective 1D interaction strengths of the intra- and inter-component condensates, respectively, given by \cite{Olshanii,Pflanzer} \begin{eqnarray} \tilde{U}_{\alpha}&=&\frac{2\hbar^2{a_\alpha}}{m_{\alpha}a^2_{\perp}}\Big(1-C\frac{a_\alpha}{\sqrt{2}a_{\perp}}\Big)^{-1},\\ \tilde{U}_{eg}&=&\frac{4\hbar^2{a_{eg}}}{(m_{e}+m_{g})a^2_{\perp}}\Big(1-C\frac{a_{eg}}{\sqrt{2}a_{\perp}}\Big)^{-1}, \end{eqnarray} where $C\approx{1.4603}$. The parameters $a_{\alpha}$ and $a_{eg}$ are the three-dimensional s-wave scattering lengths of the intra- and inter-component condensates, respectively. We adopt the two-mode approximation \cite{Milburn} such that the field operator $\Psi_\alpha(x)$ can be expanded in terms of the two localized mode functions $u_{\alpha_L}(x)$ and $u_{\alpha_R}(x)$ as \begin{eqnarray} \Psi_{\alpha}(x)&=&\alpha_{L}u_{\alpha{L}}(x)+\alpha_{R}u_{\alpha{R}}(x), \end{eqnarray} where $\alpha_{L}$ and $\alpha_{R}$ are the annihilation operators of the atoms in the state $\alpha=e,g$ for the left and right modes of the double-well potential, respectively.
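As an illustrative estimate of the effective 1D coupling $\tilde U_\alpha$ (the scattering length and transverse trap frequency below are typical assumed values for ${}^{87}$Rb, not parameters fixed by the text):

```python
import math

hbar = 1.054571817e-34       # J s
m_rb = 86.909 * 1.66054e-27  # kg, 87Rb atomic mass
C = 1.4603                   # Olshanii's constant, as in the text

def u_1d(a_s, omega_perp, m=m_rb):
    """Effective 1D intra-component coupling of the text.
    a_s: 3D s-wave scattering length; omega_perp: transverse trap freq."""
    a_perp = math.sqrt(hbar / (m * omega_perp))
    corr = 1.0 - C * a_s / (math.sqrt(2.0) * a_perp)
    return 2.0 * hbar**2 * a_s / (m * a_perp**2) / corr, corr

# hypothetical but typical numbers: a_s ~ 5.3 nm, omega_perp = 2 pi x 2 kHz
U, corr = u_1d(5.3e-9, 2 * math.pi * 2e3)
print(U, corr)   # U > 0; the confinement correction is a few-percent effect
```

For these parameters $a_s \ll a_\perp$, so the confinement-induced denominator stays close to unity and the coupling is far from the confinement-induced resonance.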
The Hamiltonian of the system \cite{Ng1}, within the two-mode approximation, can be written as \begin{eqnarray} \label{H0} H_0'&=&{\hbar}E_{e}(e^{\dag}_{L}e_{L}+e^\dag_{R}e_{R})+\hbar{E_g}(g^{\dag}_{R}g_{R}+g^{\dag}_{L}g_{L})\nonumber\\ &&-{\hbar}J_e(e^{\dag}_{L}e_{R}+e^\dag_{R}e_{L})-{\hbar}J_g(g^{\dag}_{L}g_{R}+g^\dag_{R}g_{L})\nonumber\\ &&+{\hbar}U_{ee}[(e^\dag_{L}e_{L})^2+(e^\dag_{R}e_{R})^2]+{\hbar}U_{gg}[(g^\dag_{L}g_{L})^2\nonumber\\ &&+(g^\dag_{R}g_{R})^2]+2{\hbar}U_{eg}(e^\dag_{L}e_{L}g^\dag_{L}g_{L}+e^\dag_{R}e_{R}g^\dag_{R}g_{R}),~~~~ \end{eqnarray} where \begin{eqnarray} E_{\alpha}&=&\frac{1}{\hbar}\int\!\!{dx}u^*_{\alpha{j}}(x)\Big[-\frac{\hbar^2}{2m_{\alpha}}\frac{\partial^2}{\partial{x^2}}+V_{\rm DW}(x)\Big]u_{\alpha{j}}(x),~~\\ J_{\alpha}&=&-\frac{1}{\hbar}\int\!\!{dx}u^*_{\alpha{L}}(x)\Big[-\frac{\hbar^2}{2m_{\alpha}}\frac{\partial^2}{\partial{x^2}}+V_{\rm DW}(x)\Big]u_{\alpha{R}}(x),~~~\\ U_{\alpha}&=&\frac{\tilde{U}_{\alpha}}{\hbar}\int\!\!{dx}|u_{\alpha{j}}(x)|^4,\\ U_{\alpha\beta}&=&\frac{\tilde{U}_{\alpha\beta}}{\hbar}\int\!\!{dx}|u_{\alpha{j}}(x)|^2|u_{\beta{j}}(x)|^2, \end{eqnarray} and $j=L,R$. The positive parameters $E_\alpha$ and $J_\alpha$ \cite{tunparameter} are the ground-state frequencies of the localized modes $\alpha_{L,R}$ and the tunneling strengths between the two wells for the atoms in the state $\alpha$, respectively. Here $U_\alpha$ and $U_{\alpha\beta}$ are the two positive parameters which describe the intra- and inter-component interaction strengths, respectively. \subsection{Atoms coupled to the photon field in a microwave cavity} We consider that the atoms are coupled to a single mode of the photon field via their magnetic dipoles \cite{Henschel}.
Within the two-mode approximation, the Hamiltonian describing the cavity field, the atoms, and their interaction is given by \begin{eqnarray} H_I&=&\hbar{\omega_a}a^\dag{a}+\hbar{\omega_0}(e^\dag_Le_L+e^\dag_Re_R) +{\hbar}g[a(e^\dag_{L}g_{L}+e^\dag_{R}g_{R})\nonumber\\ &&+{\rm H.c.}], \end{eqnarray} where $\omega_a$ and $a$ are the frequency and the annihilation operator of the single mode of the photon field, and $\omega_0$ is the transition frequency of the two internal states. Here we have assumed that the wavelength of the microwave field (${\sim}~1$~cm) is much larger than the size of the condensate ($\sim~{10}~\mu$m) \cite{Schumm,Maussang}. Therefore, all atoms are coupled to the photon field with the same coupling strength $g=\mu_B\sqrt{\mu_0\omega_a/2\hbar{V}}$ \cite{Imamoglu}, where $\mu_B$ is the Bohr magneton, $\mu_0$ is the vacuum permeability, and $V$ is the volume of the superconducting resonator. The coupling strength $g$ can attain $1$~kHz \cite{Imamoglu} if the volume $V$ of the superconducting resonator is taken as $L\times{w}\times{{t_h}}\sim{1}~{\rm cm}\times{10}~\mu{\rm m}\times{200}~{\rm nm}$ \cite{Imamoglu,Wang}, where $L$ is the length, $w$ is the width, and ${t_h}$ is the thickness of the superconducting resonator. \section{Effective Hamiltonians in strong and weak tunneling regimes: Low atomic excitations} We will derive the effective Hamiltonians of the system in the limits of strong and weak tunnel coupling, respectively, where only a few atomic excitations are involved.
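As a numerical check of the coupling strength quoted in the previous section, $g=\mu_B\sqrt{\mu_0\omega_a/2\hbar V}$ can be evaluated directly with the resonator volume given there (a sketch; constants from CODATA values):

```python
import math

mu_B = 9.2740100783e-24    # J/T, Bohr magneton
mu_0 = 4 * math.pi * 1e-7  # vacuum permeability
hbar = 1.054571817e-34     # J s

def coupling(omega_a, volume):
    """Single-atom magnetic coupling g = mu_B sqrt(mu_0 omega_a / (2 hbar V))."""
    return mu_B * math.sqrt(mu_0 * omega_a / (2.0 * hbar * volume))

# resonator volume quoted in the text: 1 cm x 10 um x 200 nm
V = 1e-2 * 10e-6 * 200e-9
omega_a = 2 * math.pi * 6.8e9  # 87Rb hyperfine splitting
g = coupling(omega_a, V)
print(g)  # ~1e3 s^-1, consistent with the ~1 kHz figure quoted above
```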
Let us first write the total Hamiltonian of the system as \begin{eqnarray} \label{tHam} H&=&\hbar\omega_a{a^\dag{a}}+\hbar{\omega_0}(e^\dag_Le_L+e^\dag_Re_R)-{\hbar}J_e(e^{\dag}_{L}e_{R}+e^\dag_{R}e_{L})\nonumber\\ &&-{\hbar}J_g(g^{\dag}_{L}g_{R}+g^\dag_{R}g_{L})+{\hbar}U_{ee}[(e^\dag_{L}e_{L})^2+(e^\dag_{R}e_{R})^2]\nonumber\\ &&+{\hbar}U_{gg}[(g^\dag_{L}g_{L})^2+(g^\dag_{R}g_{R})^2]+2{\hbar}U_{eg}(e^\dag_{L}e_{L}g^\dag_{L}g_{L}\nonumber\\ &&+e^\dag_{R}e_{R}g^\dag_{R}g_{R})+{\hbar}g[a(e^\dag_{L}g_{L}+e^\dag_{R}g_{R})+{\rm H.c.}]. \end{eqnarray} The total number of atoms $N$ is conserved. We have omitted the constant term $E_0N$ for a symmetric double well, where $E_{\alpha}\approx{E_0}$ for the two masses $m_e$ and $m_g$ being equal. It is convenient to work in the rotating frame by applying a unitary transformation to the Hamiltonian $H$ in Eq.~(\ref{tHam}), where the unitary operator $U(t)$ is \begin{eqnarray} U(t)&=&\exp{[-i\omega_a({a^\dag{a}}+e^\dag_Le_L+e^\dag_Re_R)t]}. \end{eqnarray} The transformed Hamiltonian becomes \begin{eqnarray} H'&=&\hbar{\Delta}(e^\dag_Le_L+e^\dag_Re_R)-{\hbar}J_e(e^{\dag}_{L}e_{R}+e^\dag_{R}e_{L})-{\hbar}J_g(g^{\dag}_{L}g_{R}\nonumber\\ &&+g^\dag_{R}g_{L})+{\hbar}U_{ee}[(e^\dag_{L}e_{L})^2+(e^\dag_{R}e_{R})^2]+{\hbar}U_{gg}[(g^\dag_{L}g_{L})^2\nonumber\\ &&+(g^\dag_{R}g_{R})^2]+2{\hbar}U_{eg}(e^\dag_{L}e_{L}g^\dag_{L}g_{L}+e^\dag_{R}e_{R}g^\dag_{R}g_{R})\nonumber\\ &&+{\hbar}g[a(e^\dag_{L}g_{L}+e^\dag_{R}g_{R})+{\rm H.c.}], \end{eqnarray} where $\Delta=\omega_0-\omega_a$ is the detuning between the frequencies of the photon field and the two internal states. In the strong-tunneling regime, the tunnel coupling is dominant and the strength of the atom-atom interactions is relatively weak. In contrast, in the weak-tunneling regime, the atom-atom interactions become dominant and the tunneling strength is negligible. We will show that these two cases exhibit different behaviours in the atom-photon dynamics.
We will provide derivations of the two effective Hamiltonians in the two tunneling limits in the following subsections. \subsection{Strong-tunneling regime} In the limit of strong tunnel coupling, the tunneling strengths are much larger than the strengths of the atom-atom interactions, i.e., $J_{e},J_{g}~{\gg}~U_{ee},U_{gg},U_{eg}$. The total Hamiltonian of the system can be approximated as \begin{eqnarray} H_1&=&\hbar\Delta(e^\dag_Le_L+e^\dag_Re_R)-{\hbar}J_e(e^{\dag}_{L}e_{R}+e^\dag_{R}e_{L})\nonumber\\ &&-{\hbar}J_g(g^{\dag}_{L}g_{R}+g^\dag_{R}g_{L})+{\hbar}g[a(e^\dag_{L}g_{L}+e^\dag_{R}g_{R})+{\rm H.c.}].\nonumber\\ \end{eqnarray} We have neglected the atom-atom interaction terms in this Hamiltonian. The symmetric and asymmetric modes $g_{\pm}$ and $e_{\pm}$ are related to the localized modes as \begin{eqnarray} g_{\pm}&=&\frac{1}{\sqrt{2}}(g_L\pm{g_R}),\\ e_{\pm}&=&\frac{1}{\sqrt{2}}(e_L\pm{e_R}). \end{eqnarray} The Hamiltonian is then transformed as \begin{eqnarray} \label{wtHam1} H_1'&=&{\hbar}(\Delta-J_e)e^\dag_+e_++\hbar({\Delta+J_e})e^\dag_-e_- -{\hbar}J_g(g^{\dag}_{+}g_{+}\nonumber\\ &&-g^\dag_{-}g_{-})+{\hbar}g(ae^\dag_{+}g_{+}+{\rm H.c.})+{\hbar}g(ae^\dag_{-}g_{-}+{\rm H.c.}).\nonumber\\ \end{eqnarray} Here the atoms are in the symmetric (asymmetric) mode if they populate the states $g^k_+|0\rangle_+$ or $e^k_+|0\rangle_+$ ($g^k_-|0\rangle_-$ or $e^k_-|0\rangle_-$), where $|0\rangle_+$ ($|0\rangle_-$) is the vacuum state of the symmetric (asymmetric) mode and $k$ is a non-negative integer. We consider the system to be initially prepared in the ground state in the limit of strong tunnel coupling, i.e., the ground state of the symmetric mode. The ground state can be obtained by applying the operator $(g^\dag_+)^N$ to the vacuum state $|0\rangle_+$ of the symmetric mode, i.e., \begin{equation} \label{Psi_s} |\Psi_{1}(0)\rangle=\frac{1}{\sqrt{N!}}(g^{\dag}_+)^N|0\rangle_+, \end{equation} where $N$ is the total number of atoms.
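The form invariance of the coupling term under the symmetric/asymmetric mode transformation used above, $e^\dag_L g_L + e^\dag_R g_R = e^\dag_+ g_+ + e^\dag_- g_-$, can be checked numerically on a small truncated Fock space (a sketch; the truncation dimension is an arbitrary choice):

```python
import numpy as np

def mode_op(which, n_modes=4, dim=2):
    """Annihilation operator for one of n_modes on a truncated Fock space."""
    a = np.diag(np.sqrt(np.arange(1, dim)), k=1)  # single-mode operator
    ops = [np.eye(dim)] * n_modes
    ops[which] = a
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

eL, eR, gL, gR = (mode_op(i) for i in range(4))
ep, em = (eL + eR) / np.sqrt(2), (eL - eR) / np.sqrt(2)
gp, gm = (gL + gR) / np.sqrt(2), (gL - gR) / np.sqrt(2)

lhs = ep.conj().T @ gp + em.conj().T @ gm
rhs = eL.conj().T @ gL + eR.conj().T @ gR
print(np.allclose(lhs, rhs))  # the coupling term is form-invariant
```

Since the identity is linear in each operator, it holds exactly even on the truncated space; this is what allows the symmetric and asymmetric sectors in Eq.~(\ref{wtHam1}) to couple to the cavity independently with the same $g$.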
Note that the atoms in the symmetric and asymmetric modes are independently coupled to the photon field in Eq.~(\ref{wtHam1}). Therefore, only the atoms in the symmetric mode are involved in the dynamics of the atom-photon interactions if the system starts in the state $|\Psi_1(0)\rangle$ in Eq.~(\ref{Psi_s}). In fact, there are only a few excitations in the asymmetric mode due to the atomic interactions. The effect of the excitations of the asymmetric mode on the dynamics of the atom-photon interactions is very small. This is because the Rabi coupling strength cannot be greatly enhanced with a small number of atoms in the asymmetric mode. We briefly discuss the validity of this assumption in Appendix A. It is instructive to express the Hamiltonian in terms of angular momentum operators: \begin{eqnarray} \label{S+1} S^{(+)}_+&=&g_{+}e^\dag_{+},\\ \label{S+2} S^{(+)}_-&=&e_{+}g^\dag_{+},\\ \label{S+3} S^{(+)}_z&=&\frac{1}{2}(e^\dag_{+}e_{+}-g^\dag_{+}g_{+}). \end{eqnarray} The Hamiltonian can be rewritten as \begin{eqnarray} \tilde{H}_1'&=&\hbar{\Delta}S^{(+)}_z+{\hbar}g(aS^{(+)}_++{\rm H.c.}). \end{eqnarray} For simplicity, we have assumed that the tunneling strengths $J_e$ and $J_g$ are equal. We have also omitted the constant term $\hbar{N}\Delta/2$. By applying the Holstein-Primakoff transformation (HPT) \cite{Holstein}, the angular momentum operators can be mapped onto harmonic oscillators as \begin{eqnarray} \label{HPT1a} S^{(+)}_{+}&=&b^\dag\sqrt{N-b^\dag{b}},\\ \label{HPT1b} S^{(+)}_{-}&=&b\sqrt{N-b^\dag{b}},\\ \label{HPT1c} S^{(+)}_{z}&=&b^\dag{b}-\frac{N}{2}. \end{eqnarray} At low excitation, the mean excitation number $\langle{b^\dag{b}}\rangle$ is much smaller than the total number of atoms $N$. The angular momentum operators can then be approximated by bosonic operators \cite{Ng1,Ng3}.
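The accuracy of this bosonic approximation can be sketched by comparing the exact Holstein-Primakoff matrix elements with the approximated ones; the relative error scales as $n/2N$ (the atom numbers below are illustrative):

```python
import math

def exact_me(n, N):
    """<n+1| S_+ |n> with S_+ = b'sqrt(N - b'b), the exact HPT form."""
    return math.sqrt((n + 1) * (N - n))

def approx_me(n, N):
    """Same matrix element after replacing sqrt(N - b'b) -> sqrt(N)."""
    return math.sqrt(N * (n + 1))

N = 10_000
for n in (0, 1, 10, 100):
    rel = 1.0 - exact_me(n, N) / approx_me(n, N)
    print(n, rel)  # relative error ~ n/(2N): tiny while <b'b> << N
```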
The effective Hamiltonian can be obtained as \begin{eqnarray} \label{eHam1} H^{(1)}_{\rm eff}&=&\hbar{\Delta}b^\dag{b}+{\hbar}g\sqrt{N}(a{b^\dag}+{\rm H.c.}). \end{eqnarray} Note that the effective Rabi frequency is enhanced by a factor of $\sqrt{N}$. The effective Hamiltonian $H^{(1)}_{\rm eff}$ in Eq.~(\ref{eHam1}) describes the interactions between the collective-excitation mode and the single mode of the photon field. \subsection{Weak-tunneling regime} Now we investigate the system in the weak-tunneling regime, where the atom-atom interaction strengths are much larger than the tunneling strengths, i.e., $U_{ee},U_{gg},U_{eg}{\gg}J_e,J_g$. In this limit, we assume that the tunneling between the two condensates is effectively turned off. The total Hamiltonian can be approximated as \begin{eqnarray} H_2&=&{\hbar}\Delta(e^\dag_Le_L+e^\dag_Re_R)+{\hbar}g[a(e^\dag_{L}g_{L}+e^\dag_{R}g_{R})+{\rm H.c.}]\nonumber\\ &&+{\hbar}U_{ee}[(e^\dag_{L}e_{L})^2+(e^\dag_{R}e_{R})^2]+{\hbar}U_{gg}[(g^\dag_{L}g_{L})^2\nonumber\\ &&+(g^\dag_{R}g_{R})^2]+2{\hbar}U_{eg}(e^\dag_{L}e_{L}g^\dag_{L}g_{L}+e^\dag_{R}e_{R}g^\dag_{R}g_{R}). \end{eqnarray} Here we have ignored the tunnel-coupling terms. This Hamiltonian can be expressed in terms of the angular momentum operators: \begin{eqnarray} S_{j+}&=&g_je^\dag_{j},\\ S_{j-}&=&e_{j}g^\dag_j,\\ S_{jz}&=&\frac{1}{2}(e^\dag_{j}e_{j}-g^\dag_{j}g_{j}), \end{eqnarray} where $j=L,R$. Now the Hamiltonian is rewritten as \begin{equation} \label{wtHam2} \tilde{H}_2=\hbar\sum_{j=L,R}(\Delta+\delta)S_{jz}+{\hbar}g(aS_{j+}+{\rm H.c.})+\hbar\chi{S}^2_{jz}, \end{equation} where $\delta=(U_{ee}-U_{gg})N/2$ and $\chi=U_{ee}+U_{gg}-2U_{eg}$. We have omitted the constant term $\hbar(U_{ee}+U_{gg}+2U_{eg})N^2/16+\hbar{N}\Delta/2$ in Eq.~(\ref{wtHam2}).
We consider that all atoms, in the state $|g\rangle$, are initially prepared in the ground state of the Hamiltonian in Eq.~(\ref{wtHam2}), which can be described by a product of two number states as \begin{eqnarray} \label{Psi_w} |\Psi_{2}(0)\rangle&=&|{N}/{2}\rangle_{g_L}|{N}/{2}\rangle_{g_R}. \end{eqnarray} Without loss of generality, we assume that $N$ is an even number. We apply the HPT such that the angular momentum operators can be mapped onto harmonic oscillators as \begin{eqnarray} S_{L+}&=&c^\dag\sqrt{N/2-c^\dag{c}},~~~~S_{L-}=c\sqrt{N/2-c^\dag{c}},\\ &&~~~~~~~~~~S_{Lz}=c^\dag{c}-\frac{N}{4},\\ S_{R+}&=&d^\dag\sqrt{N/2-d^\dag{d}},~~~~S_{R-}=d\sqrt{N/2-d^\dag{d}},\\ &&~~~~~~~~~~S_{Rz}=d^\dag{d}-\frac{N}{4}. \end{eqnarray} If the mean numbers of the atomic excitations, $\langle{c^\dag{c}}\rangle$ and $\langle{d^\dag{d}}\rangle$, are much smaller than the number of atoms $N/2$ in each well, then the Hamiltonian can be approximated \cite{Ng1,Ng3} as \begin{eqnarray} \label{eHam2} H^{(2)}_{\rm eff}&=&\hbar\Delta_w(c^\dag{c}+d^\dag{d})+{\hbar}g\sqrt{\frac{N}{2}}[a(c^\dag+d^\dag)+{\rm H.c.}]\nonumber\\ &&+\hbar\chi[(c^\dag{c})^2+(d^\dag{d})^2], \end{eqnarray} where $\Delta_w=2\Delta+\delta-{\chi}N/2$. The effective Rabi frequency is enhanced by a factor of $\sqrt{N/2}$. The parameter $\chi$ is much smaller than the effective Rabi frequency because the scattering lengths of the intra- and inter-component condensates of ${}^{87}$Rb are very similar \cite{Matthews}. We will ignore the terms containing the parameter $\chi$ in Eq.~(\ref{eHam2}) in our later discussion. The effective Hamiltonian $H^{(2)}_{\rm eff}$ in Eq.~(\ref{eHam2}) describes the interactions between the single mode of the photon field and the two modes of the collective excitations of the atoms in the left and right potential wells, respectively. The system can thus be described as three coupled harmonic oscillators.
The effective Rabi frequency for each atomic mode is proportional to the factor $\sqrt{N/2}$. This differs from the effective Rabi frequency in the strong-tunneling regime, which is proportional to the factor $\sqrt{N}$. \section{Dark states and Quantum dynamics of the system} \begin{figure}[ht] \centering \includegraphics[height=8.5cm]{fig_stnum_4} \caption{ \label{fig_stnum} (Color online) Time evolution of the mean photon number (a) and the mean atomic excitations (b) with the damping rate $\kappa=100g$ and the detuning $\Delta=0$. The different numbers of atoms $N$ are $5\times{10^3}$ (black-solid line), $1\times{10^4}$ (blue-dashed line) and $2\times{10^4}$ (red-dotted line), respectively. } \end{figure} We now study the dark states of the system, which has different dark-state subspaces in the strong- and weak-tunneling regimes. Let us first introduce the definition of dark states. Dark states \cite{Fleischhauer} are the eigenstates of the atom-photon interaction operator $\mathcal{V}$ with zero eigenvalue, i.e., \begin{eqnarray} \mathcal{V}|{\rm Dark}\rangle&=&0. \end{eqnarray} The dark states of this system, in the strong- and weak-tunneling regimes, can be obtained from \begin{eqnarray} H^{(j)}_{\rm eff}|D\rangle_j=0, \end{eqnarray} where $H^{(j)}_{\rm eff}$ are the two effective Hamiltonians in Eqs.~(\ref{eHam1}) and (\ref{eHam2}) with zero detunings ($\Delta=\Delta_w=0$) and $j=1,2$. In the limit of strong tunnel coupling, the dark state $|D\rangle_1$ is the product state of the vacuum state of the photon field and the ground state of the atomic mode $b$, which is given by \begin{equation} |D\rangle_1=|0\rangle_a|0\rangle_b. \end{equation} This state is the ground state of the coupled system of the atoms and the photon field.
\begin{figure}[ht] \centering \includegraphics[height=9.5cm]{fig_wtnum_4} \caption{ \label{fig_wtnum} (Color online) Dynamics of the mean photon number and mean atomic excitation numbers with the damping rate $\kappa=100g$ and the detuning $\Delta_w=0$. (a) The mean photon number $\langle{a^\dag{a}}\rangle$ as a function of the time $gt$. (b) and (c) show the time evolution of the mean excitation numbers of the atomic modes $c$ and $d$, respectively. The numbers of atoms $N$ are $5\times{10^3}$ (black-solid line), $1\times{10^4}$ (blue-dashed line) and $2\times{10^4}$ (red-dotted line), respectively.} \end{figure} In the weak-tunneling regime, the system has a family of dark states, \begin{equation} \label{adark1} |D_{n}\rangle_{2}=|0\rangle_a|D^{a}_n\rangle, \end{equation} where \begin{equation} \label{adark2} |D^{a}_n\rangle=2^{-n/2}\sum^n_{j=0}(-1)^j\sqrt{C^{n}_j}|n-j\rangle_c|j\rangle_d, \end{equation} and $C^{n}_j$ is the binomial coefficient. The dark states $|D_{n}\rangle_2$ are product states of the vacuum $|0\rangle_a$ of the photon field with the states $|D^{a}_n\rangle$, which are eigenstates of the operator $c+d$ with zero eigenvalue. Note that each state $|D^{a}_n\rangle$ in Eq.~(\ref{adark2}) is a superposition of the states $|n-j\rangle_c|j\rangle_d$, all of which carry the same number of atomic excitations. To gain more insight into the dark states, let us first investigate the atom-photon dynamics subject to the dissipation of the photon field. A superconducting resonator with a frequency of $\sim{40}$~GHz can be cooled down to low temperatures ($\sim{25}$~mK) \cite{Hofheinz}. This allows us to treat the cavity field as weakly coupled to a reservoir at zero temperature \cite{Ng4}. Note that the relaxation time (several $\mu$s) of a single photon inside the superconducting resonator is much shorter than the coherence time ($\sim{1}$~s) of the cold atoms \cite{Harber,Treutlein}.
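The zero-eigenvalue property of the states $|D^a_n\rangle$ is easy to check numerically. The following is a minimal sketch (our consistency check, not part of the original analysis) that builds the coefficient array of $|D^a_n\rangle$ in a truncated two-mode Fock basis and verifies that $c+d$ annihilates it:

```python
import numpy as np
from math import comb, sqrt

def dark_state(n, dim):
    """Coefficient array psi[j_c, j_d] of |D^a_n> in the two-mode Fock basis."""
    psi = np.zeros((dim, dim))
    for j in range(n + 1):
        psi[n - j, j] = (-1) ** j * sqrt(comb(n, j)) * 2 ** (-n / 2)
    return psi

def lower(psi, mode):
    """Apply the annihilation operator of mode c (mode=0) or d (mode=1)."""
    out = np.zeros_like(psi)
    w = np.sqrt(np.arange(1, psi.shape[mode]))   # matrix elements sqrt(m)
    if mode == 0:
        out[:-1, :] = w[:, None] * psi[1:, :]
    else:
        out[:, :-1] = w[None, :] * psi[:, 1:]
    return out

for n in range(5):
    psi = dark_state(n, dim=n + 2)
    residual = lower(psi, 0) + lower(psi, 1)        # (c + d)|D^a_n>
    assert np.linalg.norm(residual) < 1e-12         # zero eigenvalue of c + d
    assert abs(np.linalg.norm(psi) - 1.0) < 1e-12   # normalization
```

Since $a|0\rangle_a=0$ and $(c+d)|D^a_n\rangle=0$, each $|D_n\rangle_2$ is annihilated by the interaction part of $H^{(2)}_{\rm eff}$, as claimed.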
The effect of the dissipation of the atoms caused by the noise of the surface of the superconductor is negligible \cite{Kasch}. The main source of dissipation is the damping of the photon field. The dynamics of the system can be described by the master equation at zero temperature \cite{Ng4,Barnett}, \begin{eqnarray} \label{dmaster} \dot{\rho}_j&=&-\frac{i}{\hbar}[H^{(j)}_{\rm eff},\rho_j]+\frac{\kappa}{2}(2a{\rho_j}a^\dag-a^\dag{a}\rho_j-\rho_j{a^\dag{a}}), \end{eqnarray} where $\rho_j$ is the density matrix of the total system, and $j=1,2$. Obviously, the dark states $|D\rangle_1$ and $|D_n\rangle_2$ are steady-state solutions of the master equation in Eq.~(\ref{dmaster}). Thus, the dark states are robust against the dissipation of the photon field. In the strong-tunneling regime, the steady state is the dark state $|D\rangle_1$. In the weak-tunneling regime, the state of the condensates evolves into a mixture of dark states $|D_n\rangle_2$ through the dissipation of the photon field. Now we study the dynamics of the system in the strong-tunneling regime, where the state is prepared as $|0\rangle_a|1\rangle_b$, with $|1\rangle_b$ a number state. We plot the time evolution of the mean photon number and mean atomic-excitation number in Fig.~\ref{fig_stnum}. The mean photon number and mean atomic excitations undergo a few oscillations and then both decay to zero. We also see that faster oscillations are obtained when a larger number of atoms $N$ is used. We proceed to investigate the atom-photon dynamics in the weak-tunneling regime. The system is initially prepared in the state $|0\rangle_a|1\rangle_c|0\rangle_d$, where $|1\rangle_c$ is a number state. In Fig.~\ref{fig_wtnum}, we plot the mean photon number and the mean excitation numbers of the two atomic modes versus time.
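The relaxation into the dark-state subspace can be reproduced with a few lines of numerics. The sketch below integrates the weak-tunneling master equation (with $\Delta_w=\chi=0$) in the single-excitation sector, truncating every mode to two Fock levels; the time unit is set by $g_{\rm eff}=g\sqrt{N/2}$, and we take $\kappa=10\,g_{\rm eff}$ instead of $100g$ purely to keep the integration short. These parameter choices are ours, not the paper's:

```python
import numpy as np
from scipy.integrate import solve_ivp

def ann(dim):
    """Annihilation operator on a dim-dimensional Fock space."""
    return np.diag(np.sqrt(np.arange(1, dim)), 1)

dims = (2, 2, 2)                     # photon a, atomic modes c and d
Ia, Ic, Id = (np.eye(k) for k in dims)
a = np.kron(np.kron(ann(2), Ic), Id)
c = np.kron(np.kron(Ia, ann(2)), Id)
d = np.kron(np.kron(Ia, Ic), ann(2))

g_eff, kappa = 1.0, 10.0             # g*sqrt(N/2) = 1 sets the time unit
H = g_eff * (a @ (c.T + d.T) + (c + d) @ a.T)    # resonant interaction

def rhs(t, r):
    rho = r.reshape(H.shape)
    drho = -1j * (H @ rho - rho @ H)             # unitary part
    drho += (kappa / 2) * (2 * a @ rho @ a.T     # photon damping
                           - a.T @ a @ rho - rho @ a.T @ a)
    return drho.ravel()

# initial state |0>_a |1>_c |0>_d
psi0 = np.zeros(8, dtype=complex)
psi0[np.ravel_multi_index((0, 1, 0), dims)] = 1.0
rho0 = np.outer(psi0, psi0.conj())

sol = solve_ivp(rhs, (0.0, 40.0), rho0.ravel(), rtol=1e-8, atol=1e-10)
rho = sol.y[:, -1].reshape(H.shape)
n_a, n_c, n_d = (np.trace(x.T @ x @ rho).real for x in (a, c, d))
# the photon mode empties, while a quarter of an excitation survives in each
# atomic mode: the system relaxes onto the mixture of |D_0>_2 and |D_1>_2
```

With one total excitation the two-level truncation is exact, since the interaction conserves the total excitation number.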
When the atom-photon interactions are turned on, the excitation number of the atomic mode $c$ decreases while the mean photon number increases, as shown in Fig.~\ref{fig_wtnum}. Afterwards, the mean excitation number of the atomic mode $d$ starts to increase. This means that energy from the atomic mode $c$ is transferred to the photon field, and the atomic mode $d$ absorbs energy from the photon field. In this way, the two atomic modes exchange energy via the photon field. Energy is exchanged between the atoms and the photon field at a faster rate when a larger number of atoms $N$ is used. We also note that the mean photon number in Fig.~\ref{fig_wtnum}(a) is about half of that in Fig.~\ref{fig_stnum}(a). This is because, in the weak-tunneling regime, the atoms in mode $d$ absorb energy from the photon field. In Fig.~\ref{fig_wtnum}(a), the mean photon number decays to zero after a period of time. However, the mean excitation numbers of modes $c$ and $d$ remain non-zero, as shown in Figs.~\ref{fig_wtnum}(b) and (c). This is because the state of the atoms evolves into a mixture of the dark states $|D_0\rangle_2$ and $|D_1\rangle_2$, and a single excitation is shared by the atoms in the dark state $|D_1\rangle_2$. This results in non-zero excitation numbers of the two atomic modes. \section{Generation of entanglement between two spatially separated condensates} \begin{figure}[ht] \centering \includegraphics[height=8.5cm]{fig_wtent14} \caption{ \label{fig_wtent1} (Color online) Time evolution of the entanglement witness in (a) and logarithmic negativity in (b), for the damping rate $\kappa=100g$ and the detuning $\Delta_w=0$. The numbers of atoms $N$ are $5\times{10^3}$ (black-solid line), $1\times{10^4}$ (blue-dashed line) and $2\times{10^4}$ (red-dotted line), respectively.} \end{figure} We have shown that the system has different dark-state subspaces in the two tunneling limits.
Now we study the entanglement between the condensates in the two potential wells in the weak-tunneling regime. In this regime, the system has a family of dark states which can be used for generating entanglement. Here we consider the tunneling between the wells to be effectively turned off, so the two independent condensates in the two potential wells are initially unentangled. We will show that steady-state entanglement between the two condensates can be produced as the system evolves into a mixture of dark states $\{|D_n\rangle_2\}$ through the dissipation of the photon field \cite{Yang,Plenio,Joshi}. To study the quantum entanglement between the two atomic modes $c$ and $d$, it is necessary to obtain the density matrix of the atomic condensate. By tracing out the photon field, we obtain the density matrix $\rho_{cd}$, \begin{equation} \rho_{cd}={\rm Tr}_{a}(\rho), \end{equation} where $\rho$ is the density matrix of the total system. Let us first examine the entanglement of a single dark state $|D_n\rangle_2$. For a dark state $|D_n\rangle_2$ in Eq.~(\ref{adark1}), the density matrix $\rho_{cd}$ is given by \begin{equation} \rho_{cd}=|D^a_n\rangle\langle{D}^a_n|, \end{equation} where $|D^a_n\rangle$ is the state in Eq.~(\ref{adark2}). The degree of entanglement between the two atomic modes can be quantified by the von Neumann entropy, defined as \begin{eqnarray} E_F&=&-{\rm Tr}(\rho_{c}\ln\rho_{c}), \end{eqnarray} where $\rho_{c}={\rm Tr}_d(\rho_{cd})$ is the reduced density matrix. \begin{figure}[ht] \centering \includegraphics[height=8.5cm]{fig_wtent24} \caption{ \label{fig_wtent2} (Color online) Plot of the dynamics of entanglement. (a) entanglement witness $\mathcal{W}$ and (b) logarithmic negativity $E_{\mathcal{N}}(\rho_{cd})$ as a function of the time $gt$.
The initial states $|0\rangle_a|n\rangle_c|0\rangle_d$ with different excitation numbers $n$ are shown, for $n=1$ (black-solid line), $n=2$ (blue-dashed line) and $n=3$ (red-dotted line), respectively. The parameters are $\kappa=100g$, $\Delta_w=0$ and $N=5{\times}{10^3}$. } \end{figure} The von Neumann entropy is \begin{equation} \label{vNentropy} E_F=-2^{-n}\sum^n_{j=0}C^{n}_j\ln\left(2^{-n}C^n_j\right). \end{equation} Thus, the state $|D^a_n\rangle$ is an entangled state. The degree of two-mode entanglement becomes higher for larger $n$. In general, however, the density matrix $\rho_{cd}$ is a mixed state. To quantify the entanglement of a mixed state, the logarithmic negativity can be used; it is defined as \cite{Vidal} \begin{eqnarray} E_{\mathcal{N}}(\rho_{cd})&=&\log_2\parallel{\rho^{T_c}_{cd}}\parallel, \end{eqnarray} where $\rho^{T_c}_{cd}$ is the partial transpose of the density matrix $\rho_{cd}$ and $\parallel{\cdot}\parallel$ is the trace norm. However, the logarithmic negativity is difficult to determine experimentally. It is therefore useful to have an experimentally accessible quantity that detects quantum entanglement between two bosonic modes \cite{Hillery}. If the inequality \begin{eqnarray} |\langle{cd^\dag}\rangle|^2>\langle{n_c}{n_d}\rangle, \end{eqnarray} is satisfied \cite{Hillery}, then the state is entangled. Here $n_c=c^\dag{c}$ and $n_d=d^\dag{d}$ are the number operators of the atomic modes $c$ and $d$, respectively. For convenience, we define the quantity \begin{equation} \mathcal{W}=\langle{n_c}{n_d}\rangle-|\langle{cd^\dag}\rangle|^2. \end{equation} If $\mathcal{W}$ is negative, then the state is non-separable; $\mathcal{W}$ is called an entanglement witness \cite{Horodecki}. We now investigate the dynamics of entanglement between the two atomic modes.
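Both the entropy formula \eqref{vNentropy} and the claim that $\mathcal{W}<0$ on the dark states can be verified directly from the coefficients of $|D^a_n\rangle$, whose Schmidt coefficients are $2^{-n}C^n_j$. A short sketch (a consistency check of ours, not a computation from the paper):

```python
import numpy as np
from math import comb, log

def EF(n):
    """Von Neumann entropy of |D^a_n>; Schmidt coefficients are 2^{-n} C^n_j."""
    lam = [2 ** (-n) * comb(n, j) for j in range(n + 1)]
    return -sum(l * log(l) for l in lam)

def witness(n):
    """W = <n_c n_d> - |<c d^dag>|^2 evaluated in the dark state |D^a_n>."""
    amp = {(n - j, j): (-1) ** j * comb(n, j) ** 0.5 * 2 ** (-n / 2)
           for j in range(n + 1)}
    ncnd = sum(p * q * v ** 2 for (p, q), v in amp.items())
    # c d^dag maps |p, q> to sqrt(p (q + 1)) |p - 1, q + 1>
    cddag = sum(v * amp.get((p - 1, q + 1), 0.0) * (p * (q + 1)) ** 0.5
                for (p, q), v in amp.items())
    return ncnd - cddag ** 2

assert abs(EF(1) - log(2)) < 1e-12             # |D^a_1> carries one ebit
assert EF(3) > EF(2) > EF(1)                   # entanglement grows with n
assert all(witness(n) < 0 for n in (1, 2, 3))  # W < 0: non-separable
```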
We consider an initial state that is a product state of the three modes, i.e., $|0\rangle_a|1\rangle_c|0\rangle_d$, where $|1\rangle_c$ is a number state. We plot the entanglement witness and logarithmic negativity versus time in Fig.~\ref{fig_wtent1}. The figure shows that the entanglement witness decreases and the logarithmic negativity increases at similar rates, and then both saturate after a short time. This shows that steady-state entanglement can be produced in a short time via the dissipative photon field. The entanglement is also produced faster if a larger number of atoms is used. Moreover, the entanglement witness is consistent with the logarithmic negativity as an indicator of the degree of entanglement; the entanglement witness is thus a faithful indicator for detecting the entanglement between the two bosonic modes. Next, we study the generation of entanglement using an initial state $|0\rangle_a|n\rangle_c|0\rangle_d$ with a higher degree of excitation, where $|n\rangle_c$ is a number state and $n$ is larger than one. In Fig.~\ref{fig_wtent2}, the entanglement witness and logarithmic negativity are plotted versus time. A higher degree of entanglement is obtained when the higher excitation numbers $n=2,3$ are used. \vspace*{-0.1cm} \section{Summary} \vspace*{-0.1cm} We have studied a two-component condensate in a double-well potential, where the atoms are magnetically coupled to a single mode of the photon field inside a superconducting resonator. The system has different dark-state subspaces in the strong- and weak-tunneling regimes, which give rise to different dynamics of the atomic excitations in the two regimes. Steady-state entanglement between the two spatially separated condensates can be produced as the system evolves into a mixture of dark states through the dissipative photon field. We have shown that the entanglement can be faithfully indicated by an entanglement witness.
\vspace*{0.2cm} \begin{acknowledgments} \vspace*{0.2cm} H.T.N. thanks David Hallwood for his careful reading and helpful comments, and C. K. Law for useful discussions. This work was partially supported by the U.S. National Science Foundation. We also acknowledge the partial support of the National Science Council of Taiwan (Grant No. 97-2112-M-002-003-MY3) and National Taiwan University (Grant No. 99R80869). \end{acknowledgments} \vspace*{0.1cm}
\section{Introduction} Let $\ell$, $m\in\{0,1,\dots\}$. Consider the nonlinear integral equation \begin{equation} \tag{I} \begin{split} & u(x,t)=\int_{{\mathbb R}^N}G(x,y,t)\phi(y)\,dy\\ & \qquad +\sum_{|\alpha|=\ell}a_\alpha\int_0^t\int_{{\mathbb R}^N}\partial_x^\alpha G(x,y,t-s) F\left(y,s,u(y,s),\dots,\nabla^m u(y,s)\right)\,dy\,ds \end{split} \end{equation} for $ x\in{\mathbb R}^N$ and $t>0$, where $N\ge1$, $\phi$ is a locally integrable function in ${\mathbb R}^N$, $\{a_\alpha\}\subset{\mathbb R}$, and $F$ is a continuous function in ${\mathbb R}^N\times[0,\infty)\times{\mathbb R}\times\cdots\times{\mathbb R}^{N^m}$. Here $G=G(x,y,t)$ is an integral kernel, which is a generalization of the fundamental solutions to the heat equation, fractional heat equations, and higher-order heat equations. Throughout this paper we assume the following condition~(G) on the integral kernel~$G$: \begin{itemize} \item[(G)] \begin{itemize} \item[(a)] $G\in C^{\ell+m}({\mathbb R}^{2N}\times(0,T_*))$ for some $T_*\in(0,\infty]$; \item[{\rm (b)}] There exist $C_G>0$, $d>\ell+m$, and $L>0$ such that $$ |\nabla^j G(x,y,t)|\le C_G\,t^{-\frac{N+j}{d}}\left(1+t^{-\frac{1}{d}}|x-y|\right)^{-N-L-j} $$ for $(x,y,t)\in{\mathbb R}^{2N}\times(0,T_*)$ and $j\in\{0,\dots,\ell+m\}$; \item[(c)] $\displaystyle{G(x,z,t)=\int_{{\mathbb R}^N}G(x,y,t-s)G(y,z,s)dy}\quad$ for $x$, $z\in{\mathbb R}^N$ and $0<s<t<T_*$. \end{itemize} \end{itemize} The purpose of this paper is to obtain sufficient conditions for the existence of solutions to integral equation~(I) under condition~(G) and a suitable structure condition on~$F$. Our sufficient conditions enable us to study the existence of solutions to the Cauchy problem for nonlinear parabolic equations of the form \begin{equation} \label{eq:1.1} \partial_t u+{\mathcal L}u=\displaystyle{\sum_{|\alpha|=\ell}} a_\alpha\partial_x^\alpha F(x,t,u,\dots,\nabla_x^m u). 
\end{equation} Here $-{\mathcal L}$ is a generalization of elliptic operators with variable coefficients, fractional elliptic operators, and higher-order elliptic operators. \vspace{3pt} Let us consider the Cauchy problem for the semilinear parabolic equation \begin{equation} \tag{S} \left\{ \begin{array}{ll} \partial_t u+(-\Delta)^{\frac{d}{2}} u=|u|^p, & \quad x\in{\mathbb R}^N,\,\,t>0,\vspace{3pt}\\ u(x,0)=\phi(x)\ge 0, & \quad x\in{\mathbb R}^N, \end{array} \right. \end{equation} where $d>0$ and $p>1$. The solvability of problem~(S) has been studied in many papers. Here we just refer to the monograph~\cite{QS} and papers~\cites{BP, C, FL, GP, GG, HI02, HI01, HIT, IKO, KY, P, Q, RS, W, Y}, which are closely related to this paper. Among others, for the case of $0<d\le 2$, the first author of this paper and Hisa \cite{HI01} developed the arguments in \cites{LN, RS, S} and obtained necessary conditions and sufficient conditions for the existence of solutions to problem~(S). As corollaries of their main results, they proved the following properties for $0<d\le 2$: \begin{itemize} \item[(a)] Let $1<p<1+d/N$. Then problem~(S) possesses a local-in-time nonnegative solution if and only if $\sup_{x\in{\mathbb R}^N}\|\phi\|_{L^1(B(x,1))}<\infty$; \item[(b)] There exists $\gamma>0$ such that, if $$ \phi(x)\ge \left\{ \begin{array}{ll} \gamma|x|^{-\frac{d}{p-1}} & \mbox{if}\quad \displaystyle{p>1+\frac{d}{N}},\vspace{3pt}\\ \gamma|x|^{-N}\displaystyle{\biggr|\log\biggr(e+\frac{1}{|x|}\biggr)\biggr|^{-\frac{N}{d}-1}} & \mbox{if}\quad \displaystyle{p=1+\frac{d}{N}},\vspace{3pt}\\ \end{array} \right. 
\quad x\in B(0,1), $$ then problem~(S) possesses no local-in-time nonnegative solutions; \item[(c)] There exists $\gamma'>0$ such that, if $$ 0\le\phi(x)\le \left\{ \begin{array}{ll} \gamma'|x|^{-\frac{d}{p-1}}+C & \mbox{if}\quad \displaystyle{p>1+\frac{d}{N}},\vspace{3pt}\\ \gamma'|x|^{-N}\displaystyle{\biggr|\log\biggr(e+\frac{1}{|x|}\biggr)\biggr|^{-\frac{N}{d}-1}}+C & \mbox{if}\quad \displaystyle{p=1+\frac{d}{N}},\vspace{3pt}\\ \end{array} \right. \quad x\in{\mathbb R}^N, $$ for some $C>0$, then problem~(S) possesses a local-in-time nonnegative solution. \end{itemize} In the proof of assertion~(c), it is crucial to construct supersolutions to problem~(S) by the semigroup property of the corresponding semigroup. Subsequently, in \cite{IKO}, the authors of this paper obtained necessary conditions and sufficient conditions for the existence of solutions to problem~(S) in the case of $d=2,4,\cdots$, and proved that assertions~(a), (b), and (c) hold. One of the main difficulties in the study of sufficient conditions in the case of $d=2,4,\cdots$ comes from the sign-change of the fundamental solution~$G_d$ to the parabolic equation $$ \partial_t u+(-\Delta)^{\frac{d}{2}} u=0,\quad x\in{\mathbb R}^N,\,\,\,t>0. $$ In order to overcome the difficulty, they introduced a majorant kernel $K=K(x,t)$ satisfying $$ |G_d(x,t)|\le K(x,t),\quad \int_{{\mathbb R}^N}K(x-y,t-s)K(y,s)\,dy\le CK(x,t), $$ for $x\in{\mathbb R}^N$ and $0<s<t$. Here $C$ is a positive constant independent of $x\in{\mathbb R}^N$ and $0<s<t$. Thanks to the majorant kernel $K$, they developed the arguments in \cite{HI01} to obtain sufficient conditions for the existence of solutions to problem~(S) in the case of $d=2,4,\cdots$. \vspace{3pt} In this paper, under condition~(G), we develop the arguments in \cite{IKO} and obtain sufficient conditions for the existence of solutions to integral equation~(I). 
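For orientation, in problem (S) one has $\ell=n=m=0$, $A=0$, and a single exponent $p$, so the critical exponent is $1+d/N$ and the borderline singularity of the initial data is $|x|^{-d/(p-1)}$, as in assertions (a)--(c). This bookkeeping can be encoded in a few lines; the following sketch (ours, with a hypothetical helper \texttt{classify\_S} and informal regime labels) uses exact rational arithmetic:

```python
from fractions import Fraction

def classify_S(N, d, p):
    """Bookkeeping for problem (S): u_t + (-Delta)^{d/2} u = |u|^p.

    Returns the critical exponent 1 + d/N, the exponent r0 = N(p-1)/d,
    the borderline singularity exponent d/(p-1), and a regime label.
    """
    p_crit = 1 + Fraction(d, N)
    r0 = Fraction(N) * (p - 1) / d
    sing = Fraction(d) / (p - 1)
    if r0 < 1:
        label = 'subcritical: solvable iff phi is uniformly locally integrable'
    elif r0 == 1:
        label = 'critical: logarithmic correction to the |x|^{-N} singularity'
    else:
        label = 'supercritical: singularities up to |x|^{-d/(p-1)} are admissible'
    return p_crit, r0, sing, label

# heat equation (d = 2) in N = 3 dimensions with p = 2: supercritical, since 2 > 5/3
p_crit, r0, sing, label = classify_S(3, 2, Fraction(2))
```

The three branches correspond to cases (B), (D), and (C) below, respectively.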
Furthermore, we apply our main results to the Cauchy problem for some concrete nonlinear parabolic equations of form~\eqref{eq:1.1} and obtain rather sharp sufficient conditions for the existence of solutions to the Cauchy problem. (See Section~7.) \vspace{3pt} We introduce some notation and formulate the notion of solution to integral equation~(I). For any $x\in{\mathbb R}^N$ and $\sigma>0$, let $B(x,\sigma):=\{y\in{\mathbb R}^N\,:\,|y-x|<\sigma\}$ and denote by $|B(x,\sigma)|$ the volume of the ball $B(x,\sigma)$. For any multi-index $\alpha=(\alpha_1,\cdots,\alpha_N)\in(\mathbb N\cup\{0\})^N$, we write $$ |\alpha|:=\sum_{i=1}^N\alpha_i\quad\mbox{and}\quad \partial_x^\alpha:=\frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1}\cdots\partial x_N^{\alpha_N}}. $$ For any $1\le r\le\infty$, we set $L^r:=L^r({\mathbb R}^N)$ and we denote by $L^r_{{\rm uloc}}$ the uniformly local $L^r$ space, that is, $f\in L^r_{{\rm uloc}}$ if and only if $$ \sup_{x\in{\mathbb R}^N}\|f\|_{L^r(B(x,1))}<\infty. $$ Furthermore, for any $1\le q\le r<\infty$, we say that a function $f\in L^q_{{\rm uloc}}$ belongs to the Morrey space ${\mathcal M}_{r,q}$ if $$ \|f\|_{{\mathcal M}_{r,q}}:=\sup_{x\in{\mathbb R}^N}\sup_{\sigma>0}\, \sigma^{\frac{N}{r}}\left(\,\Xint-_{B(x,\sigma)}|f(y)|^q\,dy\right)^{\frac{1}{q}}<\infty, $$ where $$ \Xint-_{B(x,\sigma)} \, f(y)\,dy:=\frac{1}{|B(x,\sigma)|}\int_{B(x,\sigma)} f(y)\,dy. $$ Set $$ D_m:=m+1\quad\mbox{if}\quad N=1,\qquad D_m:=(N^{m+1}-1)/(N-1)\quad\mbox{if}\quad N\ge 2. $$ \begin{definition} \label{Definition:1.1} Assume condition~{\rm (G)}. Let $F\in C({\mathbb R}^N\times[0,\infty)\times{\mathbb R}^{D_m})$ and $0<T\le\infty$. We say that $u$ is a solution to integral equation~{\rm (I)} in ${\mathbb R}^N\times [0,T)$ if $u\in BC^{m;0}({\mathbb R}^N\times(0,T))$, that is, $$ \nabla^j_x u\in BC({\mathbb R}^N\times(\tau,T)),\quad j\in\{0,\dots,m\}, $$ for $\tau\in(0,T)$ and $u$ satisfies~{\rm (I)} for $(x,t)\in{\mathbb R}^N\times(0,T)$.
\end{definition} We are ready to state our main results. Theorem~\ref{Theorem:1.1} is a modification of \cite{IKO}*{Theorem~4.1} and it is crucial in our study. \begin{theorem} \label{Theorem:1.1} Let $\ell$, $m\in\{0,1,\dots\}$ and let $G$ be the integral kernel satisfying condition~{\rm (G)} for some $L>0$, $d>0$, and $T_*\in(0,\infty]$. Let $0<\theta<2$ be such that $\theta\le \min\{d,L\}$ and let $P_\theta=P_\theta(x,t)$ be the fundamental solution to the fractional heat equation \begin{equation} \label{eq:1.2} \partial_tu+(-\Delta)^{\frac{\theta}{2}}u=0,\quad x\in{\mathbb R}^N,\,\,t>0. \end{equation} Set \begin{equation} \label{eq:1.3} K_\theta(x,t):=P_\theta\left(x,t^{\frac{\theta}{d}}\right),\quad x\in{\mathbb R}^N,\,\,t>0. \end{equation} \begin{itemize} \item[{\rm (a)}] For any $j\in\{0,\dots,\ell+m\}$, there exists $c_j>0$ such that \begin{equation} \label{eq:1.4} |\nabla^j_x G(x,y,t)|\le c_j t^{-\frac{j}{d}}K_\theta(x-y,t),\quad x,y\in{\mathbb R}^N,\,\,t\in(0,T_*). \end{equation} \item[{\rm (b)}] There exists $C_*>0$ such that $$ \int_{{\mathbb R}^N}K_\theta(x-y,t-s)K_\theta(y,s)\,dy\le C_*K_\theta(x,t), \quad x\in{\mathbb R}^N,\,\,0<s<t.
$$ \end{itemize} \end{theorem} On the basis of Theorem~\ref{Theorem:1.1}, we study the existence of solutions to integral equation~(I) under the following structure condition~($\mbox{F}_n$) for some $n\in\{0,\dots,m\}$: \begin{itemize} \item[($\mbox{F}_n$)] \begin{itemize} \item[(a)] $F$ is a continuous function in ${\mathbb R}^N\times[0,\infty)\times{\mathbb R}^{D_m}$; \item[(b)] There exist $J\subset\{n,\dots,m\}$, ${\bf p}:=\{p_j\}_{j\in J}\subset (0,\infty)$, and $A>-1$ such that $|{\bf p}|:=\displaystyle{\sum_{j\in J}p_j>1}$ and $$ |F(x,t,z_0,z_1,\dots,z_m)|\le t^A\prod_{j\in J}|z_j|^{p_j} $$ for $(x,t)\in{\mathbb R}^N\times[0,\infty)$ and $z_j\in{\mathbb R}^{N^j}$, where $j\in\{0,\dots,m\}$; \item[(c)] Let $\langle {\bf p}\rangle_n:=n+\displaystyle{\sum_{j\in J}} (j-n)p_j$ satisfy $$ d(1+A)\ge\langle {\bf p}\rangle_n+\ell\quad\mbox{if}\quad n+\ell>0, \quad d(1+A)>\langle {\bf p}\rangle_0\quad\mbox{if}\quad n+\ell=0. $$ \end{itemize} \end{itemize} Set \begin{equation} \label{eq:1.5} r_n:= \left\{ \begin{array}{ll} \displaystyle{\frac{N(|{\bf p}|-1)}{d(1+A)-\langle{\bf p}\rangle_n-\ell}} & \mbox{if}\quad d(1+A)>\langle {\bf p}\rangle_n+\ell,\vspace{5pt}\\ \infty & \mbox{if}\quad d(1+A)=\langle {\bf p}\rangle_n+\ell. \end{array} \right. \end{equation} Under structure condition~($\mbox{F}_n$), we consider the following four cases: \begin{equation*} \begin{array}{ll} {\rm (A)}\quad 0<n+\ell<d(1+A); \qquad& {\rm (B)}\quad\mbox{$n=\ell=0$ and $r_0<1$};\vspace{7pt}\\ {\rm (C)}\quad\mbox{$n=\ell=0$ and $r_0>1$};\,\,\, & {\rm (D)}\quad\mbox{$n=\ell=0$ and $r_0=1$}, \end{array} \end{equation*} and state our sufficient conditions for the existence of solutions to integral equation~(I). Our sufficient conditions are represented in the spirit of Morrey spaces and their generalizations. \begin{theorem} \label{Theorem:1.2} Assume conditions~{\rm (G)} and {\rm ($\mbox{F}_n$)} for some $n\in\{0,\dots,m\}$. Consider case~{\rm (A)}. Let $\phi\in L^1_{{\rm uloc}}$.
Then there exists $\gamma>0$ such that, if \begin{equation} \label{eq:1.6} \sup_{x\in{\mathbb R}^N}\,\sup_{0<\sigma<T^{\frac{1}{d}}}\, \sigma^{\frac{N}{r_n}}\,\Xint-_{B(x,\sigma)}|\nabla^n \phi(y)|\,dy\le\gamma \end{equation} for some $T\in(0,T_*]$, integral equation~{\rm (I)} possesses a solution $u$ in ${\mathbb R}^N\times[0,T)$ such that \begin{equation} \label{eq:1.7} |\nabla^j u(x,t)| \le \left\{ \begin{array}{ll} CT^{-\frac{N}{d}\left(\frac{1}{r_n}-1\right)}t^{-\frac{N}{d}-\frac{j-n}{d}} & \mbox{if}\quad r_n<1,\vspace{5pt}\\ Ct^{-\frac{N}{dr_n}-\frac{j-n}{d}} & \mbox{if}\quad r_n\ge 1, \end{array} \right. \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,T)$ and $j\in\{n,\dots,m\}$. Here $C$ is a positive constant independent of~$T$. \end{theorem} As a corollary of Theorem~\ref{Theorem:1.2}, we have: \begin{corollary} \label{Corollary:1.1} Assume conditions~{\rm (G)} with $T_*=\infty$ and {\rm ($\mbox{F}_n$)} for some $n\in\{0,\dots,m\}$. Consider case~{\rm (A)} and assume $r_n\ge 1$. Then there exists $\gamma>0$ such that, if $$ \|\nabla^n\phi\|_{{\mathcal M}_{r_n,1}}\le\gamma, $$ integral equation~{\rm (I)} possesses a global-in-time solution~$u$ satisfying $$ |\nabla^j u(x,t)| \le Ct^{-\frac{N}{dr_n}-\frac{j-n}{d}} $$ for $(x,t)\in{\mathbb R}^N\times(0,\infty)$ and $j\in\{n,\dots,m\}$, where $C$ is a positive constant. \end{corollary} In the following two theorems we treat cases (B) and (C). \begin{theorem} \label{Theorem:1.3} Assume conditions~{\rm (G)} and {\rm ($\mbox{F}_0$)}. Consider case {\rm (B)}. 
Then there exists $\gamma>0$ such that, if \begin{equation} \label{eq:1.8} \sup_{x\in{\mathbb R}^N}\Xint-_{B(x,T^{\frac{1}{d}})}|\phi(y)|\,dy\le\gamma T^{-\frac{N}{dr_0}} \end{equation} for some $T\in(0,\infty)$ with $T\le T_*$, integral equation~{\rm (I)} possesses a solution~$u$ in ${\mathbb R}^N\times[0,T)$ such that $$ |\nabla^j u(x,t)|\le Ct^{-\frac{N}{d}-\frac{j}{d}} $$ for $(x,t)\in{\mathbb R}^N\times(0,T)$ and $j\in\{0,\dots,m\}$, where $C$ is a positive constant independent of~$T$. \end{theorem} \begin{theorem} \label{Theorem:1.4} Assume conditions~{\rm (G)} and {\rm ($\mbox{F}_0$)}. Consider case {\rm (C)}. Then, for any $q>1$, there exists $\gamma>0$ such that, if \begin{equation} \label{eq:1.9} \sup_{x\in{\mathbb R}^N}\,\sup_{0<\sigma<T^{\frac{1}{d}}}\, \sigma^{\frac{N}{r_0}}\,\biggr(\,\Xint-_{B(x,\sigma)}|\phi(y)|^q\,dy\biggr)^{\frac{1}{q}}\le\gamma \end{equation} for some $T\in(0,T_*]$, integral equation~{\rm (I)} possesses a solution~$u$ in ${\mathbb R}^N\times[0,T)$ such that \begin{equation} \label{eq:1.10} |\nabla^j u(x,t)|\le Ct^{-\frac{N}{dr_0}-\frac{j}{d}} \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,T)$ and $j\in\{0,\dots,m\}$, where $C$ is a positive constant independent of~$T$. \end{theorem} As a corollary of Theorem~\ref{Theorem:1.4}, we have: \begin{corollary} \label{Corollary:1.2} Assume conditions~{\rm (G)} with $T_*=\infty$ and {\rm ($\mbox{F}_0$)}. Consider case {\rm (C)}. Then, for any $q>1$, there exists $\gamma>0$ such that, if $$ \|\phi\|_{{\mathcal M}_{r_0,q}}\le\gamma, $$ integral equation~{\rm (I)} possesses a global-in-time solution~$u$ satisfying \eqref{eq:1.10} in ${\mathbb R}^N\times(0,\infty)$. \end{corollary} \begin{remark} \label{Remark:1.1} Corollary~{\rm\ref{Corollary:1.2}} is a generalization of \cite{IKK01}*{Theorem~1.1}. Indeed, assume condition~{\rm ($\mbox{F}_0$)} and consider case~{\rm (C)}.
Under stronger assumptions than condition~{\rm (G)} with $T_*=\infty$, the existence of global-in-time solutions to integral equation~{\rm (I)} was proved in~\cite{IKK01}*{Theorem~1.1} for the case when $\phi\in W^{1,m}({\mathbb R}^N)$ and $\|\phi\|_{L^{r_0,\infty}}$ is small enough. Here $L^{r_0,\infty}$ is the weak $L^{r_0}$ space in ${\mathbb R}^N$. On the other hand, by Corollary~{\rm\ref{Corollary:1.2}} we easily obtain the existence of global-in-time solutions to integral equation~{\rm (I)} provided that $\|\phi\|_{L^{r_0,\infty}}$ is small enough, since $L^{r_0,\infty}\subset{{\mathcal M}_{r_0,q}}$ for $1\le q<r_0$ {\rm ({\it see} \cite{KY}*{Lemma~1.7})}. \end{remark} We state our result in case (D). \begin{theorem} \label{Theorem:1.5} Assume conditions~{\rm (G)} and {\rm ($\mbox{F}_0$)}. Consider case {\rm (D)}. Let $\beta>0$. Set \begin{equation} \label{eq:1.11} \Phi(s):=s[\log(e+s)]^\beta, \quad \rho(s):=s^{-N}\left[\log\left(e+s^{-1}\right)\right]^{-\frac{N}{d(1+A)-\langle{\bf p}\rangle_0}}. \end{equation} Then there exists $\gamma>0$ such that, if \begin{equation} \label{eq:1.12} \sup_{x\in{\mathbb R}^N}\Phi^{-1}\biggr(\,\Xint-_{B(x,\sigma)}\Phi(T^{\frac{N}{d}}|\phi(y)|)\,dy\biggr)\le\gamma \rho(\sigma T^{-\frac{1}{d}}), \quad 0<\sigma<T^{\frac{1}{d}}, \end{equation} for some $T\in(0,\infty)$ with $T\le T_*$, integral equation~{\rm (I)} possesses a solution~$u$ in ${\mathbb R}^N\times[0,T)$ such that \begin{equation} \label{eq:1.13} |\nabla^j u(x,t)| \le Ct^{-\frac{N}{d}-\frac{j}{d}}\biggr|\log\left(\frac{t}{2T}\right)\biggr|^{-\frac{N}{d}},\quad j\in\{0,\dots,m\}, \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,T)$, where $C$ is a positive constant independent of $T$. \end{theorem} As a corollary of Theorem~\ref{Theorem:1.5}, we have: \begin{corollary} \label{Corollary:1.3} Assume conditions~{\rm (G)} and {\rm ($\mbox{F}_0$)}. Consider case {\rm (D)}. 
Then there exists $\gamma>0$ such that, if $$ \sup_{x\in{\mathbb R}^N}\int_{B(x,\sigma)} |\phi(y)|\left[\log\left(e+T^{\frac{N}{d}}|\phi(y)|\right)\right] ^{\frac{N}{d(1+A)-\langle{\bf p}\rangle_0}}\,dy \le\gamma , \quad 0<\sigma<T^{\frac{1}{d}}, $$ for some $T\in(0,\infty)$ with $T\le T_*$, integral equation~{\rm (I)} possesses a solution~$u$ in ${\mathbb R}^N\times[0,T)$ satisfying \eqref{eq:1.13} in ${\mathbb R}^N\times(0,T)$. \end{corollary} Here we mention the strategy for the proofs of our sufficient conditions. As the first step we construct approximate solutions~$\{u_\epsilon\}_{\epsilon>0}$ to integral equation~(I). Next, thanks to the integral kernel $K_\theta$ given in Theorem~\ref{Theorem:1.1}, we develop the arguments in \cite{IKO} to find supersolutions to integral equation~(I). This enables us to obtain uniform estimates of approximate solutions~$\{u_\epsilon\}_{\epsilon>0}$. Then, applying the parabolic regularity theorems and the Arzel\'a--Ascoli theorem, we find a solution to integral equation~(I), and the proofs of our sufficient conditions are complete. The rest of this paper is organized as follows: In Section~2 we recall some properties of fundamental solutions to fractional heat equations and prove Theorem~\ref{Theorem:1.1}. In Section~3 we construct approximate solutions to integral equation~(I) and obtain some decay estimates of the approximate solutions. In Sections~4, 5, and 6 we find supersolutions to integral equation~(I) and prove our main theorems and their corollaries. In Section~7 we apply our main theorems to the Cauchy problem for some concrete nonlinear parabolic equations and we show the sharpness of our sufficient conditions. \section{Proof of Theorem~\ref{Theorem:1.1}} In this section we prove Theorem~\ref{Theorem:1.1}. In what follows, we denote by $C$ generic positive constants, which may take different values even within the same line.
We recall some properties of the fundamental solution~$P_\theta$ to the fractional heat equation~\eqref{eq:1.2}, where $0<\theta<2$. The fundamental solution~$P_\theta$ is represented by $$ P_\theta(x,t)=(2\pi)^{-\frac{N}{2}}\int_{{\mathbb R}^N}e^{ix\cdot \xi}e^{-t|\xi|^\theta}\,d\xi. $$ Then $P_\theta=P_\theta(x,t)$ is a positive, smooth, and radially symmetric function in ${\mathbb R}^N\times(0,\infty)$ and satisfies the following properties (see \cites{BJ,BK}): \begin{align} \label{eq:2.1} & P_\theta(x,t)=t^{-\frac{N}{\theta}}P_\theta(t^{-\frac{1}{\theta}}x,1),\\ \label{eq:2.2} &|(\nabla^j P_\theta)(x,t)|\le C_j t^{-\frac{N+j}{\theta}} \big(1+t^{-\frac{1}{\theta}}|x|\big)^{-N-\theta-j},\\ \label{eq:2.3} & P_\theta(x,t)\ge C^{-1} t^{-\frac{N}{\theta}} \big(1+t^{-\frac{1}{\theta}}|x|\big)^{-N-\theta},\\ \label{eq:2.4} & \int_{\mathbb R^N}P_\theta(x,t)\,dx=1, \end{align} for $x\in{\mathbb R}^N$, $t>0$, and $j=0,1,2,\dots$. Here $C_j$ is a positive constant depending on $j$. Furthermore, \begin{equation} \label{eq:2.5} P_\theta(x,t)=\int_{{\mathbb R}^N}P_\theta(x-y,t-s)\,P_\theta(y,s)\,dy, \quad x\in{\mathbb R}^N,\,\,0<s<t. \end{equation} For any $\phi\in L^1_{\rm uloc}$, we set \begin{equation} \label{eq:2.6} [S_\theta(t)\phi](x):=\int_{{\mathbb R}^N}P_\theta(x-y,t)\phi(y)\,dy. \end{equation} Then, for any $j=0,1,2,\dots$, by the Young inequality and \eqref{eq:2.2} we find $C_j'>0$ such that $$ \|\nabla^j S_\theta(t)\phi\|_{L^q}\le C_j' t^{-\frac{N}{\theta}(\frac{1}{p}-\frac{1}{q})-\frac{j}{\theta}}\|\phi\|_{L^p},\quad t>0, $$ for $\phi\in L^p$ and $1\le p\le q\le\infty$. (See e.g. \cite{IKK01}*{Section~2}.) Furthermore, we recall the following lemma on the decay of $\|S_\theta(t)\phi\|_{L^\infty}$ (see \cite{HI01}*{Lemma~2.1}). \begin{lemma} \label{Lemma:2.1} Let $0<\theta< 2$.
Then there exists $C=C(N,\theta)>0$ such that $$ \|S_\theta(t)\mu\|_{L^\infty} \le Ct^{-\frac{N}{\theta}}\sup_{x\in{\mathbb R}^N}|\mu(B(x,t^{\frac{1}{\theta}}))|, \quad t>0, $$ for Radon measures $\mu$ in ${\mathbb R}^N$. \end{lemma} We prove Theorem~\ref{Theorem:1.1}. \vspace{3pt} \newline {\bf Proof of Theorem~\ref{Theorem:1.1}.} For any $j\in\{0,\dots,\ell+m\}$, it follows from condition~(G), \eqref{eq:1.3}, and \eqref{eq:2.1} that \begin{equation*} \begin{split} |\nabla_x^j G(x,y,t)| & \le Ct^{-\frac{N+j}{d}}\left(1+t^{-\frac{1}{d}}|x-y|\right)^{-N-L-j}\\ & \le Ct^{-\frac{N+j}{d}}\left(1+t^{-\frac{1}{d}}|x-y|\right)^{-N-\theta}\\ & \le Ct^{-\frac{N+j}{d}}P_\theta\left(t^{-\frac{1}{d}}(x-y),1\right) =Ct^{-\frac{j}{d}}K_\theta(x-y,t) \end{split} \end{equation*} for $x, y\in{\mathbb R}^N$ and $0<t<T_*$. Here we used the assumption that $\theta\le L$. This implies assertion~(a). On the other hand, since $\theta\le d$, we have $$ t^{\frac{\theta}{d}}=(t-s+s)^{\frac{\theta}{d}} \le(t-s)^{\frac{\theta}{d}}+s^{\frac{\theta}{d}}\le 2t^{\frac{\theta}{d}} $$ for $0<s<t$. Then, by \eqref{eq:1.3} and \eqref{eq:2.5} we have \begin{equation*} \begin{split} & \int_{{\mathbb R}^N}K_\theta(x-y,t-s)K_\theta(y,s)\,dy\\ & =\int_{{\mathbb R}^N}P_\theta\left(x-y,(t-s)^{\frac{\theta}{d}}\right) P_\theta\left(y,s^{\frac{\theta}{d}}\right)\,dy \\ & =P_\theta\left(x,(t-s)^{\frac{\theta}{d}}+s^{\frac{\theta}{d}}\right)\\ & =P_\theta\left(x,\kappa_{t,s}t^{\frac{\theta}{d}}\right), \quad\mbox{where}\quad \kappa_{t,s}:=\frac{{(t-s)^{\frac{\theta}{d}}+s^{\frac{\theta}{d}}}}{t^{\frac{\theta}{d}}}\in[1,2], \end{split} \end{equation*} for $x\in{\mathbb R}^N$ and $0<s<t$. 
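For $\theta=1$ and $N=1$ the kernel $P_1$ is the explicit Cauchy (Poisson) kernel $P_1(x,t)=\pi^{-1}t/(t^2+x^2)$, so the identity just derived can be checked by direct quadrature. A numerical sketch (our illustration only, with the arbitrary choices $d=4$, $t=1$, $s=0.3$):

```python
import numpy as np

def P1(x, t):
    """Cauchy kernel: fundamental solution of u_t + (-Delta)^{1/2} u = 0, N = 1."""
    return t / (np.pi * (t ** 2 + x ** 2))

d = 4.0
K = lambda x, t: P1(x, t ** (1.0 / d))      # K_theta with theta = 1

t, s = 1.0, 0.3
y = np.linspace(-400.0, 400.0, 800001)      # wide grid: the kernel has heavy tails
dy = y[1] - y[0]
xs = np.linspace(-5.0, 5.0, 41)

conv = np.array([np.sum(K(x - y, t - s) * K(y, s)) * dy for x in xs])
kappa = ((t - s) ** (1 / d) + s ** (1 / d)) / t ** (1 / d)   # lies in [1, 2]
exact = P1(xs, kappa * t ** (1 / d))        # Cauchy kernels convolve exactly
ratio = conv / K(xs, t)                     # stays bounded: Theorem 1.1(b)
```

The quadrature reproduces $P_\theta(x,\kappa_{t,s}t^{\theta/d})$, and the ratio against $K_\theta(x,t)$ stays between $1/\kappa_{t,s}$ and $\kappa_{t,s}\le 2$.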
Furthermore, it follows from \eqref{eq:2.2} and \eqref{eq:2.3} that \begin{equation*} \begin{split} P_\theta\left(x,\kappa_{t,s} t^{\frac{\theta}{d}}\right) & \le C\left(\kappa_{t,s}t^{\frac{\theta}{d}}\right)^{-\frac{N}{\theta}}\left(1+\left(\kappa_{t,s} t^{\frac{\theta}{d}}\right)^{-\frac{1}{\theta}}|x|\right)^{-N-\theta}\\ & \le Ct^{-\frac{N}{d}}\left(1+t^{-\frac{1}{d}}|x|\right)^{-N-\theta} \le CP_\theta\left(x,t^{\frac{\theta}{d}}\right)=CK_\theta(x,t) \end{split} \end{equation*} for $x\in{\mathbb R}^N$ and $0<s<t$, where we used $\kappa_{t,s}\in[1,2]$. These imply assertion~(b). Thus Theorem~\ref{Theorem:1.1} follows. $\Box$\vspace{5pt} Similarly to \eqref{eq:2.6}, we set $$ [S(t)\phi](x):=\int_{{\mathbb R}^N}G(x,y,t)\phi(y)\,dy, \quad [S_{K_\theta}(t)\phi](x):=\int_{{\mathbb R}^N}K_\theta(x-y,t)\phi(y)\,dy, $$ for $\phi\in L^1_{\rm uloc}$. Then we observe from Theorem~\ref{Theorem:1.1} that \begin{align} \label{eq:2.7} & |[\nabla^j S(t)\phi](x)|\le c_jt^{-\frac{j}{d}}[S_{K_\theta}(t)|\phi|](x), \quad t\in(0,T_*), \quad j\in\{0,\dots, \ell+m\},\\ \label{eq:2.8} & [S_{K_\theta}(t-s)[S_{K_\theta}(s)\phi]](x)\le C_*[S_{K_\theta}(t)\phi](x), \quad 0<s<t, \end{align} for $x\in{\mathbb R}^N$. Furthermore, it follows from Lemma~\ref{Lemma:2.1} with \eqref{eq:1.3} that \begin{equation} \label{eq:2.9} \|S_{K_\theta}(t)\phi\|_{L^\infty} \le C\sup_{x\in{\mathbb R}^N}\Xint-_{B(x,t^{\frac{1}{d}})}|\phi(y)|\,dy, \quad t>0. \end{equation} These properties are crucial in the proof of our sufficient conditions for the existence of solutions to integral equation~(I). \section{Approximate solutions} Let $\ell$, $m\in\{0,1,\dots\}$. Assume condition~($\mbox{F}_n$) for some $n\in\{0,\dots,m\}$. We construct approximate solutions to integral equation~(I). For $\epsilon>0$, let $$ F_\epsilon(x,t,z) := \left\{ \begin{array}{ll} -\epsilon^{-1}\quad & \mbox{if}\quad F(x,|t|,z)<-\epsilon^{-1},\vspace{3pt}\\ F(x,|t|,z)\quad & \mbox{if}\quad -\epsilon^{-1}\le F(x,|t|,z)\le\epsilon^{-1},\vspace{3pt}\\ \epsilon^{-1}\quad & \mbox{if}\quad F(x,|t|,z)>\epsilon^{-1},\vspace{3pt} \end{array} \right.
$$ for $(x,t)\in{\mathbb R}^{N+1}$ and $z=(z_1,\dots,z_m)\in {\mathbb R}^{D_m}$. Let $\rho\in C_0^\infty({\mathbb R}^{N+1+D_m})$ be such that $$ \rho\ge 0\quad\mbox{in}\quad {\mathbb R}^{N+1+D_m}, \quad \rho=0\quad\mbox{if}\quad |(x,t,z)|\ge 1, \quad \int_{{\mathbb R}^{N+1+D_m}}\rho(x,t,z)\,dx\,dt\,dz=1. $$ Set $$ \tilde{F}_\epsilon(x,t,z):=\epsilon^{-N-1-D_m} \int_{{\mathbb R}^{N+1+D_m}} \rho\left(\epsilon(x-y),\epsilon(t-s),\epsilon(z-\xi)\right)F_\epsilon(y,s,\xi)\,dy\,ds\,d\xi $$ for $(x,t,z)\in{\mathbb R}^{N+1+D_m}$. Then we easily see that \begin{equation} \label{eq:3.1} \tilde{F}_\epsilon\in BC^m({\mathbb R}^{N+1+D_m}), \quad \|\tilde{F}_\epsilon\|_{L^\infty({\mathbb R}^{N+1+D_m})}\le\epsilon^{-1}, \quad \|\nabla_z\tilde{F}_\epsilon\|_{L^\infty({\mathbb R}^{N+1+D_m})}\le C\epsilon^{-2}. \end{equation} Furthermore, it follows from condition~($\mbox{F}_n$) that \begin{align} \notag & \lim_{\epsilon\to +0}\tilde{F}_\epsilon(x,t,z)=F(x,|t|,z),\\ \label{eq:3.2} & |\tilde{F}_\epsilon(x,t,z)| \le |t|^A\prod_{j\in J}\left(|z_j|+\epsilon\right)^{p_j} \le C|t|^A\prod_{j\in J}\max\left\{|z_j|,\epsilon\right\}^{p_j}, \end{align} for $(x,t,z)\in{\mathbb R}^{N+1+D_m}$. In this section we prove the following lemma. \begin{lemma} \label{Lemma:3.1} Assume conditions~{\rm (G)} and {\rm($\mbox{F}_n$)} for some $n\in\{0,\dots,m\}$. Let $\epsilon>0$ and $\tilde{F}_\epsilon$ be as in the above. Assume that $\phi$ is a measurable function in ${\mathbb R}^N$ such that $$ \sup_{x\in{\mathbb R}^N}\int_{B(x,1)}|\phi(y)|\,dy<\infty. $$ \begin{itemize} \item[{\rm (a)}] There exists $u^\epsilon\in C^{m;0}({\mathbb R}^N\times(0,T_*))$ such that \begin{equation} \label{eq:3.3} u^\epsilon(x,t)=[S(t)\phi](x) +\sum_{|\alpha|=\ell}a_\alpha\int_0^t\partial_x^\alpha \left[S(t-s)\tilde{F}_\epsilon(s,u^\epsilon(s),\dots,\nabla^m u^\epsilon(s))\right](x)\, ds \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,T_*)$. Here $T_*$ is as in condition~{\rm (G)}. 
\item[{\rm (b)}] There exists $c_*>0$ with the following property: If there exist $T\in(0,T_*]$ and a continuous function $U^\epsilon$ in ${\mathbb R}^N\times(0,T)$ such that \begin{align} \label{eq:3.4} & t^{-\frac{j-n}{d}}U^\epsilon(x,t)\ge\epsilon,\\ \label{eq:3.5} & c_*[S_{K_\theta}(t)|\nabla^n\phi|](x) \le \frac{1}{2}U^\epsilon(x,t),\\ \label{eq:3.6} & c_*\int_0^t(t-s)^{-\frac{\ell+j}{d}}s^{A-\frac{\langle {\bf p}\rangle_n-n}{d}} \left[S_{K_\theta}(t-s)U^\epsilon(s)^{|{\bf p}|}\right](x)\,ds \le\frac{1}{2}t^{-\frac{j-n}{d}}U^\epsilon(x,t), \end{align} for $(x,t)\in{\mathbb R}^N\times(0,T)$ and $j\in\{n,\dots,m\}$, then \begin{equation} \label{eq:3.7} |\nabla^j u^\epsilon(x,t)|\le t^{-\frac{j-n}{d}}U^\epsilon(x,t) \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,T)$ and $j\in\{n,\dots,m\}$. \end{itemize} \end{lemma} {\bf Proof.} Set $u^\epsilon_0(x,t):=[S(t)\phi](x)$ for $(x,t)\in{\mathbb R}^N\times(0,T_*)$. It follows from \eqref{eq:2.7} and \eqref{eq:2.9} that \begin{equation} \label{eq:3.8} \begin{split} &\qquad u^\epsilon_0\in C^{m;0}({\mathbb R}^N\times(0,T_*)), \\ & \sup_{(x,t)\in{\mathbb R}^N\times[\tau,T_*)}|\nabla^j u^\epsilon_0(x,t)|<\infty, \quad \quad\tau\in(0,T_*),\quad j\in\{0,\dots,\ell+m\}. \end{split} \end{equation} Since $\ell+m<d$, by \eqref{eq:2.7} we can define $\{u^\epsilon_k\}\subset BC^{m;0}({\mathbb R}^N\times(0,T_*))$ inductively as follows: \begin{equation} \label{eq:3.9} u^\epsilon_{k+1}(x,t):=u^\epsilon_0(x,t) +\sum_{|\alpha|=\ell}a_\alpha\int_0^t\partial_x^\alpha \left[S(t-s)\tilde{F}_\epsilon(s,u^\epsilon_k(s),\dots,\nabla^m u^\epsilon_k(s))\right](x)\, ds \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,T_*)$ and $k=0,1,2,\dots$. 
Then it follows from condition~(G)~(c) that \begin{equation} \label{eq:3.10} u^\epsilon_{k+1}(x,t)=u^\epsilon_{k+1}(x,\tau) +\sum_{|\alpha|=\ell}a_\alpha\int_\tau^t\partial_x^\alpha \left[S(t-s)\tilde{F}_\epsilon(s,u^\epsilon_k(s),\dots,\nabla^m u^\epsilon_k(s))\right](x)\, ds \end{equation} for $(x,t)\in{\mathbb R}^N\times(\tau,T_*)$, $\tau\in(0,T_*)$, and $k=0,1,2,\dots$. Let $\delta\in(0,\min\{1,T_*\})$ be small enough. Set $$ L_k:=\sup_{j\in\{0,\dots,m\}}\sup_{(x,t)\in{\mathbb R}^N\times(0,\delta]}|\nabla^j u^\epsilon_{k+1}(x,t)-\nabla^j u^\epsilon_k(x,t)|. $$ Thanks to the mean value theorem, by \eqref{eq:1.4}, \eqref{eq:3.1}, and \eqref{eq:3.9} we obtain \begin{equation*} \begin{split} & |\nabla^j u_{k+1}^\epsilon(x,t)-\nabla^j u_k^\epsilon(x,t)|\\ & \le CL_{k-1}\int_0^t\int_{{\mathbb R}^N}|\nabla^{\ell+j}G(x,y,t-s)| \|\nabla_z \tilde F_\epsilon\|_{L^\infty({\mathbb R}^{N+1+D_m})}\,dy\,ds\\ & \le C\epsilon^{-2}L_{k-1}\int_0^t\int_{{\mathbb R}^N}(t-s)^{-\frac{\ell+j}{d}}K_\theta(x-y,t-s)\,dy\,ds\\ & \le C\epsilon^{-2}L_{k-1}\delta^{1-\frac{\ell+j}{d}} \end{split} \end{equation*} for $(x,t)\in{\mathbb R}^N\times(0,\delta]$, $j\in\{0,\dots,m\}$, and $k=1,2,\dots$. Then, since $\delta\le 1$, taking small enough $\delta>0$ if necessary, we obtain $$ L_k\le C\epsilon^{-2}\delta^{1-\frac{\ell+m}{d}}L_{k-1}\le\frac{1}{2}L_{k-1},\quad k=1,2,\dots. $$ This together with \eqref{eq:3.8} implies that $\{u^\epsilon_k\}_{k=0}^\infty$ is a Cauchy sequence in $BC^{m;0}({\mathbb R}^N\times(\tau,\delta])$ for $\tau\in(0,\delta)$. Therefore there exists $u^\epsilon\in C^{m;0}({\mathbb R}^N\times(0,\delta])$ such that \begin{equation} \label{eq:3.11} u^\epsilon\in BC^{m;0}({\mathbb R}^N\times(\tau,\delta]), \quad \lim_{k\to\infty}\sup_{(x,t)\in{\mathbb R}^N\times(\tau,\delta]}|\nabla^j u^\epsilon_k(x,t)-\nabla^j u^\epsilon(x,t)|=0 \end{equation} for $\tau\in(0,\delta)$.
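The contraction mechanism behind the convergence of the iterates $u^\epsilon_k$ can be illustrated in a zero-dimensional toy model (no space variable, semigroup $e^{-t}$, nonlinearity $z^2$; all choices below are illustrative, not the operators of the paper): the sup-distance between consecutive Duhamel iterates decreases geometrically on a short time interval.

```python
import numpy as np

# Toy Picard/Duhamel iteration mimicking (3.9) in zero space dimensions:
#   u_{k+1}(t) = e^{-t} * phi + int_0^t e^{-(t-s)} u_k(s)^2 ds.
# All parameters are illustrative.  On a short interval the sup-distance L_k
# between consecutive iterates contracts, as in the Cauchy-sequence argument.
phi, T, n = 0.5, 0.5, 400
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]

def picard_step(u):
    out = np.empty(n)
    for i in range(n):
        f = np.exp(-(t[i] - t[: i + 1])) * u[: i + 1] ** 2
        integral = np.sum(0.5 * (f[1:] + f[:-1])) * dt if i > 0 else 0.0
        out[i] = np.exp(-t[i]) * phi + integral   # trapezoid-rule Duhamel term
    return out

L, u = [], np.zeros(n)
for _ in range(6):
    u_new = picard_step(u)
    L.append(float(np.max(np.abs(u_new - u))))
    u = u_new
assert all(L[i + 1] < L[i] for i in range(len(L) - 1))   # geometric decay
assert L[-1] < L[0] / 10
```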
Then we apply the Lebesgue dominated convergence theorem to see that $u^\epsilon$ satisfies \eqref{eq:3.3} in ${\mathbb R}^N\times(0,\delta]$. This implies that assertion~(a) holds for $t\in(0,\delta]$. Repeating the above arguments with $\phi$ replaced by $u^\epsilon(\cdot,\delta)$, due to \eqref{eq:3.10}, we see that assertion~(a) holds for $t\in(0,2\delta]\cap(0,T_*)$. Therefore, iterating this argument, we deduce that assertion~(a) holds. We prove assertion~(b). Set $$ c_*:=\biggl(1+\sum_{|\alpha|=\ell}|a_\alpha|\biggr)\max_j c_j, $$ where $c_j$ is as in Theorem~\ref{Theorem:1.1}. Assume \eqref{eq:3.4}, \eqref{eq:3.5}, and \eqref{eq:3.6}. It follows from \eqref{eq:3.5} that \begin{equation} \label{eq:3.12} \begin{split} |\nabla^j u_0^\epsilon(x,t)| & \le \int_{{\mathbb R}^N}|\nabla^{j-n} G(x,y,t)||\nabla^n\phi(y)|\,dy\\ & \le c_{j-n}t^{-\frac{j-n}{d}}\int_{{\mathbb R}^N}K_\theta(x-y,t)|\nabla^n\phi(y)|\,dy \le\frac{1}{2}t^{-\frac{j-n}{d}}U^\epsilon(x,t) \end{split} \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,T)$ and $j\in\{n,\dots,m\}$.
Furthermore, if $$ |\nabla^j u_k^\epsilon(x,t)|\le t^{-\frac{j-n}{d}}U^\epsilon(x,t), \quad(x,t)\in{\mathbb R}^N\times(0,T),\,\,j=n,\dots,m, $$ for some $k\in\{0,1,2,\dots\}$, then, by \eqref{eq:1.4}, \eqref{eq:3.2}, \eqref{eq:3.4}, \eqref{eq:3.6}, \eqref{eq:3.9}, and \eqref{eq:3.12} we obtain \begin{equation*} \begin{split} & |\nabla^j u^\epsilon_{k+1}(x,t)| \le\frac{1}{2}t^{-\frac{j-n}{d}}U^\epsilon(x,t)\\ & \quad +\sum_{|\alpha|=\ell}|a_\alpha|\int_0^t\int_{{\mathbb R}^N}|\nabla^{\ell+j} G(x,y,t-s)||\tilde{F}_\epsilon(y,s,u^\epsilon_k(y,s),\dots,\nabla^m u^\epsilon_k(y,s))|\,dy\,ds\\ & \le\frac{1}{2}t^{-\frac{j-n}{d}}U^\epsilon(x,t)\\ & \quad +c_{\ell+j}\sum_{|\alpha|=\ell}|a_\alpha| \int_0^t\int_{{\mathbb R}^N}(t-s)^{-\frac{\ell+j}{d}}K_\theta(x-y,t-s)s^A\prod_{i\in J} \max\{s^{-\frac{i-n}{d}}U^\epsilon(y,s),\epsilon\}^{p_i}\,dy\,ds\\ & \le\frac{1}{2}t^{-\frac{j-n}{d}}U^\epsilon(x,t) +c_*\int_0^t\int_{{\mathbb R}^N}(t-s)^{-\frac{\ell+j}{d}}K_\theta(x-y,t-s)s^A\prod_{i\in J} \left(s^{-\frac{i-n}{d}}U^\epsilon(y,s)\right)^{p_i}\,dy\,ds\\ & \le\frac{1}{2}t^{-\frac{j-n}{d}}U^\epsilon(x,t) +c_*\int_0^t\int_{{\mathbb R}^N}(t-s)^{-\frac{\ell+j}{d}} s^{A-\frac{\langle {\bf p}\rangle_n-n}{d}}K_\theta(x-y,t-s)U^\epsilon(y,s)^{|{\bf p}|}\,dy\,ds\\ & \le t^{-\frac{j-n}{d}}U^\epsilon(x,t) \end{split} \end{equation*} for $(x,t)\in{\mathbb R}^N\times(0,T)$. By induction, this together with \eqref{eq:3.12} implies that $$ |\nabla^j u^\epsilon_k(x,t)|\le t^{-\frac{j-n}{d}}U^\epsilon(x,t) $$ for $(x,t)\in{\mathbb R}^N\times(0,T)$, $j\in\{n,\dots,m\}$, and $k=0,1,2,\dots$. Then, thanks to \eqref{eq:3.11}, we obtain \eqref{eq:3.7}. Thus assertion~(b) holds, and Lemma~\ref{Lemma:3.1} follows. $\Box$ \section{Proof of Theorem~\ref{Theorem:1.2}} In this section we prove Theorem~\ref{Theorem:1.2} and Corollary~\ref{Corollary:1.1}. \vspace{3pt} \newline {\bf Proof of Theorem~\ref{Theorem:1.2}.} Let $T\in(0,T_*]$ and assume \eqref{eq:1.6}. Let $c_*$ be as in Lemma~\ref{Lemma:3.1}.
Let $i\in\{1,2,\dots\}$ and fix it. Let $T_i:=\min\{T,i\}$ and $\epsilon_i\in(0,1)$ be small enough. Then we find $L_i>0$ such that $$ c_*L_i\min_{j\in\{n,\dots,m\}}T_i^{-\frac{j-n}{d}}=\epsilon_i. $$ Set \begin{equation} \label{eq:4.1} U^i(x,t):=2c_*\left[S_{K_\theta}(t)\left(|\nabla^n\phi|+L_i\right)\right](x) =2c_*\left[S_{K_\theta}(t)|\nabla^n\phi|\right](x)+2c_*L_i. \end{equation} Then we see that \begin{equation} \label{eq:4.2} \begin{split} & \inf_{(x,t)\in{\mathbb R}^N\times(0,T_i)}t^{-\frac{j-n}{d}}U^i(x,t)\ge 2c_*L_i T_i^{-\frac{j-n}{d}}\ge\epsilon_i, \quad j\in\{n,\dots,m\},\\ & c_*[S_{K_\theta}(t)|\nabla^n\phi|](x) \le \frac{1}{2}U^i(x,t),\quad (x,t)\in{\mathbb R}^N\times(0,T_i). \end{split} \end{equation} Furthermore, by \eqref{eq:1.6} we have $$ \sup_{x\in{\mathbb R}^N}\int_{B(x,t^{\frac{1}{d}})}|\nabla^n \phi(y)|\,dy \le\left\{ \begin{array}{ll} \gamma {T_i}^{\frac{N}{d}\left(1-\frac{1}{r_n}\right)} & \quad\mbox{if}\quad r_n<1,\vspace{3pt}\\ \gamma t^{\frac{N}{d}-\frac{N}{dr_n}} & \quad\mbox{if}\quad r_n\ge 1, \end{array} \right. $$ for $0<t<T_i$. Since $\lim_{\epsilon_i\to 0}L_i=0$, taking small enough $\epsilon_i>0$ if necessary, by \eqref{eq:2.9} we see that \begin{equation} \label{eq:4.3} U^i(x,t)\le C\gamma_i t^{-\kappa}+2c_*L_i \le 2\gamma_i Ct^{-\kappa}, \quad (x,t)\in{\mathbb R}^N\times(0,T_i), \end{equation} where \begin{equation} \label{eq:4.4} \gamma_i:= \left\{ \begin{array}{ll} \gamma {T_i}^{\frac{N}{d}\left(1-\frac{1}{r_n}\right)} & \mbox{if}\quad r_n<1,\vspace{3pt}\\ \gamma & \mbox{if}\quad r_n\ge 1, \end{array} \right. \qquad \kappa:= \left\{ \begin{array}{ll} \displaystyle{\frac{N}{d}} & \mbox{if}\quad r_n<1,\vspace{7pt}\\ \displaystyle{\frac{N}{dr_n}} & \mbox{if}\quad r_n\ge 1. \end{array} \right. 
\end{equation} On the other hand, it follows from \eqref{eq:2.8} and \eqref{eq:4.1} that \begin{equation} \label{eq:4.5} [S_{K_\theta}(t-s)U^i(s)](x)\le \tilde c_*U^i(x,t), \quad x\in{\mathbb R}^N,\,\,t>s>0, \end{equation} where $\tilde c_*=2c_*C_*$ and $C_*$ is as in \eqref{eq:2.8}. Combining \eqref{eq:4.3} and \eqref{eq:4.5}, we have \begin{equation} \label{eq:4.6} \begin{split} & c_*\int_0^t\int_{{\mathbb R}^N}(t-s)^{-\frac{\ell+j}{d}}K_\theta(x-y,t-s) s^{A-\frac{\langle {\bf p}\rangle_n-n}{d}}U^i(y,s)^{|{\bf p}|}\,dy\,ds\\ & \le c_*(2\gamma_i C)^{|{\bf p}|-1} \int_0^t(t-s)^{-\frac{\ell+j}{d}} s^{A-\frac{\langle {\bf p}\rangle_n-n}{d}-\kappa(|{\bf p}|-1)} S_{K_\theta}(t-s)U^i(s)\,ds\\ & \le c_*\tilde c_*(2\gamma_i C)^{|{\bf p}|-1}U^i(x,t)\int_0^t(t-s)^{-\frac{\ell+j}{d}} s^{A-\frac{\langle {\bf p}\rangle_n-n}{d}-\kappa(|{\bf p}|-1)}\,ds \end{split} \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,T_i)$. On the other hand, it follows from (A), \eqref{eq:1.5}, and \eqref{eq:4.4} that $$ A-\frac{\langle {\bf p}\rangle_n-n}{d}-\kappa(|{\bf p}|-1) \ge-1+\frac{\ell+n}{d}>-1. $$ Since $\ell+j\le\ell+m<d$, using \eqref{eq:1.5} and \eqref{eq:4.4} again, we see that \begin{equation} \label{eq:4.7} \begin{split} & \int_0^t(t-s)^{-\frac{\ell+j}{d}} s^{A-\frac{\langle {\bf p}\rangle_n-n}{d}-\kappa(|{\bf p}|-1)}\,ds\\ & \le Ct^{-\frac{\ell+j}{d}+1+A-\frac{\langle {\bf p}\rangle_n-n}{d}-\kappa(|{\bf p}|-1)} \le \left\{ \begin{array}{ll} CT_i^{\frac{N(|{\bf p|}-1)}{d}\left(\frac{1}{r_n}-1\right)}t^{-\frac{j-n}{d}} & \mbox{if}\quad r_n<1,\vspace{5pt}\\ Ct^{-\frac{j-n}{d}} & \mbox{if}\quad r_n\ge 1, \end{array} \right.\end{split} \end{equation} for $0<t<T_i$. 
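The time integral in \eqref{eq:4.7} is of Beta type: for $0\le a<1$ and $b>-1$, the substitution $s=tr$ gives $\int_0^t(t-s)^{-a}s^b\,ds=B(1-a,1+b)\,t^{1-a+b}$, which is the scaling used in the first inequality. A quick numerical check with illustrative exponents:

```python
import math
import numpy as np

# Check of the scaling identity behind (4.7):
#   int_0^t (t-s)^(-a) s^b ds = B(1-a, 1+b) * t^(1-a+b),  0 <= a < 1, b > -1.
# The midpoint rule avoids the integrable endpoint singularities at s=0, s=t.
def time_convolution(t, a, b, n=100_000):
    s = (np.arange(n) + 0.5) * (t / n)
    return float(np.sum((t - s) ** (-a) * s ** b) * (t / n))

a, b, t = 0.4, -0.3, 2.0            # illustrative exponents
beta_fn = math.gamma(1 - a) * math.gamma(1 + b) / math.gamma(2 - a + b)
exact = beta_fn * t ** (1 - a + b)
assert abs(time_convolution(t, a, b) - exact) / exact < 1e-2
```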
By \eqref{eq:4.4}, \eqref{eq:4.6}, and \eqref{eq:4.7}, taking small enough $\gamma>0$ if necessary, we see that \begin{equation} \label{eq:4.8} \begin{split} & c_*\int_0^t\int_{{\mathbb R}^N}(t-s)^{-\frac{\ell+j}{d}}K_\theta(x-y,t-s)s^{A-\frac{\langle {\bf p}\rangle_n-n}{d}}U^i(y,s)^{|{\bf p}|}\,dy\,ds\\ & \le C\gamma^{|{\bf p}|-1}t^{-\frac{j-n}{d}}U^i(x,t) \le\frac{1}{2}t^{-\frac{j-n}{d}}U^i(x,t) \end{split} \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,{T_i})$. Therefore, by \eqref{eq:4.2} and \eqref{eq:4.8} we apply Lemma~\ref{Lemma:3.1} with $U^\epsilon=U^i$ to find $u^i\in C^{m;0}({\mathbb R}^N\times(0,T))$ satisfying \eqref{eq:3.3} with $\epsilon=\epsilon_i$ and \begin{equation} \label{eq:4.9} |\nabla^j u^i(x,t)|\le t^{-\frac{j-n}{d}}U^i(x,t) \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,T_i)$ and $j\in\{n,\dots,m\}$. Let $\tau\in(0,\min\{T,1\})$. Since $T_i>\tau$, by \eqref{eq:4.3} and \eqref{eq:4.9} we see that $$ \sup_{i\in\{1,2,\dots\}} \sup_{(x,t)\in{\mathbb R}^N\times[\tau,T_i)}|\nabla^j u^i(x,t)|\le \tau^{-\frac{j-n}{d}}\sup_{(x,t)\in{\mathbb R}^N\times[\tau,T_i)}U^i(x,t)\le C<\infty $$ for $j\in\{n,\dots,m\}$. This together with \eqref{eq:3.2} implies that $$ \sup_{i\in\{1,2,\dots\}} \sup_{(x,t)\in{\mathbb R}^N\times[\tau,T_i)}|\tilde{F}_{\epsilon_i}(x,t,u^i(x,t),\dots,\nabla^m u^i(x,t))|<\infty. $$ Applying the parabolic regularity theorems (see e.g. \cite{F}*{Chapter~1, Section~3} and \cite{IKK01}*{Section~2}) to integral equation~\eqref{eq:3.3}, we find $\nu\in(0,1)$ such that $$ |\nabla^j u^i(x,t)|\le C,\quad |\nabla^j u^i(x,t)-\nabla^j u^i(y,s)|\le C(|x-y|^\nu+|t-s|^{\frac{\nu}{d}}), $$ for $(x,t)$, $(y,s)\in{\mathbb R}^N\times[\tau,T_i)$, $j\in\{n,\dots,m\}$, and $i\in\{1,2,\dots\}$. 
By the Arzel\'a--Ascoli theorem and the diagonal argument we find a subsequence $\{u^{i'}\}$ of $\{u^i\}$ with $\lim_{i'\to\infty}\epsilon_{i'}=0$ and a function $u\in C^{m;0}({\mathbb R}^N\times(0,T))$ such that $$ \lim_{i'\to\infty}\sup_{j\in\{n,\dots,m\}}\sup_{(x,t)\in E}|\nabla^j u^{i'}(x,t)-\nabla^j u(x,t)|=0 $$ for compact sets $E$ in ${\mathbb R}^N\times(0,T)$. Then, applying the Lebesgue dominated convergence theorem to integral equation~\eqref{eq:3.3}, we see that $u$ is a solution to integral equation~(I) in ${\mathbb R}^N\times[0,T)$. Furthermore, by \eqref{eq:4.3} and \eqref{eq:4.9} we observe that $u$ satisfies \eqref{eq:1.7}. Thus Theorem~\ref{Theorem:1.2} follows. $\Box$ \vspace{5pt} \noindent {\bf Proof of Corollary~\ref{Corollary:1.1}.} Corollary~\ref{Corollary:1.1} follows from Theorem~\ref{Theorem:1.2} with $T=\infty$ and the definition of the Morrey space ${\mathcal M}_{r_n,1}$. $\Box$ \section{Proofs of Theorems~\ref{Theorem:1.3} and \ref{Theorem:1.4}} In this section we prove Theorems~\ref{Theorem:1.3} and \ref{Theorem:1.4} by using Lemma~\ref{Lemma:3.1}. Furthermore, we prove Corollary~\ref{Corollary:1.2}. \vspace{5pt} \newline {\bf Proof of Theorem~\ref{Theorem:1.4}.} We can assume, without loss of generality, that $1<q<|{\bf p}|$. Indeed, it follows from the Jensen inequality that $$ \sup_{x\in{\mathbb R}^N}\biggl(\,\Xint-_{B(x,\sigma)}|\phi(y)|^{q'}\,dy\biggr)^{\frac{1}{q'}} \le\sup_{x\in{\mathbb R}^N}\biggl(\,\Xint-_{B(x,\sigma)}|\phi(y)|^q\,dy\biggr)^{\frac{1}{q}} \quad\mbox{if}\quad 1\le q'\le q. $$ Let $T\in(0,T_*]$ and assume \eqref{eq:1.9}. Let $c_*$ be as in Lemma~\ref{Lemma:3.1}. Let $i\in\{1,2,\dots\}$ and fix it. Let $T_i:=\min\{T,i\}$ and $\epsilon_i\in(0,1)$ be small enough. Then we find $L_i>0$ such that $$ c_*L_i\min_{j\in\{0,\dots,m\}}T_i^{-\frac{j}{d}}=\epsilon_i. $$ Set \begin{equation} \label{eq:5.1} U^i(x,t):=2c_*([S_{K_\theta}(t)\left(|\phi|+L_i\right)^q](x))^\frac{1}{q}.
\end{equation} Then \begin{equation} \label{eq:5.2} \inf_{(x,t)\in{\mathbb R}^N\times(0,T_i)}t^{-\frac{j}{d}}U^i(x,t) \ge 2c_* L_i T_i^{-\frac{j}{d}}\ge\epsilon_i, \quad j\in\{0,\dots,m\}. \end{equation} On the other hand, it follows from the Jensen inequality, \eqref{eq:1.3}, and \eqref{eq:2.4} that \begin{equation} \label{eq:5.3} c_*[S_{K_\theta}(t)|\phi|](x) \le c_*([S_{K_\theta}(t)|\phi|^q](x))^\frac{1}{q} \le\frac{1}{2}U^i(x,t) \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,T_i)$. Since $\lim_{\epsilon_i\to 0}L_i=0$, taking small enough $\epsilon_i>0$ if necessary, by \eqref{eq:1.9} and \eqref{eq:2.9} we find $C_1>0$ such that \begin{equation} \label{eq:5.4} U^i(x,t)\le C_1\gamma t^{-\frac{N}{dr_0}}+2c_*L_i \le 2\gamma C_1 t^{-\frac{N}{dr_0}}, \quad(x,t)\in{\mathbb R}^N\times(0,T_i). \end{equation} On the other hand, it follows from \eqref{eq:2.8} and \eqref{eq:5.1} that \begin{equation} \label{eq:5.5} [S_{K_\theta}(t-s)U^i(s)^q](x)\le \tilde c_*U^i(x,t)^q, \quad x\in{\mathbb R}^N,\,\,t>s>0, \end{equation} where $\tilde c_*$ is as in \eqref{eq:4.5}. 
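The step \eqref{eq:5.3} is the Jensen (power mean) inequality for the probability measure $K_\theta(x-y,t)\,dy$, which has unit mass by \eqref{eq:1.3} and \eqref{eq:2.4}. A discrete analogue, with randomly generated weights standing in for the discretized kernel:

```python
import numpy as np

# Discrete Jensen check behind (5.3): for nonnegative weights w summing to 1
# (a discretized unit-mass kernel) and any q >= 1,
#   sum_i w_i |phi_i|  <=  ( sum_i w_i |phi_i|^q )^(1/q).
# The weights and data are random stand-ins, not the kernel of the paper.
rng = np.random.default_rng(1)
w = rng.random(1000)
w /= w.sum()                       # probability weights
phi = 3.0 * rng.standard_normal(1000)
for q in (1.0, 1.5, 2.0, 5.0):
    lhs = float(np.sum(w * np.abs(phi)))
    rhs = float(np.sum(w * np.abs(phi) ** q) ** (1.0 / q))
    assert lhs <= rhs + 1e-12
```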
Since it follows from \eqref{eq:1.5} that $$ A-\frac{\langle {\bf p}\rangle_0}{d}-\frac{N(|{\bf p}|-q)}{dr_0}=-1+\frac{N(q-1)}{dr_0}>-1, $$ by \eqref{eq:5.4} and \eqref{eq:5.5} we find $C_2>0$ such that \begin{equation} \label{eq:5.6} \begin{split} & c_*\int_0^t\int_{{\mathbb R}^N}(t-s)^{-\frac{j}{d}}K_\theta(x-y,t-s) s^{A-\frac{\langle {\bf p}\rangle_0}{d}}U^i(y,s)^{|{\bf p}|}\,dy\,ds\\ & \le c_*(2\gamma C_1)^{|{\bf p}|-q} \int_0^t(t-s)^{-\frac{j}{d}} s^{A-\frac{\langle {\bf p}\rangle_0}{d}-\frac{N(|{\bf p}|-q)}{dr_0}} \int_{{\mathbb R}^N}K_\theta(x-y,t-s)U^i(y,s)^q\,dy\,ds\\ & \le c_*\tilde c_*(2\gamma C_1)^{|{\bf p}|-q}U^i(x,t)^q\int_0^t (t-s)^{-\frac{j}{d}} s^{A-\frac{\langle {\bf p}\rangle_0}{d}-\frac{N(|{\bf p}|-q)}{dr_0}}\,ds\\ & \le c_*\tilde c_*(2\gamma C_1)^{|{\bf p}|-1}t^{-\frac{N(q-1)}{dr_0}}U^i(x,t) \int_0^t(t-s)^{-\frac{j}{d}} s^{A-\frac{\langle {\bf p}\rangle_0}{d}-\frac{N(|{\bf p}|-q)}{dr_0}}\,ds\\ & \le C_2\gamma^{|{\bf p}|-1}t^{-\frac{j}{d}}U^i(x,t) \end{split} \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,T_i)$. Then, taking small enough $\gamma>0$ if necessary, we have \begin{equation} \label{eq:5.7} c_*\int_0^t\int_{{\mathbb R}^N}(t-s)^{-\frac{j}{d}}K_\theta(x-y,t-s)s^{A-\frac{\langle {\bf p}\rangle_0}{d}}U^i(y,s)^{|{\bf p}|}\,dy\,ds \le\frac{1}{2}t^{-\frac{j}{d}}U^i(x,t) \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,T_i)$. By \eqref{eq:5.2}, \eqref{eq:5.3}, and \eqref{eq:5.7} we apply Lemma~\ref{Lemma:3.1} with $U^\epsilon=U^i$ to find $u^i\in C^{m;0}({\mathbb R}^N\times(0,T_i))$ satisfying \eqref{eq:3.3} with $\epsilon=\epsilon_i$ and \begin{equation} \label{eq:5.8} |\nabla^j u^i(x,t)|\le t^{-\frac{j}{d}}U^i(x,t), \quad j\in\{0,\dots,m\}, \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,T_i)$. Then, by the same arguments as in the proof of Theorem~\ref{Theorem:1.2} we find a solution~$u\in C^{m;0}({\mathbb R}^N\times(0,T))$ to integral equation~(I) in ${\mathbb R}^N\times[0,T)$.
Furthermore, by \eqref{eq:5.4} and \eqref{eq:5.8} we see that $u$ satisfies \eqref{eq:1.10}. Thus Theorem~\ref{Theorem:1.4} follows. $\Box$ \vspace{5pt} \noindent {\bf Proof of Corollary~\ref{Corollary:1.2}.} Similarly to the proof of Corollary~\ref{Corollary:1.1}, Corollary~\ref{Corollary:1.2} follows from Theorem~\ref{Theorem:1.4} with $T=\infty$ and the definition of the Morrey space ${\mathcal M}_{r_0,q}$. $\Box$ \vspace{5pt} \noindent{\bf Proof of Theorem~\ref{Theorem:1.3}.} Let $c_*$ be as in Lemma~\ref{Lemma:3.1}. It follows from \eqref{eq:1.8} that $$ \sup_{x\in{\mathbb R}^N}\int_{B(x,t^{\frac{1}{d}})}|\phi(y)|\,dy \le \sup_{x\in{\mathbb R}^N}\int_{B(x,T^{\frac{1}{d}})}|\phi(y)|\,dy \le\gamma T^{\frac{N}{d}-\frac{N}{dr_0}}, \quad 0<t\le T. $$ This together with \eqref{eq:2.9} implies that \begin{equation} \label{eq:5.9} \|S_{K_\theta}(t)|\phi|\|_{L^\infty}\le C\gamma T^{\frac{N}{d}-\frac{N}{dr_0}}t^{-\frac{N}{d}}, \quad t\in(0,T). \end{equation} Let $\epsilon>0$. Then we find $L_\epsilon>0$ such that $$ c_*L_\epsilon\min_{j\in\{0,\dots,m\}}T^{-\frac{j}{d}}=\epsilon. $$ Set \begin{equation} \label{eq:5.10} U^\epsilon(x,t):=2c_*[S_{K_\theta}(t)(|\phi|+L_\epsilon)](x). \end{equation} These imply that \begin{equation*} \begin{split} & \inf_{(x,t)\in{\mathbb R}^N\times(0,T]}t^{-\frac{j}{d}}U^\epsilon(x,t) \ge\epsilon, \quad j\in\{0,\dots,m\},\\ & c_*[S_{K_\theta}(t)|\phi|](x) \le\frac{1}{2}U^\epsilon(x,t), \quad (x,t)\in{\mathbb R}^N\times(0,T). \end{split} \end{equation*} Furthermore, taking small enough $\epsilon>0$ if necessary, by \eqref{eq:5.9} we find $C_1>0$ such that \begin{equation} \label{eq:5.11} U^\epsilon(x,t)\le C_1\gamma T^{\frac{N}{d}-\frac{N}{dr_0}} t^{-\frac{N}{d}}+2c_*L_\epsilon \le 2C_1 \gamma T^{\frac{N}{d}-\frac{N}{dr_0}} t^{-\frac{N}{d}}, \quad (x,t)\in{\mathbb R}^N\times(0,T).
\end{equation} On the other hand, it follows from \eqref{eq:2.8} and \eqref{eq:5.10} that \begin{equation} \label{eq:5.12} [S_{K_\theta}(t-s)U^\epsilon(s)](x)\le \tilde c_*U^\epsilon(x,t), \quad x\in{\mathbb R}^N,\,\,t>s>0, \end{equation} where $\tilde c_*$ is as in \eqref{eq:4.5}. Since it follows from \eqref{eq:1.5} and (B) that $$ A-\frac{\langle {\bf p}\rangle_0}{d}-\frac{N(|{\bf p}|-1)}{d} =-1+\frac{N(|{\bf p}|-1)}{d}\bigg(\frac{1}{r_0}-1\bigg)>-1, $$ similarly to \eqref{eq:5.6}, by \eqref{eq:5.11} and \eqref{eq:5.12} we find $C_2>0$ such that \begin{equation*} \begin{split} & c_*\int_0^t\int_{{\mathbb R}^N}(t-s)^{-\frac{j}{d}}K_\theta(x-y,t-s)s^{A-\frac{\langle {\bf p}\rangle_0}{d}}U^\epsilon(y,s)^{|{\bf p}|}\,dy\,ds\\ & \le c_*(2C_1\gamma T^{\frac{N}{d}-\frac{N}{dr_0}})^{|{\bf p}|-1} \int_0^t(t-s)^{-\frac{j}{d}} s^{A-\frac{\langle {\bf p}\rangle_0}{d}-\frac{N(|{\bf p}|-1)}{d}} \int_{{\mathbb R}^N}K_\theta(x-y,t-s)U^\epsilon(y,s)\,dy\,ds\\ & \le C_2\gamma^{|{\bf p}|-1}t^{-\frac{j}{d}}U^\epsilon(x,t) \end{split} \end{equation*} for $(x,t)\in{\mathbb R}^N\times(0,T)$ and $j\in\{0,\dots,m\}$. Then, applying the same arguments as in the proof of Theorem~\ref{Theorem:1.4} with $q$ replaced by $1$, we complete the proof of Theorem~\ref{Theorem:1.3}. $\Box$ \section{Proof of Theorem~\ref{Theorem:1.5}} We prove Theorem~\ref{Theorem:1.5} and Corollary~\ref{Corollary:1.3}. \vspace{5pt} \newline {\bf Proof of Theorem~\ref{Theorem:1.5}.} Let $M\ge e$ and set $\Phi_M(s):=s[\log(M+s)]^\beta$ for $s>0$. Then, taking large enough $M\ge e$ if necessary, we have: \begin{itemize} \item[{\rm (i)}] $\Phi_M$ is convex in $(0,\infty)$; \item[{\rm (ii)}] The function $(0,\infty)\ni s\mapsto s^{\frac{|{\bf p}|-1}{2}}[\log(M+s)]^{-\beta |{\bf p}|}$ is monotone increasing.
\end{itemize} Furthermore, by \eqref{eq:1.11} we see that \begin{equation} \label{eq:6.1} \begin{split} C^{-1}\Phi_M(s) & \le\Phi(s)\le C\Phi_M(s),\\ C^{-1}s[\log(M+s)]^{-\beta} & \le\Phi_M^{-1}(s)\le Cs[\log(M+s)]^{-\beta}, \end{split} \end{equation} for $s>0$. It follows from \eqref{eq:1.12} and \eqref{eq:6.1} that \begin{equation} \label{eq:6.2} \sup_{x\in\mathbb R^N}\Phi_M^{-1}\left[\,\Xint-_{B(x,\sigma)} \Phi_M(T^{\frac{N}{d}}|\phi(y)|)\,dy\,\right]\le C\gamma \rho(\sigma T^{-\frac{1}{d}}), \quad 0<\sigma<T^{\frac{1}{d}}. \end{equation} Set \begin{equation} \label{eq:6.3} V(x,t):=[S_{K_\theta}(t)\Phi_M(T^{\frac{N}{d}}|\phi|)](x), \quad \tau(t):=T^{-\frac{1}{d}}t^{\frac{1}{d}}. \end{equation} Then, by \eqref{eq:2.9} and \eqref{eq:6.2} we see that \begin{equation} \label{eq:6.4} \begin{split} {\|V(t)\|_{L^\infty}} & \le Ct^{-\frac{N}{d}}\sup_{x\in{\mathbb R}^N}\int_{B(x,t^{1/d})} \Phi_M(T^{\frac{N}{d}}|\phi(y)|)\,dy\\ & \le C\Phi_M(C\gamma \rho(\tau(t))) \le C\gamma \rho(\tau(t))[\log(M+C\gamma \rho(\tau(t)))]^\beta\\ & \le C\gamma \tau(t)^{-N}\left|\log\frac{\tau(t)}{2}\right|^{-\frac{N}{d(1+A)-\langle{\bf p}\rangle_0}+\beta}=:\gamma \xi(\tau(t)) \end{split} \end{equation} for $t\in(0,T)$. Here the last inequality in \eqref{eq:6.4} follows from \eqref{eq:1.11} and \begin{equation*} \begin{split} \rho(\tau)[\log(M+C\rho(\tau))]^\beta & =O\left(\tau^{-N}|\log \tau|^{-\frac{N}{d(1+A)-\langle{\bf p}\rangle_0}}|\log \tau|^\beta\right)\\ & =O\left(\tau^{-N}|\log \tau|^{-\frac{N}{d(1+A)-\langle{\bf p}\rangle_0}+\beta}\right) \quad\mbox{as}\quad\tau\to +0. \end{split} \end{equation*} Let $c_*$ be as in Lemma~\ref{Lemma:3.1}. For any $\epsilon>0$, let $L_\epsilon>0$ be such that $$ c_*L_\epsilon\min_{j\in\{0,\dots,m\}}T^{-\frac{N+j}{d}}=\epsilon. $$ Then, taking small enough $\epsilon>0$ if necessary, by \eqref{eq:6.4} we see that \begin{equation} \label{eq:6.5} \|V(t)\|_{L^\infty}+\Phi_M(L_\epsilon)\le 2\gamma \xi(\tau(t))\quad\mbox{for}\quad t\in (0,T). 
\end{equation} Set \begin{equation} \label{eq:6.6} V^\epsilon(x,t):=V(x,t)+\Phi_M(L_\epsilon),\quad U^\epsilon(x,t):=2c_*T^{-\frac{N}{d}}\Phi_M^{-1}\left(V^\epsilon(x,t)\right). \end{equation} Then \begin{equation} \label{eq:6.7} \inf_{(x,t)\in{\mathbb R}^N\times(0,T)}t^{-\frac{j}{d}}U^\epsilon(x,t)\ge 2c_*T^{-\frac{N+j}{d}}L_\epsilon\ge\epsilon. \end{equation} On the other hand, by \eqref{eq:1.3}, \eqref{eq:2.4}, and \eqref{eq:6.6} we apply the Jensen inequality to obtain \begin{equation} \label{eq:6.8} c_*[S_{K_\theta}(t)|\phi|](x) \le c_*T^{-\frac{N}{d}}\Phi_M^{-1}\left[S_{K_\theta}(t)\Phi_M\left(T^{\frac{N}{d}}|\phi|\right)\right](x) \le\frac{1}{2}U^\epsilon(x,t) \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,T)$. We can assume, without loss of generality, that $\gamma \le 1/2$. By property~(ii), \eqref{eq:6.1}, \eqref{eq:6.4}, \eqref{eq:6.5}, and \eqref{eq:6.6} we have \begin{align*} 0 \le \frac{U^\epsilon(x,t)^{|{\bf p}|}}{V^\epsilon(x,t)} & =(2c_*)^{|\bf p|}T^{-\frac{N|{\bf p}|}{d}}\frac{[\Phi_M^{-1}(V^\epsilon(x,t))]^{|{\bf p}|}}{V^\epsilon(x,t)}\\ & \le CT^{-\frac{N|{\bf p}|}{d}}V^\epsilon(x,t)^{|{\bf p}|-1}[\log(M+V^\epsilon(x,t))]^{-\beta |{\bf p}|}\\ & =CT^{-\frac{N|{\bf p}|}{d}}V^\epsilon(x,t)^{\frac{|{\bf p}|-1}{2}}V^\epsilon(x,t)^{\frac{|{\bf p}|-1}{2}}[\log(M+V^\epsilon(x,t))]^{-\beta |{\bf p}|}\\ & \le CT^{-\frac{N|{\bf p}|}{d}}(2\gamma\xi(\tau(t)))^{\frac{|{\bf p}|-1}{2}} (2\gamma \xi(\tau(t)))^{\frac{|{\bf p}|-1}{2}} [\log(M+2\gamma \xi(\tau(t)))]^{-\beta |{\bf p}|} \\ & \le C\gamma^{\frac{|{\bf p}|-1}{2}} T^{-\frac{N|{\bf p}|}{d}}\xi(\tau(t))^{|{\bf p}|-1}[\log(M+\xi(\tau(t)))]^{-\beta |{\bf p}|} \end{align*} for $(x,t)\in{\mathbb R}^N\times(0,T)$. 
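Property~(ii) can be checked numerically: since $(M+s)\log(M+s)\ge s\log M$, the logarithmic derivative of $s\mapsto s^{\frac{|{\bf p}|-1}{2}}[\log(M+s)]^{-\beta|{\bf p}|}$ is nonnegative once $\log M\ge 2\beta|{\bf p}|/(|{\bf p}|-1)$. With illustrative values of $|{\bf p}|$ and $\beta$:

```python
import numpy as np

# Check of property (ii): s -> s^((p-1)/2) * [log(M + s)]^(-beta*p) is
# increasing on (0, infinity) once log M >= 2*beta*p/(p-1).
# The values of p (standing in for |p|) and beta are illustrative.
p, beta = 2.0, 1.3
M = np.exp(2.0 * beta * p / (p - 1.0)) + np.e   # large enough M
s = np.logspace(-8, 8, 200_000)
f = s ** ((p - 1.0) / 2.0) * np.log(M + s) ** (-beta * p)
assert np.all(np.diff(f) > 0)                   # strictly increasing on the grid
```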
This together with (D) and the definition of $\xi(\tau(t))$ implies that \begin{equation} \label{eq:6.9} \begin{split} 0\le \frac{U^\epsilon(x,t)^{|{\bf p}|}}{V^\epsilon(x,t)} & \le C\gamma^{\frac{|{\bf p}|-1}{2}} T^{-\frac{N|{\bf p}|}{d}}\tau(t)^{-N(|{\bf p}|-1)} \left|\log\frac{\tau(t)}{2}\right|^{-\frac{N(|{\bf p}|-1)}{d(1+A)-\langle {\bf p}\rangle_0} +\beta(|{\bf p}|-1)-\beta |{\bf p}|}\\ & =C\gamma^{\frac{|{\bf p}|-1}{2}} T^{-\frac{N|{\bf p}|}{d}}\tau(t)^{-N(|{\bf p}|-1)}\left|\log\frac{\tau(t)}{2}\right|^{-1-\beta} \end{split} \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,T)$. Similarly, \begin{equation} \label{eq:6.10} \begin{split} \frac{V^\epsilon(x,t)}{U^\epsilon(x,t)} & =(2c_*)^{-1}T^{\frac{N}{d}} \frac{V^\epsilon(x,t)}{\Phi_M^{-1}(V^\epsilon(x,t))}\\ & \le CT^{\frac{N}{d}}[\log(M+V^\epsilon(x,t))]^\beta \le CT^{\frac{N}{d}}\biggr|\log\frac{\tau(t)}{2}\biggr|^\beta \end{split} \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,T)$. On the other hand, it follows from \eqref{eq:2.8}, \eqref{eq:6.3}, and \eqref{eq:6.6} that \begin{equation} \label{eq:6.11} [S_{K_\theta}(t-s)V^\epsilon(s)](x)\le C_*V^\epsilon(x,t), \quad x\in{\mathbb R}^N,\,\,t>s>0, \end{equation} where $C_*$ is as in \eqref{eq:2.8}. 
Combining \eqref{eq:6.9}, \eqref{eq:6.10}, and \eqref{eq:6.11} we obtain \begin{equation} \label{eq:6.12} \begin{split} & c_*\int_0^t\int_{{\mathbb R}^N}(t-s)^{-\frac{j}{d}}K_\theta(x-y,t-s) s^{A-\frac{\langle {\bf p}\rangle_0}{d}}U^\epsilon(y,s)^{|{\bf p}|}\,dy\,ds\\ & \le C\gamma^{\frac{|{\bf p}|-1}{2}}T^{-\frac{N|{\bf p}|}{d}} \\ & \qquad \times \int_0^t(t-s)^{-\frac{j}{d}}s^{A-\frac{\langle {\bf p}\rangle_0}{d}} \tau(s)^{-N(|{\bf p}|-1)}\left|\log\frac{\tau(s)}{2}\right|^{-1-\beta} S_{K_\theta}(t-s)V^\epsilon(s)\,ds\\ & \le CC_*\gamma^{\frac{|{\bf p}|-1}{2}}T^{-\frac{N|{\bf p}|}{d}}V^\epsilon(x,t) \int_0^t(t-s)^{-\frac{j}{d}}s^{A-\frac{\langle {\bf p}\rangle_0}{d}} \tau(s)^{-N(|{\bf p}|-1)}\left|\log\frac{\tau(s)}{2}\right|^{-1-\beta}\,ds\\ & \le C\gamma^{\frac{|{\bf p}|-1}{2}}T^{-\frac{N(|{\bf p}|-1)}{d}}U^\epsilon(x,t)\\ & \qquad\qquad \times\left|\log\frac{\tau(t)}{2}\right|^{\beta} \int_0^t(t-s)^{-\frac{j}{d}}s^{A-\frac{\langle {\bf p}\rangle_0}{d}} \tau(s)^{-N(|{\bf p}|-1)}\left|\log\frac{\tau(s)}{2}\right|^{-1-\beta}\,ds \end{split} \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,T)$. 
Since it follows from (D) and \eqref{eq:6.3} that \begin{equation*} \begin{split} & \int_0^t(t-s)^{-\frac{j}{d}}s^{A-\frac{\langle {\bf p}\rangle_0}{d}}\tau(s)^{-N(|{\bf p}|-1)}\left|\log\frac{\tau(s)}{2}\right|^{-1-\beta}\,ds\\ & =T^{\frac{N(|{\bf p}|-1)}{d}}\int_0^t(t-s)^{-\frac{j}{d}}s^{-1}\left|\log\frac{\tau(s)}{2}\right|^{-1-\beta}\,ds \le CT^{\frac{N(|{\bf p}|-1)}{d}}t^{-\frac{j}{d}}\left|\log\frac{\tau(t)}{2}\right|^{-\beta} \end{split} \end{equation*} for $t\in(0,T)$, taking small enough $\gamma>0$ if necessary, we deduce from \eqref{eq:6.12} that \begin{equation} \label{eq:6.13} \begin{split} & c_*\int_0^t\int_{{\mathbb R}^N}(t-s)^{-\frac{j}{d}}K_\theta(x-y,t-s)s^{A-\frac{\langle {\bf p}\rangle_0}{d}}U^\epsilon(y,s)^{|{\bf p}|}\,dy\,ds\\ & \le C\gamma^{\frac{|{\bf p}|-1}{2}}t^{-\frac{j}{d}}U^\epsilon(x,t) \le\frac{1}{2}t^{-\frac{j}{d}}U^\epsilon(x,t) \end{split} \end{equation} for $(x,t)\in\mathbb R^N\times(0,T)$ and $j\in\{0,\dots,m\}$. Therefore, by \eqref{eq:6.7}, \eqref{eq:6.8}, and \eqref{eq:6.13} we apply Lemma~\ref{Lemma:3.1} to find $u^\epsilon\in C^{m;0}({\mathbb R}^N\times(0,T))$ satisfying \eqref{eq:3.3} and \begin{equation} \label{eq:6.14} |\nabla^j u^\epsilon(x,t)|\le t^{-\frac{j}{d}}U^\epsilon(x,t),\quad j\in\{0,\dots,m\}, \end{equation} for $(x,t)\in{\mathbb R}^N\times(0,T)$. Then, by the same arguments as in the proof of Theorem~\ref{Theorem:1.2} we find a solution~$u\in C^{m;0}({\mathbb R}^N\times(0,T))$ to integral equation~(I) in ${\mathbb R}^N\times[0,T)$. Furthermore, by \eqref{eq:6.1}, \eqref{eq:6.3}, and \eqref{eq:6.14} we have \begin{equation*} \begin{split} |\nabla^j u^\epsilon(x,t)| & \le CT^{-\frac{N}{d}}t^{-\frac{j}{d}}\Phi_M^{-1}(V^\epsilon(x,t))\\ & \le CT^{-\frac{N}{d}}t^{-\frac{j}{d}}\xi(\tau(t))[\log(M+\xi(\tau(t)))]^{-\beta} \le Ct^{-\frac{N+j}{d}}\biggl|\log\frac{t}{2T}\biggr|^{-\frac{N}{d(1+A)-\langle{\bf p}\rangle_0}} \end{split} \end{equation*} for $(x,t)\in{\mathbb R}^N\times(0,T)$. This implies inequality~\eqref{eq:1.13}.
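The time integral in the last estimate is controlled by the elementary identity (with $\tau(s)=(s/T)^{1/d}$) $$ \int_0^t s^{-1}\Bigl|\log\frac{\tau(s)}{2}\Bigr|^{-1-\beta}\,ds =\frac{d}{\beta}\Bigl|\log\frac{\tau(t)}{2}\Bigr|^{-\beta}, \quad 0<t<T, $$ obtained by differentiating the right-hand side. A numerical derivative check with illustrative parameters:

```python
import math

# Check that F(t) = (d/beta) * |log(tau(t)/2)|^(-beta), tau(t) = (t/T)^(1/d),
# is an antiderivative of t -> t^(-1) * |log(tau(t)/2)|^(-1-beta) on (0, T).
# Parameters are illustrative.
d, beta, T = 2.0, 1.3, 1.0

def F(t):
    return (d / beta) * abs(math.log((t / T) ** (1.0 / d) / 2.0)) ** (-beta)

def integrand(s):
    return (1.0 / s) * abs(math.log((s / T) ** (1.0 / d) / 2.0)) ** (-1.0 - beta)

for t in (1e-6, 1e-3, 0.1, 0.5):
    h = 1e-6 * t
    deriv = (F(t + h) - F(t - h)) / (2.0 * h)   # central difference
    assert abs(deriv - integrand(t)) / integrand(t) < 1e-5
assert F(1e-300) < 1e-2   # the antiderivative vanishes as t -> 0+
```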
Thus Theorem~\ref{Theorem:1.4} follows. $\Box$ \vspace{5pt} \noindent {\bf Proof of Corollary~\ref{Corollary:1.3}.} Let $$ \beta=\frac{N}{d(1+A)-\langle{\bf p}\rangle_0}. $$ Then $$ \Phi(\gamma \rho(\sigma T^{-\frac{1}{d}}))=\gamma \rho(\sigma T^{-\frac{1}{d}}) \left[\log(e+\gamma \rho(\sigma T^{-\frac{1}{d}}))\right]^{\frac{N}{d(1+A)-\langle{\bf p}\rangle_0}} \le C\gamma T^{\frac{N}{d}}\sigma^{-N} $$ for $0<\sigma<T^{\frac{1}{d}}$. Hence Corollary~\ref{Corollary:1.3} follows from Theorem~\ref{Theorem:1.4}. $\Box$ \section{Applications} It is known that fundamental solutions to a large class of linear parabolic operators satisfy condition~(G) for some $T_*\in(0,\infty]$, $d>0$, and $L>0$, so our main results are applicable to the Cauchy problem for various nonlinear parabolic equations. In this section we focus on the Cauchy problem for nonlinear parabolic equation~\eqref{eq:1.1} with $$ {\mathcal L}=(-\Delta)^{\frac{d}{2}},\qquad d>0, $$ and demonstrate the validity and the advantages of our main results. \subsection{Semilinear parabolic equations} Consider the Cauchy problem for a semilinear parabolic equation \begin{equation} \tag{SP} \left\{ \begin{array}{ll} \partial_t u+(-\Delta)^{\frac{d}{2}}u=|u|^p,\quad & x\in{\mathbb R}^N,\,\,t>0,\vspace{3pt}\\ u(x,0)=\phi(x), & x\in{\mathbb R}^N, \end{array} \right. \end{equation} where $d>0$ and $p>1$. Then $$ \ell=n=m=A=0,\quad |{\bf p}|=p,\quad \langle {\bf p}\rangle_0=0. $$ Applying Theorems~\ref{Theorem:1.3}, \ref{Theorem:1.4}, and \ref{Theorem:1.5} to problem~(SP), we have: \begin{theorem} \label{Theorem:7.1} Consider Cauchy problem~{\rm (SP)}, where $d>0$ and $p>1$. Then the same statements as in Theorems~{\rm\ref{Theorem:1.3}}, {\rm\ref{Theorem:1.4}}, and {\rm\ref{Theorem:1.5}} hold in the cases $r_0<1$, $r_0>1$, and $r_0=1$, respectively, where $$ r_0=\frac{N(p-1)}{d}. 
$$ \end{theorem} In the case where either $0<d\le 2$ or $d\in\{4,6,\dots\}$, Theorem~\ref{Theorem:7.1} has already been proved in \cite{HI01} and \cite{IKO}, respectively, where the sufficient conditions in Theorems~\ref{Theorem:1.3}, \ref{Theorem:1.4}, and~\ref{Theorem:1.5} are also shown to be sharp. See also Section~1. \subsection{Viscous Hamilton-Jacobi equations} Consider the Cauchy problem for a viscous Hamilton-Jacobi equation \begin{equation} \tag{VHJ} \left\{ \begin{array}{ll} \partial_t u+(-\Delta)^{\frac{d}{2}}u=|\nabla u|^p,\quad & x\in{\mathbb R}^N,\,\,t>0,\vspace{3pt}\\ u(x,0)=\phi(x), & x\in{\mathbb R}^N, \end{array} \right. \end{equation} where $d>1$ and $p>1$. Then \begin{equation*} \begin{split} & n\in\{0,1\},\quad \ell=A=0,\quad m=1,\quad |{\bf p}|=p, \quad \langle {\bf p}\rangle_0=p,\quad \langle {\bf p}\rangle_1=1,\\ & r_0=\frac{N(p-1)}{d-p},\quad r_1=\frac{N(p-1)}{d-1}. \end{split} \end{equation*} It follows that $r_0<1$ if and only if $p<p_{HJ}$, where $$ p_{HJ}:=\frac{N+d}{N+1}\in(1,d). $$ Applying Theorems~\ref{Theorem:1.2}, \ref{Theorem:1.3}, \ref{Theorem:1.4}, and \ref{Theorem:1.5} to problem~(VHJ), we have: \begin{theorem} \label{Theorem:7.2} Let $p>1$ and $d>1$. \begin{itemize} \item[{\rm (a)}] Let $1<p<p_{HJ}$. Then problem~{\rm (VHJ)} possesses a local-in-time solution if $\phi\in L^1_{{\rm uloc}}$. \item[{\rm (b)}] Let $p=p_{HJ}$. Then the same statement as in Theorem~{\rm\ref{Theorem:1.5}} holds with $$ \rho(s)=s^{-N}[\log(e+s^{-1})]^{-\frac{N}{d-p}}. $$ In particular, there exists $C_1>0$ such that, if $$ |\phi(x)|\le C_1|x|^{-N}\biggl[\log\biggl(e+\frac{1}{|x|}\biggr)\biggr]^{-\frac{N}{d}-1}+C,\quad x\in{\mathbb R}^N, $$ for some $C>0$, problem~{\rm (VHJ)} possesses a local-in-time solution. \item[{\rm (c)}] Let $p_{HJ}<p<d$ and $q>1$. 
Then there exists $C_2>0$ such that, if $$ \sup_{x\in{\mathbb R}^N}\,\sup_{0<\sigma<T^{\frac{1}{d}}}\, \sigma^{\frac{d-p}{p-1}}\, \biggl(\,\Xint-_{B(x,\sigma)}|\phi(y)|^q\,dy\biggr)^{\frac{1}{q}}\le C_2 $$ for some $T\in(0,\infty]$, problem~{\rm (VHJ)} possesses a solution in ${\mathbb R}^N\times[0,T)$. \item[{\rm (d)}] Let $p>1$. Then there exists $C_3>0$ such that, if $$ \sup_{x\in{\mathbb R}^N}\,\sup_{0<\sigma<T^{\frac{1}{d}}}\, \sigma^{\frac{d-1}{p-1}}\,\Xint-_{B(x,\sigma)}|\nabla\phi(y)|\,dy\le C_3 $$ for some $T\in(0,\infty]$, problem~{\rm (VHJ)} possesses a solution in ${\mathbb R}^N\times[0,T)$. \end{itemize} \end{theorem} We remark that, in the case of $1<d\le 2$, thanks to \cite{AB}, \cite{DI}, and \cite{KW}, Definition~\ref{Definition:1.1} implies that if problem~(VHJ) possesses a local-in-time solution~$u$, then the solution~$u$ can be extended to a global-in-time solution of (VHJ). For the case of $d=2$, due to \cite{BSW}, the well-posedness of local-in-time solutions holds in the following cases: \begin{itemize} \item $\phi\in L^q$ for $q>r_1\ge1$ or $q=r_1>1$; \item $\phi\in L^1$ if $p<p_{HJ}$; \item $\phi\in W^{1,q}(\mathbb R^N)$ if $p\in[1,\infty)$ and $q>r_1\ge1$ or $q=r_1>1$. \end{itemize} (See also \cites{A, BL}.) Comparing with these results, we see that Theorem~\ref{Theorem:7.2} includes some new criteria for the existence of solutions to problem~(VHJ) even in the case of $d=2$, in particular in assertions~(b) and (c). \subsection{Nonlinear parabolic equations with $\ell>0$ and $m=0$} Consider the Cauchy problem \begin{equation} \tag{gCD} \left\{ \begin{array}{ll} \partial_t u+(-\Delta)^{\frac{d}{2}}u= \displaystyle{\sum_{|\alpha|=\ell}}a_\alpha\partial_x^\alpha (|u|^{p-1}u),\quad & x\in{\mathbb R}^N,\,\,t>0,\vspace{3pt}\\ u(x,0)=\phi(x), & x\in{\mathbb R}^N, \end{array} \right. \end{equation} where $\ell\in\{1,2,\dots\}$, $0<\ell<d$, $\{a_\alpha\}\subset{\mathbb R}$, and $p>1$. 
Then $$ n=A=0,\quad |{\bf p}|=p,\quad\langle {\bf p}\rangle_0=0, \quad r_0=\frac{N(p-1)}{d-\ell}. $$ Applying Theorem~\ref{Theorem:1.2} to problem~(gCD), we have: \begin{theorem} \label{Theorem:7.3} Let $\ell\in\{1,2,\dots\}$, $0<\ell<d$, $\{a_\alpha\}\subset{\mathbb R}$, and $p>1$. Then there exists $\gamma>0$ such that, if $$ \sup_{x\in{\mathbb R}^N}\,\sup_{0<\sigma<T^{\frac{1}{d}}}\, \sigma^{\frac{d-\ell}{p-1}}\,\Xint-_{B(x,\sigma)}|\phi(y)|\,dy\le\gamma $$ for some $T\in(0,\infty]$, then problem~{\rm (gCD)} possesses a solution in ${\mathbb R}^N\times[0,T)$. \end{theorem} When $1<d\le 2$ and $\ell=1$, the comparison principle holds for problem~(gCD). Hence, similarly to problem~(VHJ), Definition~\ref{Definition:1.1} implies that if problem~(gCD) possesses a local-in-time solution~$u$, then the solution~$u$ can be extended to a global-in-time solution of (gCD). Problem~(gCD) is a generalization of the Cauchy problem for a convection-diffusion equation \begin{equation} \tag{CD} \left\{ \begin{array}{ll} \partial_t u+(-\Delta)^{\frac{d}{2}} u={\bf a}\cdot\nabla(|u|^{p-1}u),\quad & x\in{\mathbb R}^N,\,\,t>0,\vspace{3pt}\\ u(x,0)=\phi(x), & x\in{\mathbb R}^N, \end{array} \right. \end{equation} where $0<d\le 2$, $p>1$, and ${\bf a}\in{\mathbb R}^N$. The solvability and the asymptotic behavior of solutions to problem~(CD) have been studied in many papers (see e.g. \cites{Carpio, EZ01, EZ02, IKK02, IwaK, HOS} and references therein). Theorem~\ref{Theorem:7.3} also includes some new criteria for the existence of solutions even to problem~(CD). (Compare with \cite{HOS}.) 
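The critical exponents appearing throughout this section all come from the same scaling computation; the following minimal sketch tabulates them directly from the formulas in the text (the function names are ours, purely for illustration):

```python
# Scaling exponents for the model problems (SP), (VHJ), and (gCD),
# as given in the text; function names are illustrative only.

def r0_SP(N, d, p):
    """(SP): r0 = N(p-1)/d."""
    return N * (p - 1) / d

def r0_VHJ(N, d, p):
    """(VHJ): r0 = N(p-1)/(d-p)."""
    return N * (p - 1) / (d - p)

def r1_VHJ(N, d, p):
    """(VHJ): r1 = N(p-1)/(d-1)."""
    return N * (p - 1) / (d - 1)

def p_HJ(N, d):
    """Critical exponent p_HJ = (N+d)/(N+1); r0 < 1 iff p < p_HJ."""
    return (N + d) / (N + 1)

def r0_gCD(N, d, p, ell):
    """(gCD): r0 = N(p-1)/(d-ell)."""
    return N * (p - 1) / (d - ell)

# Example: the classical viscous Hamilton-Jacobi equation (d = 2, N = 3)
# has p_HJ = 5/4, and p = p_HJ gives exactly r0 = 1.
```

In particular, one checks immediately that $p=p_{HJ}$ is precisely the borderline $r_0=1$ case of Theorem~\ref{Theorem:1.5}.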
\subsection{Nonlinear parabolic equations with $\ell>0$ and $m=1$} Consider the Cauchy problem for a higher-order parabolic equation with gradient nonlinearity \begin{equation} \tag{HG} \left\{ \begin{array}{ll} \partial_t u+(-\Delta)^{\frac{d}{2}}u =-\nabla\cdot(|\nabla u|^{p-1}\nabla u),\quad & x\in{\mathbb R}^N,\,\,t>0,\vspace{3pt}\\ u(x,0)=\phi(x), & x\in{\mathbb R}^N, \end{array} \right. \end{equation} where $d>2$ and $p>1$. Then \begin{equation*} \begin{split} & n\in\{0,1\},\quad A=0,\quad \ell=1,\quad |{\bf p}|=p,\quad \langle {\bf p}\rangle_0=p,\quad \langle {\bf p}\rangle_1=1,\\ & r_0=\frac{N(p-1)}{d-p-1},\quad r_1=\frac{N(p-1)}{d-2}. \end{split} \end{equation*} Applying Theorem~\ref{Theorem:1.2} to problem~(HG), we have: \begin{theorem} \label{Theorem:7.4} Let $d>2$ and $p>1$. \begin{itemize} \item[{\rm (a)}] Let $1<p\le d-1$. Then there exists $C_1>0$ such that, if $$ \sup_{x\in{\mathbb R}^N}\,\sup_{0<\sigma<T^{\frac{1}{d}}}\, \sigma^{\frac{d-p-1}{p-1}}\, \Xint-_{B(x,\sigma)}|\phi(y)|\,dy\le C_1 $$ for some $T\in(0,\infty]$, then problem~{\rm (HG)} possesses a solution in ${\mathbb R}^N\times[0,T)$. \item[{\rm (b)}] Let $p>1$. Then there exists $C_2>0$ such that, if $$ \sup_{x\in{\mathbb R}^N}\,\sup_{0<\sigma<T^{\frac{1}{d}}}\, \sigma^{\frac{d-2}{p-1}}\,\Xint-_{B(x,\sigma)}|\nabla \phi(y)|\,dy\le C_2 $$ for some $T\in(0,\infty]$, then problem~{\rm (HG)} possesses a solution in ${\mathbb R}^N\times[0,T)$. \end{itemize} \end{theorem} Problem~(HG) with $d=4$ appears in the study of mathematical models describing epitaxial growth of thin films (see e.g. \cites{EGP, EGK, KSW, IMO, ORS} and references therein for related results). In \cite{IMO} the authors gave sufficient conditions for the existence of local-in-time solutions and global-in-time solutions by the use of uniformly local weak Lebesgue spaces. Theorem~\ref{Theorem:7.4} with $d=4$ improves their results. 
\medskip \noindent {\bf Acknowledgment.} The authors of this paper were supported in part by JSPS KAKENHI Grant Number JP19H05599. The second author and the third author were supported in part by JSPS KAKENHI Grant Numbers JP20K03689 and JP20KK0057, respectively. \begin{bibdiv} \begin{biblist} \bib{AB}{article}{ author={Amour, Laurent}, author={Ben-Artzi, Matania}, title={Global existence and decay for viscous Hamilton-Jacobi equations}, journal={Nonlinear Anal.}, volume={31}, date={1998}, pages={621--628}, } \bib{A}{article}{ author={Andreucci, Daniele}, title={Degenerate parabolic equations with initial data measures}, journal={Trans. Amer. Math. Soc.}, volume={349}, date={1997}, pages={3911--3923}, } \bib{BP}{article}{ author={Baras, Pierre}, author={Pierre, Michel}, title={Crit\`ere d'existence de solutions positives pour des \'{e}quations semi-lin\'{e}aires non monotones}, journal={Ann. Inst. H. Poincar\'{e} Anal. Non Lin\'{e}aire}, volume={2}, date={1985}, pages={185--212}, } \bib{BL}{article}{ author={Benachour, Said}, author={Lauren\c{c}ot, Philippe}, title={Global solutions to viscous Hamilton-Jacobi equations with irregular initial data}, journal={Comm. Partial Differential Equations}, volume={24}, date={1999}, pages={1999--2021}, issn={0360-5302}, } \bib{BSW}{article}{ author={Ben-Artzi, Matania}, author={Souplet, Philippe}, author={Weissler, Fred B.}, title={The local theory for viscous Hamilton-Jacobi equations in Lebesgue spaces}, journal={J. Math. Pures Appl. (9)}, volume={81}, date={2002}, pages={343--378}, } \bib{BJ}{article}{ author={Bogdan, Krzysztof}, author={Jakubowski, Tomasz}, title={Estimates of heat kernel of fractional Laplacian perturbed by gradient operators}, journal={Comm. Math. Phys.}, volume={271}, date={2007}, pages={179--198}, } \bib{BK}{article}{ author={Brandolese, Lorenzo}, author={Karch, Grzegorz}, title={Far field asymptotics of solutions to convection equation with anomalous diffusion}, journal={J. Evol. 
Equ.}, volume={8}, date={2008}, pages={307--326}, } \bib{Carpio}{article}{ author={Carpio, A.}, title={Large time behaviour in convection-diffusion equations}, journal={Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4)}, volume={23}, date={1996}, pages={551--574}, } \bib{C}{article}{ author={Cui, Shangbin}, title={Local and global existence of solutions to semilinear parabolic initial value problems}, journal={Nonlinear Anal.}, volume={43}, date={2001}, pages={293--323}, } \bib{DI}{article}{ author={Droniou, J\'{e}r\^{o}me}, author={Imbert, Cyril}, title={Fractal first-order partial differential equations}, journal={Arch. Ration. Mech. Anal.}, volume={182}, date={2006}, pages={299--331}, } \bib{EZ01}{article}{ author={Escobedo, Miguel}, author={Zuazua, Enrike}, title={Large time behavior for convection-diffusion equations in ${\bf R}^N$}, journal={J. Funct. Anal.}, volume={100}, date={1991}, pages={119--161}, } \bib{EZ02}{article}{ author={Escobedo, Miguel}, author={Zuazua, Enrique}, title={Long-time behavior for a convection-diffusion equation in higher dimensions}, journal={SIAM J. Math. Anal.}, volume={28}, date={1997}, pages={570--594}, } \bib{EGP}{article}{ author={Escudero, Carlos}, author={Gazzola, Filippo}, author={Peral, Ireneo}, title={Global existence versus blow-up results for a fourth order parabolic PDE involving the Hessian}, journal={J. Math. Pures Appl. (9)}, volume={103}, date={2015}, pages={924--957}, } \bib{EGK}{article}{ author={Evans, J. D.}, author={Galaktionov, V. A.}, author={King, J. R.}, title={Blow-up similarity solutions of the fourth-order unstable thin film equation}, journal={European J. Appl. Math.}, volume={18}, date={2007}, pages={195--231}, } \bib{FL}{article}{ author={Filippucci, Roberta}, author={Lombardi, Silvia}, title={Fujita type results for parabolic inequalities with gradient terms}, journal={J. 
Differential Equations}, volume={268}, date={2020}, pages={1873--1910}, } \bib{F}{book}{ author={Friedman, Avner}, title={Partial differential equations of parabolic type}, publisher={Prentice-Hall, Inc., Englewood Cliffs, N.J.}, date={1964}, pages={xiv+347}, } \bib{GP}{article}{ author={Galaktionov, V. A.}, author={Pohozaev, S. I.}, title={Existence and blow-up for higher-order semilinear parabolic equations: majorizing order-preserving operators}, journal={Indiana Univ. Math. J.}, volume={51}, date={2002}, pages={1321--1338}, issn={0022-2518}, } \bib{GG}{article}{ author={Gazzola, Filippo}, author={Grunau, Hans-Christoph}, title={Global solutions for superlinear parabolic equations involving the biharmonic operator for initial data with optimal slow decay}, journal={Calc. Var. Partial Differential Equations}, volume={30}, date={2007}, pages={389--415}, } \bib{HOS}{article}{ author={Haque, Md. Rabiul}, author={Ogawa, Takayoshi}, author={Sato, Ryuichi}, title={Existence of weak solutions to a convection-diffusion equation in a uniformly local Lebesgue space}, journal={Commun. Pure Appl. Anal.}, volume={19}, date={2020}, pages={677--697}, } \bib{HI01}{article}{ author={Hisa, Kotaro}, author={Ishige, Kazuhiro}, title={Existence of solutions for a fractional semilinear parabolic equation with singular initial data}, journal={Nonlinear Anal.}, volume={175}, date={2018}, pages={108--132}, } \bib{HI02}{article}{ author={Hisa, Kotaro}, author={Ishige, Kazuhiro}, title={Solvability of the heat equation with a nonlinear boundary condition}, journal={SIAM J. Math. 
Anal.}, volume={51}, date={2019}, pages={565--594}, } \bib{HIT}{article}{ author={Hisa, Kotaro}, author={Ishige, Kazuhiro}, author={Takahashi, Jin}, title={Existence of solutions for an inhomogeneous fractional semilinear heat equation}, journal={Nonlinear Anal.}, volume={199}, date={2020}, pages={111920, 28}, } \bib{IKO}{article}{ author={Ishige, Kazuhiro}, author={Kawakami, Tatsuki}, author={Okabe, Shinya}, title={Existence of solutions for a higher-order semilinear parabolic equation with singular initial data}, journal={Ann. Inst. H. Poincar\'{e} Anal. Non Lin\'{e}aire}, volume={37}, date={2020}, pages={1185--1209}, } \bib{IKK01}{article}{ author={Ishige, Kazuhiro}, author={Kawakami, Tatsuki}, author={Kobayashi, Kanako}, title={Global solutions for a nonlinear integral equation with a generalized heat kernel}, journal={Discrete Contin. Dyn. Syst. Ser. S}, volume={7}, date={2014}, pages={767--783}, } \bib{IKK02}{article}{ author={Ishige, Kazuhiro}, author={Kawakami, Tatsuki}, author={Kobayashi, Kanako}, title={Asymptotics for a nonlinear integral equation with a generalized heat kernel}, journal={J. Evol. Equ.}, volume={14}, date={2014}, pages={749--777}, } \bib{IMO}{article}{ author={Ishige, Kazuhiro}, author={Miyake, Nobuhito}, author={Okabe, Shinya}, title={Blowup for a fourth-order parabolic equation with gradient nonlinearity}, journal={SIAM J. Math. Anal.}, volume={52}, date={2020}, pages={927--953}, } \bib{IwaK}{article}{ author={Iwabuchi, Tsukasa}, author={Kawakami, Tatsuki}, title={Existence of mild solutions for a Hamilton-Jacobi equation with critical fractional viscosity in the Besov spaces}, journal={J. Math. Pures Appl. (9)}, volume={107}, date={2017}, pages={464--489}, } \bib{KW}{article}{ author={Karch, Grzegorz}, author={Woyczy\'{n}ski, Wojbor A.}, title={Fractal Hamilton-Jacobi-KPZ equations}, journal={Trans. Amer. Math. 
Soc.}, volume={360}, date={2008}, pages={2423--2442}, } \bib{KSW}{article}{ author={King, Belinda B.}, author={Stein, Oliver}, author={Winkler, Michael}, title={A fourth-order parabolic equation modeling epitaxial thin film growth}, journal={J. Math. Anal. Appl.}, volume={286}, date={2003}, pages={459--490}, } \bib{KY}{article}{ author={Kozono, Hideo}, author={Yamazaki, Masao}, title={Semilinear heat equations and the Navier-Stokes equation with distributions in new function spaces as initial data}, journal={Comm. Partial Differential Equations}, volume={19}, date={1994}, pages={959--1014}, } \bib{LN}{article}{ author={Lee, Tzong-Yow}, author={Ni, Wei-Ming}, title={Global existence, large time behavior and life span of solutions of a semilinear parabolic Cauchy problem}, journal={Trans. Amer. Math. Soc.}, volume={333}, date={1992}, number={1}, pages={365--378}, } \bib{ORS}{article}{ author={Ortiz, M.}, author={Repetto, E. A.}, author={Si, H.}, title={A continuum model of kinetic roughening and coarsening in thin films}, journal={J. Mech. Phys. Solids}, volume={47}, date={1999}, pages={697--730}, } \bib{P}{article}{ author={Ponce, Gustavo}, title={Global existence of small solutions to a class of nonlinear evolution equations}, journal={Nonlinear Anal.}, volume={9}, date={1985}, pages={399--418}, } \bib{Q}{article}{ author={Quittner, Pavol}, title={Liouville theorems for superlinear parabolic problems with gradient structure}, journal={J. Elliptic Parabol. Equ.}, volume={6}, date={2020}, pages={145--153}, } \bib{QS}{book}{ author={Quittner, Pavol}, author={Souplet, Philippe}, title={Superlinear parabolic problems}, series={Birkh\"{a}user Advanced Texts: Basler Lehrb\"{u}cher. [Birkh\"{a}user Advanced Texts: Basel Textbooks]}, date={2019}, pages={xvi+725}, } \bib{RS}{article}{ author={Robinson, James C.}, author={Sier\.{z}\polhk ega, Miko\l aj}, title={Supersolutions for a class of semilinear heat equations}, journal={Rev. Mat. 
Complut.}, volume={26}, date={2013}, pages={341--360}, } \bib{S}{article}{ author={Sugitani, Sadao}, title={On nonexistence of global solutions for some nonlinear integral equations}, journal={Osaka Math. J.}, volume={12}, date={1975}, pages={45--51}, } \bib{W}{article}{ author={Weissler, Fred B.}, title={Existence and nonexistence of global solutions for a semilinear heat equation}, journal={Israel J. Math.}, volume={38}, date={1981}, pages={29--40}, } \bib{Y}{article}{ author={Yamazaki, Masao}, author={Zhou, Xiaofang}, title={Semilinear heat equations with distributions in Morrey spaces as initial data}, journal={Hokkaido Math. J.}, volume={30}, date={2001}, pages={537--571}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} Formulating the Casimir energy in terms of scattering theory has made it possible to efficiently reduce quantum field theory calculations to standard problems in quantum mechanics and electromagnetism. By expressing the ``TGTG'' form of the Casimir energy \cite{Kenneth06} in appropriate scattering bases, one can calculate the Casimir interaction energy of a collection of objects as a combination of the objects' scattering amplitudes ($T$-matrices) together with universal translation matrices, which are obtained from a mode expansion of the free Green's function \cite{spheres,scalar,universal}. The former are computed for each object individually, while the latter depend only on the objects' relative positions and orientations. As a result, the Casimir energy can be computed for any collection of objects for which the scattering $T$-matrix is available within a standard scattering basis. This approach allows for exact calculations, extending earlier results using asymptotic expansions \cite{Balian77,Balian78} and results from scalar theories \cite{Bulgac01,Bulgac06,Wirzba08}. It can also be applied in the weak coupling approximation \cite{Milton08-1}. For objects without special symmetries, however, one must ultimately turn to computational methods to compute either the $T$-matrix or the associated Green's functions \cite{Johnson,Gies1,Gies2,Forrow:2012sp}. With sufficient symmetry, the exact $T$-matrix can take an analytically calculable form, greatly reducing the amount of computation required. This reduction has made it possible to apply the scattering method to efficient computations of the Casimir energy for planes \cite{Lambrecht06}, spheres and ordinary cylinders \cite{Emig06,spheres,universal,Teo:2012kf}, parabolic cylinders \cite{parabolic1,parabolic2}, and wedges and cones \cite{wedge}. Here we complete the set of separable geometries in electromagnetism by treating the case of an elliptic cylinder. 
This geometry has been investigated for microfabricated materials using a Lifshitz formula approach in Ref.~\cite{Decca} and has been used to study Casimir self-energies in Refs.~\cite{Kitson:2006hf,Straley}. \section{Scattering in Elliptic Cylinder Coordinates} We begin by formulating scattering theory in elliptic cylinder coordinates, \begin{eqnarray} x &=& d\cosh \mu\cos \theta \cr y &=& d\sinh \mu\sin \theta \,, \end{eqnarray} where $2d$ is the interfocal separation of our elliptic cylinder coordinates, $\theta$ is the analog of the angle in ordinary cylindrical coordinates, and $\mu$ is the analog of the radius, with \begin{equation} r = \sqrt{ x^2 + y^2} = d\sqrt{\frac{\cosh 2 \mu+\cos 2 \theta}{2}}\to \frac{d}{2} e^{ \mu} \end{equation} as $ \mu\to\infty$. We use separation of variables to form solutions of the Helmholtz equation $-\nabla^2 \psi(\bm{r}) = k^2\psi(\bm{r})$ as products of functions of $\mu$, $\theta$ and $z$ individually. For the functions of $z$, we have ordinary complex exponentials $e^{ik_z z}$, which will multiply angular functions of $\theta$ and radial functions of $\mu$. Since we have parity symmetry, we can choose our angular solutions to be either even or odd under reflection across the $x$-axis, $\theta \to -\theta$. Unlike the ordinary cylinder case, the elliptic angular solutions depend on the wave number $k$, and the elliptic radial solutions associated with the odd and even angular solutions differ and depend on the wave number and radius separately, rather than only on the product $kr$. For $q=\frac{d^2}{4}(k^2-k_z^2)$, the angular solutions are the even and odd angular Mathieu functions $\mSe_m(\theta, q)$ and $\mSo_m(\theta, q)$, which are the analogs of $\cos m \theta$ and $\sin m \theta$ respectively. As in the case of ordinary cylindrical coordinates, for the even functions $m$ runs from $0$ to $\infty$, while for the odd functions $m$ runs from $1$ to $\infty$. 
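The angular Mathieu functions can be generated numerically by truncating the three-term recurrence satisfied by their Fourier coefficients. The following is a minimal sketch for $\mSe_m$ with $m$ odd; it is our own illustration (assuming NumPy is available), not the implementation used for the numerical results later in this paper:

```python
import numpy as np

def Se_odd(m, q, nmax=32):
    """Even angular Mathieu function Se_m(theta, q) for odd m, built as a
    truncated Fourier series sum_r A_{2r+1} cos((2r+1) theta).

    Inserting the series into the Mathieu equation
    y'' + (a - 2 q cos 2 theta) y = 0 turns the coefficients into a
    symmetric tridiagonal eigenvalue problem; sorting the eigenvalues
    in ascending order yields the characteristic values a_1 < a_3 < ...
    """
    k = 2 * np.arange(nmax) + 1                  # cosine orders 1, 3, 5, ...
    M = np.diag(k.astype(float) ** 2) + q * (np.eye(nmax, k=1) + np.eye(nmax, k=-1))
    M[0, 0] += q                                 # from (a - 1 - q) A_1 = q A_3
    a, V = np.linalg.eigh(M)
    idx = (m - 1) // 2                           # m = 1 -> lowest eigenvalue, etc.
    A = V[:, idx] / np.linalg.norm(V[:, idx])    # sum A^2 = 1, so int_0^{2pi} Se_m^2 = pi
    if A[idx] < 0:                               # sign so that Se_m -> cos(m theta) as q -> 0
        A = -A
    return a[idx], (lambda theta: A @ np.cos(np.outer(k, theta)))
```

For small $q$ the lowest characteristic value reproduces the known expansion $a_1(q)=1+q-q^2/8+\cdots$, and the series reduces to $\cos\theta$, consistent with the normalization adopted below.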
For the corresponding radial functions, we have both the even and odd first kind solutions $\mJe_m(\mu, q)$ and $\mJo_m(\mu, q)$, the analogs of the Bessel function $J_m(\sqrt{k^2-k_z^2}r)$, and the even and odd outgoing wave solutions $\mHe_m(\mu, q)$ and $\mHo_m(\mu, q)$, the analogs of the Hankel function $H_m^{(1)}(\sqrt{k^2-k_z^2}r)$. We will normalize the Mathieu functions so that they obey the same orthonormality conditions as their cylindrical analogs, except that the $m=0$ even angular function will be normalized so that its root mean square average value is $1/\sqrt{2}$ (the same as for all the other angular functions) instead of $\cos 0 = 1$. As a result, we have \begin{equation} \int_0^{2\pi} \mSe_m(\theta, q)^2 d\theta = \int_0^{2\pi} \mSo_m(\theta, q)^2 d\theta = \pi \,, \end{equation} with the radial functions normalized to coincide with their cylindrical analogs asymptotically. Our notation and normalization match that of Ref.~\cite{Graham:2005cq}, which defines Mathieu functions following the conventions of Abramowitz and Stegun \cite{Abramowitz}, but uses a modified notation that is more closely analogous to the ordinary cylinder case. We will make use of identities for elliptic cylinder functions found in standard references \cite{Abramowitz,Morse53,Bateman}. The key ingredients for our calculation will be the free Green's function \begin{eqnarray} G(\bm{r}_1, \bm{r}_2, k) &=& \int_{-\infty}^\infty \frac{d k_z}{2 \pi} \frac{i}{2} \left[ \sum_{m=0}^\infty \mSe_m(\theta_1, q) \mSe_m(\theta_2, q) \mJe_m(\mu_<, q) \mHe_m(\mu_>, q) \right. \cr && \left. 
+ \sum_{m=1}^\infty \mSo_m(\theta_1, q) \mSo_m(\theta_2, q) \mJo_m(\mu_<, q) \mHo_m(\mu_>, q) \right] \,, \label{eqn:Green1} \end{eqnarray} where $ \mu_<$ ($ \mu_>$) is the smaller (larger) of $ \mu_1$ and $ \mu_2$, and the expansion of a plane wave, \begin{equation} e^{i \bm{k} \cdot {\bm{r}}} = e^{ik_z z} \left[ 2 \sum_{m=0}^\infty i^m \mSe_m(\phi, q) \mSe_m(\theta, q) \mJe_m(\mu, q) + 2 \sum_{m=1}^\infty i^m \mSo_m(\phi, q) \mSo_m(\theta, q) \mJo_m(\mu, q) \right]\,, \label{eqn:plane1} \end{equation} where $\mu$, $ \theta$, and $z$ are the elliptic cylinder coordinates of ${\bm{r}}$ and $\phi = \arctan \frac{k_y}{k_x}$ is the angle of $\bm{k} = (k_x,k_y,k_z)$ in the $xy$-plane, with $k^2 = k_x^2 + k_y^2 + k_z^2$. We will work on the imaginary $k$-axis $k=i\kappa$, so that $k_y = i\sqrt{\kappa^2+k_x^2+k_z^2}$ and $q = -d^2(\kappa^2 + k_z^2)/4$ is negative. As a result, it is convenient to rewrite these expressions in terms of modified radial functions, \begin{eqnarray} G(\bm{r}_1, \bm{r}_2, k) &=& \int_{-\infty}^\infty \frac{d k_z}{2 \pi} \frac{1}{\pi} \left[ \sum_{m=0}^\infty \mSe_m(\theta_1, q) \mSe_m(\theta_2, q) \mIe_m(\mu_<, -q) \mKe_m(\mu_>, -q) \right. \cr && \left. + \sum_{m=1}^\infty \mSo_m(\theta_1, q) \mSo_m(\theta_2, q) \mIo_m(\mu_<, -q) \mKo_m(\mu_>, -q) \right] \label{eqn:Green2} \end{eqnarray} and \begin{equation} e^{i \bm{k} \cdot {\bm{r}}} = e^{ik_z z} \left[ 2 \sum_{m=0}^\infty (-1)^m \mSe_m(\phi, q) \mSe_m(\theta, q) \mIe_m(\mu, -q) + 2 \sum_{m=1}^\infty (-1)^m \mSo_m(\phi, q) \mSo_m(\theta, q) \mIo_m(\mu, -q) \right]\,, \label{eqn:plane2} \end{equation} where $\mIe_m(\mu, -q) = i^{-m} \mJe_m(\mu, q)$, $\mIo_m(\mu, -q) = i^{-m} \mJo_m(\mu, q)$, $\mKe_m(\mu, -q) = i^{m+1} \frac{\pi}{2} \mHe_m(\mu, q)$ and $\mKo_m(\mu, -q) = i^{m+1} \frac{\pi}{2} \mHo_m(\mu, q)$ are the modified outgoing radial functions. We will consider scattering with Dirichlet and Neumann boundary conditions on an elliptic cylinder of radius $\mu_0$. 
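As a consistency check on the expansion~(\ref{eqn:plane1}), in the circular limit $d\to 0$ it must reduce to the familiar Jacobi--Anger identity $e^{iz\cos\alpha}=\sum_{m=-\infty}^{\infty} i^m J_m(z)e^{im\alpha}$, which is easy to confirm numerically; a sketch of ours, assuming SciPy is available:

```python
import numpy as np
from scipy.special import jv

def jacobi_anger(z, alpha, mmax=40):
    """Truncated Jacobi-Anger sum: the circular-cylinder limit of the
    elliptic plane-wave expansion, with the even and odd angular
    functions recombined into exp(i m alpha)."""
    m = np.arange(-mmax, mmax + 1)
    return np.sum((1j ** m) * jv(m, z) * np.exp(1j * m * alpha))
```

Agreement with $e^{iz\cos\alpha}$ to near machine precision at moderate $z$ sets the benchmark that the Mathieu-function expansion must reproduce as $d\to 0$.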
For the scattering amplitudes, we have $\displaystyle {\cal T}_{m k_z m' k_z'}^{e,o} = 2\pi \delta(k_z - k_z')\delta_{mm'} {\cal T}_m^{e,o}$, with \begin{eqnarray} {\cal T}_m^e = -\frac{\mIe_m \left(\mu_0, -q \right)} {\mKe_m\left(\mu_0, -q \right)} \qquad {\cal T}_m^o = -\frac{\mIo_m \left(\mu_0, -q \right)} {\mKo_m\left(\mu_0, -q \right)} && \hbox{\qquad (Dirichlet)} \cr {\cal T}_m^e = -\frac{\mIe_m' \left(\mu_0, -q \right)} {\mKe_m' \left(\mu_0, -q \right)} \qquad {\cal T}_m^o = -\frac{\mIo_m' \left(\mu_0, -q \right)} {\mKo_m' \left(\mu_0, -q \right)} && \hbox{\qquad (Neumann),} \end{eqnarray} where prime indicates a derivative with respect to $\mu$. \section{Elliptic Cylinder and Plane} To consider the elliptic cylinder's interaction with a plane, we will need to connect the elliptic cylinder and planar geometries. To do so, we make use of the expression for the free Green's function in Cartesian coordinates for $y_2>y_1$, \begin{equation} G(\bm{r}_1,\bm{r}_2, k) = \int_{-\infty}^\infty \frac{d k_z}{2 \pi} e^{i k_z(z_2 - z_1)} \frac{i}{4\pi} \int_{-\infty}^\infty \frac{dk_x}{k_y} e^{i(k_x (x_2-x_1) + k_y (y_2 - y_1))} \,, \label{eqn:Greenplane} \end{equation} where $k_y = \sqrt{k^2 - k_x^2 - k_z^2} = i \sqrt{\kappa^2 + k_x^2 + k_z^2}$. We equate Eq.~(\ref{eqn:Greenplane}) to the Green's function in Eq.~(\ref{eqn:Green2}), expand the plane wave $\displaystyle e^{i \bm{k}\cdot{\bm{r}_2}}$ in Eq.~(\ref{eqn:Greenplane}) using Eq.~(\ref{eqn:plane2}), make the substitution $k_x \to -k_x$, and finally use the orthogonality of the regular elliptic cylinder solutions to equate both sides term by term in the sums over $m$. 
The result is an expansion for the elliptic outgoing wave solutions in terms of plane waves for $y<0$ \cite{Abramowitz}, \begin{eqnarray} \mSe_m(\theta, q) \mKe_m(\mu, -q) e^{ik_z z} &=& \int_{-\infty}^{\infty} dk_x \left[ \frac{i}{2 k_y} \mSe_m(\phi, q)\right] e^{-i k_y y + i k_x x} e^{ik_z z} \cr \mSo_m(\theta, q) \mKo_m(\mu, -q) e^{ik_z z} &=& \int_{-\infty}^{\infty} dk_x \left[ \frac{-i}{2k_y} \mSo_m(\phi, q)\right] e^{-i k_y y + i k_x x} e^{ik_z z} \,. \label{eqn:expandout} \end{eqnarray} The quantities in brackets represent the translation matrix elements, which we must then multiply by the normalization factor $\frac{C^{\hbox{\tiny elliptic}}_m}{C^{\hbox{\tiny plane}}_{k_x}}$, where we can read off $C^{\hbox{\tiny elliptic}}_m = \sqrt{\frac{1}{\pi}}$ and $C^{\hbox{\tiny plane}}_{k_x} = \sqrt{\frac{i}{4\pi k_y}}$ from the expressions for the free Green's function in Eqs.~(\ref{eqn:Green2}) and (\ref{eqn:Greenplane}). Finally, the $T$-matrix elements for the plane in Cartesian coordinates are simply ${\cal T}^P = \pm 1$ for Neumann and Dirichlet boundary conditions respectively. (For more general boundary conditions on the plane, this scattering amplitude would be a function of $k_x$.) We have now obtained the $T$-matrix elements, which describe how waves scatter off each object individually, and the translation matrix elements, which convert the scattering bases between the two objects. As a result, we are prepared to assemble these ingredients into the result for the full Casimir interaction energy per unit length. We consider a perfectly conducting plane oriented perpendicular to the $y$-axis and a perfectly conducting elliptic cylinder with its $z$-axis parallel to the plane, its center at a distance $H$ from the plane, and its major axis at an angle $\varphi$ to the plane, as shown in Fig.~\ref{fig:tilt}. 
This angle represents a rotation of the elliptic cylinder coordinates $\theta$ and $\mu$ relative to the Cartesian coordinates $x$ and $y$, which we then implement in Eq.~(\ref{eqn:expandout}) by adding a constant shift $\varphi$ to the angle $\phi=\arctan \frac{k_y}{k_x}$ in the translation matrix elements. \begin{figure}[htbp] \includegraphics[width=0.6\linewidth]{tilt} \caption{Geometry for the elliptic cylinder and plane.} \label{fig:tilt} \end{figure} For a particular choice of boundary conditions, we can now use the approach of Refs.~\cite{spheres,scalar,universal} to write the Casimir energy per unit length as \begin{equation} \frac{\cal E}{\hbar c L}= \int_0^\infty \frac {d\kappa}{2 \pi} \int_{-\infty}^\infty \frac {dk_z}{2 \pi} \log \det \left(\mathbbm{1}_{mm'}^{\chi \chi'} - {\cal T}_{m}^{\chi} \int \frac{i d k_x}{k_y} {\cal U}_{m k_x}^{\chi} {\cal T}^{P}_{k_x} \hat {\cal U}_{m' k_x}^{\chi'} \right)\,, \end{equation} where the matrix determinant runs over $\chi,\chi'=o,e$ with $m=0,1,2,3\ldots$ for $\chi=e$ and $m=1,2,3\ldots$ for $\chi=o$, and similarly for $m'$ and $\chi'$. The translation matrices ${\cal U}^\chi_{m k_x}$ and reverse translation matrices $\hat {\cal U}^\chi_{m k_x}$ are given by \begin{equation} {\cal U}^e_{m k_x} = \mSe_m\left(\phi + \varphi, q\right) e^{ik_y H} \qquad \hat{\cal U}^e_{m k_x} = \mSe_m\left(-\phi + \varphi, q\right) e^{ik_y H} \label{eqn:trans1} \end{equation} for the even modes and \begin{equation} {\cal U}^o_{m k_x} = \mSo_m\left(\phi + \varphi, q\right) e^{ik_y H} \qquad \hat{\cal U}^o_{m k_x} = \mSo_m\left(-\phi + \varphi, q\right) e^{ik_y H} \label{eqn:trans2} \end{equation} for the odd modes. We can then change the integration variable from $k_x$ to $u=\frac{1}{i} \left(\phi -\frac{\pi}{2}\right)$ and combine the $\kappa$ and $k_z$ equations into a single integral over $p=\sqrt{\kappa^2 + k_z^2}$, so that $q=-\frac{d^2 p^2}{4}$. 
We obtain \begin{equation} \frac{\cal E}{\hbar c L}= \frac{1}{4\pi}\int_0^\infty p dp \log \det \left[\mathbbm{1}_{mm'}^{\chi \chi'} - {\cal T}^\chi_{m} {\cal T}^P \int_{-\infty}^\infty du e^{-2 p H \cosh u} \genfrac{}{}{0pt}{}{\mSe_m}{\mSo_m} \left(\frac{\pi}{2} + iu + \varphi, q\right) \genfrac{}{}{0pt}{}{\mSe_{m'}}{\mSo_{m'}} \left(\frac{\pi}{2} - iu + \varphi, q\right) \right]\,, \label{eqn:energy} \end{equation} where we choose $\mSe_m$ for $\chi=e$ and $\mSo_m$ for $\chi=o$, and similarly for $m'$ and $\chi'$. The full electromagnetic Casimir energy is the sum of this result for Dirichlet conditions on both surfaces and for Neumann conditions on both surfaces. Note that the established result for an ordinary cylinder \cite{Emig06} can be obtained from this expression by replacing the elliptic functions with their ordinary cylindrical analogs, combining the even and odd modes using $\cosh m u \cosh m' u + \sinh m u \sinh m' u = \cosh (m+m')u$, and employing the integral identity \begin{equation} K_{n}(\sigma) = \int_0^\infty e^{-\sigma \cosh u} \cosh n u \, du \,. \end{equation} There are several special cases of interest in which the calculation simplifies: \begin{itemize} \item Plane perpendicular to the ellipse's major axis. For $\varphi = \pi/2$, the elliptic cylinder's major axis runs perpendicular to the plane. By the reflection symmetry across the $y$-axis, the even and odd sectors decouple, and we can compute the Casimir energy by considering the odd and even elliptic modes separately. \item Plane parallel to the ellipse's major axis. For $\varphi=0$, the elliptic cylinder's major axis lies parallel to the plane. This case also has reflection symmetry across the $y$-axis, but this symmetry does not correspond directly to the symmetry of the even and odd Mathieu functions. 
Instead, the even Mathieu functions of even order and the odd Mathieu functions of odd order are symmetric under this transformation, while the odd Mathieu functions of even order and the even Mathieu functions of odd order are antisymmetric. (This is the same symmetry structure as the ordinary trigonometric functions have when their argument is displaced by $\pi/2$.) Thus we can again decompose the problem into two independent sectors, consisting of the modes for which the parity of the elliptic functions matches the parity of $m$, and the modes for which they are opposite. \item Zero radius cylinder. An elliptic cylinder with $\mu_0=0$ becomes a strip of width $2d$, allowing us to study the effects of edges \cite{wedge,parabolic1,parabolic2,Gies3,Kabat1}. In that case we have ${\cal T}_m^o =0$ for a Dirichlet boundary and ${\cal T}_m^e =0$ for a Neumann boundary, since in these cases the free modes already obey the boundary condition at the surface. These modes therefore give zero contribution to the Casimir energy in this case, and can be omitted from the calculation. \end{itemize} \section{Numerical Results} We can now compute the Casimir energy by straightforward numerical integration of Eq.~(\ref{eqn:energy}). To compute the modified radial functions needed for the scattering amplitude, we use the package of Alhargan \cite{Alhargan:2000,Alhargan:2000a}. (Standard packages such as Maple and Mathematica only implement the angular Mathieu functions of the first kind. Although the radial functions are related to the angular functions with imaginary argument, without an implementation of the second kind angular function we cannot take advantage of this relationship to compute the functions needed for the scattering amplitude.) The angular functions arising from the translation matrix, on the other hand, need to be computed for complex arguments, which are not supported directly in the Alhargan package.
Fortunately, since only the first kind angular functions are required, we can use the implementation in Mathematica, which supports fully complex arguments. As a final complication, because of problems with the Mathieu function routines in the current version of Mathematica for the case where the parameter $q<0$, we make use of the identities \begin{eqnarray} \mSe_m(\mu,q) &=& \left\{ \begin{array}{l@{\quad}l} (-1)^{\frac{m}{2}} \mSe_m\left(\frac{\pi}{2} - \mu, -q\right) & \hbox{for $m$ even} \cr (-1)^{\frac{m-1}{2}} \mSo_m\left(\frac{\pi}{2} - \mu, -q\right) & \hbox{for $m$ odd} \end{array} \right. \cr \mSo_m(\mu, q) &=& \left\{ \begin{array}{l@{\quad}l} (-1)^{\frac{m}{2}-1} \mSo_m\left(\frac{\pi}{2} - \mu, -q\right) & \hbox{for $m$ even} \cr (-1)^{\frac{m-1}{2}} \mSe_m\left(\frac{\pi}{2} - \mu, -q\right) & \hbox{for $m$ odd} \end{array} \right. \end{eqnarray} so that we only need to compute the angular functions for $-q>0$. As a result, after importing the Alhargan routines for the modified radial functions, it is possible to carry out the full calculation within Mathematica. Because of limitations in the ability of the angular routines to handle large imaginary arguments, however, it was not possible to extend the calculation to very small separations. Figure~\ref{fig:rotate} shows the orientation dependence of the Casimir interaction energy for a perfectly conducting strip (an elliptic cylinder of zero radius) for the case where the distance $H$ from the center of the strip to the plane is twice the distance from the center of the strip to the edge of the strip, $H=2d$. Because higher values did not change the results appreciably, the matrix determinants were truncated at $m_{\hbox{\tiny max}} = 8$. We see that the lowest energy occurs for $\varphi=\pi/2$, when the strip is perpendicular to the plane. 
As expected, the result for the energy per unit length in this case, $\frac{{\cal E} d^2}{\hbar c L} = -0.00637$, is less negative than the $-0.00674$ one finds \cite{parabolic1,parabolic2} for the case where the strip is extended to an infinite half-plane whose edge maintains the same distance $H-d=d$ from the infinite plane. We note, however, that if we subtract the contribution from a half-plane at distance $H+d=3d$ from the result for the half-plane at distance $H-d=d$ to account for the missing remainder of the half-plane, we obtain $-0.00674 \cdot \frac{8}{9} = -0.00599$, which underestimates the magnitude of the true result for the strip. We also compare these results to the proximity force approximation (PFA), \begin{equation} \frac{{\cal E}_{PFA}^{(0)}}{\hbar c L}= -\frac{\pi^2}{720} \int_{-d\cos\varphi}^{d\cos\varphi} \frac{dx}{(H+x \tan \varphi)^3} = -\frac{\pi^2}{360} \frac{Hd \cos \varphi}{\left( H^2 - d^2 \sin^2 \varphi\right)^2} \end{equation} which gives a good approximation for $\varphi=0$ but goes to zero at $\varphi=\pi/2$. For $\varphi\neq0$ the derivative expansion correction to the PFA \cite{Fosco:2011xx,beyondpfa} is also invalid, because of the sharp curvature at the point of closest approach. \begin{figure} \includegraphics[width=0.5\linewidth]{rotate} \caption{Electromagnetic Casimir interaction energy for a perfectly conducting strip opposite a perfectly conducting plane, as a function of the orientation angle $\varphi$. The distance $H$ from the center of the strip to the plane is twice the distance from the center of the strip to the edge of the strip, $H=2d$. The solid line shows the proximity force approximation.} \label{fig:rotate} \end{figure} Figure \ref{fig:pfa} shows the Casimir interaction energy for a strip oriented parallel to a plane as a function of the distance to the plane. The energy is shown as a ratio with the PFA result (in this case the correction from the derivative expansion vanishes).
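The closed-form evaluation of the PFA integral above, and the half-plane subtraction estimate $-0.00674 \cdot \frac{8}{9} \approx -0.00599$, can be cross-checked numerically. The following is our own minimal sketch, not part of any Casimir code: lengths are in units of $d$, energies in units of $\hbar c/d^2$, and the function names are ours.

```python
import math

def pfa_closed_form(H, phi, d=1.0):
    """Closed-form PFA energy per unit length:
    -pi^2/360 * H*d*cos(phi) / (H^2 - d^2*sin(phi)^2)^2."""
    return (-math.pi**2 / 360) * H * d * math.cos(phi) \
        / (H**2 - d**2 * math.sin(phi)**2)**2

def pfa_integral(H, phi, d=1.0, steps=100_000):
    """Trapezoidal evaluation of
    -pi^2/720 * int_{-d cos(phi)}^{+d cos(phi)} dx / (H + x tan(phi))^3."""
    a, b = -d * math.cos(phi), d * math.cos(phi)
    h, t = (b - a) / steps, math.tan(phi)
    total = 0.0
    for i in range(steps + 1):
        x = a + i * h
        w = 0.5 if i in (0, steps) else 1.0  # trapezoid endpoint weights
        total += w / (H + x * t) ** 3
    return (-math.pi**2 / 720) * total * h

# The direct integral and the closed form agree, e.g. for H = 2d, phi = pi/4:
print(pfa_closed_form(2.0, math.pi / 4), pfa_integral(2.0, math.pi / 4))

# Half-plane subtraction estimate for the strip at H = 2d:
print(-0.00674 * (1 - (1 / 3) ** 2))  # about -0.00599
```

The sketch also makes the $\varphi$-dependence explicit: the PFA vanishes like $\cos\varphi$ as $\varphi \to \pi/2$, where the exact strip result remains finite.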
As in the case of the ordinary cylinder \cite{Emig06}, the PFA is an underestimate at large distances, but at short distances the exact result approaches the PFA result from below. These calculations were carried out with the matrices truncated at several different values of $m_{\hbox{\tiny max}}$ up to $m_{\hbox{\tiny max}} = 16$, with the final result then obtained by extrapolating these results for $m_{\hbox{\tiny max}} \to \infty$. \begin{figure} \includegraphics[width=0.5\linewidth]{pfa} \caption{Ratio of the electromagnetic Casimir interaction energy to the proximity force approximation (PFA) for a perfectly conducting strip of width $2d$ parallel to a perfectly conducting plane, as a function of the separation $H$. As in the case of an ordinary cylinder \cite{Emig06}, the ratio is a nonmonotonic function of $H$.} \label{fig:pfa} \end{figure} \section{Discussion} We have computed the Casimir interaction energy for an elliptic cylinder, the last remaining geometry for which electromagnetic scattering is separable. For a plane, cylinder, and sphere, the problem remains separable even for a dielectric, while for a parabolic cylinder, elliptic cylinder, wedge, and cone only perfect conductors can be solved exactly. However, the scattering method is particularly useful in these latter cases, because they contain sharp limits in which the PFA is invalid. In principle, it should be possible to extend the elliptic cylinder result to a hyperbolic cylinder in the same way as the wedge is obtained from the ordinary cylinder and the cone is obtained from the sphere, but at present there do not appear to be routines available for computing all the Mathieu functions of complex order that would be needed for such a calculation. 
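The text above does not specify how the $m_{\hbox{\tiny max}} \to \infty$ extrapolation of the truncated determinants was performed. As one illustration, assuming roughly geometric convergence in $m_{\hbox{\tiny max}}$ (an assumption on our part, not a statement from the calculation), Aitken's $\Delta^2$ process recovers the limit from three successive truncations:

```python
def aitken(x0, x1, x2):
    """Aitken Delta^2 extrapolation of three successive truncation values.
    Exact for sequences of the form x_n = L + a * r**n."""
    d1, d2 = x1 - x0, x2 - x1
    return x2 - d2 * d2 / (d2 - d1)

# Synthetic truncation data converging geometrically to L = 5:
truncations = [5 + 3 * 0.5**n for n in range(3)]  # 8.0, 6.5, 5.75
print(aitken(*truncations))  # recovers 5.0
```

In practice one would apply this to the energies computed at three successive values of $m_{\hbox{\tiny max}}$ and check stability against the choice of triple.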
Focusing on the limit in which the elliptic cylinder becomes a strip has made it possible to study the orientation dependence of the Casimir force, to show how the PFA depends on distance and angle, and to observe non-superposition effects in the perpendicular configuration (where the PFA is invalid). With improvements to the available routines for computing Mathieu functions, this calculation could offer an independent check of the edge correction that was obtained for half-planes \cite{parabolic1,parabolic2,planes}. More generally, this calculation adds another entry to the toolbox of Casimir problems that can be cast in analytically tractable form. \section{Acknowledgements} N.\ G. thanks T.\ Emig, R.\ L.\ Jaffe, M.\ Kardar, and K.\ Milton for helpful conversations and suggestions. This work was supported in part by the National Science Foundation (NSF) through grant PHY-1213456. \bibliographystyle{apsrev}
\section{Introduction} Let $X$ be a projective variety over $\mathbb{C}$ and let $N_{k}(X)_{\mathbb{Z}}$ denote the group of $k$-cycle classes up to numerical equivalence. Given a class $\alpha \in N_{k}(X)_{\mathbb{Z}}$, we define the mobility count of $\alpha$ to be \begin{equation*} \mc(\alpha) = \max \left\{ b \in \mathbb{Z}_{\geq 0} \, \left| \, \begin{array}{c} \textrm{any }b\textrm{ general points of } X \textrm{ are contained} \\ \textrm{in an effective cycle of class } \alpha \end{array} \right. \right\}. \end{equation*} The mobility count is analogous to the dimension of the space of sections of a divisor: for a divisor $L$ the number of general points that can be imposed on members of $|L|$ is $h^{0}(X,L)-1$. This analogy is richer than might be expected at first sight. \cite{lehmann16} and \cite{fl13} show that one can understand the ``positivity'' of a cycle class $\alpha$ by studying the asymptotic behavior of $\mc(m\alpha)$ as $m$ increases. In this paper, we study the asymptotic behavior of classes on the boundary of the pseudo-effective cone. Continuing the analogy, we define: \begin{defn} \label{iitakadimdef} Let $X$ be a projective variety of dimension $n$ and let $\alpha \in N_{k}(X)_{\mathbb{Z}}$. If some positive multiple of $\alpha$ is represented by an effective cycle, we define the Iitaka dimension of $\alpha$ to be \begin{equation*} \kappa(\alpha) := (n-k) \sup \left\{ r \in \mathbb{R}_{\geq 0} \left| \, \limsup_{m \to \infty} \frac{\mc(m\alpha)}{m^{r}} > 0 \right. \right\}. \end{equation*} Otherwise, we set $\kappa(\alpha) = -\infty$. \end{defn} Here the term $(n-k)$ is simply a convenient rescaling factor. It is not hard to show that the Iitaka dimension takes values in the set \begin{equation*} \kappa(\alpha) \in \{ -\infty \} \cup \{ 0 \} \cup [n-k,n]. \end{equation*} Our main goal is to analyze the possible values of the Iitaka dimension and to understand their relationship with geometry. 
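To illustrate Definition \ref{iitakadimdef} concretely: $\kappa(\alpha)$ is $(n-k)$ times the growth exponent of $\mc(m\alpha)$, so it can be estimated from a log-log slope. The sketch below uses a hypothetical model function in place of an actual mobility count (computing true mobility counts is of course the hard part); the growth rate $\mc(m) \sim 2m^{3/2}$ is the one a big curve class on a threefold would exhibit.

```python
import math

def iitaka_dim_estimate(mc, n, k, m_lo=10**3, m_hi=10**6):
    """Estimate kappa(alpha) = (n - k) * r from the growth mc(m*alpha) ~ C * m^r,
    using a log-log slope between two sample values of m."""
    slope = (math.log(mc(m_hi)) - math.log(mc(m_lo))) \
        / (math.log(m_hi) - math.log(m_lo))
    return (n - k) * slope

# Hypothetical model: a big curve class (k = 1) on a threefold (n = 3),
# with mobility count growing like 2 * m^{3/2}:
mc_model = lambda m: int(2 * m ** 1.5)
print(iitaka_dim_estimate(mc_model, n=3, k=1))  # close to kappa = n = 3
```

The multiplicative constant $C$ drops out of the slope, which is why the definition only depends on $\alpha$ through the growth rate of $\mc(m\alpha)$.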
Our results are motivated by the following conjecture: \begin{conj} \label{mainconj} Let $X$ be a projective variety and let $\alpha \in N_{k}(X)_{\mathbb{Z}}$. Then $\kappa(\alpha) \in \mathbb{Z}_{\geq 0} \cup \{ - \infty \}$. \end{conj} This conjecture is perhaps surprising: while higher codimension cycles exhibit many pathologies not present for divisors, the conjecture predicts a cleaner picture from the viewpoint of positivity. It is also worthwhile to study weaker variants. For example, if we fix the dimensions $n,k$, are there only finitely many possible values of the Iitaka dimension? If the Iitaka dimension is integer-valued, then it captures fundamental geometric information about $\alpha$, and it would be interesting to clarify this geometric input. The following question is related to a conjecture of \cite{voisin10}. \begin{ques} Let $X$ be a smooth projective variety and let $\alpha \in N_{k}(X)_{\mathbb{Z}}$. As we let $W$ vary over all effective cycles with class proportional to $\alpha$ which contain and are smooth at a fixed very general point $p$, is $\kappa(\alpha)$ determined by the set of tangent planes $T_{p}W \subset T_{p}X$? \end{ques} \begin{exmple} Let $X$ be a smooth projective variety and $D$ be a Cartier divisor on $X$. The usual Iitaka dimension of $D$ can only take integer values (see \cite{iitaka70}, \cite{iitaka71}). Section \ref{divisorsec} shows that the result remains true in the numerical setting: for a Weil divisor class $\alpha$ on any projective variety $X$, $\kappa(\alpha) \in \{ - \infty, 0, 1, \ldots, \dim X \}$. \end{exmple} \begin{exmple} Let $X$ be a projective variety of dimension $n$ and let $\alpha \in N_{k}(X)_{\mathbb{Q}}$ be any class. \cite{lehmann16} shows that $\kappa(\alpha)$ attains its maximum value $n$ if and only if $\alpha$ lies in the interior of the pseudo-effective cone (in which case we say that $\alpha$ is big). 
In fact more is true: there is a constant $\epsilon_{n,k} > 0$ such that $\kappa(\alpha)$ can not take values in the set $(n-\epsilon_{n,k},n)$, showing ``discreteness'' of the Iitaka dimension in a small neighborhood of $n$. \end{exmple} \begin{exmple} \label{mobilityofcurves} Let $\alpha$ be a curve class on a projective variety $X$ of dimension $n$. \cite[Theorem 2.4]{8authors} shows that if two general points of $X$ can be connected by an effective cycle with class proportional to $\alpha$, then $\alpha$ is big. Thus there are four distinct behaviors for the Iitaka dimension of $\alpha$: \begin{enumerate} \item $\kappa(\alpha) = -\infty$. By definition this happens when no positive multiple of $\alpha$ is represented by an effective cycle. \item $\kappa(\alpha) = 0$. This occurs when no positive multiple of $\alpha$ is represented by a curve through a very general point of $X$. In particular $\mc(m\alpha) = 0$ for every $m>0$. \item $\kappa(\alpha) = n-1$. This occurs when there is an effective cycle of class proportional to $\alpha$ through one general point of $X$, but two general points can not be connected by a chain of such cycles. Then the quotient theory of \cite{campana81} and \cite{kmm92} yields a rational map $g: X \dashrightarrow Z$ with $\dim Z > 0$ contracting all such curves through very general points. In particular, there must be a positive constant $C$ such that $\mc(m\alpha) = Cm$ for every sufficiently divisible $m$. \item $\kappa(\alpha) = n$. By \cite{8authors}, the only other possibility is that $\alpha$ is big and that $\mc(m\alpha)$ has the maximal possible growth rate. \end{enumerate} \end{exmple} We first prove some general results in support of Conjecture \ref{mainconj}, for example: \begin{prop} Let $X$ be a smooth projective variety of dimension $n \geq 3$. Suppose that $\alpha \in \Eff_{n-2}(X)$ is an extremal ray and that there is an ample divisor $i: A \hookrightarrow X$ such that $\alpha \not \in A \cdot N^{1}(X)$. 
Then $\kappa(\alpha) \leq n-1$. \end{prop} We then focus on two specific situations: classes contracted by morphisms, and Schubert classes on Grassmannians. These examples can be seen as prototypes of arbitrary boundary classes, and so are particularly interesting as indicators of what to expect in general. \subsection{Grassmannians} Suppose that $X = G(m,n)$ is a Grassmannian of $m$-planes in an $n$-dimensional vector space and $\alpha$ is a Schubert class on $X$. Given a non-increasing tuple of integers $\lambda = (\lambda_{1},\ldots,\lambda_{m})$ whose components $\lambda_{i}$ satisfy $0 \leq \lambda_{i} \leq n-m$, we let $\sigma_{\lambda}$ denote the class of the Schubert variety parametrizing linear subspaces $W$ whose dimensions of intersection with the members of a fixed full flag $V_{\bullet}$ are determined by \begin{equation*} \dim(W \cap V_{n-m+i-\lambda_{i}}) \geq i. \end{equation*} We focus on the easiest case $G(2,n)$, where we can give a complete description of the Iitaka dimension for Schubert classes. \begin{thrm} \label{iitakadimg2nintro} The Iitaka dimension of a Schubert cycle on $G(2,n)$ is determined by the following list: \begin{itemize} \item $\kappa(\sigma_{1}) = \kappa(\sigma_{n-2,n-3}) = 2(n-2)$. \item $\kappa(\sigma_{r}) = n-2$ for $1 < r \leq n-2$. \item $\kappa(\sigma_{r,r-1}) = 2r$ for $1 < r < n-2$. \item $\kappa(\sigma_{r,s}) = r+s$ otherwise. \end{itemize} \end{thrm} In particular, the Iitaka dimension is usually the smallest possible value. \begin{rmk} The study of the Iitaka dimension for Schubert classes is closely related to the differential-geometric notion of Schubert rigidity developed in the series of papers \cite{walters97}, \cite{bryant05}, \cite{hong05}, \cite{hong07}, \cite{coskun11}, \cite{rt12}, \cite{robles13}, \cite{cr13}, \cite{coskun14}. A Schubert class $\sigma$ is called multi-rigid if the only effective cycles with class proportional to $\sigma$ are sums of Schubert varieties.
Note that any multi-rigid class automatically has the minimal possible Iitaka dimension $\kappa(\sigma) = \mathrm{codim}(\sigma)$. By comparison, the calculation of the Iitaka dimension yields a weaker conclusion but provides interesting geometric information for every class. \end{rmk} \subsection{Contracted classes} We next turn to classes contracted by morphisms. We say that $\alpha \in \Eff_{k}(X)$ is a $\pi$-contracted class if $\pi: X \to Z$ is a morphism such that $\pi_{*}\alpha = 0$. If $\dim(Z) \leq k$, then any contracted class must lie on the boundary of the pseudo-effective cone. Using the geometry of $\pi$ we can expect to obtain bounds on the Iitaka dimension of $\alpha$. \begin{exmple} \label{p2timesp2one} Let $X = \mathbb{P}^{2} \times \mathbb{P}^{2}$. Let $A$ and $H$ denote the pullback of the hyperplane class from the first and second factors respectively. Then $\Eff_{2}(X)$ is a simplicial cone with generators $H^{2}, H \cdot A, A^{2}$. The Iitaka dimensions of the non-zero boundary classes are determined by the extremal face: \begin{enumerate} \item $\alpha \in \mathbb{R}_{\geq 0} H^{2} + \mathbb{R}_{\geq 0} A^{2}$. Each component of a cycle representing $m\alpha$ is a fiber of one of the projection maps and can contain at most one general point. Thus $\kappa(\alpha) = 2$. \item $\alpha \in \mathbb{R}_{>0} (H \cdot A)$. An irreducible cycle representing $m(A \cdot H)$ maps to a curve under each projection. If the degrees of these curves are $d_{1}$ and $d_{2}$, the cycle can go through at most \begin{equation*} \min \left\{ \left( \begin{array}{c} d_{1} + 2 \\ 2 \end{array} \right) - 1, \left( \begin{array}{c} d_{2} + 2 \\ 2 \end{array} \right) - 1 \right\} \approx \frac{1}{2} \min \{ d_{1}^{2} ,d_{2}^{2} \} \end{equation*} general points. Since $d_{1}d_{2} = m$, the maximum possible bound occurs when $d_{1} \approx d_{2} \approx m^{1/2}$. It is then not hard to show that $\kappa(\alpha) = 2$.
\item $\alpha \in \mathbb{R}_{>0} H^{2} + \mathbb{R}_{>0} (H \cdot A)$, or the analogous face with the roles of $H$ and $A$ exchanged. Theorem \ref{contractedthrm} shows that $\kappa(\alpha) = 3$. We describe how to construct cycles achieving this growth rate; showing that this rate is the optimal one is somewhat harder. Since any two classes in this face have comparable mobility count growth rates, it suffices to construct a single example $\alpha$ with $\kappa(\alpha) \geq 3$. Let $\phi: \tilde{X} \to X$ be the blow-up along a fiber of the first projection map. This variety admits a map $g: \tilde{X} \to \mathbb{P}^{2} \times \mathbb{P}^{1}$. Let $\beta$ be a complete intersection curve on $\mathbb{P}^{2} \times \mathbb{P}^{1}$, so that $\mc(m\beta) \sim Cm^{3/2}$. Then $g^{*}\beta$ is a surface class whose mobility count achieves the same growth rate, and its pushforward $\alpha = \phi_{*}g^{*}\beta$ has $\kappa(\alpha) \geq 3$. Furthermore $\alpha$ lies in the interior of the desired face. \end{enumerate} \end{exmple} The most useful framework for discussing contracted classes in general was set up by \cite{fl15} (see also \cite{cc15}). \begin{defn} Let $\pi: X \to Z$ be a surjective morphism of projective varieties and let $\alpha \in \Eff_{k}(X)$. Fix an ample divisor $A$ on $Z$. The $\pi$-contractibility index of $\alpha$ is defined to be the largest non-negative integer $c \leq k$ such that $\alpha \cdot \pi^{*}A^{k-c+1}=0$. This definition is independent of the choice of $A$. \end{defn} The expected behavior of the Iitaka dimension for a contracted cycle $\alpha$ depends on the contractibility index. \begin{thrmconj} \label{contractedconj} Let $X$ be a projective variety of dimension $n$. Suppose that $\pi: X \to Z$ is a surjective morphism of projective varieties of relative dimension $e$ and suppose $\alpha \in \Eff_{k}(X)_{\mathbb{Z}}$ has $\pi$-contractibility index $c$. Then: \begin{description} \item[Theorem] If $c > e$, then $\kappa(\alpha) \leq 0$.
\item[Theorem] If $c = e$, then $\kappa(\alpha) \leq \dim Z$. \item[Conjecture] If $k-\dim Z < c < e$, then $\kappa(\alpha) \leq n-c$. \end{description} \end{thrmconj} The transition in behavior from $c > e$ to $c \leq e$ has the following geometric explanation. \cite{fl15} defines a pseudo-effective class $\alpha \in N_{k}(X)$ to be ``$\pi$-exceptional'' if the $\pi$-contractibility index for $\alpha$ is larger than the relative dimension of $\pi$. It then shows that $\pi$-exceptional classes are ``rigid'' in a strong sense, and in particular cannot contain a general point of $X$. We prove two statements in the direction of Conjecture \ref{contractedconj} for $c<e$. First, we show that the conjectural upper bound cannot be improved: for any morphism $\pi: X \to Z$ as above, there exists a class $\alpha \in \Eff_{k}(X)$ of contractibility index $c$ achieving the stated bound on the Iitaka dimension. Second, we prove Conjecture \ref{contractedconj} when $k-c$ is at most $1$. In particular: \begin{thrm} Conjecture \ref{contractedconj} holds if either \begin{itemize} \item $X$ has dimension $\leq 4$, or \item $k \leq 2$. \end{itemize} \end{thrm} \begin{exmple} \label{p2timesp2two} Consider again $X = \mathbb{P}^{2} \times \mathbb{P}^{2}$ equipped with the first projection map $\pi: X \to \mathbb{P}^{2}$. The contractibility index of a non-zero pseudo-effective class $a H^{2} + b (H \cdot A) + c A^{2}$ is simply the smallest exponent of $A$ appearing in a term with non-zero coefficient. Then Theorem \ref{contractedconj} is verified explicitly by Example \ref{p2timesp2one}. \end{exmple} \subsection{Acknowledgements} I would like to thank I.~Coskun for a helpful conversation about Grassmannians. \section{Background} Throughout we work over $\mathbb{C}$. Varieties are irreducible and reduced. A cycle will always mean a $\mathbb{Z}$-cycle unless otherwise qualified, and a numerical class will always mean an $\mathbb{R}$-class unless otherwise qualified.
\subsection{Numerical spaces and cones} For a projective variety $X$, we let $N_{k}(X)_{\mathbb{Z}}$ denote the abelian group of $k$-cycles up to numerical equivalence as in \cite{fulton84}. We then set \begin{align*} N_{k}(X)_{\mathbb{Q}} & := N_{k}(X)_{\mathbb{Z}} \otimes_{\mathbb{Z}} \mathbb{Q} \\ N_{k}(X) & := N_{k}(X)_{\mathbb{Z}} \otimes_{\mathbb{Z}} \mathbb{R} \end{align*} We let $N^{k}(X)$ denote the dual space of $N_{k}(X)$ consisting of $\mathbb{R}$-polynomials in Chern classes of vector bundles on $X$ up to numerical equivalence (and similarly define the dual groups $N^{k}(X)_{\mathbb{Q}}$ and $N^{k}(X)_{\mathbb{Z}}$). There is an intersection product $N^{\ell}(X) \times N_{k}(X) \to N_{k-\ell}(X)$. We refer to \cite{fl14} for a discussion of these spaces and their behavior under morphisms. The pseudo-effective cone $\Eff_{k}(X) \subset N_{k}(X)$ is the closure of the cone generated by classes of effective cycles on $X$. It is a full-dimensional proper convex closed cone. A class in the interior of $\Eff_{k}(X)$ is called big. We will use the notation $\alpha \preceq \beta$ to denote that $\beta - \alpha \in \Eff_{k}(X)$. The dual cone to $\Eff_{k}(X)$ is the nef cone and is denoted $\Nef^{k}(X)$. \begin{lem} \label{intprop} Let $X$ be a projective variety and let $H$ be an ample divisor. \begin{enumerate} \item Suppose that some multiple of $\alpha \in \Eff_{k}(X)$ is represented by an effective class. Then some multiple of $H \cdot \alpha$ is represented by an effective class. In particular, $H \cdot$ preserves pseudo-effectiveness. \item If $\alpha \in \Eff_{k}(X)$ is big, then $H \cdot \alpha \in \Eff_{k-1}(X)$ is big. \end{enumerate} \end{lem} \subsection{Families of cycles and mobility count} \label{familysec} For us, the most convenient definition of a family of cycles is the following. \begin{defn} \label{familydef} Let $X$ be a projective variety. 
A family of $k$-cycles on $X$ consists of a variety $W$, a reduced closed subscheme $U \subset W \times X$, and an integer $a_{i}$ for each component $U_{i}$ of $U$, such that for each component $U_{i}$ the first projection map $p: U_{i} \to W$ is flat and dominant of relative dimension $k$. If each $a_{i} \geq 0$, we say that we have a family of effective cycles. We say that $\sum a_{i}U_{i}$ is the cycle underlying the family. \end{defn} We will usually denote a family of $k$-cycles using the notation $p: U \to W$, with the rest of the data implicit. Over any closed point of $W$, we obtain a $k$-cycle on $X$ by taking the cycle underlying the corresponding fiber of $p$; we call these cycles the members of the family. \begin{defn} \label{mcdefn} Let $X$ be a projective variety and let $W$ be a variety. Suppose that $U\subset W \times X$ is a subscheme and let $p: U \to W$ and $s: U \to X$ denote the projection maps. The mobility count $\mc(p)$ of the morphism $p$ is the maximum non-negative integer $b$ such that the map \begin{equation*} U \times_{W} U \times_{W} \ldots \times_{W} U \xrightarrow{s \times s \times \ldots \times s} X \times X \times \ldots \times X \end{equation*} is dominant, where we have $b$ terms in the product on each side. (If the map is dominant for every positive integer $b$, we set $\mc(p) = \infty$.) For $\alpha \in N_{k}(X)_{\mathbb{Z}}$, the mobility count of $\alpha$, denoted $\mc(\alpha)$, is defined to be the largest mobility count of any family of effective cycles representing $\alpha$. \end{defn} \subsection{Geometry of families} It will often be helpful to replace a family $p: U \to W$ by a slightly modified version. We list briefly several possible changes. We do not describe the constructions formally; most are explained more carefully in \cite{lehmann16}. \begin{itemize} \item Families of cycles admit proper pushforwards and flat pullbacks.
To perform such an operation, one does the corresponding operation on the cycle $\sum a_{i} U_{i}$ underlying the family; after passing to a smaller open subset $W^{0} \subset W$ to ensure flatness, we obtain a new family of cycles. \item Suppose given two families $p: U \to W$ and $q: S \to T$ of effective $k$-cycles. We can define the family sum over an open subset of $W \times T$ which parametrizes a sum of a member of $p$ and a member of $q$. By \cite[Lemma 4.9]{lehmann16}, the mobility count adds under this operation. \item Suppose $p:U \to W$ is a family of effective $k$-cycles and $D$ is a divisor. If the general member of $p$ has no component contained in $D$, then we can take generic intersections to define a family $p \cdot D$ of effective $(k-1)$-cycles. We can also take generic intersections with a linear series of Cartier divisors $\mathcal{D}$. \end{itemize} There are also a couple of constructions which ``improve'' the geometry of a family without changing it in a fundamental way. \begin{itemize} \item Let $p: U \to W$ be a family of effective cycles on $X$. Using the closedness of the Chow scheme, one can show that there is a normal projective variety $W'$ that is birational to $W$ and a family of cycles $p': U' \to W'$ such that $p$ and $p'$ agree over an open subset of the base. By \cite[Proposition 4.5]{lehmann16}, this operation does not change the mobility count of $p$. \item Let $p: U \to W$ be a family of effective cycles on $X$ and suppose that $U$ is irreducible. By base-changing the family via a suitable morphism $g: T \to W^{0}$ for $W^{0} \subset W$ open, we may ensure that the general fiber of the base-change family is irreducible (as geometric integrality of fibers is constructible on the base). It is a priori unclear whether this change can affect the mobility count.
However, by using a suitable family sum to ``glue'' the components back together, one can construct a family whose cycle-theoretic fibers are the same as those for $p$ but whose irreducible components have generically irreducible fibers. The mobility count of this modified family is at least as large as that of $p$. With more care, one can perform an analogous change when $U$ consists of several components. \end{itemize} In sum, when working with families of maximal mobility count, there is no loss in assuming that our family lies over a projective normal base and that the general fibers of the restriction of $p$ to each component of $U$ are irreducible. Suppose given a dominant generically finite map $f: X \dashrightarrow Y$. Let $p: U \to W$ denote a family of effective $k$-cycles on $X$. We define the strict transform family $f_{*}p$ by first removing all components of $U$ whose map to $X$ is not dominant, taking the strict transform of the remaining components to a resolution of $f$, and then pushing forward the members of the family (see \cite{lehmann16}). \begin{thrm}[\cite{lehmann16} Lemma 4.8] \label{mobcountstricttransform} Let $X$ be a projective variety and let $p: U \to W$ be a family of effective $k$-cycles on $X$. If $f: X \dashrightarrow Y$ is a dominant generically finite map, then $\mc(p) \leq \mc(f_{*}p)$. If $f$ is furthermore birational, then $\mc(p) = \mc(f_{*}p)$. \end{thrm} More generally, given any dominant map $f: X \dashrightarrow Z$ which does not contract any effective cycles $V$ through a general point satisfying $m\alpha \succeq [V]$, we obtain a pushforward family which has at least as large a mobility count as the original family. \subsection{Variant of the mobility count} We record for later use a variant of the mobility count. By a family of (closed) subschemes of $X$, we mean a closed subscheme $R \subset S \times X$, where $S$ is a variety and the projection $q: R \to S$ is surjective.
\begin{defn} \label{familymcdef} Let $X$ be a projective variety, and let $q: R \to S$ be a fixed family of subschemes of $X$ equipped with a flat morphism $t: R \to X$. Suppose that $p: U \to W$ is a family of effective $k$-cycles on $X$, and let $p': U' \to W^{0}$ denote the flat pullback family to $R$. We define the mobility count $\mc(p;q)$ of $p$ with respect to the family $q$ to be the largest non-negative integer $b$ such that the map \begin{equation*} U' \times_{W^{0}} U' \times_{W^{0}} \ldots \times_{W^{0}} U' \to S \times S \times \ldots \times S \end{equation*} is dominant, where we have $b$ terms in the product on each side. (If the map is dominant for every positive integer $b$, we set $\mc(p;q) = \infty$.) \end{defn} Conceptually, $\mc(p;q)$ represents how many general elements of $q$ can be intersected by members of the family $p$. One could of course define an analogous notion where $t$ is not assumed to be flat, but this situation is the only one we will need. \begin{lem} \label{familymcbound} Let $X$ be a projective variety of dimension $n$ with a fixed very ample divisor $A$. Let $q: R \to S$ be a family of equidimensional codimension $r$ subschemes of $X$ equipped with a flat morphism $t: R \to X$. Consider a family $p: U \to W$ of effective $k$-cycles where $r > k$. Then \begin{equation*} \mc(p;q) \leq 2^{kr+3r} ([p] \cdot A^{k} + 1)^{\frac{r}{r-k}}. \end{equation*} \end{lem} The goal of this lemma is the exponent $\frac{r}{r-k}$ of the degree of $[p]$; there has been no attempt to optimize the leading constant. \begin{proof} We may assume that $\mc(p;q) > 0$. Just as in Definition \ref{familymcdef}, let $p': U' \to W^{0}$ be the flat pullback family of $p$. Then we have a dominant map \begin{equation*} U' \times_{W^{0}} U' \times_{W^{0}} \ldots \times_{W^{0}} U' \to S \times S \times \ldots \times S \end{equation*} where there are $\mc(p;q)$ terms on both sides.
Consider the map \begin{equation*} U' \times_{W^{0}} U' \times_{W^{0}} \ldots \times_{W^{0}} U' \to R \times R \times \ldots \times R \end{equation*} and let $V_{R}$ denote the image. By composing with the flat map $R \to X$ we obtain \begin{equation*} U' \times_{W^{0}} U' \times_{W^{0}} \ldots \times_{W^{0}} U' \to X \times X \times \ldots \times X. \end{equation*} Let $f: X \dashrightarrow \mathbb{P}^{r}$ denote the rational map defined by a general $(r+1)$-dimensional subspace of $H^{0}(X,A)$. We claim that the induced rational map \begin{equation*} U' \times_{W^{0}} U' \times_{W^{0}} \ldots \times_{W^{0}} U' \dashrightarrow \mathbb{P}^{r} \times \mathbb{P}^{r} \times \ldots \times \mathbb{P}^{r} \end{equation*} is dominant, where there are $\mc(p;q)$ factors on each side. We argue inductively on the number of factors. Suppose we fix a general fiber of the map $R^{\times \mc(p;q)} \to S^{\times \mc(p;q)-1}$. The intersection of $V_{R}$ with this fiber dominates $S$ under the first projection, and a dimension count (using flatness of $R \to X$) shows that this set must meet the pullback of a general complete intersection variety $Q = A^{n-r}$ from the first factor $X$. Varying the fiber we see that $V_{R} \cap f_{1}^{-1}(Q)$ maps dominantly onto $S^{\mc(p;q)-1}$. Repeating this argument inductively, we see that $V_{R}$ must intersect the intersection of the pullbacks of a general $Q$ from all the factors, which is equivalent to the dominance of the map. By construction, this map factors through \begin{equation*} U|_{W^{0}} \times_{W^{0}} U|_{W^{0}} \times_{W^{0}} \ldots \times_{W^{0}} U|_{W^{0}} \dashrightarrow \mathbb{P}^{r} \times \mathbb{P}^{r} \times \ldots \times \mathbb{P}^{r} \end{equation*} which is then itself dominant. In fact, even if we replace $W^{0}$ by a smaller open subset, this map will still be dominant; this follows from the argument of \cite[Proposition 4.5]{lehmann16}. 
By generality of $f$, we can push forward $p$ to define a family of $k$-cycles on $\mathbb{P}^{r}$. Note that this image family has degree $[p] \cdot A^{k}$. Furthermore, since we obtain a dominant map above even when shrinking $W^{0}$ to the locus where the pushforward family is defined, the pushforward family has mobility count at least $\mc(p;q)$. One then applies \cite[Theorem 5.12]{lehmann16} to bound mobility counts on projective space. \end{proof} \section{Basic properties} We next verify some basic properties of the Iitaka dimension. The following Lemma \ref{iitakadimhom} shows that, just as for divisors, the Iitaka dimension is invariant under rescaling. In particular, we can naturally extend the Iitaka dimension to any class $\alpha \in N_{k}(X)_{\mathbb{Q}}$. \begin{lem} \label{iitakadimhom} Let $X$ be a projective variety and let $\alpha \in N_{k}(X)_{\mathbb{Z}}$. Then for any positive integer $c$ we have $\kappa(\alpha) = \kappa(c\alpha)$. \end{lem} \begin{proof} Fix a positive real number $r$, and define the function $g_{r}: N_{k}(X)_{\mathbb{Z}} \to \mathbb{R} \cup \{ \infty \}$ by \begin{equation*} g_{r}(\alpha) = \limsup_{m \to \infty} \frac{\mc(m\alpha)}{m^{r}}. \end{equation*} It suffices to show that $c^{r}g_{r}(\alpha) = g_{r}(c\alpha)$. This is a consequence of the following Lemma \ref{lazlemma} applied to the function $f: \mathbb{N} \to \mathbb{R}_{\geq 0}$ which sends $m \mapsto \mc(m\alpha)$. (Note that the hypothesis of Lemma \ref{lazlemma} is verified by using the additivity of the mobility count under family sums.) \end{proof} \begin{lem}[\cite{lazarsfeld04} Lemma 2.2.38] \label{lazlemma} Let $f: \mathbb{N} \to \mathbb{R}_{\geq 0}$ be a function. Suppose that for any $r,s \in \mathbb{N}$ with $f(r) > 0$ we have that $f(r+s) \geq f(s)$.
Then for any $k \in \mathbb{R}_{>0}$ the function $g: \mathbb{N} \to \mathbb{R} \cup \{ \infty \}$ defined by \begin{equation*} g(r) := \limsup_{m \to \infty} \frac{f(mr)}{m^{k}} \end{equation*} satisfies $g(cr) = c^{k}g(r)$ for any $c,r \in \mathbb{N}$. \end{lem} \begin{rmk} Although \cite[Lemma 2.2.38]{lazarsfeld04} only explicitly addresses the volume function, the essential content of the proof is the more general statement above. \end{rmk} \begin{prop} Let $X$ be a projective variety of dimension $n$ and let $\alpha \in N_{k}(X)_{\mathbb{Z}}$. Then \begin{equation*} \kappa(\alpha) \in \{ -\infty \} \cup \{ 0 \} \cup [n-k,n]. \end{equation*} \end{prop} \begin{proof} The upper bound $\kappa(\alpha) \leq n$ is proved by \cite[Proposition 5.1]{lehmann16}. If every positive multiple of $\alpha$ has vanishing mobility count, then $\kappa(\alpha) \in \{ -\infty, 0\}$. Otherwise for some positive integer $s$ we have $\mc(s\alpha) > 0$. Using additivity of the mobility count under family sums we see that $\mc(ms\alpha) \geq m \mc(s\alpha)$ so that $\kappa(\alpha) \geq n-k$. \end{proof} \subsection{Concentration of mobility count} The following somewhat technical result shows that an increasing mobility count must be concentrated on families of irreducible cycles. \begin{lem} \label{irrfamconcentrates} Let $X$ be a projective variety of dimension $n$ and let $\alpha \in N_{k}(X)_{\mathbb{Q}}$. Suppose that $\kappa(\alpha) > n-k$. Fix an ample divisor $H$ on $X$. Then for any positive integer $M$, any positive constant $C$, and any sufficiently small positive $\epsilon$, there is some integer $m > M$ and an irreducible family of $k$-cycles $p$ such that $\mc(p) > C(H^{k} \cdot p)^{\frac{\kappa(\alpha)}{n-k}-\epsilon}$ and $m\alpha - [p]$ is the class of an effective $\mathbb{Z}$-cycle. \end{lem} \begin{proof} Suppose for a contradiction that we can choose $M$, $C$, and $\epsilon$ violating the condition.
For any $m>M$ such that $m\alpha \in N_{k}(X)_{\mathbb{Z}}$, choose finitely many irreducible families $p_{i}$ whose family sum gives a family of cycles representing $m\alpha$ of maximal mobility count. Since each class $m\alpha-[p_{i}]$ represents an effective $\mathbb{Z}$-cycle, we find that for every sufficiently large $m$ \begin{align*} \mc(m\alpha) & = \sum_{i} \mc(p_{i}) \\ & \leq \sum_{i} C(H^{k} \cdot p_{i})^{\frac{\kappa(\alpha)}{n-k} - \epsilon} \\ & \leq C(H^{k} \cdot \alpha)^{\frac{\kappa(\alpha)}{n-k} - \epsilon} m^{\frac{\kappa(\alpha)}{n-k} - \epsilon} \end{align*} where the last inequality follows from convexity of $x \mapsto x^{\frac{\kappa(\alpha)}{n-k} - \epsilon}$ (the exponent is greater than $1$ for $\epsilon$ sufficiently small) together with the equality $\sum_{i} H^{k} \cdot p_{i} = H^{k} \cdot m\alpha$. This contradicts the growth rate of $\mc(m\alpha)$ forced by $\kappa(\alpha)$. \end{proof} \subsection{Ample intersections} We next analyze the behavior of the Iitaka dimension under intersections with ample divisors. \begin{lem} \label{linearseriesintest} Let $X$ be a projective variety of dimension $n$. Suppose that $p$ is a family of irreducible $k$-cycles and $r: \mathcal{D} \to V$ is a linear series of effective Cartier divisors. Then \begin{equation*} \mc(p \cdot \mathcal{D}) \geq \min\{ \mc(p) - 1, \mc(r) \}. \end{equation*} \end{lem} Recall that $p \cdot \mathcal{D}$ denotes the family of cycles which are intersections of general elements of $p$ with general elements of $\mathcal{D}$. \begin{proof} It suffices to consider the case when $\mc(p) > 1$. Suppose we fix a Cartier divisor $D$ through $\mc(r)$ general points $x_{i}$ of $X$. There is an irreducible element $Z$ of the family $p$ containing a general point of $X - \Supp(D)$ and any $\min\{ \mc(p)-1,\mc(r) \}$ of the $x_{i}$. Since $Z$ is not contained in $\Supp(D)$, we can take cycle-theoretic intersections to obtain the desired family. \end{proof} \begin{prop} \label{ampleintandiitakadim} Let $X$ be a projective variety of dimension $n$ and let $\alpha \in N_{k}(X)_{\mathbb{Q}}$. Suppose that $H$ is an ample $\mathbb{Q}$-Cartier divisor on $X$.
We have $\kappa(\alpha \cdot H) \geq \kappa(\alpha)$, with strict inequality unless $\kappa(\alpha) \in \{ -\infty, 0, n \}$. \end{prop} \begin{proof} Since the Iitaka dimension is invariant under rescaling, it suffices to prove this when $H$ is an ample Cartier divisor satisfying the condition \begin{equation*} \mc(|mH|) \geq cm^{n} \end{equation*} for some constant $c>1$. The statement is obvious when $\kappa(\alpha) = -\infty$. We next analyze the special cases $\kappa(\alpha) \in \{0, n \}$. If some multiple of $\alpha$ is represented by an effective cycle, then by Lemma \ref{intprop} the same is true for some multiple of $\alpha \cdot H$, showing the inequality when $\kappa(\alpha)=0$. If $\alpha$ is big, then $\alpha \cdot H$ is also big by Lemma \ref{intprop}, showing the inequality when $\kappa(\alpha) =n$. Next suppose that $\kappa(\alpha) = n-k$. Clearly we can find effective cycles with class proportional to $\alpha \cdot H$ through any general point of $X$. Thus $\kappa(\alpha \cdot H) \geq n-k+1 > \kappa(\alpha)$. Finally, consider the case when $n-k < \kappa(\alpha) < n$. Fix a positive constant $C$ and an $\epsilon > 0$. Suppose that $p_{i}: U_{i} \to W$ are the irreducible families of $k$-cycles comprising a family $p$ representing $m\alpha$ of maximal mobility count; as described in Section \ref{familysec}, we may assume that each $U_{i}$ has irreducible generic fiber. For $m$ sufficiently large, we have \begin{equation*} \mc(|\lceil m^{\frac{\kappa(\alpha)}{n(n-k)}} \rceil H |) \geq cm^{\frac{\kappa(\alpha)}{n-k}} > Cm^{\frac{\kappa(\alpha)-\epsilon}{n-k}}. \end{equation*} Note that $p$ can have at most $H^{k} \cdot m\alpha$ components, since each irreducible component has positive integral $H$-degree and these degrees sum to $H^{k} \cdot m\alpha$.
Thus for $m$ sufficiently large, we have \begin{align*} \mc(m \lceil m^{\frac{\kappa(\alpha)}{n(n-k)}} \rceil \alpha \cdot H) & \geq \sum_{i} \mc(p_{i} \cdot |\lceil m^{\frac{\kappa(\alpha)}{n(n-k)}} \rceil H |) \\ & \geq \sum_{i} \min\{ \mc(p_{i}) -1, \mc(|\lceil m^{\frac{\kappa(\alpha)}{n(n-k)}}\rceil H |) \} \textrm{ by Lemma \ref{linearseriesintest} } \\ & \geq Cm^{\frac{\kappa(\alpha) - \epsilon}{n-k}} - m(H^{k} \cdot \alpha). \end{align*} Note that $m \cdot \lceil m^{\frac{\kappa(\alpha)}{n(n-k)}} \rceil \leq 2 m^{1+\frac{\kappa(\alpha)}{n(n-k)}}$. Thus, by renormalizing and taking roots, for any positive constant $\widetilde{C}$ and any $\epsilon > 0$ we have for $m$ sufficiently large \begin{equation*} \mc(m\alpha \cdot H) \geq \widetilde{C}m^{\frac{\kappa(\alpha)-\epsilon}{n-k+\frac{\kappa(\alpha)}{n}}}. \end{equation*} The codimension of $\alpha \cdot H$ is $n-k+1$. Since the inequality above holds for any positive $\widetilde{C}$ and any sufficiently small $\epsilon$, by taking limits we find \begin{equation*} \kappa(\alpha \cdot H) \geq \frac{n-k+1}{n-k+\frac{\kappa(\alpha)}{n}} \kappa(\alpha). \end{equation*} Since $\kappa(\alpha) < n$, we have $n-k+\frac{\kappa(\alpha)}{n} < n-k+1$, so the right hand side is strictly greater than $\kappa(\alpha)$, as desired. \end{proof} \subsection{Birational behavior of the Iitaka dimension} Suppose that $\phi: Y \to X$ is a birational morphism of projective varieties. Given any class $\beta \in N_{k}(Y)_{\mathbb{Q}}$, Theorem \ref{mobcountstricttransform} shows that $\kappa(\phi_{*}\beta) \geq \kappa(\beta)$. In this section, we address the opposite question: given a class $\alpha \in N_{k}(X)_{\mathbb{Q}}$, what are the possible Iitaka dimensions of classes $\beta$ satisfying $\phi_{*}\beta = \alpha$? \begin{thrm} \label{birbeh} Let $\phi: Y \to X$ be a birational morphism of projective varieties. Suppose $\alpha \in N_{k}(X)_{\mathbb{Q}}$. Then there is a class $\beta \in N_{k}(Y)_{\mathbb{Q}}$ satisfying $\phi_{*}\beta = \alpha$ and $\kappa(\beta) = \kappa(\alpha)$.
\end{thrm} Before proving this theorem, we need to recall some results of \cite{fl13} concerning the movable cone and its behavior under birational maps. \begin{defn} The movable cone $\Mov_{k}(X)$ is the closure of the cone generated by classes of irreducible subvarieties which deform to cover $X$. We say that $\alpha \in N_{k}(X)$ is movable if it lies in $\Mov_{k}(X)$. \end{defn} \begin{lem}[\cite{fl13} Corollary 6.6] \label{compactnessofmov} Let $\phi: Y \to X$ be a birational morphism of projective varieties. Fix a class $\alpha \in \Eff_{k}(X)$. Then the set of classes \begin{equation*} \mathcal{S} := \{ \beta \in \Mov_{k}(Y) | \phi_{*}\beta \preceq \alpha \} \end{equation*} is compact. \end{lem} We also need: \begin{lem} \label{conelem} Let $M$ be a real vector space, $L \subset M$ a full rank lattice and $T \subset L$ a subsemigroup which generates $L$. For any compact subset $S \subset M$, there is an element $\widetilde{\beta} \in T$ such that \begin{equation*} m(\widetilde{\beta} - S) \cap L \subset T \end{equation*} for any positive integer $m$. \end{lem} \begin{proof} Let $C$ denote the closure of the cone in $M$ generated by elements of $T$. Since $C$ is full-dimensional, it is clear that there is a class $\gamma \in T$ such that $\gamma - S \subset C^{\circ}$. One can then choose a subcone $C' \subset C$ which is finitely generated by a subset of $T$ which still generates $L$ and such that $\gamma - S \subset C'$. Note that $m(\gamma - S) \subset C'$ for any positive integer $m$. One can then conclude by the argument of \cite[Lemma 4.13]{fl13} the existence of a $\beta \in T$ such that \begin{equation*} \beta + (m(\gamma - S) \cap L) \subset T \end{equation*} for any positive integer $m$. Set $\widetilde{\beta} = \beta + \gamma$. \end{proof} \begin{proof}[Proof of Theorem \ref{birbeh}:] Without loss of generality we may suppose $\alpha \in \Eff_{k}(X)_{\mathbb{Z}}$. 
Define: \begin{itemize} \item $T_{Y} \subset N_{k}(Y)_{\mathbb{Z}}$ to be the subsemigroup consisting of effective classes $\xi$ such that $m\alpha - \phi_{*}\xi$ is an effective class for some $m>0$. \item $L_{Y}$ for the sublattice of $N_{k}(Y)_{\mathbb{Z}}$ generated by $T_{Y}$. \item $M_{Y}$ the subspace of $N_{k}(Y)$ spanned by $T_{Y}$. \end{itemize} Let $\mathcal{S}$ be the compact set as constructed in Lemma \ref{compactnessofmov}. Then $\mathcal{S} \cap M_{Y}$ is also compact. By Lemma \ref{conelem}, there is some class $\widetilde{\beta} \in T_{Y}$ such that $m(\widetilde{\beta} - (\mathcal{S} \cap M_{Y})) \cap L_{Y} \subset T_{Y}$ for any positive integer $m$. Fix a positive integer $m$ and a family of effective $k$-cycles $p: U \to W$ of class $m\alpha$ of maximal mobility count. Remove all non-dominant components of $U$ and consider the strict transform family $q$ on $Y$. Note that $[q] \in m\mathcal{S} \cap T_{Y}$. Thus $m \widetilde{\beta} - [q] \in m(\widetilde{\beta} - (\mathcal{S} \cap M_{Y})) \cap L_{Y} \subset T_{Y}$, and in particular it is an effective class. Consequently \begin{equation*} \mc(m \widetilde{\beta}) \geq \mc(q) = \mc(p) = \mc(m \alpha). \end{equation*} Since $\widetilde{\beta} \in T_{Y}$, by definition there is some positive integer $c$ such that $c\alpha - \phi_{*}\widetilde{\beta}$ is an effective class $\nu$. By \cite[Proposition 3.21]{fl14} there is an effective $\mathbb{Q}$-class $\mu$ such that $\phi_{*}\mu= \nu$; let $b$ be a positive integer such that $b\mu$ is an effective $\mathbb{Z}$-class. Set $\beta := \frac{1}{c}(\mu + \widetilde{\beta})$. Then $\mc(cbm\beta) \geq \mc(m\alpha)$ for all positive integers $m$, and we obtain $\kappa(\beta) \geq \kappa(\alpha)$ by the invariance of the Iitaka dimension under rescaling. The reverse inequality follows from Theorem \ref{mobcountstricttransform}. \end{proof} \subsection{Extremal rays} There are some techniques which one can sometimes apply to give upper bounds on the Iitaka dimension of a class on an extremal ray.
\begin{prop} Let $X$ be a smooth projective variety of dimension $n \geq 3$. Suppose that $\alpha \in \Eff_{n-2}(X)_{\mathbb{Q}}$ spans an extremal ray and that there is an ample divisor $A$ such that $\alpha \not \in A \cdot N^{1}(X)$. Then $\kappa(\alpha) \leq n-1$. \end{prop} \begin{proof} Fix a positive integer $q$ so that $qA$ is very ample, and let $H$ be a very general member of $|qA|$. By the Lefschetz hyperplane theorem we see that if $Z \subset H$ is an effective divisor then its class in $X$ lies in $A \cdot N^{1}(X)$. In particular, since $\alpha$ spans an extremal ray, the class $m\alpha - [Z]$ is not pseudo-effective for any positive integer $m$ and for any such choice of $Z$. Let $p: U \to W$ be a family of effective cycles representing $m\alpha$ of maximal mobility count. By the above argument, there is no component of any member of the family which is contained in $H$. Thus, by taking the intersection of these cycles with the divisor $H$ we obtain a family $p_{H}$ of $(n-3)$-cycles on $H$ of class $m \alpha \cdot H$. By arguing as in \cite[Theorem 5.12]{lehmann16}, we see that $\mc_{X}(p) \leq \mc_{H}(p_{H})$. As $H$ has dimension $n-1$ and the class of $p_{H}$ grows linearly in $m$, the mobility count of $p_{H}$ is bounded above by $Cm^{(n-1)/2}$ for some constant $C$, which yields $\kappa(\alpha) \leq n-1$. \end{proof} The argument above clearly extends to other codimensions when $X$ satisfies a suitable Lefschetz theorem. \section{Iitaka dimension of divisors} \label{divisorsec} We next show that for divisors the Iitaka dimension is an integer. This is a familiar fact for the classical Iitaka dimension defined by sections; we verify that the numerical version has similar behavior. To differentiate the two, we let $\kappa_{\mathrm{classical}}(D)$ denote the classical Iitaka dimension of a Cartier divisor $D$. To study the mobility count of divisors, it is often useful to reformulate the definition as follows.
Suppose that $X$ is smooth and that $p: U \to W$ is a family of effective divisors on $X$ with $W$ normal. We obtain an induced map $ch: W \to \Chow(X)$ and the mobility count of $p$ coincides with the dimension of $\overline{ch(W)}$. We will frequently use this interpretation in this section. \begin{lem} \label{cartiersectionestimate} Let $X$ be a smooth projective variety of dimension $n$ and let $D$ be a Cartier divisor on $X$ with $\kappa_{\mathrm{classical}}(D) = r$. Let $A$ be the pullback of a very ample divisor under a birational map and let $s$ be a positive integer such that $D \cdot A^{n-1} < sA^{n}$. Then \begin{equation*} h^{0}(X,D) < s^{r}A^{n} + 1. \end{equation*} \end{lem} \begin{proof} Let $\phi: X' \to X$ be a birational map resolving the linear series $|D|$. Let $M$ denote a divisor in the basepoint free part of $\phi^{*}|D|$ and let $\pi: X' \to Z$ be the morphism induced by $|M|$. Note that $\dim(Z) \leq r$; we may also assume that $\dim(Z) \geq 1$ since otherwise the desired inequality is clear. Since \begin{equation*} M \cdot \phi^{*}A^{n-1} \leq D \cdot A^{n-1} \end{equation*} it suffices to prove the statement for $M$. Fix $n-r$ general elements of the linear series $A_{1},\ldots,A_{n-r} \in |A|$ and let $W_{i}$ denote the scheme-theoretic intersection of the first $i$ of these. Note that $M|_{W_{i}}$ is not big for $i<n-r$; using the long exact sequence for restriction of sections inductively one sees that $h^{0}(X',M) \leq h^{0}(W_{n-r},M)$. But by another easy inductive argument, using a long exact sequence of sections and cutting down by hyperplanes, the latter is bounded above by $s^{r}A^{n}+1$. \end{proof} \begin{thrm} Let $X$ be a smooth projective variety of dimension $n$ and let $\alpha \in N_{n-1}(X)_{\mathbb{Q}}$. Then \begin{equation*} \kappa(\alpha) = \sup_{L \in \mathrm{Div}(X) \otimes \mathbb{Q}, [L] = \alpha} \kappa_{\mathrm{classical}}(L). \end{equation*} In particular, $\kappa(\alpha) \in \{-\infty \} \cup \mathbb{Z}_{\geq 0}$.
\end{thrm} \begin{proof} By homogeneity it suffices to consider the case when $\alpha \in N_{n-1}(X)_{\mathbb{Z}}$. Let $r$ denote the supremum on the right hand side of the statement above. The inequality $\kappa(\alpha) \geq r$ is clear. Conversely, let $P(X)$ denote the dual of the Albanese variety of $X$. Fix a very ample divisor $A$ on $X$ and choose a positive integer $s$ such that \begin{equation*} \alpha \cdot A^{n-1} \leq s A^{n}. \end{equation*} Let $p: U \to W$ denote a family of effective divisors on $X$ representing $m\alpha$. Then $p$ induces a rational map $W \dashrightarrow P(X)$ defined on the normal locus $W^{\circ} \subset W$. We see that $\dim(\overline{ch_{p}(W^{\circ})}) \leq \dim(P(X)) + \chdim(p|_{F})$ where $F$ is a general fiber of the map from $W$ to $P(X)$. In particular for any component $F'$ of $F$ we have that $p|_{F'}$ is a family of rationally equivalent effective divisors. Thus by Lemma \ref{cartiersectionestimate} we have \begin{equation*} \mc(p) \leq \dim(P(X)) + m^{r}s^{r}A^{n} + 1 \end{equation*} which gives $\kappa(\alpha) \leq r$. \end{proof} By Theorem \ref{birbeh} we deduce: \begin{thrm} Let $X$ be a projective variety of dimension $n$ and let $\alpha \in N_{n-1}(X)_{\mathbb{Q}}$. Let $\phi: X' \to X$ be a smooth birational model. Then \begin{equation*} \kappa(\alpha) = \sup_{L \in \mathrm{Div}(X') \otimes \mathbb{Q}, \phi_{*}[L] = \alpha} \kappa_{\mathrm{classical}}(L). \end{equation*} In particular, $\kappa(\alpha) \in \{-\infty \} \cup \mathbb{Z}_{\geq 0}$. \end{thrm} \begin{rmk} When $X$ is normal, one can define the Iitaka dimension of any Weil divisor $D$ capturing the asymptotic growth rate of sections of the rank $1$ reflexive sheaves $\mathcal{O}_{X}(mD)$ analogously to the Cartier divisor case.
There is a Cartier divisor $D'$ on a birational model of $X$ with exactly the same behavior of sections (see for example \cite[Lemma 3.3]{fkl15}), so that the Iitaka dimension is still integer-valued. For normal varieties, by essentially the same argument, the Iitaka dimension of the numerical class of a Weil divisor coincides with the maximal Iitaka dimension of any Weil divisor representing the class. \end{rmk} \section{Contracted classes} Suppose that $\pi: X \to Z$ is a morphism and $\alpha \in \Eff_{k}(X)$ satisfies $\pi_{*}\alpha = 0$. The goal of this section is to bound the Iitaka dimension of $\alpha$ in terms of the geometry of the map $\pi$. \begin{defn} Let $\pi: X \to Z$ be a surjective morphism of projective varieties and let $\alpha \in \Eff_{k}(X)$. Fix an ample divisor $A$ on $Z$. The $\pi$-contractibility index of $\alpha$ is defined to be the largest non-negative integer $c \leq k+1$ such that $\alpha \cdot \pi^{*}A^{k-c+1}=0$. This definition is independent of the choice of $A$. \end{defn} The basic properties of the contractibility index are described by \cite[Section 4.2]{fl15}: \begin{itemize} \item The contractibility index of $\alpha$ is positive precisely when $\pi_{*}\alpha=0$. \item The contractibility index is at most $k+1$, with equality only for the $0$ class. \item The contractibility index is at least $k-\dim Z$. If the contractibility index is larger than this minimum value, then no effective cycle of class $\alpha$ surjects onto $Z$. \end{itemize} \begin{exmple} If $V$ is an irreducible subvariety of $X$, then the contractibility index of $[V]$ is the same as $\mathrm{reldim}(\pi|_{V})$. \end{exmple} The Iitaka dimension of $\alpha$ exhibits two different behaviors, based on whether the contractibility index $c$ is smaller or larger than the relative dimension of $\pi$. First suppose that $c > \mathrm{reldim}(\pi)$. \cite{fl15} calls such classes ``$\pi$-exceptional'' and shows that they are rigid in a strong sense.
In particular: \begin{lem} Let $\pi: X \to Z$ be a surjective morphism of projective varieties of relative dimension $e$. Suppose that $\alpha \in \Eff_{k}(X)_{\mathbb{Z}}$ has contractibility index $c$ and that $c > e$. Then $\kappa(\alpha) \leq 0$. \end{lem} \begin{proof} Arguing as in \cite[Lemma 4.11]{fl15}, there is a proper closed subset $E$ in $X$ such that any effective cycle represented by a multiple of $\alpha$ is contained in $E$. Thus $\mc(m\alpha) = 0$ for all positive integers $m$. \end{proof} The case when $c = \mathrm{reldim}(\pi)$ is handled by the following naive bound. \begin{lem} \label{naivebound} Let $\pi: X \to Z$ be a surjective morphism from a projective variety of dimension $n$ to a projective variety of dimension $d$. Suppose that $\alpha \in \Eff_{k}(X)$ has contractibility index $c$ and that $k - \dim(Z) < c \leq n-d$. Then $\kappa(\alpha) \leq (n-k) \cdot \frac{d}{d-k+c}$. \end{lem} Note that when $c = n-d$, this simplifies to $\kappa(\alpha) \leq d$ as desired, since $(n-k) \cdot \frac{d}{d-k+(n-d)} = d$. \begin{proof} The statement is clear if $\kappa(\alpha) \leq n-k$, so we may assume otherwise. Fix an ample divisor $A$ on $X$ and an ample divisor $H$ on $Z$. Let $m$ be a positive integer and let $p_{m}$ be a family of effective cycles representing $m\alpha$ with maximal mobility count. The image of a cycle in the family is a subscheme of $Z$; every component has dimension at most $k-c < \dim(Z)$ and has $H$-degree bounded linearly in $m$. It is clear that $\mc(p_{m})$ is bounded above by the mobility count of the images. Using \cite{lehmann16} to bound the mobility count of the images we see that there is some constant $C$ such that $\mc(p_{m}) \leq C m^{d/(d-k+c)}$. \end{proof} We next turn our attention to the case when $c$ is close to $k$. \begin{lem} \label{reldimequal} Let $\pi: X \to Z$ be a surjective morphism of projective varieties of relative dimension $e$. Suppose that $\alpha \in \Eff_{k}(X)_{\mathbb{Z}}$ has contractibility index $c = k$.
Then $\kappa(\alpha) \leq n-c$. \end{lem} \begin{proof} Let $V$ be an effective cycle representing $m\alpha$ through $b$ general points of $X$. Then $\pi(V)$ is a union of points on $Z$, which contains $b$ general points as a subset. Since the cardinality of $\pi(V)$ can only grow linearly with $m$, we obtain the result. \end{proof} \begin{thrm} \label{contractedthrm} Let $\pi: X \to Z$ be a morphism from a projective variety of dimension $n$ to a projective variety of dimension $d$. Let $\alpha \in \Eff_{k}(X)$ be a class of $\pi$-contractibility index $c$. If $c = k-1$, then $\kappa(\alpha) \leq n-c$. \end{thrm} \begin{proof} Set $e = n-d$. We start with several reductions. Let $\phi: X' \to X$ be a birational model and let $\beta \in \Eff_{k}(X')$ be a class such that $\phi_{*}\beta = \alpha$. Then the contractibility index of $\beta$ for $\pi \circ \phi$ is still $c$. Thus by Theorem \ref{birbeh} it suffices to replace $X$ by any higher birational model. In particular, we may suppose that $X$ admits a map $\rho: X \to Y$ where $Y$ is a variety of dimension $e$ and $\rho|_{F}$ is generically finite for a general fiber $F$ of $\pi$. Such an $X$ naturally carries a generically finite map surjecting onto $Z \times Y$, and hence also onto $\mathbb{P}^{d} \times \mathbb{P}^{e}$. Since the Iitaka dimension can only increase under pushforward to this variety, and the contractibility index can also only increase, it suffices to consider the case when $X = \mathbb{P}^{d} \times \mathbb{P}^{e}$ and $\pi$ is the first projection map. In this setting, we let $\rho$ denote the second projection map, $A = \pi^{*}\mathcal{O}(1)$, $H = \rho^{*}\mathcal{O}(1)$. Suppose that $p: U \to W$ is a family of irreducible cycles on $X$ such that $[p] \preceq m\alpha$. Each irreducible cycle $V$ in the family is mapped to a subvariety $V' \subset \mathbb{P}^{d}$ of dimension $k-c=1$.
We associate the following invariants to $V$: \begin{itemize} \item $\sigma$ is the degree of $V'$ \item $\tau$ is the degree of $V \cap F \subset \mathbb{P}^{e}$, where $F = \pi^{-1}(x)$ for a general point $x \in V'$. \end{itemize} Since $V'$ is a curve we have that the base-change of $V$ to the normalization $V'^{n}$ of $V'$ is flat, so we can equally well think of each member $V$ of our family as a family of cycles in projective space defined by a map $f: V'^{n} \to \Chow_{\tau,c}(\mathbb{P}^{e})$, where $\Chow_{\tau,c}$ denotes the Chow variety parametrizing cycles of dimension $c$ and degree $\tau$. We can realize $\Chow_{\tau,c}(\mathbb{P}^{e})$ as a subvariety of $\mathbb{P}H^{0}(\mathbb{G},L^{\otimes \tau})$ where $\mathbb{G} = G(e-c,e+1)$ is the Grassmannian and $L$ is the pullback of $\mathcal{O}(1)$ under the Pl\"ucker embedding. Let $M$ denote the very ample divisor on $\Chow_{\tau,c}(\mathbb{P}^{e})$ induced by pulling back $\mathcal{O}(1)$ from this projective space. Let $\nu_{1}$ and $\nu_{2}$ denote the projection maps on $\mathbb{P}^{d} \times \Chow_{\tau,c}(\mathbb{P}^{e})$ and let $T \subset \mathbb{P}^{d} \times \Chow_{\tau,c}(\mathbb{P}^{e})$ denote the image of $V'^{n}$. Note that \begin{equation*} T \cdot \nu_{1}^{*}\mathcal{O}(1) = \sigma \leq V \cdot A \cdot H^{k-1} \qquad \textrm{and} \qquad T \cdot \nu_{2}^{*}M = V \cdot H^{k} \end{equation*} so that the degree of $T$ against the ample divisor $\nu_{1}^{*}\mathcal{O}(1) + \nu_{2}^{*}M$ is bounded linearly in terms of the class of $\alpha$. Fix a component $\mathcal{C} \subset \Chow_{\tau,c}(\mathbb{P}^{e})$ which contains the image of $T$. Let $q: R \to \mathbb{P}^{e}$ denote the family of subschemes of $\mathcal{C}$ where the fiber over $x \in \mathbb{P}^{e}$ parametrizes all cycles containing $x$. The induced map $R \to \mathcal{C}$ is flat by equivariance. We let $\widetilde{q}: \mathbb{P}^{d} \times R \to \mathbb{P}^{d} \times \mathbb{P}^{e}$ denote the corresponding family on the product $\mathbb{P}^{d} \times \mathcal{C}$.
The subvarieties parametrized by $\widetilde{q}$ have codimension $d+e-c = n-c$. Note that \begin{equation*} \mc_{X}(p) = \mc_{\mathbb{P}^{d} \times \mathcal{C}}(\widetilde{p};\widetilde{q}) \end{equation*} where $\widetilde{p}$ is the family of $(k-c)$-dimensional subvarieties $T$ on $\mathbb{P}^{d} \times \mathcal{C}$ induced by $p$. Consider the very ample divisor $\widetilde{A} = \nu_{1}^{*}\mathcal{O}(1) + \nu_{2}^{*}M$. By Lemma \ref{familymcbound}, we have \begin{equation*} \mc_{X}(p) \leq 2^{(k-c)(n-c)+3(n-c)} \left( [\widetilde{p}] \cdot \widetilde{A}^{k-c} + 1 \right)^{\frac{n-c}{n-k}} \end{equation*} since $(n-c)-(k-c) = n-k$. By the intersection calculations above, $[\widetilde{p}] \cdot \widetilde{A}^{k-c}$ is bounded linearly in terms of $[p] \cdot (A+H)^{k}$. Thus we see that for such an irreducible family $p$, there is some constant $C$ such that $\mc(p) \leq Cm^{(n-c)/(n-k)}$. We conclude by Lemma \ref{irrfamconcentrates} that $\kappa(\alpha) \leq n-c$. \end{proof} The upper bound proposed by Conjecture \ref{contractedconj} is optimal in a strong sense: given any morphism $\pi: X \to Z$ and any choice of $k,c$ satisfying the necessary constraints, one can always find a class $\alpha \in \Eff_{k}(X)$ of contractibility index $c$ and Iitaka dimension $\geq n-c$. \begin{exmple} We first set parameters. Choose integers $0 < d < n$ and integers $k,c$ such that $0 < k < n$ and $\max\{0,k-d\} < c < \min\{k+1,n-d\}$. Suppose that $\pi: X \to Z$ is a morphism where $X$ has dimension $n$ and $Z$ has dimension $d$. We construct a $k$-cycle class $\alpha$ on $X$ with contractibility index $c$ and with $\kappa(\alpha) \geq n-c$. The first step is a reduction: suppose that there is a diagram \begin{displaymath} \xymatrix{ X' \ar[r] \ar[d]_{\pi'} & X \ar[d]^{\pi} \\ Z' \ar[r] & Z} \end{displaymath} where the horizontal maps are generically finite.
We claim that it is enough to construct a suitable class $\alpha'$ on $X'$ for $\pi'$, if we assume in addition that $\alpha'$ has a multiple represented by an irreducible cycle $V$ through a general point. Indeed, the pushforward $\alpha$ on $X$ will have an Iitaka dimension at least as large as that of $\alpha'$. Furthermore, the contractibility index of $\alpha'$ is simply the relative dimension of $\pi'|_{V}$, and it is clear that this quantity is preserved by pushing forward $V$. By applying Noether normalization to the function field $K(Z)$, after replacing $X$ and $Z$ by birational models we may assume that there is a diagram \begin{displaymath} \xymatrix{ X \ar[r]^{f} \ar[d]_{\pi} & W \ar[dl]_{g} \\ Z & } \end{displaymath} where $W$ has dimension $n-c$. Let $H$ be an ample divisor on $W$ and set $\alpha = f^{*}H^{n-k}$. Since $\alpha$ is represented by the preimages of a big $(k-c)$-cycle on $W$, $\mc(m\alpha) \geq Cm^{(n-c)/(n-k)}$ for some constant $C$ and for sufficiently large $m$. Furthermore, it is clear that $\alpha$ is represented by irreducible cycles whose image in $Z$ has dimension $k-c$. Thus $\alpha$ has all the desired properties. \end{exmple} \begin{rmk} Note that the above construction is somewhat better than the ``naive'' lower bound given by complete intersections. For example, consider again the class $\alpha = A^{2} + A \cdot H$ on $\mathbb{P}^{2} \times \mathbb{P}^{2}$ as in Example \ref{p2timesp2one}. As demonstrated there, $\mc(m\alpha) \sim Cm^{3/2}$. However, if we intersect members of $|m_{1}H|$ and $|m_{2}(H+A)|$ where $m_{1}m_{2}=m$, we can only obtain cycles through $Cm^{4/3}$ general points. \end{rmk} \section{Grassmannians} The Grassmannian $G(m,n)$ parametrizes $m$-dimensional subspaces of an $n$-dimensional complex vector space. Fix a complete flag $V_{0} \subset \ldots \subset V_{n}$ in our vector space.
Given a non-increasing tuple of integers $\lambda = (\lambda_{1},\ldots,\lambda_{m})$ whose components $\lambda_{i}$ satisfy $0 \leq \lambda_{i} \leq n-m$, we let $\sigma_{\lambda}$ denote the class of the Schubert variety parametrizing linear subspaces $W$ satisfying for all $i$ \begin{equation*} \dim(W \cap V_{n-m+i-\lambda_{i}}) \geq i. \end{equation*} As discussed in the introduction, there has been extensive previous work describing various forms of ``rigidity'' for Schubert classes on Grassmannians. The study of the Iitaka dimension is a variation on this theme which yields interesting information for all classes. \subsection{Iitaka dimensions on $G(2,n)$} \begin{thrm} \label{iitakadimg2n} The Iitaka dimension of a Schubert cycle on $G(2,n)$ is determined by the following list: \begin{enumerate} \item $\kappa(\sigma_{1}) = \kappa(\sigma_{n-2,n-3}) = 2(n-2)$. \item $\kappa(\sigma_{r}) = n-2$ for $1 < r \leq n-2$. \item $\kappa(\sigma_{r,r-1}) = 2r$ for $1 < r < n-2$. \item $\kappa(\sigma_{r,s}) = r+s$ otherwise. \end{enumerate} \end{thrm} \begin{rmk} Note that Theorem \ref{iitakadimg2n} does not address the boundary classes that do not lie on extremal rays. It would be interesting to see what behavior to expect along the rest of the pseudo-effective cone. \end{rmk} \begin{rmk} The multi-rigid classes are classified by \cite{rt12}, \cite{robles13} (see also \cite{bryant05}): on $G(2,n)$ the multi-rigid classes have the form $\sigma_{j,j}$ and $\sigma_{n-2,0}$. These will automatically have the minimal Iitaka dimension, but note that the converse implication is not true. \end{rmk} \begin{rmk} Certain features of this theorem should persist for all Grassmannians. For example, consider $G(m,n)$ and suppose that $1 < t \leq m$. Then we should have $\kappa(\sigma_{t}) = n-m$ and $\kappa(\sigma_{1^{t}}) = m$. \end{rmk} We prove each statement of Theorem \ref{iitakadimg2n} in turn. Theorem \ref{iitakadimg2n}.(1) is obvious.
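As a quick consistency check (this is pure bookkeeping), the values listed in Theorem \ref{iitakadimg2n} are compatible with the general constraints established earlier: since $\dim G(2,n) = 2(n-2)$, the class $\sigma_{r,s}$ has codimension $r+s$, and Schubert classes are effective, we must have \begin{equation*} \kappa(\sigma_{r,s}) \in \{ 0 \} \cup [r+s, 2(n-2)]. \end{equation*} In particular, the classes in the final case of the theorem achieve the minimal positive value, namely their codimension, while $\sigma_{1}$ and $\sigma_{n-2,n-3}$ achieve the maximal value $2(n-2)$.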
\subsubsection{The classes $\sigma_{r}$} \begin{lem} \label{classificationofsigmar} Consider $\sigma_{r}$ on $G(2,n)$ for some $1 < r \leq n-2$. Then any irreducible cycle $V$ of class $m \sigma_{r}$ consists of the set of lines which intersect a fixed codimension $r+1$ degree $m$ subscheme in $\mathbb{P}^{n-1}$. \end{lem} \begin{proof} Take a general point $p$ on $\mathbb{P}^{n-1}$ and consider the cycle $Z_{p}$ representing the set of lines through $p$, so that $Z_{p}$ has class $\sigma_{n-2}$. Using generality, the intersection of $Z_{p}$ and $V$ can be done on the cycle level (see \cite[2.Theorem]{kleiman74}). For convenience we let $U \subset \mathbb{P}^{n-1}$ denote the open set of points such that the intersection of $Z_{p}$ and $V$ can be done on a set-theoretic level. The cycle $Z_{p} \cdot V$ represents the class $m\sigma_{n-2,r}$ and by \cite[Theorem 5]{bryant05} consists of the lines through $p$ intersecting some codimension $r+1$ subscheme $Q_{p}$ of $\mathbb{P}^{n-1}$. In this way we obtain a codimension $(r+1)$ subscheme $Q_{p}$ of $\mathbb{P}^{n-1}$ for each point $p \in U$. Now we show that the $Q_{p}$ coincide as $p$ varies. Take a general codimension $(n-r-1)$ plane $L$ in $\mathbb{P}^{n-1}$. The locus $T$ parametrizing lines contained in $L$ has class $\sigma_{n-r-1,n-r-1}$. Again we are in a setting of cycle-level intersection, and since the intersection must vanish we see that $T$ is disjoint from $V$. Now consider varying $p$ through the points of $U \cap L$. If the corresponding $Q_{p}$ varied in at least a one-dimensional family, then (after taking closures) we would find a line contained in $L$ represented by a point of $V$, a contradiction. Thus $Q_{p}$ must be fixed as $p$ varies over points in $U \cap L$. Since any pair of general points can be connected by a general codimension $(n-r-1)$ plane, this argument shows that $Q_{p}$ must be fixed as we vary $p$ over all general points of $\mathbb{P}^{n-1}$. 
Taking a closure, we see that $V$ must be the set of lines intersecting a fixed codimension $(r+1)$ subscheme $Q$. Finally, we must compare degrees. If $Q$ has degree $d$, then the corresponding $V$ has class $m \sigma_{r}$ where \begin{align*} m & = V \cdot \sigma_{n-2,n-2-r} = V \cdot \sigma_{n-2} \cdot \sigma_{1}^{n-2-r} \\ & = \mathrm{cone}_{p}(Q) \cdot \sigma_{1}^{n-2-r} = d. \end{align*} \end{proof} \begin{lem} On $G(2,n)$ we have $\kappa(\sigma_{r}) = n-2$ for any $1 < r \leq n-2$. \end{lem} \begin{proof} We first show that this is a lower bound. Fix a hyperplane $H$ in $\mathbb{P}^{n-1}$. A general line in $\mathbb{P}^{n-1}$ will intersect $H$ at a general point. So, any codimension $r$ subvariety of $H$ which contains $b$ general points will also intersect $b$ general lines as a codimension $r+1$ subvariety of $\mathbb{P}^{n-1}$. A degree $m$ codimension $r$ subvariety $Z$ of $H$ can contain $\approx C m^{(n-2)/r}$ general points of $H$ for a positive constant $C$. The set of lines intersecting $Z$ is a cycle on $G(2,n)$ of class $m\sigma_{r}$ going through $\approx C m^{(n-2)/r}$ general points. (More precisely, by deforming $Z$ we obtain a family of cycles with the desired mobility count.) This gives $\kappa(\sigma_{r}) \geq n-2$. We next show that this is an upper bound. Lemma \ref{classificationofsigmar} classifies all irreducible cycles whose class is proportional to $\sigma_{r}$. In particular, an irreducible family of cycles on $G(2,n)$ representing $m\sigma_{r}$ will always be induced (at least over an open subset) by a family $p: U \to W$ of codimension $r+1$ degree $m$ subvarieties of $\mathbb{P}^{n-1}$. We apply Lemma \ref{familymcbound} to $\mathbb{P}^{n-1}$ using for $q$ the family of lines on $\mathbb{P}^{n-1}$. We conclude that there is some constant $C$ so that a member of $p$ can meet at most $Cm^{(n-2)/r}$ general lines. Thus, the corresponding family of cycles on $G(2,n)$ has mobility count at most $Cm^{(n-2)/r}$.
\end{proof} \subsubsection{The classes $\sigma_{r,r-1}$} We first recall a classical result of \cite{segre1}, \cite{segre2}. \begin{lem}[\cite{segre2}] \label{classificationofsigmarr-1} Consider $\sigma_{r,r-1}$ on $G(2,n)$ for some $1 < r < n-2$. Then any irreducible cycle $V$ of class $m \sigma_{r,r-1}$ parametrizes either: \begin{itemize} \item the lines contained in the fibers of a one-dimensional family of $\mathbb{P}^{n-r-1}$s, or \item the lines contained in a quadric hypersurface in some subplane of $\mathbb{P}^{n-1}$. \end{itemize} \end{lem} To compute the mobility count of $\sigma_{r,r-1}$, it clearly suffices to focus on the first type of cycles. \begin{lem} On $G(2,n)$ we have $\kappa(\sigma_{r,r-1}) = 2r$ for any $1 < r < n-2$. \end{lem} \begin{proof} We first rephrase the problem. Note that the locus of $\mathbb{P}^{n-r-1}$s containing a fixed line in $\mathbb{P}^{n-1}$ is a Schubert variety of class $\sigma_{r,r}$ in $G(n-r,n)$. Given the result of Lemma \ref{classificationofsigmarr-1}, it suffices to show that a curve in $G(n-r,n)$ of degree $m$ intersects at most $\approx C m^{\frac{2r}{2r-1}}$ Schubert varieties of class $\sigma_{r,r}$. First we show the lower bound. Fix a dimension $2r$ complete intersection variety $Y$ in $G(n-r,n)$. Then a general Schubert variety of class $\sigma_{r,r}$ will intersect $Y$ in a finite number of points. Since a degree $m$ curve in $Y$ can contain $\approx C m^{\frac{2r}{2r-1}}$ general points of $Y$, we can also find a curve intersecting this many general Schubert varieties of class $\sigma_{r,r}$. The upper bound follows from Lemma \ref{familymcbound}. \end{proof} \subsubsection{The other classes} \begin{lem} \label{grassmanniancycledim} Let $V \subset G(2,n)$ be an irreducible cycle with class proportional to $\sigma_{r,s}$ where $1 \leq s < r-1$. Then the lines parametrized by $V$ sweep out an irreducible subset of $\mathbb{P}^{n-1}$ of codimension $s$.
\end{lem} \begin{proof} Clearly the lines sweep out an irreducible subset. We have $\sigma_{r,s} \cdot \sigma_{n-s-1} = 0$ but $\sigma_{r,s} \cdot \sigma_{n-s-2} \neq 0$. Noting that the Schubert cycle of type $\sigma_{k}$ parametrizes lines intersecting a fixed dimension $n-k-2$ linear subspace and using transversality of general intersections, we obtain the result. \end{proof} \begin{lem} Consider $\sigma_{r,s}$ where $1 \leq s < r-1$. Let $V$ be a cycle of class $m \sigma_{r,s}$ and let $Z$ denote the image in $\mathbb{P}^{n-1}$ of the universal family over $V$. Suppose that $Z$ is irreducible. Then either \begin{itemize} \item $Z$ is contained in a hyperplane, or \item there is a unique fixed $(n-2-r)$-dimensional plane $Q$ such that every line parametrized by $V$ intersects $Q$, and furthermore for every point $q$ of $Q$ there exists an at least $(n-2-s)$-dimensional subfamily of $V$ which parametrizes lines through $q$. \end{itemize} \end{lem} \begin{proof} The proof is by decreasing induction on $r$. For $r = n-2$, this is proved by \cite[Theorem 5]{bryant05}. (Note that both conditions $n-3 > s$ and $s \geq 1$ are necessary for the base case.) We now consider the case $r < n-2$. By Lemma \ref{grassmanniancycledim} $Z$ is irreducible of dimension $n-1-s$. Choose a general hyperplane $H$ of $\mathbb{P}^{n-1}$ and the corresponding Schubert cycle $\sigma_{1,1}$. By generality the intersection of $\sigma_{1,1}$ with $V$ can be done set-theoretically to obtain a cycle $V'$. Let $Z'$ denote the closed subset of $\mathbb{P}^{n-1}$ swept out by the lines parametrized by $V'$. Applying Lemma \ref{grassmanniancycledim} to components of $V'$ we see that each component of $Z'$ has dimension $n-s-2$. Since $Z' \subset Z \cap H$ and this latter set is irreducible by Bertini, by dimension considerations we must have $Z' = Z \cap H$ and so $Z'$ is irreducible.
By induction we see that either $Z'$ is contained in a hyperplane in $\mathbb{P}^{n-1}$ or every line parametrized by $V'$ intersects a fixed $(n-3-r)$-dimensional plane $Q_{H}$ (necessarily contained in $H$). In the first case, since $H$ is general we see that $Z$ is contained in a hyperplane of $\mathbb{P}^{n-1}$. In the second case, consider the family of hyperplanes $\widehat{H}$ in $\mathbb{P}^{n-1}$ containing $Q_{H}$. As the general such $\widehat{H}$ varies, we obtain a varying family of $Q_{\widehat{H}}$. We claim that the $Q_{\widehat{H}}$ all coincide with $Q_{H}$. Indeed, any line parametrized by $V'$ which also intersects some other point of $\widehat{H}$ is contained in $\widehat{H}$. Since $s < r-1 < (n-2)-1$, the existence part of the inductive assumption yields through any point of $Q_{H}$ an $(n-4-s)$-dimensional family of lines contained in both $H$ and $\widehat{H}$. Thus there is at least one line contained in $\widehat{H}$ through any point of $Q_{H}$, and by the uniqueness part of the inductive assumption, we must have $Q_{\widehat{H}} = Q_{H}$. Finally note that as $H$ varies over general hyperplanes, the $Q_{H}$ are all hyperplane sections of a fixed $(n-2-r)$-dimensional plane $Q$. Indeed, since $\sigma_{r,s} \cdot \sigma_{n-3-s,n-1-r} = 0$, we see that $V$ must not intersect a general Schubert variety of the latter class. Such a variety parametrizes lines contained in a general $r$-dimensional plane $L$ and intersecting an $(s+1)$-dimensional subplane $M$ of $L$. If the $Q_{H}$ varied to cover a variety of dimension $n-1-r$, then some $Q_{H}$ would intersect $L$, and since through each point of $Q_{H}$ we have at least an $(n-3-s)$-dimensional family of lines there would be a line through that point also intersecting $M$, a contradiction. Thus the union of all the $Q_{H}$ must have dimension $n-2-r$. This union is obviously then a plane $Q$. Finally we verify the two desired properties of $Q$ by induction.
Since $Q$ is the closure of the union of the $Q_{H}$, there is a line parametrized by $V$ through every point of $Q$. If $Q$ were not unique, then a general hyperplane section would violate the inductive hypothesis. Finally, an easy dimension count and induction argument verifies the existence part of the inductive assumption. \end{proof} For $r \geq 1$ there is an upper bound on the number of general lines which intersect any fixed $(n-2-r)$-dimensional plane in $\mathbb{P}^{n-1}$. Furthermore, the number of general lines contained in any degenerate subvariety is bounded above. Thus we immediately obtain: \begin{cor} On $G(2,n)$ we have $\kappa(\sigma_{r,s}) = r+s$ whenever $1 \leq s < r-1$. \end{cor} \bibliographystyle{amsalpha}
\section{Introduction} Let $[n]$ denote the set $\{1,2,\ldots,n\}$. Let $N,t,k,$ and $v$ be integers such that $k \ge t \ge 2$ and $v \ge 2$. Let $A$ be an $N \times k$ array where each entry is from the set $[v]$. For $I = \{j_1, \ldots, j_\rho\} \subseteq [k]$ where $j_1<\ldots<j_\rho$, let $A_I$ denote the $N \times \rho$ array in which $A_I(i,\ell) = A(i,j_\ell)$ for $1 \le i \le N$ and $1 \le \ell \le \rho$; $A_I$ is the projection of $A$ onto the columns in $I$. A \emph{covering array} $\mbox{$\mathsf{CA}$}(N;t,k,v)$ is an $N \times k$ array $A$ with each entry from $[v]$ so that for each $t$-set of columns $C \in {[k] \choose t}$, each $t$-tuple $x \in [v]^t$ appears as a row in $A_C$. The smallest $N$ for which a $\mbox{$\mathsf{CA}$}(N;t,k,v)$ exists is denoted by $\mbox{$\mathsf{CAN}$}(t,k,v)$. Covering arrays find important applications in software and hardware testing (see \cite{KKL2013} and references therein). Applications of covering arrays also arise in experimental testing for advanced materials \cite{Caw2003}, inference of interactions that regulate gene expression \cite{Sha2001}, fault-tolerance of parallel architectures \cite{GHLS1993}, synchronization of robot behavior \cite{Ha2005}, drug screening \cite{tong}, and learning of Boolean functions \cite{dam}. Covering arrays have been studied under different nomenclature: as qualitatively independent partitions \cite{GKV93}, $t$-surjective arrays \cite{CKMZ1983}, and $(k,t)$-universal sets \cite{Jukna}, among others. Covering arrays are closely related to hash families \cite{Co2011} and orthogonal arrays \cite{Co2004}. \section{Background and Motivation} The exact or approximate determination of $\mbox{$\mathsf{CAN}$}(t,k,v)$ is central in applications of covering arrays, but remains an open problem. For fixed $t$ and $v$, only when $t=v=2$ is $\mbox{$\mathsf{CAN}$}(t,k,v)$ known precisely for infinitely many values of $k$.
Kleitman and Spencer \cite{KlSp1973} and Katona \cite{Ka1973} independently proved that the largest $k$ for which a $\mbox{$\mathsf{CA}$}(N;2,k,2)$ exists satisfies $k=\binom{N-1}{\lceil N/2\rceil}.$ When $t=2$, Gargano, K\H{o}rner, and Vaccaro \cite{GKV93} establish that \begin{equation}\label{gkv2} \mbox{$\mathsf{CAN}$}(2,k,v) =\frac{v}{2}\log k(1+\mbox{o}(1)). \end{equation} (We write $\log$ for logarithms base 2, and $\ln$ for natural logarithms.) Several researchers \cite{BB88,CKMZ1983,GSS96,GY2006} establish a general asymptotic upper bound on $\mbox{$\mathsf{CAN}$}(t,k,v)$: \begin{equation}\label{general1} \mbox{$\mathsf{CAN}$}(t,k,v) \leq \frac{t-1}{\log\frac{v^t}{v^t-1}}\log k(1+\mbox{o}(1)). \end{equation} A slight improvement on (\ref{general1}) has recently been proved \cite{FS2015,sarkar16}. An (essentially) equivalent but more convenient form of (\ref{general1}) is: \begin{equation}\label{eq:can-upper} \mbox{$\mathsf{CAN}$}(t,k,v) \le (t-1)v^t \log k(1+o(1)). \end{equation} A lower bound on $\mbox{$\mathsf{CAN}$}(t,k,v)$ results from the inequality $\mbox{$\mathsf{CAN}$}(t,k,v) \ge v \cdot \mbox{$\mathsf{CAN}$}(t-1,k-1,v)$ obtained by derivation (for each of the $v$ symbols, restrict to the rows having that symbol in a fixed column and delete the column), together with (\ref{gkv2}), to establish that $\mbox{$\mathsf{CAN}$}(t,k,v) \ge v^{t-2} \cdot \mbox{$\mathsf{CAN}$}(2,k-t+2,v) = v^{t-2}\cdot \frac{v}{2}\log(k-t+2)(1+\mbox{o}(1))$. When $t/k$ is bounded away from $1$, we obtain: \begin{equation}\label{eq:can-lower} \mbox{$\mathsf{CAN}$}(t,k,v) = \Omega(v^{t-1}\log k). \end{equation} Because (\ref{eq:can-lower}) ensures that the number of rows in covering arrays can be considerable, researchers have suggested the need for relaxations in which not all interactions must be covered \cite{Chen,HR2004,K2013+,MKTK201} in order to reduce the number of rows. The practical relevance is that each row corresponds to a test to be performed, adding to the cost of testing.
For example, an array \emph{covers a $t$-set of columns} when it covers each of the $v^t$ interactions on this $t$-set. Hartman and Raskin \cite{HR2004} consider arrays with a fixed number of rows that cover the \emph{maximum} number of $t$-sets of columns. A similar question was also considered in \cite{MKTK201}. In \cite{K2013+,MKTK201} a more refined measure of the (partial) coverage of an $N\times k$ array $A$ is introduced. For a given $q\in [0,1]$, let $\alpha(A,q)$ be the number of $N\times t$ submatrices of $A$ with the property that at least $qv^t$ elements of $[v]^t$ appear in their set of rows; the \emph{$(q,t)$-completeness} of $A$ is $\alpha(A,q)/\binom{k}{t}$. Then for practical purposes one wants ``high'' $(q,t)$-completeness with few rows. In these works, no theoretical results on partial coverage appear to have been stated; earlier contributions focus on experimental investigations of heuristic construction methods. Our purpose is to initiate a mathematical investigation of arrays offering ``partial'' coverage. More precisely, we address: \begin{itemize} \item Can one obtain a significant improvement on the upper bound (\ref{eq:can-upper}) if the set $[v]^t$ is only required to be contained among the rows of \emph{at least} $(1-\epsilon)\binom{k}{t}$ subarrays of $A$ of dimension $N\times t$? \item Can one obtain a significant improvement if, among the rows of \emph{every} $N\times t$ subarray of $A$, only a (large) \emph{subset} of $[v]^t$ is required to be contained? \item Can one obtain a significant improvement if the set $[v]^t$ is only required to be contained among the rows of \emph{at least} $(1-\epsilon)\binom{k}{t}$ subarrays of $A$ of dimension $N\times t$, {\bf and} among the rows of each of the $\epsilon\binom{k}{t}$ subarrays that remain, a (large) \emph{subset} of $[v]^t$ is required to be contained? \end{itemize} We answer these questions both theoretically and algorithmically in the following sections.
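For concreteness, the $(q,t)$-completeness measure recalled above can be computed directly by brute force. The sketch below (function name ours; rows carry symbols $0,\ldots,v-1$) is only feasible for small $k$ and $t$:

```python
from itertools import combinations
from math import comb

def qt_completeness(A, q, t, v):
    """Fraction of column t-sets of A whose projection contains at
    least q * v**t distinct t-tuples among its rows."""
    k = len(A[0])
    good = sum(
        1
        for C in combinations(range(k), t)
        if len({tuple(row[j] for j in C) for row in A}) >= q * v ** t
    )
    return good / comb(k, t)
```

For example, an orthogonal array such as the four rows $000$, $011$, $101$, $110$ over $v=2$ symbols has $(1,2)$-completeness equal to $1$.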
\section{Partial Covering Arrays} When $1 \le m \le v^t$, a \emph{partial $m$-covering array}, $\mbox{$\mathsf{PCA}$}(N;t,k,v,m)$, is an $N \times k$ array $A$ with each entry from $[v]$ so that for each $t$-set of columns $C \in {[k] \choose t}$, at least $m$ distinct tuples $x \in [v]^t$ appear as rows in $A_C$. Hence a covering array $\mbox{$\mathsf{CA}$}(N;t,k,v)$ is precisely a partial $v^t$-covering array $\mbox{$\mathsf{PCA}$}(N;t,k,v,v^t)$. \begin{theorem}\label{thm:pcan-bound1} For integers $t,k,v$, and $m$ where $k \ge t \ge 2$, $v \ge 2$ and $1 \le m \le v^t$ there exists a $\mbox{$\mathsf{PCA}$}(N;t,k,v,m)$ with \begin{equation} N \le \frac{\ln \left\{{k \choose t}{v^t \choose m - 1}\right\}}{\ln \left(\frac{v^t}{m-1}\right)} . \end{equation} \end{theorem} \begin{proof} Let $r = v^t - m + 1$, and let $A$ be a random $N \times k$ array where each entry is chosen independently from $[v]$ with uniform probability. For $C \in {[k] \choose t}$, let $B_C$ denote the event that at least $r$ tuples from $[v]^t$ are missing in $A_C$. The probability that a particular $r$-set of tuples from $[v]^t$ is missing in $A_C$ is $\left(1 - \frac{r}{v^t}\right)^N$. Applying the union bound to all $r$-sets of tuples from $[v]^t$, we obtain $\Pr[B_C] \le {v^t \choose r}\left(1 - \frac{r}{v^t}\right)^N$. By linearity of expectation, the expected number of $t$-sets $C$ for which $A_C$ misses at least $r$ tuples from $[v]^t$ is at most ${k \choose t} {v^t \choose r}\left(1 - \frac{r}{v^t}\right)^N$. When $A$ has at least $\frac{\ln \left\{{k \choose t}{v^t \choose m - 1}\right\}}{\ln \left(\frac{v^t}{m-1}\right)}$ rows this expected number is less than 1. Therefore, an array $A$ exists with the required number of rows such that for all $C \in {[k] \choose t}$, $A_C$ misses at most $r-1$ tuples from $[v]^t$, i.e., $A_C$ covers at least $m$ tuples from $[v]^t$. \qed \end{proof} Theorem \ref{thm:pcan-bound1} can be improved upon using the Lov\'asz local lemma.
\begin{lemma}\label{lem:lllsym} (Lov\'asz local lemma; symmetric case) (see {\rm \cite{alon08}}) Let $A_{1},A_{2},\ldots,A_{n}$ be events in an arbitrary probability space. Suppose that each event $A_{i}$ is mutually independent of a set of all other events $A_{j}$ except for at most $d$, and that $\Pr[A_{i}]\le p$ for all $1\le i\le n$. If $ep(d+1)\le1$, then $\Pr[\cap_{i=1}^{n}\bar{A_{i}}]>0$. \end{lemma} Lemma \ref{lem:lllsym} gives a condition on the probability of each ``bad'' event, in terms of the dependence structure among such events, under which an outcome avoiding all ``bad'' events is guaranteed to exist. The lemma is most useful when there is limited dependence among the ``bad'' events, as in the following: \begin{theorem}\label{thm:pcan-bound2} For integers $t,k,v$ and $m$ where $v,t \ge 2$, $k \ge 2t$ and $1 \le m \le v^t$ there exists a $\mbox{$\mathsf{PCA}$}(N;t,k,v,m)$ with \begin{equation}\label{eq:pcan-bound} N \le \frac{1 + \ln \left\{t{k \choose t - 1}{v^t \choose m - 1}\right\}}{\ln \left(\frac{v^t}{m-1}\right)} . \end{equation} \end{theorem} \begin{proof} When $k \ge 2t$, each event $B_C$ with $C \in {[k] \choose t}$ (that is, at least $r = v^t - m + 1$ tuples are missing in $A_C$) is independent of all but at most ${t \choose 1}{k-1 \choose t-1}<t{k \choose t-1}$ events in $\{ B_{C'} : C' \in {[k] \choose t}\setminus \{C\}\}$. Applying Lemma \ref{lem:lllsym}, $\Pr[\wedge_{C \in {[k] \choose t}} \overline{B_C}]>0$ when \begin{equation}\label{eq:lll} \mathrm{e}{v^t \choose r}\left(1 - \frac{r}{v^t}\right)^N t{k \choose t-1} \le 1. \end{equation} Solve (\ref{eq:lll}) to obtain the required upper bound on $N$. \qed \end{proof} When $m=v^t$, apply the Taylor series expansion to obtain $\ln \left(\frac{v^t}{m-1}\right) \ge \frac{1}{v^t}$, and thereby recover the upper bound (\ref{eq:can-upper}).
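The two existence bounds are easy to compare numerically. A minimal sketch (function names ours; both theorems are stated with natural logarithms, and $2 \le m \le v^t$ is assumed):

```python
from math import comb, log

def pca_rows_union(t, k, v, m):
    """Row bound from the union-bound argument (Theorem thm:pcan-bound1)."""
    return log(comb(k, t) * comb(v ** t, m - 1)) / log(v ** t / (m - 1))

def pca_rows_lll(t, k, v, m):
    """Row bound from the local-lemma argument (Theorem thm:pcan-bound2);
    requires k >= 2t."""
    return (1 + log(t * comb(k, t - 1) * comb(v ** t, m - 1))) / log(v ** t / (m - 1))
```

Comparing numerators, the local-lemma bound is the smaller one precisely when $\mathrm{e}\,t{k \choose t-1} < {k \choose t}$, i.e. roughly when $k > \mathrm{e}t^2$.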
Theorem \ref{thm:pcan-bound2} implies: \begin{corollary} Given $q\in [0,1]$ and integers $2 \le t \le k$, $v \ge 2$, there exists an $N\times k$ array on $[v]$ with $(q, t)$-completeness equal to 1 (i.e., \emph{maximal}), whose number $N$ of rows satisfies $$N\leq \frac{1 + \ln \left\{t{k \choose t - 1}{v^t \choose qv^t - 1}\right\}}{\ln \left(\frac{v^t}{qv^t-1}\right)}.$$ \end{corollary} Rewriting (\ref{eq:pcan-bound}), setting $r = v^t - m + 1$, and using the Taylor series expansion of $\ln \left(1 - \frac{r}{v^t}\right)$, we get \begin{equation}\label{eq:pcan-bound-asymp} N \le \frac{1 + \ln \left\{t{k \choose t - 1}{v^t \choose r}\right\}}{\ln \left(\frac{v^t}{v^t - r}\right)} \le \frac{v^t(t-1)\ln k}{r}\left\{1 - \frac{\ln r}{\ln k} + o(1)\right\}. \end{equation} Hence when $r = v(t-1)$ (or equivalently, $m = v^t - v(t-1) + 1$), there is a partial $m$-covering array with $\Theta(v^{t-1} \ln k)$ rows. This matches the lower bound (\ref{eq:can-lower}) asymptotically for covering arrays by missing, in each $t$-set of columns, \emph{no more} than $v(t-1)-1$ of the $v^t$ possible $t$-tuples. The dependence of the bound (\ref{eq:pcan-bound}) on the number of $v$-ary $t$-vectors that must appear in the $t$-tuples of columns is of particular interest when test suites are run sequentially until a fault is revealed, as in \cite{Bryce}. Indeed the arguments here may have useful consequences for the rate of fault detection. Lemma \ref{lem:lllsym} and hence Theorem \ref{thm:pcan-bound2} have proofs that are non-constructive in nature. Nevertheless, Moser and Tardos \cite{moser10} provide a randomized algorithm with the same guarantee. Patterned on their method, Algorithm \ref{algo:m-t} constructs a partial $m$-covering array with exactly the same number of rows as (\ref{eq:pcan-bound}) in expected polynomial time.
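To make the resampling idea concrete, here is a minimal Python sketch of the procedure in Algorithm \ref{algo:m-t} below (the function name, the brute-force coverage check, and the safety cap on rounds are ours; termination within the cap is expected with overwhelming probability, not guaranteed):

```python
import random
from itertools import combinations

def moser_tardos_pca(N, t, k, v, m, seed=0, max_rounds=100000):
    """Moser-Tardos style resampling: keep a random N x k array and
    resample the columns of any t-set whose projection covers fewer
    than m distinct t-tuples.  Only practical for tiny parameters."""
    rng = random.Random(seed)
    A = [[rng.randrange(v) for _ in range(k)] for _ in range(N)]
    for _ in range(max_rounds):
        bad = next(
            (C for C in combinations(range(k), t)
             if len({tuple(row[j] for j in C) for row in A}) < m),
            None,
        )
        if bad is None:
            return A
        for row in A:               # resample every entry in the bad columns
            for j in bad:
                row[j] = rng.randrange(v)
    raise RuntimeError("did not converge (N likely too small)")
```

Only the offending $t$ columns are resampled, mirroring the resampling step of the pseudocode.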
Indeed, for fixed $t$, the expected number of times the resampling step (line \ref{line:re-sample}) is repeated is linear in $k$ (see \cite{moser10} for more details). \begin{algorithm}[t] \SetKw{Break}{break} \KwIn{Integers $t,k,v$ and $m$ where $v,t \ge 2$, $k \ge 2t$ and $1 \le m \le v^t$} \KwOut{$A$ : a $\mbox{$\mathsf{PCA}$}(N;t,k,v,m)$} Let $N := \frac{1 + \ln \left\{t{k \choose t - 1}{v^t \choose m - 1}\right\}}{\ln \left(\frac{v^t}{m-1}\right)}$\; \label{line:lll-bound} Construct an $N \times k$ array $A$ where each entry is chosen independently and uniformly at random from $[v]$\; \Repeat {covered $=$ true}{ \label{line:mt-loop} Set \emph{covered}$:=$ true\; \For {each column $t$-set $C \in {[k] \choose t}$}{ \label{line:online-check} \If {$A_C$ does not cover at least $m$ distinct $t$-tuples $x\in [v]^t$} { Set \emph{covered}$:=$ false\; Set \emph{missing-column-set} $:= C$\; \Break\; } } \If {covered $=$ false}{ Choose all the entries in the $t$ columns of \emph{missing-column-set} independently and uniformly at random from $[v]$\; \label{line:re-sample} } } Output $A$\; \caption{Moser-Tardos type algorithm for partial $m$-covering arrays.} \label{algo:m-t} \end{algorithm} \section{Almost Partial Covering Arrays} For $0 < \epsilon < 1$, an \emph{$\epsilon$-almost partial $m$-covering array}, $\mbox{$\mathsf{APCA}$}(N;t,k,v,m,\epsilon)$, is an $N \times k$ array $A$ with each entry from $[v]$ so that for at least $(1-\epsilon){k \choose t}$ column $t$-sets $C \in {[k] \choose t}$, $A_C$ covers at least $m$ distinct tuples $x \in [v]^t$. Again, a covering array $\mbox{$\mathsf{CA}$}(N;t,k,v)$ is precisely an $\mbox{$\mathsf{APCA}$}(N;t,k,v,v^t, \epsilon)$ when $\epsilon < 1/ \binom{k}{t}$. Our first result on $\epsilon$-{\em almost} partial $m$-covering arrays is the following.
\begin{theorem}\label{thm:apcan-bound} For integers $t,k,v,m$ and real $\epsilon$ where $k \ge t \ge 2$, $v \ge 2$, $1 \le m \le v^t$ and $0 \le \epsilon \le 1$, there exists an $\mbox{$\mathsf{APCA}$}(N;t,k,v,m,\epsilon)$ with \begin{equation} N \le \frac{\ln \left\{{v^t \choose m - 1}/\epsilon\right\}}{\ln \left(\frac{v^t}{m-1}\right)}. \end{equation} \end{theorem} \begin{proof} Paralleling the proof of Theorem \ref{thm:pcan-bound1} we compute an upper bound on the expected number of $t$-sets $C\in {[k] \choose t}$ for which $A_C$ misses at least $r$ tuples $x \in [v]^t$. When this expected number is at most $\epsilon{k \choose t}$, an array $A$ is guaranteed to exist with at least $(1-\epsilon){k \choose t}$ $t$-sets of columns $C \in {[k] \choose t}$ such that $A_C$ misses at most $r-1$ distinct tuples $x \in [v]^t$. Thus $A$ is an $\mbox{$\mathsf{APCA}$}(N;t,k,v,m,\epsilon)$. To establish the theorem, solve the following for $N$: \begin{equation*} {k \choose t} {v^t \choose r}\left(1 - \frac{r}{v^t}\right)^N \le \epsilon{k \choose t}. \end{equation*} \qed \end{proof} When $\epsilon < 1 / {k \choose t}$ we recover the bound from Theorem \ref{thm:pcan-bound1} for partial $m$-covering arrays. In terms of $(q,t)$-completeness, Theorem \ref{thm:apcan-bound} yields the following. \begin{corollary} For $q\in [0,1]$, real $0 \le \epsilon \le 1$, and integers $2 \le t \le k$, $v \ge 2$, there exists an $N\times k$ array on $[v]$ with $(q, t)$-completeness at least $1-\epsilon$, with $$N \leq \frac{\ln \left\{{v^t \choose qv^t - 1}/\epsilon\right\}}{\ln \left(\frac{v^t}{qv^t-1}\right)}.$$ \end{corollary} When $m = v^t$, an $\epsilon$-almost covering array exists with $N \le v^t \ln \left(\frac{v^t}{\epsilon}\right)$ rows. Improvements result by focussing on covering arrays in which the symbols are acted on by a finite group.
In this setting, one chooses orbit representatives of rows that collectively cover orbit representatives of $t$-way interactions under the group action; see \cite{ColCECA}, for example. Such group actions have been used in direct and computational methods for covering arrays \cite{cck,MeagherS}, and in randomized and derandomized methods \cite{ColCECA,SarkarColbourn2,sarkar16}. We employ the sharply transitive action of the cyclic group of order $v$, adapting the earlier arguments using methods from \cite{sarkar16}: \begin{theorem}\label{thm:cyclic} For integers $t,k,v$ and real $\epsilon$ where $k \ge t \ge 2$, $v \ge 2$ and $0 \le \epsilon \le 1$ there exists an $\mbox{$\mathsf{APCA}$}(N;t,k,v,v^t,\epsilon)$ with \begin{equation} N \le v^t \ln \left(\frac{v^{t-1}}{\epsilon}\right). \end{equation} \end{theorem} \begin{proof} The action of the cyclic group of order $v$ partitions $[v]^t$ into $v^{t-1}$ orbits, each of length $v$. Let $n = \lfloor \frac{N}{v} \rfloor$ and let $A$ be an $n \times k$ random array where each entry is chosen independently from the set $[v]$ with uniform probability. For $C \in {[k] \choose t}$, $A_C$ \emph{covers the orbit} $X$ if at least one tuple $x\in X$ is present in $A_C$. The probability that the orbit $X$ is not covered in $A$ is $\left(1 - \frac{v}{v^t}\right)^n = \left(1 - \frac{1}{v^{t-1}}\right)^n$. Let $D_C$ denote the event that $A_C$ does not cover at least one orbit. Applying the union bound, $\Pr[D_C] \le v^{t-1}\left(1 - \frac{1}{v^{t-1}}\right)^n$. By linearity of expectation, the expected number of column $t$-sets $C$ for which $D_C$ occurs is at most ${k \choose t}v^{t-1}\left(1 - \frac{1}{v^{t-1}}\right)^n$. As earlier, set this expected value to be at most $\epsilon{k \choose t}$ and solve for $n$. An array exists that covers all orbits in at least $(1-\epsilon){k \choose t}$ column $t$-sets. Develop this array over the cyclic group to obtain the desired array. 
\qed \end{proof} As in \cite{sarkar16}, further improvements result by considering a group, like the Frobenius group, that acts sharply 2-transitively on $[v]$. When $v$ is a prime power, the \emph{Frobenius group} is the group of permutations of $\mathbb{F}_v$ of the form $\{x \mapsto ax+b\,:\,a,b\in \mathbb{F}_v,\,a\neq0\}$. \begin{theorem}\label{thm:frob} For integers $t,k,v$ and real $\epsilon$ where $k \ge t \ge 2$, $v \ge 2$, $v$ is a prime power and $0 \le \epsilon \le 1$ there exists an $\mbox{$\mathsf{APCA}$}(N;t,k,v,v^t,\epsilon)$ with \begin{equation} N \le v^t \ln \left(\frac{2v^{t-2}}{\epsilon}\right) + v. \end{equation} \end{theorem} \begin{proof} The action of the Frobenius group partitions $[v]^t$ into $\frac{v^{t-1}-1}{v-1}$ orbits of length $v(v-1)$ each (full orbits) and $1$ orbit of length $v$ (a short orbit). The short orbit consists of tuples of the form $(x_1,\ldots,x_t)\in [v]^t$ where $x_1=\ldots=x_t$. Let $n = \lfloor \frac{N-v}{v(v-1)}\rfloor$ and let $A$ be an $n \times k$ random array where each entry is chosen independently from the set $[v]$ with uniform probability. Our strategy is to construct $A$ so that it covers all full orbits for the required number of arrays $\{A_C :C \in {[k] \choose t}\}$. Develop $A$ over the Frobenius group and add $v$ rows of the form $(x_1, \ldots, x_k)\in[v]^k$ with $x_1= \ldots =x_k$ to obtain an $\mbox{$\mathsf{APCA}$}(N;t,k,v,v^t,\epsilon)$ with the desired value of $N$. Following the lines of the proof of Theorem \ref{thm:cyclic}, $A$ covers all full orbits in at least $(1-\epsilon){k \choose t}$ column $t$-sets $C$ when \[ {k \choose t}\frac{v^{t-1}-1}{v-1}\left(1 - \frac{v-1}{v^{t-1}}\right)^n \le \epsilon{k \choose t}. \] Because $\frac{v^{t-1}-1}{v-1} \le 2v^{t-2}$ for $v \ge 2$, we obtain the desired bound. \qed \end{proof} Using group action when $m=v^t$ affords useful improvements. Does this improvement extend to cases when $m < v^t$? Unfortunately, the answer appears to be no.
Consider the case for $\mbox{$\mathsf{PCA}$}(N;t,k,v,m)$ when $m \le v^t$ using the action of the cyclic group of order $v$ on $[v]^t$. Let $A$ be a random $n \times k$ array over $[v]$. When $v^t-vs+1 \le m \le v^t-v(s-1)$ for some $1 \le s \le v^{t-1}$, it suffices that for all $C \in \binom{[k]}{t}$, $A_C$ misses at most $s-1$ orbits of $[v]^t$. Then we obtain that $n \le \left(1+\ln \left(t\binom{k}{t-1}\binom{v^{t-1}}{s}\right)\right)/\ln \left(\frac{v^{t-1}}{v^{t-1}-s}\right)$. Developing $A$ over the cyclic group we obtain a $\mbox{$\mathsf{PCA}$}(N;t,k,v,m)$ with \begin{equation}\label{eq:pcan-cyclic} N \le v \frac{1+\ln \left\{t\binom{k}{t-1}\binom{v^{t-1}}{s}\right\}}{\ln \left(\frac{v^{t-1}}{v^{t-1}-s}\right)} \end{equation} \begin{figure} \centering \begin{subfloat}[$t=6,\,k=20,\,v=4$]{ \includegraphics[bb=100bp 238bp 520bp 553bp,clip,scale=0.42]{m-6-20-4}\label{fig:comp-ga-a} } \end{subfloat} \hspace{-0.35in} \begin{subfloat}[$t=6,\,v=4,\,m=v^t-v$]{ \includegraphics[bb=100bp 238bp 520bp 553bp,clip,scale=0.42]{k-6-4}\label{fig:comp-ga-b} } \end{subfloat} \caption{Comparison of (\ref{eq:pcan-cyclic}) and (\ref{eq:pcan-bound}). Figure (a) compares the sizes of the partial $m$-covering arrays when $v^t-6v+1 \le m \le v^t$. Except for $m=v^t=4096$ the bound from (\ref{eq:pcan-bound}) outperforms the bound obtained by assuming group action. Figure (b) shows that for $m=v^t-v=4092$, (\ref{eq:pcan-bound}) outperforms (\ref{eq:pcan-cyclic}) for all values of $k$.}\label{fig:comp-ga} \end{figure} Figure \ref{fig:comp-ga} compares (\ref{eq:pcan-cyclic}) and (\ref{eq:pcan-bound}). In Figure \ref{fig:comp-ga-a} we plot the size of the partial $m$-covering array as obtained by (\ref{eq:pcan-cyclic}) and (\ref{eq:pcan-bound}) for $v^t-6v+1 \le m \le v^t$ and $t=6,\,k=20,\,v=4$. Except when $m=v^t=4096$, the covering array case, (\ref{eq:pcan-bound}) outperforms (\ref{eq:pcan-cyclic}).
Similarly, Figure \ref{fig:comp-ga-b} shows that for $m=v^t-v=4092$, (\ref{eq:pcan-bound}) consistently outperforms (\ref{eq:pcan-cyclic}) for all values of $k$ when $t=6,\,v=4$. We observe similar behavior for different values of $t$ and $v$. Next we consider even stricter coverage restrictions, combining Theorems \ref{thm:pcan-bound2} and \ref{thm:cyclic}. \begin{theorem}\label{thm:concat} For integers $t,k,v,m$ and real $\epsilon$ where $k \ge t \ge 2$, $v \ge 2$, $0 \le \epsilon \le 1$ and $m \le v^t + 1 - \frac{\ln k}{\ln (v/\epsilon^{1/(t-1)})}$ there exists an $N\times k$ array $A$ with entries from $[v]$ such that \begin{enumerate} \item for each $C \in {[k] \choose t}$, $A_C$ covers at least $m$ tuples $x\in[v]^t$, \item for at least $(1 - \epsilon){k \choose t}$ column $t$-sets $C$, $A_C$ covers all tuples $x \in [v]^t$, \item $N = O(v^t \ln \left(\frac{v^{t-1}}{\epsilon}\right))$. \end{enumerate} \end{theorem} \begin{proof} We vertically juxtapose a partial $m$-covering array and an $\epsilon$-almost $v^t$-covering array. For $r = \frac{\ln k}{\ln (v/\epsilon^{1/(t-1)})}$ and $m = v^t - r + 1$, (\ref{eq:pcan-bound-asymp}) guarantees the existence of a partial $m$-covering array with $v^t \ln \left(\frac{v^{t-1}}{\epsilon}\right)\{1+\mbox{o}(1)\}$ rows. Theorem \ref{thm:cyclic} guarantees the existence of an $\epsilon$-almost $v^t$-covering array with at most $v^t \ln \left(\frac{v^{t-1}}{\epsilon}\right)$ rows. \qed \end{proof} \begin{corollary}\label{cor:concat} There exists an $N \times k$ array $A$ such that: \begin{enumerate} \item for any $t$-set of columns $C \in {[k] \choose t}$, $A_C$ covers at least $v^t + 1 - v(t-1)$ distinct $t$-tuples $x\in [v]^t$, \item for at least $\left(1-\frac{v^{t-1}}{k^{1/v}}\right){k \choose t}$ column $t$-sets $C$, $A_C$ covers all the distinct $t$-tuples $x\in [v]^t$, \item $N = O(v^{t-1}\ln k)$.
\end{enumerate} \end{corollary} \begin{proof} Apply Theorem \ref{thm:concat} with $m = v^t + 1 - \frac{\ln k}{\ln (v/\epsilon^{1/(t-1)})}$. There are at most $\frac{\ln k}{\ln (v/\epsilon^{1/(t-1)})} -1$ missing $t$-tuples $x \in [v]^t$ in $A_C$ for each of the at most $\epsilon{k\choose t}$ column $t$-sets $C$ that do not satisfy the second condition of Theorem \ref{thm:concat}. To bound the number of missing $t$-tuples $x \in [v]^t$ in $A_C$ from above by a small function $f(t)$ of $t$, it suffices that $\epsilon$ be no larger than \begin{equation}\label{up} v^{t-1}\left(\frac{1}{k}\right)^\frac{t-1}{f(t)+1}. \end{equation} On the other hand, in order for the number $N=O\left(v^{t-1}\ln \left(\frac {v^{t-1}}{\epsilon}\right)\right)$ of rows of $A$ to be asymptotically equal to the lower bound (\ref{eq:can-lower}), it suffices that $\epsilon$ be no smaller than \begin{equation}\label{low} {v^{t-1}\over k^{\frac {1}{v}}}. \end{equation} When $f(t)=v(t-1)-1$, the exponent $\frac{t-1}{f(t)+1}$ equals $\frac{1}{v}$, so (\ref{up}) and (\ref{low}) coincide, completing the proof. \qed \end{proof} Once again we obtain a size that is $O(v^{t-1}\!\log k)$, a goal that has not been reached for covering arrays. This is evidence that even a small relaxation of the covering array requirement yields arrays of essentially the smallest size one can hope for. Next we consider the efficient construction of the arrays whose existence is ensured by Theorem \ref{thm:concat}. Algorithm \ref{algo:apca} is a randomized method to construct an $\mbox{$\mathsf{APCA}$}(N;t,k,v,m,\epsilon)$ whose size $N$ is very close to the bound of Theorem \ref{thm:apcan-bound}. By Markov's inequality, the condition in line \ref{line:cond} of Algorithm \ref{algo:apca} is met with probability at most $1/2$. Therefore, the expected number of times the loop in line \ref{line:loop} repeats is at most $2$.
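The agreement of (\ref{up}) and (\ref{low}) at $f(t)=v(t-1)-1$ can be checked directly, since the exponent $(t-1)/(f(t)+1)$ then reduces to $1/v$; a small Python sketch (function names ours):

```python
def eps_upper(t, k, v, f):
    # eq. (up): largest eps for which each defective column t-set
    # misses at most f of the t-tuples
    return v ** (t - 1) * k ** (-(t - 1) / (f + 1))

def eps_lower(t, k, v):
    # eq. (low): smallest eps for which the row count N stays
    # asymptotically equal to the lower bound
    return v ** (t - 1) * k ** (-1.0 / v)
```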
To prove Theorem \ref{thm:apcan-bound}, $t$-wise independence among the variables is sufficient. Hence, Algorithm \ref{algo:apca} can be derandomized using $t$-wise independent random variables. We can also derandomize the algorithm using the method of conditional expectation. In this method we construct $A$ by considering the $k$ columns one by one and fixing all $N$ entries of a column. Given a set of already fixed columns, to fix the entries of the next column we consider all possible $v^N$ choices, and choose one that provides the maximum conditional expectation of the number of column $t$-sets $C \in \binom{[k]}{t}$ such that $A_C$ covers at least $m$ tuples $x\in[v]^t$. Because $v^N=O(\mathsf{poly}(1/\epsilon))$, this derandomized algorithm constructs the desired array in polynomial time. Similar randomized and derandomized strategies can be applied to construct the array guaranteed by Theorem \ref{thm:cyclic}. Together with Algorithm \ref{algo:m-t} this implies that the array in Theorem \ref{thm:concat} is also efficiently constructible. 
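The randomized construction behind Algorithm \ref{algo:apca} can be sketched in a few lines of Python (a direct transcription, with names ours; the early break on the defective count is replaced by a full count for clarity):

```python
import math
import random
from itertools import combinations

def random_apca(t, k, v, m, eps, rng):
    """Resample an N x k random array over [v] until at most
    eps * C(k, t) column t-sets fail to cover m distinct t-tuples;
    by Markov's inequality the expected number of resamplings is <= 2."""
    N = math.ceil(math.log(2 * math.comb(v ** t, m - 1) / eps)
                  / math.log(v ** t / (m - 1)))
    while True:
        A = [[rng.randrange(v) for _ in range(k)] for _ in range(N)]
        defective = 0
        for C in combinations(range(k), t):
            covered = {tuple(row[c] for c in C) for row in A}
            if len(covered) < m:
                defective += 1
        if defective <= math.floor(eps * math.comb(k, t)):
            return A
```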
\begin{algorithm}[t] \SetKw{Break}{break} \KwIn{Integers $N,t,k,v$ and $m$ where $v,t \ge 2$, $k \ge 2t$ and $1 \le m \le v^t$, and real $0<\epsilon<1$} \KwOut{$A$ : an $\mbox{$\mathsf{APCA}$}(N;t,k,v,m,\epsilon)$} Let $N :=\frac{\ln \left\{2{v^t \choose m - 1}/\epsilon\right\}}{\ln \left(\frac{v^t}{m-1}\right)}$\; \Repeat {isAPCA $=$ true}{ \label{line:loop} Construct an $N \times k$ array $A$ where each entry is chosen independently and uniformly at random from $[v]$\; Set \emph{isAPCA}$:=$ true\; Set \emph{defectiveCount}$:=$ 0\; \For {each column $t$-set $C \in {[k] \choose t}$} { \If {$A_C$ does not cover at least $m$ distinct $t$-tuples $x\in [v]^t$} { Set \emph{defectiveCount}$:=$ \emph{defectiveCount} + $1$\; \If {\emph{defectiveCount} $>$ $\lfloor\epsilon\binom{k}{t}\rfloor$}{ \label{line:cond} Set \emph{isAPCA}$:=$ false\; \Break\; } } } } Output $A$\; \caption{Randomized algorithm for $\epsilon$-almost partial $m$-covering arrays.} \label{algo:apca} \end{algorithm} \section{Final Remarks} We have shown that by relaxing the coverage requirement of a covering array somewhat, powerful upper bounds on the sizes of the arrays can be established. Indeed the upper bounds are substantially smaller than the best known bounds for a covering array; they are of the same order as the \emph{lower} bound for $\mbox{$\mathsf{CAN}$}(t,k,v)$. As importantly, the techniques not only provide asymptotic bounds but also randomized polynomial time construction algorithms for such arrays. Our approach seems flexible enough to handle variations of these problems. For instance, some applications require arrays that satisfy, for different subsets of columns, different coverage or separation requirements \cite{Co2004}. In \cite{GY2006} several interesting examples of combinatorial problems are presented that can be unified and expressed in the framework of $S$-constrained matrices. 
Given a set $S$ of vectors, each of length $t$, an $N\times k$ matrix $M$ is \emph{$S$-constrained} if for every $t$-set $C\in \binom{[k]}{t}$, $M_C$ contains as a row each of the vectors in $S$. The parameter to optimize is, as usual, the number of rows of $M$. One potential direction is to ask for arrays that, in every $t$-set of columns, cover at least $m$ of the vectors in $S$, or arrays in which all vectors in $S$ are covered in all but a small number of column $t$-sets. Exploiting the structure of the members of $S$ appears to require an extension of the results developed here. \section*{Acknowledgements} Research of KS and CJC was supported in part by the National Science Foundation under Grant No. 1421058. \bibliographystyle{plain}
\section{Introduction} Recovery of coherent broadband signals, especially their phase, is a continuously developing field, as detectors are often not fast enough to make direct measurements. Recently, ptychography, a robust lens-less imaging technique originally developed by Hoppe to solve the phase problem in crystallography \cite{Hoppe1969}, has been migrated to the time domain \cite{Spangenberg2015a} by applying the ptychographic iterative engine (PIE) \cite{Faulkner2005} to time-domain equivalent problems. Time-domain ptychography requires the measurement of a spectrum resulting from the product of two coherent signals, the unknown object and a time-delayed probe signal. This is done for a number of time delays, resulting in a sequence of spectra referred to as a spectrogram. In the most fundamental version, the spectrogram is fed to the ptychographic iterative engine, which reconstructs the unknown object signal given that the probe signal is known. Refined codes, e.g., the extended ptychographic iterative engine (ePIE), make use of redundancy in the spectrogram to also reconstruct the probe signal \cite{Lucchini2015}. More recently, a new modality, the implicit ptychographic iterative engine (iPIE), was introduced \cite{Spangenberg2015b,Spangenberg2016a}. In iPIE, the spectrogram is generated from the product of an unknown object with a probe signal which is derived from the object by application of a linear spectral transfer function. Even though time-domain ptychography can be applied to all coherent broadband signals irrespective of carrier frequency, experiments published up to now have focused on ultrafast broadband laser pulses. For example, PIE has been shown to reliably reconstruct unknown ultrafast pulses from corresponding spectrograms or cross-correlation frequency resolved optical gating (XFROG) traces \cite{Spangenberg2015a,Heidt2016}.
In this work, we extend ptychography to reconstruct unknown object signals entirely without a probe signal, by application of different families of spectral phase-only transfer functions. We call the scheme i$^2$PIE since we measure the \emph{square} of a signal that results from applying families of known transfer functions, i.e., \emph{intrinsic knowledge}, to the object signal. The new i$^2$PIE scheme has the potential to simplify ultrafast pulse reconstruction as no probe pulse is required. Instead, it analyzes spectra obtained from collinear second harmonic generation of phase-modulated object pulses. Thus, possible experimental arrangements are similar to those used in multiphoton intrapulse interference phase scan (MIIPS) \cite{Lozovoy2004a}, interferometric frequency resolved optical gating (iFROG) \cite{iFROG,Galler2008}, shaper-assisted collinear spectral phase interferometry for direct electric field reconstruction (SPIDER) \cite{SPIDER} or the dispersion scan (D-Scan) method \cite{dscan}. Experimentally, we implement the method using a 4f-shaper with an SLM to apply selected spectral transfer functions and reconstruct a complex supercontinuum pulse from an all-normal dispersion fiber. The i$^2$PIE algorithm takes a spectrogram as input. The spectrogram $S(\Omega,n)$, consisting of the measured spectra $S_n(\Omega)$, is recorded by applying each transfer function $H_n(\Omega)$ from a set of known spectral phase-only transfer functions $H(\Omega,n)$ sequentially to the unknown object signal and recording the resultant second harmonic spectrum. Here $\Omega = \omega-\omega_0$ is defined relative to the carrier frequency $\omega_0$.
More formally, for each transfer function in a set, the product of the transfer function $H_n(\Omega)$ with the object pulse $E_\mathrm{in}(\Omega)$, \begin{equation} o_n(\Omega) = E_\mathrm{in}(\Omega) H_n(\Omega), \end{equation} is sent into a nonlinear mixer and the resultant spectrally resolved second harmonic intensity is recorded, \begin{equation} S_n(\Omega) = \left| \mathcal{F}\left\{ o_n^2(t) \right\} \right|^2, \end{equation} where $\mathcal{F}$ denotes the Fourier transformation. From such a spectrogram, the unknown object signal $E_\textrm{in}$ can be reconstructed using the i$^2$PIE algorithm as follows. An initial guess is made for the object signal $E_\textrm{in}'(\Omega)$, which defines the modulated signal $o_n(\Omega)$ based on the corresponding transfer function, \begin{equation} o_n(\Omega) = E_\mathrm{in}'(\Omega) H_n(\Omega). \label{eq_square} \end{equation} Assuming perfect phase matching over the entire spectral bandwidth, the second harmonic signal is \begin{equation} g_n(t) = o_n^2(t). \end{equation} This second harmonic signal is used to calculate an updated field $g_n'$ by replacing the current estimated amplitude with the measured amplitude from the corresponding spectrum $S_n(\Omega)$, i.e., \begin{equation} g_n'(\Omega) = \sqrt{S_n(\Omega)} \exp[ \textrm{i} \arg(g_n(\Omega)) ]. \end{equation} Now the modulated signal is updated following the standard ptychographic recipe \begin{equation} o_n'(t) = o_n(t) + \beta U_n(t) \left[g_n'(t) - g_n(t) \right] \end{equation} where \begin{equation} U_n(t) = \frac{|o_n(t)|}{\mathrm{max}\left(|o_n(t)|\right)} \; \frac{o_n^*(t)}{|o_n(t)|^2+\alpha}. \label{eq_soft_div} \end{equation} We use a constant weight $\beta \in [0,1]$ and $\alpha < 1$.
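As an illustration, one full inner i$^2$PIE update for a single transfer function and its measured spectrum, including the final object update $E_\mathrm{in}' = o_n' H_n^*$, may be sketched as follows using NumPy; the array names and FFT sign convention are our assumptions, not part of the published implementation:

```python
import numpy as np

def i2pie_update(E_in, H_n, S_n, alpha=1e-4, beta=0.3):
    """One i2PIE inner update: modulate the current object estimate E_in
    with the phase-only transfer function H_n, enforce the measured second
    harmonic amplitude sqrt(S_n), and back-propagate the correction."""
    o_w = E_in * H_n                      # modulated signal, frequency domain
    o_t = np.fft.ifft(o_w)                # to the time domain
    g_t = o_t ** 2                        # second harmonic signal g_n(t)
    g_w = np.fft.fft(g_t)
    # replace the estimated amplitude by the measured one, keep the phase
    g_w_new = np.sqrt(S_n) * np.exp(1j * np.angle(g_w))
    g_t_new = np.fft.ifft(g_w_new)
    # ptychographic weight U_n(t) with soft division
    U = (np.abs(o_t) / np.max(np.abs(o_t))) * np.conj(o_t) / (np.abs(o_t) ** 2 + alpha)
    o_t_new = o_t + beta * U * (g_t_new - g_t)
    # object update using the intrinsic knowledge of the (phase-only) H_n
    return np.fft.fft(o_t_new) * np.conj(H_n)
```

A consistent estimate is a fixed point of this map: if $S_n$ was generated from $E_\mathrm{in}$ itself, the update returns $E_\mathrm{in}$ unchanged.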
The last step, unique to the i$^2$PIE algorithm, is to update the current estimate of the object signal $E_\mathrm{in}'$, \begin{equation} E_\textrm{in}'(\Omega) = o_n'(\Omega) H_n^*(\Omega), \end{equation} using the intrinsic knowledge of the transfer function used and the current updated second harmonic signal $o_n'$. The procedure is repeated for all recorded spectra multiple times until the object signal $E_\textrm{in}$ is sufficiently reconstructed. As with other variants of PIE, there is a redundancy-of-information requirement. It is met by using a sufficient number of transfer functions and by sensible choices of their specific type. Here, we organize transfer functions into families sharing the same basis function, and we show how one can calculate sensible boundary values for the free parameters of a transfer function family. More formally, to reconstruct the slowly varying envelope of an unknown object signal \begin{equation} E_\mathrm{in}(\Omega) = A(\Omega) \; \mathrm{e}^{\mathrm{i} \psi(\Omega)} \end{equation} using i$^2$PIE, a family of phase-only transfer functions $[\psi_n(\Omega)]$ with $n \in [1 \ldots N]$ is chosen. We restrict ourselves to families that can be characterized by only a few parameters and discuss two examples, i.e., families of polynomial and sinusoidal phases. The parameter boundaries of the chosen transfer function family are either given by the experimental setup or, fundamentally, by discrete sampling theory. In the latter case they are just as easily obtainable for a measurement as they are for simulated signals. Besides the transfer functions, we assume knowledge of the spectral resolution $\Delta\Omega$ of the spectrometer and of the spectrum of the unknown object signal $I(\Omega) = |A(\Omega)|^2$. From the spectrometer resolution we calculate the total time window $T = 2\pi/\Delta\Omega$. Further, we assume that the maximum applied transfer function $\psi_N(\Omega)$ dominates the total phase, i.e.
$\psi_\mathrm{tot}(\Omega) = \psi(\Omega) + \psi_N(\Omega) \approx \psi_N(\Omega)$. With this we can estimate the duration of the object signal after applying the maximum transfer function $\psi_N(\Omega)$ as \begin{equation} \label{eq_sigt} \sigma_t^2 = \frac{1}{2\pi} \int\limits_{-\infty}^\infty \mathrm{d}\Omega \; \left\{ \left( \frac{\partial A(\Omega)}{\partial\Omega} \right)^2 + \left[ \left( \frac{\partial \psi_N(\Omega)}{\partial\Omega} + \overline{t} \right) A(\Omega) \right]^2 \right\} \end{equation} where $\overline{t}$ is the first moment of the temporal intensity. While the first term on the right-hand side represents the bandwidth-limited duration \begin{equation} \sigma_0^2 \doteq \frac{1}{2\pi} \int \mathrm{d}\Omega \; \left( \frac{\partial A(\Omega)}{\partial\Omega} \right)^2, \end{equation} the second term describes signal broadening due to the applied phase modulation. Hereafter, we assume $\overline{t}=0$, which is approximately true for most cases discussed here. We determine the parameter boundaries of $\psi_N(\Omega)$ by restricting the duration of the modulated object signal to a fraction $\gamma$ of the total time window, i.e., to $\gamma T$. Therefore, the broadening due to $\psi_N(\Omega)$ should be at most \begin{equation} \sigma_\psi = \sqrt{\gamma^2 T^2 - \sigma_0^2}. \end{equation} Consider the two families of transfer functions discussed hereafter. First, the family of polynomial phases \begin{equation} \psi(\Omega) = \pm q \, \Omega^k \end{equation} with parameter $q$ and constant order $k \geq 2$. With eq.~(\ref{eq_sigt}) we find for the maximum allowed $k$-th order phase \begin{equation} \label{eq_qmax} q_\mathrm{max} = \pm \sqrt{\frac{\sigma_\psi^2}{\frac{k^2}{2\pi} \int \mathrm{d}\Omega \; \Omega^{2(k-1)} I(\Omega)}}, \end{equation} which can be easily calculated knowing $\gamma T$ and the object spectrum $I(\Omega)$.
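For a sampled spectrum, eq.~(\ref{eq_qmax}) reduces to a one-line numerical integral; a minimal Python sketch (function name ours, assuming a uniform frequency grid):

```python
import numpy as np

def q_max(omega, I, sigma_psi, k=2):
    """Maximum k-th order polynomial phase coefficient, eq. (eq_qmax),
    from a spectrum I sampled on a uniform grid omega (relative to the
    carrier) and the allowed temporal broadening sigma_psi."""
    d_omega = omega[1] - omega[0]
    integral = np.sum(omega ** (2 * (k - 1)) * I) * d_omega
    return np.sqrt(sigma_psi ** 2 / (k ** 2 / (2 * np.pi) * integral))
```

As a sanity check, for a Gaussian spectrum $I(\Omega)=\mathrm{e}^{-\Omega^2}$ and $k=2$ this evaluates to $q_\mathrm{max} = \pi^{1/4}\sigma_\psi$, and $q_\mathrm{max}$ scales linearly with $\sigma_\psi$ for any fixed spectrum.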
Second, we consider the family of sinusoidal phase functions \begin{equation} \psi(\Omega) = a \cos(\Omega\tau+\phi). \end{equation} The members of this family are parameterized through the amplitude $a$, frequency $\tau$ and phase $\phi$. With eq.~(\ref{eq_sigt}) we find \begin{eqnarray} \nonumber a_\mathrm{max} \tau_\mathrm{max} & = & \sqrt{\frac{2 \sigma_\psi^2}{G(0) - \Re\{ G(2\tau_\mathrm{max}) \mathrm{e}^{2 \mathrm{i} \phi} \}}} \\ \label{eq_amax} & \approx & \sqrt{\frac{2 \sigma_\psi^2}{G(0)}} \end{eqnarray} with the Fourier transform of the spectral intensity \begin{equation} G(t) \doteq \frac{1}{2\pi} \int \mathrm{d}\Omega \; I(\Omega) \; \mathrm{e}^{\mathrm{i} \Omega t}. \end{equation} Typically, the average spectral intensity $G(0)$ is larger than $\Re\{ G(2\tau_\mathrm{max}) \mathrm{e}^{2 \mathrm{i} \phi} \}$, and the approximate expression~(\ref{eq_amax}) can be readily used. Also note that the approximate expression is independent of the phase $\phi$. In practice, we fix either $a_\mathrm{max}$ or $\tau_\mathrm{max}$ and use eq.~(\ref{eq_amax}) to calculate the other. We evaluate the i$^2$PIE algorithm by reconstructing a set of random laser pulses. The pulses are used to numerically calculate input spectrograms based on a transfer function family, which are fed to i$^2$PIE. In each case we analyze how well the chosen transfer function family performs against the set of random object pulses. For each object pulse we calculate the root mean square (rms) error between the input and reconstructed spectrograms. Reconstructions with $\log_{10}(\textrm{rms})<-3.5$ are considered successful and those with $\log_{10}(\textrm{rms}) \geq -3.5$ unsuccessful. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{fig1} \caption{Histogram of the logarithmic rms error of reconstructions of the random pulse set for a family of quadratic phase transfer functions with $N=6$ (a), $N=12$ (b), $N=25$ (c) and $N=50$ (d) members.
Green bars indicate successful and red bars unsuccessful reconstructions.} \label{figQ} \end{figure} The random object set consists of 1000 object pulses. Their randomly shaped spectrum is centered around 800~nm with a spectral bandwidth between 2~nm and 20~nm. The spectral phase is, in the first case, polynomial up to fourth order with random coefficients and, in the second case, sinusoidal with random amplitude, frequency and phase. Each object pulse is generated for a temporal window of 8~ps, which corresponds to a spectral resolution of 0.27~nm at 800~nm, and on a grid of 1024 samples. In all reconstructions we used $\alpha=0.0001$ and $\beta=0.3$, and each reconstruction started with an initial guess of a 200~fs Fourier-limited Gaussian pulse. We set $\gamma=1/8$ and used $N$ transfer functions in all families. For every object pulse we calculate the parameter boundaries of the respective transfer functions from $\gamma T$ and the fundamental spectrum according to eqs.~(\ref{eq_qmax}) and (\ref{eq_amax}). We sequentially applied the i$^2$PIE update for the entire family of transfer functions in a set and repeated the process 500 times before the rms error was evaluated. First, we start with the family of quadratic phase transfer functions ($k=2$). The individual members are characterized by $q_n = (n-N/2-1) q_\mathrm{max}$. Shown in Fig.~\ref{figQ} are histograms of the logarithmic rms error of all reconstructions with $N=6, 12, 25, 50$. We find that with as few as six transfer functions the method successfully reconstructs 92.7\% of all objects. The success rate increases to close to 100\% for $N$ as large as 50, and the mean of the rms error decreases by several orders of magnitude. We find similar results for $k=3,4$. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{fig2} \caption{Histogram of the logarithm of the rms error for three families of sinusoidal phase transfer functions.
Green bars indicate successful and red bars unsuccessful reconstructions. The blue curve shows the cumulative percentage of successful reconstructions. (a) Varying $\phi$ from 0 to $2\pi$ for $\tau=300$~fs and $a$ from equation~(\ref{eq_amax}). (b) Fixing $\phi=0$, $\tau=300$~fs and varying $a$ within the limits calculated with equation~(\ref{eq_amax}). (c) Fixing $\phi=0$, $a=2.7$ and varying $\tau$ within the limits calculated with eq.~(\ref{eq_amax}).} \label{fig1} \end{figure} Next we consider the families of sinusoidal phase transfer functions. Three families can be defined by varying the parameters $a$, $\tau$ and $\phi$, respectively. First, we arbitrarily set $\tau=300$~fs and calculate the corresponding amplitude for every object pulse using equation~(\ref{eq_amax}). Then we vary $\phi_n$ between 0 and $\phi_N = 2\pi$ in $N$ equidistant steps. Second, we arbitrarily set $\phi=0$ and $\tau=300$~fs, and use equation~(\ref{eq_amax}) to calculate the maximum amplitude for every object pulse. The individual transfer functions then have amplitudes $a_n = (n-N/2-1) a_\mathrm{max}$. Finally, we arbitrarily set $\phi=0$ and $a=2.7$, calculate $\tau_\mathrm{max}$ for every object pulse and vary the frequency according to $\tau_n = \tau_\mathrm{max} n/N$. Shown in Fig.~\ref{fig1} are histograms of the logarithmic rms errors for the three families of sinusoidal phase transfer functions. The percentage of successful reconstructions is found to be 95\% or higher. In our lab, the broadband object pulse to be characterized is generated by sending an 800~nm seed pulse of 80~fs duration at 80~MHz repetition rate from a Ti:Sapphire oscillator into an all-normal dispersion (ANDi) photonic crystal fiber. The fiber output is then compressed by 48 bounces on a chirped mirror with 160~fs$^2$ compression per bounce. The resulting pulse serves as the object pulse $E_\mathrm{in}$.
A 4f-shaper with a Jenoptik 640d spatial light modulator is used to sequentially apply all transfer functions $H_n(\Omega)$ from a specific family to the object pulse. The output from the 4f-shaper is focused onto a 20~$\mu$m thick BBO crystal by a 0.9~NA objective, after which the frequency-doubled light is collected and focused into an AvaSpec-3648 spectrometer with a resolution of 3.92~THz at 400~nm. The set of recorded second harmonic spectra is stored in a spectrogram $S_n(\Omega)$. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{fig3} \caption{Measured spectrogram for scanning (a) the quadratic phase between $-2250$~fs$^2$ and $2250$~fs$^2$, (b) the sinusoidal phase $\phi$ between $-\pi$ and $\pi$ with $a=15$ and $\tau=25$~fs, (c) the sinusoidal amplitude $a$ between $-30$ and $30$ with $\phi=0$ and $\tau=25$~fs, and (d) the sinusoidal frequency $\tau$ between $-100$~fs and $100$~fs with $\phi=0$ and $a=\pi$.} \label{fig_results} \end{figure} We took measurements based on families of quadratic and sinusoidal phase transfer functions, varying the respective parameters as discussed in the simulation section. In Fig.~\ref{fig_results} the measured spectrograms are shown for the different cases when (a) the quadratic phase is varied between $-2250$~fs$^2$ and $2250$~fs$^2$, (b) the sinusoidal phase $\phi$ between $-\pi$ and $\pi$ with $a=15$ and $\tau=25$~fs, (c) the sinusoidal amplitude $a$ between $-30$ and $30$ with $\phi=0$ and $\tau=25$~fs, and (d) the sinusoidal frequency $\tau$ between $-100$~fs and $100$~fs with $\phi=0$ and $a=\pi$. The spectrograms are further used to retrieve the amplitude and phase of the object pulse. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{fig4} \caption{The reconstructed spectral intensity is shown in (a) and the phase in (b).
Purple: quadratic phase scan; blue: $\phi$ scan; red: amplitude $a$ scan; yellow: frequency $\tau$ scan.} \label{fig_amp_phase} \end{figure} In Fig.~\ref{fig_amp_phase}(a) we plot the reconstructed spectral intensities for all four families of spectral transfer functions on top of each other, and in Fig.~\ref{fig_amp_phase}(b) the respective reconstructed spectral phases. We find reasonable agreement in the reconstructed spectral amplitudes and excellent agreement in the retrieved phases in regions of nonzero amplitude. In summary, we have demonstrated that the i$^2$PIE algorithm can reconstruct, with excellent results, the amplitude and phase of an unknown object signal from a measured second harmonic spectrogram recorded by applying different families of phase-only spectral transfer functions. In principle, the choice of family is arbitrary, and we derive a formalism that allows the scan limits to be calculated from only the spectral resolution of the spectrometer and the spectral intensity of the object pulse. \section*{Acknowledgments} This research was funded in part through the Swiss National Science Foundation (Grant Number: 200020-178812/1) as well as the National Research Foundation of South Africa (Grant Number: 47793).
\section{Introduction}% \label{section-introduction} Query learning is a framework in which a learning algorithm attempts to identify a target concept using specified types of queries to an oracle (or teacher) about the target concept~\cite{Angluin:1988}. For example, if the target concept is a regular language $L$, a membership query asks whether a string $x$ is a member of $L$, and is answered either ``yes'' or ``no''. An equivalence query asks whether a hypothesis language $L'$ (represented, for example, by a deterministic finite acceptor) is equal to $L$. In the case of an equivalence query, the answer may be ``yes'', in which case the learning algorithm has succeeded in exact identification of the target concept, or it may be ``no'', accompanied by a counterexample, that is, a string $x$ in $L$ but not in $L'$ (or vice versa). The counterexample is a witness that $L'$ is not equal to $L$. When $L'$ is not equal to $L$, there is generally a choice (often an infinity) of possible counterexamples, and we require that the learning algorithm works well regardless of which counterexample is chosen by the teacher. To account for this in terms of quantifying the running time of the learning algorithm, we include a parameter that is the maximum length of any counterexample returned by the teacher at any point in the learning process. In this setting, the \ensuremath{{L^*}}\ algorithm of Angluin~\cite{Angluin87} learns any regular language $L$ using membership and equivalence queries in time polynomial in the size of the smallest deterministic finite acceptor for $L$ and the length of the longest counterexample chosen by the teacher. As shown in~\cite{Angluin:1990}, there can be no such polynomial time algorithm using just membership queries or just equivalence queries. The assumption that equivalence queries are available may seem unrealistic. 
How is a person or a program to judge the equivalence of the target concept to some large, complex, technical specification of a hypothesis? If the hypothesis and the target concept are both deterministic finite acceptors, there is a polynomial time algorithm to test equivalence and return a counterexample in case the answer is negative. Alternatively, if there is a polynomial time algorithm for exact learnability of a class $\C$ of concepts using membership and equivalence queries, then it may be transformed into a polynomial time algorithm that learns approximations of concepts from $\C$ using membership queries and randomly drawn labeled examples~\cite{Angluin87,Angluin:1988}. In this transformation, there is an unknown probability distribution on examples, and an approximation bound $\epsilon > 0$ and a confidence bound $\delta > 0$ are given, and the algorithm draws a corpus of labeled examples of cardinality polynomial in the size of the target concept, $1/\epsilon$ and $\log(1/\delta)$. To answer an equivalence query, the hypothesis is checked against the labeled examples in the corpus. If the hypothesis agrees with the labels of all the examples in the corpus, the equivalence query is answered ``yes'', and otherwise, any exception supplies a counterexample to return as the answer of the equivalence query. The final hypothesis output by the transformed algorithm will, with probability at least $1-\delta$, have a probability of at most $\epsilon$ of disagreeing with the target on examples drawn from the unknown probability distribution. Since the publication of $L^*$, there have been a number of substantial improvements and extensions of the algorithm, as well as novel and unanticipated applications in the analysis, verification and synthesis of programs, protocols and hardware, following the work of Peled et al.\ that identified the applicability of $L^*$ in the area of formal methods~\cite{PeledVY02}. 
In a recent CACM review article, Vaandrager~\cite{Vaandrager:2017} explains Model Learning, which takes a black box approach to learning a finite state model of a given hardware or software system using membership queries (implemented by giving the system a sequence of inputs and observing the sequence of outputs) and equivalence queries (implemented using a set of test sequences in which the outputs of the hypothesis are compared with the outputs of the given system.) The learned models may then be analyzed to find discrepancies between a specification and its implementation, or between different implementations. He cites applications in telecommunications~\cite{HagererHNS02,ShahbazG14}, the automotive industry~\cite{FengLMNSW13}, online conference systems~\cite{WindmullerNSHB13}, as well as analyzing botnet protocols~\cite{ChocSS10}, smart card readers~\cite{ChaluparPPR14}, bank card protocols~\cite{AartsRP13}, network protocols~\cite{RuiterP15} and legacy software~\cite{MargariaNRS04,SchutsHV16}. Another application of finite state machine learning algorithms is in the assume-guarantee approach to verifying systems by dividing them into modules that can be verified individually. Cobleigh, Giannakopoulou and P\u{a}sare\u{a}nu~\cite{Cobleigh:2003} first proposed using a learning algorithm to learn a correct and sufficient contextual assumption for the component being verified, and there has since been a great deal of research progress in this area~\cite{NamA06}. If we consider {\it reactive systems}, that is, systems that maintain an ongoing interaction with their environment (e.g., operating systems, communication protocols, or robotic swarms), the restriction to models specified by finite automata processing finite sequences of inputs is too limiting. 
Instead, one trajectory of the behavior of a reactive system may be modeled using an infinite word ($\omega$-word), each symbol of which specifies the current state of the system and the environment at a given time. The system itself may be modeled by an $\omega$-automaton, that is, a finite state automaton that processes $\omega$-words. The desired behavior of such a system may be specified by a linear temporal logic formula, that defines the set of $\omega$-words that constitute ``good'' behaviors of the system. Researchers have thus sought query learning algorithms for $\omega$-automata that could be used in the settings of model learning or assume-guarantee verification for reactive systems. However, learning $\omega$-automata seems to be a much more challenging problem than learning automata on finite words, in part because the Myhill-Nerode characterization for regular languages (stating that there is a unique minimum deterministic acceptor that can be constructed using the right congruence classes of the language) does not hold in general for regular $\omega$-languages. The Myhill-Nerode characterization is the basis of the \ensuremath{{L^*}}\ algorithm and its successors. There is no known polynomial time algorithm using membership and equivalence queries to learn even the whole class $\dbw$ of languages recognized by deterministic B\"{u}chi acceptors, which is a strict subclass of the class of all regular $\omega$-languages. Maler and Pnueli~\cite{Maler1995} have given a polynomial time algorithm using membership and equivalence queries to learn the {weak regular $\omega$-languages}. This class, denoted $\dwpw$, is the set of languages accepted by deterministic weak parity automata, and is a non-trivial subclass of $\dbw$. 
The class $\dwpw$ does have a Myhill-Nerode property, but this alone does not suffice for extending \ensuremath{{L^*}}\ to learn this class, since the observed data might suggest conflicting ways to mark accepting states in an automaton agreeing with the observed data. Maler and Pnueli's algorithm manages to overcome this problem by finding a set of membership queries to ask to resolve the conflict. In the context of assume-guarantee verification, Farzan et al.~\cite{FarzanCCTW08} proposed a direct application of \ensuremath{{L^*}}\ to learn the full class of regular $\omega$-languages. Their approach is based on the result of Calbrix, Nivat and Podelski~\cite{CalbrixNP93} showing that a regular $\omega$-language $L$ can be characterized by the regular language \ensuremath{L_{\$}}\ of finite strings $u {\$} v$ representing the set of ultimately periodic words ${u(v)}^{\omega}$ in $L$. This establishes that a regular $\omega$-language $L$ is learnable using membership and equivalence queries in time polynomial in the size of the minimal deterministic finite acceptor for \ensuremath{L_{\$}}. However, the size of this representation may be exponentially larger than its $\omega$-automaton representation. More recently, Angluin and Fisman~\cite{AngluinF16} have given a learning algorithm using membership and equivalence queries for general regular $\omega$-languages represented by families of deterministic finite acceptors, which improves on the \ensuremath{L_{\$}}\ representation, however the running time is not bounded by a polynomial in the representation. Clearly, much more research is needed in the area of query learning of $\omega$-automata. Despite the difficulties in learning $\omega$-automata, which are used in the analysis of linear temporal logic, in this paper we consider the theoretical question of learning $\omega$-tree automata, which are used in the analysis of branching temporal logic~\cite{EmersonS88,KupfermanVW00}. 
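The \ensuremath{L_{\$}}\ characterization rests on the fact that membership of an ultimately periodic word ${u(v)}^{\omega}$ in a language given by a deterministic acceptor is decidable by finite means. As a concrete illustration, the following Python sketch (the dictionary encoding of the transition function and all names are ours) decides whether a deterministic B\"{u}chi acceptor accepts ${u(v)}^{\omega}$:

```python
def dbw_accepts(delta, q0, F, u, v):
    """Decide whether the deterministic Buchi acceptor (delta, q0, F)
    accepts the ultimately periodic omega-word u v^omega.
    delta maps (state, letter) -> state; F is the set of accepting states."""
    q = q0
    for a in u:                      # read the finite prefix u
        q = delta[(q, a)]
    seen, order = {}, []
    while q not in seen:             # iterate v-blocks until a block-start state repeats
        seen[q] = len(order)
        order.append(q)
        for a in v:
            q = delta[(q, a)]
    # accepted iff some v-block run starting on the cycle visits F
    for s in order[seen[q]:]:
        r = s
        if r in F:
            return True
        for a in v:
            r = delta[(r, a)]
            if r in F:
                return True
    return False
```

For example, with the two-state deterministic B\"{u}chi acceptor for "infinitely many $a$'s", the word $(ab)^\omega$ is accepted while $a\,b^\omega$ is rejected.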
As a potential motivation for studying learning of $\omega$-tree languages, we consider a setting in which two players play an infinite game in which the opponent chooses one of two actions ($1$ and $2$) and the player responds with a symbol chosen from a finite alphabet $\Sigma$. We can represent the strategy of the player as a binary $\omega$-tree in which each node is the player's state, the two edges leaving the node are the possible choices of the opponent (action $1$ or $2$), each edge is labeled with the response action (from $\Sigma$) of the player, and each leads to a (potentially new) state for the player. In this interpretation, a set of $\omega$-trees represents a property of strategies, and the task of the learner is to learn an initially unknown property of strategies by using membership queries (``Does this strategy have the unknown property?'') and equivalence queries (``Is this property the same as the unknown property of strategies?'') answered either ``yes'' or with a counterexample, that is, a strategy that distinguishes the two properties. Because of the difficulty of the problem, we restrict our attention to $\omega$-tree languages such that all of their paths satisfy a certain temporal logic formula, or equivalently, a property of $\omega$-words. Given an $\omega$-word language $L$, we use $\Trees_d(L)$ to denote the set of all $d$-ary $\omega$-trees $t$ all of whose paths are in $L$. The $\omega$-tree language $\Trees_d(L)$ is often referred to as the \emph{derived} language of $L$. In this context, it is natural to ask whether learning a derived $\omega$-tree language $\Trees_d(L)$ can be reduced to learning the $\omega$-word language $L$. We answer this question affirmatively for the case that $L$ can be recognized by a deterministic B\"{u}chi word automaton and learned using membership and equivalence queries. 
Applying this reduction to the result of Maler and Pnueli on polynomially learning languages in $\dwpw$ we obtain a polynomial learning algorithm for derived languages in $\Trees_d(\dwpw)$ using membership and equivalence queries. Moreover, any progress on polynomially learning an extended subclass $\C$ of $\dbw$ using membership and equivalence queries can be automatically lifted to learning $\Trees_d(\C)$. The framework of the reduction is depicted in Fig.~\ref{fig:reduction-framework}. An algorithm $\A_{{\Trees}}$ for learning $\Trees_d(L)$ uses a learning algorithm $\A$ for $L$ to complete its task. The membership and equivalence queries ($\MQ$ and $\EQ$, henceforth) of algorithm $\A_{{\Trees}}$ are answered by respective oracles $\MQ$ and $\EQ$ for $\Trees_d(L)$. Since $\A$ asks membership and equivalence queries about $L$ rather than $\Trees_d(L)$, the learner $\A_{{\Trees}}$ needs to find a way to answer these queries. If $\A$ asks a membership query about an $\omega$-word, $\A_{{\Trees}}$ can ask a membership query about an $\omega$-tree all of whose paths are identical to the given $\omega$-word. Since the tree is in $\Trees_d(L)$ iff the given word is in $L$, it can pass the answer as is to $\A$. If $\A$ asks an equivalence query using an acceptor $M$ for an $\omega$-language, then $\A_{{\Trees}}$ can ask an equivalence query using an acceptor $M^T$ that accepts an $\omega$-tree if all its paths are accepted by $M$. If this query is answered positively, then $\A_{{\Trees}}$ can output the tree acceptor $M^T$ and halt. The challenge starts when this query is answered negatively. When the $\EQ$ is answered negatively, a counterexample tree $t$ is given. There are two cases to consider.
Either this tree is in $\Trees_d(L)$ but is rejected by the hypothesis acceptor $M^T$, in which case $t$ is referred to as a \emph{positive counterexample}; or this tree is not in $\Trees_d(L)$ but is accepted by the hypothesis acceptor $M^T$, in which case $t$ is referred to as a \emph{negative counterexample}. If $t$ is a positive counterexample, since $M^T$ rejects $t$ there must be a path in $t$ which is rejected by $M$. It is not too difficult to extract that path. The real challenge is dealing with a negative counterexample. This part is grayed out in the figure. In this case the tree $t$ is accepted by $M^T$ yet it is not in $\Trees_d(L)$. Thus, all the paths of the tree are accepted by $M$, yet at least one path is not in $L$. Since $L$ is not given, it is not clear how we can extract such a path. Since we know that not all paths of $t$ are contained in $L$, an unrestricted subset query could help us. Unrestricted subset queries ($\USQ$) are queries on the inclusion of a current hypothesis in the unknown language that are answered by ``yes'' or ``no'' with an accompanying counterexample in the case the answer is negative. \begin{figure} \centering \scalebox{0.30}{ \includegraphics{ATrees_Fig_NG.pdf} } \caption{The reduction framework}\label{fig:reduction-framework} \end{figure} Since we do not have access to $\USQ$s we investigate whether we can obtain such queries given the queries we have. We show that unrestricted subset queries can be simulated by restricted subset queries. Restricted subset queries ($\RSQ$) on $\omega$-words are subset queries that are \emph{not} accompanied by counterexamples. Simulating the former by the latter thus amounts to constructing the desired counterexample rather than receiving it. To discharge the use of restricted subset queries (as the learner is not provided such queries either) we investigate the relation between subsets of $\omega$-words and $\omega$-trees.
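For instance, the membership query of $\A$ about ${u(v)}^{\omega}$ mentioned above can be answered by a membership query about the $\omega$-tree all of whose paths are labeled ${u(v)}^{\omega}$, and such a tree has a small regular representation. The following Python sketch illustrates this; the dictionary-based encoding of the automaton is our own illustration, not a representation prescribed by the paper.

```python
def constant_path_tree(u, v, d):
    """Regular omega-tree automaton (Q, q0, delta, tau) over directions
    1..d whose tree labels every infinite path with u v^omega.
    State q means: the next edge read carries the (q+1)-th letter of
    u v^omega.  Assumes v is nonempty."""
    w = u + v
    n = len(w)

    def nxt(q):
        q += 1
        return q if q < n else len(u)  # wrap around into the v-part

    delta = {(q, i): nxt(q) for q in range(n) for i in range(1, d + 1)}
    tau = {(q, i): w[q] for q in range(n) for i in range(1, d + 1)}
    return set(range(n)), 0, delta, tau
```

Every direction leads to the same successor state and label, so all paths of the represented tree spell ${u(v)}^{\omega}$, as required.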
Finally, we show that the desired subset queries on $\omega$-words can be given to the $\omega$-tree learning algorithm by means of $\omega$-tree membership queries. From these we can construct a series of procedures to implement the grayed area. The subsequent sections contain definitions of $\omega$-words, $\omega$-trees and automata processing them, derived $\omega$-tree languages, the problem of learning classes of $\omega$-word and $\omega$-tree languages, preliminary results, the algorithm for the main reduction, and some discussion. We also include an Appendix with a few examples illustrating some of the procedures involved in our framework. \section{Definitions}% \label{section-definitions} \subsection{Words and trees} (For more details see Gr\"{a}del, Thomas and Wilke~\cite{Gradel2002}, Perrin and Pin~\cite{PP2004}, and L\"{o}ding~\cite{Loding11}.) Let $\Sigma$ be a fixed finite alphabet of symbols. The set of all finite words over $\Sigma$ is denoted $\Sigma^*$. The empty word is denoted $\varepsilon$, and the length of a finite word $x$ is denoted $|x|$. $\Sigma^+$ is the set of all nonempty finite words over $\Sigma$, and for a nonnegative integer $k$, $\Sigma^k$ is the set of all finite words over $\Sigma$ of length equal to $k$. A finite word language is a subset of $\Sigma^*$. An $\omega$-word over $\Sigma$ is an infinite sequence $w = \sigma_1 \sigma_2 \sigma_3 \cdots$ where each $\sigma_i \in \Sigma$. The set of all $\omega$-words over $\Sigma$ is denoted $\Sigma^{\omega}$. An $\omega$-word language is a subset of $\Sigma^{\omega}$. The $\omega$-regular expressions are analogous to finite regular expressions, with the added operation $S^{\omega}$, where $S$ is a set of finite words, and the restriction that concatenation combines a set of finite words as the left argument with a set of finite words or $\omega$-words as the right argument. 
The set $S^{\omega}$ is the set of all $\omega$-words $s_1 s_2 \cdots$ such that for each $i$, $s_i \in S$ and $s_i \neq \varepsilon$. For example, ${(a+b)}^* {(a)}^{\omega}$ is the set of all $\omega$-words over $\{a,b\}$ that contain finitely many occurrences of $b$. If $S \subseteq \Sigma^*$, $n$ is a nonnegative integer and $u \in \Sigma^*$, we define the \concept{length and prefix restricted} version of $S$ by $S[n,u] = S \cap \Sigma^n \cap (u \cdot \Sigma^*)$. This is the set of all words in $S$ of length $n$ that begin with the prefix $u$. We also define the \concept{length restricted} version of $S$ by $S[n] = S[n,\varepsilon]$, that is, the set of all words in $S$ of length $n$. Let $d$ be a positive integer. We consider $T_d$, the unlabeled complete $d$-ary $\omega$-tree whose directions are specified by $D = \{1,\ldots, d\}$. The \concept{nodes} of $T_d$ are the elements of $D^*$. The \concept{root} of $T_d$ is the node $\varepsilon$, and the \concept{children} of node $v$ are $v \cdot i$ for $i \in D$. An \concept{infinite path} $\pi$ in $T_d$ is a sequence $x_0, x_1, x_2, \ldots$ of nodes of $T_d$ such that $x_0$ is the root and for all nonnegative integers $n$, $x_{n+1}$ is a child of $x_n$. An infinite path in $T_d$ corresponds to an $\omega$-word over $D$ giving the sequence of directions traversed by the path starting at the root. A labeled $d$-ary $\omega$-tree (or just \concept{$\omega$-tree}) is given by a mapping $t: D^+ \rightarrow \Sigma$ that assigns a symbol in $\Sigma$ to each non-root node of $T_d$. We may think of $t$ as assigning the symbol $t(v \cdot i)$ to the tree edge from node $v$ to its child node $v \cdot i$. The set of all labeled $d$-ary $\omega$-trees is denoted $T_d^{\Sigma}$. An $\omega$-tree language is a subset of $T_d^{\Sigma}$. 
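The length and prefix restriction $S[n,u]$ defined above is straightforward to compute for a finite set of words; the following Python sketch is an illustration only.

```python
def restrict(S, n, u=""):
    """S[n, u]: the words in S of length n that begin with the prefix u.
    With the default u = "" this computes the length restriction S[n]."""
    return {w for w in S if len(w) == n and w.startswith(u)}
```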
If $\pi = x_0, x_1, x_2, \ldots$ is an infinite path of $T_d$, then we define $t(\pi)$ to be the $\omega$-word $t(x_1), t(x_2), \ldots$ consisting of the sequence of labels of the non-root nodes of $\pi$ in $t$. (Recall that $t$ does not label the root node.) \subsection{Automata on words}% \label{subsection-Automata-on-words} A \concept{finite state word automaton} is given by a tuple $M = (Q,q_0,\delta)$, where $Q$ is a finite set of states, $q_0 \in Q$ is the initial state, and $\delta: Q \times \Sigma \rightarrow 2^Q$ is the (nondeterministic) transition function. The automaton is \concept{deterministic} if $\delta(q,\sigma)$ contains at most one state for every $(q,\sigma) \in Q \times \Sigma$, and \concept{complete} if $\delta(q,\sigma)$ contains at least one state for every $(q,\sigma) \in Q \times \Sigma$. For a complete deterministic automaton we extend $\delta$ to map $Q \times \Sigma^*$ to $Q$ in the usual way. Let $x = \sigma_1 \sigma_2 \cdots \sigma_k$ be a finite word, where each $\sigma_n \in \Sigma$. A \concept{run} of $M$ on $x$ is a sequence of $k+1$ states $r_0, r_1, \ldots, r_k$ such that $r_0 = q_0$ is the initial state and $r_n \in \delta(r_{n-1},\sigma_n)$ for integers $1 \le n \le k$. Let $w = \sigma_1 \sigma_2 \cdots$ be an $\omega$-word, where each $\sigma_n \in \Sigma$. A \concept{run} of $M$ on $w$ is an infinite sequence of states $r_0, r_1, r_2, \ldots$ such that $r_0 = q_0$ is the initial state and $r_n \in \delta(r_{n-1},\sigma_n)$ for all positive integers $n$. A \concept{nondeterministic finite acceptor} is given by $M = (Q,q_0,\delta,F)$, where $(Q,q_0,\delta)$ is a finite state word automaton, and the new component $F \subseteq Q$ is the set of accepting states. $M$ is a \concept{deterministic finite acceptor} if $\delta$ is deterministic. Let $M$ be a nondeterministic finite acceptor and $x \in \Sigma^*$ a finite word of length $n$. 
$M$ \concept{accepts} $x$ iff there is a run $r_0, r_1, \ldots, r_n$ of $M$ on $x$ such that $r_n \in F$. The language \concept{recognized} by $M$ is the set of all finite words accepted by $M$, denoted by $\lang{M}$. The class of all finite word languages recognized by deterministic finite acceptors is denoted by $\dfw$, and by nondeterministic finite acceptors, $\nfw$. These representations are equally expressive, that is, $\nfw = \dfw$. Turning to finite state word automata processing $\omega$-words, a variety of different acceptance criteria have been considered. Such an acceptor is given by a tuple $M = (Q,q_0,\delta,\alpha)$, where $(Q,q_0,\delta)$ is a finite state word automaton and $\alpha$ specifies a mapping from $2^Q$ to $\{0,1\}$ which gives the criterion of acceptance for an $\omega$-word $w$. Given an $\omega$-word $w$ and a run $r = r_0, r_1, \ldots$ of $M$ on $w$, we consider the set $\infset(r)$ of all states $q \in Q$ such that $r_n = q$ for infinitely many indices $n$. The acceptor $M$ \concept{accepts} the $\omega$-word $w$ iff there exists a run $r$ of $M$ on $w$ such that $\alpha(\infset(r)) = 1$. That is, $M$ accepts $w$ iff there exists a run of $M$ on $w$ such that the set of states visited infinitely often in the run satisfies the acceptance criterion $\alpha$. The language \concept{recognized} by $M$ is the set of all $\omega$-words accepted by $M$, denoted $\lang{M}$. For a B\"{u}chi acceptor, the acceptance criterion $\alpha$ is specified by giving a set $F \subseteq Q$ of accepting states and defining $\alpha(S) = 1$ iff $S \cap F \neq \emptyset$. In words, a B\"{u}chi acceptor $M$ accepts $w$ if and only if there exists a run $r$ of $M$ on $w$ such that at least one accepting state is visited infinitely often in $r$. For a co-B\"{u}chi acceptor, the acceptance criterion $\alpha$ is specified by giving a set $F \subseteq Q$ of rejecting states and defining $\alpha(S) = 1$ iff $S \cap F = \emptyset$. 
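For a complete deterministic B\"{u}chi acceptor, acceptance of an ultimately periodic word ${u(v)}^{\omega}$ can be decided by running the automaton until a state repeats at a period boundary; from that repetition on, the recorded states are exactly those visited infinitely often. A Python sketch (the dictionary transition function is an assumed encoding, not the paper's):

```python
def dbw_accepts(delta, q0, F, u, v):
    """Does the complete DBW (delta, q0, F) accept u v^omega?
    delta: dict (state, symbol) -> state; v must be nonempty."""
    q = q0
    for a in u:                 # consume the finite prefix u
        q = delta[(q, a)]
    seen = {}                   # state at a period boundary -> trace index
    trace = []                  # state before each letter of the v-blocks
    while q not in seen:
        seen[q] = len(trace)
        for a in v:
            trace.append(q)
            q = delta[(q, a)]
    # from the first occurrence of the repeating boundary state onward,
    # every recorded state is visited infinitely often
    return bool(set(trace[seen[q]:]) & F)
```

Since the run is deterministic and there are finitely many states, a boundary state must repeat after at most $|Q|$ blocks of $v$, so the loop terminates.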
For a parity acceptor, $\alpha$ is specified by giving a function $c$ mapping $Q$ to an interval of integers $[i,j]$ (called \concept{colors} or \concept{priorities}) and defining $\alpha(S) = 1$ iff the minimum integer in $c(S)$ is even. A \emph{parity} automaton is said to be \concept{weak} if no two strongly connected states have distinct colors, i.e., if, looking at the partition of its states into maximal strongly connected components (MSCCs), all states of an MSCC have the same color. Clearly every weak parity automaton can be colored with only two colors, one even and one odd, in which case the colors are often referred to as \emph{accepting} or \emph{rejecting}. It follows that a weak parity automaton can be regarded as either a B\"{u}chi or a co-B\"{u}chi automaton. If in addition no rejecting MSCC is reachable from an accepting MSCC, the acceptor is said to be \emph{weak B\"uchi}. Likewise, a weak parity acceptor where no accepting MSCC is reachable from a rejecting MSCC is said to be a \emph{weak co-B\"{u}chi} acceptor. The classes of languages of $\omega$-words recognized by these kinds of acceptors will be denoted by three/four-letter acronyms, with \class{N} or \class{D} (for nondeterministic or deterministic), \class{B}, \class{C}, \class{P}, \class{wB}, \class{wC} or \class{wP} (for B\"{u}chi, co-B\"{u}chi, parity or their respective weak variants) and then \class{W} (for $\omega$-words). Thus $\dwbw$ is the class of $\omega$-word languages recognized by deterministic weak B\"{u}chi word acceptors.
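The weakness condition (all states of an MSCC share one color) can be tested directly from mutual reachability: two states lie in the same MSCC iff each reaches the other by a nonempty path. A brute-force Python sketch, with an assumed dictionary encoding of the transition graph:

```python
def is_weak(succ, color):
    """succ: state -> set of successor states; color: state -> int.
    Weak iff any two mutually reachable states carry the same color."""
    states = list(succ)
    reach = {q: set(succ[q]) for q in states}
    changed = True
    while changed:  # transitive closure (reachability by paths of length >= 1)
        changed = False
        for q in states:
            new = set().union(*[succ[r] for r in reach[q]])
            if not new <= reach[q]:
                reach[q] |= new
                changed = True
    return all(color[p] == color[q]
               for p in states for q in reach[p] if p in reach[q])
```

A linear-time variant would compute MSCCs with Tarjan's algorithm; the fixed-point closure above keeps the sketch short.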
\begin{wrapfigure}{r}{0.3\textwidth} \centering \vspace{-8mm} \begin{minipage}[t]{3cm}\centering \begin{center} \vspace{-1mm} \scalebox{0.7}{% \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm,semithick,initial text=] \node[label] (Reactivity) {$\dpw$}; \node[label] (Recurrence) [below left of=Reactivity]{$\dbw$}; \node[label] (Persistence) [below right of=Reactivity]{$\dcw$}; \node[label] (Obligation) [below of=Reactivity, node distance=2.15cm]{$\dwpw$}; \node[label] (Guarantee) [below left of=Obligation]{$\dwbw$}; \node[label] (Safety) [below right of=Obligation]{$\dwcw$}; \path (Safety) edge (Obligation); \path (Guarantee) edge (Obligation); \path (Recurrence) edge (Reactivity); \path (Persistence) edge (Reactivity); \path (Obligation) edge (Recurrence); \path (Obligation) edge (Persistence); \end{tikzpicture} } \end{center} \end{minipage} \caption{Expressiveness hierarchy of $\omega$-acceptors~\cite{Wagner75,MP89}.}\label{fig-a-acc-hierearchy} \end{wrapfigure} Concerning the expressive power of various types of acceptors, previous research has established the following results. The weak variants are strictly less expressive than the non-weak variants. Deterministic parity automata are more expressive than deterministic B\"{u}chi and co-B\"{u}chi automata, and the same is true for their weak variants. These results are summarized in Fig.~\ref{fig-a-acc-hierearchy}. In addition, $\nbw=\dpw=\npw$ and $\dwpw=\dcw \cap \dbw$. The class of \emph{regular $\omega$-languages} is the class $\dpw$, and the class of \emph{weak regular $\omega$-languages} is the class $\dwpw$. \subsection{Automata on trees}% \label{subsection-Automata-on-trees} Acceptors on $d$-ary $\omega$-trees are equipped with analogous accepting conditions.
Such an acceptor is given by a tuple $M = (Q,q_0,\delta,\alpha)$, where $Q$ is a finite set of states, $q_0 \in Q$ is the initial state, the transition function $\delta$ is a map from $Q$ and $d$-tuples of symbols to sets of $d$-tuples of states, that is, $\delta: Q \times \Sigma^d \rightarrow 2^{Q^d}$, and the acceptance criterion $\alpha$ specifies a function from $2^Q$ to $\{0,1\}$. We may think of the acceptor as running top down from the root of the tree, at each node nondeterministically choosing a permissible $d$-tuple of states for the $d$ children of the node depending on the state assigned to the node and the $d$-tuple of symbols on its outgoing edges. In other words, for each node, with a state $q$ assigned to it, and $d$ outgoing edges with symbols $\sigma_1,\ldots,\sigma_d$, the acceptor will assign states $q_1,\ldots, q_d$ to the children of the nodes, only if $(q_1,\ldots,q_d)\in\delta(q, (\sigma_1,\ldots,\sigma_d))$. We define a \concept{run} of $M$ on the $\omega$-tree $t$ as a mapping $r$ from the nodes of $T_d$ to $Q$ such that $r(\varepsilon) = q_0$ and for every node $x$, we have $(r(x \cdot 1), \ldots, r(x \cdot d)) \in \delta(r(x), (t(x \cdot 1), \ldots, t(x \cdot d)))$. That is, the root is assigned the initial state and for every node, the ordered $d$-tuple of states assigned to its children is permitted by the transition function. The acceptor $M$ \concept{accepts} the $\omega$-tree $t$ iff there exists a run $r$ of $M$ on $t$ such that for every infinite path $\pi$, we have $\alpha(\infset(r(\pi))) = 1$. That is, there must be at least one run in which, for every infinite path, the set of states that occur infinitely often on the path satisfies the acceptance criterion $\alpha$. The $\omega$-tree language \concept{recognized} by $M$ is the set of all $\omega$-trees accepted by $M$, denoted $\lang{M}$. 
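The run condition just defined can be checked mechanically on any finite prefix of a tree. The following Python sketch encodes nodes as tuples of directions; the encoding and names are ours, for illustration only.

```python
from itertools import product

def run_condition_holds(r, t, delta, q0, d, depth):
    """Check the run condition on all nodes of depth < `depth`:
    (r(x.1), ..., r(x.d)) must lie in delta(r(x), (t(x.1), ..., t(x.d))).
    Nodes are tuples over 1..d; r maps nodes to states, t labels
    non-root nodes; delta: (state, d-tuple of symbols) -> set of
    d-tuples of states."""
    if r[()] != q0:             # the root must carry the initial state
        return False
    for k in range(depth):
        for x in product(range(1, d + 1), repeat=k):
            labels = tuple(t[x + (i,)] for i in range(1, d + 1))
            children = tuple(r[x + (i,)] for i in range(1, d + 1))
            if children not in delta.get((r[x], labels), set()):
                return False
    return True
```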
The specification of the acceptance criterion $\alpha$ is as for $\omega$-word acceptors, yielding B\"{u}chi, co-B\"{u}chi and parity $\omega$-tree acceptors. If the transition function specifies at most one permissible $d$-tuple of states for every element of $Q \times \Sigma^d$, then the acceptor is deterministic. The corresponding classes of $\omega$-tree languages are also denoted by three-letter acronyms, where the last letter is \class{T} for $\omega$-trees. For $\omega$-trees, the class of all regular $\omega$-tree languages is $\npt$ and $\nbt$ is a proper subclass of $\npt$. For any automaton or acceptor $M$, we denote the number of its states by $|M|$. \section{Derived \texorpdfstring{$\omega$}{omega}-tree languages}% \label{section-Derived-omega-tree-languages} Given an $\omega$-tree $t$ we define the $\omega$-word language $\paths(t)$ consisting of the $\omega$-words labeling its infinite paths. That is, we define \[\paths(t) = \{t(\pi) \mid \pi \textrm{ is an infinite path in } T_d\}.\] If $L$ is an $\omega$-word language and $d$ is a positive integer, we define a corresponding language of $d$-ary $\omega$-trees \concept{derived from} $L$ as follows: \[\Trees_d(L) = \{t \in T_d^{\Sigma} \mid \paths(t) \subseteq L\}.\] That is, $\Trees_d(L)$ consists of all $d$-ary $\omega$-trees such that every infinite path in the tree is labeled by an element of $L$. If $\class{C}$ is any class of $\omega$-word languages, $\Trees_d(\class{C})$ denotes the class of all $\omega$-tree languages $\Trees_d(L)$ such that $L \in \class{C}$. \subsection{Derived tree languages}% \label{subsection-derived-tree-languages} Not every regular $d$-ary $\omega$-tree language can be derived in this way from an $\omega$-word language. As an example, consider the language $L_a$ of all binary $\omega$-trees $t$ over $\Sigma = \{a,b\}$ such that there is at least one node labeled with $a$. An NBT acceptor can recognize $L_a$ by guessing and checking a path that leads to an $a$. 
However, if $L_a = \Trees_2(L)$ for some $\omega$-word language $L$, then because there are $\omega$-trees in $L_a$ that have infinite paths labeled exclusively with $b$, we must have $b^{\omega} \in L$, so the binary $\omega$-tree labeled exclusively with $b$ would also be in $\Trees_2(L)$, a contradiction. Given an $\omega$-word acceptor $M = (Q,q_0,\delta,\alpha)$, we may construct a related $\omega$-tree acceptor $M^{T,d} = (Q,q_0,\delta^{T,d},\alpha)$ as follows. For all $q \in Q$ and all $(\sigma_1, \ldots, \sigma_d) \in \Sigma^d$, define \[\delta^{T,d}(q,(\sigma_1, \ldots, \sigma_d)) = \{(q_1, \ldots, q_d) \mid \forall i \in D, q_i \in \delta(q,\sigma_i)\}.\] That is, the acceptor $M^{T,d}$ may continue the computation at a child of a node with any state permitted by $M$, independently chosen. It is tempting to think that $\lang{M^{T,d}} = \Trees_d(\lang{M})$, but this may not be true when $M$ is not deterministic. \begin{lem}% \label{lemma-word-to-tree} Given an $\omega$-word acceptor $M$, we have that $\lang{M^{T,d}} \subseteq \Trees_d(\lang{M})$ with equality if $M$ is deterministic. \end{lem} \begin{proof} Consider the $\omega$-word acceptor $M = (Q,q_0,\delta,\alpha)$. If $t \in \lang{M^{T,d}}$ then there is a run $r$ of $M^{T,d}$ on $t$ satisfying the acceptance criterion $\alpha$ on every infinite path. Thus, $t(\pi) \in \lang{M}$ for every infinite path $\pi$ and $t \in \Trees_d(\lang{M})$. Suppose $t \in \Trees_d(\lang{M})$ and $M$ is deterministic. Then $M^{T,d}$ is also deterministic, and there is a unique run $r$ of $M^{T,d}$ on $t$. For every infinite path $\pi$, $r(\pi)$ is also the unique run of $M$ on the $\omega$-word $t(\pi)$, which satisfies $\alpha$ because $t \in \Trees_d(\lang{M})$. Thus $t \in \lang{M^{T,d}}$. \end{proof} Boker et al.~\cite{BKKS13} give the following example to show that the containment asserted in Lemma~\ref{lemma-word-to-tree} may be proper if $M$ is not deterministic. 
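The lifted transition function $\delta^{T,d}$ is a direct product of independent choices, one per child. A Python sketch with an assumed dictionary encoding of $\delta$:

```python
from itertools import product

def tree_delta(delta, d):
    """Lift a word transition function delta: (state, symbol) -> set of
    states to delta^{T,d}: each child independently takes any state
    permitted by delta on the symbol of its edge."""
    def dT(q, sigmas):
        assert len(sigmas) == d        # one symbol per outgoing edge
        choices = [delta.get((q, s), set()) for s in sigmas]
        return set(product(*choices))  # empty if any child has no choice
    return dT
```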
The $\omega$-language $L$ specified by ${(a+b)}^*b^{\omega}$ can be recognized by the nondeterministic B\"{u}chi acceptor $M$ with two states, $q_0$ and $q_1$, transition function $\delta(q_0,a) = \{q_0\}$, $\delta(q_0,b) = \{q_0, q_1\}$, $\delta(q_1,b) = \{q_1\}$, and accepting state set $\{q_1\}$. Let $d = 2$, specifying binary trees with directions $\{1,2\}$. Then $M^{T,2}$ is a nondeterministic $\omega$-tree acceptor, but the following example shows $\lang{M^{T,2}} \subsetneq \Trees_2(L)$. Consider the binary $\omega$-tree $t$ that labels every node in $1^*2$ with $a$ and every other non-root node with $b$. Clearly $t \in \Trees_2(L)$ because every infinite path in $t$ has at most one $a$, but no run of $M^{T,2}$ can satisfy the acceptance criterion on the path $1^{\omega}$. Suppose $r$ were an accepting run of $M^{T,2}$ on $t$. Then for some $n\geq 0$, $r(1^n)$ would have to be equal to $q_1$. But then such a mapping $r$ would not be a valid run because $1^n2$ is labeled by $a$ and $\delta^{T,2}(q_1,(b,a)) = \emptyset$ because $\delta(q_1,a) = \emptyset$. \subsection{Good for trees}% \label{subsection-Good-for-trees} This phenomenon motivates the following definition. An $\omega$-word acceptor $M$ is \concept{good for trees} iff for any positive integer $d$, $\lang{M^{T,d}} = \Trees_d(\lang{M})$. Nondeterministic $\omega$-word acceptors that are good for trees are equivalent in expressive power to deterministic $\omega$-word acceptors, as stated by the following result of Boker et al. \begin{thmC}[\cite{BKKS13}]% \label{theorem-bkks} Let $L$ be a regular $\omega$-word language and $d \ge 2$. If $\Trees_d(L)$ is recognized by a nondeterministic $\omega$-tree acceptor with acceptance criterion $\alpha$, then $L$ can be recognized by a deterministic $\omega$-word acceptor with acceptance criterion $\alpha$. 
\end{thmC} This theorem generalizes prior results of Kupferman, Safra and Vardi for B\"{u}chi acceptors~\cite{KSV06} and Niwi\'{n}ski and Walukiewicz for parity acceptors~\cite{NW1998}. One consequence of Theorem~\ref{theorem-bkks} is that when $d \ge 2$, nondeterministic $\omega$-word acceptors that are good for trees are not more expressive than the corresponding deterministic $\omega$-word acceptors. Also, for $d \ge 2$, nondeterminism does not increase expressive power over determinism when recognizing $\omega$-tree languages of the form $\Trees_d(L)$. To see this, if $N$ is a nondeterministic $\omega$-tree acceptor with acceptance criterion $\alpha$ recognizing $\Trees_d(L)$ then there is a deterministic $\omega$-word acceptor $M$ with acceptance criterion $\alpha$ such that $\lang{M} = L$, and $M^{T,d}$ is a deterministic $\omega$-tree acceptor with acceptance criterion $\alpha$ that also recognizes $\Trees_d(L)$. However, it is possible that nondeterminism permits acceptors with smaller numbers of states. Kuperberg and Skrzypczak~\cite{KS15} have shown that for an NBT acceptor $M$ recognizing the $\omega$-tree language $\Trees_d(L)$, there is a DBW acceptor with at most $|M|^2$ states recognizing $L$, so nondeterminism gives at most a quadratic savings for B\"{u}chi tree acceptors that are good for trees. However, they have also shown that the blowup in the case of nondeterministic co-B\"{u}chi tree acceptors (and all higher parity conditions) is necessarily exponential in the worst case. \section{Learning tree languages}% \label{section-Learning-tree-languages} We address the problem of learning derived $\omega$-tree languages by giving a polynomial time reduction of the problem of learning $\Trees_d(\class{C})$ to the problem of learning $\class{C}$. The paradigm of learning we consider is exact learning with membership queries and equivalence queries. 
Maler and Pnueli~\cite{Maler1995} have given a polynomial time algorithm to learn the class of weak regular $\omega$-languages using membership and equivalence queries. Their algorithm and the reduction we give in Theorem~\ref{theorem-reduction-general} prove the following theorem. \begin{thm}% \label{theorem-learn-trees} For every positive integer $d$, there is a polynomial time algorithm to learn $\Trees_d(\dwpw)$ using membership and equivalence queries. \end{thm} \subsection{Representing examples}% \label{subsection-Representing-examples} For a learning algorithm, the examples tested by membership queries and the counterexamples returned by equivalence queries need to be finitely represented. For learning regular $\omega$-word languages, it suffices to consider \concept{ultimately periodic $\omega$-words}, that is, words of the form ${u(v)}^{\omega}$ for finite words $u \in \Sigma^*$ and $v \in \Sigma^+$. If two regular $\omega$-word languages agree on all the ultimately periodic $\omega$-words, then they are equal. The pair $(u,v)$ of finite words represents the ultimately periodic word ${u(v)}^{\omega}$. The corresponding class of examples in the case of regular $\omega$-tree languages is the class of \concept{regular $\omega$-trees}. These are $\omega$-trees that have a finite number of nonisomorphic complete infinite subtrees. We represent a regular $\omega$-tree $t$ by a \concept{regular $\omega$-tree automaton} $A_t = (Q,q_0,\delta,\tau)$, where $(Q,q_0,\delta)$ is a complete deterministic finite state word automaton over the input alphabet $D = \{1,\ldots,d\}$ and $\tau$ is an output function that labels each transition with an element of $\Sigma$. That is, $\tau: Q \times D \rightarrow \Sigma$. The regular $\omega$-tree $t$ represented by such an automaton $A_t$ is defined as follows. For $x \in D^+$, let $i \in D$ be the last symbol of $x$ and let $x'$ be the rest of $x$, so that $x = x' \cdot i$. 
Then define $t(x) = \tau(\delta(q_0,x'),i)$, that is, $t(x)$ is the label assigned by $\tau$ to the last transition in the unique run of $A_t$ on $x$. Rabin~\cite{Rabin1972} proved that if two regular $\omega$-tree languages agree on all the regular $\omega$-trees then they are equal. Thus, ultimately periodic $\omega$-words and regular $\omega$-trees are restricted classes of examples that are nonetheless sufficient to determine the behavior of regular $\omega$-word and $\omega$-tree acceptors on all $\omega$-words and $\omega$-trees, respectively. \subsection{Types of queries for learning}% \label{subsection-Types-of-queries-for-learning} We consider the situation in which a learning algorithm $\A$ is attempting to learn an initially unknown target language $L$ of $\omega$-words from a known class $\class{C} \subseteq \dbw$. The information that $\A$ gets about $L$ is in the form of answers to queries of specific types~\cite{Angluin:1988}. The learning algorithm will use membership and equivalence queries; restricted and unrestricted subset queries will additionally be considered in the proofs. In a \concept{membership query about} $L$, abbreviated $\MQ$, the algorithm $\A$ specifies an example as a pair of finite words $(u,v)$ and receives the answer ``yes'' if ${u(v)}^{\omega} \in L$ and ``no'' otherwise. In an \concept{equivalence query about} $L$, abbreviated $\EQ$, the algorithm $\A$ specifies a hypothesis language $\lang{M}$ as a DBW acceptor $M$, and receives either the answer ``yes'' if $L = \lang{M}$, or ``no'' and a counterexample, that is, a pair of finite words $(u,v)$ such that ${u(v)}^{\omega} \in (L \oplus \lang{M})$, where $B \oplus C$ denotes the symmetric difference of sets $B$ and $C$. In a \concept{restricted subset query about} $L$, abbreviated $\RSQ$, the algorithm $\A$ specifies a hypothesis language $\lang{M}$ as a DBW acceptor $M$, and receives the answer ``yes'' if $\lang{M} \subseteq L$ and ``no'' otherwise.
An \concept{unrestricted subset query about} $L$, abbreviated $\USQ$, is like a restricted subset query, except that in addition to the answer of ``no'', a counterexample $(u,v)$ is provided such that ${u(v)}^{\omega} \in (\lang{M} \setminus L)$. A learning algorithm $\A$ using specific types of queries \concept{exactly learns} a class $\class{C}$ of $\omega$-word languages iff for every $L \in \class{C}$, the algorithm makes a finite number of queries of the specified types about $L$ and eventually halts and outputs a DBW acceptor $M$ such that $\lang{M} = L$. The algorithm runs in polynomial time iff there is a fixed polynomial $p$ such that for every $L \in \class{C}$, at every point the number of steps used by $\A$ is bounded by $p(n,m)$, where $n$ is the size of the smallest DBW acceptor recognizing $L$, and $m$ is the maximum length of any counterexample $\A$ has received up to that point. The case of a learning algorithm for $\omega$-tree languages is analogous, except that the examples and counterexamples are given by regular $\omega$-tree automata, and the hypotheses provided to equivalence or subset queries are represented by DBT acceptors. We also consider cases in which the inputs to equivalence or subset queries may be NBW or NBT acceptors. \section{Framework of a reduction}% \label{section-Framework-of-a-reduction} Suppose $\A$ is a learning algorithm that uses membership and equivalence queries and exactly learns a class $\class{C} \subseteq \dbw$. We shall describe an algorithm $\A_{{\Trees}}$ that uses membership and equivalence queries and exactly learns the derived class $\Trees_d(\class{C})$ of $\omega$-tree languages. Note that $\Trees_d(\class{C}) \subseteq \dbt$. The algorithm $\A_{{\Trees}}$ with target concept $\Trees_d(L)$ simulates algorithm $\A$ with target concept $L$. 
In order to do so, $\A_{{\Trees}}$ must correctly answer membership and equivalence queries from $\A$ about $L$ by making one or more membership and/or equivalence queries of its own about $\Trees_d(L)$. Before describing the algorithm $\A_{{\Trees}}$ we establish some basic results about regular $\omega$-trees. \subsection{Testing acceptance of a regular \texorpdfstring{$\omega$}{omega}-tree}% \label{section-Testing-acceptance-of-t} \noindent We describe a polynomial time algorithm $\Acc(A_t,M)$ that takes as input a regular $\omega$-tree $t$ represented by a regular $\omega$-tree automaton $A_t =(Q_1, q_{0,1}, \delta_1, \tau_1)$ and a DBW acceptor $M = (Q_2, q_{0,2}, \delta_2, F_2)$ and determines whether or not $M^{T,d}$ accepts $t$. If not, it also outputs a pair $(u,v)$ of finite words such that ${u(v)}^{\omega} \in (\paths(t) \setminus \lang{M})$. \begin{algorithm}% \scriptsize \caption{: $\Acc(A_t,M)$}% \label{algorithm-Acc} \begin{algorithmic} \REQUIRE {$A_t = (Q_1,q_{0,1},\delta_1,\tau_1)$ representing $t$;\\ $M = (Q_2, q_{0,2}, \delta_2, F_2)$, a complete DBW acceptor} \ENSURE {Return ``yes'' if $M^{T,d}$ accepts $t$\\ else return ``no'' and $(u,v)$ with ${u(v)}^{\omega} \in (\paths(t) \setminus \lang{M})$.} \vspace{0.2cm} \STATE let $Q = Q_1 \times Q_2$ \STATE let $q_0 = (q_{0,1},q_{0,2})$ \FORALL {$(q_1, q_2) \in Q$ and $i \in D$} \STATE let $\delta((q_1,q_2),i) = (\delta_1(q_1,i),\delta_2(q_2,\tau_1(q_1,i)))$ \ENDFOR \STATE let $F = \{(q_1,q_2) \mid q_2 \in F_2\}$ \STATE let $M' = (Q,q_0,\delta,F)$ \IF {$\lang{M'} = D^{\omega}$} \RETURN ``yes'' \ELSE \STATE find $x(y)^{\omega} \in (D^{\omega} \setminus \lang{M'})$ \STATE let ${u(v)}^{\omega} = t(x(y)^{\omega})$ \RETURN ``no'' and $(u,v)$ \ENDIF \end{algorithmic} \end{algorithm} We may assume $M$ is complete by adding (if necessary) a new non-accepting sink state and directing all undefined transitions to the new state. 
We construct a DBW acceptor $M'$ over the alphabet $D = \{1,\ldots,d\}$ by combining $A_t$ and $M$ as follows. The states are $Q = Q_1 \times Q_2$, the initial state is $q_0 = (q_{0,1},q_{0,2})$, the set of accepting states is $F = \{(q_1,q_2) \mid q_2 \in F_2\}$, and the transition function $\delta$ is defined by $\delta((q_1,q_2),i) = (\delta_1(q_1,i), \delta_2(q_2,\tau_1(q_1,i)))$ for all $(q_1,q_2) \in Q$ and $i \in D$. For each transition, the output of the regular $\omega$-tree automaton $A_t$ is the input of the DBW acceptor $M$. An infinite path $\pi$ in $t$ corresponds to an $\omega$-word $z \in D^{\omega}$, giving the sequence of directions from the root. The unique run of $M'$ on $z$ traverses a sequence of states; if we project out the first component, we get the run of $A_t$ on $z$, and if we project out the second component, we get the run of $M$ on $t(\pi)$. Then $M^{T,d}$ accepts $t$ iff $M$ accepts $t(\pi)$ for every infinite path $\pi$, which is true iff $\lang{M'} = D^{\omega}$. This in turn is true iff every nonempty accessible recurrent set of states in $M'$ contains at least one element of $F$. A set $S$ of states is \concept{recurrent} iff for all $q, q' \in S$, there is a nonempty finite word $v$ such that $\delta(q,v) = q'$ and for every prefix $u$ of $v$ we have $\delta(q,u)\in S$. A set $S$ of states is \concept{accessible} iff for every $q \in S$ there exists a finite word $u$ such that $\delta(q_0,u) = q$. The algorithm to test whether $M^{T,d}$ accepts $t$ first removes from the transition graph of $M'$ all states that are not accessible. It then removes all states in $F$ and tests whether there is any cycle in the remaining graph. If not, then $M^{T,d}$ accepts $t$. Otherwise, there is a state $q$ in $Q$ and finite words $x \in D^*$ and $y \in D^+$ such that $\delta(q_0,x) = q$ and $\delta(q,y) = q$ and none of the states traversed from $q$ to $q$ along the path $y$ are in $F$. 
Thus, ${x(y)}^{\omega}$ is an ultimately periodic path $\pi$ that does not visit $F$ infinitely often, and letting ${u(v)}^{\omega}$ be $t({x(y)}^{\omega})$, we have ${u(v)}^{\omega} \in (\paths(t) \setminus \lang{M})$, so the pair $(u,v)$ is returned in this case. The required graph operations are standard and can be accomplished in time polynomial in $|M|$ and $|A_t|$. \subsection{Representing a language as paths of a tree}% \label{subsection-Representing-a-language-as-paths-of-a-tree} When the algorithm $\A_{{\Trees}}$ makes a membership query about $\Trees_d(L)$ with a regular $\omega$-tree $t$, the answer is ``yes'' if $\paths(t) \subseteq L$ and ``no'' otherwise. Thus, this query has the effect of a restricted subset query about $L$ with $\paths(t)$. However, this does not give us restricted subset queries for arbitrary $\dbw$ languages. Next, we examine the close relationship between languages of the form $\paths(t)$ and safety languages. An $\omega$-word language $L$ is a \concept{safety language} iff $L$ is a regular $\omega$-word language and for every $\omega$-word $w$ not in $L$, there exists a finite prefix $x$ of $w$ such that no $\omega$-word with prefix $x$ is in $L$. A language is safety iff it is in the class $\dwcw$. An alternative characterization is that there is an NBW acceptor $M = (Q,q_0,\delta,Q)$, all of whose states are accepting, such that $\lang{M} = L$. In this case, the acceptor is typically not complete (otherwise it recognizes $\Sigma^{\omega}$). An example of a language in $\dwpw$ that is not a safety language is $a^* b^* {(a)}^{\omega}$. Although $b^{\omega}$ is not in the language, every finite prefix $b^k$ is a prefix of some $\omega$-word in the language. \begin{lem}% \label{lemma-tree-nbw} If $A_t$ is a regular $\omega$-tree automaton representing an $\omega$-tree $t$, then $\paths(t)$ is a safety language recognizable by an NBW acceptor $M$ with $|M| = |A_t|$. 
\end{lem} \begin{proof} If $A_t = (Q,q_0,\delta,\tau)$, then we define $M = (Q,q_0,\delta',Q)$ where \[\delta'(q,\sigma) = \{r \in Q \mid (\exists i \in D) (\delta(q,i) = r \wedge \tau(q,i) = \sigma)\}\] for all $q \in Q$ and $\sigma \in \Sigma$. That is, the $M$ transition on $q$ and $\sigma$ is defined to be all states reachable from $q$ by a transition in $A_t$ labeled with $\sigma$. Note that all states of $M$ are accepting. If $w \in \paths(t)$, then there is a run $r_0, r_1, \ldots$ of $A_t$ whose transitions are labeled by $w$, and this is a run of $M$ on $w$, so $w \in \lang{M}$. Conversely, if $w \in \lang{M}$, then there is some run $r_0, r_1, \ldots$ of $M$ on $w$, and this is a run of $A_t$ whose transitions are labeled with $w$, and therefore $w \in \paths(t)$. \end{proof} For the converse, representing a safety language as the paths of a regular $\omega$-tree, we require a lower bound on $d$, the arity of the tree. If $M = (Q,q_0,\delta,F)$ is an NBW acceptor and $q \in Q$, we define the set of transitions out of $q$ to be $\transitions(q) = \{(\sigma,r) \mid \sigma \in \Sigma \wedge r \in \delta(q,\sigma)\}$. We define the \concept{out-degree} of $M$ to be the maximum over $q \in Q$ of the cardinality of $\transitions(q)$. \begin{lem}% \label{lemma-nbw-tree} Let $L$ be a safety language recognized by NBW acceptor $M = (Q, q_0, \delta, Q)$. Suppose the out-degree of $M$ is at most $d$. Then there is a $d$-ary regular $\omega$-tree $t$ such that $\paths(t) = L$, and $t$ is representable by $A_t$ with $|A_t| = |M|$. \end{lem} \begin{proof} We may assume that every state of $M$ is accessible and has at least one transition defined. We define $A_t = (Q,q_0,\delta_t,\tau)$ over the alphabet $D = \{1,\ldots,d\}$ as follows. For $q \in Q$, choose a surjective mapping $f_q$ from $D$ to $\transitions(q)$. Then for $q \in Q$ and $i \in D$, let $(\sigma,r) = f_q(i)$ and define $\delta_t(q,i) = r$ and $\tau(q,i) = \sigma$. 
If $w \in L$, then there is a run $r_0, r_1, \ldots$ of $M$ on $w$, and there is an infinite path in $A_t$ traversing the same states in which the labels are precisely $w$, so $w \in \paths(t)$. Conversely, if $w \in \paths(t)$, then there is an infinite path $\pi$ such that $t(\pi) = w$, and the sequence of states of $A_t$ traversed by $w$ yields a run of $M$ on $w$, so $w \in L$.
\end{proof}
The NBW acceptor in the proof of Lemma~\ref{lemma-tree-nbw} can be determinized via the subset construction to give a DBW acceptor of size at most $2^{|A_t|}$ recognizing the same language. In the worst case this exponential blow-up in converting a regular $\omega$-tree automaton to a DBW acceptor is necessary, as shown by the following lemma.
\begin{lem}\label{lemma-exp-blowup-nbw-dbw}
There exists a family of regular $\omega$-trees $t_1, t_2, \ldots$ such that $t_n$ can be represented by a regular $\omega$-tree automaton of size $n+2$, but the smallest DBW acceptor recognizing $\paths(t_n)$ has size at least $2^n$.
\end{lem}
\begin{proof}
Let $\Sigma = \{a,b,c\}$ and let $L_n$ be ${(a + b + (a{(a+b)}^n c))}^{\omega}$. This is a safety language: $w \in L_n$ iff every occurrence of $c$ in $w$ is preceded by a word of the form $a{(a+b)}^n$. There is an NBW acceptor $M_n$ with $n+2$ states recognizing $L_n$. The states are the nonnegative integers in $[0,n+1]$, with $0$ the initial state, $\delta(0,a) = \{0,1\}$, $\delta(0,b) = \{0\}$, $\delta(i,a) = \delta(i,b) = \{i+1\}$ for $1 \le i \le n$, and $\delta(n+1,c) = \{0\}$. By Lemma~\ref{lemma-nbw-tree}, there is a ternary regular $\omega$-tree $t_n$ such that $\paths(t_n) = L_n$ and $t_n$ is represented by a regular $\omega$-tree automaton with $n+2$ states. However, any DBW acceptor recognizing $L_n$ must have enough states to distinguish all $2^n$ strings in ${(a+b)}^n$ in order to check the safety condition: if $x, x' \in {(a+b)}^n$ differ at position $i$, then for $y \in {(a+b)}^{i}$ the continuation $y\,c\,a^{\omega}$ is permitted after exactly one of them.
\end{proof} If $t$ is a $d$-ary regular $\omega$-tree represented by the regular $\omega$-tree automaton $A_t$, then $\acceptor(A_t)$ denotes the NBW acceptor $M$ recognizing $\paths(t)$ constructed from $A_t$ in the proof of Lemma~\ref{lemma-tree-nbw}. Note that the out-degree of $\acceptor(A_t)$ is at most $d$. If $M$ is an NBW acceptor such that $\lang{M}$ is a safety language and the out-degree of $M$ is at most $d$, then $\tree_d(M)$ denotes the regular $\omega$-tree automaton $A_t$ constructed from $M$ in the proof of Lemma~\ref{lemma-nbw-tree}. We also use the notation $\tree_d(L)$ if $L$ is a safety language and the implied acceptor for $L$ is clear. For example, given finite words $u \in \Sigma^*$ and $v \in \Sigma^+$, the singleton set containing ${u(v)}^{\omega}$ is a safety language recognized by a DBW of out-degree $1$ and size linear in $|u|+|v|$. Then $\tree_d({u(v)}^{\omega})$ represents the $d$-ary tree all of whose infinite paths are labeled with ${u(v)}^{\omega}$. \section{The algorithm \texorpdfstring{$\A_{{\Trees}}$}{A-trees}}% \label{section-The-algorithm-ATrees} We now describe the algorithm $\A_{{\Trees}}$, which learns $\Trees_d(L)$ by simulating the algorithm $\A$ and answering the membership and equivalence queries of $\A$ about $L$. It is summarized in Algorithm~\ref{algorithm-ATrees}, and some of the cases are illustrated in an example presented in Appendix~\ref{app:A-tree-example}. 
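To make $\tree_d({u(v)}^{\omega})$ concrete, here is a toy Python sketch (the tuple encoding of the $\omega$-tree automaton is an assumption for illustration): the states walk through $u$ and then cycle through $v$, and every direction takes the same transition and emits the same label, so all infinite paths carry ${u(v)}^{\omega}$.

```python
def tree_d_upword(u, v):
    """Toy sketch of tree_d(u(v)^omega): a regular omega-tree automaton
    whose d-ary tree labels every infinite path with u(v)^omega.
    Transitions and outputs ignore the direction i, so all paths agree."""
    assert len(v) >= 1
    n, m = len(u), len(v)
    w = u + v

    def delta(q, i):                 # next state, same for every direction
        return q + 1 if q + 1 < n + m else n

    def tau(q, i):                   # output label, same for every direction
        return w[q]

    return list(range(n + m)), 0, delta, tau
```

Following any direction sequence from the initial state yields the labels of $u$ followed by $v$ repeated forever, so $\paths(t) = \{{u(v)}^{\omega}\}$ as required.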
\begin{algorithm}[t]%
\footnotesize
\caption{: $\A_{{\Trees}}$}%
\label{algorithm-ATrees}
\begin{algorithmic}
\REQUIRE { Learning algorithm $\A$ for $\class{C}$;\\ $\MQ$ and $\EQ$ access to $\Trees_d(L)$ for $L \in \class{C}$}
\ENSURE {Acceptor $M^{T,d}$ such that $\lang{M^{T,d}} = \Trees_d(L)$}
\vspace{0.2cm}
\WHILE{$\A$ has not halted}
\IF {next step of $\A$ is not a query}
\STATE {simulate next step of $\A$}
\ELSIF {$\A$ asks $\MQ(u,v)$ about $L$}
\STATE {answer $\A$ with $\MQ(\tree_d({u(v)}^{\omega}))$ about $\Trees_d(L)$}
\ELSIF {$\A$ asks $\EQ(M)$ about $L$}
\STATE {ask $\EQ(M^{T,d})$ about $\Trees_d(L)$}
\IF {$\EQ(M^{T,d})$ answer is ``yes''}
\RETURN {$M^{T,d}$ and halt}
\ELSE [$\EQ(M^{T,d})$ answer is counterexample tree $t$ given by $A_t$]
\IF {$\Acc(A_t,M)$ returns ``no'' with value $(u,v)$}
\STATE {answer $\A$ with $(u,v)$}
\ELSE [$\Acc(A_t,M)$ returns ``yes'']
\STATE {let $M' = \acceptor(A_t)$}
\FORALL{accepting states $q$ of $M'$}
\STATE{simulate in parallel $\Findctrex(M',q)$}
\STATE{terminate all computations and answer $\A$ with the first $(u,v)$ returned}
\ENDFOR
\ENDIF
\ENDIF
\ENDIF
\ENDWHILE [$\A$ halts with output $M$]
\RETURN {$M^{T,d}$ and halt}
\end{algorithmic}
\end{algorithm}
If $\A$ asks a membership query with $(u,v)$, then $\A_{{\Trees}}$ constructs the regular $\omega$-tree automaton $\tree_d({u(v)}^{\omega})$ representing the $d$-ary regular $\omega$-tree all of whose infinite paths are labeled ${u(v)}^{\omega}$, and makes a membership query with $\tree_d({u(v)}^{\omega})$. Because ${u(v)}^{\omega} \in L$ iff the tree represented by $\tree_d({u(v)}^{\omega})$ is in $\Trees_d(L)$, the answer to the query about $\tree_d({u(v)}^{\omega})$ is simply given to $\A$ as the answer to its membership query about $(u,v)$.
For an equivalence query from $\A$ specified by a DBW acceptor $M$, the algorithm $\A_{{\Trees}}$ constructs the corresponding DBT acceptor $M^{T,d}$, which recognizes $\Trees_d(\lang{M})$, and makes an equivalence query with $M^{T,d}$. If the answer is ``yes'', the algorithm $\A_{{\Trees}}$ has succeeded in learning the target $\omega$-tree language $\Trees_d(L)$ and outputs $M^{T,d}$ and halts. Otherwise, the counterexample returned is a regular $\omega$-tree $t$ in $\lang{M^{T,d}} \oplus \Trees_d(L)$, represented by a regular $\omega$-tree automaton $A_t$. A call to the algorithm $\Acc(A_t,M)$ determines whether $M^{T,d}$ accepts $t$. If $M^{T,d}$ rejects $t$, then $t \in \Trees_d(L)$ and $t$ is a \concept{positive counterexample}. If $M^{T,d}$ accepts $t$, then $t \not\in \Trees_d(L)$ and $t$ is a \concept{negative counterexample}. We next consider these two cases. If $t$ is a positive counterexample then we know that $t \in \Trees_d(L)$ and therefore $\paths(t) \subseteq L$. Because $t \notin \lang{M^{T,d}}$, the acceptor $M$ must reject at least one infinite path in $t$. In this case, the algorithm $\Acc(A_t,M)$ returns a pair of finite words $(u,v)$ such that ${u(v)}^{\omega} \in (\paths(t) \setminus \lang{M})$, and therefore ${u(v)}^{\omega} \in (L \setminus \lang{M})$. The algorithm $\A_{{\Trees}}$ returns the positive counterexample $(u,v)$ to $\A$ in response to its equivalence query with $M$. If $t$ is a negative counterexample, that is, $t \in (\lang{M^{T,d}} \setminus \Trees_d(L))$, then we know that $\paths(t) \subseteq \lang{M}$, but at least one element of $\paths(t)$ is not in $L$, so $(\lang{M} \setminus L) \neq \emptyset$. Ideally, we would like to extract an ultimately periodic $\omega$-word ${u(v)}^{\omega} \in (\paths(t) \setminus L)$ and provide $(u,v)$ to $\A$ as a negative counterexample in response to its equivalence query with $M$. 
If we could make an \emph{unrestricted subset query} with $\paths(t)$ about $L$, then the counterexample returned would be precisely what we need. As noted previously, if $t$ is any regular $\omega$-tree then we can simulate a restricted subset query with $\paths(t)$ about $L$ by making a membership query with $t$ about $\Trees_d(L)$, because $\paths(t) \subseteq L$ iff $t \in \Trees_d(L)$. In order to make use of this, we next show how to use restricted subset queries about $L$ to implement an unrestricted subset query about $L$. \subsection{Restricted subset queries}% \label{subsection-Restricted-subset-queries} To establish basic techniques, we show how to reduce unrestricted subset queries to restricted subset queries for nondeterministic or deterministic finite acceptors over finite words. Suppose $L \subseteq \Sigma^*$ and we may ask restricted subset queries about $L$. In such a query, the input is a nondeterministic (resp., deterministic) finite acceptor $M$, and the answer is ``yes'' if $\lang{M}$ is a subset of $L$, and ``no'' otherwise. If the answer is ``no'', we show how to find a shortest counterexample $u \in (\lang{M} \setminus L)$ in time polynomial in $|M|$ and $|u|$. \begin{thm}% \label{thm-nfa-dfa-subset-reduction} There is an algorithm $\R^*$ which takes as input an NFW (resp., DFW) $M$, and has restricted subset query access to a language $L$ with NFW (resp., DFW) acceptors as inputs, that correctly answers the unrestricted subset query with $M$ about $L$. Additionally, if $L$ is recognized by a DFW $T_L$, then $\R^*(M)$ runs in time bounded by a polynomial in $|M|$ and $|T_L|$.\footnote{The cardinality of $\Sigma$ is treated as a constant.} \end{thm} The idea of the proof is to first establish the minimal length $\ell$ of a counterexample, and then try to extend the prefix $\epsilon$ letter by letter until obtaining a full length minimal counterexample. 
Note that trying to establish a prefix of a counterexample letter by letter, without first obtaining a bound, may not terminate. For instance, if $L=\Sigma^* \setminus a^*b$, one can establish the sequence of prefixes $\epsilon, a, aa, aaa, \ldots$ and never reach a counterexample. To prove Theorem~\ref{thm-nfa-dfa-subset-reduction} we first construct an acceptor $M_{\ell,v}$ for $\lang{M}[\ell,v]$, the length and prefix restricted version of $\lang{M}$, given $M$, $\ell$, and $v$ as inputs.
\begin{lem}%
\label{lemma-length-and-prefix-restriction}
There is a polynomial time algorithm to construct an acceptor $M_{\ell,v}$ for $\lang{M}[\ell,v]$ given an NFW acceptor $M$, a nonnegative integer $\ell$, and a finite word $v$, such that
\begin{enumerate}
\item $M_{\ell,v}$ has at most one accepting state, which has no out-transitions,
\item the out-degree of $M_{\ell,v}$ is at most the out-degree of $M$,
\item $M_{\ell,v}$ is deterministic if $M$ is deterministic.
\end{enumerate}
\end{lem}
\begin{proof}
If $\ell < |v|$, then $\lang{M}[\ell,v] = \emptyset$, and the output $M_{\ell,v}$ is a one-state acceptor with no accepting states.
Otherwise, assume $v = \sigma_1 \sigma_2 \cdots \sigma_k$ and construct $M'$ to be the deterministic finite acceptor for $v \cdot \Sigma^{\ell - |v|}$ with states $0, 1, \ldots, \ell$ where $0$ is the initial state, $\ell$ is the final state, and the transitions are $\delta(i,\sigma_{i+1}) = i+1$ for $0 \le i < k$ and $\delta(i,\sigma) = i+1$ for $k \le i < \ell$ and $\sigma \in \Sigma$. Then $M_{\ell,v}$ is obtained by a standard product construction of $M$ and $M'$ for the intersection $\lang{M} \cap \lang{M'}$, with the observation that no accepting state in the product has any out-transitions defined, so they may all be identified. It is straightforward to verify the required properties of $M_{\ell,v}$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm-nfa-dfa-subset-reduction}]
For input $M$, let $M_{\ell,v}$ be the finite acceptor constructed by the algorithm of Lemma~\ref{lemma-length-and-prefix-restriction} to recognize the length and prefix restricted language $\lang{M}[\ell,v]$. For $\ell = 0, 1, 2, \ldots$, ask a restricted subset query with $M_{\ell,\varepsilon}$, until the first query is answered ``no''. At this point, $\ell$ is the shortest length of a counterexample in $(\lang{M} \setminus L)$. Then a counterexample $u$ of length $\ell$ is constructed symbol by symbol. Assume we have found a prefix $u'$ of a counterexample of length $\ell$ in $(\lang{M} \setminus L)$, with $|u'| < \ell$. For each symbol $\sigma \in \Sigma$ we ask a restricted subset query with $M_{\ell,u' \sigma}$, until the first query is answered ``no''. At this point, $u'$ is extended to $u' \sigma$. If the length of $u' \sigma$ is now $\ell$, then $u = u' \sigma$ is the desired counterexample; otherwise, we continue extending $u'$. Note that if the input $M$ is deterministic, then all of the restricted subset queries are made with deterministic finite acceptors.
If $L$ is recognized by a deterministic finite acceptor $T_L$, then the value of $\ell$ is bounded by $|M| \cdot |T_L|$, and the algorithm runs in time bounded by a polynomial in $|M|$ and $|T_L|$.
\end{proof}
We now turn to the $\omega$-word case.
\begin{thm}%
\label{theorem-restricted-subset-nbw-dbw}
There is an algorithm $\R^{\omega}$ that takes as input an acceptor $M$, has restricted subset query access to $L$ (a language recognized by a DBW acceptor $T_L$), and correctly answers the unrestricted subset query with $M$ about $L$. The algorithm $\R^{\omega}(M)$ runs in time bounded by a polynomial in $|M|$ and $|T_L|$. If $M$ is a DBW acceptor, then all the restricted subset queries will also be made with DBW acceptors.
\end{thm}
\begin{algorithm}[h]%
\footnotesize
\caption{: $\R^{\omega}(M)$, implementing $\USQ(M)$}%
\label{algorithm-Romega}
\begin{algorithmic}
\REQUIRE {$\RSQ$ access to $L$;\\ $M = (Q,q_0,\delta,F)$, an NBW acceptor}
\ENSURE {``yes'' if $\lang{M} \subseteq L$, else ``no'' and $(u,v)$ s.t. ${u(v)}^{\omega} \in (\lang{M} \setminus L)$}
\vspace{0.2cm}
\IF {$\RSQ(M) =$ ``yes''}
\RETURN ``yes''
\ELSE
\STATE find $q \in F$ such that $\RSQ(M_q) =$ ``no''
\RETURN {``no'' and $\Findctrex(M,q)$}
\ENDIF
\end{algorithmic}
\end{algorithm}
For the sake of generality, the proof considers subset queries with NBW acceptors. The procedure $\R^{\omega}(M)$ takes as input an NBW acceptor $M$, and has restricted subset query access (with NBW acceptors as inputs) to $L$; it is summarized in Algorithm~\ref{algorithm-Romega}. It first asks a restricted subset query with $M$ about $L$, returning the answer ``yes'' if its query is answered ``yes''. Otherwise, for each $q \in F$, it constructs the acceptor $M_q = (Q,q_0,\delta,\{q\})$ with the single accepting state $q$ and asks a restricted subset query with $M_q$ about $L$, until the first query is answered ``no''.
There will be at least one such query answered ``no'' because any element of $(\lang{M} \setminus L)$ must visit at least one accepting state $q$ of $M$ infinitely many times, and will therefore be in $\lang{M_q}$. The procedure $\R^{\omega}(M)$ then calls the procedure $\Findctrex(M,q)$ to find a counterexample to return --- i.e., a pair $(u,v)$ such that ${u(v)}^{\omega} \in (\lang{M_q} \setminus L)$, and thus also ${u(v)}^{\omega} \in (\lang{M} \setminus L)$. \subsection{Producing a counterexample}% \label{subsection-producing-a-counterexample} The first challenge encountered in producing a counterexample, in comparison to the finite word case, is that one needs to work out both the period and the prefix of the counterexample to be found, and the two are correlated. Define $L_{q_0,q}$ to be the set of finite words that lead from the initial state $q_0$ to the state $q$ in $M$, and define $L_{q,q}$ to be the set of nonempty finite words that lead from $q$ back to $q$ in $M$. Because the language $L_{q_0,q} \cdot {(L_{q,q})}^{\omega}$ is exactly the set of strings recognized by $M_q$, we know that $L_{q_0,q} \cdot {(L_{q,q})}^{\omega} \setminus L \neq \emptyset$. The procedure $\Findctrex(M,q)$ first finds a suitable period, corresponding to a bounded size of a prefix yet to be found, and then finds a prefix of that size in a similar manner to the finite word case. An example is shown in Appendix~\ref{app:findctrex-example}. 
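The symbol-by-symbol extension used both in the finite-word reduction and in $\Findprefix$ can be sketched as runnable Python. The encodings below are toy assumptions for illustration only: the acceptor and the target language are plain predicates, and the restricted subset query is simulated by enumerating the (finite) length and prefix restricted candidate set.

```python
from itertools import product

SIGMA = "ab"  # assumed toy alphabet

def words(length, prefix=""):
    """All words of the given length over SIGMA with the given prefix."""
    for tail in product(SIGMA, repeat=length - len(prefix)):
        yield prefix + "".join(tail)

def rsq(candidates, in_L):
    """Simulated restricted subset query: is every candidate in L?"""
    return all(in_L(w) for w in candidates)

def shortest_counterexample(in_M, in_L, max_len=10):
    # step 1: find the least length l admitting a counterexample
    for l in range(max_len + 1):
        layer = [w for w in words(l) if in_M(w)]
        if layer and not rsq(layer, in_L):
            break
    else:
        return None  # no counterexample up to max_len
    # step 2: grow the counterexample symbol by symbol
    u = ""
    while len(u) < l:
        for s in SIGMA:
            layer = [w for w in words(l, u + s) if in_M(w)]
            if layer and not rsq(layer, in_L):
                u += s
                break
    return u
```

For instance, with $\lang{M} = \Sigma^*$ and $L$ the set of words avoiding the factor $bb$, the search settles on length $2$ and returns $bb$.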
\begin{algorithm}% \footnotesize \caption{: $\Findctrex(M,q)$}% \label{algorithm-Findctrex} \begin{algorithmic} \REQUIRE {$\RSQ$ access to $L$;\\ $M = (Q,q_0,\delta,F)$, an NBW acceptor;\\ $q \in F$;\\ $L_{q_0,q} \cdot (L_{q,q})^{\omega} \setminus L \neq \emptyset$} \ENSURE {$(u,v)$ such that ${u(v)}^{\omega} \in (L_{q_0,q} \cdot (L_{q,q})^{\omega} \setminus L)$} \vspace{0.2cm} \STATE let $v = \Findperiod(M,q)$ \STATE let $u = \Findprefix(M,q,v)$ \RETURN $(u,v)$ \end{algorithmic} \end{algorithm} Since finding the period is more challenging than the prefix, we explain the procedure $\Findprefix(M,q,v)$ first. The procedure $\Findprefix(M,q,v)$, summarized in Algorithm~\ref{algorithm-Findprefix}, finds a prefix word $u$ given a period word $v$ which loops on state $q$ and is guaranteed to be a period of a valid counterexample. It first finds a length $k$ such that there exists $u\in L_{q_0,q}$ of length $k$ such that $uv^\omega \notin L$. Then it finds such a word $u$ symbol by symbol. Note that it uses length and prefix restricted versions of $L_{q_0,q}$. \begin{algorithm}% \footnotesize \caption{: $\Findprefix(M,q,v)$}% \label{algorithm-Findprefix} \begin{algorithmic} \REQUIRE {$\RSQ$ access to $L$;\\ $M = (Q,q_0,\delta,F)$, an NBW acceptor;\\ $q \in F$;\\ $v \in L_{q,q}$;\\ $L_{q_0,q} \cdot (v)^{\omega} \setminus L \neq \emptyset$} \ENSURE {$u \in L_{q_0,q}$ such that ${u(v)}^{\omega} \in (L_{q_0,q} \cdot (v)^{\omega} \setminus L)$} \vspace{0.2cm} \STATE search for nonnegative integer $k$ such that $\RSQ(L_{q_0,q}[k] \cdot (v)^{\omega}) =$ ``no'' \STATE let $u = \varepsilon$ \WHILE {$|u| < k$} \STATE find $\sigma \in \Sigma$ such that $\RSQ(L_{q_0,q}[k,u\sigma] \cdot (v)^{\omega}) =$ ``no'' \STATE set $u = u \cdot \sigma$ \ENDWHILE \RETURN $u$ \end{algorithmic} \end{algorithm} Finding the periodic part is much more challenging. 
Indeed, even if one knows that there is a period of the form ${(a\Sigma^\ell)}^\omega$ for some $\ell$ then the size of the smallest period may be bigger than $\ell+1$. For instance, if $L = \Sigma^\omega \setminus {(abbaccadd)}^\omega$ then there is a period of the form ${(a\Sigma^2)}^\omega$ but the shortest period of a counterexample is of size $9$. \begin{algorithm}[ht]% \footnotesize \caption{: $\Findperiod(M,q)$}% \label{algorithm-Findperiod} \begin{algorithmic} \REQUIRE {$\RSQ$ access to $L$;\\ $M = (Q, q_0, \delta, F)$, an NBW acceptor;\\ $q \in F$;\\ $L_{q_0,q} \cdot (L_{q,q})^{\omega} \setminus L \neq \emptyset$} \ENSURE {$v \in L_{q,q}$ such that $L_{q_0,q} \cdot (v)^{\omega} \setminus L \neq \emptyset$} \vspace{0.2cm} \STATE let $y = \varepsilon$ \FORALL {integers $n = 1,2,3,\ldots$} \STATE let $v_n = \Nextword(M,q,y)$ \STATE set $y = y \cdot v_n$ \FOR {integers $i$, $j$ with $1 \le i \le j \le n$} \FOR {$k = 0$ to $n|M|$} \IF {$\RSQ(L_{q_0,q}[k] \cdot (v_i \cdots v_j)^{\omega}) = $ ``no''} \RETURN {$v = v_i \cdots v_j$} \ENDIF \ENDFOR \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} Procedure $\Findperiod(M,q)$, summarized in Algorithm~\ref{algorithm-Findperiod}, starts from the condition \[L_{q_0,q} \cdot {(L_{q,q})}^{\omega} \setminus L \neq \emptyset\] and finds a sequence of words $v_1, v_2, \ldots \in L_{q,q}$ such that for each $n \ge 1$, \[L_{q_0,q} \cdot {(v_1 v_2 \cdots v_n \cdot L_{q,q})}^{\omega} \setminus L \neq \emptyset.\] For a sufficiently long such sequence, there exists a subsequence $v = (v_i \cdots v_j)$ that is a suitable period word, as we prove in Section~\ref{subsection-length-restrictions-time-bounds}. The procedure $\Nextword(M,q,y)$, summarized in Algorithm~\ref{algorithm-Nextword}, is called with $y = v_1 v_2 \cdots v_n$ and finds a suitable next word $v_{n+1}$. After determining a length $\ell$, it repeatedly calls the procedure $\Nextsymbol(M,q,y,\ell,v')$ to determine the next symbol of a suitable word of length $\ell$. 
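The point of the ${(abbaccadd)}^{\omega}$ example can be checked mechanically. The small script below is an illustration we add, not part of the algorithm: it confirms that every third symbol of the repeating block is an $a$, while the least period of the word is $9$.

```python
w = "abbaccadd"  # repeating block of the example above

def has_period(w, p):
    """Does the purely periodic word w^omega have period p?  Since the word
    has period |w|, comparing positions i and i+p for i < 2|w| suffices."""
    s = w * 4
    return all(s[i] == s[i + p] for i in range(2 * len(w)))

least_period = min(p for p in range(1, len(w) + 1) if has_period(w, p))
blocks_start_with_a = all(w[i] == "a" for i in range(0, len(w), 3))
```

So although the counterexample is built from blocks of the form $a\Sigma^2$, no period shorter than $9$ exists, which is why $\Findperiod$ cannot simply guess a small period length from such a pattern.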
\hspace{-0.83cm} \begin{minipage}[b]{0.54\textwidth} \begin{algorithm}[H]% \footnotesize \caption{: $\Nextword(M,q,y)$}% \label{algorithm-Nextword} \begin{algorithmic} \REQUIRE {$\RSQ$ access to $L$;\\ $M = (Q, q_0, \delta, F)$, an NBW acceptor;\\ $q \in F$;\\ $y \in L_{q,q}$ or $y = \varepsilon$;\\ $L_{q_0,q} \cdot (y \cdot L_{q,q})^{\omega} \setminus L \neq \emptyset$} \ENSURE {$v' \in L_{q,q}$ such that $L_{q_0,q} \cdot (y \cdot v' \cdot L_{q,q})^{\omega} \setminus L \neq \emptyset$} \vspace{0.2cm} \STATE search for integers $k,\ell \geq 0$ s.t.\\ $\RSQ(L_{q_0,q}[k] \cdot (y \cdot L_{q,q}[\ell])^{\omega}) =$ ``no'' \STATE let $v' = \varepsilon$ \WHILE {$|v'| < \ell$} \STATE let $\sigma = \Nextsymbol(M,q,y,\ell,v')$ \STATE set $v' = v' \cdot \sigma$ \ENDWHILE \RETURN $v'$ \end{algorithmic} \end{algorithm} \end{minipage} \hspace{2ex} \begin{minipage}[b]{0.43\textwidth} \begin{figure}[H] \caption{An illustration \\ of algorithm $\Nextword$}\label{fig:nextword-illustration} \end{figure} \scalebox{0.6}{ \begin{tikzpicture}[framed] \node [label=below:{$q_0$},circle,fill=black,draw=black,inner sep=1pt, minimum size=0.2cm] (q0) at (0,7) {}; \node [label=right:{$q$},circle,fill=black,draw=black,inner sep=1pt, minimum size=0.2cm] (q1) at (3,7) {}; \node [label=below:{$q$},circle,fill=black,draw=black,inner sep=1pt, minimum size=0.2cm] (q2) at (6-2,7-1.73205080757) {}; \node [label=right:{$q$},circle,fill=black,draw=black,inner sep=1pt, minimum size=0.2cm] (q3) at (6.9,6.4) {}; \node [label=above:{$q$},circle,fill=black,draw=black,inner sep=1pt, minimum size=0.2cm] (q4) at (6.73205080757,7+1) {}; \node [circle, fill=none, draw=none] (e) at (8,7) {}; \draw[-latex, thick] (q0) -- (q1) node [midway, above, fill=none] {$L_{q_0,q}$}; \draw[-latex,thick,black] ([shift=(180:2cm)]5,7) arc (-180:-123:2cm) node [midway, left, fill=none] {$v_1\in L_{q,q}$}; \draw[-latex,thick,black] ([shift=(240:2cm)]5,7) arc (-120:-21:2cm) node [midway, right, fill=none] {$\,\,v_2\in L_{q,q}$}; 
\draw[-latex,thick,black] ([shift=(-20:2cm)]5,7) arc (-20:27:2cm) node [midway, right, fill=none] {$v_3\in L_{q,q}$}; \draw[-latex,dashed,black] ([shift=(30:2cm)]5,7) arc (30:177:2cm) node [midway, above, fill=none] {$L_{q,q}$}; \end{tikzpicture} } \end{minipage} \vspace{10pt} The procedure $\Nextsymbol(M,q,y,\ell,v')$, summarized in Algorithm~\ref{algorithm-Nextsymbol}, is called to find a feasible next symbol with which to extend $v'$ in the procedure $\Nextword$. \hspace{-0.83cm} \begin{minipage}[b]{0.54\textwidth} \begin{algorithm}[H]% \footnotesize \caption{: $\Nextsymbol(M,q,y,\ell,v')$}% \label{algorithm-Nextsymbol} \begin{algorithmic} \REQUIRE {$\RSQ$ access to $L$;\\ $M = (Q,q_0,\delta,F)$, an NBW acceptor;\\ $q \in F$;\\ $y \in L_{q,q}$;\\ $v' \in \Sigma^*$, $|v'| < \ell$;\\ $L_{q_0,q} \cdot (y \cdot L_{q,q}[\ell,v'] \cdot L_{q,q})^{\omega} \setminus L \neq \emptyset$} \ENSURE {$\sigma \in \Sigma$ such that $L_{q_0,q} \cdot (y \cdot L_{q,q}[\ell,v'\sigma] \cdot L_{q,q})^{\omega} \setminus L \neq \emptyset$} \vspace{0.2cm} \STATE find integers $k \ge 0$, $m \ge 1$, and $\sigma \in \Sigma$ such that\\ {$\RSQ(L_{q_0,q}[k] \cdot (y \cdot L_{q,q}[\ell,v'\sigma] \cdot L_{q,q}[m])^{\omega}) = $ ``no''} \RETURN {$\sigma$} \end{algorithmic} \end{algorithm} \end{minipage} \hspace{2ex} \begin{minipage}[b]{0.43\textwidth} \begin{figure}[H] \caption{An illustration \\ of algorithm $\Nextsymbol$}\label{fig:nextsymbol-illustration} \end{figure} \scalebox{0.6}{ \begin{tikzpicture}[framed] \node [label=below:{$q_0$},circle,fill=black,draw=black,inner sep=1pt, minimum size=0.2cm] (q0) at (0,7) {}; \node [label=right:{$q$},circle,fill=black,draw=black,inner sep=1pt, minimum size=0.2cm] (q1) at (3,7) {}; \node [label=right:{},circle,fill=black,draw=black,inner sep=1pt, minimum size=0.2cm] (s1) at (6-2.93185165258,7-0.5176380902) {}; \node [label=right:{},circle,fill=black,draw=black,inner sep=1pt, minimum size=0.2cm] (s2) at (6-2.73205080757,7-1) {}; \node 
[label=right:{},circle,fill=black,draw=black,inner sep=1pt, minimum size=0.2cm] (s3) at (6-2.41421356237,7-1.41421356237) {}; \node [label=below:{$q$},circle,fill=black,draw=black,inner sep=1pt, minimum size=0.2cm] (q2) at (6-2,7-1.73205080757) {}; \node [circle, fill=none, draw=none] (e) at (8.6,7) {}; \draw[-latex, thick] (q0) -- (q1) node [midway, above, fill=none] {$L_{q_0,q}$}; \draw[-latex] (q1) -- (s1) node [midway, left, fill=none] {$\sigma_1$}; \draw[-latex] (s1) -- (s2) node [midway, left, fill=none] {$\sigma_2$}; \draw[-latex] (s2) -- (s3) node [midway, left, fill=none] {$\sigma_3$}; \draw[-latex,dashed,black] ([shift=(225:2cm)]5,7) arc (-135:177:2cm) node [midway, right, fill=none] {$L_{q,q}$}; \end{tikzpicture} } \end{minipage} \section{Correctness}% \label{section-correctness} The main hurdle in proving the correctness of algorithm $\A_{{\Trees}}$ is to prove Theorem~\ref{theorem-restricted-subset-nbw-dbw}. The polynomial bound in the proof of Theorem~\ref{theorem-restricted-subset-nbw-dbw} is obtained through a sequence of lemmas bounding the size of the acceptors used in $\A_{{\Trees}}$ subprocedures and the length restrictions and running time in calls to $\RSQ$ made by these procedures. Section~\ref{subsection-bounding-inouts-to-rsq} deals with bounding the acceptors, and Section~\ref{subsection-length-restrictions-time-bounds} deals with the more challenging part, providing the length restrictions. Finally, Section~\ref{subsection-correctness-of-Atrees} concludes with the theorem stating the correctness of algorithm $\A_{{\Trees}}$. \subsection{Bounding the Acceptors}% \label{subsection-bounding-inouts-to-rsq} We turn to the representation (as NBW or DBW acceptors) of the languages used in restricted subset queries by $\R^{\omega}(M)$ and its subprocedures. We consider the size, out-degree, and time to construct the acceptors. 
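The concatenation of a special-form acceptor $M_1$ with an acceptor $M_2$ can be sketched as runnable Python. The dict-of-sets NFW encoding and the state naming are our own assumptions: the lone accepting state of $M_1$ is dropped and its in-transitions are redirected to the initial state of $M_2$.

```python
def concat_special(M1, M2):
    """Toy sketch: M = (Q, q0, d, acc) with d mapping (state, symbol) to a
    set of states; M1 is in special form (at most one accepting state,
    with no out-transitions).  States of M1 and M2 are assumed disjoint."""
    (Q1, q01, d1, acc1), (Q2, q02, d2, acc2) = M1, M2
    if not acc1:
        return ({q01}, q01, {}, set())      # L(M1) empty => result empty
    (f1,) = acc1                            # unique accepting state of M1
    if f1 == q01:
        return M2                           # L(M1) = {epsilon}
    # redirect every transition into f1 to M2's initial state
    d = {k: {(q02 if r == f1 else r) for r in rs} for k, rs in d1.items()}
    for k, rs in d2.items():
        d.setdefault(k, set()).update(rs)
    return ((Q1 - {f1}) | Q2, q01, d, set(acc2))
```

The $\omega$-repetition construction is analogous: instead of redirecting into a second machine, the in-transitions of the accepting state are redirected to $M_1$'s own initial state, which becomes the unique accepting state.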
In $\R^{\omega}(M)$, there is a restricted subset query with $M$ itself, and if that query is answered ``no'', a sequence of restricted subset queries with $M_q$ for accepting states $q$ until an answer of ``no''. Clearly, if $M$ is an NBW acceptor, each $M_q$ is an NBW acceptor of the same size and out-degree and is easily constructed from $M$, and similarly if $M$ is a DBW acceptor. The restricted subset queries made in $\Findctrex$ and its subprocedures are of the form $P \cdot {(S)}^{\omega}$, where $P$ is a length and prefix restricted version of $L_{q_0,q}$ and $S$ is a concatenation of (at most) a finite word and two length and prefix restricted versions of $L_{q,q}$. Therefore in what follows we consider the operations of concatenation and $\omega$-repetition of regular languages of finite words. These operations are particularly simple for DFW or NFW acceptors in \concept{special form}, that is, containing at most one accepting state, which has no out-transitions defined. In general, any NFW acceptor can be converted to special form, possibly at the cost of increasing its out-degree. A regular language of finite words is recognized by a DFW acceptor in special form iff it is prefix-free. However, if $M$ is an NBW (resp., DBW) acceptor, then the finite word languages $L_{q_0,q}$ and $L_{q,q}$ are recognized by easily constructed NFW (resp., DFW) acceptors of size at most $|M|$ and out-degree at most the out-degree of $M$. Lemma~\ref{lemma-length-and-prefix-restriction} shows that the length and prefix restricted versions of $L_{q_0,q}$ and $L_{q,q}$ are recognized by NFW (resp., DFW) acceptors in special form which may be constructed in time polynomial in $|M|$, $\ell$, and $|v|$ and have out-degree at most the out-degree of $M$. \begin{lem}% \label{lemma-concatenation} Suppose $M_1$ is an NFW acceptor in special form and $M_2$ is an NFW or NBW acceptor. 
Then an acceptor $M$ for $\lang{M_1} \cdot \lang{M_2}$ can be constructed such that \begin{enumerate} \item $|M| \le |M_1| + |M_2|$, \item the out-degree of $M$ is at most the maximum of out-degrees of $M_1$ and $M_2$, \item $M$ can be constructed in polynomial time, \item $M$ is deterministic if $M_1$ and $M_2$ are deterministic, \item $M$ is an NFW in special form if $M_2$ is an NFW in special form. \end{enumerate} \end{lem} \begin{proof} Assume the states of $M_1$ and $M_2$ are disjoint. If $M_1$ has no accepting state then $\lang{M_1} = \emptyset$ and we take $M$ to be a one-state acceptor of the same kind as $M_2$ that recognizes $\emptyset$. Otherwise, $M_1$ has one accepting state $q_1$ with no out transitions. If $q_1$ is also the initial state of $M_1$, then $\lang{M_1} = \{\varepsilon\}$ and we take $M = M_2$. Otherwise, $M$ is constructed by taking the union of the two machines, removing the state $q_1$ and redirecting all the transitions to $q_1$ in $M_1$ to the initial state of $M_2$. The initial state of $M$ is set to be the initial state of $M_1$, and the accepting states of $M$ are set to be the accepting states of $M_2$. Then $M$ is an NFW acceptor if $M_2$ is an NFW acceptor, and an NBW acceptor if $M_2$ is an NBW acceptor. It is straightforward to verify the required properties of $M$. 
\end{proof} \begin{lem}% \label{lemma-omega-repetition} Suppose $M_1$ is an NFW acceptor in special form. Then an NBW acceptor $M$ for $\lang{M_1}^{\omega}$ can be constructed such that \begin{enumerate} \item $|M| \le |M_1|$, \item the out-degree of $M$ is at most the out-degree of $M_1$, \item $M$ can be constructed in polynomial time, \item $M$ is deterministic if $M_1$ is deterministic. \end{enumerate} \end{lem} \begin{proof} If $M_1$ has no accepting states then $\lang{M_1} = \emptyset$. Otherwise, $M_1$ has one accepting state with no out transitions. If the accepting state of $M_1$ is also its initial state, then $\lang{M_1} = \{\varepsilon\}$. In these two cases, $\lang{M_1}^{\omega} = \emptyset$ and we take $M$ to be an NBW acceptor with one state and no accepting states. Otherwise, we construct $M$ by removing from $M_1$ its unique accepting state $q_1$ and redirecting all the transitions into $q_1$ to the initial state of $M_1$. The initial state of $M_1$ becomes the unique accepting state of $M$. It is straightforward to verify the required properties of $M$. \end{proof} The above give us the following corollary for the procedure $\R^{\omega}$. \begin{cor}% \label{lemma-nbw-dbw-inputs} When the input to $\R^{\omega}(M)$ is an NBW (resp., DBW) acceptor $M$, each $\RSQ$ can be made with an NBW (resp., DBW) acceptor whose out-degree is at most the out-degree of $M$ and can be constructed in time polynomial in $|M|$ and parameters giving the length restrictions and the lengths of any words that appear. 
\end{cor} \subsection{Length Restrictions and Time Bounds}% \label{subsection-length-restrictions-time-bounds} We now turn to establishing the correctness and running time of the subprocedures. The first two lemmas allow us to bound the parameters giving the length restrictions in inputs to $\RSQ$. \begin{lem}% \label{lemma-length-bound-k} Let $S \subseteq L_{q,q}$ and suppose $L_{q_0,q} \cdot {(S)}^{\omega} \setminus L \neq \emptyset$. Then for some $k < |M|\cdot|T_L|$ we have $L_{q_0,q}[k] \cdot {(S)}^{\omega} \setminus L \neq \emptyset$. \end{lem} \begin{proof} Let $u = \sigma_1 \cdots \sigma_k$ be chosen to be a shortest word in $L_{q_0,q}$ such that $u \cdot {(S)}^{\omega} \setminus L \neq \emptyset$. Then for some $s_1, s_2, \ldots$ from $S$, the $\omega$-word \[w = u \cdot s_1 \cdot s_2 \cdots\] is in $(L_{q_0,q} \cdot {(S)}^{\omega} \setminus L)$. There is an accepting run $r = r_0, r_1, \ldots$ of $M$ on $w$. Let $t = t_0, t_1, \ldots$ be the unique run of the DBW acceptor $T_L$ on $w$, which is rejecting. Consider the sequence of pairs $(r_n,t_n)$ for $0 \le n \le |u|$. If $|u| \ge |M|\cdot|T_L|$, there will be a repeated pair, say $(r_i,t_i) = (r_j,t_j)$ for $i < j$. If we excise symbols $i+1$ to $j$ of $u$ to get $u'$ and the corresponding states from the runs $r$ and $t$ to get $r'$ and $t'$, then \[w' = u' \cdot s_1 \cdot s_2 \cdots\] is accepted by $M$ (witnessed by $r'$) and rejected by $T_L$ (witnessed by $t'$), so $u'$ is a shorter word such that $u' \cdot {(S)}^{\omega} \setminus L \neq \emptyset$, a contradiction. \end{proof} \begin{lem}% \label{lemma-length-bound-ell} Let $S \subseteq L_{q,q}$ and suppose $L_{q_0,q} \cdot {(S \cdot L_{q,q})}^{\omega} \setminus L \neq \emptyset$. Then for some $k, \ell < |M|\cdot|T_L|$, we have that $L_{q_0,q}[k] \cdot {(S \cdot L_{q,q}[\ell])}^{\omega} \setminus L \neq \emptyset$. \end{lem} \begin{proof} Let $w \in (L_{q_0,q} \cdot {(S \cdot L_{q,q})}^{\omega} \setminus L)$. 
The unique run of the DBW acceptor $T_L$ on $w$ is rejecting, and does not visit an accepting state of $T_L$ after some finite prefix. Because $S \subseteq L_{q,q}$, we may choose a sufficiently long prefix $u$ of $w$ such that $u \in L_{q_0,q}$ and when processing $w$, $T_L$ never visits an accepting state after reading the prefix $u$. Then $w$ may be factored as \[w = u (s_1 x_1) (s_2 x_2) \cdots,\] where each $s_n \in S$ and each $x_n \in L_{q,q}$. There is an accepting run $r = r_0, r_1, \ldots$ of $M$ on $w$, which we may assume visits the state $q$ after $u$, and also after every $s_n$ and every $x_n$. Consider the states $t_1, t_2, \ldots$ visited by $T_L$ at the start of every group $(s_n x_n)$ when processing $w$. After at most $|T_L|$ groups, there must be a repeat, say $t_i = t_{i+p}$ for some $p > 0$. Let $j = i+p-1$ and consider the $\omega$-word \[w' = u \cdot (s_1 x_1) \cdots (s_{i-1} x_{i-1}) \cdot {((s_i x_i) \cdots (s_j x_j))}^{\omega}.\] There is an accepting run of $M$ on $w'$, and the unique run of $T_L$ on $w'$ is rejecting. Let \[u' = u \cdot (s_1 x_1) \cdots (s_{i-1} x_{i-1}) \ \ \textrm{and} \ \ z = x_i \cdot (s_{i+1} x_{i+1}) \cdots (s_j x_j).\] Then $w' = u' \cdot {(s_i z)}^{\omega}$ and $u' \in L_{q_0,q}$ and $z \in L_{q,q}$. Consider an accepting run $r' = r_0', r_1', \ldots$ of $M$ on $w'$ that visits state $q$ after processing $u'$ and each occurrence of $s_i$ and $z$. Consider the unique run $t' = t_0', t_1', \ldots$ of $T_L$ on $w'$, which is rejecting. As in the proof of Lemma~\ref{lemma-length-bound-k}, if $|z| \ge |M|\cdot|T_L|$ then we may remove a segment of $z$ that produces a cycle in the pairs $(r_n',t_n')$. Thus, for some $\ell < |M|\cdot|T_L|$, we have \[L_{q_0,q} \cdot {(S \cdot L_{q,q}[\ell])}^{\omega} \setminus L \neq \emptyset.\] Applying Lemma~\ref{lemma-length-bound-k}, there also exists $k < |M|\cdot|T_L|$ such that \[L_{q_0,q}[k] \cdot {(S \cdot L_{q,q}[\ell])}^{\omega} \setminus L \neq \emptyset. 
\qedhere\] \end{proof} We now prove the correctness and polynomial running time of $\Findprefix$ and $\Findperiod$, which establishes the correctness and polynomial running time of $\Findctrex$. \begin{lem}% \label{lemma-find-prefix} Assume $v \in L_{q,q}$ is such that \[L_{q_0,q} \cdot {(v)}^{\omega} \setminus L \neq \emptyset.\] Then in time polynomial in $|M|$, $|T_L|$ and $|v|$, the procedure $\Findprefix(M,q,v)$ returns a word $u \in L_{q_0,q}$ such that \[{u(v)}^{\omega} \in (L_{q_0,q} \cdot {(v)}^{\omega} \setminus L).\] \end{lem} \begin{proof} The algorithm asks restricted subset queries about $L$ for $\ell = 0,1,2,\ldots$ to find the least $\ell$ such that \[L_{q_0,q}[\ell] \cdot {(v)}^{\omega} \setminus L \neq \emptyset.\] The value of $\ell$ is bounded by $|M| \cdot |T_L|$, by Lemma~\ref{lemma-length-bound-k}. It then searches symbol by symbol for a string $u$ of length $\ell$ satisfying the required condition. \end{proof} The procedure $\Findperiod$ depends on the procedures $\Nextword$ and $\Nextsymbol$. The next lemma establishes the correctness and running time of the procedure $\Nextsymbol$. \begin{lem}% \label{lemma-next-symbol} Suppose $\ell$ is a positive integer, $y \in L_{q,q}$ or $y = \varepsilon$ and $v' \in \Sigma^*$ is such that $|v'| < \ell$ and we have \[L_{q_0,q} \cdot {(y \cdot L_{q,q}[\ell,v'] \cdot L_{q,q})}^{\omega} \setminus L \neq \emptyset.\] Then in time polynomial in $|M|$, $|T_L|$, $|y|$ and $\ell$, $\Nextsymbol(M,q,y,\ell,v')$ finds a symbol $\sigma \in \Sigma$ such that \[L_{q_0,q} \cdot {(y \cdot L_{q,q}[\ell, v'\sigma] \cdot L_{q,q})}^{\omega} \setminus L \neq \emptyset.\] \end{lem} \begin{proof} Consider an $\omega$-word \[w = u (y v' x_1 y_1) (y v' x_2 y_2) (y v' x_3 y_3) \cdots,\] in the language \[L_{q_0,q} \cdot {(y \cdot L_{q,q}[\ell,v'] \cdot L_{q,q})}^{\omega} \setminus L,\] where $u \in L_{q_0,q}$, and for all $i$, $v' x_i \in L_{q,q}[\ell,v']$ and $y_i \in L_{q,q}$. 
Fix a particular accepting run of $M_q$ on $w$ that visits $q$ after $u$ and after every occurrence of $y$, $v'x_i$ and $y_i$ in the factorization of $w$ above. Because in this run $q$ is visited infinitely many times, we may assume that the prefix $u$ is chosen so that $w$ visits no accepting state of $T_L$ after the prefix $u$ has been processed. Now consider the sequence $t_1, t_2, t_3, \ldots$ of states of $T_L$ visited by $w$ at the start of every group $(y v' x_i y_i)$. This sequence must repeat states of $T_L$, say $t_i = t_{i+p}$ for some $p > 0$. Let $j = i + p - 1$ and consider the word \[w' = u (y v' x_1 y_1) \cdots (y v' x_{i-1} y_{i-1}) {((y v' x_i y_i) \cdots (y v' x_j y_j))}^{\omega}.\] Clearly, $w' \not\in L$ because after the prefix $u$, $w'$ visits only rejecting states of $T_L$. Consider the cycle \[((y v' x_i y_i) \cdots (y v' x_j y_j)).\] If it is of length $1$ (that is $i = j$), then we may duplicate the one group $(y v' x_i y_i)$ to make a cycle of length $2$ without changing $w'$. Then we may factor the cycle as \[((y v' x_i y_i) z) \ \ \ \ \ \ \textrm{where}\ \ \ \ \ \ z = (y v' x_{i+1} y_{i+1}) \cdots (y v' x_j y_j)\] and $z \in L_{q,q}$. Choosing $\sigma$ to be the first symbol of $x_i$ and $x_i'$ to be the rest of $x_i$, we have \[w' = u' {(y v'\sigma x_i' z)}^{\omega},\] where $u' = u (y v' x_1 y_1) \cdots (y v' x_{i-1} y_{i-1})$ and therefore \[w' \in L_{q_0,q} \cdot {(y \cdot L_{q,q}[\ell,v' \sigma] \cdot L_{q,q})}^{\omega}.\] Thus we are guaranteed that some symbol $\sigma$ with the required property exists. 
Lemma~\ref{lemma-length-bound-ell} (with $S = \{y\} \cdot L_{q,q}[\ell,v'\sigma]$) shows that there exist $k, m < |M|\cdot|T_L|$ such that \[L_{q_0,q}[k] \cdot {(y \cdot L_{q,q}[\ell,v' \sigma] \cdot L_{q,q}[m])}^{\omega} \setminus L \neq \emptyset.\] Thus, the search for $k$ and $m$ in the procedure $\Nextsymbol$ can enumerate such pairs $(k,m)$ in increasing order of their maximum and try all $\sigma \in \Sigma$ for each pair until a suitable symbol $\sigma$ is found to return. This process runs in time polynomial in $|M|$, $|T_L|$, $|y|$ and $\ell$. \end{proof} \begin{lem}% \label{lemma-Nextword} Suppose $y \in L_{q,q}$ or $y = \varepsilon$ is such that \[L_{q_0,q} \cdot {(y \cdot L_{q,q})}^{\omega} \setminus L \neq \emptyset.\] Then in time bounded by a polynomial in $|M|$, $|T_L|$ and $|y|$, $\Nextword(M,q,y)$ returns a word $v' \in L_{q,q}$ of length bounded by $|M| \cdot |T_L|$ such that \[L_{q_0,q} \cdot {(y v' \cdot L_{q,q})}^{\omega} \setminus L \neq \emptyset.\] \end{lem} \begin{proof} By Lemma~\ref{lemma-length-bound-ell} (with $S = \{y\}$), the search for $k$ and $\ell$ will succeed with both less than $|M| \cdot |T_L|$. Then $\ell$ calls to the procedure $\Nextsymbol$ will produce the required word $v'$ of length $\ell$. \end{proof} The next lemma shows that $\Findperiod$ calls $\Nextword$ at most $|T_L|$ times. \begin{lem}% \label{lemma-loop-on-vs} Suppose $v_1, v_2, \ldots, v_n \in L_{q,q}$ are such that \[L_{q_0,q} \cdot {(v_1 v_2 \cdots v_n \cdot L_{q,q})}^{\omega} \setminus L \neq \emptyset.\] Also suppose that the number of states of $T_L$ is less than $n$. 
Then there exist integers $i$ and $j$ with $1 \le i \le j \le n$ such that \[L_{q_0,q} \cdot {(v_i v_{i+1} \cdots v_j)}^{\omega} \setminus L \neq \emptyset.\] \end{lem} \vspace{-5mm} \begin{proof} Consider an $\omega$-word \[w = u (v_1 v_2 \cdots v_n \cdot y_1) (v_1 v_2 \cdots v_n \cdot y_2) (v_1 v_2 \cdots v_n \cdot y_3) \cdots,\] in the language \[L_{q_0,q} \cdot {(v_1 v_2 \cdots v_n \cdot L_{q,q})}^{\omega} \setminus L,\] where $u \in L_{q_0,q}$ and each $y_i \in L_{q,q}$. Fix a particular accepting run of $M$ on $w$ in which state $q$ is visited after each of the individual segments of $w$. Considering the sequence of states of $T_L$ that are visited in processing $w$, there must be some finite prefix after which only rejecting states of $T_L$ are visited. Because the run of $M$ on $w$ visits $q$ infinitely often, we may assume that the prefix $u$ of $w$ extends past the last visit of $T_L$ to an accepting state. Now consider the states $t_1, t_2, \ldots, t_n$ visited by $T_L$ at the start of each of the first occurrences of $v_1, v_2, \ldots, v_n$, respectively. Because $n$ is greater than the number of states of $T_L$, some state of $T_L$ must repeat in this sequence, say $t_i = t_{i+p}$ for some $p > 0$. Let $j = i+p-1$ and consider the $\omega$-word \[w' = u v_1 v_2 \cdots v_{i-1} {(v_i v_{i+1} \cdots v_j)}^{\omega}.\] Then $w' \in L_{q_0,q} \cdot {(v_i v_{i+1} \cdots v_j)}^{\omega}$ because $u' = u v_1 v_2 \cdots v_{i-1}$ is in $L_{q_0,q}$. However, because only rejecting states of $T_L$ are visited in the repeating portion of the word, $w' \not\in L$. \end{proof} The final lemma, presented below, establishes the correctness and polynomial running time of the procedure $\Findperiod$. \begin{lem}% \label{lemma-Findperiod} Suppose $L_{q_0,q} \cdot {(L_{q,q})}^{\omega} \setminus L \neq \emptyset$. 
Then, in polynomial time in $|M|$ and $|T_L|$, the procedure $\Findperiod(M,q)$ with restricted query access to $L$ returns a period word $v$ satisfying the condition \[L_{q_0,q} \cdot {(v)}^{\omega} \setminus L \neq \emptyset.\] \end{lem} \begin{proof} The preconditions of $\Findperiod$ are satisfied, and it calls $\Nextword(M,q,y)$ repeatedly, with $y = \varepsilon$, then $y = v_1$, then $y = v_1 v_2$, and so on, where $v_{n+1}$ is the value returned by the call with $y = v_1 v_2 \cdots v_n$. Each of these calls satisfies the preconditions of $\Nextword$, so after at most $|T_L|$ such calls, $\Findperiod$ returns a correct period word $v$, by Lemma~\ref{lemma-loop-on-vs}. \end{proof} These lemmas can be used in combination to prove Theorem~\ref{theorem-restricted-subset-nbw-dbw}, giving a polynomial time reduction of unrestricted subset queries to restricted subset queries for NBW acceptors (resp., DBW acceptors). \subsection{Correctness of \texorpdfstring{$\A_{{\Trees}}$}{A-trees}}% \label{subsection-correctness-of-Atrees} The lemmas established in the previous subsection also show the correctness and running time of $\Findctrex(M',q)$ when called by $\A_{{\Trees}}$, provided that each $\RSQ$ about $L$ is correctly answered and $q$ satisfies the precondition of $\Findctrex$. To complete the consideration of representation issues, we must prove that $\A_{{\Trees}}$ can successfully simulate $\Findctrex$ as stated in Lemma~\ref{lemma-regular-tree-inputs}. \begin{lem}\label{lemma-regular-tree-inputs} When $\A_{{\Trees}}$ simulates $\Findctrex(M',q)$ in response to a negative counterexample $t$, every $\RSQ$ can be simulated with a $\MQ$ about $\Trees_d(L)$. \end{lem} \begin{proof} In the learning algorithm $\A_{{\Trees}}$, when a negative counterexample $t$ represented by $A_t$ is received, the algorithm simulates the procedure $\Findctrex(M',q)$ where $M' = \acceptor(A_t)$ is an NBW acceptor recognizing $\paths(t)$ and $q$ is an accepting state of $M'$. 
Note that by Lemma~\ref{lemma-tree-nbw}, $|M'| \le |A_t|$ and the out-degree of $M'$ is at most $d$, the arity of $t$. Then Corollary~\ref{lemma-nbw-dbw-inputs} shows that each $\RSQ$ is made with an NBW acceptor that has out-degree at most the out-degree of $M'$, which is at most $d$. Also, each such NBW acceptor can be constructed in time polynomial in $|M'|$ and parameters giving the length restrictions and the lengths of any words that appear. The final observation is that each such $\RSQ$ is made with an NBW acceptor that recognizes a safety language of the form $P \cdot {(S)}^{\omega}$, where $P$ and $S$ are each languages of fixed-length finite words. Then, by Lemma~\ref{lemma-nbw-tree}, each such $\RSQ(N)$ can be simulated by $\A_{{\Trees}}$ using $\MQ(\tree_d(N))$ about $\Trees_d(L)$. \end{proof} If $q$ does not satisfy the precondition of $\Findctrex$, then the procedure may run forever. However, at least one accepting state $q$ satisfies the precondition, so at least one simulation will halt and return $(u,v)$, at which point $\A_{{\Trees}}$ terminates all the other simulations. This concludes the proof of the reduction given by $\A_{{\Trees}}$, whose general statement is given in Theorem~\ref{theorem-reduction-general} below. \begin{thm}% \label{theorem-reduction-general} Suppose $\class{C} \subseteq \dbw$ and $\A$ is a polynomial time algorithm that learns class $\class{C}$ using membership and equivalence queries. Then for every positive integer $d$ there is a polynomial time algorithm $\A_{{\Trees}}$ that learns the class $\Trees_d(\class{C})$ using membership and equivalence queries. \end{thm} This theorem, together with Maler and Pnueli's~\cite{Maler1995} polynomial time algorithm to learn the class of weak regular $\omega$-word languages using membership and equivalence queries, proves our main result --- Theorem~\ref{theorem-learn-trees}. 
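The way $\A_{{\Trees}}$ dovetails one simulation of $\Findctrex$ per accepting state, keeping only the first one to halt, can be sketched as follows; the generator-based interface is an assumption of this sketch, not the paper's pseudocode.

```python
# Hedged sketch: run one simulation per accepting state in round-robin
# fashion. A simulation whose precondition fails may never produce a result,
# but at least one simulation is guaranteed to halt; the first one to finish
# wins, and the rest are abandoned.

def dovetail(simulations):
    """simulations: generators that yield None while still working and
    eventually yield a result, e.g. a pair (u, v). Returns the first result."""
    active = list(simulations)
    while active:
        for sim in list(active):
            try:
                step = next(sim)
            except StopIteration:
                active.remove(sim)     # this simulation gave up entirely
                continue
            if step is not None:
                return step            # first finished simulation wins
    return None

# Toy usage: one "simulation" never finishes, the other halts after 3 steps.
def never_halts():
    while True:
        yield None

def halts_after(n, result):
    for _ in range(n):
        yield None
    yield result

print(dovetail([never_halts(), halts_after(3, ("u", "v"))]))  # ('u', 'v')
```

Round-robin stepping is what makes the scheme safe: stepping the non-halting simulations a bounded number of extra times costs only a polynomial overhead relative to the one that halts.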
\section{Discussion} We have shown that if $\class{C} \subseteq \dbw$ can be learned in polynomial time with membership and equivalence queries, then $\Trees_d(\class{C})$ can be learned in polynomial time with membership and equivalence queries for all $d \ge 1$. Consequently, there is a polynomial time algorithm to learn $\Trees_d(\dwpw)$ with membership and equivalence queries. We have also shown that there are polynomial time algorithms that implement unrestricted subset queries using restricted subset queries for $\dfw$, $\nfw$, $\dbw$ and $\nbw$. One open question is whether there is an interesting subclass of $\dbw$ that is larger than $\dwpw$ but still learnable in polynomial time using membership and equivalence queries, to which Theorem~\ref{theorem-reduction-general} would also apply. \section*{Acknowledgment} The authors would like to thank the anonymous reviewers for their valuable feedback and helpful suggestions. This research was supported by the United States - Israel Binational Science Foundation, Jerusalem, Israel (BSF) under grant number \#8758451 and by the Office of Naval Research (ONR) under grant number \#N00014-17-1-2787. \bibliographystyle{alpha}
\section{How can \textit{classical} gravity teach us anything about \textit{quantum} spacetime?} \label{sec:topdown} In one sentence, the paradigm that we will explore \cite{rop} is the following: Gravity is an emergent phenomenon like gas dynamics or elasticity, with the gravitational field equations having the same status as, say, the equations of fluid dynamics/elasticity. Historically, this paradigm originated with Sakharov \cite{sakharov} and was interpreted in different ways by Jacobson \cite{ted}, Volovik \cite{grisha}, Bei-Lok Hu \cite{hu} and many others. (Analogue models \cite{analogue} as well as the membrane paradigm \cite{membrane} for black holes have some similarities with this approach. For a sample of recent work re-assembling these ideas, see \cite{others}). I will now elaborate on this theme, drawing mainly from the work I was involved in.\footnote{I will use mostly positive signature with Latin letters covering $0,1,...D-1$ and Greek letters covering the spatial indices $1,2,...D-1$ of a $D$ dimensional spacetime.} Part of my programme involves a ``top-down'' approach to quantum spacetime (in the sense of zooming in from the top to smaller and smaller spatial scales, like in a Google map) to learn key lessons (Sections 1--5), which are then used to provide a thermodynamic derivation of field equations from extremising spacetime entropy density (Section 6). Some people use the word ``top-down'' to mean exactly the opposite; I will use ``top-down'' the way I have defined it, viz., from classical to quantum domain. One may find it surprising that such an attempt, to determine the features of the microscopic theory from knowing its properties at macroscopic scales, is so successful. Of course, in the \textit{strict} sense, classical theories cannot tell us anything about quantum dynamics; after all, classical physics, by definition, is independent of $\hbar$ while quantum effects do depend on $\hbar$. 
But there is one effect, viz., the thermodynamics of spacetime horizons \cite{daviesunruh}, which brings together the principles of quantum theory and gravity. This fact, along with a judicious choice for the questions to ask, allows one to make a fairly persuasive case for the structure of quantum spacetime. To see such a ``top-down'' approach in context, let me describe at least three other --- more conventional --- examples in which the deeper, more exact (`bottom layer') theory leaves a tell-tale signature on the `top layer'. \medskip \noindent (i) \textit{Electrons in a helium atom}: Suppose you manage to solve the Schr\"{o}dinger equation for the two electrons in the helium atom and determine the energy eigenfunctions $\psi(\bm x_1, \bm x_2)$. Your experimental friend will tell you that only half of these wave functions [which are antisymmetric under the interchange ($\bm x_1 \Leftrightarrow \bm x_2$)] occur in the real world. Any amount of your staring at the Schr\"{o}dinger equation for the helium atom will not tell you why nature requires this antisymmetry under pair exchange for electrons. The reason lies deep down in relativistic quantum theory, but its residual effect remains as a tell-tale signature even in the $c=\infty$ limit of field theory, viz., non-relativistic quantum mechanics. \medskip \noindent (ii) \textit{Boltzmann's conjecture of atoms}: Classical thermodynamics of a gas/fluid uses variables like density, pressure, etc. in the continuum description. But the fact that such a fluid can store and exchange heat energy \textit{cannot} be understood within the continuum theory. Boltzmann had the insight to suggest that thermal phenomena \textit{demand} the existence of microscopic degrees of freedom in matter. 
In fact, the law of equipartition, expressed as $E/[(1/2)k_BT] = N$, relates two thermodynamic variables $E, T$ (which are well-defined for a continuum fluid) to $N$, the number density of microscopic degrees of freedom, which cannot be interpreted in the continuum limit at all. Avogadro's number, closely related to $N$, was determined even before we fully understood what exactly it counts and without any direct evidence for the molecular structure of matter. This is another example of our being able to say something about microscopic structure from the features of the macroscopic theory. \medskip \noindent (iii) \textit{Equality of inertial and gravitational masses}: The most dramatic example is provided by Einstein's use of the principle of equivalence which, of course, was known for centuries. Einstein realized that $m_i = m_g$ is not a trivial algebraic accident which should be taken for granted, as others before him had done, but requires an explanation. This led him to the description of gravity in terms of the geometry of spacetime. The relation $m_i=m_g$ was a signature of the deeper theory discernible in the approximate (top-layer) description. The lesson from these three examples is obvious. For a top-down approach to be useful, you need to ask the right questions! One way is to pick up features of the theory that are usually taken for granted (`algebraic accidents') --- or not even noticed --- and demand deeper explanations for them. This is the procedure I will follow in this programme to probe the quantum structure of spacetime from known aspects of classical gravity. \section{The conceptual background} I will describe several peculiar features of classical gravitational theories from which we can obtain a broad picture regarding the quantum microstructure of the spacetime, in the form of a series of ``lessons''. Most of these (starting from lesson 4!) will deal with specific mathematical features of the theory. 
But to provide the necessary backdrop, I will distill out of these mathematical features three conceptual points and describe them right at the outset, even though the explicit evidence for these will emerge only later on, in the course of the discussion. \vskip 0.1in \subsection*{\textbf{\itshape Lesson 1: Providing a quantum description of spacetime structure is quite different from constructing a quantum theory of gravity.}} \vskip 0.1in In this approach, it is necessary to make a clear distinction between a quantum description of spacetime structure and a theory of quantum gravity. Classical field equations of gravity \textit{also} happen to describe the classical dynamics of the spacetime because of their geometrical interpretation. In the emergent gravity paradigm, these field equations have a status similar to the equations of fluid mechanics or elasticity. So, if this paradigm is correct, one should \textit{not} expect quantizing a classical theory of gravity to lead us to the quantum structure of spacetime any more than quantizing the equations of elasticity or hydrodynamics will lead us to the atomic structure of matter! Quantizing the elastic vibrations of a solid will lead only to phonon physics \cite{hu}, just as quantizing a classical theory of gravity will lead to graviton physics. The latter could be quite different from a description of the quantum structure of spacetime just as phonon physics is quite different from the physics of the atom. \subsection*{\textbf{\itshape Lesson 2: The guiding principle to use for understanding the quantum microstructure of the spacetime should be the thermodynamics of horizons.}} \vskip 0.1in Combining the principles of GR and quantum theory is not a technical problem that could be solved just by using sufficiently powerful mathematics. It is more of a conceptual issue, and decades of failure of sophisticated mathematics in delivering quantum gravity indicate that we should try a different approach. 
This is very much in tune with item (iii) mentioned in Sec. \ref{sec:topdown}. Einstein did not create a sophisticated mathematical model for $m_i$ and $m_g$ and try to interpret $m_i = m_g$. He used thought experiments to arrive at a conceptual basis in which $m_i = m_g$ can be interpreted in a natural manner so that $m_i=m_g$ will cease to be an algebraic accident. Once this is done, physics itself led him to the maths that was needed. Of course, the key issue is what could play the role of a guiding principle similar to the principle of equivalence in the present context. \textit{For this, my bet will be on the thermodynamics of horizons.}\cite{rop,tpPR} A successful model will have the connection between horizon thermodynamics and gravitational dynamics \textit{at its foundation} rather than this feature appearing as a result derived in the context of certain specific solutions to the field equations. We will see evidence for its importance throughout the discussion in what follows. \subsection*{\textbf{\itshape Lesson 3: Think beyond Einstein gravity, black hole thermodynamics and think off-shell.}} \vskip 0.1in There are four technical points closely related to the above conjecture (viz., that the thermodynamics of horizons should play a foundational role) which need to be recognized if this approach is to yield dividends: \begin{itemize} \item One must concentrate on the general context of observer-dependent, local thermodynamics associated with the local horizons, going beyond \textit{black hole} thermodynamics. Black hole horizons in the classical theory are far too special, on-shell, global constructs to provide a sufficiently general backdrop to understand the quantum structure of spacetime. The preoccupation with black hole horizons loses sight of the conceptual fact that all horizons are endowed with temperature as perceived by the appropriate class of observers. Observer dependence \cite{tpdialogue} of thermal phenomena is a feature and not a bug! 
\item One should also think beyond Einstein's theory and use the structure of, say, Lanczos-Lovelock\ models of gravity \cite{lovelock} in exploring the microstructure of spacetime. Previous work (starting from ref. \cite{TPParis}) has shown that the interpretation of gravity as an emergent phenomenon transcends Einstein's theory and remains applicable to (at least) all Lanczos-Lovelock\ models. Exploiting this connection will allow us to distinguish results of general validity from those which are special to Einstein's theory in $D=4$. Irrespective of whether Lanczos-Lovelock\ models are relevant to the real world, they provide a good test-bed to see which concepts and results are robust and general. \item A corollary is that one should \textit{not} think of the entropy of horizons as being proportional to their area. This result, which is true in Einstein's theory, fails for all higher order Lanczos-Lovelock\ models \cite{wald}. But all the general thermodynamic features still continue to remain valid. Because area brings in several other closely related geometrical notions, restricting oneself to Einstein's theory leads to an incorrect view of what entropy and the quantum microstructure of spacetime could be. \item The quantum features of a theory are off-shell features. But, fortunately, the action principle provides a window to quantum theory because of the path integral formalism. Therefore any peculiar feature of a classical action functional could give us insights into the underlying quantum theory much more than the structure of the field equations. This suggests that we need to look at the off-shell structure of the theory using the form of action principles rather than tie ourselves down to field equations. \end{itemize} These ingredients, to a great extent, distinguish the approach I was developing from those of many others. \section{Lessons from the thermodynamics of horizons} Having outlined the broad conceptual features, I will now move on to specifics. 
There are four lessons one can learn from putting together well-known features of horizon thermodynamics in an appropriate manner. \vskip 0.1in \subsection*{\textbf{\itshape Lesson 4: The temperature of horizons does not depend on the field equations of the theory and is just an indication that spacetimes, like matter, can be hot, but in an observer-dependent manner.}} \vskip 0.1in \noindent One can associate a temperature with any null surface that can act as a horizon for a class of observers, in any spacetime (including flat spacetime). This temperature is determined by the behaviour of the metric close to the horizon and has \textit{nothing to do with the field equations} (if any) which are obeyed by the metric. The simplest situation is that of Rindler observers in flat spacetime with acceleration $\kappa$, who will attribute a temperature $k_BT=(\hbar/c)(\kappa/2\pi)$ to the Rindler horizon --- which is just an $X=T$ surface in the flat spacetime having no special significance to the inertial observers. While this result is usually proved for an eternally accelerating observer, it also holds in an (appropriately) approximate sense for an observer with variable acceleration \cite{dawoodtp10}. In general, this result can be used to show that the vacuum state in a freely falling frame will appear to be a thermal state in the locally accelerated frame for high frequency modes if $\kappa^{-1}$ is smaller than the local radius of (spacetime) curvature. In the usual context of a bifurcation horizon that divides the spacetime into two causally disconnected regions $R$ and $L$, the global vacuum state $|{\rm vac}\rangle$ of a quantum field theory can be described by a vacuum functional $\langle {\rm vac} | \phi_L, \phi_R\rangle$ in terms of the field configurations $\phi_L$ in $L$ and $\phi_R$ in $R$. 
Using the Euclidean path integral representation for the ground state functional, one can express \cite{leeunruh} this functional in two different ways and obtain: \begin{equation} \langle {\rm vac} |\phi_L , \phi_R \rangle \propto \int_{T_E=0;\phi=(\phi_L,\phi_R)}^{T_E=\infty;\phi=(0,0)}{\cal D}\phi e^{-A}\ \propto \int_{\kappa t_E=0;\phi=\phi_R}^{\kappa t_E=\pi;\phi=\phi_L}{\cal D}\phi e^{-A} \propto \langle \phi_L|e^{-(\pi/\kappa )H_R}|\phi_R\rangle \end{equation} where $H_R$ is the Hamiltonian describing the dynamics in one of the wedges and $\kappa$ is the acceleration. Both path integrals cover the upper half of the Euclidean $X$--$T_E$ plane. The first path integral is in the global coordinate system (inertial, Kruskal, \dots) with the time $T_E$ running from $T_E=0$ to $T_E=\infty$, with the boundary conditions at both limits as indicated. The second path integral is in the coordinate system adapted to the region outside the horizon (Rindler, Schwarzschild, \dots), with the time coordinate behaving like a polar angle in the plane, going from $\kappa t_E=0$ (the right wedge) to $\kappa t_E=\pi$ (the left wedge), with the fields taking the appropriate boundary values in the two limits. Thus we get: \begin{equation} \langle {\rm vac} |\phi_L , \phi_R \rangle \propto \langle \phi_L|e^{-(\pi/\kappa )H_R}|\phi_R\rangle \end{equation} To describe the physics in the region outside the horizon, say in $R$, we trace out the modes $\phi_L$ beyond the horizon. This gives a thermal density matrix for the observables in the right wedge: \begin{equation} \rho ( \phi_R',\phi_R)\propto \int {\cal D}\phi_L \langle \phi_L , \phi_R' | {\rm vac}\rangle \langle {\rm vac} |\phi_L , \phi_R \rangle \propto \langle \phi'_R|e^{-(2\pi/\kappa )H_R}|\phi_R\rangle \end{equation} corresponding to the horizon temperature $T=\kappa/2\pi$. This result depends only on the near-horizon geometry having the approximate form of a Rindler metric and is independent of the field equations of the theory.
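To get a feel for the magnitudes involved, the temperature $k_BT=(\hbar/c)(\kappa/2\pi)$ quoted above is extraordinarily small for everyday accelerations. A minimal numerical sketch (the function name is mine; CODATA values are hard-coded so the snippet is self-contained):

```python
import math

# CODATA values, hard-coded for self-containment
hbar = 1.054571817e-34   # reduced Planck constant, J s
c    = 2.99792458e8      # speed of light, m/s
k_B  = 1.380649e-23      # Boltzmann constant, J/K

def unruh_temperature(kappa):
    """Davies-Unruh temperature k_B T = (hbar/c)(kappa/2 pi), in kelvin."""
    return hbar * kappa / (2 * math.pi * c * k_B)

# For an acceleration equal to Earth's surface gravity (~9.81 m/s^2),
# the associated temperature is only of order 1e-20 K.
T_earth = unruh_temperature(9.81)
```

This makes explicit why the thermality of the Rindler vacuum, though conceptually central, is unobservably small in laboratory settings.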
\subsection*{\textbf{\itshape Lesson 5: All thermodynamic variables are observer dependent.}} \vskip 0.1in An immediate consequence, not often emphasized, is that \textit{all} thermodynamic variables must become observer dependent if the vacuum acquires an observer-dependent temperature. A ``normal'' gaseous system with ``normal'' thermodynamic variables ($T$, $S$, $F$, etc.) must be considered as a highly excited state of the inertial vacuum. It is obvious that a Rindler observer will attribute to this highly excited state thermodynamic variables different from those an inertial observer will attribute. Thus thermal effects in the accelerated frame bring in \cite{tpdialogue,marolf} a new level of observer dependence even to \textit{normal} thermodynamics. One need not panic if variables like entropy now acquire an observer dependence and lose their absolute nature. \subsection*{\textbf{\itshape Lesson 6: In sharp contrast to temperature, the entropy of horizons depends on the field equations of gravity and cannot be determined by using just QFT in a background metric.}} \vskip 0.1in One would have expected that if integrating out certain field modes leads to a thermal density matrix $\rho$, then the entropy of the system should be related to the lack of information about the \textit{same} field modes and should be given by $S= - {\rm Tr}\ \rho \ln \rho$. This entropy, called entanglement entropy, (i) is proportional to the area of the horizon and (ii) is divergent without a cut-off \cite{entang-entropy}. Such a divergence makes the result meaningless, and thus we cannot attribute a unique entropy to the horizon using just QFT in a background metric.\footnote{In the literature, one often ``regularizes'' the expression for entanglement entropy by introducing a Planck scale cut-off by hand. This procedure lacks justification for two reasons. First, a free quantum field theory in flat spacetime should not require any cut-off to give meaningful results.
Second, in the conventional approach, there is no way a flat spacetime ($G=0$) quantum field theory can know anything about the Planck length. In fact, the divergence of the entanglement entropy and the need for a Planck scale cut-off indicate that there is no such thing as flat spacetime, just as there is no such thing as a classical, continuum solid \cite{tpentangle}.} That is, while the temperature of the horizon can be obtained through the study of a test-QFT in an external geometry, one cannot understand the entropy of the horizon by the same procedure. This is because, unlike the temperature, the \textit{entropy} associated with a horizon depends on the field equations of the theory, as we will now briefly review. Given the principle of equivalence (interpreted as gravity being spacetime geometry) and the principle of general covariance, one can still construct a wide class of theories of gravity. For example, if we take the action functional \begin{equation} A=\int d^Dx \sqrt{-g}\left[L(R^{ab}_{cd}, g^{ab})+L_{\rm matt}(g^{ab},\phi_A)\right] \end{equation} where $L_{\rm matt} $ is the matter Lagrangian (for some matter variables denoted symbolically by $\phi_A$) and vary the metric with appropriate boundary conditions, we get the field equations (see e.g., chapter 15 of \cite{gravitation}): \begin{equation} \mathcal{G}_{ab}=P_a^{\phantom{a} cde} R_{bcde} - 2 \nabla^c \nabla^d P_{acdb}- \frac{1}{2} L g_{ab} \equiv \mathcal{R}_{ab}- \frac{1}{2} L g_{ab}=\frac{1}{2}T_{ab} \end{equation} where $P^{abcd} \equiv (\partial L/\partial R_{abcd})$. A nice subclass of theories in which the field equations remain second order in the metric is obtained if we choose $L$ such that $\nabla_a P^{abcd}=0$.
The most general scalar functionals $L(R^{ab}_{cd}, g^{ab})$ satisfying this condition are specific polynomials in the curvature tensor which lead to the Lanczos-Lovelock\ models \cite{lovelock} with the field equations: \begin{equation} P_{ac}^{de} R_{de}^{bc} - \frac{1}{2} L \delta_{a}^{b}= \mathcal{R}_{a}^{b}- \frac{1}{2m} \mathcal{R}\delta_{a}^{b} =\frac{1}{2}T_{a}^{b}; \quad \mathcal{R}_{a}^{b} \equiv P_{ac}^{de} R_{de}^{bc}; \qquad \mathcal{R} = \mathcal{R}^a_a \label{scalarpr} \end{equation} The second form of the equation is valid for the $m$-th order Lanczos-Lovelock\ model, for which $\mathcal{R} = R^{abcd} (\partial L/\partial R^{abcd}) = mL$. In the simplest context of $m=1$ we take $ L= R/16\pi$ (with conventional normalization), leading to $P^{ab}_{cd}=(32\pi)^{-1} (\delta^a_c\delta^b_d-\delta^a_d\delta^b_c)$, as well as $\mathcal{R}^a_b = R^a_b/16\pi$, $\mathcal{G}^a_b = G^a_b/16\pi$, and one recovers Einstein's equations. The structure of the theory is essentially determined by the tensor $P^{ab}_{cd}$, which has the algebraic symmetries of the curvature tensor and is divergence-free in all its indices. In any such generally covariant theory, the infinitesimal coordinate transformation $x^a \to x^a + q^a$ leads to the conservation of a Noether current $J^a$ (which depends on $q^a$) given by: \begin{equation} J^a \equiv \left( 2\mathcal{G}^{a}_{b} q^b + Lq^a + \delta_{q}v^a \right) =2\mathcal{R}^{a}_{b} q^b+\delta_{q}v^a; \qquad \nabla_a J^a = 0. \label{current} \end{equation} where $\delta_{q}v^a$ represents the boundary term in the action which arises for a variation of the metric of the form $\delta g^{ab} = \nabla^a q^b + \nabla^b q^a$. Given $\nabla_a J^a = 0$, we can introduce an anti-symmetric tensor $J^{ab}$ by $J^a = \nabla_b J^{ab}$.
For the Lanczos-Lovelock\ models, one can determine $\delta_q v^a$ and show that $J^{ab}$ and $J^a$ can be expressed in the form \begin{equation} J^{ab} = 2 P^{abcd} \nabla_c q_d;\qquad J^a = 2 P^{abcd} \nabla_b \nabla_c q_d \label{noedef} \end{equation} The field equations of Lanczos-Lovelock\ models admit black hole solutions (with horizons) in asymptotically flat spacetime. Studying the physical processes occurring in such spacetimes, one can obtain an expression for the entropy of the horizon (called the Wald entropy \cite{wald}) which is closely related to the Noether current $J^a$ as follows: \begin{equation} S \equiv \beta\int d^{D-1}\Sigma_{a}\; J^{a}= \beta \int d^{D-2}\Sigma_{ab}\; J^{ab} = \frac{1}{4} \int_\mathcal{H}(32\pi\, P^{ab}_{cd})\epsilon_{ab}\epsilon^{cd} d\sigma \label{noetherint} \end{equation} where $\beta^{-1}=\kappa/2\pi$ is the horizon temperature and $J^a$ is the Noether current for $q^a = \xi^a$, where $\xi^a$ is the Killing vector corresponding to the time translation symmetry of the asymptotically static black hole solution. In the final expression the integral is over any $(D-2)$-dimensional spacelike cross-section of the Killing horizon, on which the norm of $\xi^a$ vanishes, with $\epsilon_{ab}$ denoting the bivector normal to the bifurcation surface. Thus the horizon entropy is given by an integral over the horizon surface of $P^{abcd}$, which we may call the \textit{entropy tensor} of the theory. Note that the Noether current $J^a$ multiplied by $\beta_{\rm loc}\equiv N\beta$, where $N$ is the lapse function, can be thought of as the entropy current density. In Einstein's theory, with $32\pi\, P^{ab}_{cd} = (\delta^a_c \delta^b_d - \delta^a_d \delta^b_c)$, the entropy in \eq{noetherint} is one quarter of the area of the horizon.
But in general, the entropy of the horizon is \textit{not} proportional to the area and depends on the theory.\footnote{This feature again shows that the entanglement entropy --- which is always proportional to the horizon area in the conventional approach and is independent of the field equations obeyed by the metric --- cannot be identified with the entropy of the Lanczos-Lovelock\ models without modifying the regularization procedure. In the emergent paradigm one can argue that such a modification is indeed required. Then, using a generalisation of the ideas described in ref.\cite{first}, one can possibly tackle this issue. I will not discuss this here; for more details, see ref. \cite{tpentangle}.} This dichotomous situation as regards temperature versus entropy is the first indication that the thermodynamics of the horizon, probed by QFT in an external gravitational field, is just the tip of an iceberg. As we will see, the emergent paradigm provides a better understanding of these features. \subsection*{\textbf{\itshape Lesson 7: The connection between horizon entropy and the conserved current arising from the diffeomorphism invariance demands deeper understanding.}} \vskip 0.1in Why should a current $J^a$, conserved due to the diffeomorphism invariance of the theory, have anything to do with a thermodynamical variable like the entropy of horizons in the theory? In the conventional approach, which views $x^a \to x^a + q^a$ as a relabeling of coordinates, this question has no answer. In contrast, if we take the `active' point of view, we notice that $x^a \to x^a + q^a$ also shifts (virtually) the location of null surfaces and thus the information accessible to specific observers. The connection with entropy arises due to the cost in gravitational entropy involved in the virtual displacements of null surfaces. This idea can be made more precise in terms of entropy balance at local Rindler horizons \cite{entdenspacetime}.
Let us choose any event $\mathcal{P}$ and introduce a local inertial frame (LIF) around it with Riemann normal coordinates $X^a=(T,\mathbf {X})$ such that $\mathcal{P}$ has the coordinates $X^a=0$ in the LIF. Let $k^a$ be a future directed null vector at $\mathcal{P}$, and align the coordinates of the LIF such that $k^a$ lies in the $X$--$T$ plane at $\mathcal{P}$. We next transform from the LIF to a local Rindler frame (LRF) with acceleration $\kappa$ along the $X$ axis. Let $\xi^a$ be the approximate Killing vector corresponding to translation in the Rindler time, such that the vanishing of $\xi^a\xi_a \equiv -N^2$ characterizes the location of the local horizon $\mathcal{H}$ in the LRF. We shall do all the computations on a time-like surface infinitesimally away from $\mathcal{H}$, with $N=$ constant, called a ``stretched horizon''. Let the time-like unit normal to the stretched horizon be $r_a$. Consider an infinitesimal displacement of a local patch of the stretched horizon in the direction of $r_a$, by an infinitesimal proper distance $\epsilon$, which will change the proper volume by $dV_{prop}=\epsilon\sqrt{\sigma}d^{D-2}x$, where $\sigma_{ab}$ is the metric in the transverse space. The flux of energy through the surface will be $T^a_b \xi^b r_a$, and the corresponding entropy flux can be obtained by multiplying the energy flux by $\beta_{\rm loc}=N\beta$, which corresponds to the properly redshifted, local, Tolman temperature. Hence the `loss' of matter entropy to the outside observer, because the virtual displacement of the horizon has engulfed some matter, is $\delta S_m=\beta_{\rm loc}\delta E=\beta_{\rm loc} T^{aj}\xi_a r_j dV_{prop}$. Recalling from \eq{noetherint} that $\beta_{\rm loc}J^a$ gives the gravitational entropy current, the change in the gravitational entropy is given by $\delta S_{\rm grav} \equiv \beta_{\rm loc} r_a J^a dV_{prop}$, where $J^a$ is the Noether current corresponding to the local Killing vector $\xi^a$, given by $J^a=2\mathcal{G}^a_b\xi^b+L\xi^a$. As the stretched horizon approaches the true horizon, it can be shown that $N r^a \to \xi^a$ and $\beta \xi^a \xi_a L \to 0$. Hence we get, in this limit: $\delta S_{\rm grav} \equiv \beta \xi_a J^a dV_{prop} = 2 \beta \mathcal{G}^{aj}\xi_a \xi_j dV_{prop}$. Comparing $\delta S_{\rm grav}$ and $\delta S_m$, we see that the field equations $2\mathcal{G}^a_b=T^a_b$ can be interpreted as the entropy balance condition $\delta S_{\rm grav}=\delta S_m$, thereby providing a direct thermodynamic interpretation of the field equations as local entropy balance in the local Rindler frame. In the emergent paradigm, the spacetime is analogous to a solid made of atoms, and $x^a\to x^a+q^a(x)$ is analogous to the deformation of an elastic solid \cite{tpijmp04}. When such a deformation leads to changes in accessible information --- as when one considers the virtual displacements of horizons --- it costs some amount of gravitational entropy, thereby providing a direct link between the transformation $x^a\to x^a+q^a(x)$ and spacetime entropy --- a link that is lacking in the conventional approach. We will say more about this in Sec. \ref{sec:entmax}. \section{Thermodynamic interpretation of field equations and action functionals} I stressed in Sec. \ref{sec:topdown} that for the top-down approach to be of use, we need to identify the `algebraic accidents' in the top level description which are usually taken for granted without a demand for explanation. I will briefly summarize three such issues in classical gravity, which can give us clues about the microscopic theory.
\subsection*{\textbf{\itshape Lesson 8: The gravitational field equations reduce to a thermodynamic identity on the horizon in a wide class of theories.}} \vskip 0.1in It can be shown \cite{tpdawoodgentds} that the field equations of any Lanczos-Lovelock\ model, when evaluated on a static solution of the theory which has a horizon, can be expressed in the form of a thermodynamic identity $TdS = dE_g + PdV$. Here $S$ is the correct Wald entropy of the horizon in the theory, $E_g$ is a geometric expression involving an integral of the scalar curvature of the sub-manifold of the horizon, and $PdV$ represents the work function of the matter source. The differentials $dS$, $dE_g$, etc. should be thought of as indicating the difference in $S$, $E_g$, etc. between two solutions in which the location of the horizon is infinitesimally displaced. This equality between the field equations on the horizon and the thermodynamic identity --- originally obtained \cite{tdsingr} for spherical horizons in Einstein's theory --- has now been demonstrated for an impressively wide class of models \cite{KSP}: stationary axisymmetric horizons and evolving spherically symmetric horizons in Einstein gravity, static spherically symmetric horizons and dynamical apparent horizons in Lanczos-Lovelock\ gravity, generic static horizons in Lanczos-Lovelock\ gravity, three dimensional BTZ black hole horizons, FRW cosmological models in various gravity theories, and even the case of Horava-Lifshitz gravity. This result is non-trivial in the sense that the field equation on the horizon does not look very ``thermodynamical'' at first sight.
For example, in the simplest context of a spherically symmetric horizon in Einstein's theory [with $-g_{00} = g_{11}^{-1} = f(r) $ with $f(a) =0$ determining the location of the horizon at $r=a$], the field equation on the horizon reduces to \begin{equation} \frac{c^4}{G}\left[{\kappa a\over c^2} - {1\over 2}\right] = 4\pi P a^2 \end{equation} where $\kappa = f'(a)/2$ is the surface gravity and $P$ is the pressure of the source. As I said, this equation does not seem to have any thermodynamics in it. However, if we multiply it by $da$, it can be re-written in the form: \begin{equation} \underbrace{\frac{\hbar} {c}\left(\frac{\kappa}{2\pi}\right) }_{\begin{minipage}[c]{2em} \vskip 0.1in {$k_BT$}\end{minipage}} \ \underbrace{\frac{c^3}{G\hbar}d\left( {1\over 4} 4\pi a^2 \right)}_{\begin{minipage}[c]{1em} \vskip 0.1in {$k_B^{-1}dS$}\end{minipage}} \ \underbrace{-\ {1\over 2}\frac{c^4 da}{G}}_{\begin{minipage}[c]{2em} \vskip 0.1in {$-dE_g$}\end{minipage}} = \underbrace{ P d \left( {4\pi \over 3} a^3 \right) }_{\begin{minipage}[c]{2em} \vskip 0.1in {$P\, dV$}\end{minipage}} \end{equation} The only extra input we needed was the expression for the horizon temperature in terms of the surface gravity, which required introducing $\hbar$ in both the numerator and the denominator. A similar miracle occurs in all the gravitational theories, much more general than Einstein's, in which the entropy is no longer proportional to the horizon area. As we discussed earlier, the temperature of the horizon knows \textit{nothing} about the field equations of the theory but the entropy does. It is therefore remarkable that one obtains the correct combination $TdS$ for a wide variety of theories, showing that the information about the theory is encoded in the entropy functional, exactly as it would be for a macroscopic body.
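The rewriting above can be checked mechanically. The following sketch (in units $G=c=\hbar=k_B=1$; the function name is mine) imposes the on-horizon field equation $\kappa a - 1/2 = 4\pi P a^2$ to fix the pressure, and then verifies that $T\,dS - dE_g - P\,dV$ vanishes identically for $T=\kappa/2\pi$, $S=\pi a^2$, $E_g=a/2$ and $V=(4\pi/3)a^3$:

```python
import math

def horizon_identity_residual(kappa, a):
    """Residual of T dS/da - dE_g/da - P dV/da for a spherical horizon
    at r = a, with P fixed by the on-horizon Einstein equation
    (units G = c = hbar = k_B = 1)."""
    P = (kappa * a - 0.5) / (4 * math.pi * a**2)   # kappa*a - 1/2 = 4 pi P a^2
    T = kappa / (2 * math.pi)                      # horizon temperature
    dS_da = 2 * math.pi * a                        # S = pi a^2 (area/4)
    dE_da = 0.5                                    # E_g = a/2
    dV_da = 4 * math.pi * a**2                     # V = (4 pi / 3) a^3
    return T * dS_da - dE_da - P * dV_da

# The residual vanishes for any surface gravity and horizon radius:
residual = max(abs(horizon_identity_residual(k, a))
               for k in (0.1, 0.5, 2.0) for a in (0.7, 1.0, 3.0))
```

The cancellation is algebraic: $T\,dS/da = \kappa a$, and the field equation supplies exactly $\kappa a - 1/2$ on the right hand side.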
There are significant differences between this identity $TdS = dE_g + PdV$, to which the field equations reduce, and the so-called Clausius relation $TdS = dE_m$ (used, for example, by Jacobson \cite{ted}), which need to be recognised: \begin{itemize} \item In addition to the obvious presence of the work term $PdV$, it should be stressed that the $E_m$ used in the Clausius relation $TdS = dE_m$ is related to the \textit{matter stress tensor}, while $E_g$ in $TdS = dE_g + PdV$ is a purely geometrical construct built out of the metric. The origin of these differences can be traced to the two different kinds of virtual displacements of the horizons considered in these two approaches to define the infinitesimal differences \cite{dawoodnew}. \item More importantly, \textit{while $TdS = dE_g + PdV$ holds in widely different contexts,} it has been found to be \textit{impossible} to generalize $TdS = dE_m$ beyond Einstein's theory without introducing additional assumptions (like dissipation), the physical meaning of which remains unclear. \end{itemize} Incidentally, while the Davies-Unruh temperature scales as $\hbar$, the entropy scales as $1/\hbar$ (coming from the inverse Planck area), thereby making $TdS$ independent of $\hbar$! This is reminiscent of the fact that in normal thermodynamics $T\propto 1/k_B$, $S\propto k_B$, making $TdS$ independent of $k_B$. In both cases, the effects due to the discrete microstructure (indicated by non-zero $\hbar$ or $k_B$) disappear in the continuum limit thermodynamics. Thermal phenomena require microstructure, but thermodynamical laws are independent of it! Similarly, we expect the thermodynamic description of spacetime to be useful and independent of the exact nature of the quantum gravity description. Any (``bottom-up'') model for quantum gravity which leads to horizon thermodynamics and gives the Davies-Unruh temperature for QFT in the semi-classical limit must be consistent with the (``top-down'') thermodynamic description, the two merging in the correct limit.
\subsection*{\textbf{\itshape Lesson 9: Holographic structure of gravitational action functionals finds a natural explanation in the thermodynamic interpretation of the field equations.}} \vskip 0.1in If the gravitational dynamics and horizon thermodynamics are so closely related, with the field equations becoming thermodynamic identities on the horizon, then the action functionals of the theory (from which we obtain the field equations) must contain information about this connection. This clue comes in the form of another unexplained algebraic accident related to the structure of the action functional, and it tells us something significant about the \textit{off-shell structure} of the theory. Gravity is the only theory known to us for which the natural action functional preserving the symmetries of the theory contains second derivatives of the dynamical variables but still leads to second order differential equations. Usually, this is achieved by separating out the terms involving the second derivatives of the metric into a surface term which is either ignored or whose variation is cancelled by a suitable counter-term. However, this leads to a serious conceptual mystery in the conventional approach when we recall the following two facts together: (a) The field equations can be obtained by varying the bulk term after ignoring (or cancelling with a counter-term) the surface term. (b) But if we evaluate the surface term on the horizon of any solution to the field equations of the theory, we obtain the entropy of the horizon! \textit{How does the surface term, which was discarded before the field equations were obtained, know about the entropy associated with a solution to those field equations?!} In the conventional approach we need to accept this as another `algebraic accident' without any explanation and, in fact, no explanation is possible within the standard framework.
The explanation lies in the fact that the surface and bulk terms of the Lagrangian are related in a specific manner, thereby duplicating the information about the horizon entropy \cite{ayan}. One can show that there exists a relation of the form: \begin{equation} \sqrt{-g}L_{\rm sur}=-\partial_a\left(g_{ij} \frac{\delta \sqrt{-g}L_{\rm bulk}}{\delta(\partial_ag_{ij})}\right) \label{surbulk} \end{equation} All Lanczos-Lovelock\ action functionals have this form \cite{TPParis}. In fact, this relation is crucial for an action with second derivatives of the dynamical variables to still lead to field equations which are only second order --- a feature shared by all the Lanczos-Lovelock\ models. It can be shown that this result will be true for actions that can be separated into a surface term and a bulk term, with the surface term being an integral over $\partial_a (q^A \pi^a_A)$, where $q^A$ are the dynamical variables and $\pi^a_A$ are the canonical momenta. This structure allows one to interpret all these action functionals, including the Einstein-Hilbert action, as providing the momentum space description (see p. 292 of \cite{gravitation}) of the theory. The duplication of information between the surface and bulk terms in \eq{surbulk} also allows one to obtain the full action \cite{tpPR} from the surface term alone using the entropic interpretation. In fact, in the Riemann normal coordinates around any event $\mathcal{P}$, the gravitational action reduces to a pure surface term, again showing that the dynamical content is actually stored on the boundary rather than in the bulk. We can also use this fact to relate the variation of the surface term to the $\mathcal{R}^{a}_{b}$ of the theory.
From \eq{current}, it follows that: \begin{eqnarray} \int_{\partial\mathcal{V}}d^{D-1}x\sqrt{h} n_a(\delta_qv^a) &=&\int_{\mathcal{V}}d^{D}x\sqrt{-g} \nabla_a(\delta_qv^a)\nonumber\\ &=&\int_{\mathcal{V}}d^{D}x\sqrt{-g} \nabla_a(2\mathcal{R}^{a}_{b} q^b) = \int_{\partial\mathcal{V}}d^{D-1}x\sqrt{h} n_a (2\mathcal{R}^{a}_{b} q^b) \end{eqnarray} Computing the corresponding variation of the matter action under the change $\delta g^{ab}=\nabla^aq^b+\nabla^bq^a$, one can construct a variational principle to obtain the field equations purely from the surface term \cite{TPsurfaceaction}. More importantly, since the variation of the surface term gives the change in the gravitational entropy, we see that $\mathcal{R}^{ab}$ essentially determines the gravitational entropy density of the spacetime. We will say more about this in Sec. \ref{sec:entmax}. \subsection*{\textbf{\itshape Lesson 10: Gravitational actions have surface and bulk terms because they give the entropy and energy of static spacetimes with horizons, adding up to make the action the free energy of the spacetime.}} \vskip 0.1in This provides yet another, direct, physical interpretation of the structure of the gravitational action functionals analyzed above. The result is most easily seen for any Lanczos-Lovelock\ model by writing the time component of the Noether current in \eq{current} for the Killing vector $q^a = \xi^a = (1,\mathbf{0})$ in the form: \begin{equation} L = \frac{1}{\sqrt{-g}} \partial_\alpha \left( \sqrt{-g}\, J^{0\alpha}\right) - 2 \mathcal{G}^0_0 \label{strucL} \end{equation} Only spatial derivatives contribute to the first term on the right hand side when the spacetime is static. Integrating $L\sqrt{-g}$ to obtain the action, it is easy to see (using \eq{noetherint}) that the first term gives the entropy and the second term can be interpreted as the energy \cite{sanvedtp}.
Finally, I stress again that the real importance of these results arises from the fact that they hold for all Lanczos-Lovelock\ models in an identical manner. \section{The Avogadro number of the spacetime} The results described in the previous sections suggest that there is a deep connection between horizon thermodynamics and gravitational dynamics. Because the spacetime can be heated up just like a body of gas, the Boltzmann paradigm (``If you can heat it, it has microstructure'') motivates the study of the microscopic degrees of freedom\ of the spacetime, exactly the way people studied gas dynamics \textit{before} they understood the atomic structure of matter. There exists, fortunately, an acid test of this paradigm, which it passes with flying colours. \subsection*{\textbf{\itshape Lesson 11: Gravitational field equations imply the law of equipartition $\Delta E=(1/2)k_BT\Delta N$ in any static spacetime, allowing the determination of the density of microscopic degrees of freedom. The result again displays holographic scaling.}} \vskip 0.1in Boltzmann's conjecture led to the equipartition law $\Delta E = (1/2) k_BT \Delta N$, relating the number density $\Delta N$ of microscopic degrees of freedom\ required to store an energy $\Delta E$ at temperature $T$, and to the determination of the Avogadro number of a gas. If our ideas are correct, we should be able to relate the $E$ and $T$ of a given spacetime to determine the number density of microscopic degrees of freedom\ of the spacetime. Remarkably enough, this can be done directly from the field equations \cite{surfaceprd}.
In a hot spacetime, Einstein's equations \textit{imply} the equipartition law \begin{equation} E = \frac{1}{2}k_B \int_{\partial\cal V} \frac{\sqrt{\sigma}\, d^2x}{L_P^2}\ \left\{\frac{N a^\mu n_\mu}{2\pi}\right\} \equiv \frac{1}{2} k_B \int_{\partial\cal V}dn\, T_{\rm loc} \end{equation} (where $T_{\rm loc} = (Na^\mu n_\mu /2\pi)$ is the local acceleration temperature and $\Delta n = \sqrt{\sigma}\, d^2 x/ L_P^2$), thereby allowing us to read off the number density of microscopic degrees of freedom. We again see that gravity is holographic in the sense that the number density $\Delta n$ scales as the proper area $\sqrt{\sigma}\, d^2 x$ of the boundary of the region rather than the volume. (In the case of a gas, we would have got an integral over the volume of the form $dV(dn/dV)$ rather than an area integral.) We also notice that, in Einstein's theory, the number density $(dn/dA)$ is a constant, with every Planck area contributing a single degree of freedom. The true elegance of this result again rests on the fact that it holds true for all Lanczos-Lovelock\ models! For a Lanczos-Lovelock\ model with an entropy tensor $P^{ab}_{cd}$ one gets the result \begin{equation} E=\frac{1}{2}k_B\int_{\partial\cal V} dn\, T_{\rm loc}; \qquad \frac{dn}{dA}=\frac{dn}{\sqrt{\sigma}d^{D-2}x}=32\pi P^{ab}_{cd}\epsilon_{ab}\epsilon^{cd} \label{diffeoeqn} \end{equation} where $\epsilon_{ab}$ is the binormal on the codimension-2 cross-section. All these gravitational theories are holographic, and the density of microscopic degrees of freedom\ encodes information about the theory through the entropy tensor.
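As a consistency check of the equipartition formula in the simplest case (a sketch in units $G=c=\hbar=k_B=L_P=1$; the helper function is mine): for a static observer at radius $r$ in the Schwarzschild geometry one has $N a^\mu n_\mu = M/r^2$, so integrating $(1/2)\,T_{\rm loc}\,dn$ over a sphere of any radius $r>2M$ should return the Komar energy $E=M$:

```python
import math

def equipartition_energy(M, r):
    """(1/2) * dn * T_loc over a sphere of radius r around a Schwarzschild
    mass M, in units G = c = hbar = k_B = L_P = 1."""
    Na = M / r**2                    # N a^mu n_mu for a static observer
    T_loc = Na / (2 * math.pi)       # local acceleration temperature
    dn = 4 * math.pi * r**2          # one degree of freedom per Planck area
    return 0.5 * dn * T_loc          # equals the Komar mass M for any r > 2M

# Independent of the radius of the enclosing sphere:
energies = [equipartition_energy(1.0, r) for r in (2.5, 10.0, 100.0)]
```

The $r^2$ growth of the area exactly compensates the $1/r^2$ fall-off of the local temperature, which is the holographic scaling described above.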
\textit{I consider these results as the most direct evidence for the emergent paradigm of gravity.} \subsection*{\textbf{\itshape Lesson 12: One can obtain the Wald entropy for a general theory directly from the law of equipartition.}} \vskip 0.1in The density of microscopic degrees of freedom\ obtained in \eq{diffeoeqn} suggests that the entropy associated with a \textit{general surface} in Lanczos-Lovelock\ models (or the entropy associated with a horizon in a more general theory) will be proportional to an integral over $P^{ab}_{cd}\epsilon_{ab}\epsilon^{cd}$. That is,\footnote{Note that in Einstein's theory, we get $\Delta n=\Delta A/L_P^2$. One usually considers this as arising from dividing the area $\Delta A$ into $\Delta n$ patches of area $L_P^2$. If we attribute $f$ internal states to each patch, then the total number of microstates $\Delta\Omega$ will be $\Delta\Omega=f^{(\Delta n)}$ and $\Delta S=\ln \Delta\Omega\propto \Delta n$, which is how the extensivity $\Delta S\propto \Delta n$ arises. In a more general theory, we replace $\Delta n=\Delta A/L_P^2$ by the expression in \eq{diffeoeqn}.} \begin{equation} S\propto\int_{\partial\mathcal{V}} dn \propto\int_{\partial\mathcal{V}}32\pi P^{ab}_{cd}\epsilon_{ab}\epsilon^{cd}\sqrt{\sigma}d^{D-2}x \label{waldentro} \end{equation} This is precisely the expression for the Wald entropy \cite{wald}, \textit{but we have obtained it using only the equipartition law, and as a local statement}! This comes about because the field equations have a specific relationship with the Noether current: the field equations imply the equipartition law, while the Noether current is related to the Wald entropy, thereby connecting all three. Let me indicate how this comes about by a more direct analysis. In static spacetimes, we have a Killing vector $\xi^a$ corresponding to time translation invariance. If we take $q^a=\xi^a$, the expression for the Noether current is quite simple and we get $J^a=2\mathcal{R}^a_b\xi^b$.
Using the relations $J^a \equiv \nabla_b J^{ab}$, $\xi^a=Nu^a$ and the antisymmetry of $J^{ab}$, one can easily show that: \begin{equation} D_\alpha (J^{b\alpha} u_b)=2N\mathcal{R}_{ab} u^a u^b \end{equation} This is a generalization of the relation $D_\mu(N a^\mu) = 4\pi \rho_{\rm komar}$ between the divergence of the acceleration and the Komar energy density in Einstein's theory, once again showing the role of the Noether potential $J^{ab}$ in the dynamics. The integral version of this relation for a region $\mathcal{V}$ bounded by $\partial\mathcal{V}$ is: \begin{equation} \int_{\partial\mathcal{V}}d^{D-2}x \sqrt{\sigma}(n_iu_bJ^{bi}) = \int_{\partial\mathcal{V}}d^{D-2}x \sqrt{\sigma} (N n_\alpha J^{\alpha 0})=\int_\mathcal{V} 2 N\mathcal{R}_{ab} u^a u^b \sqrt{h}\, d^{D-1}x \label{identity} \end{equation} where we have used $u_a=-N\delta_a^0$ and $J^{0\alpha}=-J^{\alpha 0}$. (The middle relation shows that the result is essentially an integral over $\partial\mathcal{V}$ of $J^{bi}d\sigma_{ib}$, where $d\sigma_{ib}=(1/2)n_{[i}u_{b]} \sqrt{\sigma}d^{D-2}x$.) Now consider a static spacetime with a bifurcation horizon $\mathcal{H}$ given by the surface $N^2 \equiv - \xi^a \xi_a =0$. The horizon temperature is $T \equiv\beta^{-1}= \kappa/2\pi$, where $\kappa$ is the surface gravity.
Since the Wald entropy of the horizon is essentially the Noether charge (multiplied by $\beta$), we will interpret \cite{entdenspacetime} the Noether charge \textit{density} $J_b u^b$ (multiplied by $\beta$) as the entropy density of the spacetime as perceived by the static observers with four-velocity $u^a= \xi^a/N$, so that the total entropy is \begin{equation} S_{\rm grav}[u^i] = \beta\int_\mathcal{V} J_b u^b \sqrt{h}\, d^{D-1}x \label{defs1} \end{equation} Using $J^a = 2 \mathcal{R}^a_b \xi^b$ and \eq{identity} and integrating the expression over a region bounded by the $N=$ constant surface, it is easy to see that \begin{equation} S=\frac{1}{2}\, \beta E \end{equation} which is a statement of equipartition, first obtained \cite{cqgpap} in 2004 in the form of the relation $E=2TS$ in Einstein's theory and generalized to all Lanczos-Lovelock\ models in ref.\cite{surfaceprd}. Further, if we take $\partial\mathcal{V}$ to be the horizon $\mathcal{H}$ and use $\beta T=1$, we get the horizon entropy to be \begin{equation} S=\frac{1}{4}\int_\mathcal{H} dn =\frac{1}{4}\int_{\mathcal{H}}32\pi P^{ab}_{cd}\epsilon_{ab}\epsilon^{cd}\sqrt{\sigma}d^{D-2}x \end{equation} which is the standard expression for the Wald entropy in a general theory, thereby justifying the choice in \eq{defs1}. This ansatz in \eq{defs1} also fixes the proportionality constant in \eq{waldentro} to be $1/4$. Our expressions for entropy and energy in differential form are given by $dE_{\rm komar} = (1/2) T_{\rm loc}(dn/dA) dA$, $dS= (1/4) (dn/dA) dA$. The resulting expression for $TdS$ is essentially equivalent to what we found earlier in the case of the first law, $TdS = dE_g + PdV$, applied to infinitesimal horizon displacements, when the differentials appearing in the two expressions are properly related.
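The chain of steps leading to the equipartition statement can be displayed schematically. This is only a sketch: the overall numerical factors are fixed by the normalisation chosen in \eq{defs1}, and the last proportionality invokes the Komar expression of Einstein's theory:

```latex
S_{\rm grav}
  = \beta\int_\mathcal{V} J_b u^b \sqrt{h}\, d^{D-1}x
  = \beta\int_\mathcal{V} 2N\mathcal{R}_{ab}u^a u^b \sqrt{h}\, d^{D-1}x
  % by \eq{identity}, this volume integral collapses to a surface term:
  = \beta\int_{\partial\mathcal{V}} d^{D-2}x\,\sqrt{\sigma}\,(n_i u_b J^{bi})
  \;\propto\; \beta E_{\rm komar}\,,
```

where the second equality uses $J^a = 2\mathcal{R}^a_b\xi^b$ with $\xi^a = Nu^a$, and the surface form makes the holographic character of the result manifest.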
\subsection*{\textbf{\itshape Lesson 13: Gravity is intrinsically quantum mechanical at all scales}} \vskip 0.1in The holographic nature of gravity which I have alluded to several times shows that area elements play a significant role in the microscopic description of the theory. This is directly related to the fact that the basic unit of the theory is the Planck \textit{area} $\mathcal{A}_P\equiv (G\hbar/c^3)$. Only by taking a square root, rather artificially, does one obtain the Planck \textit{length}. Classical gravity, in fact, should be described using $\mathcal{A}_P$ rather than using $G$, with Newton's law of gravity written in the form $F = (\mathcal{A}_P c^3/\hbar) (m_1 m_2/r^2)$. This has the crucial consequence that one cannot really take the $\hbar \to 0$ limit at fixed $\mathcal{A}_P$ and call it classical gravity. Gravity is intrinsically quantum mechanical at all scales \cite{tp2002} because of the microstructure of spacetime. As an aside, one may mention that, strictly speaking, normal matter is also intrinsically quantum mechanical at all scales due to its atomic structure. For example, one cannot study classical elasticity, say, by taking the strict, mathematical limit $\hbar \to 0$ in a crystal lattice, because such a limit will also make all the electrons in the atoms collapse! What we actually do is to keep $\hbar$ nonzero at subatomic scales, ensuring atomic stability, and take the $\hbar \to 0$ limit for the lattice interactions in the continuum limit to obtain the laws of elasticity. We need to do something analogous to obtain classical spacetime from quantum spacetime. \section{Entropy density of spacetime and its extremisation}\label{sec:entmax} So far we have been faithfully following the `top-down' philosophy of starting from known results in classical gravity and obtaining consequences which suggest an alternative paradigm.
For example, the results in the last section were obtained by starting from the field equations of the theory, rewriting them in the form of the law of equipartition and thus determining the density of microscopic degrees of freedom. Ultimately, however, we have to start from a microscopic theory and obtain the classical results as a consequence. We know that the thermodynamical behaviour of a normal system can be described by an extremum principle for a suitable potential (entropy, free energy, ...) treated as a functional of appropriate variables (volume, temperature, ...). If our ideas related to gravitational theories are correct, it must be possible to obtain the field equations by extremising a suitably defined thermodynamic potential. The fact that null surfaces block information suggests that this thermodynamic potential should be closely related to null surfaces in the spacetime. This expectation turns out to be correct \cite{aseemtp}. \subsection*{\textbf{\itshape Lesson 14: Gravitational field equations can be obtained from an alternative, thermodynamic, extremum principle.}} \vskip 0.1in Recall that `how gravity tells matter to move' can be determined by demanding the validity of special relativistic laws for \textit{all} locally inertial observers. Similarly, `how matter curves spacetime' can be determined by demanding that a suitable thermodynamic potential of the microscopic degrees of freedom\ of the spacetime should be an extremum for \textit{all} local Rindler observers. The physical content of this potential (free energy, entropy, enthalpy, ...) will depend on the context, but the argument works for any one of them.
The mathematics involves associating with every null vector field $n^a(x)$ in the spacetime a thermodynamic potential $\Im(n^a)$ which is quadratic in $n^a$ and given by: \begin{equation} \Im[n^a]= \Im_{grav}[n^a]+\Im_{matt}[n^a] \equiv- \left(4P_{ab}^{cd} \ensuremath{\nabla}_cn^a\ensuremath{\nabla}_dn^b - T_{ab}n^an^b\right) \,, \label{ent-func-2} \end{equation} where $P_{ab}^{cd}$ is a tensor having the symmetries of the curvature tensor and divergence-free in all its indices, and $T_{ab}$ is a divergence-free symmetric tensor. (Once we get the field equations, we can read off $T_{ab}$ as the matter energy-momentum tensor; the notation anticipates this result.) We also know that a $P^{abcd}$ with the assigned properties can be expressed as $P_{ab}^{cd}=\partial L/\partial R^{ab}_{cd}$ where $L$ is the Lanczos-Lovelock\ Lagrangian and $R_{abcd}$ is the curvature tensor \cite{rop}. This choice in \eq{ent-func-2} will also ensure that the equations resulting from the entropy extremisation do not contain any derivative of the metric of higher than second order. (More general possibilities exist which I will not discuss here.) We now demand that $\delta \Im/\delta n^a=0$ for the variation of all null vectors $n^a$, with the condition $n_an^a=0$ imposed by adding a Lagrange multiplier function $\lambda(x)g_{ab}n^an^b$ to $\Im[n^a]$.
Using \begin{equation} \frac{\partial \Im}{\partial ( \nabla_c n^a)}= (-8 P^{cd}_{ab} \nabla_d n^b);\quad \frac{\partial \Im}{ \partial n^a}=2[T_{ab}+\lambda(x)g_{ab}]n^b \label{sgravder1} \end{equation} the Euler-Lagrange equations reduce to: \begin{equation} \nabla_c\left[ -8 P^{cd}_{ab} \nabla_d n^b\right]=2[T_{ab}+\lambda(x)g_{ab}]n^b \end{equation} Because of the condition $\nabla_cP^{cd}_{ab}=0$ and the antisymmetry $P^{cd}_{ab}=-P^{dc}_{ab}$, we find that all the derivatives disappear on the left-hand side, and an elementary calculation gives: \begin{equation} \left(2\mathcal{R}^a_b - T{}^a_b-\lambda \delta^a_b\right) n_a=0\,, \label{ent-func-6} \end{equation} where $\mathcal{R}^a_b\equiv P_{bi}^{jk}R^{ai}_{jk}$. We demand that \eq{ent-func-6} should hold for all null vector fields $ n^a$. Using the generalized Bianchi identity and the condition $\nabla_aT^a_b=0$, we obtain \cite{rop,aseemtp} from \eq{ent-func-6} the equations \begin{equation} \mathcal{G}^a_b = \mathcal{R}^a_b-\frac{1}{2}\delta^a_b L = \frac{1}{2}T{}_b^a +\Lambda\delta^a_b \label{ent-func-71} \end{equation} where $\Lambda$ is a constant. These are precisely the field equations for gravity in a theory with Lanczos-Lovelock\ Lagrangian $L$ (with an undetermined cosmological constant $\Lambda$ which arises as an integration constant). The thermodynamical potential can be obtained by integrating the density $\Im[n^a]$ over a region of space or a surface etc., depending on the context. The matter part of $\Im$ is proportional to $T_{ab}n^an^b$, which picks out the combination $(\rho+p)$ for an ideal fluid, i.e., the enthalpy density. If multiplied by $\beta=1/T$, this reduces to the entropy density because of the Gibbs-Duhem relation.
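The algebraic collapse of the left-hand side can be made explicit. The following is a sketch (the final identification with $2\mathcal{R}^a_b$ depends on the curvature sign convention and index symmetries of $P^{cd}_{ab}$ used in the text):

```latex
% With \nabla_c P^{cd}_{ab}=0, the derivative passes through P:
\nabla_c\!\left(-8P^{cd}_{ab}\nabla_d n^b\right)
   = -8P^{cd}_{ab}\,\nabla_c\nabla_d n^b
% and the antisymmetry P^{cd}_{ab}=-P^{dc}_{ab} picks out the commutator:
   = -4P^{cd}_{ab}\,[\nabla_c,\nabla_d]\,n^b
   = -4P^{cd}_{ab}\,R^{b}{}_{icd}\,n^i \,,
% which is purely algebraic in n^a and, with the conventions of the text,
% combines with the right-hand side to give \eq{ent-func-6}.
```

The key point is that no derivatives of $n^a$ survive, so the extremum condition is an algebraic constraint on $n^a$ at each point.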
When the multiplication by $\beta$ can be reinterpreted in terms of an integration over $(0,\beta)$ of the time coordinate, the corresponding potential can be interpreted as entropy, and the integral over the space coordinates can be interpreted as the rate of entropy generation. [This was the interpretation provided in the earlier works \cite{rop,aseemtp}, but the result is independent of this interpretation as long as suitable boundary conditions can be imposed.] One can also think of $\Im[n^a]$ as an effective Lagrangian for a set of collective variables $n^a$ describing the deformations of null surfaces. In addition to providing a purely thermodynamic extremum principle for the field equations of gravity, the above approach also has the following attractive features. \begin{itemize} \item The extremum value of the thermodynamic potential, when computed on-shell for a solution with a static horizon, leads to the Wald entropy. This is a non-trivial consistency check on the approach because it was not designed to reproduce the Wald entropy. It also shows that when the field equations hold, the total entropy of a region $\mathcal{V}$ resides on its boundary $\partial\mathcal{V}$, which is yet another illustration of the holographic nature of gravity. \item In the semi-classical limit, one can show \cite{entropyquant} that the gravitational (Wald) entropy is quantized, with $S_{\rm grav}$ [on-shell] $=2\pi n$. In the lowest order Lanczos-Lovelock\ theory, the entropy is proportional to the area and this result leads to area quantization. More generally, it is the gravitational entropy that is quantized. The law of equipartition for the surface degrees of freedom is closely related to this entropy quantization because both arise from the existence of discrete structures on the surfaces in question. \item The entropy functional in \eq{ent-func-2} is invariant under the shift $T_{ab} \to T_{ab} + \rho_0 \gl ab$ which shifts the zero of the energy density.
This symmetry allows any low energy cosmological constant, appearing as a parameter in the variational principle, to be gauged away, thereby alleviating the cosmological constant problem to a great extent \cite{tpcc}. I will not discuss this issue here. \end{itemize} There is another way of interpreting \eq{ent-func-6} which is more in tune with the emergent perspective of gravity. Note that, while \eq{ent-func-6} holds for any vector field once the normalization condition is imposed through the Lagrange multiplier, the entropy was originally attributed to \textit{null} vectors, and hence it is natural to study \eq{ent-func-6} when $n^a = \ell^a$, the null normal of a null surface $\mathcal{S}$ in the spacetime, and project \eq{ent-func-6} onto the null surface. If $\w{\ell}$ is the normal to $\mathcal{S}$, then such a projection leads to the equations: \begin{equation} R_{mn}\ell^m q^n_{a}=8\pi T_{mn}\ell^m q^n_{a}; \quad R_{mn}\ell^m \ell^n=8\pi T_{mn}\ell^m \ell^n \label{albertpair} \end{equation} where $q_{ab} = \gl ab+ \ell_a k_b + \ell_b k_a$, with $k^a$ being another auxiliary null vector satisfying $\w{\ell} \w \cdot \w k = -1$. The metric $q_{ab}$, with $q_{ab} \ell^b =0 = q_{ab} k^b$, acts as a projector to $\mathcal{S}$ (see ref. \cite{dns} for details). It is possible to rewrite the first equation in \eq{albertpair} in the form of a Navier-Stokes equation, thereby providing a hydrodynamic analogy for gravity. This equation, known in the literature as the Damour-Navier-Stokes (DNS) equation \cite{damourthesis}, is usually derived by rewriting the field equations. Our analysis \cite{dns} provides an entropy extremisation principle for the DNS equation, which makes the hydrodynamic analogy natural and direct.
It may also be noted that the gravitational entropy density --- which is the integrand $\Im_{grav}\propto ( -P_{ab}^{cd} \ensuremath{\nabla}_c\ell^a\ensuremath{\nabla}_d\ell^b)$ in \eq{ent-func-2} --- obeys the relation: \begin{equation} \frac{\partial \Im_{\rm grav}}{\partial ( \nabla_c \ell^a)}\propto (- P^{cd}_{ab} \nabla_d \ell^b) \propto (\nabla_a \ell^c - \delta^c_a \nabla_i \ell^i) \label{sgravder} \end{equation} where the second relation holds in Einstein's theory. This term is analogous to the more familiar object $t^c_a = K^c_a - \delta^c_a K$ (where $K_{ab}$ is the extrinsic curvature) that arises in the (1+3) separation of Einstein's equations. (More precisely, the projection to 3-space leads to $t^c_a$.) This combination can be interpreted as a surface energy momentum tensor in the context of the membrane paradigm \cite{pricethorn} because $t_{ab}$ couples to $\delta h^{ab}$ on the boundary surface when we vary the gravitational action (see, e.g., eq.(12.109) of \cite{gravitation}). Equation~(\ref{sgravder}) shows that the entropy density of spacetime is directly related to $t^c_a$ and its counterpart in the case of a null surface. This term can also be interpreted as the canonical momentum conjugate to the spatial metric in the (1+3) context, and \eq{sgravder} shows that the entropy density leads to a similar structure. That is, the canonical momentum conjugate to the metric in the conventional approach and the momentum conjugate to $\ell^a$ in $S_{\rm grav}$ are essentially the same. Further, the \textit{functional} derivative of the gravitational entropy in \eq{ent-func-2} has the form, in any Lanczos-Lovelock\ model: \begin{equation} \frac{\delta \Im_{\rm grav}}{\delta \ell^a} \propto \mathcal{R}_{ab}\ell^b \propto J_a \end{equation} The previous discussion has shown that the current $J_a= 2\mathcal{R}_{ab} \ell^b$ plays a crucial role in interpreting the gravitational field equations as entropy balance equations.
In the context of local Rindler frames, when $\ell^a$ arises as a limit of the time-like Killing vector in the local Rindler frame, $J_a$ can be interpreted as the Noether (entropy) current associated with the null surface. In that case, the generalization of the two projected equations in \eq{albertpair} to Lanczos-Lovelock\ models will read \begin{equation} J_a \ell^a = \frac{1}{2} T_{ab} \ell^a \ell^b; \quad J_aq^a_m = \frac{1}{2}T_{ab}\ell^a q^b_m \end{equation} which relate the gravitational entropy density and flux to the matter energy density and momentum flux. (The second equation in the above set becomes the DNS equation in the context of Einstein's theory.) All these results, including the DNS equation, have a direct generalization to Lanczos-Lovelock\ models, which can be structured using the above concepts. We again see that all these ideas find a natural home in the emergent paradigm. \section{Concluding Comments} As promised, I have presented the internal evidence hidden in the structure of classical gravitational theories which suggests that gravity is an emergent phenomenon. This evidence brings out the holographic nature of gravity in more than one way (surface density of microscopic degrees of freedom, structure of gravitational action functionals, ...), provides a thermodynamic interpretation of the field equations (field equations reducing to $TdS=dE_g+PdV$ on the horizons, entropy balance for virtual displacements of horizons, equipartition), allows one to explicitly determine the number density of microscopic degrees of freedom and --- finally --- to derive the field equations from an entropy maximization procedure. The approach also clarifies several issues which have no explanation in the conventional procedure and links several ideas together (e.g., the relation between diffeomorphism invariance and the entropy of null surfaces). All of these work seamlessly in any Lanczos-Lovelock\ model, without our having to tinker with anything.
It is worthwhile to list explicitly the questions which have natural answers in the emergent paradigm but have to be treated as algebraic accidents in the conventional approach: \begin{enumerate} \item While the temperature of the horizon can be obtained using QFT in curved spacetime, the \textit{corresponding} entanglement entropy is divergent and meaningless. Why? \item The temperature of the horizon is independent of the field equations of gravity, but the entropy of the horizon depends explicitly on the field equations. What does this difference signify? \item The horizon entropy can be expressed in terms of the Noether current, which is conserved due to diffeomorphism invariance. Why should an infinitesimal coordinate transformation $x^a \to x^a + q^a$ have anything to do with a thermodynamic variable like entropy? \item Why do the gravitational field equations (which do not look very ``thermodynamical''!) reduce to $TdS = dE_g + PdV$ on the horizon, picking up the correct expression for $S$ for a wide class of theories? \item How is it that all gravitational action principles have a surface and a bulk term which are related in a specific manner (see \eq{surbulk})? Why do the surface and the bulk terms allow the interpretation as entropy and energy in static spacetimes? \item The field equations for gravity can be obtained from the bulk part of the action after discarding the surface term. But the surface term evaluated on the horizon of a solution gives the entropy of the horizon! How does the surface term --- which was discarded before the field equations were obtained --- know about the entropy of a solution? \item Why do the gravitational field equations reduce to the equipartition form, expressible as $\Delta E = (1/2) (k_B T) \Delta n$, allowing us to determine the analog of Avogadro's number for the spacetime? And why do the relevant microscopic degrees of freedom\ for a region reside on the boundary of the region?
\item Finally, why is it possible to derive the field equations of any diffeomorphism invariant theory of gravity by extremizing an entropy functional associated with the null surfaces in the spacetime, without treating the metric as a dynamical variable? \end{enumerate} Obviously, any alternative perspective, including the conventional approach, needs to provide answers to the above questions if it is to be considered a viable alternative to the emergent paradigm. The explanations need to work for all Lanczos-Lovelock\ models, not just for Einstein's theory. I think the emergent paradigm scores on all these counts and provides valuable insights into the deeper structure of the theory.
\section{Introduction} Chinese word segmentation (CWS) is a preliminary and important task for Chinese natural language processing (NLP). Currently, the state-of-the-art methods are based on statistical supervised learning algorithms and rely on large-scale annotated corpora, which are extremely expensive to build. Although there have been great achievements in building CWS corpora, they are somewhat incompatible due to different segmentation criteria. As shown in Figure \ref{tab:example}, given a sentence ``YaoMing reaches the final'', the two commonly-used corpora, PKU's People's Daily (PKU) \cite{Yu:2001a} and Penn Chinese Treebank (CTB) \cite{fei2000part}, use different segmentation criteria. In a sense, it is a waste of resources if we fail to fully exploit these corpora. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{example.jpg} \caption{Illustration of the different segmentation criteria.}\label{tab:example} \end{figure} Recently, some efforts have been made to exploit heterogeneous annotation data for Chinese word segmentation or part-of-speech tagging \cite{Jiang:2009,sun2012reducing,qiu2013joint,li-EtAl:2015:ACL-IJCNLP3,li2016fast}. These methods adopted stacking or multi-task architectures and showed that heterogeneous corpora can help each other. However, most of these models adopt shallow linear classifiers with discrete features, which makes it difficult to design the shared feature spaces and usually results in a complex model. Fortunately, recent deep neural models provide a convenient way to share information among multiple tasks \cite{collobert2008unified,luong2015multi,chenneural}. In this paper, we propose adversarial multi-criteria learning for CWS by integrating shared knowledge from multiple segmentation criteria.
Specifically, we regard each segmentation criterion as a single task and propose three different shared-private models under the framework of multi-task learning \cite{caruana1997multitask,Ben-David:2003}, where a shared layer is used to extract the criteria-invariant features, and a private layer is used to extract the criteria-specific features. Inspired by the success of the adversarial strategy on domain adaptation \cite{ajakan2014domain,ganin2016domain,bousmalis2016domain}, we further utilize the adversarial strategy to make sure the shared layer can extract the common underlying and criteria-invariant features, which are suitable for all the criteria. Finally, we exploit eight segmentation criteria on five simplified Chinese and three traditional Chinese corpora. Experiments show that our models effectively improve the performance of CWS. We also observe that traditional Chinese could benefit from incorporating knowledge from simplified Chinese. The contributions of this paper can be summarized as follows. \begin{itemize*} \item Multi-criteria learning is first introduced for CWS, in which we propose three shared-private models to integrate multiple segmentation criteria. \item An adversarial strategy is used to force the shared layer to learn criteria-invariant features, in which a new objective function is also proposed instead of the original cross-entropy loss. \item We conduct extensive experiments on eight CWS corpora with different segmentation criteria, which is by far the largest number of datasets used simultaneously. \end{itemize*} \section{General Neural Model for Chinese Word Segmentation} The Chinese word segmentation task is usually regarded as a character-based sequence labeling problem. Specifically, each character in a sentence is labeled as one of $\mathcal{L} = \{B, M, E, S\}$, indicating the begin, middle, or end of a word, or a word with a single character.
There are many prevalent methods for solving the sequence labeling problem, such as the maximum entropy Markov model (MEMM), conditional random fields (CRF), etc. Recently, neural networks have been widely applied to the Chinese word segmentation task for their ability to minimize the effort in feature engineering \cite{zheng2013deep,pei2014maxmargin,chen2015gated,chen2015long}. Specifically, given a sequence with $n$ characters $X = \{x_1, \dots, x_n\}$, the aim of the CWS task is to find the ground-truth label sequence $Y^* = \{y_1^*, \dots, y_n^*\}$: \begin{equation} Y^* = \argmax_{Y \in \mathcal{L}^n} p (Y | X), \label{eq:argmax} \end{equation} where $\mathcal{L}=\{B, M, E, S\}$. The general architecture of neural CWS can be characterized by three components: (1) a character embedding layer; (2) feature layers consisting of several classical neural networks; and (3) a tag inference layer. The role of the feature layers is to extract features; they could be either convolutional or recurrent neural networks. In this paper, we adopt bi-directional long short-term memory neural networks followed by CRF as the tag inference layer. Figure \ref{fig:gnm} illustrates the general architecture of CWS. \begin{figure} \centering \includegraphics[width=0.4\textwidth,height=0.25\textheight]{rnn_cws} \caption{General neural architecture for Chinese word segmentation.}\label{fig:gnm} \end{figure} \subsection{Embedding layer} In neural models, the first step is usually to map discrete language symbols to distributed embedding vectors. Formally, we look up the embedding vector for each character $x_i$ from the embedding matrix, $\mathbf{e}_{x_i} \in \mathbb{R}^{d_e}$, where $d_e$ is a hyper-parameter indicating the size of the character embedding. \subsection{Feature layers} We adopt bi-directional long short-term memory (Bi-LSTM) as feature layers.
While there are numerous LSTM variants, here we use the LSTM architecture of \cite{jozefowicz2015empirical}, which is similar to the architecture of \cite{graves2013generating} but without peep-hole connections. \paragraph{LSTM} LSTM introduces a gate mechanism and a memory cell to maintain long-range dependency information and avoid vanishing gradients. Formally, LSTM, with input gate $\mathbf{i}$, output gate $\mathbf{o}$, forget gate $\mathbf{f}$ and memory cell $\mathbf{c}$, can be expressed as: \begin{gather} \begin{align} \left[ \begin{array}{c} \mathbf{i}_i \\ \mathbf{o}_i \\ \mathbf{f}_i \\ \tilde{\mathbf{c}}_i \end{array}\right] &= \left[\begin{array}{c} \sigma\\ \sigma\\ \sigma\\ \phi \end{array}\right] \left( {{\mathbf{W}}_g}^\intercal \left[\begin{array}{c} \mathbf{e}_{x_i}\\ {\mathbf{h}}_{i-1} \end{array}\right] + {\mathbf{b}}_g \right), \\ \mathbf{c}_i &= \mathbf{c}_{i - 1} \odot \mathbf{f}_i + \tilde{\mathbf{c}}_i \odot \mathbf{i}_i, \\ {\mathbf{h}}_i &= \mathbf{o}_i \odot \phi( \mathbf{c}_i ), \end{align} \end{gather} where $\mathbf{W}_g \in \mathbb{R}^{(d_e + d_h) \times 4d_h}$ and $\mathbf{b}_g \in \mathbb{R}^{4d_h}$ are trainable parameters. $d_h$ is a hyper-parameter indicating the hidden state size. The functions $\sigma(\cdot)$ and $\phi(\cdot)$ are the sigmoid and tanh functions, respectively. \paragraph{Bi-LSTM} In order to incorporate information from both sides of the sequence, we use a bi-directional LSTM (Bi-LSTM) with forward and backward directions.
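The single-direction LSTM update above can be sketched concretely in a few lines of numpy. This is a minimal illustrative sketch (function and variable names are ours, not from the paper's implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(e_x, h_prev, c_prev, W_g, b_g):
    """One LSTM step: gates i, o, f and candidate cell c~ come from a
    single affine map of [e_x; h_prev], as in the block equation above.
    W_g has shape (d_e + d_h, 4*d_h); b_g has shape (4*d_h,)."""
    d_h = h_prev.shape[0]
    z = W_g.T @ np.concatenate([e_x, h_prev]) + b_g
    i = sigmoid(z[0 * d_h:1 * d_h])       # input gate
    o = sigmoid(z[1 * d_h:2 * d_h])       # output gate
    f = sigmoid(z[2 * d_h:3 * d_h])       # forget gate
    c_tilde = np.tanh(z[3 * d_h:4 * d_h]) # candidate cell
    c = c_prev * f + c_tilde * i          # c_i = c_{i-1} . f + c~ . i
    h = o * np.tanh(c)                    # h_i = o . tanh(c_i)
    return h, c
```

A Bi-LSTM then runs this step left-to-right and right-to-left and concatenates the two hidden states at each position.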
The update of each Bi-LSTM unit can be written precisely as follows: \begin{align} \mathbf{h}_i &= \overrightarrow{\mathbf{h}}_i \oplus {\overleftarrow{\mathbf{h}}_i},\\ &= \text{Bi-LSTM}(\mathbf{e}_{x_i}, \overrightarrow{\mathbf{h}}_{i-1}, \overleftarrow{\mathbf{h}}_{i+1}, \theta), \end{align} where $\overrightarrow{\mathbf{h}}_i$ and $\overleftarrow{\mathbf{h}}_i$ are the hidden states at position $i$ of the forward and backward LSTMs respectively; $\oplus$ is the concatenation operation; $\theta$ denotes all parameters in the Bi-LSTM model. \subsection{Inference Layer} After extracting features, we employ a conditional random field (CRF) \cite{lafferty2001conditional} layer to infer tags. In the CRF layer, $p (Y | X)$ in Eq (\ref{eq:argmax}) can be formalized as: \begin{equation} p (Y | X) = \frac{\Psi (Y | X)}{\sum_{Y^\prime \in \mathcal{L}^n} \Psi (Y^\prime | X)}. \end{equation} Here, $\Psi (Y | X)$ is the potential function, and we only consider interactions between two successive labels (first-order linear-chain CRF): \begin{gather} \Psi (Y | X) = \prod_{i = 2}^n \psi (X, i, y_{i-1}, y_i),\\ \psi (\mathbf{x}, i, y^\prime, y) = \exp(s(X, i)_{y} + \mathbf{b}_{y^\prime y}), \end{gather} where $\mathbf{b}_{y^\prime y} \in \mathbb{R}$ is a trainable parameter for the label pair $(y^\prime, y)$. The score function $s(X, i) \in \mathbb{R}^{|\mathcal{L}|}$ assigns a score to each label for tagging the $i$-th character: \begin{equation} s(X, i) = \mathbf{W}_s^\top \mathbf{h}_i + \mathbf{b}_s, \end{equation} where $\mathbf{h}_i$ is the hidden state of the Bi-LSTM at position $i$; $\mathbf{W}_s \in \mathbb{R}^{d_h \times |\mathcal{L}|}$ and $\mathbf{b}_s \in \mathbb{R}^{|\mathcal{L}|}$ are trainable parameters. \begin{figure*}[t!]
\centering \subfloat[Model-I]{ \includegraphics[width=0.225\textwidth]{Model-I} \label{fig:Model-I} } \hspace{3em} \subfloat[Model-II]{ \includegraphics[width=0.225\textwidth]{Model-II} \label{fig:Model-II} } \hspace{3em} \subfloat[Model-III]{ \includegraphics[width=0.225\textwidth]{Model-III} \label{fig:Model-III} } \caption{Three shared-private models for multi-criteria learning. The yellow blocks are the shared Bi-LSTM layer, while the gray blocks are the private Bi-LSTM layers. The yellow circles denote the shared embedding layer. The red information flow indicates the difference between the three models.}\label{fig:Three_Sharing_Models} \end{figure*} \section{Multi-Criteria Learning for Chinese Word Segmentation} Although neural models are widely used for CWS, most of them cannot deal with heterogeneous segmentation criteria simultaneously. Inspired by the success of multi-task learning \cite{caruana1997multitask,Ben-David:2003,liu2016deep-multitask,liu2016recurrent}, we regard the heterogeneous criteria as multiple ``related'' tasks, which could improve the performance of each other simultaneously with shared information. Formally, assume that there are $M$ corpora with heterogeneous segmentation criteria. We refer to $\mathcal{D}_m$ as corpus $m$ with $N_m$ samples: \begin{equation} \mathcal{D}_m = \{(X_i^{(m)},Y_i^{(m)})\}_{i=1}^{N_m}, \end{equation} where $X_i^{(m)}$ and $Y_i^{(m)}$ denote the $i$-th sentence and the corresponding label sequence in corpus $m$. To exploit the shared information between these different criteria, we propose three sharing models for the CWS task, as shown in Figure \ref{fig:Three_Sharing_Models}. The feature layers of these three models consist of a private (criterion-specific) layer and a shared (criterion-invariant) layer. The difference between the three models lies in the information flow between the task layer and the shared layer. Besides, all three models also share the embedding layer.
\subsection{Model-I: Parallel Shared-Private Model} In the feature layer of Model-I, we regard the private layer and the shared layer as two parallel layers. For corpus $m$, the hidden states of the shared layer and the private layer are: \begin{align} \mathbf{h}^{(s)}_i =& \text{Bi-LSTM}(\mathbf{e}_{x_i}, \overrightarrow{\mathbf{h}}^{(s)}_{i-1}, \overleftarrow{\mathbf{h}}^{(s)}_{i+1},\theta_s),\\ \mathbf{h}^{(m)}_i =& \text{Bi-LSTM}(\mathbf{e}_{x_i}, \overrightarrow{\mathbf{h}}^{(m)}_{i-1}, \overleftarrow{\mathbf{h}}^{(m)}_{i+1},\theta_m), \end{align} and the score function in the CRF layer is computed as: \begin{align} \noindent s^{(m)}(X, i) ={\mathbf{W}^{(m)}_s}^\top \begin{bmatrix} \mathbf{h}^{(s)}_i \\ \mathbf{h}^{(m)}_i \end{bmatrix} + \mathbf{b}^{(m)}_s,\label{eq:m1-3} \end{align} where $\mathbf{W}^{(m)}_s \in \mathbb{R}^{2d_h \times |\mathcal{L}|}$ and $\mathbf{b}^{(m)}_s \in \mathbb{R}^{|\mathcal{L}|}$ are criterion-specific parameters for corpus $m$. \subsection{Model-II: Stacked Shared-Private Model} In the feature layer of Model-II, we arrange the shared layer and the private layer in a stacked manner. The private layer takes the output of the shared layer as input. For corpus $m$, the hidden states of the shared layer and the private layer are: {\small\begin{align} \mathbf{h}^{(s)}_i& = \text{Bi-LSTM}(\mathbf{e}_{x_i}, \overrightarrow{\mathbf{h}}^{(s)}_{i-1}, \overleftarrow{\mathbf{h}}^{(s)}_{i+1},\theta_s),\label{eq:m2-1}\\ \mathbf{h}^{(m)}_i &= \text{Bi-LSTM}(\begin{bmatrix} \mathbf{e}_{x_i}\\ \mathbf{h}^{(s)}_i \end{bmatrix}, \overrightarrow{\mathbf{h}}^{(m)}_{i-1}, \overleftarrow{\mathbf{h}}^{(m)}_{i+1},\theta_m)\label{eq:m2-2} \end{align}} and the score function in the CRF layer is computed as: \begin{align} s^{(m)}(X,i) ={\mathbf{W}^{(m)}_s}^\top \mathbf{h}^{(m)}_i + \mathbf{b}^{(m)}_s, \end{align} where $\mathbf{W}^{(m)}_s \in \mathbb{R}^{2d_h \times |\mathcal{L}|}$ and $\mathbf{b}^{(m)}_s \in \mathbb{R}^{|\mathcal{L}|}$ are criterion-specific parameters for corpus $m$.
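For Model-I, the criterion-specific emission scores fed to the CRF layer come from a linear map of the concatenated shared and private states. A minimal numpy sketch (names are illustrative and not from the paper's code):

```python
import numpy as np

def score_model_I(h_shared, h_private, W_m, b_m):
    """s^{(m)}(X, i) for Model-I: concatenate the shared and private
    Bi-LSTM states at every position i, then apply the criterion-specific
    linear layer. Shapes: h_shared, h_private: (n, d_h);
    W_m: (2*d_h, L); b_m: (L,). Returns an (n, L) score matrix."""
    feats = np.concatenate([h_shared, h_private], axis=1)  # (n, 2*d_h)
    return feats @ W_m + b_m                               # (n, |L|)
```

Model-II differs only upstream: the private Bi-LSTM consumes the shared states, and the linear layer reads the private states alone.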
\subsection{Model-III: Skip-Layer Shared-Private Model} In the feature layer of Model-III, the shared layer and private layer are arranged in a stacked manner, as in Model-II. Additionally, we send the outputs of the shared layer to the CRF layer directly. Model-III can be regarded as a combination of Model-I and Model-II. For corpus $m$, the hidden states of the shared layer and private layer are the same as in Eq (\ref{eq:m2-1}) and (\ref{eq:m2-2}), and the score function in the CRF layer is computed in the same way as Eq (\ref{eq:m1-3}). \subsection{Objective function} The parameters of the network are trained to maximize the log conditional likelihood of the true labels on all the corpora. The objective function $\mathcal{J}_{seg}$ can be computed as: \begin{equation}\small \mathcal{J}_{seg}(\Theta^{m},\Theta^{s}) = \sum_{m=1}^{M} \sum_{i=1}^{N_m}\log p(Y^{(m)}_i|X^{(m)}_i;\Theta^{m},\Theta^{s}), \end{equation} where $\Theta^{m}$ and $\Theta^{s}$ denote all the parameters in the private and shared layers respectively. \section{Incorporating Adversarial Training for Shared Layer} \begin{figure} \centering \includegraphics[width=0.35\textwidth]{Model-IIIadv} \caption{Architecture of Model-III with the adversarial training strategy for the shared layer. The discriminator first averages the hidden states of the shared layer, then derives a probability over all possible criteria by applying a softmax operation after a linear transformation.}\label{fig:Adversary_structure} \end{figure} Although the shared-private model separates the feature space into shared and private spaces, there is no guarantee that sharable features do not exist in the private feature space, or vice versa. Inspired by the work on domain adaptation \cite{ajakan2014domain,ganin2016domain,bousmalis2016domain}, we hope that the features extracted by the shared layer are invariant across the heterogeneous segmentation criteria. Therefore, we jointly optimize the shared layer via adversarial training \cite{goodfellow2014generative}.
Therefore, besides the task loss for CWS, we additionally introduce an adversarial loss to prevent criterion-specific features from creeping into the shared space, as shown in Figure \ref{fig:Adversary_structure}. We use a criterion discriminator, which aims to recognize which criterion the sentence is annotated by, using only the shared features. Specifically, given a sentence $X$ with length $n$, we refer to $\mathbf{h}^{(s)}_X$ as the shared features for $X$ in one of the sharing models. Here, we compute $\mathbf{h}^{(s)}_X$ by simply averaging the hidden states of the shared layer: $\mathbf{h}^{(s)}_X = \frac{1}{n} \sum_{i}^n \mathbf{h}^{(s)}_{x_i}$. The criterion discriminator computes the probability $p(\cdot|X)$ over all criteria as: \begin{equation}\small p(\cdot|X;\Theta^d,\Theta^s)= \mathrm{softmax}(\mathbf{W}_d^\top \mathbf{h}^{(s)}_X + \mathbf{b}_d), \end{equation} where $\Theta^d$ indicates the parameters of the criterion discriminator, $\mathbf{W}_d \in \mathbb{R}^{2d_h \times M}$ and $\mathbf{b}_d \in \mathbb{R}^{M}$; $\Theta^s$ denotes the parameters of the shared layers. \subsection{Adversarial loss function} The criterion discriminator is trained to maximize the log-likelihood of the true criterion under the predicted distribution $p(\cdot|X)$: \begin{equation}\small \max_{\Theta^d} \mathcal{J}^1_{adv}(\Theta^d) = \sum_{m=1}^{M} \sum_{i=1}^{N_m}\log p(m|X^{(m)}_i;\Theta^d,\Theta^s). \end{equation} The adversarial loss aims to produce shared features such that the criterion discriminator cannot reliably predict the criterion from them. Therefore, we maximize the entropy of the predicted criterion distribution when training the shared parameters: \begin{equation}\small \max_{\Theta^s} \mathcal{J}^2_{adv}(\Theta^s) = \sum_{m=1}^{M} \sum_{i=1}^{N_m} \mathrm{H}\left(p(m|X^{(m)}_i;\Theta^d,\Theta^s)\right), \label{eq:J_ST} \end{equation} where $\mathrm{H}(p) = -\sum_i p_i\log p_i$ is the entropy of distribution $p$.
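A minimal sketch of the discriminator and the two adversarial objectives on a single toy sentence may help fix ideas. All quantities here (random stand-in parameters, toy sizes `M`, `d`, the chosen true criterion) are hypothetical, used only to show how the averaged shared feature, the discriminator's log-likelihood term, and the entropy term are computed.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))   # H(p) = -sum_i p_i log p_i

rng = np.random.default_rng(1)
M, d = 4, 6                         # M criteria; toy shared feature size
W_d = rng.normal(size=(d, M))       # discriminator parameters (Theta^d)
b_d = rng.normal(size=M)

h_tokens = rng.normal(size=(7, d))  # shared hidden states for one sentence
h_X = h_tokens.mean(axis=0)         # sentence feature: token average
p = softmax(W_d.T @ h_X + b_d)      # discriminator's criterion distribution

m_true = 2                          # index of the annotating criterion
J1 = np.log(p[m_true])  # discriminator ascends log p(m|X)
J2 = entropy(p)         # shared layer ascends H(p): be unpredictable

print(p.round(3), J1, J2)
```

At the adversarial optimum $J2$ approaches its maximum $\log M$, i.e. the discriminator's output is uniform and carries no criterion information.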
Unlike \cite{ganin2016domain}, we use an entropy term instead of the negative cross-entropy. \section{Training} Finally, we combine the task and adversarial objective functions: \begin{equation}\small \mathcal{J}(\Theta;\mathcal{D}) = \mathcal{J}_{seg}(\Theta^{m},\Theta^{s}) + \mathcal{J}^1_{adv}(\Theta^d) + \lambda \mathcal{J}^2_{adv}(\Theta^s), \end{equation} where $\lambda$ is the weight that controls the interaction of the loss terms and $\mathcal{D}$ is the training corpora. The training procedure optimizes the two discriminative classifiers alternately, as shown in Algorithm \ref{al:adv_multi}. We use Adam \cite{kingma2014adam} with minibatches to maximize the objectives. \begin{algorithm}[t!] \caption{Adversarial multi-criteria learning for the CWS task.} \label{al:adv_multi} \begin{algorithmic}[1] \FOR{$i=1$; $i<=n\_epoch$; $i++$} \STATE \emph{\# Train tag predictor for CWS} \FOR{$m=1$; $m<=M$; $m++$} \STATE \emph{\# Randomly pick data from corpus $m$} \STATE $\mathcal{B}=\{X, Y\}_{1}^{b_m} \in \mathcal{D}^m$ \STATE $\Theta^s$ += $\alpha \nabla_{\Theta^s} \mathcal{J}(\Theta; \mathcal{B})$ \STATE $\Theta^m$ += $\alpha \nabla_{\Theta^m} \mathcal{J}(\Theta; \mathcal{B})$ \ENDFOR \STATE \emph{\# Train criterion discriminator} \FOR{$m=1$; $m<=M$; $m++$} \STATE $\mathcal{B}=\{X, Y\}_{1}^{b_m} \in \mathcal{D}^m$ \STATE $\Theta^d$ += $\alpha \nabla_{\Theta^d} \mathcal{J}(\Theta; \mathcal{B})$ \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} Notably, when using the adversarial strategy, we first train for 2400 epochs (each epoch only trains on eight batches from different corpora), then we only optimize $\mathcal{J}_{seg}(\Theta^{m},\Theta^{s})$ with $\Theta^{s}$ fixed until convergence (early-stop strategy).
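The alternating schedule of Algorithm \ref{al:adv_multi} can be mimicked on a scalar toy problem. The quadratic objectives below are hypothetical stand-ins, not the paper's networks; they are chosen only so that the gradient-ascent updates have the same structure (shared and private parameters ascend the task term plus $\lambda$ times the adversarial term, then the discriminator takes its own ascent step).

```python
# Scalar toy of the alternating updates in Algorithm 1 (hypothetical
# quadratic objectives): J_seg = -(theta_s + theta_m - target)^2 is the
# task term, J1 = -(theta_d - theta_s)^2 is the discriminator term, and
# J2 = +(theta_d - theta_s)^2 is the adversarial term for the shared part.
alpha, lam, target = 0.05, 0.5, 1.0
theta_s, theta_m, theta_d = 0.0, 0.0, 3.0   # shared / private / discriminator

for epoch in range(2000):
    e = theta_s + theta_m - target     # task residual
    gap = theta_d - theta_s            # discriminator residual
    theta_s += alpha * (-2 * e - lam * 2 * gap)  # ascend J_seg + lam * J2
    theta_m += alpha * (-2 * e)                  # ascend J_seg
    theta_d += alpha * (-2 * gap)                # ascend J1 (chase theta_s)

# The task is solved while the discriminator ends up adding no pressure.
print(theta_s + theta_m, theta_d - theta_s)
```

With this step size the coupled updates are stable: the task residual and the discriminator gap both decay geometrically to zero.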
\section{Experiments} \begin{table*}[t]\small \setlength{\tabcolsep}{10pt} \centering \begin{tabular}{|c|c|c|rrrrrr|} \hline \multicolumn{3}{|c|}{Datasets} & Words & Chars & Word Types & Char Types &Sents & OOV Rate\\\hline \hline \multirow{4}{*}{\rotatebox{90}{Sighan05}} &\multirow{2}*{MSRA}&Train&2.4M& 4.1M& 88.1K& 5.2K& 86.9K&-\\ &&Test&0.1M& 0.2M& 12.9K& 2.8K& 4.0K&2.60\%\\ \cline{2-9} &\multirow{2}*{AS}&Train&5.4M& 8.4M& 141.3K& 6.1K& 709.0K&-\\ &&Test&0.1M& 0.2M& 18.8K& 3.7K& 14.4K&4.30\%\\ \hline \hline \multirow{12}{*}{\rotatebox{90}{Sighan08}} &\multirow{2}*{PKU}&Train& 1.1M& 1.8M& 55.2K& 4.7K& 47.3K&-\\ &&Test&0.2M& 0.3M& 17.6K& 3.4K& 6.4K&-\\ \cline{2-9} &\multirow{2}*{CTB}&Train&0.6M& 1.1M& 42.2K& 4.2K& 23.4K&-\\ &&Test&0.1M& 0.1M& 9.8K& 2.6K& 2.1K&5.55\%\\ \cline{2-9} &\multirow{2}*{CKIP}&Train&0.7M& 1.1M& 48.1K& 4.7K& 94.2K&-\\ &&Test& 0.1M& 0.1M& 15.3K& 3.5K& 10.9K&7.41\%\\ \cline{2-9} &\multirow{2}*{CITYU}&Train& 1.1M& 1.8M& 43.6K& 4.4K& 36.2K&-\\ &&Test&0.2M& 0.3M& 17.8K& 3.4K& 6.7K&8.23\%\\ \cline{2-9} &\multirow{2}*{NCC}&Train&0.5M& 0.8M& 45.2K& 5.0K& 18.9K&-\\ &&Test&0.1M& 0.2M& 17.5K& 3.6K& 3.6K&4.74\%\\ \cline{2-9} &\multirow{2}*{SXU}&Train&0.5M& 0.9M& 32.5K& 4.2K& 17.1K&-\\ &&Test&0.1M& 0.2M& 12.4K& 2.8K& 3.7K&5.12\%\\ \hline \end{tabular} \caption{Details of the eight datasets.}\label{tab:info_datasets} \end{table*} \subsection{Datasets} To evaluate our proposed architecture, we experiment on eight prevalent CWS datasets from SIGHAN2005 \cite{emerson2005second} and SIGHAN2008 \cite{moe2008fourth}. Table \ref{tab:info_datasets} gives the details of the eight datasets. Among these datasets, AS, CITYU and CKIP are in traditional Chinese, while the rest (MSRA, PKU, CTB, NCC and SXU) are in simplified Chinese. We use 10\% of each shuffled training set as the development set for all datasets.
\subsection{Experimental Configurations} For hyper-parameter configurations, we set both the character embedding size $d_e$ and the dimensionality of LSTM hidden states $d_h$ to 100. The initial learning rate $\alpha$ is set to 0.01, and the loss weight coefficient $\lambda$ to 0.05. Since the scale of each dataset varies, we use different training batch sizes: 512 for AS, 256 for MSRA, and 128 for the rest. We employ dropout on the embedding layer, keeping 80\% of inputs (a 20\% dropout rate). For initialization, we draw all parameters from a uniform distribution on $(-0.05, 0.05)$. We simply map traditional Chinese characters to simplified Chinese, and optimize the same character embedding matrix across datasets, which is pre-trained on a Chinese Wikipedia corpus using the word2vec toolkit \cite{mikolov2013efficient}. Following previous work \cite{chen2015long,pei2014maxmargin}, all experiments, including the baselines, use pre-trained character embeddings with bigram features.
\begin{table*}[t]\small \centering \begin{tabular}{|c|*{9}{c|}>{\columncolor[gray]{.8}}c|} \hline \multicolumn{2}{|c|}{Models}& MSRA &AS &PKU &CTB &CKIP &CITYU &NCC &SXU &Avg.\\ \hline \hline \multirow{4}*{LSTM} &P &95.13 &93.66 &93.96 &95.36 &91.85 &94.01 &91.45 &95.02 &93.81\\ &R &95.55 &94.71 &92.65 &85.52 &93.34 &94.00 &92.22 &95.05 &92.88\\ &F &95.34 &94.18 &93.30 &\textbf{95.44} &92.59 &94.00 &91.83 &95.04 &93.97\\ &OOV &63.60 &69.83 &66.34 &76.34 &68.67 &65.48 &56.28 &69.46 &67.00 \\ \hline \multirow{4}*{Bi-LSTM} &P &95.70 &93.64 &93.67 &95.19 &92.44 &94.00 &91.86 &95.11 &93.95 \\ &R &95.99 &94.77 &92.93 &95.42 &93.69 &94.15 &92.47 &95.23 &94.33 \\ &F &\textbf{95.84} &94.20 &93.30 &95.30 &\textbf{93.06} &\textbf{94.07} &92.17 &95.17 &\textbf{94.14} \\ &OOV &66.28 &70.07 &66.09 &76.47 &72.12 &65.79 &59.11 &71.27 &68.40 \\ \hline \multirow{4}*{Stacked Bi-LSTM} &P &95.69 &93.89 &94.10 &95.20 &92.40 &94.13 &91.81 &94.99 &94.03\\ &R &95.81 &94.54 &92.66 &95.40 &93.39 &93.99 &92.62 &95.37 &94.22\\ &F &95.75 &\textbf{94.22} &\textbf{93.37} &95.30 &92.89 &94.06 &\textbf{92.21} &\textbf{95.18} &94.12\\ &OOV &65.55 &71.50 &67.92 &75.44 &70.50 &66.35 &57.39 &69.69 &68.04\\ \hline \hline \multicolumn{11}{|l|}{Multi-Criteria Learning } \\ \hline \multirow{4}*{Model-I} &P &95.67 &94.44 &94.93 &95.95 &93.99 &95.10 &92.54 &96.07 &94.84 \\ &R &95.82 &95.09 &93.73 &96.00 &94.52 &95.60 &92.69 &96.08 &94.94 \\ &F &95.74 &94.76 &\textbf{94.33} &95.97 &\textbf{94.26} &95.35 &\textbf{92.61} &96.07 &\textbf{94.89} \\ &OOV &69.89 &74.13 &72.96 &81.12 &77.58 &80.00 &64.14 &77.05 &74.61 \\ \hline \multirow{4}*{Model-II} &P &95.74 &94.60 &94.82 &95.90 &93.51 &95.30 &92.26 &96.17 &94.79 \\ &R &95.74 &95.20 &93.76 &95.94 &94.56 &95.50 &92.84 &95.95 &94.94 \\ &F &95.74 &\textbf{94.90} &94.28 &95.92 &94.03 &95.40 &92.55 &96.06 &94.86 \\ &OOV &69.67 &74.87 &72.28 &79.94 &76.67 &81.05 &61.51 &77.96 &74.24 \\ \hline \multirow{4}*{Model-III} &P &95.76 &93.99 &94.95 &95.85 &93.50 &95.56 &92.17 
&96.10 &94.74 \\ &R &95.89 &95.07 &93.48 &96.11 &94.58 &95.62 &92.96 &96.13 &94.98 \\ &F &\textbf{95.82} &94.53 &94.21 &\textbf{95.98} &94.04 &\textbf{95.59} &92.57 &\textbf{96.12} &94.86 \\ &OOV &70.72 &72.59 &73.12 &81.21 &76.56 &82.14 &60.83 &77.56 &74.34 \\ \hline \hline \multicolumn{11}{|l|}{Adversarial Multi-Criteria Learning} \\ \hline \multirow{4}*{Model-I+ADV} &P &95.95 &94.17 &94.86 &96.02 &93.82 &95.39 &92.46 &96.07 &94.84 \\ &R &96.14 &95.11 &93.78 &96.33 &94.70 &95.70 &93.19 &96.01 &95.12 \\ &F &\textbf{96.04} &94.64 &\textbf{94.32} &\textbf{96.18} &\textbf{94.26} &\textbf{95.55} &\textbf{92.83} &96.04 &\textbf{94.98} \\ &OOV &71.60 &73.50 &72.67 &82.48 &77.59 &81.40 &63.31 &77.10 &74.96\\ \hline \multirow{4}*{Model-II+ADV} &P &96.02 &94.52 &94.65 &96.09 &93.80 &95.37 &92.42 &95.85 &94.84 \\ &R &95.86 &94.98 &93.61 &95.90 &94.69 &95.63 &93.20 &96.07 &94.99 \\ & F &95.94 &\textbf{94.75} &94.13 &96.00 &94.24 &95.50 &92.81 &95.96 &94.92 \\ &OOV &72.76 &75.37 &73.13 &82.19 &77.71 &81.05 &62.16 &76.88 &75.16\\ \hline \multirow{4}*{Model-III+ADV} &P &95.92 &94.25 &94.68 &95.86 &93.67 &95.24 &92.47 &96.24 &94.79 \\ &R &95.83 &95.11 &93.82 &96.10 &94.48 &95.60 &92.73 &96.04 &94.96 \\ &F &95.87 &94.68 &94.25 &95.98 &94.07 &95.42 &92.60 &\textbf{96.14} &94.88 \\ &OOV &70.86 &72.89 &72.20 &81.65 &76.13 &80.71 &63.22 &77.88 &74.44\\ \hline \end{tabular} \caption{Results of the proposed models on the test sets of eight CWS datasets. There are three blocks. The first block consists of three baseline models: LSTM, Bi-LSTM and stacked Bi-LSTM. The second block consists of our proposed three models without adversarial training. The third block consists of our proposed three models with adversarial training. Here, P, R, F, OOV indicate the precision, recall, F value and OOV recall rate respectively. The maximum F values in each block are highlighted for each dataset.
}\label{tab:res} \end{table*} \subsection{Overall Results} Table \ref{tab:res} shows the results of the proposed models on the test sets of the eight CWS datasets, organized in three blocks. (1) In the first block, we can see that performance is boosted by using Bi-LSTM, but cannot be improved further by merely increasing the depth of the network. In addition, although the F value of the LSTM model in \cite{chen2015long} is 97.4\%, they additionally incorporate an external idiom dictionary. (2) In the second block, our three proposed models based on multi-criteria learning boost performance. Model-I gains a 0.75\% improvement in average F-measure over the Bi-LSTM result (94.14\%). Only the performance on MSRA drops slightly. Compared to the baseline results (Bi-LSTM and stacked Bi-LSTM), the proposed models boost performance by exploiting information across these heterogeneous segmentation criteria. Although the criteria have different segmentation granularities, there is still some underlying shared information. For instance, MSRA and CTB treat the family name and given name as one token ``宁泽涛 (NingZeTao)'', whereas some other datasets, like PKU, regard them as two tokens, ``宁 (Ning)'' and ``泽涛 (ZeTao)''. The partial boundaries (before ``宁 (Ning)'' or after ``涛 (Tao)'') can be shared. (3) In the third block, we introduce adversarial training, which further boosts performance; Model-I is slightly better than Model-II and Model-III. The adversarial training tries to make the shared layer keep criterion-invariant features. For instance, as shown in Table \ref{tab:res}, when we use shared information without adversarial training, the performance on MSRA drops below the baseline. The reason may be that the shared parameters are biased toward the other segmentation criteria and introduce noisy features into the shared representation.
When we additionally incorporate the adversarial strategy, we observe that the performance on MSRA is improved and outperforms the baseline results. We also observe improvements on the other datasets. However, the boost from the adversarial strategy is not significant. The main reason might be that the three proposed sharing models already implicitly attempt to keep invariant features via the shared parameters and to learn discrepancies via the task layers. \subsection{Speed} To explore the convergence speed, we plot the results on the development sets over training epochs. Figure \ref{fig:dev} shows the learning curve of Model-I without the adversarial strategy. As shown in Figure \ref{fig:dev}, the proposed model makes progress gradually on all datasets. After about 1000 epochs, the performance becomes stable and convergent. We also test the decoding speed: our models process 441.38 sentences per second on average. As the proposed models and the baseline models (Bi-LSTM and stacked Bi-LSTM) have nearly the same complexity, all models are nearly equally efficient. However, the training time varies from model to model. For the models without adversarial training, training takes about 10 hours (the same as training stacked Bi-LSTM on the eight datasets), whereas it takes about 16 hours for the models with adversarial training. All experiments are conducted on hardware with an Intel(R) Xeon(R) CPU E5-2643 v3 @ 3.40GHz and an NVIDIA GeForce GTX TITAN X.
\begin{figure}[t] \centering \pgfplotsset{width=0.43\textwidth} \begin{tikzpicture} \begin{axis}[ xlabel={epochs}, ylabel={F-value(\%)}, legend entries={MSRA, AS, PKU, CTB, CKIP, CITYU, NCC, SXU}, mark size=1.0pt, ymajorgrids=true, grid style=dashed, legend pos= south east, legend style={font=\tiny,line width=.5pt,mark size=.5pt, legend columns=2, /tikz/every even column/.append style={column sep=0.5em}}, smooth, ] \addplot [red,mark=*] table [x index=0, y index=1] {dev.txt}; \addplot [blue,dashed,mark=square*] table [x index=0, y index=2] {dev.txt}; \addplot [green,dotted,mark=otimes*] table [x index=0, y index=3] {dev.txt}; \addplot [cyan,dashed,mark=diamond*] table [x index=0, y index=4] {dev.txt}; \addplot [pink,densely dashed,mark=triangle*] table [x index=0, y index=5] {dev.txt}; \addplot [black,solid,mark=+] table [x index=0, y index=6] {dev.txt}; \addplot [red,dotted,mark=*] table [x index=0, y index=7] {dev.txt}; \addplot [blue,dotted,mark=*] table [x index=0, y index=8] {dev.txt}; \end{axis} \end{tikzpicture} \caption{Convergence speed of Model-I without adversarial training on the development sets of the eight datasets.}\label{fig:dev} \end{figure} \begin{figure}[t!] \centering \subfloat[]{ \includegraphics[width=0.225\textwidth]{cityu_baseline_multi} \label{fig:baseline_multi} } \hspace{-0.5em} \subfloat[]{ \includegraphics[width=0.225\textwidth]{cityu_baseline_multiadv} \label{fig:baseline_multiadv} }% \caption{F-measure scores on the test set of the CITYU dataset. Each point denotes a sentence, with the (x, y) values denoting the F-measure scores of the two models, respectively. (a) is a comparison between Bi-LSTM and Model-I.
(b) is a comparison between Bi-LSTM and Model-I with adversarial training.}\label{fig:error_analysis} \end{figure} \subsection{Error Analysis} We further investigate the benefits of the proposed models by comparing the error distributions of single-criterion learning (the baseline Bi-LSTM) and multi-criteria learning (Model-I and Model-I with adversarial training), as shown in Figure \ref{fig:error_analysis}. According to the results, a large proportion of points lie above the diagonal lines in Figures \ref{fig:baseline_multi} and \ref{fig:baseline_multiadv}, which implies that performance benefits from integrating knowledge and complementary information from other corpora. As shown in Table \ref{tab:res}, on the test set of CITYU, the performance of Model-I and its adversarial version (Model-I+ADV) is boosted from 94.07\% to 95.35\% and 95.55\% respectively. In addition, we observe that the adversarial strategy is effective in preventing criterion-specific features from creeping into the shared space. For instance, the segmentation granularity of personal names often differs across heterogeneous criteria. With the help of the adversarial strategy, our models correct a large proportion of mistakes on personal names. Figure \ref{tab:error} lists examples from the 2333-th and 89-th sentences in the test sets of the PKU and MSRA datasets respectively. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{person} \caption{Segmentation cases of personal names.}\label{tab:error} \end{figure} \section{Knowledge Transfer} We also investigate whether the shared layers can be transferred to other related tasks or domains. In this section, we study the ability of knowledge transfer in two experiments: (1) simplified Chinese to traditional Chinese and (2) formal texts to informal texts.
\begin{table}[t]\small \centering \begin{tabular}{|c|*{3}{c|}>{\columncolor[gray]{.8}}c|} \hline Models&AS&CKIP&CITYU&Avg.\\ \hline Baseline(Bi-LSTM)&\textbf{94.20}& 93.06 &94.07&93.78\\ Model-I$^*$ &94.12&\textbf{93.24}&\textbf{95.20}&\textbf{94.19}\\ \hline \end{tabular} \caption{Performance on 3 traditional Chinese datasets. Model-I$^*$ means that the shared parameters are trained on 5 simplified Chinese datasets and are fixed for the traditional Chinese datasets. Here, we use Model-I without the adversarial training strategy.}\label{tab:traditional_chinese} \end{table} \begin{table*}\small \centering \begin{tabular}{|c|rrrrrr|} \hline Dataset & Words & Chars & Word Types & Char Types &Sents & OOV Rate\\\hline Train & 421,166 &688,743& 43,331& 4,502 &20,135&-\\\hline Dev & 43,697 &73,246 &11,187& 2,879& 2,052 &6.82\% \\\hline Test & 187,877& 315,865& 27,804 &3,911 &8,592 &6.98\%\\\hline \end{tabular} \caption{Statistical information of the NLPCC 2016 dataset.}\label{tb:datasetour} \end{table*} \begin{table}\small \centering \begin{tabular}{|c|*{4}{c|}} \hline Models & P & R & F & OOV\\ \hline Baseline(Bi-LSTM) &93.56 &94.33 &93.94 &70.75\\ Model-I$^*$ &\textbf{93.65} &\textbf{94.83} &\textbf{94.24} &\textbf{74.72} \\ \hline \end{tabular} \caption{Performance on the test set of the NLPCC 2016 dataset. Model-I$^*$ means that the shared parameters are trained on the 8 Chinese datasets (Table \ref{tab:info_datasets}) and are fixed for the NLPCC dataset. Here, we use Model-I without the adversarial training strategy.}\label{tab:res_nlpcc2016} \end{table} \subsection{Simplified Chinese to Traditional Chinese} Traditional Chinese and simplified Chinese are two similar writing systems with slight differences in character forms (e.g., multiple traditional characters may map to one simplified character). We investigate whether datasets in traditional Chinese and simplified Chinese can help each other.
Table \ref{tab:traditional_chinese} gives the results of Model-I on 3 traditional Chinese datasets with the help of 5 simplified Chinese datasets. Specifically, we first train the model on the simplified Chinese datasets, then we train on the traditional Chinese datasets independently with the shared parameters fixed. As we can see, the average performance is boosted by 0.41\% in F-measure (from 93.78\% to 94.19\%), which indicates that shared features learned from simplified Chinese segmentation criteria can help to improve performance on traditional Chinese. As with MSRA, since the AS dataset is relatively large (a training set of 5.4M tokens), the features learned by the shared parameters might be biased toward the other datasets and thus hurt performance on the large AS dataset. \subsection{Formal Texts to Informal Texts} \subsubsection{Dataset} We use the NLPCC 2016 dataset\footnote{\url{https://github.com/FudanNLP/NLPCC-WordSeg-Weibo}} \cite{qiu2016overview} to evaluate our model on micro-blog texts. The NLPCC 2016 data are provided by the shared task of the 5th CCF Conference on Natural Language Processing \& Chinese Computing (NLPCC 2016): Chinese Word Segmentation and POS Tagging for micro-blog Text. Unlike the popularly used newswire datasets, the NLPCC 2016 dataset is collected from Sina Weibo\footnote{\url{http://www.weibo.com/}}, and consists of informal micro-blog texts on various topics, such as finance, sports, and entertainment. Details of the dataset are shown in Table \ref{tb:datasetour}. \subsubsection{Results} Formal documents (like the eight datasets in Table \ref{tab:info_datasets}) and micro-blog texts are dissimilar in many aspects. Thus, we further investigate whether the formal texts can help to improve the performance on micro-blog texts. Table \ref{tab:res_nlpcc2016} gives the results of Model-I on the NLPCC 2016 dataset with the help of the eight datasets in Table \ref{tab:info_datasets}.
Specifically, we first train the model on the eight datasets, then we train on the NLPCC 2016 dataset alone with the shared parameters fixed. The baseline model is a Bi-LSTM trained on the NLPCC 2016 dataset alone. As we can see, the performance is boosted by 0.30\% in F-measure (from 93.94\% to 94.24\%), and the OOV recall rate is boosted by 3.97\%. This shows that the shared features learned from formal texts can help to improve the performance on micro-blog texts. \section{Related Works} There are many works on exploiting heterogeneous annotation data to improve various NLP tasks. \citet{Jiang:2009} proposed a stacking-based model which could train a model for one specific desired annotation criterion by utilizing knowledge from corpora with other heterogeneous annotations. \citet{sun2012reducing} proposed a structure-based stacking model to reduce the approximation error, which makes use of structured features such as sub-words. These models provide only unidirectional aid and also suffer from the error propagation problem. \citet{qiu2013joint} used a multi-task learning framework to improve the performance of POS tagging on two heterogeneous datasets. \citet{li-EtAl:2015:ACL-IJCNLP3} proposed a coupled sequence labeling model which could directly learn and infer two heterogeneous annotations. \citet{chao2015exploiting} also utilized multiple corpora using a coupled sequence labeling model. These methods adopt shallow classifiers and therefore suffer from the problem of manually defining shared features. Our proposed models use deep neural networks, which can easily share information through hidden shared layers. \citet{chenneural} also adopted neural network models for exploiting heterogeneous annotations, based on a neural multi-view model, which can be regarded as a simplified version of our proposed models obtained by removing the private hidden layers.
Unlike the above models, we design three shared-private architectures and keep the shared layer extracting criterion-invariant features by introducing adversarial training. Moreover, we fully exploit eight corpora with heterogeneous segmentation criteria to model the underlying shared information. \section{Conclusions \& Future Works} In this paper, we propose adversarial multi-criteria learning for CWS, fully exploiting the underlying shared knowledge across multiple heterogeneous criteria. Experiments show that our three proposed shared-private models are effective in extracting shared information and achieve significant improvements over single-criterion methods. \section*{Acknowledgments} We appreciate the contributions of Jingjing Gong and Jiacheng Xu. Besides, we would like to thank the anonymous reviewers for their valuable comments. This work is partially funded by the National Natural Science Foundation of China (No. 61532011 and 61672162) and the Shanghai Municipal Science and Technology Commission (No. 16JC1420401).
\section{Motivation} The transverse single-spin asymmetry, $A_N$, is an observable that probes the spin structure of the proton. It is defined via \begin{equation} A\left(\phi\right)=\frac {d\sigma^{\uparrow}\left(\phi\right)-d\sigma^{\downarrow}\left(\phi\right)} {d\sigma^{\uparrow}\left(\phi\right)+d\sigma^{\downarrow}\left(\phi\right)} =A_N\cos\phi, \label{eqAN} \end{equation} where $d\sigma^{\uparrow(\downarrow)}\left(\phi\right)$ is a differential cross section, {\it e.g.}, for $\pi^0$ production, with azimuthal angle $\phi$, from a spin-up(down) proton $p^{\uparrow(\downarrow)}$ scattering off an unpolarized proton. The spin asymmetry $A\left(\phi\right)$ is modulated by $\cos\phi$, and the amplitude is denoted by $A_N$. If $\phi=0$ represents leftward $\pi^0$ production, then a positive $A_N$ indicates spin-up(down) proton scattering favors producing $\pi^0$s to the left(right). $A_N$ for forward $\pi^0$s rises with Feynman-$x$ and is independent of center-of-mass energy $\sqrt{s}$ \cite{xF1,xF2}; moreover, $A_N$ is systematically larger for isolated $\pi^0$s than for those not as isolated \cite{heppelmannAN,mondalAN}. Several models have been proposed to explain the origin of this large $A_N$ \cite{sivers1,sivers2,collins,twist3}, and although the most promising of these involves a novel twist-3 fragmentation process \cite{twist3}, the origin of the $\pi^0$-isolation dependence remains unclear. A possible channel for isolated $\pi^0$ production is the $p^{\uparrow}p\to p\pion X$ process, as shown schematically in the left panel of figure \ref{fig1}. The forward polarized proton $p^{\uparrow}$ scatters off the backward proton $p$; the forward proton is deflected slightly with the production of a forward $\pi^0$, while the backward proton fragments into remnants denoted by $X$. 
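As a numerical illustration of Eq.~(\ref{eqAN}), the toy Monte Carlo below injects a known $A_N$ into spin-sorted yields and recovers it from a least-squares fit of the $\cos\phi$ modulation. The yields, binning, and injected value are hypothetical choices for the sketch, not the measurement.

```python
import numpy as np

# Inject a known A_N into spin-up/down yields following the asymmetry
# definition, then recover it by fitting the cos(phi) amplitude.
rng = np.random.default_rng(2)
A_N_true = 0.05
phi = np.linspace(-np.pi, np.pi, 12, endpoint=False) + np.pi / 12  # bin centers

N0 = 200_000                                   # mean unpolarized yield per bin
N_up = rng.poisson(N0 * (1 + A_N_true * np.cos(phi)))
N_dn = rng.poisson(N0 * (1 - A_N_true * np.cos(phi)))

A_phi = (N_up - N_dn) / (N_up + N_dn)          # raw asymmetry A(phi)
c = np.cos(phi)
A_N_fit = np.sum(A_phi * c) / np.sum(c * c)    # amplitude of the cos(phi) term
print(A_N_fit)
```

With these toy statistics the fitted amplitude reproduces the injected value well within its statistical uncertainty.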
By energy conservation, the sum of the deflected proton and forward $\pi^0$ energies is equal to or less than the incident proton energy, while the observed $\pi^0$ and proton transverse momentum sum should balance that of $X$. \begin{figure}[t] \centerline{\includegraphics[width=0.7\textwidth]{fig1.pdf}} \caption{ Left: schematic of $p^{\uparrow}p\to p\pion X$. Right: schematic of detectors, the Forward Meson Spectrometer (FMS) for the $\pi^0\to\gamma\gamma$ (red dashed arrows) and the Roman Pots (RP) for the proton (blue solid arrow). } \label{fig1} \end{figure} Further study is needed to understand the mechanism underlying the $p^{\uparrow}p\to p\pion X$ process, and especially its spin dependence. One possible model assumes the $p^{\uparrow}$ fluctuates into a $p+\pi^0$ state, with the $\pi^0$ in the proton periphery; if the $\pi^0$ scatters off another proton such that the $p+\pi^0$ state separates, then the $\pi^0$ could scatter with a moderate $p_T$, while the proton recoils at near-beam rapidity. The proton angular momentum in the peripheral region is thought to be dominated by orbital angular momentum rather than by parton spin \cite{weiss}; assuming the orbital angular momentum of the peripheral $\pi^0$ correlates with the proton spin, measurements of spin asymmetries in the $p^{\uparrow}p\to p\pion X$ process could be sensitive to the proton's peripheral angular momentum. \section{Event Selection and Kinematics} The $p^{\uparrow}p\to p\pion X$ process has recently been observed at STAR in transversely-polarized proton-proton scattering at $\sqrt{s}=200$ GeV during the 2015 RHIC run. The $\pi^0$ is measured with the Forward Meson Spectrometer (FMS), a lead-glass electromagnetic calorimeter subtending the forward region $2.65<\eta<3.9$ \cite{aLL}, and the deflected proton with the Roman Pots (RP), hodoscopic silicon-strip trackers downstream of the FMS, at near-beam rapidity \cite{rp1,rp2}.
The right panel of figure \ref{fig1} shows the detectors, with overlaying $\pi^0\to\gamma\gamma$ and proton trajectories. The $\pi^0$s were selected from each event's highest-energy photon pair, with a transverse momentum $p_T$ above the trigger threshold and energy $E_1+E_2>12$ GeV. The invariant mass was constrained to the $\pi^0$ mass region and the photons' energy imbalance to $\left|E_1-E_2\right|/\left(E_1+E_2\right)<0.8$. The proton was required to be detected in at least 7 of the 8 available silicon tracking planes, within geometric acceptance cuts, along with a veto on activity in the RPs in the backwards beam direction. The selected events included a large contribution from accidental coincidences, for example, two collisions occurring in a single proton bunch crossing, where one collision sent a $\pi^0$ to the FMS while the second one was elastic, sending a proton to the RPs. For many of these accidental coincidences, the sum of the $\pi^0$ and proton energies, $E_{sum}:=\xh{E}+\xp{E}$, is greater than the 100 GeV incident proton energy, which would violate energy conservation had the proton and $\pi^0$ originated from the same collision. The Beam Beam Counters (BBC), scintillators in both the forward and backward directions subtending $2.1<|\eta|<5$, were used with cuts set to reduce the level of accidental coincidences while minimizing the loss of $p^{\uparrow}p\to p\pion X$ candidates. Moreover, evidence of hits in the backward BBC as well as in the central-rapidity Time Of Flight (TOF) detector was seen for all $p^{\uparrow}p\to p\pion X$ events, indicating breakup of the backward-going proton. The left panel of figure \ref{fig2} shows a distribution of $E_{sum}$, and the right panel shows $\xp{E}$ plotted on the vertical axis versus $\xh{E}$ on the horizontal. 
The peak at $E_{sum}=100$ GeV represents the $p^{\uparrow}p\to p\pion X$ signal region, since the incident proton has an energy of 100 GeV and, by energy conservation, nothing else scattered in the forward direction; it corresponds to the region between the dashed lines in the right panel. The width of the 100 GeV $E_{sum}$ peak is dominated by the FMS energy resolution, and an event selection of $90<E_{sum}<105$ GeV was used for the asymmetry analysis. Since the RPs were designed to see elastic and diffractive-like events, the $\xp{E}$ distribution has a large peak at $\xp{E}=100$ GeV, which manifests as a band that spans the full $\xh{E}$ range. These events, along with any others with $E_{sum}$ above the $p^{\uparrow}p\to p\pion X$ signal region, are accidental coincidences, and their $E_{sum}$ distribution likely extends to low $E_{sum}$ as the dominant source of background under the $p^{\uparrow}p\to p\pion X$ peak. The aforementioned BBC cut was tuned to minimize the accidental coincidence background and maximize the $p^{\uparrow}p\to p\pion X$ signal purity. \begin{figure}[t] \centerline{\includegraphics[width=\textwidth]{fig2.pdf}} \caption{ Left: distribution of summed $\pi^0$ and proton energies, $E_{sum}$, shown with the $p^{\uparrow}p\to p\pion X$ selection region. Right: proton energy on the vertical axis plotted against $\pi^0$ energy; the region between the dashed lines is the $p^{\uparrow}p\to p\pion X$ selection region. } \label{fig2} \end{figure} The resulting events have the following kinematics: the $\pi^0$ and proton transverse momenta respectively span $1<p_{T,\pi}<4$ GeV/$c$ and $0.1<p_{T,p}<0.45$ GeV/$c$, while their energies span $12<\xh{E}<35$ GeV and $68<\xp{E}<90$ GeV. For about $2/3$ of the events, the $\pi^0$ and proton are observed back-to-back, with azimuthal angles $\xh{\phi}$ and $\xp{\phi}$ such that $\Delta\phi:=\xh{\phi}-\xp{\phi}\sim\pi$.
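The role of the $E_{sum}$ cut in suppressing accidental coincidences can be illustrated with a toy simulation. The smearing width and the assumed accidental-proton spectrum below are hypothetical choices for illustration, not the measured distributions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sig, n_acc = 5000, 5000

# Signal: pi0 + proton from one collision conserve the 100 GeV beam energy,
# smeared by an assumed ~3 GeV resolution (dominated by the FMS).
E_pi_sig = rng.uniform(12, 35, n_sig)
E_sum_sig = E_pi_sig + (100 - E_pi_sig) + rng.normal(0, 3, n_sig)

# Accidentals: pi0 and proton from different collisions are uncorrelated.
# Assumed spectrum: 70% elastic-like protons near 100 GeV, 30% softer tail.
E_pi_acc = rng.uniform(12, 35, n_acc)
E_p_acc = np.concatenate([rng.normal(100, 2, int(0.7 * n_acc)),
                          rng.uniform(60, 100, n_acc - int(0.7 * n_acc))])
E_sum_acc = E_pi_acc + E_p_acc

in_window = lambda e: (e > 90) & (e < 105)     # the 90-105 GeV selection
eff_sig = in_window(E_sum_sig).mean()          # signal efficiency
eff_acc = in_window(E_sum_acc).mean()          # accidental leakage
purity = eff_sig * n_sig / (eff_sig * n_sig + eff_acc * n_acc)
print(eff_sig, eff_acc, purity)
```

In this toy, the window keeps most of the signal while rejecting the elastic-like accidentals outright, since their $E_{sum}$ exceeds the beam energy by construction.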
While the FMS spans the full $2\pi$ azimuth, the RP silicon tracking planes are positioned above and below the beam, and $\xp{\phi}\sim0$ and $\xp{\phi}\sim\pm\pi$, respectively left and right, are outside the RP acceptance. There is a further limit on $\xp{\phi}$, since the RPs are positioned downstream of a RHIC dipole magnet that bends the outgoing beam to the left. This magnet is tuned to bend beam-energy protons appropriately, so any scattered proton with $\xp{E}\sim100$ GeV is likely to pass within the horizontal extent of the RPs. The $p^{\uparrow}p\to p\pion X$ events, however, have protons with $\xp{E}<90$ GeV, which are bent more leftward than the 100 GeV protons. Therefore the azimuthal acceptance is biased toward rightward-scattered protons: $\pi/2<|\xp{\phi}|<\pi$ for $90\%$ of the events. Despite this bias, it is still possible to analyze spin asymmetries which depend on both $\xh{\phi}$ and $\xp{\phi}$; an upgraded RP system is required to characterize $p^{\uparrow}p\to p\pion X$ events with full proton azimuthal acceptance. \section{Asymmetries} Spin asymmetries of the $p^{\uparrow}p\to p\pion X$ process can be modulated by two possible azimuthal angles: $\xh{\phi}$ and $\xp{\phi}$. In general, asymmetries and cross sections can depend on the incident $p^{\uparrow}$ momentum vector $\vec{Z}$, the observed $\pi^0$ and proton momentum vectors, respectively $\vec{\Pi}$ and $\vec{P}$, and the $p^{\uparrow}$ spin pseudovector $\vec{S}$ with spin projection $s=\pm\hbar/2$. Physically allowed terms must be Lorentz invariant and parity conserving, {\it i.e.} scalar, which can be formed by geometric products of momenta and spin. Asymmetry contributions must also depend on spin $s$ and be invariant under rotations. For inclusive $\pi^0$ production, the scalar $\left(\vec{Z}\times\vec{\Pi}\right)\cdot \vec{S}\propto s\cos{\xh{\phi}}$ represents the $\pi^0$ transverse single-spin asymmetry $A_N$ of equation \ref{eqAN}. 
In $p^{\uparrow}p\to p\pion X$, the additional proton momentum allows for the construction of scalars which depend on both $\xp{\phi}$ and $\xh{\phi}$. Letting $\vec{L}_{\pi}:=\vec{Z}\times\vec{\Pi}$ and $\vec{L}_p:=\vec{Z}\times \vec{P}$, a possible scalar that satisfies the aforementioned requirements and depends on both $\xp{\phi}$ and $\xh{\phi}$ is \begin{equation} \left(\vec{L}_{\pi}\cdot\vec{L}_p\right) \left(\vec{L}_p\cdot \vec{S}\right) \propto s\cos\phip\cos\delphi, \label{eqAsym} \end{equation} which represents the transverse single-spin asymmetry of the $\pi^0$ within the scattering plane of the observed proton. Letting $A_{p\pi}$ denote the amplitude of this modulation, $\left|A_{p\pi}\right|$ is large when the proton scatters left or right ($\xp{\phi}\sim 0$ or $\pi$) and when the $\pi^0$ is close to the proton scattering plane ($\Delta\phi\sim0$ or $\pi$). Other possible scalars were tested, but their measured asymmetries were consistent with zero. Let $N^{\uparrow(\downarrow)}\left(\xh{\phi},\xp{\phi}\right)$ denote the yield from a spin-up(down) proton which scatters to a $\pi^0$ and proton with respective azimuthal angles $\xh{\phi}$ and $\xp{\phi}$. With $P$ denoting the beam polarization, the single-spin asymmetry was measured following equation \ref{eqAN} as \begin{equation} A\left(\xh{\phi},\xp{\phi}\right)=\frac{1}{P} \frac{N^{\uparrow}\left(\xh{\phi},\xp{\phi}\right)-N^{\downarrow}\left(\xh{\phi},\xp{\phi}\right)} {N^{\uparrow}\left(\xh{\phi},\xp{\phi}\right)+N^{\downarrow}\left(\xh{\phi},\xp{\phi}\right)}. \label{eqAppi} \end{equation} Figure \ref{fig4} shows $A\left(\xh{\phi},\xp{\phi}\right)$ in bins of $\cos\phip\cos\delphi$, including a linear fit with a slope that corresponds to the amplitude of the $\cos\phip\cos\delphi$ modulation, $A_{p\pi}$, which evaluates to $-19\%\pm5.2\%$. 
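As an illustration of the algebra behind such an extraction, the following toy sketch (hypothetical yields, not STAR data) applies equation \ref{eqAppi} to spin-sorted yields binned in $\cos\phip\cos\delphi$ and fits a line whose slope recovers the injected amplitude:

```python
import numpy as np

# Toy sketch (hypothetical yields, not STAR data): apply the asymmetry
# formula A = (1/P)(N_up - N_down)/(N_up + N_down) in bins of
# x = cos(phi_p)*cos(dphi), then fit a line whose slope estimates A_ppi.
P = 0.565                           # average beam polarization
x = np.linspace(-0.9, 0.9, 7)       # bin centers in cos(phi_p)*cos(dphi)
A_true = -0.19                      # injected modulation amplitude

N_up   = 1.0e4 * (1.0 + P * A_true * x)   # toy spin-up yields
N_down = 1.0e4 * (1.0 - P * A_true * x)   # toy spin-down yields

A = (N_up - N_down) / (N_up + N_down) / P
slope, offset = np.polyfit(x, A, 1)   # slope -> A_ppi, offset -> R
assert abs(slope - A_true) < 1e-9 and abs(offset) < 1e-9
```

The actual analysis fits the measured yields with their statistical uncertainties; the sketch only illustrates the arithmetic of the extraction.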
The fit's constant term $R$ is included to account for a possible relative-luminosity difference between the spin states, which would systematically shift all data points upward or downward across all $\cos\phip\cos\delphi$ bins. The vertical error bars represent statistical uncertainty, and the horizontal error bars are the combined propagated $\pi^0$ and proton position uncertainties. The average beam polarization was $56.5\%$, and its uncertainty propagates to a $3.1\%$ systematic uncertainty on the asymmetry scale. \begin{figure}[t] \centerline{\includegraphics[width=0.5\textwidth]{fig4.pdf}} \caption{ Transverse single-spin asymmetry in bins of $\cos\phip\cos\delphi$. A linear fit is included, with constant term $R$ and slope $A$, and the resulting fit values in the upper right corner. } \label{fig4} \end{figure} A complementary view of this asymmetry is shown in figure \ref{fig5}, where the $\cos\xh{\phi}$ modulation ($\pi^0$ $A_N$) is shown for $\pi^0$s which scatter near the proton scattering plane (left panel), where $\Delta\phi$ is within $\pi/6$ radians of $0$ or $\pm\pi$, compared to the case where the $\pi^0$s scatter away from the proton scattering plane (right panel), where $\left|\Delta\phi\pm\pi/2\right|<\pi/6$. When the $\pi^0$ scatters near the proton scattering plane, it shows an asymmetry of $-20\%\pm5.7\%$, whereas when the $\pi^0$ scatters out-of-plane, the asymmetry is consistent with zero, at $4.5\%\pm3.8\%$. \begin{figure}[t] \centerline{\includegraphics[width=\textwidth]{fig5.pdf}} \caption{ Transverse single-spin asymmetry in bins of $\cos\xh{\phi}$ for $\pi^0$s near the proton scattering plane (left) or away (right). A linear fit is included in each.
} \label{fig5} \end{figure} Projections of $A_{p\pi}\cos\phip\cos\delphi$ onto $\xh{\phi}$, $\xp{\phi}$, and $\Delta\phi$ were used to assess the impact of the limited $\xp{\phi}$ acceptance; these are projections of a 2-dimensional asymmetry onto 1-dimensional asymmetries and can be cross-checked against the corresponding 1-dimensional asymmetries in the data. Assuming the nominal value of $A_{p\pi}=-0.19$, projections of $A_{p\pi}\cos\phip\cos\delphi$ onto 1-dimensional asymmetries modulated by $\xh{\phi}$, $\xp{\phi}$, or $\Delta\phi$ agree with the data only when the $\xp{\phi}$ acceptance limitations are applied. While the 1-dimensional asymmetries are dependent on the $\xp{\phi}$ acceptance limitations, the 2-dimensional $A_{p\pi}$ asymmetry is not, and it seems to most closely match the data. Several other possibilities were tested, such as the assumption that the asymmetry is just a $\pi^0$ single-spin asymmetry; however, their projections do not agree with the data. \section{Summary} The $p^{\uparrow}p\to p\pion X$ process has been observed at STAR, and a $-19\%\pm5.2\%$ asymmetry of the $\pi^0$ in the scattering plane of the proton is observed, via the modulation in equation \ref{eqAsym}. This effect may serve as a probe of the orbital angular momentum of fluctuated $\pi^0$s in the proton periphery. As far as we know, the spin dependence of this process has not otherwise been explored experimentally, and a model is needed to understand it. Moreover, this process should be studied in more detail experimentally, with better azimuthal and kinematic coverage.
\section{Algorithm}\label{sec:algorithm} \section{Conclusions and Future Research} Quasi-perfect equilibrium has been studied in extensive-form games, but it was poorly understood in Stackelberg settings. We provided a game-theoretic, axiomatic definition of \emph{quasi-perfect Stackelberg equilibrium (QPSE)}. We developed a family of game perturbation schemes that lead to a QPSE in the limit. Our family generalizes prior perturbation schemes introduced for finding (even non-Stackelberg) quasi-perfect equilibria. Using our perturbation schemes, we developed a branch-and-bound algorithm for QPSE. It leverages a perturbed variant of the linear program for computing a Stackelberg extensive-form correlated equilibrium. Experiments show that our algorithm can be used to find an approximate QPSE in games with thousands of nodes. We showed that some perturbation schemes outside our family do not lead to QPSEs in some games. It remains an open question whether our perturbation family fully characterizes the whole set of QPSEs. As to requirement (i) in Definition~\ref{def:qp_pert}, can all QPSEs be captured by perturbation schemes that only use polynomial lower bounds on trembles? It was recently shown that in non-Stackelberg extensive-form games, there exists a perturbation size that is small enough (while still strictly positive) that an exact refined (e.g., quasi-perfect) equilibrium can be found by solving a mathematical program with that perturbation size~\cite{miltersen2010computing,DBLP:conf/aaai/Farina017,Farina18:Practical}, and \citet{Farina18:Practical} provide an algorithm for checking whether a given guess of perturbation size is small enough. That obviates the need to try to explicitly compute a limit of a sequence. It would be interesting to see whether such theory can also be developed for Stackelberg extensive-form games---and for our perturbation family in particular.
\section{Definition of Quasi-Perfect Stackelberg Equilibrium}\label{sec:definitions} In this section, we introduce QPSEs, which refine SEs in SEFGs using an approach resembling that adopted by~\citeauthor{van1984relation}~\shortcite{van1984relation} for defining QPEs in EFGs. First, we provide some additional notation. We say that $\pi_i \in \Pi_i$ is \emph{completely mixed} if $\pi_{ia} > 0$ for all $a \in \mathcal{A}_i$. Given two information sets $I, \hat{I} \in \mathcal{I}_i$, we write $I \succeq \hat{I}$ whenever $\hat{I}$ follows $I$, \emph{i.e.}, there exists a path from some $h \in I$ to some $\hat{h} \in \hat{I}$. We assume $I_\emptyset \succeq \hat{I}$ for all $\hat{I} \in \mathcal{I}_i$ such that there is no $I \neq \hat{I} \in \mathcal{I}_i : I \succeq \hat{I}$. In perfect-recall games, $\succeq$ is a partial order over $\mathcal{I}_i \cup \{ I_\emptyset \}$. Given $\pi_i, \hat{\pi}_i \in \Pi_i$ and $I \in \mathcal{I}_i \cup \{I_\emptyset\}$, $\pi_i \big/_{I} \hat{\pi}_i$ is equal to $\hat{\pi}_i$ at all $\hat{I} \in \mathcal{I}_i : I \succeq \hat{I}$, while it is equal to $\pi_i$ everywhere else. Moreover, for $I \in \mathcal{I}_i$, we write $\pi_i =_{I} \hat{\pi}_i$ if $\pi_{i a} = \hat{\pi}_{i a}$ for all $a \in A(I)$. Finally, given completely mixed strategies $\pi_\ell \in \Pi_\ell$, $\pi_f \in \Pi_f$ and $I \in \mathcal{I}_i$, $u_{i, I}(\pi_\ell,\pi_f)$ denotes player $i$'s expected utility given that $I$ has been reached and strategies $\pi_\ell$ and $\pi_f$ are played. Next, we introduce a fundamental building block: the idea of a follower's best response at an information set $I \in \mathcal{I}_f$. Intuitively, $\pi_f$ is an $I$-best response to $\pi_\ell$ whenever playing as prescribed by $\pi_f$ at the information set $I$ is part of some follower's best response to $\pi_\ell$ in the game following $I$, given that $I$ has been reached during play.
Formally: \begin{definition}\label{def:info_set_br} Given an SEFG $\Gamma$, a completely mixed $\pi_\ell \in \Pi_\ell$, and $I \in \mathcal{I}_f$, we say that $\pi_f \in \Pi_f$ is an \emph{$I$-best response} to $\pi_\ell$, written $\pi_f \in \mathsf{BR}_{I}(\pi_\ell)$, if the following holds: $$ \max_{\substack{ \hat{\pi}_f \in \Pi_f : \\ \pi_f =_{I} \hat{\pi}_f }} u_{f, I} \left( \pi_\ell, \pi_f \big/_{I} \hat{\pi}_f \right) = \max_{\hat{\pi}_f \in \Pi_f} u_{f, I} \left( \pi_\ell, \pi_f \big/_{I} \hat{\pi}_f \right). $$ \end{definition} For $i \in \mathcal{N}$ and $\pi_i \in \Pi_i$, let $ \{ \pi_{i, k} \}_{k \in \mathbb{N}}$ be a sequence of completely mixed strategies of player $i$ with $\pi_i$ as a limit point. We are now ready to define the refinement concept. In words, in a QPSE, the leader selects an optimal strategy to commit to in \emph{all} information sets, given that the follower best responds to it at \emph{every} information set, following \emph{some} tie-breaking rule. Specifically, point (ii) in Definition~\ref{def:qpse} ensures that the leader's commitment is optimal also in those information sets that are unreachable in the absence of players' errors. Notice that the leader only accounts for the follower's future errors, while the follower assumes that only the leader can make mistakes in the future.
This is in line with the idea underlying QPEs in EFGs~\cite{van1984relation}.\footnote{\citet{van1984relation} defines a QPE of an $n$-player extensive-form game as a strategy profile $(\pi_i)_{i \in \mathcal{N}}$ obtained as a limit point of a sequence of completely mixed strategy profiles $\{ (\pi_{i,k})_{i \in \mathcal{N}} \}_{k \in \mathbb{N}}$ such that $\pi_i \in \mathsf{BR}_{I}((\pi_{j,k})_{j \neq i \in \mathcal{N}})$ for all $i \in \mathcal{N}$ and $I \in \mathcal{I}_i$.} \begin{definition}\label{def:qpse} Given an SEFG $\Gamma$, $(\pi_\ell, \pi_f) $ is a \emph{quasi-perfect Stackelberg equilibrium (QPSE)} of $\Gamma$ if there exist sequences $ \{ \pi_{i, k} \}_{k \in \mathbb{N}}$, defined for every $i \in \mathcal{N}$ and $\pi_i \in \Pi_i$, such that: % \begin{enumerate}[(i)] \item $\pi_f \in \mathsf{BR}_{I}(\pi_{\ell,k}) $ for all $ I \in \mathcal{I}_f$; % \item for all ${I} \in \mathcal{I}_\ell \cup \{ I_\emptyset \}$ and $\hat{\pi}_\ell \in \Pi_\ell$, there exists $ \hat{\pi}_f \in \Pi_f : \hat{\pi}_f \in \mathsf{BR}_{\hat{I}}(\pi_{\ell,k} \big/_{I} \hat{\pi}_{\ell,k}) $ for all $ \hat{I} \in \mathcal{I}_f$, with: \begin{align}\label{eq:qpse} u_{\ell} \left(\pi_{\ell,k} \big/_{I} \pi_\ell, \pi_{f,k} \right) \geq u_{\ell} \left(\pi_{\ell,k} \big/_{I} \hat{\pi}_\ell, \hat{\pi}_{f,k} \right). 
\end{align} \end{enumerate} \end{definition} As with SEs, we introduce the \emph{strong} version of QPSEs.\footnote{Since Equation~\eqref{eq:qpse} must hold for every $\hat{\pi}_\ell \in \Pi_\ell$ and $ \hat{\pi}_f \in \Pi_f : \hat{\pi}_f \in \mathsf{BR}_{\hat{I}}(\pi_{\ell,k} \big/_{I} \hat{\pi}_{\ell,k}) $ for all $ \hat{I} \in \mathcal{I}_f$, Definition~\ref{def:qpsse} assumes that the follower breaks ties in favor of the leader.} \begin{definition}\label{def:qpsse} Given an SEFG $\Gamma$, $(\pi_\ell, \pi_f) $ is a \emph{quasi-perfect strong Stackelberg equilibrium (QPSSE)} of $\Gamma$ if there exist $ \{ \pi_{i, k} \}_{k \in \mathbb{N}}$, defined for every $i \in \mathcal{N}$ and $\pi_i \in \Pi_i$, such that: % \begin{enumerate}[(i)] \item $\pi_f \in \mathsf{BR}_{I}(\pi_{\ell,k}) $ for all $ I \in \mathcal{I}_f$; % \item for all ${I} \in \mathcal{I}_\ell \cup \{ I_\emptyset \}$, $\hat{\pi}_\ell \in \Pi_\ell$, and $ \hat{\pi}_f \in \Pi_f : \hat{\pi}_f \in \mathsf{BR}_{\hat{I}}(\pi_{\ell,k} \big/_{I} \hat{\pi}_{\ell,k}) $ for all $ \hat{I} \in \mathcal{I}_f$, Equation~\eqref{eq:qpse} holds. % \end{enumerate} \end{definition} As we will show in Section~\ref{sec:perturbed_games}, QPSEs are refinements of SEs, that is, any QPSE is also an SE. \section{Experiments} We conducted experiments with our algorithm on two common benchmark EFGs. The first is a search game played on the graph shown in Figure~\ref{fig:search game}. It is a simultaneous-move game (which can be modeled as a turn-taking EFG with appropriately chosen information sets). The leader controls two patrols that can each move within their respective shaded areas (labeled P1 and P2), and at each time step the leader chooses a move for both patrols. The follower is always at a single node on the graph, initially the leftmost node, labeled $S$, and can move freely to any adjacent node (except that the follower cannot move from a patrolled node to another patrolled node).
The follower can also choose to wait in place for a time step in order to clean up their traces. If a patrol visits a node that was previously visited by the follower, and the follower did not wait to clean up their traces, the leader can see that the follower was there. If the follower reaches any of the rightmost nodes, they receive the respective payoff at the node ($5$ and $10$, respectively). If the follower and any patrol are on the same node at any time step, the follower is captured, which leads to a payoff of $0$ for the follower and a payoff of $1$ for the leader. Finally, the game times out after $k$ simultaneous moves, in which case the leader receives payoff $0$ and the follower receives $-\infty$ (because we are interested in games where the follower attempts to reach an end node). This is the game considered by \citet{Kroer18:Robust} except with the bottom layer removed, and is similar to games considered by \citet{Bosansky14:Exact} and \citet{Bosansky15:Sequence}. \begin{figure}[!h] \centering \scalebox{0.65}{ \input{graph_search_game} } \caption{The graph on which the search game is played.} \label{fig:search game} \end{figure} The second game is a variant of Goofspiel~\cite{Ross71:Goofspiel}, a bidding game where each player has a hand of cards numbered $1$ to $3$. There are $3$ prizes worth $1, \ldots, 3$. In each turn, the prize is the smallest among the remaining prizes. Within the turn, each of the two players simultaneously chooses some private card to play. The player with the larger card wins the prize. In case of a tie, the prize is discarded, so this is not a constant-sum game. The cards that were played are then discarded. Once all cards have been played, a player's score is the sum of the prizes that she has won. The LP solver we used is GLPK 4.63~\cite{glpk4.63}. We had to make the following changes to GLPK. First, we had to expose some internal routines so that we could input to the solver rational numbers rather than double-precision numbers.
Second, we fixed a glitch in GLPK's rational LP solver in its pivoting step (it was not correct when the rational numbers were too small). Our code and GLPK use the GNU GMP library to provide arbitrary-precision arithmetic. The code, written in the C++14 language, was compiled with the g++ 7.2.0 compiler. It was run on a single thread on a 2.3 GHz Intel Xeon processor. The results are shown in Figure~\ref{fig:plot1}. \begin{figure}[!ht] \centering \includegraphics[width=0.83\columnwidth]{plot_2}\\ \includegraphics[width=0.83\columnwidth]{plot_1} \caption{Experiments. Dashed lines show compute time. Solid lines show the loss in the leader's utility compared to the SSE value in the unperturbed game.} \label{fig:plot1} \end{figure} \section{Introduction}\label{ec:introduction} The main solution concept in game theory, \emph{Nash equilibrium (NE)}, may prescribe non-credible strategies in \emph{extensive-form (i.e., tree-form) games (EFGs)}. To solve that problem, equilibrium refinements have been proposed for such games~\cite{selten1975reexamination}. Among the plethora of NE refinements (see~\citeauthor{vanDamme:1987:SPN:38403}~\shortcite{vanDamme:1987:SPN:38403} for details), the \emph{quasi-perfect equilibrium (QPE)}, proposed by~\citeauthor{van1984relation}~\shortcite{van1984relation}, plays a central role, and it is considered one of the most attractive NE refinement concepts, as argued, for example, by~\citeauthor{RePEc:eee:gamebe:v:8:y:1995:i:2:p:378-388}~\shortcite{RePEc:eee:gamebe:v:8:y:1995:i:2:p:378-388}. The rationale behind the QPE concept is that every player, in every information set, plays her best response to perturbed---that is, subject to trembles---strategies of the opponents. Unlike the \emph{normal-form perfect equilibrium}, the QPE guarantees that the strategies of the players are sequentially rational, and furthermore, quasi-perfection implies normal-form perfection. 
Unlike the \emph{extensive-form perfect equilibrium (EFPE)}, in a QPE every player (reasonably) assumes that she will not make mistakes in the future, and this excludes some unreasonable strategies~\cite{RePEc:eee:gamebe:v:8:y:1995:i:2:p:378-388}. Computation of NE refinements has received extensive attention in the literature. In the two-player case, \citeauthor{miltersen2010computing}~\shortcite{miltersen2010computing} provide algorithms for finding a QPE, while~\citeauthor{DBLP:conf/aaai/Farina017}~\shortcite{DBLP:conf/aaai/Farina017} provide algorithms for finding an EFPE. In particular, \citeauthor{miltersen2010computing}~\shortcite{miltersen2010computing} show that a strict subset of the QPEs can be found when the sequence form is subject to a specific perturbation, while~\citeauthor{DBLP:conf/aaai/Farina017}~\shortcite{DBLP:conf/aaai/Farina017} do the same for the EFPE. Iterative algorithms for such perturbed games in the zero-sum EFPE setting were introduced by \citet{kroer2017smoothing} and \citet{farina2017regret}.\footnote{\emph{Normal-form proper equilibrium} is a refinement of QPE~\cite{van1984relation}, but it has drawbacks: (1) it requires players to assume a very specific structure on trembles which is not necessarily well-motivated, (2) the minimum tremble magnitudes depend on the action probabilities, which begets additional computational challenges, and (3) it is unknown whether it can be represented via perturbation schemes, even in the non-Stackelberg setting. For the zero-sum case, \citet{miltersen2008fast} show a polynomial-time approach using the sequence form, but it is based on solving a large (possibly linear in game-size) number of LPs, and thus may not be practical.
For the general-sum case, it is not even known whether the sequence form can be applied; the only known approach relies on conversion to normal form---which causes an exponential blow-up---and then applying a pivoting algorithm~\cite{Sorensen12:Computing}.} In \emph{Stackelberg games}, a \emph{leader} commits to a (possibly mixed) strategy first, and then a \emph{follower} best responds to that strategy~\cite{Stackelberg34:Marktform}. Stackelberg games have received significant attention in recent years~\cite{conitzer2006computing} due to their applications, for example, in security domains~\cite{tambe2011security}. Work on equilibrium refinements in the context of \emph{Stackelberg extensive-form games} has only started recently. Akin to usual extensive-form game refinements, \emph{Stackelberg equilibrium (SE)} refinements should guarantee both the optimality of the commitment off the equilibrium path and some form of robustness against small trembles of the opponent. To our knowledge, there is only one prior study of refinements for Stackelberg extensive-form games~\cite{DBLP:conf/ijcai/FarinaMK0S18}. They characterize a set of SE refinements based on what solutions can be obtained by imposing a perturbation scheme on the game---where players tremble onto suboptimal strategies with some small probabilities---and taking the limit as the trembling probability approaches zero. They prove that, for any perturbation scheme, all the limit points of sequences of SEs in a perturbed game are SEs of the original, unperturbed game. Interestingly, they prove that when restricting attention to the common tie-breaking rules for the follower (a \emph{strong} SE assumes the follower breaks ties in the best way for the leader, and a \emph{weak} SE assumes the follower breaks ties in the worst way for the leader), this is no longer the case. Their approach does not start from a game-theoretic, axiomatic definition of the refinement concept.
As we show in this paper, their approach captures only a strict subset of the solutions that are consistent with our natural game-theoretically defined refinement concept. One way to view this is that their operational definition is deficient in that it does not characterize all the solutions that are consistent with the natural, axiomatic definition of the refinement concept. Another view is that they have an operational definition and we provide a generalization. In terms of complexity, they prove that finding any SE is $\mathsf{NP}$-hard. (Hardness had previously been proven for finding a strong SE~\cite{Letchford:2010:COS:1807342.1807354}.) Therefore, finding any SE refinement is also $\mathsf{NP}$-hard. \textbf{Our contributions}. In this paper, we formally define the \emph{quasi-perfect Stackelberg equilibrium (QPSE)} refinement game theoretically in the same axiomatic fashion as QPE was defined for non-Stackelberg games~\cite{van1984relation}. As in the case of QPEs, our definition is based on a set of properties of the players' strategies, and it cannot be directly used to search for a QPSE. Subsequently, we define a class of perturbation schemes for the sequence form such that any limit point of a sequence of SEs in a perturbed game is a QPSE. This class of perturbation schemes strictly includes those used to find a QPE by~\citeauthor{miltersen2010computing}~\shortcite{miltersen2010computing}. Then, we extend the algorithm by~\citet{Cermak16:Using} to the case of QPSE computation. We derive the corresponding mathematical program for computing a \emph{Stackelberg extensive-form correlated equilibrium (SEFCE)} when a perturbation scheme is introduced and we discuss how the individual steps of the algorithm change. In particular, the implementation of our algorithm is much more involved, requiring the combination of branch-and-bound techniques with arbitrary-precision arithmetic to deal with small perturbations. 
This does not allow a direct application of off-the-shelf solvers. Finally, we experimentally evaluate the scalability of our algorithm. \section{Limits of SEs in $\xi$-Perturbed Games are QPSEs of the Unperturbed Games}\label{sec:limits_sse} Here, we prove Theorem~\ref{thm:limit_se}. First, we introduce two lemmas. The first provides a characterization of $I$-best responses in terms of the sequence form. Intuitively, a follower's strategy $\pi_f$ is an $I$-best response to $\pi_\ell$ if and only if it places positive probability only on actions $a \in A(I)$ that are part of some best response of the follower below information set $I$. \begin{restatable}{lemma}{lemmabr}\label{lem:lemma_br_I} Given an SEFG $\Gamma$, a completely mixed $\pi_\ell \in \Pi_\ell$ and $I \in \mathcal{I}_f$, $\pi_f \in \mathsf{BR}_{I}(\pi_\ell)$ if for every $a \in A(I)$: $$ \pi_{f a} > 0 \implies \hspace{-0.3cm}\max_{\substack{\hat{r}_f \in R_f(a)}} \hspace{-0.1cm} g_{f, I}(r_\ell, \hat{r}_f) = \hspace{-0.3cm}\max_{\substack{\hat{r}_f \in R_f(I)}} g_{f, I}(r_\ell, \hat{r}_f), $$ % where $r_\ell \in R_\ell$ is equivalent to $\pi_\ell$. \end{restatable} The next lemma shows that any limit point of a sequence of follower's best responses in $\xi$-perturbed games is a follower's best response at every information set in $\Gamma$. \begin{restatable}{lemma}{lemfive}\label{lem:limit_follower_response} Given a $\xi$-perturbed SEFG $(\Gamma, \xi_\ell, \xi_f)$, let $\{ \epsilon_k \}_{k \in \mathbb{N}} \rightarrow 0$ and let $\{ (r_\ell(\epsilon_k), r_f(\epsilon_k)) \}_{k \in \mathbb{N}}$ be a sequence of realization plans in $\Gamma(\epsilon_k)$ with $r_f(\epsilon_k) \in \mathsf{BR}_{\Gamma(\epsilon_k)} (r_\ell(\epsilon_k))$.
% Then, any limit point $(\pi_\ell, \pi_f)$ of $\{ (\pi_{\ell, k}, \pi_{f, k}) \}_{k \in \mathbb{N}}$ is such that, for all sufficiently large $k$, $\pi_f \in \mathsf{BR}_{I}(\pi_{\ell,k})$ for all $I \in \mathcal{I}_f$, % where $(\pi_{\ell, k}, \pi_{f, k})$ are equivalent to $(r_\ell(\epsilon_k), r_f(\epsilon_k))$ for all $k \in \mathbb{N}$. % \end{restatable} Finally, we can prove Theorem~\ref{thm:limit_se}. \begin{proof}[Proof of Theorem~\ref{thm:limit_se}] First, since $r_f(\epsilon_k) \in \mathsf{BR}_{\Gamma(\epsilon_k)} (r_\ell(\epsilon_k))$ for all $k \in \mathbb{N}$, Lemma~\ref{lem:limit_follower_response} allows us to conclude that requirement (i) in Definition~\ref{def:qpse} holds. Therefore, in order to prove Theorem~\ref{thm:limit_se}, we need to show that requirement (ii) holds as well. For contradiction, suppose that point (ii) does not hold, that is, no matter how we choose sequences $\{ \pi_{i,k} \}_{k \in \mathbb{N}}$, for $i \in \mathcal{N}$ and $\pi_i \in \Pi_i$, there is an information set ${I} \in \mathcal{I}_\ell \cup \{ I_\emptyset \}$ and a leader's strategy $\hat{\pi}_\ell \in \Pi_\ell$ such that, for every $ \hat{\pi}_f \in \Pi_f : \hat{\pi}_f \in \mathsf{BR}_{\hat{I}}(\pi_{\ell,k} \big/_{I} \hat{\pi}_{\ell,k}) $ for all $ \hat{I} \in \mathcal{I}_f$, we have $ u_{\ell} (\pi_{\ell,k} \big/_{I} \pi_\ell, \pi_{f,k} ) < u_{\ell} (\pi_{\ell,k} \big/_{I} \hat{\pi}_\ell, \hat{\pi}_{f,k} ) $. By continuity, there must exist $\bar k \in \mathbb{N}$ such that, for all $k \in \mathbb{N}: k \geq \bar k$, $ u_{\ell} (\pi_{\ell,k} \big/_{I} \pi_{\ell,k}, \pi_{f,k} )= u_{\ell} (\pi_{\ell,k}, \pi_{f,k} ) < u_{\ell} (\pi_{\ell,k} \big/_{I} \hat{\pi}_{\ell,k}, \hat{\pi}_{f,k} ). $ Let sequence $\{ \hat{\pi}_{\ell, k} \}_{k \in \mathbb{N}}$ be such that $\hat{r}_\ell(\epsilon_k) \in R_\ell(\epsilon_k)$ for all $k \in \mathbb{N}$, where each realization plan $\hat{r}_\ell(\epsilon_k)$ is equivalent to the strategy $\pi_{\ell,k} \big/_{I} \hat{\pi}_{\ell,k}$.
This is always possible since requirement (iii) in Definition~\ref{def:qp_pert} is satisfied. Consider a sequence $\{ (\hat{r}_\ell(\epsilon_k), \hat{r}_f(\epsilon_k)) \}_{k \in \mathbb{N}}$ with $\hat{r}_f(\epsilon_k) \in \mathsf{BR}_{\Gamma(\epsilon_k)}(\hat{r}_\ell(\epsilon_k))$, and let $\{ (\pi_{\ell,k} \big/_{I} \hat{\pi}_{\ell,k}, \hat{\pi}_{f, k}) \}_{k \in \mathbb{N}}$ be a sequence such that each strategy $\hat{\pi}_{f,k}$ is equivalent to $\hat{r}_f(\epsilon_k)$. By Lemma~\ref{lem:limit_follower_response}, any limit point $(\pi_\ell \big/_{I} \hat{\pi}_\ell, \hat{\pi}_f)$ of $\{ (\pi_{\ell,k} \big/_{I} \hat{\pi}_{\ell,k}, \hat{\pi}_{f, k}) \}_{k \in \mathbb{N}}$ satisfies $\hat{\pi}_f \in \mathsf{BR}_{\hat{I}}(\pi_{\ell,k} \big/_{I} \hat{\pi}_{\ell,k}) $ for all $ \hat{I} \in \mathcal{I}_f$. Thus, using the equivalence between strategies and realization plans, for $k \in \mathbb{N} : k \geq \bar k$ we have that $u_\ell(r_\ell(\epsilon_k), r_f(\epsilon_k)) < u_\ell(\hat{r}_\ell(\epsilon_k), \hat{r}_f(\epsilon_k))$, no matter how we choose $\hat{r}_f(\epsilon_k) \in \mathsf{BR}_{\Gamma(\epsilon_k)}(\hat{r}_\ell(\epsilon_k))$. This contradicts the fact that $(r_\ell(\epsilon_k), r_f(\epsilon_k))$ is an SE of $\Gamma(\epsilon_k)$. \end{proof} \section{Family of Perturbation Schemes for QPSE}\label{sec:perturbed_games} We now introduce a family of \emph{perturbation schemes} for SEFGs in sequence form that satisfies the following fundamental property: \emph{limits of SEs in perturbed sequence-form SEFGs are QPSEs of the original unperturbed SEFGs as the magnitude of the perturbation goes to zero}. In addition to being theoretically relevant, this result enables us to design an algorithm for computing QPSEs in SEFGs (Section~\ref{sec:sefce}).
\begin{definition}[$\xi$-perturbation scheme]\label{def:qp_pert} Given an SEFG $\Gamma$ and $i \in \mathcal{N}$, let $\xi_i : (0,1] \times \Sigma_i \to \mathbb{R}^+$ be a function that maps a perturbation magnitude $\epsilon \in (0,1]$ and a sequence $\sigma_i \in \Sigma_i$ to a positive lower-bound $\xi_i(\epsilon,\sigma_i)$ on the probability of playing $\sigma_i$ such that: \begin{enumerate}[(i)] \item $\xi_i(\epsilon,\sigma_i)$ is a polynomial in $\epsilon$, for all $\sigma_i \in \Sigma_i$; \item $\lim_{\epsilon \rightarrow 0^+} \xi_i(\epsilon,\sigma_i) = 0$, for all $\sigma_i \in \Sigma_i \setminus \{\sigma_\emptyset\}$; \item $\lim_{\epsilon \rightarrow 0^+} \frac{ \xi_i(\epsilon, \sigma_i(I) a) }{ \xi_i(\epsilon, \sigma_i(I)) } = 0$, for all $I \in \mathcal{I}_i, a \in A(I)$. % \end{enumerate} Then, a \emph{$\xi_{i}$-perturbation scheme} for $R_i$ is a function $\epsilon \mapsto R_i(\epsilon)$ defined over $\epsilon \in (0,1]$ in which $R_i(\epsilon)$ is the set of all $r_i \in R_i$ such that $r_i(\sigma_i) \geq \xi_i(\epsilon,\sigma_i)$ for all $\sigma_i \in \Sigma_i$. \end{definition} In words, the lower-bounds on sequence probabilities enjoy the following properties: (i) they are polynomials in the variable $\epsilon$; (ii) they approach zero as $\epsilon$ goes to zero; and (iii) $\xi_i(\epsilon,\sigma_i(I) a)$ approaches zero faster than $\xi_i(\epsilon,\sigma_i(I))$. We denote by $(\Gamma, \xi_\ell, \xi_f)$ a \emph{$\xi$-perturbed} SEFG with $\xi_i$-perturbation schemes. We let $\Gamma(\epsilon)$ be a particular sequence-form \emph{$\xi$-perturbed game instance} obtained from $\Gamma$ by restricting each set of realization plans $R_i$ to be $R_i(\epsilon)$. We denote by $r_i(\epsilon)$ any realization plan in $R_i(\epsilon)$, and we let $\xi_i(\epsilon) \in \mathbb{R}^{|\Sigma_i|}$ be a vector whose components are the lower-bounds $\xi_i(\epsilon,\sigma_i)$.
We denote by $\tilde{r}_i(\epsilon) = r_i(\epsilon) -\xi_i(\epsilon)$ the \emph{residual} of $r_i(\epsilon)$, which represents the part of player $i$'s strategy that is not fixed by the perturbation.\footnote{We assume without loss of generality that $\Gamma(\epsilon)$ is well-defined, that is, each set $R_i(\epsilon)$ is non-empty for every $\epsilon \in (0,1]$.} Next, we state our main result about sequences of SEs in $\xi$-perturbed games. We postpone the proof to Section~\ref{sec:limits_sse}. \begin{theorem}\label{thm:limit_se} Given a $\xi$-perturbed SEFG $(\Gamma, \xi_\ell, \xi_f)$, let $\{ \epsilon_k \}_{k \in \mathbb{N}} \rightarrow 0$ and let $\{ (r_\ell(\epsilon_k), r_f(\epsilon_k)) \}_{k \in \mathbb{N}}$ be a sequence of SEs in $\Gamma(\epsilon_k)$. % Then, any limit point $(\pi_\ell, \pi_f)$ of the sequence $\{ (\pi_{\ell, k}, \pi_{f, k}) \}_{k \in \mathbb{N}}$ is a QPSE of $\Gamma$, % where $(\pi_{\ell, k}, \pi_{f, k})$ are equivalent to $(r_\ell(\epsilon_k), r_f(\epsilon_k))$ for all $k \in \mathbb{N}$. \end{theorem} Theorem~\ref{thm:limit_se} also allows us to conclude the following, as a consequence of Theorem~1 of~\citeauthor{DBLP:conf/ijcai/FarinaMK0S18}~\shortcite{DBLP:conf/ijcai/FarinaMK0S18}. \begin{corollary}\label{cor:qp_ref} Any QPSE of an SEFG $\Gamma$ is an SE of $\Gamma$. \end{corollary} Requirements (ii)-(iii) in Definition~\ref{def:qp_pert} cannot be removed: \begin{observation}\label{obv:obs_pert} There are $\xi$-perturbed SEFGs $(\Gamma, \xi_\ell, \xi_f)$ with $\xi_i$-perturbation schemes that violate point (ii) or (iii) in Definition~\ref{def:qp_pert} for which Theorem~\ref{thm:limit_se} does not hold. \end{observation} \begin{proof} Consider the SEFG in Figure~\ref{fig:examples}b with $\xi_\ell(\epsilon,a_\ell^1)=\xi_\ell(\epsilon,a_\ell^2)=\epsilon$ and $\xi_\ell(\epsilon,a_\ell^2a_\ell^3)= \xi_\ell(\epsilon,a_\ell^2a_\ell^4)= \frac{\epsilon}{3}$, which violates requirement (iii) in Definition~\ref{def:qp_pert}.
Clearly, any SE of $\Gamma(\epsilon)$ requires $r_\ell(\epsilon,a_\ell^1) = 1 - \epsilon$, $r_\ell(\epsilon,a_\ell^2) = \epsilon$, $r_\ell(\epsilon,a_\ell^2a_\ell^3) = \frac{\epsilon}{3}$, and $r_\ell(\epsilon, a_\ell^2 a_\ell^4) = \frac{2 \epsilon}{3}$. Thus, any limit point of a sequence of SEs has $\pi_\ell(a_\ell^3) = \frac{1}{3}$ and $\pi_\ell(a_\ell^4) = \frac{2}{3}$, which cannot be the case in a QPSE of $\Gamma$, as the leader's optimal strategy at $\ell.2$ is to play $a_\ell^4$. As for requirement (ii), we can build a similar example by setting $\xi_\ell(\epsilon,a_\ell^2)=\frac{1}{3}$. \end{proof} \begin{figure}[!h] \centering \includegraphics[scale=.8]{isolated_points.pdf} \caption{\small Example SEFGs.} \label{fig:examples} \end{figure} \citeauthor{miltersen2010computing}~\shortcite{miltersen2010computing} introduced the idea of perturbing sequence-form EFGs in order to find a QPE. Our perturbation scheme generalizes theirs, where $\xi_i(\epsilon, \sigma_i) = \epsilon^{|\sigma_i|}$ for all $\sigma_i \in \Sigma_i \setminus \{\sigma_\emptyset\}$, with $|\sigma_i|$ being the number of actions in $\sigma_i$. There are games where our perturbation captures QPSEs that are \emph{not} obtainable with theirs. For instance, in the SEFG in Figure~\ref{fig:examples}a, $(\pi_\ell, \pi_f)$, with $\pi_\ell(a_\ell^1)=\pi_\ell(a_\ell^3)=1$, $\pi_\ell(a_\ell^2)=\pi_\ell(a_\ell^4)=0$, and $\pi_f(a_f^1)= \pi_f(a_f^2) = \frac{1}{2}$, is a QPSE that cannot be obtained with their perturbation scheme, whereas it is reachable by setting $\xi_\ell(\epsilon, a_\ell^2)=\epsilon^2$. We observe that $(\pi_\ell, \pi_f)$ is also a QPE when we look at the game as an EFG without commitment; this shows that our perturbation scheme generalizes theirs also for QPEs. Finally, when restricting attention to SSEs, we can state the following: limits of SSEs are QPSSEs. We make this formal in Theorem~\ref{thm:limit_sse} in the Appendix. 
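The behavioral probabilities forced by the perturbation in the proof of Observation~\ref{obv:obs_pert} are constant in $\epsilon$, which is what produces the bad limit point; a small numeric check (values taken from that proof):

```python
# Numeric check of the proof of the observation above: with
# r(eps, a2) = eps, r(eps, a2 a3) = eps/3, and r(eps, a2 a4) = 2*eps/3,
# the behavioral probabilities at l.2 do not depend on eps, so every
# limit point puts mass 1/3 on a3 and 2/3 on a4.

def behavioral_at_l2(eps):
    r_a2, r_a2a3, r_a2a4 = eps, eps / 3, 2 * eps / 3
    return r_a2a3 / r_a2, r_a2a4 / r_a2

for eps in (1e-1, 1e-3, 1e-6):
    p3, p4 = behavioral_at_l2(eps)
    assert abs(p3 - 1 / 3) < 1e-9 and abs(p4 - 2 / 3) < 1e-9
```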
\section{Preliminaries}\label{sec:preliminaries} Using standard notation~\cite{shoham2008multiagent}, a \emph{Stackelberg extensive-form game (SEFG)} of imperfect information is a tuple $(\mathcal{N},\mathcal{H},\mathcal{Z},\mathcal{A},\rho,\chi,\mathcal{C},u,\mathcal{I} )$. $\mathcal{N} = \{\ell,f \}$ is the set of players, the leader and the follower. $\mathcal{H} = \mathcal{H}_c \cup \mathcal{H}_\ell \cup \mathcal{H}_f$ is the set of nonterminal nodes, where $\mathcal{H}_c$ is the set of chance nodes, while $\mathcal{H}_\ell$ and $\mathcal{H}_f$ are the sets of leader's and follower's decision nodes, respectively. $\mathcal{Z}$ is the set of terminal nodes. $\mathcal{A} = \mathcal{A}_c \cup \mathcal{A}_\ell \cup \mathcal{A}_f $ is the set of actions, where $\mathcal{A}_c$ contains chance moves, while $\mathcal{A}_\ell$ and $\mathcal{A}_f$ are the sets of leader's and follower's actions, respectively. $\rho: \mathcal{H} \rightarrow 2^\mathcal{A}$ is the action function that assigns to each nonterminal node a set of available actions. $\chi: \mathcal{H} \times \mathcal{A} \rightarrow \mathcal{H} \cup \mathcal{Z}$ is the successor function that defines the node reached when an action is performed in a nonterminal node. $\mathcal{C} : \mathcal{H} \cup \mathcal{Z} \rightarrow [0,1]$ assigns to each node its probability of being reached given chance moves. $u=\{u_\ell, u_f\}$, where $u_\ell, u_f: \mathcal{Z} \rightarrow \mathbb{R}$ specify leader's and follower's payoffs, respectively, in each terminal node. Finally, $\mathcal{I} = \{ \mathcal{I}_\ell, \mathcal{I}_f \}$, where $\mathcal{I}_\ell$ and $\mathcal{I}_f$ define partitions of $\mathcal{H}_\ell$ and $\mathcal{H}_f$, respectively, into information sets, that is, groups of nodes that are indistinguishable by the player. 
For every information set $I \in \mathcal{I}_i$ and nodes $h, \hat{h} \in I$, it must be the case that $\rho(h) = \rho(\hat{h}) = A(I)$, otherwise player $i$ would be able to distinguish the two nodes. We focus on \emph{perfect-recall} SEFGs in which no player forgets what she did or knew in the past, that is, for every $i \in \mathcal{N}$ and $I \in \mathcal{I}_i$, all nodes belonging to $I$ share the same player $i$'s moves on their paths from the root. Thus, we can restrict attention to \emph{behavioral strategies}~\cite{kuhn2016extensive}, which define, for every player $i \in \mathcal{N}$ and information set $I \in \mathcal{I}_i$, a probability distribution over the actions $A(I)$. For $i \in \mathcal{N}$, let $\pi_i \in \Pi_i$ be a player $i$'s behavioral strategy, with $\pi_{i a}$ denoting the probability of playing action $a \in \mathcal{A}_i$. Overloading notation, we use $u_i$ as if it were defined over strategies instead of terminal nodes. Specifically, $u_i(\pi_\ell, \pi_f)$ is player $i$'s expected utility when $\pi_\ell \in \Pi_\ell$ and $\pi_f \in \Pi_f$ are played. Perfect-recall SEFGs admit an equivalent representation called the \emph{sequence form}~\cite{von1996efficient,Romanovskii62:Reduction}. Every node $h \in \mathcal{H} \cup \mathcal{Z}$ defines a \emph{sequence} $\sigma_i(h)$ for player $i \in \mathcal{N}$, which is the ordered set of player $i$'s actions on the path from the root to $h$. Let $\Sigma_i$ be the set of player $i$'s sequences. As usual, let $\sigma_\emptyset \in \Sigma_i$ be a fictitious element representing the empty sequence. In perfect-recall games, given an information set $I \in \mathcal{I}_i$, for any pair of nodes $h, \hat{h} \in I$ it holds that $\sigma_{i}(h) = \sigma_{i}(\hat{h}) = \sigma_i(I)$. Given $\sigma_i \in \Sigma_i$ and $a \in A(I)$ with $ I \in \mathcal{I}_i : \sigma_i = \sigma_i(I)$, we denote by $\sigma_i a$ the \emph{extended} sequence obtained by appending $a$ to $\sigma_i$. 
Moreover, for any pair $\sigma_i, \hat{\sigma}_i \in \Sigma_i$, we write $\hat{\sigma}_i \sqsubseteq \sigma_i$ whenever $\hat{\sigma}_i$ is a \emph{prefix} of $\sigma_i$, that is, $\sigma_i$ can be obtained by extending $\hat{\sigma}_i$ with a finite number of actions. Given $\sigma_i \in\Sigma_i$, we also let $I_i(\sigma_i)$ be the information set $I \in \mathcal{I}_i : \sigma_i = \sigma_i(I)a$ for some $a \in A(I)$. In the sequence form, a strategy, called a \emph{realization plan}, assigns to each sequence its probability of being played. For $i \in \mathcal{N}$, let $r_i \in R_i$ be a player $i$'s realization plan. In order to be well-defined, a realization plan $r_i$ must be such that $r_i(\sigma_\emptyset)=1$ and, for $I \in \mathcal{I}_i$, $r_i(\sigma_i(I)) = \sum_{a \in A(I)} r_i(\sigma_i(I)a)$. Finally, letting $\Sigma = \Sigma_\ell \times \Sigma_f$ be the set of sequence pairs $\sigma = (\sigma_\ell, \sigma_f)$, overloading notation, $u_i : \Sigma \rightarrow \mathbb{R}$ is player $i$'s utility function in the sequence form, with $u_i(\sigma) = \sum_{h \in \mathcal{Z} : \sigma_{\ell}(h) = \sigma_\ell \wedge \sigma_{f}(h) = \sigma_f } u_i(h) \mathcal{C}(h)$. Moreover, we also use $u_i$ as if it were defined over realization plans. Formally, $u_i(r_\ell, r_f) = \sum_{\sigma \in \Sigma} u_i(\sigma) r_\ell(\sigma_\ell) r_f(\sigma_f)$. The sequence form is usually expressed with matrix notation as follows. Player $i$'s utility function is a $|\Sigma_\ell| \times |\Sigma_f|$ matrix $U_i$ whose entries are the utilities $u_i(\sigma)$, for $\sigma \in \Sigma$. Constraints defining $r_i \in R_i$ are expressed as $F_i r_i = f_i$, where: $F_i$ is a $( |\mathcal{I}_i|+1 ) \times |\Sigma_i|$ matrix, $f_i \in \mathbb{R}^{|\mathcal{I}_i|+1}$, and, overloading notation, $r_i \in \mathbb{R}^{|\Sigma_i|}$ is a vector representing $r_i$. 
Specifically, introducing a fictitious information set $I_\emptyset$, the entry of $F_i$ indexed by $(I_\emptyset, \sigma_\emptyset)$ is 1, and, for $I \in \mathcal{I}_i$ and $\sigma_i \in \Sigma_i$, the entry indexed by $(I,\sigma_i)$ is $-1$ if $\sigma_i = \sigma_i(I)$, while it is $1$ if $\sigma_i = \sigma_i(I)a$ for some $a \in A(I)$. $F_i$ is zero everywhere else. Moreover, $f_i^T = (1 \; 0 \cdots 0)$. Finally, given $r_\ell \in R_\ell$ and $r_f \in R_f$, we can write $u_i(r_\ell, r_f) = r_\ell^T U_i r_f $. In perfect-recall games, behavioral strategies and realization plans are equally expressive. Given $r_i \in R_i$, we obtain an equivalent $\pi_i \in \Pi_i$ by setting, for all $I \in \mathcal{I}_i$ and $a \in A(I)$, $\pi_{i a} = \frac{r_i(\sigma_i(I)a)}{r_i(\sigma_i(I))}$ when $r_i(\sigma_i(I)) > 0$, while $\pi_{i a}$ can be arbitrary otherwise. Similarly, $\pi_i \in \Pi_i$ has an equivalent $r_i \in R_i$ with $r_i(\sigma_i) = \prod_{a \in \sigma_i} \pi_{i a}$ for all $\sigma_i \in \Sigma_i$.\footnote{Here, the equivalence is in terms of probabilities that the strategies induce on terminal nodes, \emph{i.e.}, it is \emph{realization equivalence}.} The solution concept associated with SEFGs is the SE. An SEFG may have many SEs, depending on the leader's assumption about how the follower breaks ties among multiple best responses. A leader's strategy is part of an SE if it is optimal for \emph{some} tie-breaking rule of the follower. 
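As a concrete illustration of the sequence-form constraints $F_i r_i = f_i$ and of the behavioral-to-realization-plan conversion described above, consider a hypothetical one-player tree with a root information set $I_1$ (actions $a, b$) and an information set $I_2$ reached after $a$ (actions $c, d$); all names and numbers in the following sketch are illustrative only:

```python
# Toy sequence form for a hypothetical one-player tree: information set
# I1 at the root with actions {a, b}, and I2 (reached after a) with
# actions {c, d}.  Sequence order: sigma_empty, a, b, ac, ad.
F = [
    [1,  0, 0, 0, 0],   # fictitious I_empty: r(sigma_empty) = 1
    [-1, 1, 1, 0, 0],   # I1: r(a) + r(b) - r(sigma_empty) = 0
    [0, -1, 0, 1, 1],   # I2: r(ac) + r(ad) - r(a) = 0
]
f = [1, 0, 0]

# Realization plan induced by the behavioral strategy pi(a) = 0.6,
# pi(b) = 0.4, pi(c) = 0.25, pi(d) = 0.75: products along each sequence.
r = [1.0, 0.6, 0.4, 0.6 * 0.25, 0.6 * 0.75]

# Check the flow constraints F r = f.
for row, rhs in zip(F, f):
    assert abs(sum(x * y for x, y in zip(row, r)) - rhs) < 1e-12

# Recover a behavioral probability: pi(c) = r(ac) / r(a).
assert abs(r[3] / r[1] - 0.25) < 1e-12
```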
Letting $\mathsf{BR}_{\Gamma} (\pi_\ell) = \arg\max_{{\pi}_f \in \Pi_f} u_f(\pi_\ell, {\pi}_f)$ be the set of follower's best responses to $\pi_\ell \in \Pi_\ell$ in an SEFG $\Gamma$, we have the following formal definition of SE.\footnote{In this paper, we define SEs following a characterization introduced by~\citeauthor{DBLP:conf/ijcai/FarinaMK0S18}~\shortcite{DBLP:conf/ijcai/FarinaMK0S18} (Lemma 2 in their paper).} \begin{definition}\label{def:se} Given an SEFG $\Gamma$, $(\pi_\ell, \pi_f)$ is an \emph{SE} of $\Gamma$ if $\pi_f \in \mathsf{BR}_{\Gamma} (\pi_\ell)$ and, for all $\hat{\pi}_\ell \in \Pi_\ell$, there exists $\hat{\pi}_f \in \mathsf{BR}_{\Gamma} (\hat{\pi}_\ell)$ such that $u_\ell(\pi_\ell, \pi_f) \geq u_\ell(\hat{\pi}_\ell, \hat{\pi}_f)$. \end{definition} Many papers on SEs focus on \emph{strong SEs} (SSEs), which assume that the follower breaks ties in favor of the leader. \begin{definition}\label{def:sse} Given an SEFG $\Gamma$, $(\pi_\ell, \pi_f)$ is an \emph{SSE} of $\Gamma$ if $\pi_f \in \mathsf{BR}_{\Gamma} (\pi_\ell)$ and, for all $\hat{\pi}_\ell \in \Pi_\ell$ and $\hat{\pi}_f \in \mathsf{BR}_{\Gamma} (\hat{\pi}_\ell)$, it holds that $u_\ell(\pi_\ell, \pi_f) \geq u_\ell(\hat{\pi}_\ell, \hat{\pi}_f)$. \end{definition} Finally, SEs and SSEs can be defined analogously for SEFGs in sequence form (using the equivalence between behavioral strategies and realization plans). \section{Best Responses in $\xi$-Perturbed Games}\label{sec:props_games} We now study properties of the follower's best responses to the leader's strategy in $\xi$-perturbed games. These properties will be useful for proving our results later in the paper. In the following, letting $\Sigma_i(a) = \{ \sigma_i \in \Sigma_i \mid a \in \sigma_i \}$ for all $a \in \mathcal{A}_i$, $\Sigma_i(I) = \bigcup_{a \in A(I)} \Sigma_i(a)$ denotes player $i$'s sequences that pass through information set $I \in \mathcal{I}_i$. 
For ease of presentation, given $I \in \mathcal{I}_i$, $g_{i, I}(r_\ell,r_f) = \sum_{\sigma \in \Sigma : \sigma_i \in \Sigma_i(I)} u_i(\sigma) r_\ell(\sigma_\ell) r_f(\sigma_f)$ denotes player $i$'s expected utility contribution from terminal nodes reachable from $I$. Finally, for $I \in \mathcal{I}_i$, let $R_i(I) \subseteq R_i$ be the set of $r_i \in R_i : r_i(\sigma_i(I))=1$, while, for $a \in A(I)$, $R_i(a) \subseteq R_i(I)$ is the set of $r_i \in R_i : r_i(\sigma_i(I)a)=1$. Let $\mathsf{BR}_{\Gamma(\epsilon)}(r_\ell(\epsilon)) = \arg \max_{{r}_f(\epsilon) \in R_f(\epsilon)} u_f(r_\ell(\epsilon),{r}_f(\epsilon))$ be the set of follower's best responses to $r_\ell(\epsilon) \in R_\ell(\epsilon)$ in $\Gamma(\epsilon)$. The next lemma gives a mathematical programming formulation of the follower's best-response problem in $\Gamma(\epsilon)$. \begin{restatable}{lemma}{lemmanepert}\label{lem:ne_pert_game} For every $r_\ell(\epsilon) \in R_\ell(\epsilon)$, $r_f(\epsilon) \in \mathsf{BR}_{\Gamma(\epsilon)}(r_\ell(\epsilon))$ if and only if $\tilde{r}_f(\epsilon)$ is optimal for Problem~$\mathcal{P}(\epsilon)$ below. % \begin{equation*}\label{prob:primal_tilde} \mathcal{P}(\epsilon)\ :\ \left\{\begin{aligned} \displaystyle\max_{\tilde{r}_f} &\quad r_\ell(\epsilon)^T U_f \tilde{r}_f \\[-2mm] \textnormal{s.t.} &\quad F_f \tilde{r}_f = f_f - F_f \xi_f(\epsilon), \quad \tilde{r}_f \geq 0. \end{aligned}\right. \end{equation*} \end{restatable} All omitted proofs are in the Appendix. The dual of Problem~$\mathcal{P}(\epsilon)$ above is as follows. \begin{proposition}\label{prop:dual_br_pert} For $r_\ell(\epsilon) \in R_\ell(\epsilon)$, Problem~$\mathcal{D}(\epsilon)$ below is the dual of Problem~$\mathcal{P}(\epsilon)$, where $v_f \in \mathbb{R}^{|\mathcal{I}_f| + 1}$ is the vector of dual variables. 
% \begin{subequations}\label{prob:dual_br_pert} \begin{empheq}[left={\mathcal{D}(\epsilon)\ :\ \empheqlbrace}]{align} \min_{v_f} &\quad \left( f_f - F_f \xi_f(\epsilon) \right)^T v_f \\[-2mm] \textnormal{s.t.} &\quad F_f^T v_f \geq U_f^T r_\ell(\epsilon). \label{cons:dual} \end{empheq} \end{subequations} % \end{proposition} \begin{remark} Constraints~\eqref{cons:dual} in Problem~$\mathcal{D}(\epsilon)$ defined above ensure that, for every $I \in \mathcal{I}_f$ and $a \in A(I)$, we have \begin{align}\label{eq:constraints_rewritten} v_{f, I} \geq \hspace{-0.2cm} \sum_{\substack{\sigma \in \Sigma : \sigma_f = \sigma_f(I) a }} \hspace{-0.3cm} u_f(\sigma) r_\ell(\epsilon,\sigma_\ell) +\hspace{-0.2cm} \sum_{\substack{ \hat{I} \in \mathcal{I}_f: \sigma_f(\hat{I}) = \sigma_f(I)a}} \hspace{-0.5cm}v_{f, \hat{I}}. \end{align} \end{remark} The optimal solutions to Problem~$\mathcal{D}(\epsilon)$ enjoy important properties that are stated in the following lemmas. The first one says that, in an optimal solution, each variable $v_{f, I}$ must equal the maximum possible expected utility the follower can achieve following information set $I \in \mathcal{I}_f$. The second lemma says that if an optimal solution to Problem~$\mathcal{D}(\epsilon)$ satisfies Constraint~\eqref{eq:constraints_rewritten} with equality for an information set $I \in \mathcal{I}_f$ and an action $a \in A(I)$, then playing $a$ at $I$ is optimal in the game following $I$. \begin{restatable}{lemma}{lemmaperta}\label{lem:optimal_dual} For every $r_\ell(\epsilon) \in R_\ell(\epsilon)$, if $v_f^\ast \in \mathbb{R}^{|\mathcal{I}_f| + 1}$ is optimal for Problem~$\mathcal{D}(\epsilon)$, then for every $I \in \mathcal{I}_f$: % \begin{align}\label{eq:inductive_argument} v_{f, I}^\ast = \max_{\substack{\hat{r}_f \in R_f(I) }} g_{f, I}( r_\ell(\epsilon), \hat{r}_f). 
\end{align} \end{restatable} \begin{restatable}{lemma}{lemmapertb}\label{lem:dual_active_constraints} For every $r_\ell(\epsilon) \in R_\ell(\epsilon)$, $I \in \mathcal{I}_f$, and $a \in A(I)$, if Constraint~\eqref{eq:constraints_rewritten} holds with equality in an optimal solution to Problem~$\mathcal{D}(\epsilon)$, then \begin{align}\label{eq:active_cons_condition} \max_{\substack{\hat{r}_f \in R_f(a)}} g_{f, I}(r_\ell(\epsilon), \hat{r}_f) = \max_{\substack{\hat{r}_f \in R_f(I)}} g_{f, I}(r_\ell(\epsilon), \hat{r}_f). \end{align} \end{restatable} Now we are ready to prove a fundamental property of the follower's best responses in $\xi$-perturbed game instances $\Gamma(\epsilon)$. Intuitively, in a perturbed game instance, the follower best responds by playing sequence $\sigma_f(I)a$ with probability strictly greater than its lower-bound $\xi_f(\epsilon,\sigma_f(I)a)$ only if playing $a$ is optimal in the game following $I$. Theorem~\ref{thm:compl_slack} formally expresses the idea that, in a perturbed game instance $\Gamma(\epsilon)$, when the follower decides how to best respond to a leader's commitment in a given information set, she does not take into account her future trembles, but only her opponents' ones. \begin{theorem}\label{thm:compl_slack} Given $r_\ell(\epsilon) \in R_\ell(\epsilon)$, $r_f(\epsilon) \in \mathsf{BR}_{\Gamma(\epsilon)}(r_\ell(\epsilon))$, $I \in \mathcal{I}_f$, and $a \in A(I)$, if $r_f(\epsilon, \sigma_f(I) a) > \xi_f(\epsilon, \sigma_f(I) a)$, then $$\max_{\substack{\hat{r}_f \in R_f(a)}} g_{f, I}(r_\ell(\epsilon) , \hat{r}_f) = \max_{\substack{\hat{r}_f \in R_f(I)}} g_{f, I}(r_\ell(\epsilon), \hat{r}_f). $$ \end{theorem} \begin{proof} By Lemma~\ref{lem:ne_pert_game}, $r_f(\epsilon) \in \mathsf{BR}_{\Gamma(\epsilon)}(r_\ell(\epsilon))$ if and only if $\tilde{r}_f(\epsilon) = r_f(\epsilon) - \xi_f(\epsilon)$ is optimal for Problem~$\mathcal{P}(\epsilon)$. 
% % % By applying the complementary slackness theorem to Problems~$\mathcal{P}(\epsilon)$~and~$\mathcal{D}(\epsilon)$ we have that, if $\tilde{r}_f(\epsilon)$ and $v_f^\ast \in \mathbb{R}^{|\mathcal{I}_f|+1}$ are optimal, then, % whenever $\tilde{r}_f(\epsilon, \sigma_f(I)a) > 0$, that is, $r_f(\epsilon, \sigma_f(I) a) > \xi_f(\epsilon, \sigma_f(I) a)$, Constraint~\eqref{eq:constraints_rewritten} for information set $I$ and action $a$ must hold with equality, which, by Lemma~\ref{lem:dual_active_constraints}, yields Equation~\eqref{eq:active_cons_condition}. % \end{proof} \section{Algorithm}\label{sec:sefce} One can use our perturbation scheme to compute an (approximate) QPSE. We do this by developing an LP for computing an SEFCE in a given $\xi$-perturbed game instance, where we maximize the leader's value. We then conduct a \emph{branch-and-bound} search on this SEFCE LP. It branches on which actions to \emph{force} to be recommended to the follower (by the correlation device of the SEFCE). The idea is that, as long as we only recommend a single action to the follower at any given information set, we get an SE of the perturbed game (specifically an SSE, and an SSE has maximum value among all SEs), and, thus, according to Theorem~\ref{thm:limit_se}, a QPSE (specifically QPSSE) if we take the limit point of the perturbations. As in prior papers on EFCE computation in general-sum games, we focus on games without chance nodes~\cite{Stengel08:Extensive,Cermak16:Using}. For computing an SEFCE we need to specify joint probabilities over sequence pairs $(\sigma_\ell,\sigma_f)\in \Sigma$. However, probabilities need to be specified only for those pairs such that choosing $\sigma_f$ is affected by the probability put on $\sigma_\ell$ (we do not need to care about the converse of this, as only the follower needs to be induced to follow the recommended strategy). 
Intuitively, the set of the leader's sequences relevant to a given $\sigma_f \in \Sigma_f$ consists of those sequences that affect the expected value of $\sigma_f$ or of any alternative sequence $\hat{\sigma}_f \in \Sigma_f$ whose last action is available at $I_f(\sigma_f)$. \begin{definition}[Relevant sequences] A pair $(\sigma_\ell,\sigma_f) \in \Sigma$ is \emph{relevant} if either $\sigma_\ell = \sigma_\emptyset$ or there exist $h,\hat{h} \in \mathcal{H}$ s.t. $\hat{h}$ precedes $h$, $h \in I_f(\sigma_f)$, and $\hat{h} \in I_\ell(\sigma_\ell)$, or if the condition holds with the roles of $\sigma_\ell$ and $\sigma_f$ reversed. \end{definition} For every information set $I\in \mathcal{I}_i$, we let $rel(I)$ be the set of sequences relevant to each child sequence $\sigma_i(I)a$ for $a\in A(I)$. We let $p(\sigma_\ell,\sigma_f)$ be the probability that we recommend that the leader plays sequence $\sigma_\ell$, and that the follower sends her \emph{residual} (\emph{i.e.}, the probability that is not fixed by the perturbation) to $\sigma_f$. Moreover, we let $\eta(\sigma_f)$ be the maximum probability that the follower can put on a sequence $\sigma_f \in \Sigma_f$ given the $\xi_f$-perturbation scheme. First, we introduce a new value function representing the value to the leader of the sequence pair $(\sigma_\ell,\sigma_f) \in \Sigma$ given that $\sigma_f$ represents an assignment of residual probability: \[ u_\ell^{\epsilon}(\sigma_\ell,\sigma_f) = \sum_{\mathclap{\substack{h\in \ensuremath{\mathcal{Z}}: \sigma_\ell(h) = \sigma_\ell \wedge \sigma_f(h) = \sigma_f}}} \eta(\sigma_f)u_\ell(h) + \sum_{\hat{\sigma}_f \in \Sigma_f} \xi_f(\epsilon, \hat{\sigma}_f) u_\ell(\sigma_\ell, \hat{\sigma}_f). \] The following LP finds an SEFCE in a $\xi$-perturbed SEFG. 
\fontsize{8.5pt}{1pt}\selectfont \begin{subequations} \begin{align} \max_{p,v} & \quad \sum_{(\sigma_\ell,\sigma_f)\in \Sigma} p(\sigma_\ell, \sigma_f) u_\ell^{\epsilon}(\sigma_\ell,\sigma_f) \qquad \textrm{s.t.} \\%} \\ & p(\emptyset, \emptyset) = 1, \hspace{0.4cm} p(\sigma_\ell,\sigma_f) \geq 0 \hspace{1.7cm} \forall( \sigma_\ell, \sigma_f) \in \Sigma \\ & \sum_{\sigma_f \in rel(\sigma_\ell)}p(\sigma_\ell,\sigma_f) \geq \xi_\ell(\epsilon, \sigma_\ell) \hspace{1.91cm} \forall \sigma_\ell \in \Sigma_\ell \label{eq:xi_constraint}\\ & p(\sigma_\ell(I),\sigma_f) = \sum_{\mathclap{a \in A(I)}} p(\sigma_\ell(I)a,\sigma_f) \hspace{0.31cm} \forall I\in \ensuremath{\mathcal{I}}_\ell, \sigma_f \in rel(I) \label{eq:sefce_lp_leader_sequence_constraint}\\ & p(\sigma_\ell,\sigma_f(I)) = \sum_{\mathclap{a \in A(I)}} p(\sigma_\ell,\sigma_f(I)a) \hspace{0.32cm} \forall I\in \ensuremath{\mathcal{I}}_f, \sigma_\ell \in rel(I) \label{eq:sefce_lp_follower_sequence_constraint}\\% follower sequence probability sum & v(\sigma_f) = \eta(\sigma_f) \sum_{\sigma_\ell \in rel(\sigma_f)} p(\sigma_\ell,\sigma_f) u_f(\sigma_\ell, \sigma_f) \label{eq:sefce_lp_incentive_sequence} \\ & \hspace{1cm}+ \sum_{I\in \ensuremath{\mathcal{I}}_f: \sigma_f(I)=\sigma_f}\sum_{a \in A(I)} v(\sigma_f a) \hspace{1.4cm} \forall \sigma_f \in \Sigma_f \nonumber\\ & v(I,\sigma_f) \geq \eta(\sigma_f(I)a) \hspace{-2mm} \sum_{\sigma_\ell \in rel(\sigma_f)} p(\sigma_\ell,\sigma_f) u_f(\sigma_\ell, \sigma_f(I)a) \label{eq:sefce_lp_incentive_geq}\\ &\hspace{0.8cm} + \,\, \sum_{\mathclap{\hat{I} \in \ensuremath{\mathcal{I}}_f;\sigma_f(\hat{I})=\sigma_f(I)a}} v(\hat{I},\sigma_f) \hspace{0.9cm} \forall I\in \ensuremath{\mathcal{I}}_f, a \in A(I), \sigma_f \in prec(I) \nonumber\\ & v(\sigma_f(I)a) = v(I,\sigma_f(I)a) \hspace{1.45cm} \forall I\in \ensuremath{\mathcal{I}}_f, a\in A(I).\label{eq:sefce_lp_incentive_optimality} \end{align} \label{eq:eps sefce lp} \end{subequations} \normalsize% In \eqref{eq:sefce_lp_incentive_geq} 
of this LP, $prec(I)$, where $I \in \mathcal{I}_f$, is the set of follower's sequences $\sigma_f$ that precede $I$ in the sense that there is $\hat{I} \in \mathcal{I}_f$ with $\sigma_f(\hat{I})\sqsubseteq \sigma_f(I)$ and $\sigma_f=\sigma_f(\hat{I})a$ for some $a\in A(\hat{I})$. This LP is a modification of the SEFCE LP given by~\citet{Cermak16:Using}. The new LP has two modifications to allow perturbation. First, it has constraints \eqref{eq:xi_constraint} to ensure that the sum of recommendation probabilities on any leader's sequence is at least $\xi_\ell(\epsilon, \sigma_\ell)$. Second, because we are now recommending where to send residual probability for the follower, we must modify the objective in order to give the correct expected value for the leader.\footnote{We use the definition of relevant sequences and the LP from \citet{Stengel08:Extensive} rather than those of \citet{Cermak16:Using}. The latter are not well defined for \eqref{eq:sefce_lp_leader_sequence_constraint} and \eqref{eq:sefce_lp_follower_sequence_constraint}.} We can branch-and-bound on recommendations to the follower in a way that ensures that the final outcome is an SSE. That is guaranteed by the following theorem, which shows that we can add and remove constraints on which follower actions to recommend in a way that guarantees an SSE of the perturbed game as long as the follower is recommended a ``pure'' strategy with respect to the residual probabilities. \begin{restatable}{theorem}{thmsefcepure} If a solution to LP~\eqref{eq:eps sefce lp} is such that for all $I \in \ensuremath{\mathcal{I}}_f$ there exists $a\in A(I)$ such that $p(\sigma_\ell,\sigma_f(I)\hat{a}) = 0$ for all $\hat{a} \in A(I), \sigma_\ell\in rel(\sigma_f(I)a)$ with $\hat{a}\ne a$, then a strategy profile can be extracted in polynomial time such that it is an SSE of the perturbed game instance. 
\label{thm:sefce_pure} \end{restatable} Now it is obvious that the LP~\eqref{eq:eps sefce lp} upper bounds the value of any SSE since any SSE is a feasible solution to the LP. Theorem~\ref{thm:sefce_pure} shows that one way to find an SSE is to find a solution to LP~\eqref{eq:eps sefce lp} where the follower is recommended a pure strategy with respect to the residual probabilities. Since any SSE represents such a solution, we can branch on which actions we make pure at each information set, and use branch-and-bound to prune the space of possible solutions. This approach was proposed by \citet{Cermak16:Using} for computing SSEs in unperturbed games, where they showed that it performs better than a single MIP. Because our LP for perturbed games uses residual probabilities for the follower, we can apply the branching methodology of \citet{Cermak16:Using}. At each node in the search we choose some information set $I$ where more than one action is recommended. We then branch on which action in $A(I)$ to recommend. Forcing a given action is accomplished by requiring all other action probabilities to be zero. Our branch-and-bound chooses information sets according to depth, always branching on the shallowest one with at least two recommended actions. We explore actions in descending order of mass, where the mass on $a\in A(I)$ (with sequence $\sigma_f$) is $\sum_{\sigma_\ell \in rel(\sigma_f)}p(\sigma_\ell,\sigma_f)$. The algorithm finds an SSE of the perturbed game. In the limit as the perturbation approaches zero, this yields a QPSE. No algorithm is currently known for computing such an exact limit. In practice, we pick a small perturbation and run the branch-and-bound search using that value. This immediately leads to an approximate notion of QPSE (akin to approximate refinement notions in non-Stackelberg EFGs~\cite{farina2017regret,kroer2017smoothing}). 
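The branching rule just described can be sketched as follows; the data structures and names are hypothetical, not the paper's implementation. Each follower information set maps to its depth and to the mass that the current LP solution puts on each recommended action:

```python
# Sketch of the branching rule: pick the shallowest follower information
# set with at least two recommended actions, ordering its actions by
# descending mass; return None when the recommendation is already pure.

def pick_branch(infosets):
    # infosets: dict I -> (depth, {action: mass})
    candidates = [
        (depth, I, masses)
        for I, (depth, masses) in infosets.items()
        if sum(1 for m in masses.values() if m > 0) >= 2
    ]
    if not candidates:
        return None  # pure recommendation: an SSE of the perturbed game
    depth, I, masses = min(candidates, key=lambda c: c[0])
    order = sorted((a for a in masses if masses[a] > 0),
                   key=lambda a: -masses[a])
    return I, order

branch = pick_branch({
    "I1": (0, {"a": 0.7, "b": 0.3}),   # two recommended actions
    "I2": (1, {"c": 1.0, "d": 0.0}),   # already pure
})
assert branch == ("I1", ["a", "b"])
```

Forcing the chosen action then amounts to adding constraints that set all other action probabilities at that information set to zero, as described above.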
Another approach is to use our algorithm as an anytime algorithm where one runs it repeatedly with smaller and smaller perturbation values. \section*{Appendix} \subsection*{Omitted Proofs} \lemmanepert* \begin{proof} % % Since $r_f(\epsilon) \in \mathsf{BR}_{\Gamma(\epsilon)}(r_\ell(\epsilon))$ if and only if % $ r_f(\epsilon) \in \arg\max_{r_f: F_f r_f = f_f, r_f \geq \xi_f(\epsilon)} r_\ell(\epsilon)^T U_f r_f $, % introducing variables $\tilde{r}_f = r_f - \xi_f(\epsilon)$ and dropping the constant term $r_\ell(\epsilon)^T U_f \xi_f(\epsilon) $ from the objective, we obtain that $\tilde{r}_f(\epsilon)$ must be an optimal solution to Problem~$\mathcal{P}(\epsilon)$. % % % % \end{proof} \lemmaperta* \begin{proof} Let us consider Problem~$\mathcal{D}(\epsilon)$. % First, observe that, for every information set $I \in \mathcal{I}_f$, the objective function coefficient for the variable $v_{f, I}$ is equal to $\xi_f(\epsilon,\sigma_f(I)) - \sum_{a \in A(I)} \xi_f(\epsilon,\sigma_f(I) a)$. % Assuming $\Gamma(\epsilon)$ is well-defined, such coefficients are positive for every $v_{f, I}$. % Then, in an optimal solution $v_f^\ast \in \mathbb{R}^{|\mathcal{I}_f|+1}$ to Problem~$\mathcal{D}(\epsilon)$, each variable $v_{f,I}$ is set to its minimum given Constraints~\eqref{eq:constraints_rewritten}. % We prove Equation~\eqref{eq:inductive_argument} using a simple inductive argument. % The base case of the induction is when there is no information set $\hat{I} \neq I \in \mathcal{I}_f$ with $I \succeq \hat{I}$. 
% For every action $a \in A(I)$, $v_{f,I} \geq \sum_{\substack{\sigma \in \Sigma : \sigma_f = \sigma_f(I)a }} u_f(\sigma) r_\ell(\epsilon,\sigma_\ell)$, which, using the fact that $v_{f, I}^\ast$ must be set to its minimum possible value given the constraints, implies the following: % \begin{align*} v_{f, I}^\ast &= \max_{a \in A(I)} \sum_{\substack{\sigma \in \Sigma : \sigma_f = \sigma_f(I) a }} u_f(\sigma) r_\ell(\epsilon,\sigma_\ell) = \\ & = \max_{\substack{\hat{r}_f \in R_f(I)}} g_{f, I}( r_\ell(\epsilon), \hat{r}_f), \end{align*} % where the last equality holds since $\sum_{a \in A(I)} \hat{r}_f(\sigma_f(I)a) = \hat{r}_f(\sigma_f(I)) = 1$, by the definition of a realization plan. % % As for the inductive step, let us consider an information set $I \in \mathcal{I}_f$ and assume, by induction, that Equation~\eqref{eq:inductive_argument} holds for every information set $\hat{I} \neq I \in \mathcal{I}_f$ with $I \succeq \hat{I}$. % We can write: % \begin{align*} v_{f, I}^\ast &= \max_{a \in A(I)} \hspace{-0.1cm} \sum_{\substack{\sigma \in \Sigma : \sigma_f =\sigma_f(I) a }} \hspace{-0.6cm} u_f(\sigma) r_\ell(\epsilon,\sigma_\ell) + \hspace{-0.8cm} \sum_{\substack{ \hat{I} \in \mathcal{I}_f: \sigma_f(\hat{I}) = \sigma_f(I)a}} \hspace{-0.8cm} v_{f,\hat{I}}^\ast = \\ & = \max_{a \in A(I)} \sum_{\substack{\sigma \in \Sigma : \sigma_f = \sigma_f(I) a}} u_f(\sigma) r_\ell(\epsilon,\sigma_\ell) \quad+ \\ & \quad\quad + \hspace{-0.5cm} \sum_{\substack{ \hat{I} \in \mathcal{I}_f: \sigma_f(\hat{I}) = \sigma_f(I)a}} \max_{\substack{\hat{r}_f \in R_f(\hat{I})}} g_{f, \hat{I}}(r_\ell(\epsilon), \hat{r}_f) = \\ & = \max_{\substack{\hat{r}_f \in R_f(I)}} g_{f, I}(r_\ell(\epsilon), \hat{r}_f), \end{align*} % where the first equality directly follows from the optimality of $v_f^\ast$, the second one from the inductive hypothesis, while the last equality holds since we have $\sum_{a \in A(I)} \hat{r}_f(\sigma_f(I)a) = \hat{r}_f(\sigma_f(I)) = 1$. 
% \end{proof} \lemmapertb* \begin{proof} % Let $v_f^\ast \in \mathbb{R}^{|\mathcal{I}_f| + 1}$ be an optimal solution to Problem~$\mathcal{D}(\epsilon)$ that satisfies Constraint~\eqref{eq:constraints_rewritten}, for $I \in \mathcal{I}_f$ and $a \in A(I)$, with equality. % % We can write: % \begin{align*} v_{f, I}^\ast &= \hspace{-0.3cm} \sum_{\substack{\sigma \in \Sigma : \sigma_f = \sigma_f(I) a}} \hspace{-0.5cm} u_f(\sigma) r_\ell(\epsilon,\sigma_\ell) + \hspace{-0.3cm} \sum_{\substack{ \hat{I} \in \mathcal{I}_f: \sigma_f(\hat{I}) = \sigma_f(I)a}} \hspace{-0.5cm} v_{f,\hat{I}}^\ast = \\ & =\hspace{-0.3cm} \max_{\substack{\hat{r}_f \in R_f(a)}} g_{f,I}(r_\ell(\epsilon) , \hat{r}_f) = \hspace{-0.3cm}\max_{\substack{\hat{r}_f \in R_f(I)}} g_{f,I}(r_\ell(\epsilon), \hat{r}_f), \end{align*} % where the second equality holds by the optimality of $v_f^\ast$ and the last one by Lemma~\ref{lem:optimal_dual}. % \end{proof} \lemmabr* \begin{proof} % First, let us notice that, for every $I\in \mathcal{I}_f$ and $a \in A(I)$, the following relation holds: % \begin{align}\label{eq:double_impl_agent_sequence} & \hspace{-0.2cm} \max_{\substack{\hat{r}_f \in R_f(a)}} g_{f, I}(r_\ell, \hat{r}_f) = \max_{\substack{\hat{r}_f \in R_f(I)}} g_{f,I}( r_\ell, \hat{r}_f ) \Longrightarrow \\ &\hspace{-0.2cm} \max_{ \substack{\hat{\pi}_f \in \Pi_f : \hat{\pi}_{f a} = 1}} u_{f, I} \left( \pi_\ell, \pi_f \big/_{I} \hat{\pi}_f \right) = \max_{\hat{\pi}_f \in \Pi_f} u_{f, I} \left( \pi_\ell, \pi_f \big/_{I} \hat{\pi}_f \right) \nonumber \end{align} % In order to see this, for $I \in \mathcal{I}_f$, let $Z(I) \subseteq \mathcal{Z}$ be the set of terminal nodes that are potentially reachable from $I$, and, for $h \in Z(I)$ and $\hat{\pi}_f \in \Pi_f$, let $\mathcal{U}_{f,h}(\pi_\ell, \hat{\pi}_f) = u_f(h) \prod_{a \in \sigma_\ell(h)} \pi_{\ell a} \prod_{a \in \sigma_f(h) \setminus \sigma_f(I) } \hat{\pi}_{fa} $. 
% Given the realization equivalence of $r_\ell$ and $\pi_\ell$, and the fact that $\hat{r}_f(\sigma_f(I)) = 1$, the left-hand side in the first line of Equation~\eqref{eq:double_impl_agent_sequence} is equivalent to $\max_{\hat{\pi}_f \in \Pi_f : \hat{\pi}_{f a} = 1} \sum_{h \in Z(I)} \mathcal{U}_{f,h}(\pi_\ell, \hat{\pi}_f) $, while the right-hand side is the same as $ \max_{\hat{\pi}_f \in \Pi_f} \sum_{h \in Z(I)} \mathcal{U}_{f,h}(\pi_\ell, \hat{\pi}_f) $. % Then, dividing both sides of the equality in the first line of Equation~\eqref{eq:double_impl_agent_sequence} by $\sum_{h \in Z(I) } \prod_{a \in \sigma_f(h)} \pi_{f a}$ and using the definition of $u_{f, I}(\pi_\ell,\pi_f \big/_{I} \hat{\pi}_f)$, we get the second line. % % Now, say that the condition of the lemma holds for every $a \in A(I)$. % Clearly, we have $\max_{\hat{\pi}_f \in \Pi_f : \pi_f {=}_{I} \hat{\pi}_f } u_{f,I} ( \pi_\ell, \pi_f \big/_{I} \hat{\pi}_f ) = \sum_{a \in A(I)} \pi_{f a} \max_{\hat{\pi}_f \in \Pi_f : \hat{\pi}_{fa} = 1 } u_{f, I} ( \pi_\ell , \pi_f \big/_{I} \hat{\pi}_f ) $, and, since $\pi_{fa} > 0$ only if $\max_{\substack{\hat{r}_f \in R_f(a)}} g_{f, I}(r_\ell, \hat{r}_f) = \max_{\substack{\hat{r}_f \in R_f(I)}} g_{f,I}(r_\ell, \hat{r}_f)$, Eq.~\eqref{eq:double_impl_agent_sequence} proves the result. % \end{proof} \lemfive* \begin{proof} % % % First, notice that there must exist $\bar k \in \mathbb{N}$ such that, for all $k \in \mathbb{N} : k \geq \bar k$, and for every follower's information set $I \in \mathcal{I}_f$ and action $a \in A(I)$, if $\pi_{f a} > 0$, then ${r}_f(\epsilon_k,\sigma_f(I) a) > \xi_f(\epsilon_k,\sigma_f(I) a)$. % Otherwise, by conditions (ii)-(iii) in Definition~\ref{def:qp_pert}, we would have $\pi_{fa} = 0$. % Let us fix $I \in \mathcal{I}_f$ and $a \in A(I)$. % Suppose that $\pi_{fa} > 0$. 
% For all $k \in \mathbb{N} : k \geq \bar k $, we have that ${r}_f(\epsilon_k,\sigma_f(I) a) > \xi_f(\epsilon_k,\sigma_f(I) a)$, which, by Theorem~\ref{thm:compl_slack}, implies the following: $$ \max_{\substack{\hat{r}_f \in R_f(a)}} g_{f, I}({r}_\ell(\epsilon_k) , \hat{r}_f) = \max_{\substack{\hat{r}_f \in R_f(I)}} g_{f, I}({r}_\ell(\epsilon_k), \hat{r}_f). $$ % Thus, Lemma~\ref{lem:lemma_br_I} allows us to conclude that $\pi_f \in \mathsf{BR}_{I}(\pi_{\ell, k})$ for all $k \in \mathbb{N} : k \geq \bar k$, which proves the result. % % % \end{proof} \thmsefcepure* \begin{proof} First, we check that the leader strategy is valid. The argument is identical to that of \citet{Cermak16:Using}. For the leader strategy at a given information set $I$ we pick an arbitrary $\sigma_f \in rel(\sigma_\ell(I))$ that is played with positive probability and use the value $p(\sigma_\ell(I)a,\sigma_f)$ for all $a \in A(I)$. All $\sigma_f \in rel(\sigma_\ell(I))$ recommend identical probability on $\sigma_\ell(I)a$ due to \eqref{eq:sefce_lp_leader_sequence_constraint} and the fact that we allow only a single follower action to be recommended at every follower information set. The incentive constraints \eqref{eq:sefce_lp_incentive_sequence}--\eqref{eq:sefce_lp_incentive_optimality} are identical to the original constraints given by \citet{Stengel08:Extensive}, so we only need to argue that we correctly represent the value of sending the residual along each sequence. But the value of sending the residual on $\sigma_f$ is simply the original value $\sum_{\sigma_\ell \in rel(\sigma_f)} p(\sigma_\ell,\sigma_f) u_f(\sigma_\ell, \sigma_f)$, except that we can send at most $\eta(\sigma_f)$ probability on $\sigma_f$, plus the value of whichever choice we make for sending residual along descendants of $\sigma_f$. This is exactly the value that we encode in our constraints.
It is easy to see that any SSE is a feasible solution to the LP: since the follower plays a pure strategy we can assign them their pure strategy, and assign the leader SSE strategy the same way across all follower recommendations. \end{proof} \subsection*{Limits of SSEs are QPSSEs} Here, we show that limits of SSEs of perturbed SEFGs are QPSSEs of the original, unperturbed SEFGs, as the magnitude of the perturbation vanishes. \begin{theorem}\label{thm:limit_sse} Given a perturbed SEFG $(\Gamma, \xi_\ell, \xi_f)$, let $\{ \epsilon_k \}_{k \in \mathbb{N}} \rightarrow 0$ and let $\{ (r_\ell(\epsilon_k), r_f(\epsilon_k)) \}_{k \in \mathbb{N}}$ be a sequence of SSEs in $\Gamma(\epsilon_k)$. % Then, any limit point $(\pi_\ell, \pi_f)$ of the sequence $\{ (\pi_{\ell, k}, \pi_{f, k}) \}_{k \in \mathbb{N}}$ is a QPSSE of $\Gamma$, % where $(\pi_{\ell, k}, \pi_{f, k})$ are equivalent to $(r_\ell(\epsilon_k), r_f(\epsilon_k))$ for all $k \in \mathbb{N}$. \end{theorem} \begin{proof} % First, as for Theorem~\ref{thm:limit_se}, Lemma~\ref{lem:limit_follower_response} allows us to conclude that point (i) in Definition~\ref{def:qpsse} holds. % Let us prove point (ii). % % By contradiction, suppose that it does not hold, % i.e., no matter how we choose sequences $\{ \pi_{i,k} \}_{k \in \mathbb{N}}$, for $i \in \mathcal{N}$ and $\pi_i \in \Pi_i$, there are ${I} \in \mathcal{I}_\ell \cup \{ I_\emptyset \}$, $\hat{\pi}_\ell \in \Pi_\ell$, and $ \hat{\pi}_f \in \Pi_f : \hat{\pi}_f \in \mathsf{BR}_{\hat I}(\pi_{\ell,k} \big/_{I} \hat{\pi}_{\ell,k}) $ for all $ \hat I \in \mathcal{I}_f$, with $ u_{\ell} (\pi_{\ell,k} \big/_{I} \pi_\ell, \pi_{f,k} ) < u_{\ell} (\pi_{\ell,k} \big/_{I} \hat{\pi}_\ell, \hat{\pi}_{f,k} ) $. 
% By continuity, there exists $\bar k \in \mathbb{N}$ such that, for all $k \in \mathbb{N}: k \geq \bar k$, $ u_{\ell} (\pi_{\ell,k} \big/_{I} \pi_{\ell,k}, \pi_{f,k} ) = u_{\ell} (\pi_{\ell,k}, \pi_{f,k} ) < u_{\ell} (\pi_{\ell,k} \big/_{I} \hat{\pi}_{\ell,k}, \hat{\pi}_{f,k} ). $ % Let sequence $\{ \hat{\pi}_{\ell, k} \}_{k \in \mathbb{N}}$ be such that $\hat{r}_\ell(\epsilon_k) \in R_\ell(\epsilon_k)$ for all $k \in \mathbb{N}$, where each realization plan $\hat{r}_\ell(\epsilon_k)$ is equivalent to the strategy $\pi_{\ell,k} \big/_{I} \hat{\pi}_{\ell,k}$. % Similarly, let sequence $\{ \hat{\pi}_{f, k} \}_{k \in \mathbb{N}}$ be such that $\hat{r}_f(\epsilon_k) \in R_f(\epsilon_k)$ for all $k \in \mathbb{N}$, where each $\hat{r}_f(\epsilon_k)$ is equivalent to $\hat{\pi}_{f,k}$. % Notice that we can always choose the two sequences as described above, since we enforced point (iii) in Definition~\ref{def:qp_pert}. % Clearly, $\hat{r}_f(\epsilon_k) \in \mathsf{BR}_{\Gamma(\epsilon_k)}(\hat{r}_\ell(\epsilon_k))$, otherwise $ \hat{\pi}_f \notin \mathsf{BR}_{\hat I}(\pi_{\ell,k} \big/_{I} \hat{\pi}_{\ell,k}) $ for some $ \hat I \in \mathcal{I}_f$, a contradiction. % Using the equivalence between strategies and realization plans, for $k \in \mathbb{N} : k \geq \bar k$ we have that $u_\ell(r_\ell(\epsilon_k), r_f(\epsilon_k)) < u_\ell(\hat{r}_\ell(\epsilon_k),\hat{r}_f(\epsilon_k))$, which contradicts the fact that $(r_\ell(\epsilon_k), r_f(\epsilon_k))$ is an SSE of $\Gamma(\epsilon_k)$. % % % % % % % % % % % % \end{proof}
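The proofs above repeatedly invoke realization equivalence between behavioral strategies $\pi_i$ and realization plans $r_i$. As a quick illustration on a hypothetical two-level game tree (not a game from this paper), a realization plan assigns to each sequence the product of the behavioral probabilities of its actions, and then automatically satisfies the sequence-form flow constraints:

```python
# Sequence-form sketch on a toy tree (hypothetical example): info set I0 with
# actions a, b; playing a reaches info set I1 with actions c, d. A realization
# plan maps each sequence sigma to the product of the behavioral probabilities
# of the actions along sigma (the empty sequence gets probability 1).

behavioral = {"a": 0.6, "b": 0.4, "c": 0.7, "d": 0.3}
sequences = {"": [], "a": ["a"], "b": ["b"], "ac": ["a", "c"], "ad": ["a", "d"]}

def realization_plan(pi):
    r = {}
    for sigma, actions in sequences.items():
        prob = 1.0
        for act in actions:
            prob *= pi[act]
        r[sigma] = prob
    return r

r = realization_plan(behavioral)
# Flow (probability-mass conservation) constraints: r(sigma(I)) = sum_a r(sigma(I)a)
assert abs(r[""] - (r["a"] + r["b"])) < 1e-12   # at I0
assert abs(r["a"] - (r["ac"] + r["ad"])) < 1e-12  # at I1
print(r)
```

A behavioral strategy and a realization plan related in this way are exactly what the proofs call realization equivalent; the flow constraints checked here are the defining linear constraints of the polytopes $R_\ell$ and $R_f$.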
\section*{Introduction} Nowadays floating electrons in a bulk diamond are discussed as an alternative to classical leads in new electronic devices \cite{Ray2011}. Laterally confined electrons above liquid helium have been demonstrated and proposed for advanced computing applications \cite{Lyon2006, Platzman1999}. In this paper we discuss floating electrons in a nanodiamond and argue that nanodiamonds can be used as quantum dots holding unpaired electrons for qubits. \subsection*{Experimental evidence of surface states in nanodiamond} Alongside their diamagnetic properties, nanodiamonds demonstrate paramagnetic properties \cite{Belobrov2001, Levin2008}, which is unusual for bulk diamond. The nature of the unpaired electron spins responsible for this paramagnetism is a subject of great interest, because the unpaired electrons are not associated with paramagnetic d- or f-ions \cite{Levin2008}. $^{13}$C~NMR relaxation times provide evidence of unpaired spins in a nanodiamond \cite{Levin2008, Fang2009} and point approximately to a surface localization of the unpaired electrons \cite{Fang2009}. The purpose of our work is to show that the unpaired electrons exhibited in the described experiments belong to surface states of a 5~nm diamond ball. Surface localization of the electronic density agrees with the PEELS scan of a single nanodiamond \cite{Peng2001, Belobrov2003}. It also explains the spikes (two oppositely charged surface layers), which depend on the size of detonation nanodiamonds, observed in electron emission studies \cite{Zhirnov2004}. The g-factor of an unpaired electron in a nanodiamond is $2.0027$ \cite{Belobrov2001}, closer to that of a free electron ($g = 2.0023$) than to that of an electron localized on an atom ($g \approx 1$) or in an NV-center. The existence and formation of NV-centres in a 5~nm diamond ball is still an unsolved problem \cite{Rabeau2007, Smith2009}.
\subsection*{Theory of Tamm surface states} Solid state theory usually assumes that the potential in a crystal is infinite and periodic, so that it possesses translation symmetry; this is generally not true for nanocrystals. One defect is always present --- the surface --- which leads to the problem of electrons at the boundary, i.e., surface states. Surface states are electronic states at the surface of crystals \cite{Davison1996}. They are formed in the transition from solid to vacuum and are found only in the atomic layers closest to the surface. Termination of a material at a surface changes the electronic band structure. Surface states were first described by I.E.~Tamm for an infinite dielectric crystal \cite{Tamm1932g}. The quantum nature of these states makes them universal and identical on the surfaces of bulk diamond and nanodiamond. I.E.~Tamm considered diamond surface states by solving the Schrödinger equation for the Kronig-Penney potential. Tamm also predicted that electrons can move laterally on a surface like free electrons, with an energy of the order of the diamond cohesion energy, $0.1$~eV. However, such floating electrons (surface conductivity of diamond) have not been detected in bulk diamond \cite{Davison1996}. An electron with energy $0.1$~eV corresponds to a de Broglie wavelength of $\approx 4$~nm. This similarity between the electron wavelength and the size of nanodiamonds can explain both the stability of Tamm surface states and the nanodiamond size distribution. In the discussion of surface states, one generally distinguishes between Tamm states \cite{Tamm1932g} and Shockley states \cite{Shockley1939}. However, there is no real physical distinction between the two terms. "Shockley states" usually refers to the nearly-free-electron approximation for clean and ideal surfaces, while "Tamm states" is mostly used within the tight-binding model. This usage is not consistent, and Tamm and Shockley states can coexist in the same system \cite{Klos2005}.
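The quoted correspondence between a $0.1$~eV electron and a $\approx 4$~nm de Broglie wavelength follows from $\lambda = h/\sqrt{2 m_e E}$ and can be checked directly (a short numerical sketch using CODATA constants):

```python
# De Broglie wavelength of an electron with kinetic energy E = 0.1 eV,
# lambda = h / sqrt(2 * m_e * E): checks the ~4 nm figure quoted in the text.
import math

h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
eV = 1.602176634e-19    # joules per electronvolt

E = 0.1 * eV
lam = h / math.sqrt(2.0 * m_e * E)
print(f"lambda = {lam * 1e9:.2f} nm")  # ~3.9 nm, comparable to the nanodiamond size
```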
Analysis of numerical solutions for the electronic structure can answer questions about the energies, the electron localization in a finite crystal, and the necessary conditions for the appearance of surface states. Most important for the analysis is to understand the consequences of surface electron localization and the effects of topological boundaries. Classical solid state theory cannot be used for nanodiamond, because cyclic boundary conditions for the Bloch theorem \cite{Kittel2005} cannot be introduced and a well-defined Brillouin zone does not exist for a limited crystal. \section*{Experimental} In this section we describe the theoretical approach and the computational methods used to calculate floating electrons in nanodiamond and to compare our results with experimental data. I.E.~Tamm considered diamond surface states by solving the Schrödinger equation for the Kronig-Penney potential \cite{Tamm1932g}. An exact treatment of surface states in a 5~nm diamond ball would require the Hamiltonian of 10000 moving carbon atoms and 60000 mutually interacting electrons, including magnetic moment (spin) interactions. This many-electron problem is impossible to solve completely. In view of the small overlap between inner shell states, they can be assumed to be essentially the same as in isolated atoms. Within the Born-Oppenheimer approximation the nuclear positions are frozen. The HOMO electrons can be imagined as a "sea" of valence electrons moving in the crystalline lattice of the nanodiamond. The mechanism looks similar to the theory of normal metals by Abrikosov \cite{Abrikosov1972}, but these electrons cannot yet be called conduction, floating or "free" electrons, because they interact quite strongly with the ions. This strong binding is modelled by deep potential wells at the atomic nuclei. We can thus study the behaviour of the HOMO electrons in the field of the strongly bound electrons (such as the electrons in s orbitals).
Although it is impossible to calculate this field exactly, many conclusions can be drawn from the symmetry properties of the crystal lattice, in particular its periodicity, which an average field must possess as well \cite{Abrikosov1972}. We therefore begin with the analysis of the auxiliary problem of an electron in a periodic and limited field. We consider an electron moving in an external field characterized by a potential energy $U(r)$, which is periodic and limited. This less complex calculation in a one-dimensional periodic potential helps to estimate Tamm states and to interpret the physical and chemical properties of a nanodiamond \cite{Belobrov2001, Levin2008, Fang2009, Peng2001, Belobrov2003, Zhirnov2004, Kulakova2004}. The stationary single-electron Schrödinger equation was solved in the framework of the nearly-free-electron approximation, using standard numerical methods \cite{Press1992v1}. The software for calculation and visualization was written in the Component Pascal language \cite{Mossenbock1993, Mosli1999}. It is available at the project web page \cite{TammstatesURL}, together with a detailed exploration of the numerical methods. \section*{Results and Discussion} As shown in Figure \ref{fig:waveFunctions}, the electron density of some quantum states is localized at the surface, on an energy level positioned between the HOMO and LUMO (Fig. \ref{fig:energies}). This agrees with electronic structure calculations of $n$-mantane ($C_{60}H_{60}$) \cite{Belobrov2003}. \begin{figure}[htb] \begin{center} \includegraphics[width=16cm]{fig1waves.eps} \end{center} \caption{Wave functions in a limited crystal lattice} \label{fig:waveFunctions} \end{figure} The wave functions of a single electron in the potential of a limited lattice, for different eigenenergies, are shown in Figure \ref{fig:waveFunctions}. The wave functions represent the valence band (red) and the conduction band (blue) for a 1D limited crystal with 30 atoms.
This approximately corresponds to a 1D slice of a 5~nm diamond ball. Between the vacuum level and the lattice a high electronic density (bound collective electrons) is located. Two electrons with eigenenergies between the valence band and the conduction band are localized at the surface, between lattice and vacuum, on opposite sides of the 1D crystal. In diamond the conduction band is usually empty; it is represented by the LUMO (blue) here. The number of states in the valence band, with energies including the surface states, is equal to the number of atoms in the lattice. This means that the Tamm surface band is the HOMO of the nanodiamond molecule, lying above the valence band (Figure \ref{fig:waveFunctions}) and always occupied by electrons from the carbon atoms (collective Tamm electrons). \begin{figure}[htb] \begin{center} \includegraphics[width=16cm]{fig2spectr.eps} \end{center} \caption{Electron energy spectrum in a limited crystal lattice} \label{fig:energies} \end{figure} The dependence of the electron energy spectrum of a limited 1D crystal on the lattice constant is presented in Figure \ref{fig:energies}. Red lines correspond to the electrons localized on the atoms of the crystal and represent valence-band-like states. The electrons with wave functions concentrated at the boundaries have higher energies. This level runs parallel to, and separate from, the valence band. The total number of valence states and Tamm states is equal to the number of atoms in the 1D lattice. The lines above the Tamm states represent electrons that are not localized on atoms but delocalized over the whole crystal. These levels are empty in pure diamond structures. Shockley states, if they exist (for example in metals), are split off from that conduction band. The one-dimensional solution can be extended to discuss floating electrons in a 3D nanocrystal. This gives a few floating electrons localized on the surface, free to move over it, with the radial degree of freedom locked at the particle radius.
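The qualitative picture of a 30-atom finite chain with states split out of the band and localized at the two ends can also be reproduced in a tight-binding caricature. This is a deliberate simplification, not the nearly-free-electron Kronig-Penney calculation used in this work, and the parameters are purely illustrative: perturbing the on-site energy of the two terminal atoms (mimicking the broken bonds at the surface) produces two Tamm-like states outside the bulk band, each concentrated at a surface.

```python
# Tight-binding sketch of Tamm-like surface states on a finite 1D chain of
# n atoms: nearest-neighbour hopping t and a shifted on-site energy eps_s on
# the two terminal atoms. For |eps_s| > |t| two states split out of the bulk
# band [-2t, 2t] and localize at the chain ends. Illustrative parameters only.
import numpy as np

n, t, eps_s = 30, 1.0, 2.5
H = np.zeros((n, n))
for i in range(n - 1):          # hopping between neighbouring atoms
    H[i, i + 1] = H[i + 1, i] = -t
H[0, 0] = H[-1, -1] = eps_s     # surface perturbation at both ends

energies, states = np.linalg.eigh(H)   # ascending eigenvalues, column vectors
band_edge = 2.0 * t
surface = energies > band_edge         # states pushed out of the bulk band
print("energies above the band edge:", energies[surface])

# The split-off states carry almost all their weight on the outermost atoms.
for k in np.flatnonzero(surface):
    weight_at_ends = np.sum(states[:3, k] ** 2) + np.sum(states[-3:, k] ** 2)
    assert weight_at_ends > 0.9
```

For a semi-infinite chain the split-off level can be found analytically at $E = \varepsilon_s + t^2/\varepsilon_s$ (here $2.9$), which the finite-chain diagonalization reproduces to high accuracy.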
These free electrons are confined to the nanodiamond and appear as localized spins in different experiments. The energy level of a Tamm floating electron lies approximately in the middle of the band gap of the nanodiamond. We can associate such levels with the classical energy levels of a vacancy in bulk diamond. These results and our discussion of them motivate the interpretation that the vacancy electron, the unpaired free electron, the paramagnetic electron in magnetization, the free electron in EPR, as well as the surface states of a nanodiamond, are various aspects of a Tamm floating electron. This implies that the nanodiamond is a native quantum dot and can be the basis of a qubit holding an electron. As a result, these surface electrons float easily from one atom to the next, so it is not possible to distinguish to which atom they belong, as in the theory of normal metals \cite{Abrikosov1972}. This shared nature of the Tamm electrons is also responsible for the large cohesive energy underlying the protein adhesion of nanodiamonds and explains their specific magnetic properties. Surface states should not be interpreted as dangling bonds, but dangling bonds can be the source of electrons shared over the whole nanoparticle to keep it charge neutral. In this case the nanodiamond should be regarded by chemists as a macromolecule of a new substance, because the electrons are kept by each of the approximately 12000 nuclei and some of them are given up to float near the surface. The Tamm solution led to the understanding of electron motion on a bulk single-crystal diamond surface with an energy of the order of the diamond cohesion energy, 0.1 eV ($\approx 800~cm^{-1}$). This energy corresponds to a de Broglie wavelength of $\approx 4$~nm, which is equal to the size of thermodynamically stable nanodiamonds. The thermodynamic stability of nanodiamonds is supported by the fact that they are produced both by the detonation method and by laser ablation of high-purity carbon black \cite{Hu2009}.
This simple comparison suggests that surface quantum effects can play a large role in the nature of nanodiamonds: one can picture a de Broglie wave propagating along the angular degree of freedom inside the subshell wave function of the surface Tamm state, minimizing the energy of the particle outline and thereby stabilizing it. Tamm surface states are self-consistent with the whole inner structure of the particle. They form a collective excitation of Tamm electrons, a quasi-particle. This Tamm quasi-particle has properties which are exhibited in the Auger process, Zeeman transitions and NMR relaxation, and is perhaps the cause of the stability of 2 to 5~nm diamond. It has been suggested that it is possible to use free electrons as spin qubits \cite{Morton2011}. Floating electrons on the nanodiamond surface can be a good alternative to free electrons on the liquid helium surface. It would be desirable to organize spin-dependent optical channels, as is classic in single-molecule Zeeman spectroscopy, which allows the statistics of quantum events to be measured. This is a necessary condition for the formation of a control system for nanodiamond qubits. A better understanding of the nature of the unpaired electrons opens new applications of floating electrons in electronics. In the near future, floating electrons in nanodiamonds can be useful for quantum computing with a floating point. Results were presented at the Nano and Giga Challenges 2011 conference, poster \#15 \cite{Denisov2011ngs}. \section*{Conclusions} Nanodiamond exhibits unpaired electrons in magnetization, EPR, NMR and Auger relaxation. The wave functions and eigenenergies of a bound electron in a nanodiamond crystal have been calculated. Quantum mechanical analysis shows that unpaired electrons are an intrinsic condition of a nanodiamond as a limited crystal, according to the Tamm theory of surface states. The surface electron floating over a nanodiamond gives a paramagnetic response and stabilizes the nanoparticle in a small range of sizes.
Possibly the spin of the floating electron can be used for floating point calculation in future quantum computers based on nanodiamond qubits. \section*{Abbreviations} NMR~---~nuclear magnetic resonance; ND~---~nanodiamond; NV-centres~---~nitrogen vacancy centres; HOMO~---~highest occupied molecular orbital; LUMO~---~lowest unoccupied molecular orbital; EPR~---~electron paramagnetic resonance; PEELS~---~parallel electron energy loss spectra. \bigskip \section*{Competing interests} The authors declare that they have no competing interests. \section*{Authors' contributions} Ivan A Denisov designed and performed the calculation experiments. Peter I Belobrov proposed the idea of the research, provided the background and helped to interpret the results. Both authors contributed to the preparation and revision of the manuscript and approved its final version. \section*{Acknowledgements} \ifthenelse{\boolean{publ}}{\small}{} Thanks to Tobias Binder for attentive reading of the manuscript. This research was supported by RFBR Grants 07-04-01340-a, 08-02-00259-a and 09-08-98002-p-sibir-a, ME\&S of RF Grant No. 2.2.2.2/5309 and U.S. CRDF Grant RUX0-002-KR-06/BP4M02. \newpage {\ifthenelse{\boolean{publ}}{\footnotesize}{\small} \bibliographystyle{bmc_article}
\section{Introduction} The Loop Vertex Expansion (LVE) was introduced by Rivasseau in 2007 \cite{Rivasseau2007aa} as a new tool in constructive field theory in order to deal with matrix fields. It was then successfully applied to general tensor fields \cite{Gurau2013ac,Delepouve2014aa,Rivasseau2016aa}. For a general exposition in zero dimensions, close to the topic of this article, see \cite{Rivasseau2009aa}. The outcome of the LVE is an expression of the free energy, as well as the generating function of connected moments (or cumulants), as a sum over trees instead of connected graphs. As the number of trees increases only exponentially with the number of vertices and the contribution of each tree is exponentially bounded, the resulting series is convergent. The two ingredients of this expansion are the Hubbard–Stratonovich \cite{Hubbard,Stratonovich} intermediate field representation and the Brydges-Kennedy--Abdesselam-Rivasseau (BKAR) formula \cite{Brydges1987aa,Abdesselam1995aa}. In this paper we study the Borel summability in $1/N$ of the free energy and the cumulants of the quartic $\grp{O}(N)$-vector model in zero dimensions using the LVE (see Section \ref{sec-model} for the definition of the model). Note that here we are not interested in the perturbative expansion (the expansion at small coupling constant), which is well-understood for the quartic $ \grp{O}(N)$-vector model and is Borel summable in $0$ dimensions \cite{Rivasseau2007aa} and in $2$ dimensions \cite{Eckmann1974aa}. On the contrary, Borel summability in $1/N$ is less explored. The associated two-dimensional Euclidean quantum field theory was studied in \cite{BillionnetRenouard}, where the authors prove the Borel summability of the partition function and of the moments of the $\frac{g}{N}\norm{\phi}^{4}_2$ measure. But they discuss neither the free energy nor the cumulants. Passing between the two is rather nontrivial, as one needs to take a logarithm.
The raison d'\^etre of the LVE is to take this logarithm rigorously and uniformly in $N$. A related model, the spherical $\grp{O}(N)$ model (or non-linear $\sigma$-model), has been studied in \cite{Kupiainen1980aa, Frohlich1982ld} where the authors showed that the partition function and the correlation functions at high enough temperature are Borel summable in $1/N$. However, contrary to the model we study here, the spherical $\grp{O}(N)$ model does not have any issues of convergence at large field as the field is restricted to belong to the sphere $\mathbb S^{N-1}$. Techniques similar to the ones we use in this paper have been introduced in \cite{Gurau2014aa} for $N\times N$ matrices. However, only the Borel summability of the perturbative expansion in the coupling constant has been established in \cite{Gurau2014aa}: the status of the $1/N$ series has not been analyzed. The generalization of Borel summability results in $1/N$ to the case of matrices is not straightforward: contrary to the vector case, we do not have a representation of the partition function (with sources) in which $N$ is just a parameter. Consequently it has been impossible so far to prove that such functions can be extended to analytic functions in $1/N$ in some domain.\\ In this article the free energy and the generating function of cumulants of the quartic $\grp{O}(N)$ vector model are considered as functions of the coupling constant $g$ and of $1/N$. We look for the largest domain in the $(g,1/N)$-plane allowing their bivariate analytic continuation. After introducing the model in Section \ref{sec-model}, we present both the main tools and the two main results in Section \ref{sec:main}, namely the analyticity (Thm. \ref{THM1}) and the Borel summability (Thm. \ref{THM2}) domains of the free energy and the cumulants. 
We obtain that if $\lvert\arg g+\arg 1/N\rvert<3\pi/2$, the free energy and the cumulants are analytic in a cardioid-shaped domain in $g$ and, for $\modulus{\arg g}<\pi$, they are Borel summable in $1/N$ along the real axis uniformly in $g$ for $g$ in a slightly smaller cardioid domain. The proofs of these theorems are presented in Sections \ref{sec4} and \ref{sec5}, respectively. In order to keep this article self-contained, we recall the BKAR formula in Lemma \ref{thm:BKAR}, but other useful tools also appear in the appendix. \footnotesize \begin{acknowledgements} L.F. is supported by the EDPIF. R. G. and C.I.P-S. have been supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No818066) and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy EXC-2181/1 - 390900948 (the Heidelberg STRUCTURES Cluster of Excellence). \end{acknowledgements} \normalsize \section{The model and the partition function} \label{sec-model} Before introducing the model, let us adopt the following notation: \begin{itemize} \item We denote by $I_n$ the identity matrix on $\bbR^n$ and by $\mathbbm{1}_n$ the $n\times n$ matrix with all entries equal to $1$. \item Let $C \in M_n(\mathbb{R})$ be symmetric positive and $X,Y \in \bbR^n$. We write $\langle X,Y \rangle_C$ for $\sum_{1\leq i,j \leq n} X_i C_{ij} Y_j$. If $C=I_n$, we omit it. We denote $\langle X,X \rangle$ by $\norm{X}^2$. Whenever $X \in \mathbb{R}^n$ is the argument of a function $F : \mathbb{R}^n \rightarrow \mathbb{C}$, we write \[ \langle \partial,\partial \rangle_C F(X)= \sum_{1 \leq i,j \leq n} C_{ij} \frac{\partial^2F}{\partial{x_i}\partial{x_j}}(X) \; . \] \item Let $C \in M_n(\bbR)$ be symmetric positive semi-definite. We denote by $\mu_C$ the centered Gaussian probability distribution of covariance $C$ on $\bbR^n$.
Note that it exists and is unique (see appendix~\ref{gaussexp}) even if $C$ is degenerate. \item If $F:\mathbb{R}^n \rightarrow \mathbb{C}$, we denote by $\mathbb{E}_C[F(X)]$ the expectation of $F$ with respect to $\mu_C$: \begin{equation} \nonumber \mathbb{E}_C[F(X)]=\int d\mu_C(X) F(X)=[e^{\frac{1}{2}\langle \partial,\partial \rangle_C}F(X)]_{X=0} \;. \end{equation} \item We write $a \lesssim b$ if there is a constant $K >0$ such that $a \leq K b$. If we want to specify that $K$ depends on some parameter $\alpha$, we write $a \lesssim_\alpha b$. \item Throughout this paper, we denote $1/N$ by $\epsilon$ when promoted to a complex variable. \end{itemize} Let $N$ be a positive integer and $g\in \{z\in\bbC\mid \Re z>0\}$. The zero-dimensional quartic $\grp{O}(N)$-vector model is a probability distribution $\nu$ on $\bbR^N$ defined as a perturbed Gaussian distribution in the following way: denoting by $\mathbb{E}$ the expectation with respect to $\nu$, for all $F:\bbR^N\rightarrow\bbC$ $\nu$-measurable, the expectation of $F$ is \begin{align}\nonumber \mathbb{E}[F(X)]=\frac{\mathbb{E}_{I_N}[e^{-\frac{g}{8N}\norm{X}^4}F(X)]}{\mathbb{E}_{I_N}[e^{-\frac{g}{8N}\norm{X}^4}]} \;. \end{align} The Fourier-Laplace transform of the measure, also known in the physics literature as the \textit{partition function with sources} $J\in \bbR^N$, denoted $Z(g,1/N;J)$, is: \begin{align} \nonumber Z\big(g,\frac1N;J\big) = \mathbb{E}_{I_N}[e^{-\frac{g}{8N}\norm{X}^4}]\,\mathbb{E}[e^{\sqrt{N} \langle J,X\rangle}]=\mathbb{E}_{I_N}[e^{-\frac{g}{8N}\norm{X}^4+\sqrt{N} \langle J,X\rangle}] \;. \end{align} In particular, the \textit{partition function} of $\nu$, $Z(g,1/N;0) = \mathbb{E}_{I_N}[e^{-\frac{g}{8N}\norm{X}^4}]$, is the normalisation constant of that measure. \begin{remark} Note that we have made a particular choice of scaling of the sources $J$ with $N$. This scaling ensures that all the cumulants (see below for details) are non-trivial in the large $N$ limit.
In the absence of this scaling, in the large $N$ limit the $\grp{O}(N)$-vector model is a Gaussian model with a complicated covariance corresponding to the resummation of the dominant diagrams in the large $N$ limit, the so-called cactus diagrams. \end{remark} From now on, we will switch to an integral notation, more adapted to the LVE, and more reminiscent of the functional integration in quantum field theory. In particular, in accordance to the usual notation of quantum field theory, we denote $\phi \in \bbR^N$ the random vector so that the partition function with sources rewrites: \begin{align}\label{ON_in0dim} Z\big(g,\frac1N;J\big)&=\int_{\bbR^N}\frac{d^N\phi}{(2\pi)^{N/2}}\; e^{-\frac 12\norm\phi^2-\frac g{8N}\norm\phi^4+ \sqrt{N} \langle J,\phi\rangle}=\int d\mu_{I_N}(\phi)\; e^{-\frac g{8N}\norm\phi^4+ \sqrt{N} \langle J,\phi\rangle} \;. \end{align} Our aim is to study the expansion in $1/N$ of the partition function and the cumulants of the measure $\nu$. At fixed $N\in\bbZ_{>0}$, the integral in eq.~\eqref{ON_in0dim} is absolutely convergent iff $\Re g\geq 0$ and defines $ Z(g,\tfrac{1}{N} ;J)$ as a holomorphic function of $g$ for all $g \in \{ z\in \bbC \mid \Re z>0\}$. In \cite{Rivasseau2007aa} it was noted that performing a change of variables (known as the Hubbard-Stratonovich transformation, or intermediate field representation) one can obtain a convergent expansion for the logarithm of the partition function. 
We thus insert in eq.~\eqref{ON_in0dim} the Hubbard-Stratonovich intermediate field representation ($\imath = \sqrt{-1}$): \begin{equation} \nonumber e^{-\frac{x^2}{2}}={\frac{1}{\sqrt{2 \pi }}} \int_{\mathbb{R}} dy \; e^{ - \frac{y^2}{2} + \imath x y } = \int d\mu_{1}(y) \; e^{\imath xy} \;, \end{equation} for the quartic interaction term, $x=\frac{1}{2}\sqrt{g/N}\norm\phi^2$ and obtain: \begin{align*} Z\big(g,\frac 1N ;J\big) & = \int d\mu_{{I}_N}(\phi)\int d\mu_1(\sigma) \; e^{\imath \frac{1}{2}\sqrt{g/N}\, \norm\phi^2 \sigma +\sqrt{N} \langle J, \phi \rangle} \crcr &= \int d\mu_{1}(\sigma) \; e^{\frac{N}{2}\ln R(\sigma, g/N) + \frac{N}{2}R(\sigma,g/N)\,\norm{J}^2}\nonumber \\&=\int d\mu_{\frac1N}(\sigma) \; e^{\frac{N}{2}\ln R(\sigma, g) + \frac{N}{2}R(\sigma,g)\,\norm{J}^2} \end{align*} where $R:\bbC^2 \setminus \{ (\frac{ 1}{\imath\sqrt{z}},z) \mathrel{:} z \in \bbC^* \} \rightarrow \bbC$, $R(\sigma,z) = {(1-\imath\sqrt{z}\sigma)}^{-1}$ is called the resolvent. \begin{remark} This transformation renders the O($N$) invariance explicit: the partition function depends only on the norm of the sources. \end{remark} We observe that, at fixed $N\in\bbZ_{>0}$, $Z$ can be analytically continued in $g$ to all $\bbC \setminus \bbR_-$. The intermediate field representation also makes $1/N$ an explicit parameter \cite{Kupiainen1980aa} in an integral representation of $ Z(g,1/N;J)$, contrary to eq.~\eqref{ON_in0dim} where $N$ is also present implicitly in the dimension of the integral. One can then study the analyticity properties of $Z(g,1/N;J)$ seen as a function of the variable $\epsilon=1/N$ that, since we are interested in Borel summability along the positive real axis, we promote to $\mathrm H=\{z\in\bbC\mid \Re z>0\}$. We parameterize $\bbC^*$ as $\{(\modulus{z},\alpha)\in\bbR_+^*\times (-\pi,\pi]\}$ and for $z=(\modulus{z},\alpha)\in\bbC^*$, we write $\arg z=\alpha$, and we use the same parametrization for $\mathrm H=\{z\in\bbC^*\mid \modulus{\arg z}<\pi/2\}$.
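For $N=1$ and $J=0$ the chain of equalities above can be checked numerically: the quartic Gaussian integral must agree with its intermediate field representation $\int d\mu_1(\sigma)\,(1-\imath\sqrt g\,\sigma)^{-1/2}$. A sketch using simple trapezoidal quadrature on a truncated line, with the illustrative value $g=1$:

```python
# Numerical check of the Hubbard-Stratonovich / intermediate field
# representation for N = 1, J = 0, g = 1:
#   Z = int dmu_1(phi) exp(-g phi^4 / 8)
#     = int dmu_1(sigma) (1 - i sqrt(g) sigma)^(-1/2).
# Both integrals are evaluated on [-12, 12]; the Gaussian decay makes the
# truncation error negligible.
import numpy as np

def trapz(f, x):
    # trapezoidal rule (works for real and complex integrands)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

g = 1.0
x = np.linspace(-12.0, 12.0, 200001)
gauss = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)

Z_direct = trapz(gauss * np.exp(-g * x**4 / 8.0), x)
resolvent = (1.0 - 1j * np.sqrt(g) * x) ** (-0.5)  # principal branch; no cut crossing for g > 0
Z_hs = trapz(gauss * resolvent, x)

assert abs(Z_hs.imag) < 1e-10        # imaginary parts cancel by sigma -> -sigma symmetry
assert abs(Z_direct - Z_hs.real) < 1e-8
print(Z_direct, Z_hs.real)
```

The agreement, up to quadrature accuracy, illustrates why the $\sigma$-representation can serve as the definition of $Z$ when continuing in $g$ and $1/N$.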
Regarding $g$, since it appears in a square root in the resolvent, it is wiser to take it to be an element of the Riemann surface of the square root, whose basic properties are recalled hereafter: \begin{definition}[Riemann surface of the square root] Let us denote by $\sqrt{\phantom{z}}$ the principal branch of the complex square root defined on $\bbC\setminus\bbR_-$. Let $\Sigma$ be the associated Riemann surface. We write $\liftsqrt{\phantom{\omega}}$ for the analytic continuation of $\sqrt{\phantom{z}}$ to $\Sigma$. $\Sigma$ is a $2$-sheeted covering of $\bbC^*$ and can be parameterized as $\{(\modulus{z},\alpha)\in\bbR_+^*\times (-2\pi,2\pi]\}$ and for $z=(\modulus{z},\alpha)\in\Sigma$, we write $\liftarg z=\alpha$. Let $\mathrm I:z=(\modulus{z},\alpha)\in \bbC\setminus\bbR_-\mapsto (\modulus{z},\alpha)\in\Sigma$ be the canonical injection of $\bbC\setminus\bbR_-$ into $\Sigma$ and $\Pi$ be the projection of $\Sigma$ onto $\bbC^*$, namely $\Pi(\modulus{z},\alpha)=\modulus{z} e^{\imath\alpha}$. The two sheets of $\Sigma$ correspond to the two possible determinations of the square root on $\bbC\setminus\bbR_-$: for $z\in\Sigma$ such that $\liftarg z\in(-\pi,\pi)$, $\liftsqrt z=\sqrt{\Pi z}$ and if $\liftarg z\in(-2\pi,-\pi)\cup(\pi,2\pi]$, $\liftsqrt z=-\sqrt{\Pi z}$ (but note that $(\liftsqrt{z})^2$ is always equal to $\Pi z $). We will denote by $\Sigma^+$ (resp.\@ $\Sigma^-$) the sheet of $\Sigma$ corresponding to $\sqrt{\phantom{z}}$ (resp.\@ to $-\sqrt{\phantom{z}}$), and we will also denote $\widetilde{R}(\sigma,z)\coloneqq (1-\imath\sigma\liftsqrt{z})^{-1}$. \end{definition} Therefore, from now on, the coupling constant $g$ is an element of $\Sigma$, and we aim to find the maximal domain of analyticity of the free energy and the cumulants as functions of $(g,\epsilon)\in \Sigma\times\mathrm H$.
Since this will require us to deal constantly with the arguments of $g$ and $\epsilon$, in the rest of the article we use the following convention: \begin{equation*} \text{we will use interchangeably $\liftarg g$ and $\varphi$, as well as $\arg \epsilon$ and $\theta$.} \end{equation*} At this point, the partition function rewrites as \begin{align*} Z(g,\epsilon ;J) &= \int d\mu_{\epsilon}(\sigma)\; e^{\frac{1}{2\epsilon}\ln \widetilde{R}(\sigma, g) + \frac{1}{2\epsilon}\widetilde{R}(\sigma,g)\,\norm{J}^2}\,, \end{align*} which enables us to analytically continue it from $\mathrm I(\bbR_+^*)\times\{1/N\mid N\in \bbZ_{>0}\}$ to $\mathrm I(\bbC\setminus\bbR_-)\times \mathrm H$. In order to extend this continuation to some wider subdomain of $\Sigma\times\mathrm H$, for all $\psi \in (\theta-\pi/2,\theta+\pi/2)$ we define $ Z_\psi(g,\epsilon ;J)$ by \begin{align} Z_\psi(g,\epsilon ;J) &= \int_{e^{\imath \frac{ \psi}{2} }\bbR} \frac{d\sigma}{\sqrt{2\pi\epsilon}} \; e^{-\frac{\sigma^2}{2\epsilon}+\frac{1}{2\epsilon}\ln \widetilde{R}(\sigma, g) + \frac{1}{2\epsilon}\widetilde{R}(\sigma, g)\,\norm{J}^2}\nonumber\\ &= \int_{\bbR} \frac{d\sigma}{e^{-\imath \frac\psi2}\sqrt{2\pi \epsilon }} \; e^{-\frac{\sigma^2}{2\epsilon e^{-\imath \psi}}+\frac{1}{2\epsilon}\ln \widetilde{R}(\sigma e^{\imath \frac\psi2}, g) + \frac{1}{2\epsilon}\widetilde{R}(\sigma e^{\imath \frac\psi2}, g)\,\norm{J}^2} \nonumber \\ &= \int d\mu_{\epsilon e^{-\imath \psi}}(\sigma) \; e^{\frac{1}{2\epsilon}\ln \widetilde{R}(\sigma e^{\imath \frac\psi2} , g) + \frac{1}{2\epsilon}\widetilde{R}(\sigma e^{\imath \frac\psi2}, g)\,\norm{J}^2} \;. \label{eq-Ztiltcontour} \end{align} The integral is convergent and, furthermore, $Z_\psi(g,\epsilon ;J)=Z(g,\epsilon ;J)$ is independent of $\psi$.
Indeed, let $s_{g,\epsilon,J}:\sigma\in\bbC\mapsto \frac{1}{\sqrt{2\pi \epsilon }} \; e^{-\frac{\sigma^2}{2\epsilon}+\frac{1}{2\epsilon}\ln \widetilde{R}(\sigma , g) + \frac{1}{2\epsilon}\widetilde{R}(\sigma , g)\,\norm{J}^2} $ so that $ Z_\psi(g,\epsilon ;J)=\int_{\bbR}e^{\imath \frac\psi2} s_{g,\epsilon,J}(\sigma e^{\imath \frac\psi2})d\sigma$. We then have $\frac{d}{d\psi}Z_\psi(g,\epsilon ;J)=\int_{\bbR}\frac{\imath}{2}e^{\imath \frac\psi2}d\sigma [s_{g,\epsilon,J}(\sigma e^{\imath \frac\psi2})+\sigma e^{\imath \frac\psi2} {s'}_{g,\epsilon,J}(\sigma e^{\imath \frac\psi2})]=0$ by integration by parts.\\ Before turning to the analyticity domain of the free energy and the cumulants, for the sake of comparison, we note the following result: \begin{proposition}\label{prop1} The partition function with sources of the zero-dimensional $ \grp{O}(N)$-vector model, $ Z(g,\epsilon ;J)$, can be analytically continued in $(g,\epsilon)$ from $\mathrm I(\bbR_+^*)\times \{1/N \mid N \in \bbZ_{>0}\}$ to the following domain of $\Sigma\times\mathrm H$: \begin{equation*} \mathfrak{B} = \Big\{(g,\epsilon) \in \Sigma\times\mathrm H \,\big\vert\, \liftarg g+\arg \epsilon \in \big(-\frac{3\pi}{2},\frac{3\pi}{2}\big) \Big\}. \end{equation*} \end{proposition} \begin{remark} In the sequel, to prove Borel summability of the free energy or the cumulants of $\nu$, we will rely on the Nevanlinna-Sokal theorem \cite{Sokal1980aa}. One important hypothesis of this theorem is analyticity in a disk tangent to the imaginary axis and centered at a positive real number. We call such a domain a Sokal disk, see remark~\ref{remarksokaldisk}. Note that for $g \in \mathrm I(\bbC\setminus\bbR_-)$, the analyticity domain in the $\epsilon$-plane of the partition function with sources indeed contains a Sokal disk, since for all $\theta \in (-\pi/2,\pi/2)$ and $\varphi\in(-\pi,\pi)$, $\varphi+\theta\in(-3\pi/2,3\pi/2)$.
\end{remark} In order to prove Proposition \ref{prop1}, we need the following bound on the resolvent: \begin{lemma} For all $(\sigma, g) \in \mathbb{C}^*\times\Sigma $, \begin{equation} \modulus{\widetilde{R}(\sigma,g)} \leq \frac{1}{\modulus{\cos({\arg \sigma+ \frac{1}{2}\liftarg{g}})}}\,. \label{resbound} \end{equation} \end{lemma} \noindent This bound is trivial for $\liftsqrt{g}\sigma\in \imath\bbR$, which reflects the fact that the resolvent has a pole at $\sigma=1/\imath\liftsqrt{g}$. \begin{proof} This directly stems from the inequality $\modulus{1-\imath z}\geq\modulus{\cos\arg z}$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop1}] We start with the intermediate field representation \eqref{eq-Ztiltcontour}. Let $\mathfrak{H}_\psi$ be the following manifold: \begin{equation} \nonumber \mathfrak{H}_\psi\coloneqq\big\{(\sigma,g,\epsilon)\in\bbC\times\Sigma\times\mathrm H\mid\sigma e^{\imath \frac\psi2} \liftsqrt g \in\bbC\setminus\imath\bbR\big\}. \end{equation} We let $f_\psi$ from $\mathfrak{H}_\psi$ to $\bbC$ be the integrand in eq.~\eqref{eq-Ztiltcontour}: \begin{equation}\nonumber f_\psi(\sigma,g,\epsilon)=\frac{1}{e^{-\imath \frac\psi2} \sqrt{2\pi \epsilon}}e^{-\frac{\sigma^2}{2\epsilon e^{-\imath \psi}}+\frac{1}{2\epsilon}\ln\widetilde{R}(\sigma e^{\imath \frac\psi2}, g) + \frac{1}{2\epsilon}\widetilde{R}(\sigma e^{\imath \frac\psi2},g)\,\norm{J}^2}. \end{equation} $f_\psi$ is holomorphic on $\mathfrak{H}_\psi$ and $\int_\bbR f_\psi(\sigma,g,\epsilon)d\sigma$ coincides with \eqref{ON_in0dim} for $(g,\epsilon)\in\mathrm I(\bbR_+^*)\times\{1/N\mid N\in\bbZ_{>0}\}$.
For all $\sigma\in\bbR$, $(g,\epsilon)\mapsto f_\psi(\sigma,g,\epsilon)$ is holomorphic on $ \mathfrak{A}_\psi\coloneqq\{(g,\epsilon)\in\Sigma\times\mathrm H\mid e^{\imath \frac\psi2} \liftsqrt g\in\bbC\setminus\imath\bbR\}$, which has two connected components, namely \begin{equation}\label{Apsiplus} \mathfrak{A}_{\psi}^+=\big\{(g,\epsilon)\in\Sigma\times\mathrm H\mid\varphi\in(-\pi-\psi,\pi-\psi)\big\} \end{equation} and $ \mathfrak{A}_{\psi}^-=\{(g,\epsilon)\in\Sigma\times\mathrm H\mid\varphi\in (-2\pi,2\pi]\setminus(-\pi-\psi,\pi-\psi)\}$. Moreover, as $\modulus{\ln\widetilde{R}(\sigma e^{\imath \frac\psi2}, g)} \le \modulus{\ln\modulus{\widetilde{R}(\sigma e^{\imath \frac\psi2}, g)}}+\pi$, thanks to the bound \eqref{resbound}, the integral of $f_\psi$ is absolutely convergent, uniformly in $g$ and $\epsilon$, on any compact subset of $\mathfrak{A}_\psi^+$. Thus, it defines an analytic continuation of $Z_\psi=Z$ to $\mathfrak{A}_\psi^+$. Therefore, $Z$ is analytic on $\bigcup_{\psi \in (\theta-\pi/2,\theta+\pi/2)} \mathfrak{A}_\psi^+=\mathfrak{B}$, which concludes the proof of Proposition~\ref{prop1}. \end{proof} \begin{remark} This analytic continuation is the largest one that can be found thanks to the tilt of the contour of integration, since for $|\psi-\theta|\geq\pi/2$, the integral \eqref{eq-Ztiltcontour} becomes divergent. \end{remark} \begin{remark} In order to clarify why the tilting of the contour was needed, consider the following. Suppose we are interested in the function $h:\{\Re z>0\}\rightarrow\bbC, z\mapsto \int_{\bbR_+}e^{-zt}dt$ and ignore that $h(z)=1/z$. Clearly, $h$ is analytic on its domain of definition. We aim to analytically continue $h$ to some maximal domain. To this end, we observe that $h_{\psi}: z=\modulus{z}e^{\imath\alpha}\mapsto \int_{e^{\imath\psi}\bbR_+}e^{-zt}dt$ is analytic iff $\modulus{\alpha+\psi}<\pi/2$.
Moreover, if $\modulus{\psi}<\pi$, the domains of analyticity of $h$ and $h_{\psi}$ overlap, and $h=h_{\psi}$ where they are both analytic. Thus $h_{\psi}$ is an analytic continuation of $h$ to a Riemann sheet. One needs to check whether this analytic continuation has a discontinuity across the negative real axis (in which case $0$ is a branch point) or a pole. In our case one gets a pole, but applying the same strategy to $z\mapsto \int_{\bbR}e^{-zt^2}dt$ one obtains a branch point of order $2$. We apply the same strategy to $Z$. \end{remark} Our aim is to obtain similar results for the free energy and the cumulants. As such quantities depend on the logarithm of the partition function $Z$ and $Z$ has zeroes, they will not simply inherit the analyticity properties of $Z$: we expect the domain of analyticity of $\ln Z$ to be smaller than that of $Z$. Since $Z(0,\epsilon;J)$ is non-vanishing, for $g$ close to $0$, $Z(g,\epsilon;J)$ is non-vanishing too. However, for $g$ real and negative, $Z$ is discontinuous and $g=0$ does not belong to the domain of analyticity of $Z$ or $\ln Z$. In order to identify some domain of analyticity of $\ln Z$, we will rewrite it as a uniformly convergent series of analytic functions. This series is indexed by trees and converges for a small enough coupling constant, thereby defining $\ln Z$ in some domain. This is in contrast with the perturbative expansion, which expresses the partition function and the cumulants as divergent series.\\ The core of our arguments heavily relies on the \textit{Loop Vertex Expansion} (LVE) \cite{Magnen2008aa,Rivasseau2007aa}, which we now present.
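As a small numerical illustration of the contour-rotation mechanism in the toy example above (our own check; the function name `h_psi` and the discretization parameters are ours), one can evaluate the rotated integral at a point $z$ with $\arg z>\pi/2$, outside the original half-plane of convergence, and compare with the expected continuation $1/z$:

```python
import cmath

def h_psi(z, psi, n=60000, cut=60.0):
    # Midpoint-rule approximation of the integral of exp(-z t) dt over the
    # rotated ray e^{i psi} R_+, parametrized as t = s e^{i psi}, 0 < s < cut.
    # The integral converges iff |arg z + psi| < pi/2.
    h = cut / n
    phase = cmath.exp(1j * psi)
    total = 0j
    for k in range(n):
        total += cmath.exp(-z * (k + 0.5) * h * phase)
    return total * h * phase

z = 1.5 * cmath.exp(1j * 2.0)   # arg z = 2.0 > pi/2: outside the original domain
val = h_psi(z, psi=-1.2)        # arg z + psi = 0.8 < pi/2: the rotated ray converges
assert abs(val - 1 / z) < 1e-5  # matches the analytic continuation 1/z
```

The same rotation with $|\arg z+\psi|\geq\pi/2$ produces a divergent integral, which is the numerical counterpart of the restriction on $\psi$ in the text.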
\section{The Loop Vertex Expansion of the cumulants}\label{sec:main} In this section, we perform the LVE of the cumulants defined hereafter: \begin{definition}[The rescaled cumulants] For all $k\geq 1$, one defines the \textit{rescaled cumulant} of order $2k$, $\mathfrak{K}^{2k}(g,\epsilon)$, by the following relation: \begin{align} \label{scaled_cumul} \left( \sum_{\pi\in P_2(2k)} \prod_{(a_i,a_j) \in \pi} \delta_{a_i,a_j} \right) \mathfrak{K}^{2k}_\psi(g,\epsilon)&:= \epsilon \frac{\partial^{2k}}{\partial_{J_{a_1}}...\partial_{J_{a_{2k}}}} \left. \ln{Z_\psi(g,\epsilon ; J)} \right|_{J=0}, \end{align} where $ P_2(2k)$ is the set of pairings of $2k$ elements, and we set $\mathfrak K^{2k}(g,\epsilon)=\mathfrak K^{2k}_{\psi=0}(g,\epsilon)$. \end{definition} This scaling is chosen so as to have a well-defined large-$N$ limit, and the advantage of using $\mathfrak{K}^{2k}$ over the RHS of \eqref{scaled_cumul} is that the former is manifestly O($N$) invariant. Since only $\mathfrak{K}^{2k}$ will appear below, we refer to them as the cumulants.\\ Before going to the Loop Vertex Expansion, let us introduce some notation. First of all, in the following, we will denote by $\mathcal T_n$ the set of all labelled trees with $n$ vertices. To a tree $T\in \mathcal T_n$, we associate the symmetric $n \times n$ matrix $W^{T}(u)$ with diagonal entries equal to $1$ and off-diagonal ones $W_{ij}^{T}(u)=W_{ji}^{T}(u)=w_{ij}^{T}(u)$ as given by eq.~\eqref{eq:BKAR}. Then, the LVE is written in terms of $$C^n_k = \big\{ (i_1,...,i_k)\in \{1,...,n\}^k \mid \text{for all } a,b \in \{1,...,k\},a \neq b \Rightarrow i_a \neq i_b \big\} \; .$$ The notation comes from the fact that $C_k^n$ is the configuration space of $k$ particles on the discrete $n$-point space.
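The counting behind these two sets is easy to cross-check numerically (our own sketch, using the standard Prüfer encoding of labelled trees): Cayley's formula gives $|\mathcal T_n|=n^{n-2}$, every tree satisfies $\sum_i d_i(T)=2(n-1)$, and $|C^n_k|=n!/(n-k)!$.

```python
from itertools import product
from math import factorial

def prufer_degrees(seq, n):
    # A Prüfer sequence of length n-2 over {0,...,n-1} encodes a unique
    # labelled tree on n vertices; vertex v has degree 1 + (multiplicity of v).
    return [1 + seq.count(v) for v in range(n)]

n, k = 5, 3
trees = [prufer_degrees(seq, n) for seq in product(range(n), repeat=n - 2)]
assert len(trees) == n ** (n - 2)                   # Cayley: |T_n| = n^{n-2}
assert all(sum(d) == 2 * (n - 1) for d in trees)    # sum of degrees = 2(n-1)

cilia = [c for c in product(range(n), repeat=k) if len(set(c)) == k]
assert len(cilia) == factorial(n) // factorial(n - k)   # |C^n_k| = n!/(n-k)!
```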
With this notation at hand, the LVE allows us to express the cumulants as a sum over \textit{ciliated trees}, for which we adopt the following convention: \begin{equation} \text{couples $(T,\mathfrak{c})$ made of a tree $T$ (with $n$ vertices) and \emph{cilia} $\mathfrak{c}\in C^n_k$ are denoted by $T_{\mathfrak{c}}$}.\tag{$\star$} \label{T_c} \end{equation} For $i\in\{1,...,n\}$, we also denote $d_i(T_{\mathfrak{c}})=c(i)+d_i(T)$ with $c(i)=\textbf{1}_{i\in \mathfrak{c}}$ the coordination (or degree) of the vertex $i$ in $T_{\mathfrak{c}}$ including the cilia. The Loop Vertex Expansion of the cumulants is given by the following proposition: \begin{proposition}[Loop Vertex Expansion of the cumulants] For all $k\geq 1$ and $\psi \in (\theta- \pi/ 2,\theta+\pi /2)$, the cumulants are given by the series \begin{multline*} \mathfrak{K}_{\psi}^{2k}(g,\epsilon) = 2^{k-1} \sum_{n \geq k} \frac{1}{n!} {\left(\frac{-\Pi g}{2}\right)}^{n-1}\\\times \sum_{T_{\mathfrak{c}}\in \mathcal{T}_n\times C^n_k} \int du_{T} \int d\mu_{\epsilon e^{-\imath \psi} W^{T}(u)}(\sigma) \bigg\{ \prod_{i=1}^n (d_i(T_{\mathfrak{c}}) -1)! \, \widetilde{R}^{d_i(T_{\mathfrak{c}})}(\sigma^{(i)} e^{\imath \frac \psi2}, g) \bigg\} \;, \end{multline*} for all $(g,\epsilon)\in\Sigma\times\mathrm H $ such that it converges (in particular for $(g,\epsilon)\in\mathrm I(\bbR_+^*)\times\bbR_+^* $). \end{proposition} \begin{proof} To perform the LVE, we first expand the partition function with sources as expressed in eq.~\eqref{eq-Ztiltcontour}. From now on, we fix $\psi \in (\theta- \pi/ 2,\theta+\pi /2)$ and for $(g,\epsilon)\in\mathfrak{B} $ we start from: \begin{align}\nonumber Z_{\psi}(g,\epsilon ;J) &= \int d\mu_{\epsilon e^{-\imath \psi}}(\sigma) \; e^{\frac{1}{2\epsilon}\ln \widetilde{R}(\sigma e^{\imath\frac \psi2}, g) + \frac{1}{2\epsilon}\widetilde{R}(\sigma e^{\imath\frac \psi2}, g)\,\norm{J}^2}\,.
\end{align} We expand the exponential inside the integral and, using Fubini's theorem, we exchange the sum and the integral to obtain: \begin{equation}\label{partfunc} Z_{\psi}(g,\epsilon ;J) = \sum_{n=0}^\infty \frac 1{(2\epsilon)^nn!} \int d\mu_{\epsilon e^{-\imath \psi }}(\sigma) \big[\ln \widetilde{R}(\sigma e^{\imath\frac \psi2}, g) + \widetilde{R}(\sigma e^{\imath\frac \psi2}, g)\norm{J}^2\big]^n \;. \end{equation} The use of Fubini's theorem is justified by the following lemma: \begin{lemma}\label{thm-ZZtilde} Let $a\in (0,1/2]$, $(g,\epsilon)\in\Sigma\times \mathrm H$ and $\psi\in (\theta-\pi/2,\theta+\pi/2)$. Then, if $(g,\epsilon)\in\mathfrak A^+_\psi$ (see eq.~\eqref{Apsiplus}), there exist two non-negative reals $C_1, C_2$ independent of $\modulus{g}$ and $\modulus{\epsilon}$ such that for $n$ large enough: \begin{equation*} A_n:= \frac{1}{(2\modulus{\epsilon})^nn!}\bigg| \int d\mu_{\epsilon e^{-\imath \psi}}(\sigma) \big[{\ln\widetilde{R}(\sigma e^{\imath\frac \psi2}, g) +\widetilde{R}(\sigma e^{\imath\frac \psi2}, g)\norm{J}^2}\big]^n \bigg|\leq \frac{C_1^n}{\modulus{\epsilon}^n n!}+\frac{C_2^n \modulus{g}^{an}}{{(\modulus{\epsilon}^n n!)}^{(1-a)}}\,. \end{equation*} In particular, at fixed $\psi\in (\theta-\pi/2,\theta+\pi/2)$, the sum of the $A_n$'s has infinite radius of convergence in both $\modulus{g}$ and $1/\modulus{\epsilon}$ for all $(g,\epsilon)\in\mathfrak A_\psi^+$. \end{lemma} \begin{proof}It is convenient to perform the change of variable $\sigma \rightarrow \sigma \sqrt{\modulus{g}}$ so that $A_n$ rewrites as: \begin{equation*} \frac{1}{(2\modulus{\epsilon})^n n!}\bigg| \int d\mu_{\modulus{g}\epsilon e^{-\imath \psi}}(\sigma)[\ln \widetilde{R}(\sigma e^{\imath \frac\psi2},\frac{g}{\modulus{g}} )+\widetilde{R}(\sigma e^{\imath \frac\psi2},\frac{g}{\modulus{g}} ) \norm{J}^2]^n \bigg|.
\end{equation*} Then, thanks to the bound in eq.~\eqref{resbound}, if $\modulus{\psi+\varphi}<\pi$, $\modulus{\widetilde{R}(\sigma e^{\imath \frac\psi2},\frac{g}{\modulus{g}} )} \cdot \norm{J}^2 \leq \frac{\norm{J}^2}{\cos{(\frac{\psi+\varphi}{2})}}$. Furthermore: \begin{equation*} \big| { \ln{ \widetilde{R}(\sigma e^{\imath \frac\psi2},\frac{g}{\modulus{g}} )} } \big|\leq \big[ \big(-\frac{1}{2}\ln\lvert \widetilde{R}^{-2}(\sigma e^{\imath \frac\psi2},\frac{g}{\modulus{g}} ) \rvert\big)^2+\pi^2 \big]^{1/2}. \end{equation*} Then, using $\lvert \widetilde{R}^{-2}(\sigma e^{\imath \frac\psi2},\frac{g}{\modulus{g}} ) \rvert = 1 +2\sin{\frac{\psi+\varphi}{2}}\modulus{\sigma}+\sigma^2\leq(1+\modulus{\sigma})^2$, we get $\ln{\lvert \widetilde{R}^{-2}(\sigma e^{\imath \frac\psi2},\frac{g}{\modulus{g}} ) \rvert}\leq \ln{(1+\modulus{\sigma})^2}\leq 2(1+\modulus{\sigma})^{2a}$ for $0<a<1/2$ implying $ | { \ln{\widetilde{R}(\sigma e^{\imath \frac\psi2},\frac{g}{\modulus{g}} )} }|\leq \sqrt{(1+\modulus{\sigma})^{4a}+\pi^2 } \le (1+\modulus{\sigma})^{2a}+\pi $ as $\pi>1$ so that: \begin{align*} A_n &\leq\frac{1}{(2\modulus{\epsilon})^n n!\sqrt{\cos(\psi-\theta)}}\int d \mu_{\frac{\modulus{g\epsilon}}{\cos(\psi-\theta)}}(\sigma) \big[(1+\modulus{\sigma})^{2a}+\pi +\norm{J}^2\cos^{-1}{(\frac{\psi+\varphi}{2})}\big]^n\\ &\leq \frac{1}{(2\modulus{\epsilon})^n n!\sqrt{\cos(\psi-\theta)}}\int d \mu_{\frac{\modulus{g\epsilon}}{\cos(\psi-\theta)}}(\sigma) \sum_{k=0}^n\binom{n}{k}(1+\modulus{\sigma})^{2ak} \big[\pi +\norm{J}^2\cos^{-1}{(\frac{\psi+\varphi}{2})}\big]^{n-k}\\ &\leq \frac{\big[\pi +\norm{J}^2\cos^{-1}{(\frac{\psi+\varphi}{2})}\big]^{n}}{\modulus{\epsilon}^n n!\sqrt{\cos(\psi-\theta)}}\int d \mu_{\frac{\modulus{g\epsilon}}{\cos(\psi-\theta)}}(\sigma) (1+\modulus{\sigma})^{2an}\,, \end{align*} where we used the fact that $\pi +\norm{J}^2\cos^{-1}{(\frac{\psi+\varphi}{2})}$ and $1+\modulus{\sigma}$ are greater than one. 
At this stage, rewriting: \begin{align*} \int d \mu_{\frac{\modulus{g\epsilon}}{\cos(\psi-\theta)}}(\sigma) (1+\modulus{\sigma})^{2an} &= \int_{\modulus{\sigma}<1} d \mu_{\frac{\modulus{g\epsilon}}{\cos(\psi-\theta)}}(\sigma) (1+\modulus{\sigma})^{2an} +\int_{\modulus{\sigma}>1} d \mu_{\frac{\modulus{g\epsilon}}{\cos(\psi-\theta)}}(\sigma) (1+\modulus{\sigma})^{2an} \\ &\leq \int_{\modulus{\sigma}<1} d \mu_{\frac{\modulus{g\epsilon}}{\cos(\psi-\theta)}}(\sigma) 2^{2an} +\int_{\modulus{\sigma}>1} d \mu_{\frac{\modulus{g\epsilon}}{\cos(\psi-\theta)}}(\sigma) (2\modulus{\sigma})^{2an} \\ &\leq 2^{2an} \bigg(\int_{\bbR} d \mu_{\frac{\modulus{g\epsilon}}{\cos(\psi-\theta)}}(\sigma) +\int_{\bbR} d \mu_{\frac{\modulus{g\epsilon}}{\cos(\psi-\theta)}}(\sigma) \modulus{\sigma}^{2an}\bigg)\\ &\leq 4^{an}\bigg(1+\frac{1}{\sqrt{2\pi}}\big[\frac{\cos{(\psi-\theta)}}{\modulus{g\epsilon}}\big]^{1/2}\int_{\bbR_+}e^{-\cos{(\psi-\theta)}\frac{t}{2\modulus{g\epsilon}}}t^{an-1/2}dt\bigg)\\ &\leq 4^{an}\bigg(1+\frac{1}{\sqrt{\pi}}\big[\frac{2\modulus{g\epsilon}}{\cos{(\psi-\theta)}}\big]^{an}\Gamma\big(an+\frac{1}{2}\big)\bigg)\,, \end{align*} and using the asymptotics of the Gamma function we get $A_n \leq \frac{C_1^n}{\modulus{\epsilon}^n n!}+\frac{C_2^n \modulus{g}^{an}}{{(\modulus{\epsilon}^n n!)}^{(1-a)}}$.\end{proof} We now use the copies trick as stated in Lemma~\ref{replica} (see Appendix~\ref{replicatr} for the proof). In our integral notation, Lemma~\ref{replica} rewrites as: \begin{lemma}[The copies trick] Let $n$ be a positive integer, $z\in\bbC$ with $\Re z>0$ and $F \in L^n(\bbR,\mu_{\textfrac{\lvert z\rvert^2}{\Re z} })$ a $\bbC$-valued function.
Then $F^{\otimes n }:\bbR^n\to\bbC$, $X=(X_i)_{1\leq i\leq n} \mapsto \prod_{i=1}^n F(X_i)$ is in $L^1(\bbR^n,\mu_{{\textfrac{\lvert z\rvert^2 \mathbbm{1}_n}{\Re z}} })$ and furthermore we have: \begin{align}\label{eq:replica} \int d\mu_{z}(x) F^n(x) = \int d\mu_{z \mathbbm{1}_n}(X) F^{\otimes n}(X)=\int d\mu_{z \mathbbm{1}_n}(X) \prod_{i=1}^n F(X_{i}) \;. \end{align} \end{lemma} Here, $\mathbbm{1}_n$ denotes the $n\times n$ matrix with all entries equal to $1$, so that the $n$ Gaussian variables are fully correlated. This lemma replaces a single integration variable $\sigma$ by $n$ integration variables, which we call the copies of $\sigma$ and denote by $(\sigma^{(i)})_{1\leq i \leq n}$, where we use parentheses to avoid confusion with the O($N$) indices. With this notation, using eq.~\eqref{eq:replica} in eq.~\eqref{partfunc}, we obtain: \begin{align} &Z_{\psi}(g,\epsilon ; J) =\sum_{n \geq 0} \frac{1}{(2\epsilon)^nn!} \int d\mu_{\epsilon e^{-\imath \psi} \mathbbm{1}_{n}}(\sigma) \prod_{i=1}^n \bigg\{ \ln \widetilde{R}(\sigma^{(i)}e^{\imath \frac\psi2}, g) + \widetilde{R}(\sigma^{(i)}e^{\imath \frac\psi2}, g)\norm{J}^2 \bigg\}\label{square_brackets} \\ ={}& \sum_{n \geq 0} \frac{1}{(2\epsilon)^nn!} \left[ \exp\Big(\frac{\epsilon e^{-\imath \psi}}{ 2} \langle \partial, \partial \rangle_{X(x)} \Big) \prod_{i=1}^n \bigg\{ \ln\widetilde{R}(\sigma^{(i)} e^{\imath \frac \psi2}, g) + \widetilde{R}(\sigma^{(i)} e^{\imath \frac \psi2}, g)\norm{J}^2 \bigg\} \right]_{\sigma^{(i)}=0,x_{ij}=1}\,,\nonumber \end{align} where $[X(x)]_{ii}=1$ and $[X(x)]_{ij}=x_{ij}$ for $i\neq j$. We now rewrite the above expansion as a sum indexed by forests over labelled vertices of analytic functions and, consequently, its logarithm as a sum indexed by trees over labelled vertices of analytic functions. The crucial point is that at order $n$ the number of such trees is of order $O(1)^n n!$, much smaller than the number of Feynman diagrams, which is of order $O(1)^n(2n)!$, and the expansion is convergent.
This rewriting is obtained thanks to the BKAR formula \cite{Brydges1987aa,Abdesselam1995aa}, which will be applied to the $x=(x_{ij})$ parameters. \begin{lemma}[BKAR formula] \label{thm:BKAR} Let $f: \mathbb{R}^{\frac{n(n-1)}{2}}\rightarrow {\mathbb C}$ be a smooth function. If $x\in\bbR^{\frac{n(n-1)}{2}}$, we denote its components by $x_{ij}$ for $i,j=1,2,\dots,n$, $i<j$. Let $F$ be a forest with vertex set $V(F)=\{1,2,\dotsc,n\}$. If there is an edge between vertices $i$ and $j$ ($i<j$), we write $(i,j)\in E(F)$. If both $i$ and $j$ belong to the same connected component of $F$, we let $P_{i\leftrightarrow j}^{F}$ stand for the unique path in $F$ between $i$ and $j$. Then we have: \begin{equation} f(x) \big|_{x=\mathbf{1}}=\sum_{F\in \mathcal F_n}\Big(\prod_{(i,j)\in E(F)} \int_{0}^{1} du_{ij}\Big) \, \frac{\partial^{|E(F)|} f(x) }{\prod_{(l,m)\in E(F)} \partial x_{lm}} \bigg| _{x=w^{F}} \,, \label{BKARformula} \end{equation} where $\mathbf{1}$ is the $\frac{n(n-1)}2$-vector with all the components equal to $1$, $\mathcal F_n$ is the set of all forests with $n$ labelled vertices, $|E(F)|$ is the number of edges of $F$, and $w^{F}\in\bbR^{\frac{n(n-1)}{2}}$ is given by: \begin{equation} \label{eq:BKAR} w^{F}_{ij}\coloneqq\begin{cases} \inf_{(k,l)\in P_{i\leftrightarrow j}^{F}} u_{kl}&\text{if $i$ and $j$ belong to the same component of $F$,}\\ 0 &\text{otherwise.} \end{cases} \end{equation} \end{lemma} Notice that $\prod_{(a,b) \in E(F)}\partial_{x_{ab}} = {\epsilon}^{\lvert E(F) \rvert}e^{-\imath \psi \lvert E(F) \rvert} \prod_{(a,b) \in E(F)} \partial_{\sigma^{(a)}}\partial_{\sigma^{(b)}}$ when acting on the quantity in square brackets in \eqref{square_brackets}.
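The BKAR formula can be checked numerically on a small example. The following sketch (ours; the test function and the discretization are arbitrary choices) verifies it for $n=3$ with $f(x)=\exp(a x_{12}+b x_{13}+c x_{23})$, summing over the seven forests on three labelled vertices: the empty forest, the three single edges, and the three two-edge trees, for which the off-tree entry of $w^F$ is the minimum of the two $u$ parameters:

```python
import math

a, b, c = 0.3, -0.5, 0.7   # coefficients on the edge variables x12, x13, x23

def f(x12, x13, x23):
    return math.exp(a * x12 + b * x13 + c * x23)

def integrate2d(g, n=400):
    # Composite midpoint rule on the unit square [0, 1]^2.
    h = 1.0 / n
    return h * h * sum(g((i + 0.5) * h, (j + 0.5) * h)
                       for i in range(n) for j in range(n))

total = f(0, 0, 0)                                                  # empty forest
total += (math.exp(a) - 1) + (math.exp(b) - 1) + (math.exp(c) - 1)  # single edges
# Two-edge trees: differentiate along both edges, evaluate at w^F,
# whose off-tree entry is min(u, v).
total += integrate2d(lambda u, v: a * b * math.exp(a * u + b * v + c * min(u, v)))
total += integrate2d(lambda u, v: a * c * math.exp(a * u + c * v + b * min(u, v)))
total += integrate2d(lambda u, v: b * c * math.exp(b * u + c * v + a * min(u, v)))

assert abs(total - f(1, 1, 1)) < 1e-3   # BKAR reproduces f at x = (1,1,1)
```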
Thus, by the BKAR formula~\eqref{BKARformula}, the partition function with sources rewrites as \begin{multline*} Z_{\psi}(g,\epsilon ; J) = \sum_{n \geq 0} \frac{1}{(2\epsilon)^nn!} \sum_{F\in \mathcal F_n} \int du_{F} \int d\mu_{\epsilon e^{-\imath \psi} W^{F}(u)}(\sigma){\epsilon}^{\lvert E(F) \rvert} e^{-\imath \psi \lvert E(F) \rvert}\\ \times\prod_{(a,b) \in E(F)} \partial_{\sigma^{(a)}}\partial_{\sigma^{(b)}} \prod_{i=1}^n \bigg\{ \ln\widetilde{R}(\sigma^{(i)} e^{\imath \frac \psi2}, g) + \widetilde{R}(\sigma^{(i)} e^{\imath \frac \psi2}, g)\norm{J}^2 \bigg\} \;, \end{multline*} where we recall that for $F\in \mathcal F_n$, $W^{F}(u)$ is the symmetric $n \times n$ matrix with diagonal entries equal to $1$ and off-diagonal ones $W_{ij}^{F}(u)=W_{ji}^{F}(u)=w_{ij}^{F}(u)$ as given by eq.~\eqref{eq:BKAR}, and $\int d u_{F}$ stands for $\prod_{(i,j)\in E(F)} \int_{0}^{1} d u_{ij}$. The matrix $W^{F}(u)$ is positive. The logarithm of the partition function with sources is the generating function of the cumulants. Now that the partition function with sources is expressed as a sum over forests, with the amplitude of a forest factorizing over the trees of this forest, its logarithm is a sum over trees. As $\lvert E(T) \rvert = |V(T)|-1$, we get: \begin{multline*} \epsilon \ln{Z}_{\psi}(g,\epsilon ; J)= \sum_{n \geq 1} \frac{e^{-\imath (n-1) \psi}}{2^n n!} \sum_{T \in \mathcal T_n} \int du_{T} \int d\mu_{\epsilon e^{-\imath \psi} W^{T}(u)}(\sigma) \prod_{(a,b) \in E(T)} \partial_{\sigma^{(a)}}\partial_{\sigma^{(b)}} \sum_{k=0}^n \frac{{\norm{J}^{2k}}}{k!} \\ \times \sum_{\substack{1 \leq i_1,...,i_k \leq n \\ a \neq b \Rightarrow i_a \neq i_b}} \prod_{c=1}^k \widetilde{R}(\sigma^{(i_c)}e^{\imath \frac \psi2}, g) \prod_{j \neq i_1,...,i_k} \ln{\widetilde{R}(\sigma^{(j)}e^{\imath \frac \psi2}, g})\,, \end{multline*} where we recall that $\mathcal{T}_n$ stands for the set of all trees with $n$ labelled vertices.
At this stage, we can rewrite the sum above in terms of $C^n_k$ and of \textit{ciliated trees} (recall the convention \eqref{T_c}). Indeed, repeatedly using the fact that $\partial_{\sigma}^k \ln \widetilde{R} (\sigma e^{\imath \frac \psi2}, g) = {(\imath e^{\imath \frac \psi2}\liftsqrt{ g})}^k (k-1)! \, \widetilde{R}^k(\sigma e^{\imath \frac \psi2}, g)$, $ \partial_{\sigma}^k \widetilde{R}(\sigma e^{\imath \frac \psi2}, g) = {(\imath e^{\imath \frac \psi2}\liftsqrt{ g})}^k k! \, \widetilde{R}^{k+1}(\sigma e^{\imath \frac \psi2}, g)$, $(\liftsqrt{g})^2=\Pi g$ and the combinatorial identity $ \sum_i d_i = 2(n-1) $, the logarithm of the partition function with sources becomes: \begin{multline*} \epsilon \ln{Z}_{\psi}(g,\epsilon ; J) = \frac{1}{2} \sum_{n \geq 1} \frac{1}{n!} {\left(\frac{- \Pi g}{2} \right)}^{n-1} \sum_{k=0}^n \frac{ \norm{J}^{2k} }{k!} \sum_{T_{\mathfrak{c}}\in \mathcal{T}_n\times C^n_k} \int du_{T} \int d\mu_{\epsilon e^{-\imath \psi} W^{T}(u)}(\sigma)\\ \times \prod_{i=1}^n \big\{ (d_i(T_{\mathfrak{c}}) -1)! \, \widetilde{R}^{d_i(T_{\mathfrak{c}})}(\sigma^{(i)} e^{\imath \frac \psi2}, g) \big\}. \end{multline*} Using $\frac{\partial^{2k}}{\partial_{J_{a_1}}...\partial_{J_{a_{2k}}}} \norm{J}^{2k} = 2^k k!\sum_{\pi\in P_2(2k)} \prod_{(a_i,a_j) \in \pi} \delta_{a_i,a_j}$, we obtain the following expression, that holds true as long as both the individual integrals converge and the overall series is convergent (recall that with the convention \eqref{T_c}, $T$ is the tree $T_{\mathfrak c}$ without cilia): \begin{multline}\label{cumul} \mathfrak{K}_{\psi}^{2k}(g,\epsilon) = 2^{k-1} \sum_{n \geq k} \frac{1}{n!} {\left(\frac{- \Pi g}{2}\right)}^{n-1}\\\times \sum_{T_{\mathfrak{c}}\in \mathcal{T}_n\times C^n_k} \int du_{T} \int d\mu_{\epsilon e^{-\imath \psi} W^{T}(u)}(\sigma) \bigg\{ \prod_{i=1}^n (d_i(T_{\mathfrak{c}}) -1)! 
\, \widetilde{R}^{d_i(T_{\mathfrak{c}})}(\sigma^{(i)} e^{\imath \frac \psi2}, g) \bigg\} \;, \end{multline} which concludes the proof. \end{proof} Our first main theorem concerns the domain in $(g,\epsilon)$ in which the rescaled cumulants are analytic in both variables. \begin{theorem}[Main Theorem 1: Analyticity]\label{THM1} For all $k \geq 1$, the cumulant of order $2k$ of the quartic $\grp{O}(N)$-vector model, $\mathfrak{K}^{2k}(g,\epsilon)$, as expressed by the series~\eqref{cumul}, is analytic in $g$ and $\epsilon$ on the domain $\mathfrak{C}$ consisting of all couples $(g,\epsilon) \in \Sigma\times\mathrm H$ such that there exists $\psi\in(-\pi,\pi)$ for which the following inequalities hold: \begin{subequations}% \label{conds} \begin{align} \lvert g \rvert & <\frac{1}{4} (1+\cos{(\liftarg g+\psi)}) \sqrt{\cos{(\psi-\arg \epsilon)}} \label{conds1} \; , \\ \lvert \liftarg g+\psi \rvert & < \pi \label{conds2} \; , \\ \modulus{\psi-\arg \epsilon} & <\frac{\pi}{2} \label{conds3} \; . \end{align} \end{subequations}% \end{theorem} The proof of this theorem is given in Section~\ref{sec4}. \begin{corollary}[Domain as a Riemann sheet]\label{corol1} At fixed $g\in\Sigma$, the domain of analyticity in $\epsilon$ is independent of its modulus, and contains all $\epsilon\in\mathrm H$ such that $ - 3\pi/2 - \liftarg g < \arg \epsilon < 3\pi/2 - \liftarg g $. In particular, for all $\lvert \epsilon \rvert > 0$, $\theta \in (-\pi/2,\pi/2)$, and $\varphi \in (-\pi,\pi)$, $((\modulus{g},\varphi),\lvert \epsilon \rvert e^{\imath \theta})$ belongs to $\mathfrak{C}$ if $\lvert g\rvert$ is small enough (see discussion in Remark~\ref{rk:domain} for how small ``small enough'' is) and for such $g$, $\mathfrak{C}$ includes a Sokal disk in the $\epsilon$-plane (see Remark~\ref{remarksokaldisk}) of an arbitrary, positive radius.
\end{corollary} \begin{proof} The two conditions on $\varphi$ and $\theta$ read: \begin{equation*} \begin{cases} \lvert \varphi+\psi \rvert < \pi \\ \lvert \psi -\theta \rvert < \frac{\pi}{2} \end{cases} \Leftrightarrow \begin{cases} -\pi< \varphi+\psi < \pi \\ -\frac{\pi}{2}< \theta-\psi < \frac{\pi}{2} \end{cases} \Rightarrow -\frac{3\pi}{2}< \varphi+\theta <\frac{3\pi}{2} \Leftrightarrow - \frac{3\pi}{2} - \varphi < \theta < \frac{3\pi}{2} - \varphi \; , \end{equation*} and observing that $\bigcap_{\varphi\in(-\pi,\pi)} (- 3\pi/2 - \varphi , 3\pi/2 - \varphi)=(-\pi/2,\pi/2)$ we conclude. \end{proof} \begin{theorem}[Main Theorem 2: Borel summability]\label{THM2} For small $\alpha > 0$ we define the subdomain $\mathfrak{C}_{\alpha}$ of $\mathfrak{C}$ made of all couples $(g,\epsilon) \in \Sigma\times\mathrm H$ such that there exists $\psi \in (-\pi,\pi)$ for which the following inequalities hold: \begin{subequations} \label{condsa} \begin{align} \lvert g \rvert & <\frac{1}{4} (1+\cos{(\liftarg g+\psi)}) \sqrt{\cos{(\psi-\arg \epsilon)}} \,(1-\alpha), \label{condsa1} \\ \lvert \liftarg g+\psi \rvert & < \pi\, (1-\alpha), \label{condsa2}\\ \lvert \psi-\arg \epsilon \rvert &< \frac{\pi}{2}\, (1-\alpha).\label{condsa3} \end{align} \end{subequations}% For all $k \geq 1$, the rescaled cumulant of order $2k$ of the $\grp{O}(N)$-vector model, $\mathfrak{K}^{2k}(g,\epsilon)$, is Borel summable in $\epsilon$ along the positive real axis for $g$ inside a nontrivial domain (see Remark~\ref{rk:domain}), where it can be computed as the Borel sum of its large-$N$ expansion.
\begin{proof} The proof of this theorem follows from Corollary~\ref{corol1} and from the following lemma, proven in Section~\ref{sec5}: \begin{lemma}\label{lemmaboundrest} For small $\alpha>0$, for all $k\geq1$ and for all $(g,\epsilon)\in \mathfrak{C}_\alpha$, there exist two constants $C_\alpha >0$ and $K_\alpha >0$ independent of $g$ and $\epsilon$ but depending on $\alpha$ such that the Taylor remainder of order $q$ in the $\epsilon$-expansion of the cumulant, denoted by $R^{2k}_q(g,\epsilon)$ (see eq.~\eqref{restterm} for a closed expression of this remainder), obeys the following bound for $q$ large enough: \begin{equation*} \lvert R^{2k}_q(g,\epsilon) \rvert \lesssim_k C_\alpha K_\alpha^q {\lvert \epsilon \rvert}^{q} q!\;. \end{equation*} \end{lemma} This bound, together with Corollary~\ref{corol1}, proves that the cumulants verify the hypotheses of the Nevanlinna-Sokal theorem \cite{Sokal1980aa} uniformly in $g$ (for completeness we recall the relevant version of this theorem in Thm.~\ref{thm:Sokal}). \end{proof} \end{theorem} \begin{remark}\label{rk:domain} We now wish to visualize the domain $\mathfrak{C} \subset \Sigma\times \mathrm H $ (or $\mathfrak{C}_{\alpha} $ for $\alpha\to 0$). Let us go to the $\bbC$-plane of $\Pi g$ and look for the curve $\rho(\varphi)$ defined by: $$ \varphi \mapsto \rho(\varphi):= \sup \{ \, |g| : \text{there is a $\psi=\psi(\theta)$ such that $|g|e^{\imath\varphi}\in \mathrm{pr}_1 \mathfrak{C}^{\psi(\theta)}$ for all $ \theta \in (-\pi/2,\pi/2 )$} \, \} .$$ Here, $\mathfrak C^\psi$ consists of the points $(g,\epsilon)$ verifying eqs. \eqref{conds} for a given $\psi$, and pr$_1$ is the projection to the first $\bbC$-factor (or the $g$-plane), so that, in particular, the conditions: $$ \lvert \varphi+\psi(\theta) \rvert < \pi,\quad \lvert \psi(\theta)-\theta \rvert < \frac{\pi}{2},\quad \text{ and }\quad\lvert g \rvert <\frac{1}{4} \big(1+\cos{[\varphi+\psi(\theta) ]}\big) \sqrt{\cos{(\theta-\psi(\theta) )}} \;, $$ must hold.
The visualization of this curve is easier for a linear choice $\psi_\xi(\theta)= \xi \cdot \theta$, where $0<\xi < 1$ is a new parameter. Denoting by $\rho_\xi(\varphi)$ the curve for this particular choice of $\psi=\psi_\xi$, namely: \begin{align} \rho_\xi(\varphi) := \sup \{\, |g| : \,\, |g|e^{\imath\varphi}\in \mathrm{pr}_1 \mathfrak{C}^{\psi_\xi(\theta)} \text{ for all } \theta \in (- \pi/2,\pi/2 ) \} \;, \label{linearpsi} \end{align} this curve can be visualized (see Fig. \ref{fig:Cpsinus}). \begin{figure}[ht] \centering \includegraphics[width=.35\linewidth]{Cpsinu_2B.pdf} \raisebox{-2.99ex}{\includegraphics[width=.35\linewidth]{Cpsinu_3B.pdf}}\\[2ex] \includegraphics[width=.35\linewidth]{Cpsinu_4B.pdf} \quad \raisebox{4.9ex}{\includegraphics[width=.35\linewidth]{Cpsi_several_nus_ann}} \vspace{.3cm} \caption{In the first three panels, we show the (discretized) curves $\rho_\xi(\varphi)$ given by eq. \eqref{linearpsi} for the values $\xi=1/2,1/4,1/8$. The last panel shows the superposed domains.\label{fig:Cpsinus}} \end{figure} \end{remark} \section{The bounds and the domain of analyticity}\label{sec4} This section is dedicated to the proof of Theorem~\ref{THM1}. \begin{proof}[Proof of Theorem~\ref{THM1}] From now on we fix $\psi \in (\theta- \pi/ 2,\theta + \pi/ 2)$ and bound $\mathfrak{K}_{\psi}^{2k}(g,\epsilon)$ (as expressed in eq.~\eqref{cumul}) thanks to eq.~\eqref{resbound} and the following lemma, proven in Appendix~\ref{app:compbound}: \begin{lemma}\label{complexgaussianboundmain} Let $n$ be a positive integer, $z\in\bbC$ with $\Re z>0$, $C \in M_n(\bbR)$ a symmetric positive matrix and $F \in L^1(\bbR^n,\mu_{{\textfrac{\lvert z\rvert^2 C}{\Re z}}})$ a $\bbC$-valued function. Then: \begin{equation}\label{GaussianComplex} \left| \int d\mu_{zC}(X)F(X) \right| = \left| \left[ e^{ \frac{z}{2} \langle \partial,\partial \rangle_{C}} F(X) \right]_{X=0} \right| \le \frac{1}{ \cos^{n/2}(\arg z) } \sup_{X \in \mathbb{R}^n} | F(X) |\,.
\end{equation} \end{lemma} We apply this lemma with $z=\epsilon e^{-\imath \psi}$ and $C=W^{T}(u)$. Then, we bound the integration over the $u_{T}$ parameters by one. Finally, we note that $\frac{1}{\cos{(\frac{\varphi+\psi}{2})}}$ appears to the power $\sum_{i=1}^n d_{i}(T_{\mathfrak{c}}) = k + 2(n-1)$. Thus, \begin{multline*} \lvert \mathfrak{K}_{\psi}^{2k}(g,\epsilon) \rvert \lesssim_k \frac{1}{\sqrt{\cos{(\psi-\theta)}} } \frac{1}{\cos^k{(\frac{\varphi +\psi}{2})}} \\ \times \sum_{n \geq k} \frac{1}{n!}{\left(\frac{\lvert g \rvert}{2 \cos^2{(\frac{\varphi+\psi}{2})} \sqrt{\cos{(\psi-\theta)}} }\right)}^{n-1} \sum_{T_{\mathfrak{c}}\in \mathcal{T}_n\times C^n_k} \prod_{i=1}^n (d_{i}(T_{\mathfrak{c}})-1)! \; . \end{multline*} We conclude using combinatorial arguments, which are gathered in the following lemma: \begin{lemma}\label{lemmacombina1} For all $n \geq k$, the sum over $k$-ciliated (i.e. $\mathfrak c \in C_k^n$) trees with $n$ vertices satisfies \begin{equation*} \frac{1}{n!} \sum_{T_{\mathfrak{c}}\in \mathcal{T}_n\times C^n_k} \prod_{i=1}^n ( d_i(T_{\mathfrak{c}}) -1 )!= \binom{2n-1}{n-k}\binom{2n+k-3}{2n-1}\times(k-2)!\;. \end{equation*} \end{lemma} \begin{proof} \begin{align} \frac{1}{n!} \sum_{T_{\mathfrak{c}}\in \mathcal{T}_n\times C^n_k} \prod_{i=1}^n (d_i(T_{ \mathfrak{c}}) -1)!&=\frac{1}{n!}\sum_{T\in\mathcal{T}_n}\prod_{i=1}^n(d_i(T)-1)!\sum_{\mathfrak{c}\in C^n_k}\prod_{i=1}^n\frac{(d_i(T)+\mathbf{1}_{i\in\mathfrak{c}}-1)!}{(d_i(T)-1)!}\nonumber\\&=\frac{1}{n!}\sum_{T\in\mathcal{T}_n}\prod_{i=1}^n(d_i(T)-1)!\sum_{\mathfrak{c}\in C^n_k}\prod_{i\in\mathfrak{c}}d_i(T)\nonumber\,.
\end{align} Here, to count the number of trees, we use Cayley's theorem that states that: \begin{align}\label{Cayley} \sum_{T\in\mathcal{T}_n}\prod_{i=1}^n(d_i(T)-1)!=(n-2)!\sum^n_{\substack{d_1,\dotsc,d_n=1 \\ \sum_i d_i = 2(n-1)}}1\,, \end{align} which yields \begin{equation*} \frac{1}{n!} \sum_{T_{\mathfrak{c}}\in \mathcal{T}_n\times C^n_k} \prod_{i=1}^n (d_i(T_{ \mathfrak{c}}) -1)!=\frac{(n-2)!}{n!}\sum^n_{\substack{d_1,\dotsc,d_n=1 \\ \sum_i d_i = 2(n-1)}}\sum_{\mathfrak{c}\in C^n_k}\prod_{i\in\mathfrak{c}}d_i\,. \end{equation*} The sum over the $d_i$'s can be computed by the following trick. Let us consider the function $f$ of $n$ variables: \begin{equation*} f(x_1,\dotsc,x_n) = \sum_{d_1,\dotsc,d_n=1}^\infty\prod_{i=1}^n x_i^{d_i} = \prod_{i=1}^n\frac {x_i}{1-x_i} \;. \end{equation*} Applying the following differential operator to $f$, and evaluating it at $(x,\dotsc,x)$ gives the expression of the sum as a Taylor coefficient: \begin{align*} \sum_{\substack{d_1,\dotsc,d_n=1\\\sum_i d_i = 2(n-1)}}^\infty \prod_{i\in\mathfrak{c}} d_{i}&=[x^{2(n-1)}] \big(\prod_{i\in\mathfrak{c}} x_{i}\frac{\partial}{\partial x_{i}}\big)f(x,\dotsc,x)\\ &=[x^{2(n-1)}]\frac{x^{n}}{(1-x)^{n+k}}=\binom{2n+k-3}{n-2}. \end{align*} With this result at hand, and using $\sum_{\mathfrak{c}\in C^n_k} 1 =n!/(n-k)!$, we finally obtain that \begin{align*} \frac{1}{n!} \sum_{T_{\mathfrak{c}}\in \mathcal{T}_n\times C^n_k} \prod_{i=1}^n (d_i(T_{ \mathfrak{c}}) -1)!&=\frac{(n-2)!}{n!}\times\frac{n!}{(n-k)!}\binom{2n+k-3}{n-2}=\binom{2n-1}{n-k}\binom{2n+k-3}{2n-1}\times(k-2)!\,. 
\end{align*} \end{proof} Combining this with the trigonometric identity $2\cos^2{(x/2)}=(1+\cos x)$, we obtain the following bound on the cumulants: \begin{multline*} \lvert \mathfrak{K}_{\psi}^{2k}(g,\epsilon) \rvert \lesssim_k \frac{(k-2)!}{\sqrt{\cos{(\psi-\theta)}} } { \frac{1}{\cos^{k}{(\frac{\varphi+\psi}{2})}} } \\ \times \sum_{n \geq k} \binom{2n-1}{n-k}\binom{2n+k-3}{2n-1}{\left(\frac{\lvert g \rvert}{ (1+\cos{(\varphi+\psi)}) \sqrt{\cos{(\psi-\theta)} } }\right)}^{n-1}. \end{multline*} By Stirling's formula, $\binom{2n-1}{n-k}\lesssim_k 4^n$ and $\binom{2n+k-3}{2n-1}(k-2)!=\frac{(2n+k-3)!}{(2n-1)!}\lesssim_k n^{k-2}$, so that we can finally bound the cumulants by: \begin{equation*} \lvert \mathfrak{K}_{\psi}^{2k}(g,\epsilon) \rvert \lesssim_k \frac{1}{\sqrt{\cos{(\psi-\theta)}} } { \frac{1}{\cos^{k}{(\frac{\varphi+\psi}{2})}} } \sum_{n \geq k} {\left(\frac{4\lvert g \rvert}{ (1+\cos{(\varphi+\psi)}) \sqrt{\cos{(\psi-\theta)} } }\right)}^{n-1} n^{k-2} \;. \end{equation*} We conclude on the analyticity of the cumulants thanks to two classical theorems: the first one states that the integral of a function that depends analytically on a parameter defines an analytic function as long as it converges; the second one states that a uniformly convergent series of analytic functions is analytic. The cumulants are expressed as a uniformly convergent series of analytic functions both in $\Pi g$ and $\epsilon$. Since the theorems above apply in the bivariate analytic case, $\mathfrak{K}_{\psi}^{2k}(g,\epsilon)$ is analytic on the domain of $\Sigma\times\mathrm H$ where the conditions \eqref{conds} hold and analytically continues $\mathfrak{K}^{2k}(g,\epsilon)$. Taking the union of these domains for $\psi \in (\theta- \pi/ 2,\theta +\pi /2)$ yields an analytic continuation of $\mathfrak{K}^{2k}(g,\epsilon)$ to the subdomain $\mathfrak{C}$ of $\Sigma\times\mathrm H$, which concludes the proof.
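As an independent sanity check of Lemma~\ref{lemmacombina1} (not used in the proof), one can enumerate labeled trees through their Pr\"ufer sequences and verify the identity for small $n$ and $k$; an illustrative Python sketch:

```python
import math
from itertools import product, permutations

def tree_degrees(n):
    """Degree sequences of all labeled trees on n vertices: in the Prufer
    encoding, vertex i has degree 1 + (number of occurrences of i)."""
    if n == 2:
        yield (1, 1)
        return
    for seq in product(range(n), repeat=n - 2):
        yield tuple(1 + seq.count(i) for i in range(n))

def lhs_total(n, k):
    """Sum over trees T and ordered cilia c (k distinct vertices) of
    prod_i (d_i(T) + 1_{i in c} - 1)!, i.e. n! times the lemma's left side."""
    return sum(
        math.prod(math.factorial(d[i] + (i in c) - 1) for i in range(n))
        for d in tree_degrees(n)
        for c in permutations(range(n), k))

def rhs(n, k):
    return (math.comb(2 * n - 1, n - k) * math.comb(2 * n + k - 3, 2 * n - 1)
            * math.factorial(k - 2))

for n, k in [(2, 2), (3, 2), (4, 2), (4, 3), (5, 2), (5, 4)]:
    assert lhs_total(n, k) == rhs(n, k) * math.factorial(n)
print("Lemma checked for small (n, k)")
```

For instance, at $n=3$, $k=2$ both sides equal $5$.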
\end{proof} \begin{remark} For $g$ such that $\Pi g\in\bbR_-$, Borel summability in $\epsilon$ is lost since for $\varphi=\pm\pi$, the cardioid~\eqref{conds1} shrinks to zero when $\theta \rightarrow \pm \pi/2$. However, the domain of analyticity we found passes beyond the negative real axis and continues onto the next Riemann sheet. On the negative real axis the cumulants converge for $|\Pi g|\le\frac{1}{6\sqrt{3}}$, which is of order 1. Of course, the cumulants are discontinuous here: the analytic continuations coming from above and from below the negative real axis do not coincide. The discontinuity of the partition function and its logarithm are well understood as non-perturbative instanton contributions: in zero dimensions and for $N=1$ this is detailed for instance in \cite{Aniceto2019jn}. By contrast, the discontinuities of the cumulants have so far been less well studied. \end{remark} \begin{proof} It is possible to make use of $\psi$ in order to reach the negative real axis for $g$. Indeed, assuming $\epsilon$ real and positive, so that $\theta =0$, we let $z_\psi(\varphi)=\frac 12\cos^2\del[1]{\frac{\varphi+\psi}2}\sqrt{\cos\psi}\,e^{i\varphi}$ be a point on the boundary of the cardioid $\{|g|<\frac 12\cos^2(\frac{\varphi+\psi}2)\sqrt{\cos\psi}\}$. The maximal value of $|z_{\psi}(\pm\pi)|$ is attained for $\psi_0=2\arcsin\big(\frac 1{\sqrt 3}\big)$ and is $\frac 1{6\sqrt 3}$. \end{proof} \section{Borel summability of the cumulants in $\mathbf{1/N}$}\label{sec5} This last section is devoted to: \begin{proof*}{Proof of Lemma~\ref{lemmaboundrest}} The Borel summability of the cumulants stems from the analyticity in a Sokal disk as stated in Theorem~\ref{THM1} and an estimation of the Taylor remainder. As we aim to obtain Borel summability in $\epsilon$ uniformly in $g$, we need to show that at large $q$ the remainder of order $q$ is bounded from above by $C\,K^q\,\lvert \epsilon \rvert^q \, q!$ with $C$ and $K$ \textit{independent of} $g$.
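As a numerical aside, the maximizing angle $\psi_0$ and the value $\frac{1}{6\sqrt 3}$ from the preceding remark are easily confirmed by a crude grid search (illustrative Python, standard library only):

```python
import math

# |z_psi(pi)| = (1/2) cos^2((pi + psi)/2) sqrt(cos psi)
#             = (1/2) sin^2(psi/2) sqrt(cos psi)
def z_abs(psi):
    return 0.5 * math.sin(psi / 2) ** 2 * math.sqrt(math.cos(psi))

grid = [i * (math.pi / 2) / 100000 for i in range(100000)]
psi_star = max(grid, key=z_abs)

print(psi_star, z_abs(psi_star))   # near 2*arcsin(1/sqrt(3)) and 1/(6*sqrt(3))
```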
In order to compute the Taylor remainder of order $q$, we start from the expansion of $\mathfrak{K}_{\psi}^{2k}(g,\epsilon)$ in eq.~\eqref{cumul}. We fix some $\psi \in (\theta-\pi/ 2,\theta + \pi /2)$. Then, for all $k\geq 1$ and $(g,\epsilon)\in \mathfrak{C}$ such that $\lvert \varphi+ \psi \rvert < \pi $, the cumulants read (recall the \eqref{T_c} convention): \begin{multline*} \mathfrak{K}_{\psi}^{2k}(g,\epsilon) = 2^{k-1} \sum_{n \geq k} \frac{1}{n!} {\left(\frac{- \Pi g}{2}\right)}^{n-1} \sum_{T_{\mathfrak{c}}\in \mathcal{T}_n\times C^n_k} \int du_{T} \\ \times \bigg[ e^{t\frac{\epsilon e^{-\imath \psi}}{2} \langle \partial,\partial \rangle_{W^{T}(u)} } \prod_{i=1}^n \bigg\{ (d_i(T_{\mathfrak{c}}) -1)! \, \widetilde{R}^{d_i(T_{\mathfrak{c}})}(\sigma^{(i)} e^{\imath \frac \psi2}, g) \bigg\} \bigg]_{\sigma^{(i)}=0,t=1} \;, \end{multline*} and the Taylor remainder of order $q$ of $\mathfrak{K}_{\psi}^{2k}(g,\epsilon)$, denoted by $R^{2k}_{q,\psi}(g,\epsilon)$, reads: \begin{multline} R^{2k}_{q,\psi}(g,\epsilon) = \int_0^1 ds \frac{(1-s)^{q-1}}{(q-1)!} 2^{k-1} \sum_{n \geq k} \frac{1}{n!} {\left(\frac{-\Pi g}{2}\right)}^{n-1} \sum_{T_{\mathfrak{c}}\in \mathcal{T}_n\times C^n_k} \int du_{T} \int d\mu_{s \epsilon e^{-\imath \psi} W^{T}(u)}(\sigma)\\ \times {\left(\frac{\epsilon e^{-\imath \psi}}{2}\right)}^{q} {\left( \langle \partial,\partial \rangle_{W^{T}(u)}\right)}^{q}\bigg[ \prod_{i=1}^n \bigg\{ (d_i(T_{\mathfrak{c}}) -1)! \, \widetilde{R}^{d_i(T_{\mathfrak{c}})}(\sigma^{(i)} e^{\imath \frac \psi2}, g) \bigg\}\bigg].\label{restterm} \end{multline} We would like to reexpress the remainder as a sum over some graphs.
Since $2q$ derivatives act on each term of the sum over ciliated trees, and since each derivative can act on any of the $n$ vertices, the amplitude of a ciliated tree $T_{\mathfrak{c}}$ now splits into $n^{2q}$ amplitudes indexed by \textit{marks} $\mathfrak{m}$ in $D^n_{2q}=\{1,...,n\}^{2q}$, corresponding to the ordered sequence of vertices on which the derivatives act (that is, for all $j\in\{1,...,2q\}$, the vertex $\mathfrak{m}_j$ is the vertex on which the $j$-th derivative in eq.~\eqref{restterm} acts). This allows us to index the sum \eqref{restterm} by \textit{decorated trees}, for which we adopt the following convention: \begin{equation} \text{triples $(T,\mathfrak{c},\mathfrak{m})$ made of a tree $T\in\mathcal T_n$, \emph{cilia} $\mathfrak{c}\in C^n_k$ and \emph{marks} $\mathfrak{m}\in D^n_{2q}$ are denoted by $T_{\mathfrak{c},\mathfrak{m}}$}.\tag{$\star\!$ $\star$} \label{T_cm} \end{equation} For all $i\in\{1,...,n\}$, we also denote by $d_i(T_{\mathfrak{c}, \mathfrak{m}})=m(i)+d_i(T_{\mathfrak{c}})=m(i)+c(i)+d_i(T)$ the coordination degree of the vertex $i$ in the decorated tree $T_{\mathfrak{c},\mathfrak{m}}$, with $m(i)= \modulus{\{j \in \{1,...,2q\}\mid \mathfrak{m}_j=i\}}$ the number of marks of $i$ and $c(i)=\textbf{1}_{i\in\mathfrak{c}}$ the number of cilia of $i$, which is 0 or 1. With this notation, the remainder can be rewritten as (recall that, with the convention \eqref{T_cm}, $T$ is the tree $T_{\mathfrak c,\mathfrak m}$ without its cilia and marks): \begin{multline*} R^{2k}_{q,\psi}(g,\epsilon) = 2^{k}(-\epsilon )^{q}\int_0^1 ds \frac{(1-s)^{q-1}}{(q-1)!} \sum_{n \geq k} \frac{1}{2^nn!} {\left(-\Pi g\right)}^{n-1+q}\sum_{T_{\mathfrak{c},\mathfrak{m}}\in \mathcal{T}_n\times C^n_k\times D^n_{2q}} \\ \times \int du_{T}\int d\mu_{s\epsilon e^{-\imath \psi} W^{T}(u)}(\sigma) \prod_{\ell=1}^{q} W^{T}_{\mathfrak{m}_{2\ell-1}\mathfrak{m}_{2\ell}}(u) \prod_{i=1}^n \big\{ (d_i(T_{\mathfrak{c},\mathfrak{m}}) -1 )!
\, \widetilde{R}^{d_i(T_{\mathfrak{c},\mathfrak{m}})}(\sigma^{(i)} e^{\imath \frac \psi2}, g) \big\}. \end{multline*} The remainder can now be bounded using the same arguments as in Section~\ref{sec4}, but taking into account the combinatorics of the new $2q$ derivatives that can act on a ciliated tree $T_{\mathfrak{c}}$. We have the following lemma: \begin{lemma} For all $n\geq k$, $q\geq 0$, the sum over $k$-ciliated (i.e. $\mathfrak c \in C_k^n$) and $2q$-marked (i.e. $\mathfrak m \in D_{2q}^n$) trees with $n$-vertices verifies \begin{align}\label{combi2} \frac{1}{n!} \sum_{T_{\mathfrak{c},\mathfrak{m}}\in \mathcal{T}_n\times C^n_k\times D^n_{2q}} \prod_{i=1}^n (d_i(T_{\mathfrak{c},\mathfrak{m}}) - 1)! = \binom{2n-1}{n-k} \binom{2n+2q+k-3}{2n-1} \times (2q+k-2)! \; . \end{align} \end{lemma} In particular, for $q=0$ we recover Lemma~\ref{lemmacombina1}. \begin{proof} Injecting Cayley's formula~\eqref{Cayley} in eq.~\eqref{combi2} yields \begin{align} \frac{1}{n!}\sum_{T_{\mathfrak{c},\mathfrak{m}}\in \mathcal{T}_n\times C^n_k\times D^n_{2q}} \prod_{i=1}^n (d_i(T_{\mathfrak{c},\mathfrak{m}}) - 1)! &= \frac{(n-2)!}{n!} \sum^n_{\substack{d_1,...,d_n=1 \\ \sum_i d_i = 2n-2}} \sum_{\mathfrak{c}\in C^n_k}\sum_{\mathfrak{m}\in D^n_{2q}} \prod_{i=1}^n \frac{(d_i+c(i)+m(i) - 1)!}{(d_i - 1)!}\nonumber \\ &= \frac{(n-2)!}{n!} \sum^n_{\substack{d_1,...,d_n=1 \\\sum_i d_i = 2n-2}} \sum_{\mathfrak{c}\in C^n_k} \prod_{i\in\mathfrak{c}} d_i \sum_{\mathfrak{m}\in D^n_{2q}} \prod_{i=1}^n \frac{(d_i+c(i)+m(i)- 1)!}{(d_i+c(i) - 1)!} \; .\nonumber \end{align} Then, using \begin{align*} \sum_{\mathfrak{m}\in D^n_{2q}} \prod_{i=1}^n \frac{(d_i+c(i)+m(i) - 1)!}{(d_i+c(i) - 1)!}&=\sum_{\substack{m(1),...,m(n) \\\sum_i m(i)=2q}} \frac{(2q)!}{\prod_{i=1}^n m(i)!} \prod_{i=1}^n \frac{(d_i+c(i)+m(i) - 1)!}{(d_i+c(i) - 1)!}\\ &=(2q)! \sum_{\substack{m(1),...,m(n) \\\sum_i m(i)=2q}} \prod_{i=1}^n \binom{d_i+c(i)+m(i)-1}{m(i)} \nonumber\\ &= (2q)! 
[x^{2q}] \prod_{i=1}^n \frac{1}{(1-x)^{d_i+c(i)}}=(2q)! [x^{2q}]\frac{1}{(1-x)^{2n-2+k}}\nonumber\\ &= (2q)! \binom{2n-2+k+2q-1}{2q} =\frac{(2n+2q+k-3)!}{(2n+k-3)!} \,, \nonumber \end{align*} $\sum_{\mathfrak{c}\in C^n_k} 1 =n!/(n-k)!$ and $\sum^n_{\substack{d_1,...,d_n=1 \\ \sum_i d_i = 2n-2}} \prod_{i\in\mathfrak{c}}d_i = \binom{2n+k-3}{n+k-1} $, we get: \begin{align*} \frac{1}{n!} \sum_{T_{\mathfrak{c},\mathfrak{m}}\in \mathcal{T}_n\times C^n_k\times D^n_{2q}} \prod_{i=1}^n (d_i(T_{\mathfrak{c},\mathfrak{m}}) - 1)! &=\frac{(n-2)!}{n!} \times\frac{n!}{(n-k)!} \binom{2n+k-3}{n+k-1} \frac{(2n+2q+k-3)!}{(2n+k-3)!}\\&= \binom{2n-1}{n-k} \binom{2n+2q+k-3}{2n-1} \times (2q+k-2)! \;. \end{align*} \end{proof} Thanks to this lemma, we can now find an upper bound on the rest term. All the entries of the $W^T(u)$ matrices are bounded by one, and eq.~\eqref{condsa1} implies that $\modulus{g}^q$ is smaller than $1/2^q$. We also use Lemma~\ref{complexgaussianbound} to bound the integration over $\sigma$, and we trivially bound all the integrals over $s$ and the $u_{T}$'s by one leading to: \begin{multline*} \lvert R^{2k}_{q,\psi}(g,\epsilon) \rvert \lesssim_k \,\left(\frac{\vert \epsilon \rvert}{2}\right)^{q} \frac{(2q+k-2)!}{(q-1)!} \sum_{n \geq k} \binom{2n-1}{n-k} \binom{2n+2q+k-3}{2n-1} {\left(\frac{\lvert g \rvert}{2 }\right)}^{n-1} \\ \times{\left(\frac{1}{\sqrt{\cos{(\psi-\theta)} }}\right)}^{\! n} {\left(\frac{2}{ 1+\cos{(\varphi+\psi)}}\right)}^{\! n+q+\frac{k}{2}-1}. \end{multline*} Now, let us choose some small $\alpha>0$, and take $(g,\epsilon)\in\mathfrak{C}_{\alpha}$, that is to say such that the inequalities~\eqref{condsa} are satisfied. 
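As an aside, the count in eq.~\eqref{combi2}, which reduces to Lemma~\ref{lemmacombina1} at $q=0$, can itself be verified by brute force for small $n$, $k$ and $q$, enumerating labeled trees through their Pr\"ufer sequences; the following Python sketch is illustrative and not used in the proof:

```python
import math
from itertools import product, permutations

def tree_degrees(n):
    """Degree sequences of labeled trees on n vertices (Prufer encoding)."""
    if n == 2:
        yield (1, 1)
        return
    for seq in product(range(n), repeat=n - 2):
        yield tuple(1 + seq.count(i) for i in range(n))

def lhs_total(n, k, q):
    """n! times the left side of the marked-tree count: sum over trees, ordered
    cilia c, and marks m in {1..n}^{2q} of prod_i (d_i(T)+1_{i in c}+m(i)-1)!"""
    return sum(
        math.prod(
            math.factorial(d[i] + (i in c) + m.count(i) - 1) for i in range(n))
        for d in tree_degrees(n)
        for c in permutations(range(n), k)
        for m in product(range(n), repeat=2 * q))

def rhs(n, k, q):
    return (math.comb(2 * n - 1, n - k)
            * math.comb(2 * n + 2 * q + k - 3, 2 * n - 1)
            * math.factorial(2 * q + k - 2))

for n, k, q in [(2, 2, 0), (2, 2, 1), (3, 2, 1), (3, 3, 1), (4, 2, 1), (3, 2, 2)]:
    assert lhs_total(n, k, q) == rhs(n, k, q) * math.factorial(n)
print("marked-tree count checked for small (n, k, q)")
```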
Note that in this domain, $g$ and $\epsilon$ satisfy tighter bounds, which are gathered in the following lemma: \begin{lemma} For small $\alpha >0$, and for all $(g,\epsilon) \in \mathfrak{C}$ such that~\eqref{condsa} hold, we have: \begin{subequations} \begin{align} \alpha^2 &\leq \frac{ 1+\cos{(\varphi+\psi)}}{2} \leq 1 \\ \sqrt{\alpha} &\leq \sqrt{\cos{(\psi-\theta)}} \leq 1 \\ \label{ineqsqrtx} \frac{\alpha}{2}&\leq 1 - \sqrt{\gamma }\leq 1 \qquad \qquad \text{with } \gamma =\frac{4\lvert g \rvert }{ {(1+\cos{(\varphi+\psi)})\sqrt{\cos{(\psi-\theta)}} }} \; . \end{align} \end{subequations} \end{lemma} \begin{proof*}{Proof.} Since $0\leq \lvert \varphi+\psi \rvert \leq \pi(1 - \alpha)$, $\cos^2{\frac{\pi(1-\alpha)}{2}} \leq \cos^2{\frac{\lvert \varphi+\psi \rvert}{2}} \leq 1$, and $\cos^2{\frac{\pi(1-\alpha)}{2}}=\sin^2{\frac{\pi\alpha}{2}} \geq \alpha^2$ for small $\alpha >0$. Similarly, since $0 \leq \lvert \psi-\theta \rvert \leq \frac{\pi}{2} (1- \alpha)$, $\cos{(\frac{\pi}{2}(1-\alpha))} \leq \cos{\lvert \psi-\theta \rvert} \leq 1$ so that $\sqrt{\cos{(\frac{\pi}{2}(1-\alpha))}} \leq \sqrt{\cos{\lvert \psi-\theta \rvert}} \leq 1$ and $\sqrt{\cos{(\frac{\pi}{2}(1-\alpha))}}=\sqrt{\sin{(\frac{\pi\alpha}{2}})} \geq \sqrt{\alpha}$ for small $\alpha >0$. Finally, since $0 \leq \gamma \leq 1- \alpha$, $0 \leq \sqrt{\gamma}\leq \sqrt{1 - \alpha}$ and $ \sqrt{1 - \alpha} \leq 1 - \frac{\alpha}{2}$ for small $\alpha > 0$ so that $\frac{\alpha}{2} \leq 1-\sqrt{\gamma}\leq 1$. \end{proof*} Combining this with the bounds in eqs.~\eqref{resbound} and~\eqref{GaussianComplex}, and using $(2q+k-2)!/(q-1)! \lesssim_k 4^q q!$ and $\binom{2n-1}{n-k} \leq 4^n$ by Stirling's formula, we obtain the following upper bound on the rest term: \begin{align*} \lvert R^{2k}_{q,\psi}(g,\epsilon) \rvert \lesssim_k &{} {\vert \epsilon \rvert}^{q} 2^q q! 
\sum_{n \geq k} \binom{2n+2q+k-3}{2n-1} {\left(\frac{4 \lvert g \rvert}{(1+\cos{(\varphi+\psi)})\sqrt{\cos{(\psi-\theta)}} }\right)}^{n-1}\\ &\times{\left(\frac{2}{ 1+\cos{(\varphi+\psi)}}\right)}^{q+\frac{k}{2}}{\left(\frac{1}{\sqrt{\cos{(\psi-\theta)}} }\right)}\\ \lesssim_k&{} (\alpha^2)^{-q-\frac{k}{2}} (\sqrt{\alpha})^{-1} {\vert \epsilon \rvert}^{q} 2^q q! \sum_{n \geq k} \binom{2n+2q+k-3}{2n-1} {\gamma}^{n-1}. \end{align*} Recall that $\gamma \in [0,1-\alpha \mathopen]$ in $\mathfrak{C}_{\alpha}$. Let us denote $f(\gamma) = \sum_{n \geq k} \binom{2n+2q+k-3}{2n-1} {\gamma}^{n-1}$ so that: \begin{equation*} \lvert R^{2k}_q(g,\epsilon) \rvert \lesssim_k \alpha^{-1/2-k-2q} {\vert \epsilon \rvert}^{q} 2^q q!f(\gamma). \end{equation*} In order to conclude, it suffices to prove that $f(\gamma)$ is exponentially bounded in $q$ for all $\gamma \in [0,1-\alpha \mathopen]$. This is stated in the following lemma: \begin{lemma} At large $q$, for all $\gamma \in [0,1-\alpha\mathopen]$, we have that \begin{equation*} f(\gamma) \leq \frac{4q}{{(1-\sqrt{\gamma})}^{2q+k}}. \end{equation*} \end{lemma} \begin{proof*}{Proof.} We note that $\binom{2n+2q+k-3}{2n-1} = \frac{2q+k-1}{2n-1}\binom{2n+2q+k-3}{2n-2}$ and that for any $k$ and $n$, for $q$ large enough $\frac{2q+k-1}{2n-1} \leq 4q$ so that $\binom{2n+2q+k-3}{2n-1} \leq 4q \binom{2n+2q+k-3}{2n-2}$. This implies that for all $k \geq 1$: \begin{align} f(\gamma) &= \sum_{n \geq k} \binom{2n+2q+k-3}{2n-1} {\gamma}^{n-1} \leq 4q \sum_{n \geq k} \binom{2n+2q+k-3}{2n-2} {\gamma}^{n-1} \nonumber\\ &\leq 4q \sum_{n \geq k} \binom{2n+2q+k-3}{2n-2} {(\sqrt{\gamma})}^{2n-2} \leq 4q \sum_{n \geq 2k-2} \binom{n+2q+k-1}{n} {(\sqrt{\gamma})}^{n} \nonumber \\ &\leq 4q \sum_{n \geq 0} \binom{n+2q+k-1}{n} {(\sqrt{\gamma})}^{n} \;. \nonumber \end{align} In the second line we bound the even part of the series by the total series, using the positivity of the odd part. 
Observing that $\sum_{n \geq 0} \binom{n+2q+k-1}{n} \sqrt{\gamma}^n=\frac{1}{{(1-\sqrt{\gamma})}^{2q+k}}$, we are done. \end{proof*} Combining this lemma with eq.~\eqref{ineqsqrtx}, we can bound $f$ uniformly as $f(\gamma) \lesssim_k \frac{4q}{\alpha^{2q+k}}$ for all $\gamma \in [0,1-\alpha\mathopen]$. Denoting $C_\alpha = \alpha^{-2k-1/2}$ and $K_\alpha= 6\alpha^{-4}$, we obtain, for $q$ large enough and for all $k \geq 1$: \begin{align}\nonumber \lvert R^{2k}_{q,\psi}(g,\epsilon) \rvert \lesssim_k C_\alpha K_\alpha^q {\lvert \epsilon \rvert}^{q} q! \; , \end{align} with $C_\alpha$ and $K_\alpha$ independent of $g$ for $(g,\epsilon)\in\mathfrak{C}_{\alpha}$, which concludes the proof of Lemma~\ref{lemmaboundrest}.\end{proof*} \begin{remark} Note that, as $K_\alpha \sim O(1)\,\alpha^{-4}$ and $C_\alpha \sim {\alpha^{-2k-1/2}}$, our bounds deteriorate as $\alpha\to 0$, that is, when we take a subdomain closer and closer to the full $\mathfrak{C}$. \end{remark} \section*{Conclusion} The Loop Vertex Expansion made it possible to extract the logarithm of the partition function, obtain the maximal analyticity domain (Thm. \ref{THM1}), and the domain of $1/N$-Borel summability (Thm. \ref{THM2}) of the cumulants of the quartic $\grp{O}(N)$-vector model. \par The next step would be to adapt our analysis to the more involved case of a (Euclidean) quantum field theory. The first case of interest is the two dimensional quartic $\grp{O}(N)$-vector model, whose renormalisation is limited to the Wick ordering. Two dimensional quantum field theory was studied with a modification of the LVE known as the Multiscale LVE (MLVE) in \cite{Rivasseau2014aa}, where the Borel summability of the free energy in the coupling constant is established. This study should be generalized to an $\grp{O}(N)$-vector model. However, the adaptation of the (M)LVE beyond dimension two seems out of reach. \newpage
\section{Introduction} SU~UMa-type stars form a sub-group of dwarf novae characterized by the appearance of long and bright ``superoutbursts'', during which periodic modulations, ``superhumps'', are observed \citep{war85suuma}. Superhumps have periods slightly longer than orbital periods, which can be explained by a beat phenomenon of a precessing tidally-distorted eccentric disk. According to the tidal instability theory, an accretion disk becomes unstable against a tidal perturbation from a secondary star when the disk reaches the 3:1 resonance radius \citep{whi88tidal}. In conjunction with the thermal instability model for (normal) dwarf nova outbursts, the model for superoutbursts is called the thermal-tidal instability (TTI) model \citep{osa89suuma}. Superoutbursts are sometimes associated with a precursor typically lasting one or two days. This precursor phenomenon is actually expected from the TTI model. The precursor is considered to be a normal outburst leading to an expansion of the accretion disk over the 3:1 resonance radius and triggering a superoutburst. Growing superhumps have been detected during a decay phase from the precursor in T~Leo \citep{kat97tleo}, V436~Cen \citep{sem80v436cen}, and GO~Com \citep{ima04gocom}. These growing superhumps provide evidence for the TTI model since a system is predicted to reach a supermaximum with the growth of an eccentric disk. On the other hand, the original TTI model cannot explain gradually growing superhumps even after supermaxima without a precursor, which are also frequently observed \citep{sma96superoutburst}. \citet{osa03DNoutburst} propose a refinement of the original TTI model with the idea that the accretion disk can pass the 3:1 resonance radius and reach the tidal truncation radius. The dammed matter at the tidal truncation radius causes a gradual decay without a precursor. 
This refined TTI model predicts that SU~UMa stars having a large mass ratio ($q=M_2/M_1$, where $M_1$ and $M_2$ are the masses of a white dwarf and a secondary star, respectively) can show both types of superoutbursts, that is, those with and without a precursor. This idea should be examined by observations of the early evolution of superoutbursts and superhumps. The superhump period ($P_{\rm SH}$) in SU~UMa stars generally decreases through a superoutburst with a period derivative of order $\dot{P}_{\rm SH}/P_{\rm SH}\sim -10^{-5}$ \citep{war85suuma,pat93vyaqr}. A simple dynamical treatment for the tidal instability shows that the precession rate of the eccentricity wave is proportional to $r^{1.5}$, where $r$ is the disk radius \citep{osa85SHexcess}. The shortening of $P_{\rm SH}$ can, hence, be understood with the shrinkage of the disk during a superoutburst. Hydrodynamical simulations also show that the precessing eccentricity wave propagates inward, which causes the period shortening of superhumps \citep{lub92SH,whi94SH}. On the other hand, several short-period SU~UMa stars showing positive $\dot{P}_{\rm SH}/P_{\rm SH}$ have been discovered since the mid-1990s \citep{how96alcom,kat03v877arakktelpucma}. WZ~Sge-type stars, in particular, tend to show positive $\dot{P}_{\rm SH}/P_{\rm SH}$ \citep[e.g.][]{how96alcom,kat97egcnc}. The situation becomes more complicated because the ultra-short-period systems V485~Cen and EI~Psc also show positive $\dot{P}_{\rm SH}/P_{\rm SH}$ \citep{ole97v485cen,uem02j2329}. These two sources have quite large mass ratios ($q\sim 0.2$), though WZ~Sge stars have quite small mass ratios ($q\sim 0.01$). Based on the discussions for the ordinary negative $\dot{P}_{\rm SH}/P_{\rm SH}$, the positive $\dot{P}_{\rm SH}/P_{\rm SH}$ has been proposed to arise due to an expansion of the disk or an outward propagation of the eccentricity wave \citep{bab00v1028cyg,kat04egcnc}.
It is, however, poorly understood why the outward propagation can occur only in the short-period systems regardless of their mass ratios \citep{ish03hvvir}. TV~Crv is known as an SU UMa-type dwarf nova with a short orbital period of $0.06288\pm 0.00013$~d \citep{wou03tvcrv}. The discovery history of this object is summarized in \citet{lev90tvcrv}. \citet{how96tvcrv} reported superhumps with a period of $0.0650\pm 0.0008\;{\rm d}$ from observations of a superoutburst in 1994 June. This $P_{\rm SH}$ provides a superhump period excess $\varepsilon=(P_{\rm SH}-P_{\rm orb})/P_{\rm orb}=0.033\pm 0.009$. This value implies that TV~Crv may be peculiar in having a large period excess compared with other short-period systems \citep{pat01SH}. The error in $\varepsilon$ is, however, too large for this to be conclusive. Here we report observations of four superoutbursts of TV~Crv. Our observations of TV~Crv provide new clues for understanding the evolution of the superhump period in relation to the precursor phenomenon and the TTI model. In the next section, we describe our observation systems. In Sect.~3, we report the detailed behaviour of superoutbursts and superhumps of TV~Crv. We then discuss the implications of our results for the TTI model in Sects.~4 and 5. In Sect.~6, we compare and discuss our results with those for other known systems. Finally, we summarize our findings in Sect.~7.
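For reference, the superhump and orbital periods quoted above already fix the beat (precession) period of the eccentric disk through the standard relation $1/P_{\rm prec}=1/P_{\rm orb}-1/P_{\rm SH}$; a minimal Python check (the numbers are those cited in this section):

```python
P_orb = 0.06288   # orbital period (d), Woudt & Warner (2003)
P_sh = 0.0650     # superhump period (d), Howell et al. (1996)

eps = P_sh / P_orb - 1.0                    # superhump period excess
P_prec = 1.0 / (1.0 / P_orb - 1.0 / P_sh)   # apsidal precession (beat) period

print(round(eps, 3), round(P_prec, 2))      # 0.034  1.93
```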
\section{Observations} \begin{table} \caption{Journal of observations.}\label{tab:log} \centering \begin{tabular}{cccrr} \hline\hline ID & $T_{\rm start}$ & $\delta T$ & $N$ & Site\\ \hline 01-01 & 1957.1607 & 4.75 & 361 & Kyoto\\ 01-02 & 1958.1373 & 5.28 & 243 & Tsukuba\\ 01-03 & 1958.2718 & 2.12 & 176 & Kyoto\\ 01-04 & 1959.0578 & 7.21 & 542 & Kyoto\\ 01-05 & 1960.3133 & 0.75 & 64 & Kyoto\\ 01-06 & 1961.1367 & 4.90 & 374 & Kyoto\\ 01-07 & 1961.1470 & 3.04 & 70 & Tsukuba\\ 01-08 & 1963.2215 & 1.96 & 138 & Kyoto\\ 01-09 & 1964.0792 & 2.78 & 73 & Tsukuba\\ 01-10 & 1965.1255 & 5.17 & 436 & Kyoto\\ 01-11 & 1965.1483 & 3.81 & 94 & Tsukuba\\ 01-12 & 1966.1710 & 0.56 & 39 & Kyoto\\ 01-13 & 1969.1343 & 4.44 & 303 & Kyoto\\ 02-01 & 2427.9767 & 2.18 & 132 & Kyoto\\ 02-02 & 2428.0206 & 1.47 & 119 & Kyoto\\ 02-03 & 2428.9583 & 2.23 & 195 & Kyoto\\ 02-04 & 2428.9960 & 1.86 & 51 & Kyoto\\ 02-05 & 2429.0071 & 1.56 & 147 & Okayama\\ 02-06 & 2430.9581 & 1.16 & 103 & Kyoto\\ 02-07 & 2430.9627 & 2.34 & 182 & Kyoto\\ 02-08 & 2434.9773 & 2.03 & 161 & Kyoto\\ 03-01 & 2769.9589 & 6.18 & 327 & Craigie\\ 03-02 & 2776.9232 & 1.34 & 71 & Ellinbank\\ 03-03 & 2777.0997 & 2.88 & 31 & Kyoto\\ 03-04 & 2777.8617 & 2.18 & 111 & Ellinbank\\ 03-05 & 2780.1247 & 1.35 & 37 & Kyoto\\ 03-06 & 2781.0285 & 1.82 & 86 & Hida\\ 04-01 & 3160.9673 & 2.46 & 237 & Kyoto\\ 04-02 & 3161.9689 & 2.05 & 142 & Kyoto\\ 04-03 & 3162.4693 & 5.84 & 210 & Concepci\'on\\ 04-04 & 3163.4698 & 6.18 & 238 & Concepci\'on\\ 04-05 & 3164.9970 & 0.70 & 34 & Barfold\\ 04-06 & 3166.5550 & 3.81 & 182 & Concepci\'on\\ 04-07 & 3167.5709 & 3.40 & 195 & Concepci\'on\\ 04-08 & 3168.5882 & 2.86 & 176 & Concepci\'on\\ 04-09 & 3169.5472 & 1.67 & 57 & Concepci\'on\\ 04-10 & 3169.9654 & 2.63 & 123 & Kyoto\\ 04-11 & 3170.9610 & 3.06 & 189 & Kyoto\\ \hline \multicolumn{5}{l}{$T_{\rm start}=$HJD$-$2450000.}\\ \multicolumn{5}{l}{$\delta T=$Period of observations in hours.}\\ \multicolumn{5}{l}{$N$=Number of images.}\\ \end{tabular} \end{table} 
\begin{figure*} \centering \includegraphics[width=180mm]{2004fig1.ps} \caption{Light curves of the 2001 February/March (left) and 2004 June (right) superoutbursts. The filled circles, open circles, and open squares indicate our CCD observations, visual observations reported to VSNET, and observations by the ASAS-3 system, respectively \citep{ASAS3}. The vertical dashed lines in each panel show times when superhumps have the largest amplitude.}\label{fig:lc0104} \end{figure*} We conducted observational campaigns for four superoutbursts of TV~Crv which occurred in 2001 February--March, 2002 June, 2003 May, and 2004 June, through the VSNET Collaboration \citep{VSNET}. Photometric observations were performed with unfiltered CCD cameras attached to 30-cm class telescopes at Concepci\'on (2004), Kyoto (2001, 2002, 2003, and 2004), Tsukuba (2001), Okayama (2002), Craigie (2003), Ellinbank (2003), Hida (2003), and Barfold Observatory (2004). Our observation log is listed in Table~\ref{tab:log}. Each image was taken with an exposure time of $\sim 30$~s. After correcting for the standard de-biasing and flat fielding, we performed aperture and PSF photometry, then obtained differential magnitudes of the object using the nearby comparison star UCAC2~24840990 (14.43~mag). The constancy of this comparison star was checked using another nearby star, UCAC2~24840985 (14.57~mag). In this paper, we neglect any small differences of magnitude systems between the unfiltered CCD chips used at each observatory. Heliocentric time corrections were applied before the period analysis. \section{Results} \begin{table*} \caption{Observational properties of superoutbursts.}\label{tab:obssum} \centering \begin{tabular}{p{4cm}cccc} \hline \hline & 2001 & 2002 & 2003 & 2004\\ \hline Precursor & No & No? & No?
& Yes \\ $P_{\rm SH}$ (day) & 0.065028(0.000008) & 0.064981(0.000053) & 0.0674(0.0024) & 0.065023(0.000013) \\ $\dot{P}_{\rm SH}/P_{\rm SH}$ ($10^{-5}$) & 7.96(0.73) & --- & --- & $-0.32$(1.20) \\ Fading rate (mag d$^{-1}$) & 0.12(0.01) & 0.17(0.02) & 0.17(0.01) & 0.13(0.01)\\ Duration (day) & 12 & 10 & 12 & 12 \\ Time interval from the last superoutburst (day) & --- & 468 & 345 & 392 \\ \hline \end{tabular} \end{table*} Among the four superoutbursts, the evolution of superhumps was successfully detected, even in the early superoutburst phases, for the 2001 and 2004 superoutbursts. On the other hand, the 2002 and 2003 superoutbursts were observed rather sparsely. We first report the former two superoutbursts focusing on their different features, and then briefly report the latter, poorly observed ones. Properties of all superoutbursts are summarized in Table~\ref{tab:obssum}. See the following sections for detailed information about the values in this table. \subsection{The 2001 and 2004 Superoutbursts} The 2001 superoutburst was detected on February 18.392 (hereafter dates refer to UT) at a visual magnitude of 12.9. Visual observations reported to VSNET indicate that the object was fainter than 14.6~mag on February 17.517 and no pre-outburst activity is seen before February 18. The outburst was hence detected at a very early phase, within one day of its onset. The first time-series CCD observation was initiated on February 18.654, about 6 hours after the visual detection. The 2004 superoutburst was detected on June 4.362 (UT) at a visual magnitude of 13.0. Observations reported to VSNET indicate that it was fainter than 13.4~mag on May 28.399 (UT) and no pre-outburst activity is seen before June 4. The first time-series observation was initiated on June 4.463 (UT), about 2 hours after the visual detection. The light curves of the superoutbursts in February/March 2001 and June 2004 are shown in Fig.~\ref{fig:lc0104}.
The most noteworthy point in the light curves is their different behaviour during the first few days. While the light curve in 2001 is described by a monotonic fading, the light curve in 2004 shows a 0.4~mag rebrightening 1.7~d after the outburst detection. This observation reveals that the early outburst was actually a precursor of the later genuine supermaximum. In conjunction with the close monitoring of the object, we conclude that no precursor event was associated with the 2001 superoutburst. \begin{figure*} \centering \includegraphics[width=180mm]{2004fig2.ps} \caption{Light curves during early phases of the superoutbursts in 2001 and 2004. The abscissa and ordinate denote the time in HJD and the differential magnitudes, respectively. The magnitude system is normalized by subtracting the average magnitudes of each panel. We indicate the run ID number (Table~\ref{tab:log}) and typical errors in each panel.}\label{fig:pre0104} \end{figure*} We succeeded in obtaining time-series data during the early phase of the superoutbursts, which are shown in Fig.~\ref{fig:pre0104}. We also show the observation IDs (see Table~\ref{tab:log}) and typical errors in each panel. As can be seen in Fig.~\ref{fig:pre0104}, no superhump-like modulation appears except in the ``04-02'' run, in which a 0.3-mag hump is detected. The ``04-02'' run lasted 2.05~hr, which fully covers one orbital period of TV~Crv. Throughout this run, the object is on a rapid brightening trend at a rate of 2.6~mag~d$^{-1}$. The hump is superimposed on this brightening trend. This indicates that the temporary fading from the precursor had already terminated, and that the object had started brightening toward the supermaximum, during the ``04-02'' run. The other panels (``01-01'', ``01-02'', and ``04-01'') in Fig.~\ref{fig:pre0104} show modulations with rather small amplitudes ($\sim 0.1\;{\rm mag}$) and long timescales. No periodic signal is detected in these runs with our Fourier analysis in the period range of 10~s--0.1~d.
On the other hand, we note that possible short-term fluctuations of 0.1--0.2~mag amplitude with a timescale of $\sim 10\;{\rm min}$ can be seen in the ``01-02'' run. We detected superhumps after this early phase. In the case of the 2001 superoutburst, fully grown superhumps appeared on JD~2451959 (the ``01-04'' run). In the case of the 2004 superoutburst, on the other hand, the supermaximum coincided with the appearance of superhumps with the largest amplitude of $\sim 0.4\;{\rm mag}$ (the ``04-03'' run). \begin{figure*} \centering \includegraphics[width=180mm]{2004fig3.ps} \caption{Frequency-$\Theta$ diagrams for the 2001 (left) and 2004 (right) superoutbursts calculated by the PDM method. }\label{fig:pdm0104} \end{figure*} \begin{figure*} \centering \includegraphics[width=120mm]{2004fig4.ps} \caption{Superhump evolution during the 2001 (left) and 2004 (right) superoutbursts. The abscissa and ordinate denote the superhump phase and the differential magnitude, respectively. The phase is calculated with a superhump period of 0.065028~d and an arbitrary epoch. The differential magnitudes are normalized by each average magnitude, and are sorted with observation times which are indicated on the right vertical axis of each panel. See the text for detailed information.}\label{fig:hump0104} \end{figure*} \begin{figure*} \centering \includegraphics[width=180mm]{2004fig5.ps} \caption{$O-C$ diagrams of superhumps during the 2001 (left) and 2004 (right) superoutbursts. The abscissa and ordinate denote the cycle and the $O-C$ in days, respectively. The dashed line in the left panel is the best-fitting quadratic curve for the $O-C$ in 2001.}\label{fig:o-c0104} \end{figure*} A period analysis with the PDM method \citep{PDM} was performed after linear trends were subtracted from the light curves. We used light curves between JD~2451959.0 and 2451966.2 for the 2001 superoutburst and between JD~2453162.4 and 2453171.1 for the 2004 superoutburst.
The samples for the 2001 and 2004 superoutbursts contain 1830 and 1692 photometric points, respectively. The PDM analysis yielded the frequency--$\Theta$ diagrams shown in Fig.~\ref{fig:pdm0104}. The superhump periods are calculated to be $0.065028\pm 0.000008$~d (2001) and $0.065023\pm 0.000013$~d (2004). These agree with each other and also with the $P_{\rm SH}$ reported in \citet{how96tvcrv} ($0.0650\pm 0.0008\;{\rm d}$). Since the error of $P_{\rm SH}$ is smaller in 2001 than in 2004, we adopt the $P_{\rm SH}$ of TV~Crv to be $0.065028\pm 0.000008$~d in this paper. According to \citet{wou03tvcrv}, the orbital period of TV~Crv is $0.06288\pm 0.00013$~d, which yields a superhump period excess $\varepsilon=0.0342\pm 0.0021$. The 3.4\% superhump excess is relatively large for short-period SU~UMa systems \citep{pat03cvs}. Fig.~\ref{fig:hump0104} shows the evolution of the superhumps from the early phase, including the precursor, to the end of the superoutburst plateau. All light curves are folded with $P_{\rm SH}=0.065028$~d and an arbitrary epoch. The abscissa and ordinate denote the phase and the differential magnitude, respectively. We calculated the central time of each run and show it in the figure. We set the origin of the times at the ``01-04'' and ``04-03'' runs, in which superhumps had the largest amplitude. The differential magnitudes are normalized by each average magnitude, and are shifted by constants proportional to the times of each run in order to clearly compare the two sequences. The hump just before the supermaximum in 2004 has a peak phase roughly the same as those of later superhumps. This strongly indicates that the hump is actually a superhump, growing to the supermaximum, as observed in T~Leo \citep{kat97tleo}, V436~Cen \citep{sem80v436cen}, and GO~Com \citep{ima04gocom}. As can be seen from both panels, the amplitude of superhumps decreased in a few days, and then remained at a 0.2-mag peak-to-peak amplitude for 6~days.
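The quoted period excess and its uncertainty follow directly from the two periods via $\varepsilon = P_{\rm SH}/P_{\rm orb} - 1$. A short consistency check (simple quadrature error propagation on the period ratio is assumed here):

```python
import math

# Periods and 1-sigma errors quoted in the text (days)
P_sh, dP_sh = 0.065028, 0.000008    # superhump period (this work, 2001)
P_orb, dP_orb = 0.06288, 0.00013    # orbital period (Woudt & Warner)

# Superhump period excess
eps = P_sh / P_orb - 1.0

# Error on eps via quadrature on the ratio P_sh/P_orb
d_eps = (1.0 + eps) * math.hypot(dP_sh / P_sh, dP_orb / P_orb)
# -> eps ~ 0.0342 +/- 0.0021, as quoted in the text
```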
The 2001 and 2004 superoutbursts, thus, have quite similar characteristics regarding the evolution of superhump amplitudes. We determined the peak times of superhumps by taking the cross-correlation between the light curve and average profiles of superhumps. With the determined peaks and a $P_{\rm SH}$ of 0.065028~d, we calculated the $O-C$ of the superhump maximum timings, which is shown in Fig.~\ref{fig:o-c0104}. There is an obvious difference between the $O-C$ in the 2001 and 2004 superoutbursts. The $O-C$ clearly indicates an increase of $P_{\rm SH}$ with time in the case of the 2001 superoutburst. A quadratic fit to the $O-C$ yields a period derivative of $\dot{P}_{\rm SH}/P_{\rm SH}=(7.96\pm 0.73)\times 10^{-5}$. On the other hand, the $O-C$ is almost constant; in other words, $P_{\rm SH}$ was stable during the 2004 superoutburst. A quadratic fit yields $\dot{P}_{\rm SH}/P_{\rm SH}=(-0.32\pm 1.20)\times 10^{-5}$. This result indicates that the superhumps in the 2004 superoutburst have a quite small $\dot{P}_{\rm SH}/P_{\rm SH}$ compared with other systems \citep{kat03v877arakktelpucma}. We note that there is a slight phase shift at the hump just before the supermaximum in the 2004 superoutburst, as shown in the right panel of Fig.~\ref{fig:o-c0104}. The slight phase shift in the early stage implies that superhumps evolved with a rapid period change just before the supermaximum. Similar rapid period changes during very early phases are also known in T~Leo \citep{kat97tleo}, V1028~Cyg \citep{bab00v1028cyg}, and XZ~Eri \citep{uem04xzeri}. \subsection{The 2002 Superoutburst} The 2002 superoutburst was first detected on May 30.399 (UT) at a visual magnitude of 13.1. The ASAS-3 system records an earlier detection of the outburst on May 30.009 (UT) and a negative detection on May 21.048 (UT) \citep{ASAS3}. Unfortunately, there are no time-series data just after the outburst detection. The first run (the ``02-01'' run in Table~\ref{tab:log}) was initiated on June 2.476 (UT).
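The quadratic fit to the $O-C$ can be made concrete: for a linearly drifting period, the maximum timings follow $T(E) \approx T_0 + P_0 E + \frac{1}{2} P_0 (\dot{P}/P) E^2$, so the quadratic coefficient of the $O-C$ directly yields $\dot{P}_{\rm SH}/P_{\rm SH}$ per cycle. A minimal sketch on synthetic, noise-free timings (the epoch, cycle coverage, and fitting routine are illustrative assumptions, not the analysis code used in this work):

```python
import numpy as np

P0 = 0.065028          # superhump period (d), value adopted in the text
beta = 7.96e-5         # assumed Pdot_SH/P_SH per cycle (2001 result)

E = np.arange(0, 110)  # cycle counts over ~7 d of coverage
T0 = 0.0               # arbitrary epoch

# Synthetic maximum timings for a linearly drifting period
T = T0 + P0 * E + 0.5 * P0 * beta * E**2

# O-C against the linear ephemeris T0 + P0*E
oc = T - (T0 + P0 * E)

# Quadratic fit: coefficient c2 = P0*beta/2, so beta = 2*c2/P0
c2, c1, c0 = np.polyfit(E, oc, 2)
beta_fit = 2.0 * c2 / P0
```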
The light curve of the superoutburst is shown in Fig.~\ref{fig:lc2002}. The ``02-01'' run detected superhumps, establishing that this outburst was a superoutburst. Profiles of superhumps during this superoutburst are shown in Fig.~\ref{fig:hump2002}. Fig.~\ref{fig:o-c02} is the $O-C$ diagram of the superhumps. Although it contains only three points, this figure implies an increase of the superhump period, as observed in the 2001 superoutburst. \begin{figure} \centering \includegraphics[width=88mm]{2004fig6.ps} \caption{Light curve of the superoutburst in 2002 June. The symbols are the same as in Fig.~\ref{fig:lc0104}.}\label{fig:lc2002} \end{figure} \begin{figure} \centering \includegraphics[width=88mm]{2004fig7.ps} \caption{Superhump evolution during the 2002 superoutburst. The symbols in the figure are the same as in Fig.~\ref{fig:hump0104}.}\label{fig:hump2002} \end{figure} \begin{figure} \centering \includegraphics[width=88mm]{2004fig8.ps} \caption{$O-C$ diagram of superhumps during the 2002 superoutburst. The symbols in the figure are the same as in Fig.~\ref{fig:o-c0104}.}\label{fig:o-c02} \end{figure} \subsection{The 2003 Superoutburst} \begin{figure} \centering \includegraphics[width=88mm]{2004fig9.ps} \caption{Light curve of the superoutburst in 2003 May. The symbols in the figure are the same as in Fig.~\ref{fig:lc0104}.}\label{fig:lc2003} \end{figure} \begin{figure} \centering \includegraphics[width=88mm]{2004fig10.ps} \caption{Superhump evolution during the 2003 superoutburst. The symbols in the figure are the same as in Fig.~\ref{fig:hump0104}.}\label{fig:hump2003} \end{figure} The 2003 superoutburst was discovered by a visual observation on May 9.546 (UT) at 13.1~mag. The latest negative visual observation had been reported on May 6.412 (UT) (fainter than 14.6~mag), three days before the outburst detection. The first time-series observation was initiated on May 10.458 (UT), about one day after the outburst detection.
Considering the rapid evolution during the precursor in the 2001 superoutburst, we cannot exclude the possibility that the 2003 superoutburst had a precursor between May 6 and 9. The light curve of this outburst is shown in Fig.~\ref{fig:lc2003}. The first run, ``03-01'', clearly detected fully grown superhumps, as shown in Fig.~\ref{fig:hump2003}, revealing that this was another superoutburst. Owing to insufficient observations, we find no hint of significant period changes of the superhumps. \section{Implication for the TTI model} The observational properties of the four superoutbursts are summarized in Table~\ref{tab:obssum}. TV~Crv is a typical short-orbital-period SU~UMa-type dwarf nova. Its supercycle is calculated to be $402\pm 51$~d from the three time intervals between superoutbursts listed in Table~\ref{tab:obssum}. This supercycle is also a typical value for SU~UMa stars. A noteworthy feature of TV~Crv is its superhump excess (3.4\%), which is relatively large for short-period systems, but not extraordinary \citep{pat03cvs}. It is well known that the superhump period excess is related to the superhump period \citep{pat03cvs}. From the theoretical point of view, this can be understood since the precession velocity of the eccentric disk depends on the disk radius and the mass ratio of the binary system. The superhump period excess, $\varepsilon$, can be expressed as \citep{osa85SHexcess}: \begin{eqnarray} \varepsilon= {3 \over 4}{q \over \sqrt{1+q}} \left( {r_{\rm d} \over a} \right)^{3/2}. \end{eqnarray} Assuming a certain disk radius at which the tidal mode is excited, one can describe the superhump period excess as a function of the superhump period \citep{min92BHXNSH}. The large superhump excess of TV~Crv, therefore, implies a relatively large mass ratio among short-period SU~UMa stars. The empirical relationship in \citet{pat01SH} yields a mass ratio of $q=0.16\pm 0.01$ for TV~Crv.
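These numbers can be cross-checked with a short script. The linear approximation $\varepsilon \approx 0.216\,q$ to the \citet{pat01SH} relation is a common simplification and an assumption here (the exact published calibration may differ); inverting the precession formula then gives the disk radius implied by the measured excess:

```python
import math

eps = 0.0342   # superhump period excess of TV Crv (this work)

# Assumed linear approximation to the Patterson (2001) eps-q relation
q = eps / 0.216          # -> ~0.16, matching the value quoted in the text

# Osaki (1985) precession formula: invert for the disk radius r_d/a
# that reproduces eps at this mass ratio (illustrative check only)
rd_over_a = (eps * 4.0 * math.sqrt(1.0 + q) / (3.0 * q)) ** (2.0 / 3.0)
# -> a plausible disk radius of roughly half the binary separation
```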
On the other hand, it is possible that the large superhump excess is partly caused by an unusually large disk radius. The large mass ratio of TV~Crv should be confirmed by spectroscopic observations in the future. Our observations reveal that TV~Crv experiences two types of superoutbursts, that is, one with a precursor and the other without a precursor. Similar morphology studies of superoutburst light curves have been performed for VW~Hyi, which also shows the two types of superoutbursts \citep{bat77vwhyi,mar79superoutburst}. VW~Hyi is a typical SU~UMa-type dwarf nova with a relatively long orbital period of 0.074271~d \citep{DownesCVatlas3}. Our observations of TV~Crv are the first to show that these two types of superoutbursts appear even in short orbital period systems. To explain the behaviour of VW~Hyi, \citet{osa03DNoutburst} propose the refined TTI model, in which the type of superoutburst depends on the maximum radius of the accretion disk. When the accretion disk reaches the tidal truncation radius, the dammed matter prevents a cooling wave from propagating through the disk, leading to a superoutburst without a precursor. In this view, a large mass ratio is required for a system to achieve a situation in which the tidal truncation radius lies just beyond the 3:1 resonance radius. On the other hand, when the disk fails to reach the tidal truncation radius, a rapid fading initiates. This fading is terminated, and the object rebrightens to a supermaximum due to the growth of tidal dissipation. In this case, a large mass ratio is also required for a rapid growth of tidal dissipation before the object returns to quiescence. VW~Hyi has a superhump excess of 3.9\% \citep{vaname87vwhyi}, which yields a mass ratio of $q=0.18$ from the empirical relationship in \citet{pat01SH}. \citet{tap03CVatlas} reported $q\sim 0.14$ for VW~Hyi based on their spectroscopic observations.
The mass ratio of TV~Crv is possibly closer to that of VW~Hyi than to those of ordinary short-period SU~UMa stars \citep{pat01SH}. Although TV~Crv is a short-period system, we propose that it has a relatively large mass ratio. According to \citet{osa03DNoutburst}, a system having a large mass ratio ($q\sim 0.2$) can experience the two types of superoutburst. The behaviour of TV~Crv can, therefore, be explained by the refined TTI model; furthermore, it possibly provides evidence that the mass ratio plays a key role in the morphology of superoutburst light curves. \section{Presence of a precursor and superhump evolution} The most important and unforeseen finding in our observations is that the $\dot{P}_{\rm SH}/P_{\rm SH}$ can be variable in distinct superoutbursts of one system. This is clearly shown in Table~\ref{tab:obssum}; a positive $\dot{P}_{\rm SH}/P_{\rm SH}$ in the 2001 superoutburst and an almost constant $P_{\rm SH}$ in the 2004 superoutburst. Besides the difference in $\dot{P}_{\rm SH}/P_{\rm SH}$, the other observational difference between these two superoutbursts is the presence or absence of a precursor. There was no precursor in the 2001 superoutburst, while a clear precursor was observed in 2004. Our observations hence indicate that the $\dot{P}_{\rm SH}/P_{\rm SH}$ is related to the precursor phenomenon. As mentioned above, the TTI model suggests that the appearance of a precursor depends on whether the disk reaches the tidal truncation radius or not. Based on this idea, at the time when superhumps are fully grown, the disk size should be different in the two types of superoutbursts. In the case of the precursor-main type outburst, the disk size is around the 3:1 resonance radius at the supermaximum. On the other hand, in the case of the superoutburst without a precursor, the hot disk can remain larger than the 3:1 resonance radius due to the dammed matter at the tidal truncation radius.
The accretion disk can, hence, have a relatively large amount of gas beyond the 3:1 resonance radius even a few days after the supermaximum, when superhumps are fully grown. We therefore propose that the $\dot{P}_{\rm SH}/P_{\rm SH}$ is related to the amount of gas around and beyond the 3:1 resonance radius. We now present an idea of how the disk size actually affects the eccentric disk evolution. We first consider the standard picture of the eccentric disk evolution. In an early phase of an outburst, the rapid excitation of the eccentric mode stops when the angular momentum removal by the tidal dissipation is balanced by the input angular momentum transferred from the inner region. In the case of the precursor-main type outburst, the accretion disk then shrinks below the 3:1 resonance radius \citep{whi94SH}. The eccentricity wave can only propagate inward, since the tidal mode is no longer excited. In the case of the superoutburst without a precursor, on the other hand, we can expect a large amount of gas beyond the 3:1 resonance radius at that time. We conjecture that the eccentric mode can remain excited because the disk radius presumably remains larger than the 3:1 resonance radius. The positive $\dot{P}_{\rm SH}/P_{\rm SH}$ can be explained by a gradual outward propagation of the eccentricity wave. It is, however, unclear whether the outward propagation is possible only with the large disk. The outward propagation essentially requires an additional input of angular momentum from an inner region. The gas in the inner region might be swept up, giving additional angular momentum to the outermost area of the eccentricity wave. This additional supply of angular momentum would keep the disk size large and sustain the continuous excitation of the eccentric mode.
\citet{ole03ksuma} propose that the $\dot{P}_{\rm SH}/P_{\rm SH}$ is negative at the beginning and the end of the superoutburst, but positive in the middle phase, for several SU~UMa-type dwarf novae. Based on our scenario, the duration of the positive $\dot{P}_{\rm SH}/P_{\rm SH}$ depends on the amount of gas which enables the continuous excitation of the eccentric mode. The transition from a positive $\dot{P}_{\rm SH}/P_{\rm SH}$ to a negative one may be explained by the depletion of this gas. The above discussion is summarized in the following two ideas: i) At the time when superhumps are fully grown, the accretion disk remains larger in the superoutburst without a precursor than in the precursor-main type superoutburst. ii) Even after that, the eccentric mode remains excited throughout the superoutburst. These ideas should be tested by hydrodynamical simulations. \section{Discussion} \begin{figure} \centering \includegraphics[width=88mm]{2004fig11.ps} \caption{The superhump period derivative against the superhump period for the SU~UMa-type dwarf novae listed in Table~\ref{tab:ref}. The open circles indicate type A superoutbursts, which have a precursor. The filled circles indicate type B superoutbursts, in which a delay of superhump growth is observed. The filled squares indicate WZ~Sge-type dwarf novae. The other points, indicated by crosses, are objects whose outburst types are unknown. The figure focuses on objects whose outburst types are known. We hence omit three unknown-type dwarf novae, KK~Tel and MN~Dra (exceptionally large period derivatives) and TU~Men (a long superhump period), from this figure. We only show positive values of period derivatives for KS~UMa and TT~Boo, in which changes of the period derivative have been observed.}\label{fig:ref} \end{figure} We revealed that the $\dot{P}_{\rm SH}/P_{\rm SH}$ is variable in distinct superoutbursts of TV~Crv.
This result should be confirmed by observations of other sources in the future, since no data on variations of the $\dot{P}_{\rm SH}/P_{\rm SH}$ between different types of superoutburst are currently available for other sources. On the other hand, it is valuable to investigate the relationship between $\dot{P}_{\rm SH}/P_{\rm SH}$ and the type of superoutburst in known systems. To this end, we collected a sample of 40 dwarf novae and one X-ray binary whose $\dot{P}_{\rm SH}/P_{\rm SH}$ is published, as listed in Table~\ref{tab:ref}. We classify the morphology of superoutburst light curves into two types: type ``A'' and type ``B''. Type A is defined by the detection of a precursor, in other words, the precursor-main type superoutburst. Type B, on the other hand, is defined by the detection of a delay of the superhump growth after a supermaximum. In our sample listed in Table~\ref{tab:ref}, we find 6 and 7 cases for types A and B, respectively. There is no system having both features. WZ~Sge-type dwarf novae are indicated by ``WZ'' in Table~\ref{tab:ref} because their superhump evolution is peculiar; they have an early hump era followed by an ordinary superhump era \citep{kat96alcom}. The sample in Table~\ref{tab:ref} has 8 cases for 5 WZ~Sge stars. The types of superoutburst are unclear in the other 29 cases owing to insufficient observations during the early phases of the superoutbursts. The $\dot{P}_{\rm SH}/P_{\rm SH}$ values are shown against $P_{\rm SH}$ in Fig.~\ref{fig:ref}. As mentioned above, WZ~Sge-type systems tend to show positive $\dot{P}_{\rm SH}/P_{\rm SH}$, as indicated by the filled squares in Fig.~\ref{fig:ref}. In these systems, the long recurrence time and the lack of normal outbursts lead to a huge amount of accumulated gas compared with ordinary SU~UMa systems. At the onset of their outbursts, the accretion disk, hence, violently expands beyond the 3:1 resonance radius.
The large disk in WZ~Sge stars may partly be due to a continuous expansion of their quiescent disks, as proposed in \citet{min98wzsge}. This situation in WZ~Sge systems is similar to the type B outburst in TV~Crv discussed in the last section. \citet{kat04egcnc} propose a scenario analogous to that described in the last section for the positive $\dot{P}_{\rm SH}/P_{\rm SH}$ in WZ~Sge-type dwarf novae. The difference between WZ~Sge systems and TV~Crv is the mechanism that generates a large disk beyond the 3:1 resonance radius. In the case of WZ~Sge systems, \citet{osa03DNoutburst} propose that the large disk is maintained by the strong tidal removal of angular momentum at the 2:1 resonance radius. The disk can reach the 2:1 resonance radius because of the large amount of accumulated matter. In the case of the type B outbursts of TV~Crv, the large disk is maintained at the tidal truncation radius. This is due to a high mass ratio, which places the tidal truncation radius just beyond the 3:1 resonance radius. We can therefore consider that a similar physical condition appears in the type B superoutbursts and the WZ~Sge-type superoutbursts, in terms of the superhump evolution. In Fig.~\ref{fig:ref}, we show these objects as filled symbols (squares for WZ~Sge stars and circles for type B) and the type A superoutbursts as open circles. We can see a tendency for the B and WZ types generally to have larger $\dot{P}_{\rm SH}/P_{\rm SH}$, as expected from our scenario. This figure, however, also shows two exceptions to this tendency (GO~Com and V1251~Cyg). The nature of these objects is an open issue. We need to obtain their $\dot{P}_{\rm SH}/P_{\rm SH}$ in another superoutburst to investigate possible variations. The two systems having the shortest $P_{\rm SH}$ in Fig.~\ref{fig:ref} are V485~Cen and EI~Psc. While they have ultra-short orbital periods, their secondaries are relatively massive \citep{aug93v485cen,tho02j2329}.
The superhump period excess and mass ratio of EI~Psc are $\varepsilon=0.040$ and $q=0.19$, respectively, which are actually larger than those of TV~Crv and VW~Hyi \citep{uem02j2329letter}. According to the refined TTI model, their accretion disks can reach the tidal truncation radius and remain active in the eccentric mode throughout a superoutburst. Their high $\dot{P}_{\rm SH}/P_{\rm SH}$ can, hence, be naturally explained with our scenario. Observations of the onset of their superoutbursts are encouraged in order to reveal their outburst types. The only X-ray binary in Table~\ref{tab:ref}, XTE~J1118+480, is a black hole X-ray binary (BHXB) having a quite low mass ratio $q=0.05$ \citep{wag01j1118}. This object is unique in that it is the only BHXB for which the $\dot{P}_{\rm SH}/P_{\rm SH}$ has been significantly determined. Although the low mass ratio implies a situation similar to WZ~Sge-type dwarf novae, its $\dot{P}_{\rm SH}/P_{\rm SH}$ is slightly but significantly negative, as listed in Table~\ref{tab:ref}. On the other hand, its main outburst has a precursor, which is reminiscent of the precursor-main type superoutburst in SU~UMa systems \citep{kuu01j1118}. The accretion disk radius was probably just around the 3:1 resonance radius at the ``supermaximum'' of XTE~J1118+480. This rather small disk may cause the inward propagation of an eccentricity wave in this low-$q$ system. \section{Summary} Our findings through observations of four superoutbursts of TV~Crv are summarized below:\\ i) We accurately determined the superhump period to be $0.065028\pm 0.000008$~d. \\ ii) In conjunction with the orbital period in \citet{wou03tvcrv}, the superhump period yields a high superhump period excess of $0.0342\pm 0.0021$. This implies that TV~Crv has a relatively large mass ratio compared with other short-period SU~UMa systems.
Using the empirical relationship for the superhump mass ratio in \citet{pat01SH}, the mass ratio of TV~Crv is estimated to be $q=0.16\pm 0.01$.\\ iii) TV~Crv experiences two types of superoutbursts: one with a precursor and the other without. This behaviour can be interpreted with the refined thermal-tidal instability model if TV~Crv has a relatively large mass ratio in spite of its short orbital period.\\ iv) We show that the superhump period derivative is variable in distinct superoutbursts. The difference is apparently related to the presence or absence of a precursor.\\ v) We propose that the eccentric mode remains excited while the accretion disk remains larger than the 3:1 resonance radius. This scenario can explain the behaviour of TV~Crv, and is furthermore consistent with the systematically large period derivatives in superoutbursts without a precursor.\\ We greatly appreciate the valuable comments made by Dr. Shin Mineshige. This work is supported by the Grant-in-Aid for the 21st Century COE ``Center for Diversity and Universality in Physics'' from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan. RM acknowledges grant Fondecyt 1030707. PN acknowledges the Curry Foundation and the AAVSO. This work is partly supported by a Grant-in-Aid from the Japanese Ministry of Education, Culture, Sports, Science and Technology (Nos. 13640239, 15037205). Part of this work is supported by a Research Fellowship of the Japan Society for the Promotion of Science for Young Scientists. \bibliographystyle{aa}
\section{Introduction} In the next decade, neutral hydrogen may become the ultimate cosmological probe. The uniqueness of \textsc{Hi} stems from the expectation that, in principle, its presence can be detected, and its properties characterized, during every epoch of cosmological history. Immediately following recombination, during a period commonly called the Dark Ages, neutral hydrogen was the dominant form of baryonic matter, existing everywhere as a diffuse, pervasive gas. Eventually, though, gravitational instabilities in this gas, induced by primordial matter density fluctuations, led to the formation of the first stars, galaxies, and quasars. New radiative processes from these collapsed objects exerted, for the first time, substantial feedback on the diffuse \textsc{Hi} that remained in the intergalactic medium (IGM), and vast regions of neutral hydrogen became ionized. After some time, only localized pockets of \textsc{Hi} were left in, and around, galaxies. These galactic \textsc{Hi} regions persist to the present day and play active roles in the evolution of galaxies. The transition period between these two primary eras---one characterized by large amounts of diffuse \textsc{Hi} in the IGM, and the other by small regions of localized \textsc{Hi} in galaxies---is known as the epoch of reionization (EOR). It is constrained to occur between $6\lesssim z \lesssim15$ by existing observations, although with considerable uncertainty in the specific details of the transition. Unlike the Lyman series of electronic transition lines, the \textsc{Hi} 21~cm hyperfine spin-flip transition line does not suffer an optical depth problem at high redshift and the IGM remains optically thin over all redshifts. The redshifted 21~cm line, whether in emission or absorption, is weak (particularly compared to other sources in the radio sky), however, and detecting and characterizing 21~cm emission or absorption at cosmological distances is very difficult. 
All-sky surveys to detect galaxies in 21~cm emission, such as the Arecibo Legacy Fast ALFA Survey (ALFALFA) \citep{2005AJ....130.2598G}, have reached to only $z\lesssim0.1$. And the highest confirmed detections of 21~cm absorption in damped Lyman-$\alpha$ systems are at only $z\lesssim3$ \citep{2007MNRAS.375.1528K}. Overcoming the sensitivity limits needed to thoroughly characterize 21~cm emission and absorption from $0 < z \lesssim 15$ is one of the primary motivations for the Square Kilometer Array\footnote{http://www.skatelescope.org} (SKA). But moving from the current paradigm, where detections of discrete objects are difficult even at the lowest redshifts, to a large, successful \textsc{Hi} science program with the SKA is a path filled with significant uncertainty and numerous challenges. \section{\textsc{Hi} Cosmology} The redshifted 21~cm radiation targeted by the SKA and other upcoming experiments falls in the frequency range $10 \lesssim \nu < 1420$~MHz for \textsc{Hi} below $z\lesssim200$. The potential contributions of \textsc{Hi} to cosmology can be roughly divided into three regimes defined by frequency, or redshift, throughout this range. The three regimes are (see the recent review at \cite{2006PhR...433..181F} for details): \textit{Inflationary Physics [$30 < z < 200, \; \; \; 46 > \nu > 7$~MHz]}. At very high redshifts, \textsc{Hi} is expected to be a clean tracer of baryonic matter. As such, constraining the anisotropic fluctuations of the redshifted 21 cm signal could probe the matter power spectrum at very small scales of order $\ell \gtrsim 10^4$ to $10^6$ that are unattainable with CMB anisotropy measurements or galaxy surveys. These observations would be able to constrain perturbations to the primordial power spectrum and spatial curvature (including parameters such as $n_s$ and $\alpha_s$), neutrino masses, non-Gaussianity, and inflationary models. 
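The redshift boundaries quoted for these regimes follow directly from the redshifted line frequency, $\nu_{\rm obs} = \nu_{21}/(1+z)$ with $\nu_{21} \approx 1420.4$~MHz. A minimal sketch of this mapping (the function name is illustrative):

```python
NU_21 = 1420.406  # rest-frame frequency of the HI spin-flip line (MHz)

def nu_obs(z):
    """Observed frequency (MHz) of the redshifted 21 cm line."""
    return NU_21 / (1.0 + z)

# Regime boundaries quoted in the text:
# z = 6   -> ~203 MHz (end of reionization)
# z = 30  -> ~46 MHz  (onset of the inflationary-physics regime)
# z = 200 -> ~7 MHz   (approach to recombination)
```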
At such high redshifts, the perturbations in the matter density fluctuations should be linear and relatively easy to interpret. In addition, the anisotropy power spectrum from \textsc{Hi} is three-dimensional since the signal is a spectral line (as opposed to the two-dimensional CMB arising from continuum emission), and thus contains more Fourier modes than the CMB, potentially providing less cosmic sample variance and better cosmological parameter constraints. The process of baryon collapse into dark matter gravitational potential wells following recombination also should be seen to unfold with redshift in \textsc{Hi} observations during this era. \textit{Dark Ages and Reionization [$6 < z < 30, \; \; \; 203 > \nu > 46$~MHz]}. Redshifted 21 cm emission and absorption offers truly unparalleled views into the evolution of the IGM during the crucial times associated with the formation of the first stars, galaxies, and quasars. Measurements of both the mean (global) redshifted 21 cm brightness temperature and the fluctuation power spectrum (both with characteristic amplitudes of order 10~mK) should yield the spin and kinetic temperature histories of the IGM and the reionization history. Indirectly, these measurements probe the early star formation history and the nature of the luminous sources responsible for ionizing photons. Cross-correlation of (even low signal-to-noise) \textsc{Hi} maps with CMB maps or planned high-redshift galaxy surveys could add additional insight into the processes responsible for reionization. High signal-to-noise maps during this era could yield images of Stromgren spheres (ionized bubbles) around individual quasars and tomographic maps of the IGM. Together, these measurements could determine the abundance of mini-halos during the Dark Ages and weakly constrain magnetic fields in the IGM. 
Recent efforts suggest that cosmological parameter estimation solved simultaneously with parameterized reionization models in measured power spectra over a range of redshifts could yield valuable improvements to the parameter limits achievable even with Planck (in conjunction with other cosmological data sets). This process exploits the three-dimensional nature of the \textsc{Hi} power spectrum to separate cosmological and astrophysical contributions using velocity-field effects inherent in redshift-space distance measurements. Although not a primary driver of \textsc{Hi} cosmology, characterization of 21~cm absorption by \textsc{Hi} along the line-of-sight toward bright, high-redshift sources such as active galactic nuclei (AGN), star-forming galaxies, and gamma-ray bursts (GRBs) should be possible during this era, as well, with the SKA. And finally, \textsc{Hi} maps and power spectra from this era could provide source planes for weak lensing studies of the matter distribution at lower redshifts. \textit{Galaxy Evolution and Dark Energy [$0 < z < 6, \; \; \; 1420 > \nu > 203$~MHz]}. Following reionization, localized galactic clumps of \textsc{Hi} can be detected individually and cataloged (with $\gtrsim 10^8$ entries expected for the SKA), or characterized using the same diffuse mapping approaches needed to exploit \textsc{Hi} in the IGM during and before reionization. Both of these approaches could lead to very accurate estimates of the matter power spectrum suitable for characterizing baryon acoustic oscillations (BAOs) and constraining the nature of Dark Energy. In addition, $\Omega_{HI}(z)$ should be well constrained. These measurements are also very valuable for studying galaxy evolution, including, in the case of diffuse maps, through cross-correlations with galaxy surveys divided by galaxy type or environment and redshift.
And, as already demonstrated below $z\lesssim3$, 21~cm absorption by \textsc{Hi} in damped Lyman-$\alpha$ systems and other objects along the path toward bright, discrete sources is an existing technique that will only expand with the SKA and other new instruments. \section{Challenges and Unknowns} The potential rewards of \textsc{Hi} cosmology are compelling, but the challenges are substantial. Experiments operating in the frequency range needed for redshifted 21 cm measurements face three general categories of hurdles: \textit{Radio Frequency Interference}. The frequencies that will be targeted are commonly used for television, FM radio, satellite, cellular phone, and other communications transmissions. The candidate sites for the SKA, in Western Australia and South Africa, are in remote locations to avoid these radio communications, but it will almost certainly be impossible to completely eliminate all of the sources of interference. No telescope has ever reached the sensitivity levels planned for \textsc{Hi} observations with the SKA and it is unknown just how deep the integrations will be able to go before reaching a limiting interference floor. \textit{Ionosphere}. Turbulence in the Earth's ionosphere refracts low frequency radio waves (particularly relevant below $\sim300$~MHz) and causes distortions in the apparent location and magnitude of signals originating from above the ionosphere. Correcting for these distortions over wide fields-of-view and very long baselines should be possible \cite{2004SPIE.5489..180C}, but has never been demonstrated under the full set of conditions that will exist for the most ambitious planned high-redshift 21~cm observations. \textit{Astrophysical emission}. Astrophysical foregrounds are $10^3$ to $10^9$ times brighter than the predicted $\sim1$ to 100~mK redshifted 21~cm signal everywhere on the sky. 
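The scale of this foreground problem can be gauged by evaluating the approximate high-latitude synchrotron scaling, $T_{sky}(\nu) \approx 250 (\nu/150~{\rm MHz})^{-2.6}$~K, at the redshifted 21~cm frequency. A minimal sketch (the $T_{rcv}\approx50$~K receiver temperature is the typical value for radio instruments; function names are illustrative):

```python
def t_sky_kelvin(freq_mhz):
    """Approximate high-latitude Galactic synchrotron temperature,
    T_sky ~ 250 (nu / 150 MHz)^-2.6 K."""
    return 250.0 * (freq_mhz / 150.0) ** -2.6

def t_sys_at_redshift(z, t_rcv=50.0):
    """Sky-dominated system temperature at the redshifted 21 cm frequency,
    assuming a typical receiver temperature T_rcv ~ 50 K."""
    return t_sky_kelvin(1420.406 / (1.0 + z)) + t_rcv

# At z = 8 the sky adds a few hundred K; by z = 30 the system
# temperature exceeds ~5000 K, which is why the highest-redshift
# regime demands such large collecting areas.
```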
Galactic synchrotron radiation tends to be the dominant contribution over most of the frequency range, but extragalactic continuum point sources are also strong and numerous. Galactic radio recombination lines (RRLs) and free-free emission from electrons in both the Galaxy and the IGM should be present at roughly the level of the expected signal. Synchrotron radiation from high Galactic latitudes has a frequency-dependent intensity of approximately $T_{sky}(\nu) \approx 250 (\nu/150$~MHz$)^{-2.6}$~K. It is also linearly polarized at the percent level ($\sim1$~K) and exhibits a high degree of structure due to Faraday rotation by the interstellar medium. The astrophysical foregrounds set not only stringent instrumental calibration and data analysis performance requirements, but dictate the system temperatures of the instruments since $T_{sky}$ is well above the typical receiver temperatures ($T_{rcv}\approx50$~K) of radio instruments, particularly for \textsc{Hi} measurements targeting redshifts above $z\gtrsim6$ ($\nu\lesssim200$~MHz). Therefore, the foreground sky temperature governs the collecting area needed for the \textsc{Hi} science drivers of the SKA and other experiments and, in general, forces the arrays to be very large. For this reason, it is unlikely that the inflationary physics regime of \textsc{Hi} cosmology above $z\gtrsim30$ will be accessible for many years since, in this regime, $T_{sys} \approx T_{sky} \gtrsim 5000$~K. Mitigating the astrophysical foregrounds is anticipated to require a multi-stage foreground subtraction effort that exploits the power-law-like spectral smoothness of all the foreground categories (except RRLs) by fitting and subtracting low-order polynomials along each line-of-sight in an interferometric data cube. First, deconvolution of bright continuum sources within the target field will be performed using techniques similar to iterative ``peeling''. 
This will be followed by subtraction of the predicted contributions (due to calibration uncertainties and other instrumental paths) of both discrete and diffuse foregrounds from outside the primary target fields using global sky models. Interferometric measurements will still be dominated after this process by faint confusion-level sources that are mixed by the array beam sidelobes. Recent efforts suggest that, by careful instrumental design and data processing, the contribution of these sources can be subtracted even in ``dirty'' sky maps. In principle, this multi-stage process should be sufficient to reveal the redshifted 21 cm signal. However, it is also anticipated that polarized foregrounds will leak into the desired intensity measurement through mis-calibrations. Techniques for mitigating this contamination are an area of ongoing development, but approaches based on separation of signal and foreground through rotation measure synthesis appear encouraging. In addition, model errors and fitting uncertainties in the foreground subtraction process could manifest themselves at this stage (if not earlier) and methods for identifying and ameliorating these effects have been proposed \cite{2006ApJ...648..767M}. The last step will be data consistency cross-checks to confirm that foregrounds have been separated to the required precision. These could include cross-correlation of different Stokes maps, inspection of noise correlation matrices, comparison of measurements at different redshifts (such as at $z=8$ when a reionization signature should be present and at $z=5$ when virtually no redshifted 21 cm contribution should exist), or cross-correlation of \textsc{Hi} maps and galaxy surveys. All of the terrestrial and astrophysical foregrounds will severely complicate observations of weak redshifted 21~cm emission in the high-redshift IGM, as well as in galaxies at lower redshifts. 
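The core per-line-of-sight polynomial fitting step of this multi-stage subtraction can be sketched on a single synthetic spectrum. The band, channel count, and toy 10~mK "signal" below are assumptions for illustration only; a real pipeline operates on calibrated interferometric data cubes:

```python
import numpy as np

freq = np.linspace(140.0, 160.0, 200)            # MHz, one line of sight

foreground = 250.0 * (freq / 150.0) ** -2.6      # smooth power-law synchrotron, K
signal = 0.01 * np.sin(2 * np.pi * freq / 1.0)   # toy ~10 mK spectral fluctuation
spectrum = foreground + signal

# Fit a low-order polynomial in log-log space: the power-law foreground is
# smooth (in fact linear) there, while fast spectral structure is left behind.
logf = np.log10(freq / 150.0)                    # centered for conditioning
coeffs = np.polyfit(logf, np.log10(spectrum), deg=2)
smooth_model = 10.0 ** np.polyval(coeffs, logf)

residual = spectrum - smooth_model               # approximately recovers the signal
```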
With little current empirical knowledge about their detailed properties to guide the development of redshifted 21~cm experiments and with a large range of instrumental approaches still under consideration for the SKA, learning more about the foregrounds and the consequences of instrumental design choices on mitigating them is at the core of ``what we need to know'' to unleash the full cosmological potential of neutral hydrogen. Thus, to a large degree, what we need to know to enable \textsc{Hi} cosmology is fundamentally intertwined with questions regarding what we need to know to successfully develop the SKA and other new redshifted 21~cm radio arrays. \section{Experimental Parameter Space} Addressing these questions and hurdles now rests firmly on a wave of new experiments, including the Murchison Widefield Array\footnote{http://www.haystack.mit.edu/ast/arrays/mwa} (MWA) \cite{2005SPIE.5901..124S, 2005ApJ...634..715W, 2006ApJ...638...20B, 2006ApJ...653..815M, 2007ApJ...661....1B, 2007AJ....133.1505B}, the Precision Array to Probe the Epoch of Reionization (PAPER), and the Low Frequency Array\footnote{http://www.lofar.org} (LOFAR) \cite{2005MNRAS.360L..64Z, 2006MNRAS.369L..66V, 2006MNRAS.373..623R, 2006ApJ...653..815M}, as well as pilot projects on existing facilities such as the Giant Metre-wave Radio Telescope (GMRT) and with existing low-redshift data sets from the \textsc{Hi} Parkes All Sky Survey (HIPASS) \cite{2001MNRAS.322..486B, 2007MNRAS.378..301B, 2008arXiv0802.3239P} and other observations. 
In addition to these efforts, the Allen Telescope Array\footnote{http://ral.berkeley.edu/ata} (ATA) and the Expanded Very Large Array\footnote{http://www.aoc.nrao.edu/evla} (EVLA) will, in many ways, act as SKA technology pathfinders as they become increasingly operational over the next few years, as will several explicit SKA precursor projects including the Australian SKA Precursor\footnote{http://www.atnf.csiro.au/projects/askap} (ASKAP) and the South African MeerKAT\footnote{http://www.ska.ac.za/meerkat}. In addition, new projects not on the pathway to the SKA are tackling measurements of the mean redshifted 21~cm signal as a function of redshift. These include the Compact Reionization Experiment (CoRE) and the Experiment to Detect the Global EOR Signature\footnote{http://www.haystack.mit.edu/ast/arrays/Edges} (EDGES) \cite{2008ApJ...676....1B}. The landscape of these precursor-level redshifted 21~cm projects will undoubtedly evolve on the way to the SKA, but the process as a whole will greatly benefit the final design, development, and operation of the SKA and other future experiments as lessons from the successes and failures of these early trailblazers become incorporated into subsequent efforts. Indeed, the current environment mirrors the history of the experimental cosmic microwave background (CMB) community throughout much of the 1980s and 1990s \cite{2003PhRvD..68l3001W} along the way to the successful Wilkinson Microwave Anisotropy Probe (WMAP). It is interesting to note from the CMB experience that one technology does not necessarily emerge as the only alternative and that, for different specific goals within the same broad science objectives, multiple technologies can be exploited fully. 
Early redshifted 21~cm projects that are no longer active, including the VLA EOR Extension Program\footnote{http://www.cfa.harvard.edu/dawn} (VEEP) and the Primeval Structure Telescope\footnote{http://web.phys.cmu.edu/~past} (PAST) \cite{2004astro.ph..4083P}---also called the 21~cm Array (21CMA)---and preliminary experiments with the GMRT \cite{2008MNRAS.tmp..301A} have already helped to refine the course to \textsc{Hi} cosmology by demonstrating that significant foreground and technical challenges do in fact exist and are not easily overcome by existing facilities. \section{Putting It All Together} The primary goal for the growing \textsc{Hi} cosmology community over the next five years is to find out if carefully designed experiments and well devised analysis techniques can mitigate the effects of the terrestrial and astrophysical foregrounds sufficiently well to enable the detection of redshifted 21 cm emission over large volumes of the universe. In order to accomplish this task, new levels of instrumental calibration and ionospheric calibration will need to be achieved and robustly characterized, the statistics of faint source populations will need to be documented since even confusion noise will be well above the \textsc{Hi} signal in most experiments, the spectral coherence function of foregrounds will need to be confirmed on small scales in order to permit the foreground subtraction techniques described above, and techniques for dealing with polarized foregrounds will need to be implemented in new ways. When all of these issues have been successfully addressed and robust detections of \textsc{Hi} during and after the reionization epoch have been achieved, the field will transition from its preliminary \textit{detection} phase to a \textit{characterization} period leading to the SKA and beyond. 
During this time, attention will shift to ensuring that basic theoretical expectations match observations and then building a solid model-based parametrization for connecting theory and observation through a small number of core processes. As a set of goals to focus efforts, it is reasonable to assert that by the end of the next decade (2020) we should have robustly explored the low-frequency radio and digital signal processing technological parameter spaces, have multiple independent detections of anisotropy spectra at $z > 6$, as well as at $z<6$, have significantly constrained the mean (global) spin, kinetic, and ionization histories of the IGM, and know if \textsc{Hi} cosmology ultimately needs to go to the moon (to overcome RFI or the ionosphere). If all of this has been accomplished, we will truly need an SKA-class instrument to make more progress. \begin{theacknowledgments} JDB is supported by NASA through Hubble Fellowship grant HF-01205.01-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. \end{theacknowledgments}
\section{Introduction} The main enhancement of 5G Time Division Duplex (TDD) as compared to 4G is the flexibility in the assignment of the two directions, uplink (UL) and downlink (DL), allowing for a very agile adaptation to the instantaneous traffic variations. Another remarkable feature of 5G {New Radio (NR)} is the support of three generic services with vastly heterogeneous requirements (e.g., in the packet sizes): enhanced mobile broadband (eMBB), massive machine-type communications (mMTC) and ultra-reliable low-latency communications (URLLC). The latter will enable the real time interactive applications with two-way traffic, envisioned with the emergence of the tactile Internet. It is well known from queueing theory that waiting in a single line with two available servers is on average better than waiting in separated lines with one server each \cite{Kleinrock1975}. The intuition is that tasks queued behind a long task must wait for a long time if only one server is available, whereas having a second server reduces the blocking situations. Translating this principle to a {TDD cellular system, where the UL and DL transmissions cannot occur simultaneously, we can interpret that the UL and DL transmissions are waiting in the same queue and the transmission direction (UL/DL) of the wireless link adapts to the direction (UL/DL) of the queued packets. In this paper,} we study the gain of decoupling the UL and the DL directions under heterogeneous {Transmission Time Interval} (TTI) requirements. {The idea of decoupling the access \cite{Boccardi2016} arose in the context of Heterogeneous Networks (HetNets) with the goal of alleviating the UL-DL asymmetry and improving the average throughput. }Since the focus has been on user association and interference, the related literature has mostly used stochastic geometry for the analysis \cite{Smiljkovikj2015}. 
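The queueing-theory intuition above can be checked numerically with the standard M/M/1 and Erlang C (M/M/c) formulas. The sketch below (the unit service rate and the load of 0.8 are illustrative choices) compares two separate single-server lines with one pooled two-server line:

```python
from math import factorial

def mm1_sojourn(lam, mu):
    """Mean sojourn time of an M/M/1 queue (requires lam < mu)."""
    return 1.0 / (mu - lam)

def mmc_sojourn(lam, mu, c):
    """Mean sojourn time of an M/M/c queue via the Erlang C formula."""
    a = lam / mu                      # offered load
    rho = a / c                       # per-server utilization (< 1)
    tail = a ** c / factorial(c) / (1.0 - rho)
    erlang_c = tail / (sum(a ** k / factorial(k) for k in range(c)) + tail)
    return erlang_c / (c * mu - lam) + 1.0 / mu

# Illustrative load: each of two directions offers lam = 0.8 at unit service rate.
t_separate = mm1_sojourn(0.8, 1.0)    # two independent single-server lines
t_pooled = mmc_sojourn(1.6, 1.0, 2)   # one shared line with two servers
# Pooling strictly reduces the mean sojourn time (5.0 vs ~2.78 here).
```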
Several works have looked at the trade-off between time slot length and the switching cost in 4G TDD (see e.g., \cite{ElBamby2015}, \cite{Kerttula2016}). {The switching time is related to the difference in distances and propagation delays among devices in the UL, and the need for a timing advance to account for such differences. The increased base station density results in smaller link distances. Thus, the switching time in 5G, especially indoors, is reduced and cannot be seen as a bottleneck anymore.} Another relevant research question in 4G networks has been the possibility of having a link to more than one transmission point. {Coordinated Multi-Point (CoMP) transmission was introduced in 4G to allow a device to simultaneously transmit and receive data on multiple transmission points (TPs) \cite{Qamar2017}. One of the CoMP techniques, Transmission Point Selection (TPS), entails the device being dynamically scheduled by the most appropriate TP. Besides in-band CoMP, 5G NR introduces the possibility of \textit{multi-connectivity} across bands. While CoMP is typically used to improve the throughput, multi-connectivity mostly serves to improve the availability and the reliability \cite{Ohmann2016}. In any case, the studies have been typically limited to one of the two transmission directions, i.e., finding methods to optimize the DL or the UL. }For example, \cite{Fernandez2017} studies a DL centralized joint cell association and scheduling mechanism for eMBB traffic, based on dynamic cell switching by which users are not always served by the strongest perceived cell. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{users.png} \caption{Traffic patterns of two-way traffic with different TTI requirements: eMBB device and URLLC interactive device. The interactive URLLC device is modeled with some processing time between transmission directions. 
} \label{fig:users} \end{figure} This letter proposes exploiting the extra diversity of the decoupled access to satisfy low latency requirements, and addresses UL and DL in a unified model. Each slot, of possibly different size in the frequency-time plane, can be assigned to either the UL or DL direction, depending on traffic load and received signal conditions. We use a queueing model for the analysis. The reference example with heterogeneous TTI requirements is the mix of eMBB and interactive URLLC devices, {as shown in Figure \ref{fig:users}}. The eMBB device requires long DL transmissions followed by short UL {acknowledgement/negative acknowledgments} (ACKs/NACKs), whereas an interactive process has a stringent latency requirement and sends short UL/DL packets continuously, {with a processing time between each UL and DL packet generation.} \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{decoupled.png} \caption{(a) Coupled access. The URLLC device receives UL and DL from RRH2. The UL packet has to wait until the long DL packet is transmitted. (b) Decoupled access. The URLLC device can receive UL and DL from different RRHs. The UL packet is transmitted to RRH1 while RRH2 transmits a long DL packet} \label{fig:decoupled} \end{figure} The rest of the letter is organized as follows. In Section \ref{sec:system_model} the system model is detailed. In Section \ref{sec:sojourn_time} the sojourn time is analyzed. Section \ref{sec:upper_bound} discusses an upper bound for a priority interactive URLLC user. Conclusions are in Section \ref{sec:conclusions}. \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{queue_coupled_decoupled.png} \caption{Queueing model with flexible TDD and devices with long and short TTI requirements. (a) Standard coupled access. Devices get the UL and DL from the same RRH. All the RRHs in the pool coordinate the transmission direction. (b) Decoupled access. 
Devices may receive the UL from one RRH and the DL from another one. The RRHs do not necessarily coordinate the transmission direction. } \label{fig:queue_coupled_decoupled} \end{figure*} \section{System model} \label{sec:system_model} We consider a TDD dense cell deployment with a central baseband pool connected with a fronthaul to a large number of small cells (Remote Radio Heads RRHs) where the Radio Frequency (RF) functionality is located. In a dense deployment each device is likely to receive a good or a fair signal quality from more than one cell. The small cells are not necessarily coordinated in their transmission directions, but opportunistically serve the traffic. The small cells serve two-way traffic from eMBB and URLLC devices like the ones in Figure \ref{fig:users}. The central unit can quickly decide which device to be served by each cell. {The benchmark is the standard coupled UL/DL in which devices are connected to only one RRH}. The base station must allocate long DL periods for the eMBB. Preemption can still be used for the small DL packets, but not for the UL, and therefore the overall latency requirement of the interactive user is challenged. The alternative is to allow decoupled UL/DL. Each device can connect to a maximum of two RRHs. Besides the primary cell $\mathcal{P}$ with the highest received power, the second one in power, denoted by $\mathcal{S}$, is reachable if it is at most $T$ dB below $\mathcal{P}$. The half-duplex devices can be served from any of the two base stations, e.g., receive the DL from the first base station and the UL from the second one. The 5G NR frame has been designed with the premise of providing the necessary flexibility to support a heterogeneity of services and requirements. The main principle is that strict timing relations are avoided. 
Along the same lines, the TDD DL/UL scheme is much more flexible than in LTE: a slot can contain all DL, all UL, or almost any other DL/UL ratio, and the pattern can be changed in each slot or subframe. The faster TDD turn-around and the self-contained concept, such that data and ACK can be scheduled in the same slot, are enablers for low-latency devices. In spite of these enhancements, we identify a limitation when the TTI requirements are asymmetric, due to the enforced transmission direction. {For example, when a URLLC request arrives in the UL but the primary RRH is busy with a long eMBB transmission, the latency requirement can be met if the request is scheduled in the secondary RRH (see Figure \ref{fig:decoupled}). } We assume a single spectrum with unit bandwidth. The instantaneous SNR is \begin{equation} \gamma(t) = |h(t)|^2 \frac{E_s}{N_0} \end{equation} \noindent where $E_s$ is the average energy per symbol, $N_0$ the noise power spectral density, and $h(t)$ the complex channel envelope. Considering a block Rayleigh fading channel with Gaussian noise, the SNR has an average of $\bar{\gamma}$ and it is distributed as: \begin{equation} f_{\gamma}(\gamma) = \frac{1}{\bar{\gamma}}e^{-\gamma/\bar{\gamma}} \end{equation} {Regarding the interference, the \textit{classical} UL-UL and DL-DL interference has been widely addressed in the context of 4G HetNets \cite{Soret2015} and later widened to 5G networks \cite{Soret2018}. Both device-based (e.g., interference cancellation receivers) and network-based (e.g., transmit power control) interference mitigation techniques are applicable to our scenario. } Nevertheless, the lack of coordination in the transmission directions represents a major challenge in the form of inter-RRH and inter-device interference. A similar problem was addressed in \cite{Popovski2015}, where the notion of interference spin was introduced for the optimization of the two-way scheduling in terms of the sum-rate. 
The framework can be adapted to be used in our scenario, using the latency as the {Key Performance Indicator}. With the focus on the queueing gains, we assume that the inter-RRH interference is ideally cancelled by sending the signal of the DL RRH to the UL RRH, such that it can be subtracted from the received signal. {As for the inter-device interference, the challenging scenario is the reception of a DL signal from the RRH when a nearby device with line of sight (LOS) is transmitting in the UL. The situation is widely improved if there is total or partial signal obstruction between the two devices. This can be favoured by letting the scheduler prioritize the allocation of non-line-of-sight (NLOS) devices, as long as the latency requirements are fulfilled. } Alternatively, a parametric approach is also possible, where the average interference level is mapped to a transmission latency. This reduces the interference modeling to a parameter estimation problem. {Another relevant consideration is that in large cell deployments, the difference in power between the DL signal and the UL signal is significant, but in small indoor cells they are of the same order. Therefore, the cross-interference UL-DL can be treated similarly to the UL-UL and DL-DL interference.} \section{Sojourn time} \label{sec:sojourn_time} {The analysis of the sojourn time is based on a multiclass M/G/s queue like the one in \mbox{Figure \ref{fig:queue_coupled_decoupled}}, where the traffic is separated in two queues for small and large packets}. The sub-indexes $S$ and $L$ refer to short and long TTIs, respectively. {To exploit the flexible TDD, {no queue is dedicated to a given transmission direction.} At each time instant, the transmission direction is imposed by the traffic: DL if the Head Of Line (HOL) packet is DL and UL otherwise. The server models the two-way wireless connection between RRH(s) and devices. 
Taking the reference traffic mix of Figure \ref{fig:users}, the short TTI queue is used by the interactive URLLC devices, whereas the eMBB devices store the long DL transmissions in the long TTI queue, and the UL ACKs/NACKs in the short TTI queue.} In the coupled UL/DL, see \mbox{Fig. \ref{fig:queue_coupled_decoupled} (a)}, each RRH is serving a short and a long TTI queue in both directions. The difference in the decoupled case is that the two queues have access to both RRHs. The queue is conservative: if the system is not empty, then the server is busy (or, in the decoupled case, at least one of the servers is busy). Moreover, there is no loss of work. For a fair comparison, the amount of traffic for the decoupled case is doubled. {Between queues, short packets have strict priority over long packets. The policy within each queue is First In First Out (FIFO). Due to the interactive nature of the URLLC traffic, modeled with a processing time between UL/DL packets, there is no need to have a special coordination between the packets scheduled in each RRH for the decoupled access. In other words, if a device has a UL packet in the queue, the consecutive DL packet will be queued only after the UL is received. This means that there is never a simultaneous transmission of a DL and UL packet from the same device. } Since the arrival process of each queue is Poisson, the total arrival process is also Poisson with rate: \begin{equation} \lambda = \lambda_S + \lambda_L \end{equation} \noindent where the arrival rates are $\lambda_L$ and $\lambda_S$ for the long TTI and short TTI devices, respectively. Similarly, $\mu_L$ and $\mu_S$ denote the service rates. Their inverses are the TTI durations (i.e., the service times), $S_S$ and $S_L$ ($S_S \ll S_L$). The time is discretized, and the minimum scheduling unit is given by the short, fixed TTI duration, $S_S$. 
Long eMBB transmissions use a discrete adaptation scheme with the range of received SNR divided into $M$ consecutive regions, each of which is associated with a transmission rate within the fading region $(\Gamma_{i-1}, \Gamma_i), i = 1, \ldots, M$. The better the channel quality, the higher the transmission rate. Thus, the service rate can take a value from a discrete set \begin{equation} \mu_{L}(t) = \mu_i , \;\;\; \Gamma_{i-1} \leq \gamma(t) < \Gamma_i , \; \; i = 1, \ldots, M \end{equation} For Rayleigh channels, the probability of using the $i$th constellation is {\begin{equation} p_i = \exp\left(-\frac{\Gamma_{i-1}}{\bar{\gamma}}\right) - \exp\left(-\frac{\Gamma_{i}}{\bar{\gamma}}\right) \end{equation}} The first and second moment of the service time are given by \begin{equation} E[S_L] = \frac{1}{\mu_L} = \sum_{i=1}^M \frac{p_i}{\mu_i} \end{equation} \begin{equation} E[S^2_L] = \sum_{i=1}^M \frac{p_i}{\mu^2_i} \end{equation} The interactive URLLC devices do not have the possibility of using a closed loop and the transmission rate is fixed, i.e. \begin{equation} E[S_{S}] = \frac{1}{\mu_S}, \;\;\; E[S^2_S] = \frac{1}{\mu_S^2} \end{equation} To avoid saturation, the overall system utilization must satisfy: \begin{equation} \rho = \rho_L + \rho_S = \lambda_L E[S_L] + \lambda_S E[S_S]< 1 \label{eq:rho} \end{equation} Consider the $i$th data packet arriving at the system with a single server. The sojourn time comprises the queue waiting time, the frame alignment time and the transmission time. If it is a URLLC packet, it must wait in the queue for the residual time until the end of the current packet transmission, plus the service of the URLLC packets already queued ahead of it. If it is an eMBB packet, then it must also wait for the transmission of the URLLC packets arrived during its queueing time. 
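These quantities can be evaluated numerically. The sketch below uses illustrative values (SNR thresholds at 0 and 10~dB, average SNR of 10~dB, long TTIs of 15, 10 and 2 slots from worst to best region, and $\lambda_L = 4\lambda_S$); these are assumptions for the example, not values fixed by the model:

```python
import math

def rayleigh_region_probs(thresholds, gamma_bar):
    """p_i = exp(-Gamma_{i-1}/gbar) - exp(-Gamma_i/gbar),
    with Gamma_0 = 0 and Gamma_M = infinity."""
    edges = [0.0] + list(thresholds) + [math.inf]
    return [math.exp(-edges[i] / gamma_bar) - math.exp(-edges[i + 1] / gamma_bar)
            for i in range(len(edges) - 1)]

# Thresholds of 0 and 10 dB in linear scale are 1 and 10; gamma_bar = 10 (10 dB).
p = rayleigh_region_probs([1.0, 10.0], gamma_bar=10.0)
S_L = [15.0, 10.0, 2.0]                            # long TTIs, worst to best region

E_SL = sum(pi * s for pi, s in zip(p, S_L))        # E[S_L] = sum_i p_i / mu_i
E_SL2 = sum(pi * s ** 2 for pi, s in zip(p, S_L))  # E[S_L^2] = sum_i p_i / mu_i^2

# Stability: rho = lambda_L E[S_L] + lambda_S E[S_S] < 1,
# with lambda_L = 4 lambda_S and E[S_S] = 1 (short TTI as the time unit).
rho_target = 0.8
lam_S = rho_target / (4.0 * E_SL + 1.0)
lam_L = 4.0 * lam_S
```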
\begin{prop} The average sojourn time of the short and long TTI queues in the multiclass M/G/1 with priorities and discretized time is given by \begin{equation} \begin{aligned} E[T^{M/G/1}_S] = \frac{\lambda_L E[S_L^2] + \lambda_S E[S_S^2]}{2 (1-\rho_S)} &+ \frac{1}{\mu_S} + \frac{1}{2 \mu_S}, \\ E[T^{M/G/1}_L] = \frac{\lambda_L E[S_L^2] + \lambda_S E[S_S^2]}{2 (1-\rho) (1-\rho_S)} &+ \frac{1}{\mu_L} + \frac{1}{2 \mu_S} \label{eq:prop1} \end{aligned} \end{equation} \noindent The average sojourn time of the short and long TTI queues in the multiclass M/G/2 with priorities and discretized time is approximated by \begin{equation} \begin{aligned} E[T^{M/G/2}_S] & \approx \frac{\lambda_L E[S_L^2] + \lambda_S E[S_S^2]}{(\lambda_L E[S_L] + \lambda_S E[S_S])^2} \cdot \frac{\rho^{\sqrt{6}-1}}{4\mu_L(1-\rho_S)} \\ + &\frac{1}{\mu_S} + \frac{1}{2 \mu_S}, \\ E[T^{M/G/2}_L] & \approx \frac{\lambda_L E[S_L^2] + \lambda_S E[S_S^2]}{(\lambda_L E[S_L] + \lambda_S E[S_S])^2} \\ &\cdot \frac{\rho^{\sqrt{6}-1}}{4\mu_L(1-\rho)(1-\rho_S)} + \frac{1}{\mu_L} + \frac{1}{2 \mu_S} \label{eq:prop1_2} \end{aligned} \end{equation} \end{prop} \begin{proof} The result for the M/G/1 is a generalization of the Pollaczek-Khinchine formula, by considering the multi-class case, the priority and non-priority classes, and using PASTA and Little's law. The last term, $\frac{1}{2\mu_S}$, accounts for the time discretization or the frame alignment, modeled as a uniform random variable $U$ in $[0, S_S]$. The result for the M/G/2 is a generalization of the approximated result in \cite{Kimura1986} for GI/G/s, \begin{equation} E[W^{M/G/s}] \approx \frac{ 1+C_s^2}{2} \frac{\rho^{\sqrt{2(s+1)}-1}}{s\mu(1-\rho)} \end{equation} \noindent where $C_s^2$ is the coefficient of variation of the service process {and $\mu$ is the average service rate}. 
Then, we use the observation in \cite{Bondi1984}, \begin{equation} \frac{E[W^{M/GI/s/prio}]}{E[W^{M/GI/s/FCFS}]} \approx \frac{E[W^{M/GI/1/prio}]}{E[W^{M/GI/1/FCFS}]} \end{equation} \noindent to consider the priority and non-priority classes. {The ratios $\frac{E[W^{M/GI/1/prio}]}{E[W^{M/GI/1/FCFS}]}$ and $\frac{E[W^{M/GI/s/prio}]}{E[W^{M/GI/s/FCFS}]}$ reflect the effect on the sojourn time of converting from FCFS scheduling to priority scheduling with single and multiple ($s$) servers, respectively. The service rate with multiple servers is $s$ times the one with one server. Based on this observation, the intuition is that the ratio among waiting times remains the same in the single and multiple server systems because the relationships between service time and order of selection from the queues will be the same \cite{Bondi1984}. } \end{proof} The numerical evaluation of Proposition 1 and the comparison with the simulations are shown in Figure \ref{fig:sims_TTI}. {The short TTI is set to 1, and the long TTI takes the values 2, 10 or 15 depending on the channel quality (with the thresholds set to 0 and 10 dB). The total sojourn time is plotted versus the system utilization $\rho$. The arrival rates are obtained from \mbox{equation (\ref{eq:rho})} and fixing $\lambda_L=4 \cdot \lambda_S$. } The devices with long TTI spend more time in the system, due to the longer service time and the low priority. Moreover, the decoupled access reduces the average time, and the improvement is remarkable as the intensity increases, corresponding to cases in which long tasks keep the server busy with a higher probability. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{sims_TTI.png} \caption{{Comparison of the mean sojourn time with long and short TTI requirements versus the intensity $\rho$, using coupled and decoupled access. $S_S = 1$ (fixed), $S_L = 2, 10$ or $15$ (depending on the Rayleigh channel) and $\lambda_L = 4 \cdot \lambda_S$. 
The average sojourn times are obtained from (\ref{eq:prop1}) and (\ref{eq:prop1_2}). Short packets have strict priority over long packets.}} \label{fig:sims_TTI} \end{figure} \section{Upper bound of the cycle time: priority device} \label{sec:upper_bound} We have studied the average gains for URLLC as a homogeneous service with FIFO policy among URLLC packets. Next, we give an upper bound of the latency distribution of the decoupled access by considering the two-way URLLC device with the highest priority. The processing time between transmission directions in the interactive traffic is $t_{proc}$, and the TTI length is $S_{S}$. In the background, there is broadband traffic with a maximum long TTI $S_{L}$ and no strict latency requirement. The interactive device has scheduling priority over any other device. The two-way traffic is decomposed in UL-DL cycles, from the arrival instant of a UL packet to the reception of the following DL packet. Figure \ref{fig:cycle_time} shows an example of the round trip time with decoupled access. The processing time $t_{proc}$ and the transmission time $S_S$ add to the total cycle time. Moreover, both directions might find the RRHs busy, and the user has to wait the residual time $t_{res}$, i.e., the time until one of the RRHs is available. The cycle time is written as \begin{equation} \label{eq:t_cycle} t_{cycle} = 2 \cdot (S_S + t_{res}) + t_{proc} \end{equation} Assuming the same constant $S_S$ and $t_{proc}$ for the coupled and the decoupled access, the only randomness in equation (\ref{eq:t_cycle}) comes from the residual time. In our scenario, $t_{res}$ is confined to the interval $(0, S_{L})$. In the coupled scheme, the residual time in each base station follows a generic distribution $G$ between $0$ and $S_{L}$ (the longest possible TTI duration for eMBB traffic). We call this random variable $X_i \stackrel{}{\sim} G(0,S_{L})$, where $i$ is the RRH id. 
In the decoupled case, the residual time is the minimum of the residual times of the two RRHs, \begin{equation} Y \stackrel{}{\sim} \min(X_1, X_2) \end{equation} \begin{rem} The CDF of the residual time in the decoupled access is given by the minimum of the residual times of the two RRHs, therefore \begin{equation} \begin{aligned} F_Y(y) &= \text{Pr}\{Y \leq y\} =1-\left[1-F_{X_i}(y)\right]^2 \end{aligned} \end{equation} \end{rem} \begin{rem} Regardless of the distribution, the CDF of $Y$ is greater than or equal to the CDF of $X_i$ at every point, i.e., the residual time of the decoupled access is stochastically smaller, and therefore its latency is never worse than in the coupled case. \end{rem} \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{cycle_time.png} \caption{Sketch of the cycle time for a user with the highest priority using decoupled access. The RRH with the shortest residual time is selected for each transmission. } \label{fig:cycle_time} \end{figure} Figure \ref{fig:residual_time_exp} plots the CDF for the exemplary case of an exponential distribution, which corresponds to the residual time of an M/M/1 queue in the coupled access. In this case, the CDF yields \begin{equation} F_Y(y) = 1-e^{-2\lambda y} \;\;\; \forall\, 0\leq y \leq S_{L} \end{equation} \section{Conclusions} \label{sec:conclusions} We have investigated the latency gains of an interactive URLLC device when using flexible TDD and a decoupled UL/DL access. {The critical URLLC traffic} is multiplexed with eMBB traffic, which usually requires much longer TTIs and adaptation to the instantaneous channel quality. The flexible TDD frame in 5G NR is the basis for the analysis. The problem is addressed from a queueing perspective, with the heterogeneous requirements and the Rayleigh channel variations captured in the model. The results show the latency improvements of the decoupled access, which are remarkable when the load increases. An upper bound for a priority user completes the analysis, giving insight into the two-way round trip time.
We have identified and quantified the potential of decoupling the two transmission directions, setting the basis for future work. Next steps include refining the model to capture the impact of scheduling policies beyond FIFO. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{residual_time_exp.png} \caption{Decoupled latency gain for a critical URLLC user. The service time in each RRH is exponentially distributed and confined to $(0,S_L)$.} \label{fig:residual_time_exp} \end{figure} \section*{Acknowledgment} This work has been supported in part by the EU Horizon 2020 project ONE5G (ICT-760809) and the European Research Council (ERC Consolidator Grant no. 648382 WILLOW). The views expressed in this work are those of the authors and do not necessarily represent the ONE5G project view. \bibliographystyle{IEEEtran}
{\it Introduction.--} We recognize, through a decade of research, that entanglement is indispensable to execute quantum information processing (QIP), such as quantum computation and multi-party quantum communication. A persistent challenge is to maintain multipartite entanglement against decoherence. In this Letter, we enlarge the present applicability of a key technique, entanglement distillation \cite{bennett96-1,bennett96-2}, to genuine multipartite entangled states called the W states \cite{coffman00}. The W state, $\tfrac{1}{\sqrt{n}}(|0\ldots 01\rangle + |0\ldots 10\rangle + \cdots + |10\ldots 0\rangle)$, i.e., the equal superposition of ``single-excitation'' basis vectors in $n$ qubits, is a {\em resource} tolerant against decoherence and loss of qubits. It is quite robust \cite{briegel01,dur01}, because it can be compared to a symmetric web consisting of only pairwise entanglement \cite{coffman00,koashi00}. Owing to its permutation symmetry, this state is also a Dicke state, i.e., a simultaneous eigenstate of the total spin operators $\vec{J}^2$ and $J_z$ with eigenvalues $\tfrac{n}{2}(\tfrac{n}{2}+1)$ and $\tfrac{n}{2}-1$, respectively. Thus the W state is often more easily available than the Greenberger-Horne-Zeilinger (GHZ) state. The 3-qubit W state has been created in optical systems \cite{eibl04} and ion traps \cite{roos04}, and can be prepared according to several proposals in coupled quantum dots, critical spin chains, etc. Furthermore, the W state is essentially different from most multipartite entangled states known in QIP applications, in the following sense. Basic {\em software techniques} to circumvent decoherence have been proposed, and implemented experimentally for prototypes. For example, entanglement distillation (or purification) \cite{bennett96-1,bennett96-2} is a tool to extract high-fidelity entangled states from a larger ensemble of noisy ones.
Quantum error correction codes \cite{qecc,gottesman97} are a way to protect entanglement from small numbers of errors. The latter can be formulated in terms of the stabilizer, i.e., as simultaneous eigenspaces of commuting ``multilocal'' Pauli operators. Note that, if a stabilized eigenspace is spanned by a single state vector, that state is often called a stabilizer state \cite{gottesman98} or a graph state (up to local unitaries) \cite{briegel01,schlingemann02}. The Calderbank-Shor-Steane (CSS) code \cite{qecc} is defined by a stabilizer group which consists of only two kinds of generators: multilocal bit-flip $X$ operators and phase-flip $Z$ operators. In fact, beyond the ``bipartite'' distillation protocols \cite{bennett96-1,bennett96-2,deutsch96} for the Bell pairs, direct distillation of multipartite entanglement has so far been possible only for the CSS stabilizer (or two-colorable graph) states, by the protocol in Refs.~\cite{dur03,chen04}, which extended earlier results for GHZ states \cite{murao98,maneva00}. Since the W state is not a stabilizer state, there has been no protocol to distill it directly. {\it Main idea.--} We propose an entanglement distillation protocol that extracts directly a multipartite non-stabilizer (non-graph) state, specifically the 3-qubit W state. Our idea is to apply {\em local} measurements of the stabilizer (whose nonlocal counterpart acting at different parties stabilizes the target state), assuming that the target state belongs to a basis of equivalent entangled states. Note that such a basis, similar to the Bell basis, exists for a wider range of multipartite states than stabilizer states. We need $n$ copies of the input state for the $n$-qubit case, to apply stabilizer measurements locally. In this manner, we can improve the fidelity, and attain the target state as a fixed point of the protocol.
Note that if the target state is not a stabilizer state, {\em local} depolarization or twirling (over a single copy) which keeps the target state invariant seems impossible in general. Thus, we do not make the mixed states diagonal, i.e., a classical mixture of the basis states. This implies that we cannot reduce the task to a ``classical problem'' that consists in extracting entropy from the binary strings of the stabilizer eigenvalues, as is possible by bilateral {\sc cnot} operations in all the known protocols. Nevertheless, by virtue of complementary stabilizer measurements which exchange the amplified components, our protocol works without local depolarization. This feature is favorable for efficiency, and is analogous to the Oxford protocol \cite{deutsch96}. Direct distillation of multipartite entanglement has several potential merits. In the case of CSS states such as the GHZ states, multipartite distillation was shown to be more efficient than the bipartite strategy which consists of distillation of Bell pairs and their connection \cite{murao98,dur03}. Under imperfect operations, the achievable fidelity can be higher \cite{dur03,kruszynska05}. Also, the threshold for distillability may be tighter than that of indirect methods. {\it W basis and its stabilizer group.--} To make our idea explicit, we construct a recurrence protocol for the 3-qubit W state $|W^{000}\rangle = \tfrac{1}{\sqrt{3}}(|001\rangle + |010\rangle + |100\rangle)^{ABC}$, distributed over Alice, Bob, and Carol. We denote the Pauli matrices, operating on the $j$-th qubit at the party $l$, as $X_{j}^{l}$, $Y_{j}^{l}$, and $Z_{j}^{l}$ along with the identity $\openone_{j}^{l}$. To distinguish the non-local tensor structure of the multiple Hilbert spaces controlled by different parties from the local tensor structure at a single party, we use the superscripts $l$ as the non-local indices, and the subscripts $j$ as the local ones.
We define the Hadamard operation by $H = \tfrac{1}{\sqrt{2}}(X + Z)$, the 2-qubit swap operation by $\mbox{\sc swap}: |k k'\rangle \mapsto |k' k\rangle$ $(k,k' = 0,1)$ in the computational basis, and a 3-qubit unitary operation $V$, which leaves $|000\rangle$ and $|111\rangle$ unchanged, but exchanges the others in such a way that $|001\rangle \leftrightarrow |110\rangle$, $|010\rangle \leftrightarrow |101\rangle$, and $|100\rangle \leftrightarrow |011\rangle$. Let us introduce a complete orthonormal basis, called the W basis here, where each basis state $|W^{k_1 k_2 k_3}\rangle$ $(k_1,k_2,k_3 =0,1)$ has entanglement equivalent to the W state $|W^{000}\rangle$. This is because basis states transform into each other by the local unitary operations in Table~\ref{tab:wbasis}. The W basis can be obtained from the computational basis acted on by the 3-qubit unitary operation $U^{\rm W basis} = \tfrac{1}{\sqrt{3}}(\openone^{A}Z^{B}X^{C} + Z^{A}X^{B}\openone^{C} + X^{A}\openone^{B}Z^{C})$, i.e., $|W^{k_1 k_2 k_3}\rangle = U^{\rm W basis}|k_1 k_2 k_3\rangle$. It is convenient to identify the stabilizer for the W basis. To satisfy $K_{j}|W^{k_1 k_2 k_3}\rangle = (-1)^{k_j}|W^{k_1 k_2 k_3}\rangle$, three generators $K_j$ are determined as \begin{align} \begin{split} K_1^{(ABC)} \!\!\!\! &= \tfrac{1}{3}(2 X^{A}X^{B}Z^{C} + 2 Y^{A}Z^{B}Y^{C} + Z^{A}\openone^{B}\openone^{C}) , \\ K_2^{(ABC)} \!\!\!\! &= \tfrac{1}{3}(2 Z^{A}X^{B}X^{C} + 2 Y^{A}Y^{B}Z^{C} + \openone^{A}Z^{B}\openone^{C}) , \\ K_3^{(ABC)} \!\!\!\! &= \tfrac{1}{3}(2 X^{A}Z^{B}X^{C} + 2 Z^{A}Y^{B}Y^{C} + \openone^{A}\openone^{B}Z^{C}) . \end{split} \end{align} We emphasize, by the superscript $(ABC)$, that the stabilizers are not local. Note that later we measure {\em locally} the stabilizers, which will be denoted, e.g. 
for Alice, as $K_1^{(A)} = \tfrac{1}{3}(2 X_{1}^{A}X_{2}^{A}Z_{3}^{A} + 2 Y_{1}^{A}Z_{2}^{A}Y_{3}^{A} + Z_{1}^{A}\openone_{2}^{A}\openone_{3}^{A})$, and all 3 qubits specified by the {\em subscripts} belong to Alice. The stabilizer group consists of eight commuting elements, $\{\openone, K_1, K_2, K_3, K_1 K_2, K_1 K_3, K_2 K_3, K_1 K_2 K_3 \}$, where \begin{align} \begin{split} K_1 K_2 &= \tfrac{1}{3}(2 \openone_{1}X_{2}X_{3} + 2 Y_{1}\openone_{2}Y_{3} - Z_{1} Z_{2} \openone_{3}) , \\ K_1 K_3 &= \tfrac{1}{3}(2 X_{1}X_{2}\openone_{3} + 2 \openone_{1}Y_{2}Y_{3} - Z_{1}\openone_{2}Z_{3}) , \\ K_2 K_3 &= \tfrac{1}{3}(2 X_{1}\openone_{2}X_{3} + 2 Y_{1}Y_{2}\openone_{3} - \openone_{1}Z_{2}Z_{3}) , \\ K_1 K_2 K_3 &= -Z_{1}Z_{2}Z_{3}. \\ \end{split} \end{align} \begin{table}[t] \begin{tabular}{c|c|c} $k_1 k_2 k_3$ & $|W^{k_1 k_2 k_3}\rangle$ & $|W^{000}\rangle \mapsto |W^{k_1 k_2 k_3}\rangle$ \\ \hline $000$ & $\tfrac{1}{\sqrt{3}}(|001\rangle + |010\rangle + |100\rangle)$& $\openone^{A}\openone^{B}\openone^{C}$ \\ $001$ & $\tfrac{1}{\sqrt{3}}(|000\rangle + |011\rangle - |101\rangle)$ & $Z^{A}\openone^{B}X^{C}$ \\ $010$ & $\tfrac{1}{\sqrt{3}}(-|011\rangle + |000\rangle + |110\rangle)$ & $\openone^{A}X^{B}Z^{C}$ \\ $011$ & $\tfrac{1}{\sqrt{3}}(-|010\rangle + |001\rangle - |111\rangle)$ & $Z^{A}X^{B}(-iY^{C})$ \\ $100$ & $\tfrac{1}{\sqrt{3}}(|101\rangle - |110\rangle + |000\rangle)$ & $X^{A}Z^{B}\openone^{C}$ \\ $101$ & $\tfrac{1}{\sqrt{3}}(|100\rangle - |111\rangle - |001\rangle)$ & $(-iY^{A})Z^{B}X^{C}$ \\ $110$ & $\tfrac{1}{\sqrt{3}}(-|111\rangle - |100\rangle + |010\rangle)$ & $X^{A}(-iY^{B})Z^{C}$ \\ $111$ & $\tfrac{1}{\sqrt{3}}(-|110\rangle - |101\rangle - |011\rangle)$ & $(-iY^{A})(-iY^{B})(-iY^{C})$ \end{tabular} \caption{The 3-qubit W basis. 
Local unitary operations that map $|W^{000}\rangle$ to $|W^{k_1 k_2 k_3}\rangle$ are shown in the third column.} \label{tab:wbasis} \end{table} {\it W state distillation protocol.--} Our protocol consists of two subprotocols: ${\mathscr P}$ and its dual $\bar{\mathscr P}$. In both, three input copies are mapped into one output copy to define a simple recurrence. Generally, any mapping to a smaller subsystem can be considered. We assume, without loss of generality, that the $|W^{000}\rangle\langle W^{000}|$ component of the input mixed states $\rho$ is the largest among the diagonal elements in the W basis (otherwise we can relabel the computational basis by a local unitary operation in Table~\ref{tab:wbasis}). We define the fidelity $F$ of $\rho$ by $F = \langle W^{000}|\rho |W^{000}\rangle$. {\it Protocol ${\mathscr P}$:} 1) Every party ($l$ = A, B, or C) applies the local measurement of two stabilizers $K_1^{(l)} K_2^{(l)}$ and $K_1^{(l)} K_3^{(l)}$ over the input state $\gamma$ (= $\rho_{\rm in}^{\otimes 3}$) of three copies, and obtains the 2-bit outcomes ${\mathbf m}^{(l)} = [m_1^{(l)},m_2^{(l)}]$. 2) Informing their outcomes by two-way classical communication, parties select coincident outcomes ${\mathbf m}^{(A)} = {\mathbf m}^{(B)} = {\mathbf m}^{(C)} = [0,1] \; (\equiv {\bf 1})$, $[1,0] \;(\equiv {\bf 2})$, or $[1,1] \;(\equiv {\bf 3})$. Otherwise they discard three copies. 3) For the coincident outcomes, each party transforms {\em locally} her/his state into a 1-qubit subsystem by the following ``majority rule''. If ${\mathbf m}^{(l)} = {\bf 1}$, $P_{\bf 1}^{l}: |W_{001}\rangle^{l} \mapsto |0\rangle^{l}$, $|W_{110}\rangle^{l} \mapsto |1\rangle^{l}$; if ${\mathbf m}^{(l)} = {\bf 2}$, $P_{\bf 2}^{l}: |W_{010}\rangle^{l} \mapsto |0\rangle^{l}$, $|W_{101}\rangle^{l} \mapsto |1\rangle^{l}$; and if ${\mathbf m}^{(l)} = {\bf 3}$, $P_{\bf 3}^{l}: |W_{100}\rangle^{l} \mapsto |0\rangle^{l}$, $|W_{011}\rangle^{l} \mapsto |1\rangle^{l}$. 
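The relations $K_j|W^{k_1 k_2 k_3}\rangle = (-1)^{k_j}|W^{k_1 k_2 k_3}\rangle$ underlying these stabilizer measurements are easy to verify numerically; a minimal numpy sketch (purely illustrative, not part of the protocol) checking that $K_1$ stabilizes $|W^{000}\rangle$ with eigenvalue $+1$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# K_1 = (1/3)(2 X^A X^B Z^C + 2 Y^A Z^B Y^C + Z^A 1^B 1^C), as defined in the text
K1 = (2 * kron3(X, X, Z) + 2 * kron3(Y, Z, Y) + kron3(Z, I2, I2)) / 3

# |W^{000}> = (|001> + |010> + |100>)/sqrt(3), qubit order A, B, C
w = np.zeros(8, dtype=complex)
w[[0b001, 0b010, 0b100]] = 1 / np.sqrt(3)

print(np.allclose(K1 @ w, w))  # True: K_1 |W^{000}> = +|W^{000}>
```

The same check applies to $K_2$, $K_3$ and to the other seven W-basis states, with the signs $(-1)^{k_j}$ read off the basis labels.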
Mathematically, the stabilizer measurement $M_{{\mathbf m}^{(l)}}^{(l)}$ of the party $l$ is written by, \begin{equation} \label{eq:m} M_{{\mathbf m}^{(l)}}^{(l)} \!\!=\!\! \tfrac{1}{4} \left(\openone + (-1)^{m_1^{(l)}} K_1^{(l)}K_2^{(l)}\right)\!\!\! \left(\openone + (-1)^{m_2^{(l)}} K_1^{(l)}K_3^{(l)}\right), \end{equation} with the completeness condition $\sum_{{\mathbf m}^{(l)}} M_{{\mathbf m}^{(l)}}^{(l)\dag} M_{{\mathbf m}^{(l)}}^{(l)} = \openone$. Note that $M_{{\mathbf m}^{(l)}}^{(l)}$ acts on 3 qubits of the party $l$, and it is a projector to the {\em local} W basis vectors, for example if ${\mathbf m}^{(l)}={\bf 1}$, $M_{\bf 1}^{(l)}= |W_{001}\rangle^{l}\langle W_{001}| + |W_{110}\rangle^{l}\langle W_{110}|$. By the selection of desired coincident outcomes ${\bf m}$, $\mathscr P$ maps the input state $\gamma$ (= $\rho_{\rm in}^{\otimes 3}$) to the one-copy state $\rho'$ given by \begin{equation} \label{eq:p} \rho' = \!\!\!\!\! \sum_{{\mathbf m}={\bf 1,2,3}} \!\!\!\!\! PM_{\mathbf m}^{(A)}PM_{\mathbf m}^{(B)}PM_{\mathbf m}^{(C)} \gamma PM_{\mathbf m}^{(A)\dag}PM_{\mathbf m}^{(B)\dag} PM_{\mathbf m}^{(C)\dag}, \end{equation} with the success probability ${\rm tr}(\rho')$. We normalize the state as $\rho_{\rm out}=\rho'/{\rm tr} (\rho')$ for the next recurrence step. Before describing the whole protocol including $\bar{\mathscr P}$, we illustrate analytically how ${\mathscr P}$ works. Suppose the perfect W state is distributed to three parties, but suffers typical decoherence as described by the local dephasing channel ${\mathcal D}^{l}(\rho) = \tfrac{1}{2}((1+\mu)\rho + (1-\mu)Z^{l}\rho Z^{l} )$ with the same channel reliability $\mu \in [0,1]$. Three parties initially share a noisy W state $\sigma(F) = {\mathcal D}^{A}{\mathcal D}^{B}{\mathcal D}^{C} (|W^{000}\rangle\langle W^{000}|)$, which is not diagonal in the W basis, but is parametrized uniquely by $F = \tfrac{1}{3}(1+ 2 \mu^2) \in [\tfrac{1}{3},1]$. 
A straightforward calculation shows that ${\mathscr P}$ maps three copies $\sigma(F)^{\otimes 3}$ to one copy $\sigma(F')$ with the higher fidelity $F'$ such that \begin{equation} \label{eq:fid_lodephase} F' = \frac{\tfrac{25}{81}F^3 + \tfrac{1}{18}F(1-F)^2 + \tfrac{1}{324}(1-F)^3}{\tfrac{25}{81}F^3 + \tfrac{1}{9}F^2 (1-F) + \tfrac{2}{27}F(1-F)^2 + \tfrac{17}{162}(1-F)^3}. \end{equation} Eq.~(\ref{eq:fid_lodephase}) suggests a recurrence seen in the distillation curve of Fig.~\ref{fig:lodephase}. Since $F$ lies in $[\tfrac{1}{3},1]$, we prove analytically that $F=1$, corresponding to the W state, is the {\em attractive} fixed point and $F=\tfrac{1}{3}$ is the repulsive one. We find that for any locally dephased W state except $F=\tfrac{1}{3}$, ${\mathscr P}$ restores it with a few steps. Indeed, this threshold coincides with a {\em necessary} condition for distillability by the partial transposition criterion \cite{peres96,dur99}. Since the mixed state ${\mathcal T^{l}} (\sigma(F))$, partially transposed for any party $l$ (i.e., bipartition), has a negative eigenvalue only for $F > \tfrac{1}{3}$, there is no chance to distill entanglement in $F=\tfrac{1}{3}$. In Fig.~\ref{fig:lodephase}, the yield (i.e., the ratio of the number of surviving copies to that of used copies) of ${\mathscr P}$ after $F$ reaches at least 0.99 is also shown. The ``stairs'' of the yield come from the difference in the number of recurrence steps. \begin{figure}[t] \begin{minipage}{4.5cm} \begin{center} \includegraphics[width=4.7cm,clip]{fig1_1_curvelodephasev2.eps} \end{center} \end{minipage} \begin{minipage}{4.0cm} \begin{center} \includegraphics[width=4.0cm,clip]{fig1_2_yieldlodephase3v4.eps} \end{center} \end{minipage} \caption{The distillation curve (left) of ${\mathscr P}$ for locally dephased W states, and the yield (right) after ${\mathscr P}$ achieves $F \geq 0.99$. Note that the region of $F \in [0,\tfrac{1}{3})$ is not physical. 
} \label{fig:lodephase} \end{figure} For more general noises, we need $\bar{\mathscr P}$, which has a similar structure to ${\mathscr P}$ but employs complementary observables $\bar{K}^{(l)}_{j} = \Lambda^{l \dag} K^{(l)}_{j} \Lambda^{l}$, where $\Lambda^l = H_1^l H_2^l H_3^l \mbox{\sc swap}_{13}^{l}$. Two measurement bases $|W_{k_1 k_2 k_3}\rangle$ in ${\mathscr P}$ and $|\bar{W}_{k'_1 k'_2 k'_3}\rangle = \Lambda^{\dag}|W_{k'_1 k'_2 k'_3}\rangle$ in $\bar{\mathscr P}$ are complementary (also called {\em mutually unbiased} \cite{wootters89}), i.e., $|\langle W_{k_1 k_2 k_3}|\bar{W}_{k'_1 k'_2 k'_3}\rangle|^2 = \tfrac{1}{8}$. {\it Dual Protocol $\bar{\mathscr P}$:} 0) Every party ($l$ = A, B, or C) applies $V^l$ to change the {\em local} computational basis. The input $\gamma$ is modified to $\bar{\gamma} = V^A V^B V^C \rho_{\rm in}^{\otimes 3} V^{A \dag} V^{B \dag} V^{C \dag}$. 1) She/he applies the local measurement of two dual stabilizers $\bar{K}_1^{(l)}\bar{K}_2^{(l)}$ and $\bar{K}_1^{(l)}\bar{K}_3^{(l)}$ on $\bar{\gamma}$, and obtains the 2-bit outcomes $\bar{\mathbf m}^{(l)}$. 2) By two-way classical communication, parties select the coincident outcome $\bar{\mathbf m}^{(A)}=\bar{\mathbf m}^{(B)} = \bar{\mathbf m}^{(C)} = [0,0] \;(\equiv {\bf 0})$. 3) Each party transforms locally the state into a 1-qubit subsystem in the manner opposite to ${\mathscr P}$; $\bar{P}_{\bf 0}^l : |\bar{W}_{000}\rangle^l \mapsto H |1\rangle^l$, $|\bar{W}_{111}\rangle^l \mapsto H |0\rangle^l$. In brief, in $\bar{\mathscr P}$, we replace all operators in Eqs.~(\ref{eq:m}) and (\ref{eq:p}) by their ``barred'' dual operators. The complete distillation procedure for general mixed states consists of the sequential application of either ${\mathscr P}$ or $\bar{\mathscr P}$ where, in every recurrence step, we select one of the subprotocols which gives the higher fidelity in the output.
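Returning to the locally dephased inputs, the recursion of Eq.~(\ref{eq:fid_lodephase}) can be iterated directly; a short sketch illustrating that $F=1$ is the attractive fixed point while $F=\tfrac{1}{3}$ is a (repulsive) fixed point:

```python
def next_fidelity(F):
    """One step of protocol P on a locally dephased W state, Eq. (fid_lodephase)."""
    g = 1.0 - F
    num = 25/81 * F**3 + 1/18 * F * g**2 + 1/324 * g**3
    den = 25/81 * F**3 + 1/9 * F**2 * g + 2/27 * F * g**2 + 17/162 * g**3
    return num / den

F = 0.40                      # any F > 1/3 flows toward the W state
for _ in range(30):
    F = next_fidelity(F)
print(F > 0.999)                              # True: F = 1 is attractive
print(abs(next_fidelity(1/3) - 1/3) < 1e-9)   # True: F = 1/3 is a fixed point
```

Near $F=1$ the map contracts the infidelity by roughly a factor $\tfrac{9}{25}$ per step, consistent with the few-step convergence noted above.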
There seems to be no simple formula for the sequence, but we can determine the sequence if we know the initial input state $\rho$ before distillation, e.g., by state tomography. Also note that although this combination of ${\mathscr P}$ and $\bar{\mathscr P}$ can numerically reach the region where $F > 0.999$, precisely speaking $F=1$ is the fixed point of ${\mathscr P}$ but is not that of $\bar{\mathscr P}$. \begin{figure}[t] \begin{minipage}{3.9cm} \begin{center} \includegraphics[width=3.9cm,clip]{fig2_1_lodephasefid2.eps} \end{center} \end{minipage} \begin{minipage}{4.6cm} \begin{center} \includegraphics[width=4.6cm,clip]{fig2_2_lodepolarizefid3.eps} \end{center} \end{minipage} \caption{Noisy W states subjected to the local dephasing (left) or local depolarizing (right) channel can be retrieved by ${\mathscr P}$ and $\bar{\mathscr P}$, if $F$ is initially larger than $\tfrac{1}{3}$ or $0.48$, respectively. Note that this is actually accomplished by ${\mathscr P}$ alone for the local dephasing case.} \label{fig:fid_lonoise} \end{figure} Hereafter, we show that, under the sequential application of ${\mathscr P}$ and $\bar{\mathscr P}$, the W state can be distilled from arbitrary mixed states if, roughly speaking, $F$ is sufficiently large. First, consider another typical decoherence such as the local depolarizing channel (white noise) ${\mathcal E}^{l}(\rho) = \mu\rho + \tfrac{1-\mu}{4}(\rho + X^{l}\rho X^{l} + Y^{l}\rho Y^{l} + Z^{l}\rho Z^{l})$, and the input state $\rho_{\rm in}(F)={\mathcal E}^{A}{\mathcal E}^{B}{\mathcal E}^{C} (|W^{000}\rangle\langle W^{000}|)$ with $F=\tfrac{1}{24}(3+\mu + 9\mu^2 + 11\mu^3) \in [\tfrac{1}{8},1]$. Although the locally depolarized W state does not remain in the same form under our protocol, we can still determine a threshold for distillability.
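The closed form $F=\tfrac{1}{24}(3+\mu+9\mu^2+11\mu^3)$ can be cross-checked by applying ${\mathcal E}$ to each qubit of $|W^{000}\rangle\langle W^{000}|$ numerically; a small numpy sketch (the value $\mu=0.7$ is an arbitrary test point, not from the text):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def lift(op, qubit, n=3):
    """Embed a single-qubit operator at position `qubit` of an n-qubit register."""
    full = np.array([[1.0 + 0j]])
    for k in range(n):
        full = np.kron(full, op if k == qubit else I2)
    return full

def depolarize(rho, mu, qubit):
    """E(rho) = mu*rho + (1-mu)/4 * (rho + X rho X + Y rho Y + Z rho Z), locally."""
    out = mu * rho
    for P in (I2, X, Y, Z):
        PL = lift(P, qubit)
        out = out + (1 - mu) / 4 * PL @ rho @ PL.conj().T
    return out

w = np.zeros(8, dtype=complex)
w[[0b001, 0b010, 0b100]] = 1 / np.sqrt(3)
rho = np.outer(w, w.conj())

mu = 0.7
for q in range(3):
    rho = depolarize(rho, mu, q)

F_numeric = float(np.real(w.conj() @ rho @ w))
F_formula = (3 + mu + 9 * mu**2 + 11 * mu**3) / 24
print(abs(F_numeric - F_formula) < 1e-12)  # True
```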
As seen in Fig.~\ref{fig:fid_lonoise}, if initially $F \gtrsim 0.48$, we distill the W state, and otherwise we have an undistillable mixed state $\chi = \frac{1}{2}(|\varphi\rangle\langle\varphi | + |\varphi' \rangle\langle\varphi' |)$, where $|\varphi \rangle = \tfrac{1}{2} (|001\rangle + |010\rangle + |100\rangle - |111\rangle)$ and $|\varphi'\rangle = \tfrac{1}{2} (-|000\rangle + |011\rangle + |101\rangle + |110\rangle)$, as another fixed point with $F=\tfrac{3}{8}$. This threshold is stricter than the necessary condition $F \gtrsim 0.36$ by the partial transpose criterion \cite{peres96,dur99}. Note that the progress of the protocol is not described by a single parameter, and $F$ is not monotonic any more. A nonmonotonic behavior of $F$ was also seen in the bipartite distillation without depolarization \cite{deutsch96}. However, as a long-term behavior, $F$ is increasing for the distillable cases and can be used for visualization of the progress. \begin{figure}[t] \begin{minipage}{3.7cm} \begin{center} \includegraphics[width=3.7cm,clip]{fig3_1_averagef070samp10000c1.eps} \end{center} \end{minipage} \begin{minipage}{4.8cm} \begin{center} \includegraphics[width=4.8cm,clip]{fig3_2_averagef050samp10000c1.eps} \end{center} \end{minipage} \caption{The average fidelity and its standard deviation followed toward each fixed point, for (in total) 10,000 randomly generated initial mixed states with $F \in 0.70 \pm 0.01$ (left) or $0.50 \pm 0.01$ (right).} \label{fig:random} \end{figure} Next, we consider randomly generated input mixed states (under the Hilbert-Schmidt measure \cite{zyczkowski01}), and will observe numerically hierarchical distillations not only to the 3-qubit W state, but also to a 2-qubit Bell pair. This is surprising, since it implies that we can distill a non-stabilizer state and a stabilizer state by the same protocol. 
In Fig.~\ref{fig:random}, for 10,000 random mixed states with the initial fidelity $F$ fixed close to 0.70 or 0.50, we display the average fidelity and its standard deviation for each set of samples reaching the same fixed point. When $F$ is sufficiently large, such as $F\simeq 0.70$, the branch to the W state is dominant. More than 99 percent of the states follow it, and a few residual samples are transient or drop to the other fixed points mentioned below. As $F$ becomes smaller, there appear three hierarchical branches (i) to the W state, (ii) to the 2-qubit Bell state ($F=\tfrac{2}{3}$) shared by two parties out of three, i.e., $\tfrac{1}{\sqrt{2}}(|01\rangle +|10\rangle)|0\rangle^{{l_1}{l_2}{l_3}}$, where $(l_1,l_2,l_3)$ is a permutation of (A, B, C), or (iii) to the undistillable state $\chi$ (up to local unitaries). Depending on the initial entanglement, a branch is selected by the protocol. This hierarchy reflects the ``onion-like'' geometry among different kinds of entanglement in 3-qubit mixed states \cite{acin01}. As $F$ approaches the ``critical'' region ($F \simeq 0.50$ in Fig.~\ref{fig:random}) for distillability, the characteristic number of steps toward every fixed point, as well as the fluctuation of the progress across different samples, becomes larger. The fraction of the states which follow lower branches also increases. {\it Conclusion.--} Identifying a complementary (mutually unbiased) pair of stabilizer measurements, which replaces the conventional bilateral {\sc cnot}, as a key local operation for distillation, we have proposed a 3-qubit W state distillation protocol. To our knowledge, it is the first protocol to distill directly multipartite non-stabilizer states. An extension to the $n$-qubit W state should be straightforward, introducing the general W basis by $U^{\rm W basis} = \tfrac{1}{\sqrt{n}}\sum_{l=1}^{n} Z^{1}\cdots Z^{l-1} X^{l} \openone^{l+1} \cdots \openone^{n}$.
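As a sanity check on this generalization, a short numpy sketch (illustrative only) confirming that $U^{\rm W basis}|0\ldots 0\rangle$ has support exactly on the single-excitation states, each with amplitude $1/\sqrt{n}$, shown here for $n=4$:

```python
import numpy as np

I2, X, Z = np.eye(2), np.array([[0, 1], [1, 0]]), np.diag([1, -1])

def w_basis_unitary(n):
    """U^{W basis} = (1/sqrt(n)) sum_l Z^1 ... Z^{l-1} X^l 1^{l+1} ... 1^n."""
    U = np.zeros((2**n, 2**n))
    for l in range(n):
        term = np.array([[1.0]])
        for k in range(n):
            term = np.kron(term, Z if k < l else (X if k == l else I2))
        U = U + term
    return U / np.sqrt(n)

n = 4
zero = np.zeros(2**n)
zero[0] = 1.0
w = w_basis_unitary(n) @ zero   # U applied to |0...0>
# The nonzero amplitudes sit on indices 1, 2, 4, 8 (the single-excitation states)
print(np.nonzero(w)[0], w[np.nonzero(w)][0])
```

Each $Z$ factor acts trivially on $|0\rangle$, so the $l$-th term simply flips the $l$-th qubit, reproducing the $n$-qubit W state.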
Since our protocol distills a non-stabilizer state and stabilizer states on the same footing, our scheme may lead to a unified construction of direct distillation protocols for multipartite entanglement. It remains open whether a hashing protocol can be constructed for non-stabilizer states without the local depolarization that would make density matrices classical mixtures of pure states. Finally, since quantum computers in which only stabilizer states are generated can be efficiently simulated by classical computers \cite{gottesman98}, the appearance of non-stabilizer states, such as the W state, is necessary to exploit the power (universality) of quantum computers. Thus, the technique to purify such states, beyond the ``classical'' parity check (exclusive {\sc or} via {\sc cnot}) for stabilizer states, might also give a new perspective on fault-tolerant quantum computation (cf. Ref.~\cite{bravyi05}). We thank J. Calsamiglia, W. D\"{u}r, O. G\"{u}hne, M. Hein, C. Kruszynska, and J. Shimamura for helpful discussions. This work was supported in part by the Japan Society for the Promotion of Science (JSPS), the Austrian Science Foundation (FWF), and the European Union (IST-2001-38877, -39227, SCALA).
\section{Introduction} \label{sec:intro} \input{010_intro} \section{Hexagonal flanks} \label{sec:trio} \input{020_trio} \section{Satellite triangles} \label{sec:satellite} \input{030_satellite} \section{A Web of Confocal Parabolas} \label{sec:parabolas} \input{040_parabolas} \section{Controlled by Poncelet} \label{sec:poncelet} \input{050_poncelet} \section{Videos \& Questions} \label{sec:videos} \input{100_videos} \section*{Acknowledgements} \input{110_ack} \subsection*{Summary of the Results} \begin{itemize} \item The second isodynamic points \cite{mw} of $\mathcal{T}$ and the flanks are common; \item A contiguous, infinite grid of regular hexagons can be constructed, see \cref{fig:grid}; all flanks in the grid have a common second isodynamic point and conserve a special quantity; \item Sequences of hexagon vertices are crisscrossed by a web of confocal parabolas with only 3 distinct foci; \item Their foci and directrix intersections are vertices of 2 new equilateral triangles; \item Phenomena are described when the reference triangle in \cref{fig:basic} is drawn from two special Poncelet triangle families. \end{itemize} \subsection*{Related Work} In \cite{cerin2002-flanks,lamoen2004-flank} properties of ``flank'' triangles located between squares and/or rectangles erected on a triangle's sides are studied. Works \cite{dosa2007-ext,hoehn2001-ext} study triangle centers (taken as triples or not) of analogues of the intouch and/or extouch triangles erected upon each side of a reference triangle. In \cite{fukuta1996,fukuta1997,stachel2002,cerin1998} a construction related to Napoleon's theorem is described which associates a regular hexagon to a generic triangle. \subsection*{Article Structure} In \cref{sec:trio} we describe properties of the trio of flank triangles. In \cref{sec:satellite} we fix a central regular hexagon and consider properties of 6 ``satellite'' triangles built around it; this implies a contiguous grid can be iteratively built.
In \cref{sec:parabolas} we show said grid is interwoven by three groups of confocal parabolas. In \cref{sec:poncelet} we show two examples of grids controlled by Poncelet poristic triangles. \cref{sec:videos} provides a table of videos illustrating some results as well as a list of open questions. In \cref{app:eqns} we provide computer-usable expressions for some objects described in \cref{sec:parabolas}. \subsection{Zero-Area Flanks} Consider a family of triangles $ABC$ where $A,B$ are fixed and $C$ is free. Let $F_c$ denote the flank triangle between the regular hexagons erected upon $AC$ and $CB$. As shown in \cref{fig:sliver}: \begin{observation} $F_c$ will be zero-area if $C$ subtends a $120^\circ$ angle, i.e., it lies on a circular arc centered on the centroid $O$ of an equilateral erected upon $AB$ and with radius $|OA|$. \end{observation} \begin{figure} \centering \includegraphics[width=\textwidth] {pics/170_sliver.pdf} \caption{The locus of vertex $C$ such that it subtends a $120^\circ$ angle, i.e., the $C$-flank is degenerate, is a circular arc (dashed blue) centered on $O$, the centroid of an equilateral (dashed brown) erected upon $AB$, with radius $|OA|$ (a mirror arc corresponding to the top equilateral centered on $O'$ is also shown). The left (resp. right) picture shows $C$ in two distinct positions.} \label{fig:sliver} \end{figure} Consider the construction for the two flank triangles $F_1$ and $F_2$ shown in \cref{fig:self-inter}, namely, departing from $\mathcal{T}=ABC$, erect regular hexagons $\H_1$ and $\H_2$ on sides $AC$ and $BA$, respectively. Consider a first flank triangle $F_1$ between $\H_1$ and $\H_2$. As shown in the figure, erect a third regular hexagon $\H_3$ on the unused side of $F_1$, and let a second flank triangle $F_2$ sit between $\H_2$ and $\H_3$. While holding $B$ and $C$ fixed, there are positions for $A$ such that $F_2$ has positive, zero, or negative signed area.
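The Observation above is the inscribed-angle theorem in disguise and is easy to check numerically; a minimal sketch with assumed illustrative coordinates (the placement of $A$, $B$ is our choice, not from the text):

```python
import math

# Place A=(0,0), B=(1,0), with the equilateral erected on the side of AB
# opposite to C (below the x-axis); O is its centroid.
A, B = (0.0, 0.0), (1.0, 0.0)
O = (0.5, -math.sqrt(3) / 6)
r = math.dist(O, A)              # = 1/sqrt(3) for |AB| = 1

def angle_at(C):
    """Angle ACB in degrees."""
    ax, ay = A[0] - C[0], A[1] - C[1]
    bx, by = B[0] - C[0], B[1] - C[1]
    cosang = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.degrees(math.acos(cosang))

# Sample C along the part of the circle (O, r) lying above AB: t in (30, 150) deg
for t in (45, 80, 135):
    C = (O[0] + r * math.cos(math.radians(t)),
         O[1] + r * math.sin(math.radians(t)))
    print(round(angle_at(C), 6))   # 120.0 at every sample point
```

Since the chord $AB$ subtends a $120^\circ$ central angle at $O$, every point of the arc on the far side of $AB$ sees $AB$ under $120^\circ$.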
Referring to \cref{fig:zero-f2}: \begin{proposition} $F_2$ will have positive (resp. negative) area if $A$ is exterior (resp. interior) to the circumcircle $\mathcal{C}$ of an equilateral triangle whose base is the segment joining $B$ and the midpoint of $BC$. In particular, the vertices of $F_2$ become collinear if $A$ is on $\mathcal{C}$. \end{proposition} \begin{figure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[trim=10 0 80 20,clip,width=1\textwidth]{pics/080_self_inter_a.pdf} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[trim=80 0 70 20,clip,width=1\textwidth]{pics/080_self_inter_b.pdf} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[trim=30 0 50 20,clip,width=1\textwidth]{pics/080_self_inter_c.pdf} \end{subfigure} \caption{For certain positions of $A$ (while keeping $B$ and $C$ stationary), flank triangle $F_2$ will be non-eversed (left), degenerate (middle), or eversed (right).} \label{fig:self-inter} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{pics/090_zero_f2.pdf} \caption{With $B$ and $C$ fixed, the locus of $A$ such that the area of $F_2$ in \cref{fig:self-inter} is zero is the circumcircle (dashed red) of an equilateral (brown) with base the segment joining $B$ and the midpoint of $BC$. Two positions of $A$ are shown (left, right).} \label{fig:zero-f2} \end{figure} \subsection{Second-level satellites} Referring to \cref{fig:second-level}, consider the 6 satellite flanks $F_i$ surrounding a central hexagon which are obtained as above by sequentially erecting 6 regular hexagons $\H_i$. Consider 6 ``2nd-level'' flanks $F_i'$ nestled between the $\H_i$. Let $\H_k'$ denote a hexagon whose vertices are the $X_k$ of the $F_i'$. \begin{proposition} For all $X_k$ on the Euler line, $\H_k'$ has invariant internal angles. \end{proposition} We thank A.
Akopyan for the following argument \cite{akopyan2021-private}: \begin{proof} This follows from the fact that their area is the sum of squares of distances from the reflected point $A'$ to vertices $Q_i$ of the central hexagon. \end{proof} \begin{figure} \centering \includegraphics[trim=120 0 150 0,clip,width=.8\textwidth,frame]{pics/070_second_level.pdf} \caption{Shown are 5 hexagons $\H_i'$ (red) whose vertices are the $X_k$ of second-level satellites (yellow). If $X_k$ is on the Euler line, the $\H_i'$ have identical internal angles.} \label{fig:second-level} \end{figure} \subsection{Properties of the Second Fermat Point} The second Fermat point $X_{14}$ of a triangle is the isogonal conjugate of the second isodynamic point $X_{16}$ \cite[Fermat Points]{mw}. Let $\H=ABCDEF$ be a regular hexagon with centroid $O$, and let $P$ be a point anywhere. Define six ``inner'' triangles $\mathcal{T}_1=ABP$, $\mathcal{T}_2=BCP$, ..., $\mathcal{T}_6=FAP$. Referring to \cref{fig:x14}(left): \begin{proposition} The $X_{14}$ of the $\mathcal{T}_i$ will lie on $PO$. \label{prop:x14-inner} \end{proposition} Let $\mathcal{T}_i'$ be the six triangles with (i) the base a side of $\H$, and (ii) the apex $P_i'$ the reflection of $P$ about said side. Assume in this case $P$ is interior to $\H$. Referring to \cref{fig:x14}(right): \begin{proposition} The $X_{14}$ of the $\mathcal{T}_i'$ lie on a rectangular hyperbola (green) concentric with $\H$.
\label{prop:x14-outer} \end{proposition} \begin{figure} \begin{subfigure}[m]{0.49\textwidth} \includegraphics[trim=300 125 100 25,clip,width=\textwidth]{pics/210_x14_inner.pdf} \end{subfigure} \begin{subfigure}[m]{0.49\textwidth} \includegraphics[trim=200 0 0 0,clip,width=\textwidth]{pics/220_x14_outer.pdf} \end{subfigure} \caption{\textbf{Left:} Let $\H=ABCDEF$ be a regular hexagon, and $P$ a point. The second Fermat points $X_{14}$ of the ``inner'' triangles $\mathcal{T}_i\in\{ABP, BCP,\ldots FAP\}$ are collinear with $P$ and the centroid $O$ of $\H$. \textbf{Right:} Let $\mathcal{T}_i'$ be triangles with base a side of $\H$, and apex $P_i'$ the reflection of $P$ about said side. The $X_{14}$ of the $\mathcal{T}_i'$ lie on a rectangular hyperbola (green) concentric with $\H$. Also shown is the (dotted magenta) line of the $X_{14}$ of the inner triangles.} \label{fig:x14} \end{figure} \subsection*{A directrix equilateral} The anticomplement\footnote{This is the double-length reflection about the barycenter $X_2$.} of the second Fermat point $X_{14}$ is labeled $X_{617}$ in \cite{etc}. \begin{proposition} The triangle bounded by the directrices of the $A$-, $B$-, and $C$-parabolas is an equilateral whose centroid is $X_{617}$, and whose sidelength $s'$ is given by: \begin{align*} (s')^2 =& \left(\frac{S}{4}\right)\left(\frac{5\sqrt{3} + 11\cot{\omega} + 16(\sqrt{3} + \cot{\omega})}{2\cos{(2\omega)}-1}\right) \end{align*} \label{prop:dir-equi} \end{proposition} \noindent Note: an expression for the $A$-vertex of the above appears in \cref{app:eqns}. Note also that the sidelength of the directrix equilateral is not conserved across all flank triangles in the grid since each of these will be associated with a different directrix equilateral. Interestingly, the triangle formed by the vertices of said parabolas is not an equilateral.
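The footnote's anticomplement is, in Cartesian terms, the map $P \mapsto 3G - 2P$, where $G$ is the barycenter; applying the complement $Q \mapsto (3G-Q)/2$ recovers $P$. A minimal sketch (point coordinates are arbitrary illustrations):

```python
def anticomplement(P, G):
    """Double-length reflection about the barycenter: Q = G + 2*(G - P) = 3G - 2P."""
    return (3*G[0] - 2*P[0], 3*G[1] - 2*P[1])

def complement(Q, G):
    """Inverse map: P = (3G - Q)/2."""
    return ((3*G[0] - Q[0]) / 2, (3*G[1] - Q[1]) / 2)

G, P = (1.0, 2.0), (0.0, 0.0)
Q = anticomplement(P, G)
assert Q == (3.0, 6.0)
assert complement(Q, G) == P
# G divides segment PQ in the ratio 1:2, i.e. G = (2P + Q)/3:
assert ((2*P[0] + Q[0]) / 3, (2*P[1] + Q[1]) / 3) == G
```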
\begin{figure} \centering \includegraphics[width=\textwidth]{pics/180_directrix_equi.pdf} \caption{The triangle $A'B'C'$ bounded by the directrices (dashed blue, red, and green) of the $A$-, $B$-, and $C$-parabolas (blue, red, and green) is also an equilateral, whose centroid is $X_{617}$. Interestingly, the triangle (brown) connecting the vertices $V_a,V_b,V_c$ of said parabolas is in general a scalene.} \label{fig:dir-equi} \end{figure} \subsection*{Skip-1 confocal parabolas} Let $Q_1,A,Q_3,Q_5,\ldots$ (resp. $Q_2,B,Q_4,Q_6,\ldots$) be a sequence of odd (resp. even) side vertices of adjacent hexagons, as shown in \cref{fig:par-alternated}. \begin{proposition} The sequence of odd (resp. even) vertices lies on a parabola. The former (resp. latter) is confocal with the $A$-parabola (resp.\ $B$-parabola). Furthermore, their axes are parallel to the axis of the $C$-parabola, and pass through $f_b$ and $f_c$, respectively. \end{proposition} \begin{figure} \centering \includegraphics[width=\textwidth,frame]{pics/130_par_alternated.pdf} \caption{Two additional groups of confocal parabolas exist which pass through the odd (resp. even) side vertices along a given grain of the grid. A member of the first (resp. second) confocal group is shown in dashed blue (resp. red), passing through odd vertices $[\ldots,Q_1,A,Q_3,Q_5,Q_7,\ldots]$ (resp. even vertices $[\ldots,Q_2,B,Q_4,Q_6,\ldots]$). The focus of the odd (resp. even) group is $f_a$ (resp. $f_b$). The major axes of the odd, even, and original $C$-parabola group are parallel (dashed red, blue, green).} \label{fig:par-alternated} \end{figure} The above statements are valid cyclically, i.e., there are families of confocal odd and even parabolas along each of the 3 major directions in the grid, e.g., corresponding to the diagonals of a hexagon erected upon a side of $\mathcal{T}$. Computer-usable explicit equations for some of the objects in this section appear in \cref{app:eqns}.
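Claims of this type (several grid vertices lying on a common parabola) can be screened numerically before attempting a proof: $n \ge 6$ points admit a common conic iff the $n \times 6$ matrix of monomials $[x^2, xy, y^2, x, y, 1]$ is rank-deficient. A sketch with sample points (ours, not actual grid vertices):

```python
import numpy as np

def conic_rank(pts):
    """Rank of the conic design matrix; <= 5 iff the points admit a common conic."""
    M = np.array([[x*x, x*y, y*y, x, y, 1.0] for x, y in pts])
    return np.linalg.matrix_rank(M, tol=1e-8)

# Six points on the parabola y = x^2 -> rank 5 (a common conic exists).
on_parabola = [(t, t*t) for t in (-2.0, -1.0, 0.0, 0.5, 1.0, 2.0)]
assert conic_rank(on_parabola) == 5

# Perturb one point off the conic -> full rank 6 (no common conic).
off = on_parabola[:-1] + [(2.0, 4.1)]
assert conic_rank(off) == 6
```

Whether the common conic is specifically a parabola can then be decided from the discriminant of its quadratic part.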
\subsection*{Homothetic phenomena} Referring to \cref{fig:ponc-homot}, consider our basic construction such that the reference triangle $\mathcal{T}$ is one in the homothetic family. Referring to the last column of \cref{tab:homot-broc}, notice that the homothetic family conserves both the sum of squared sidelengths and area. Another fact proved in \cite{reznik2020-similarityII} is that the loci of $X_k$, $k=13,14,15,16$, over this family are 4 distinct circles. Since, by \cref{thm:focal}, the side of said equilateral only depends on the sum of squared sidelengths and area: \begin{corollary} Over the homothetic triangle family controlling our grid, the focal equilateral has invariant sidelength and circumradius. Furthermore, the locus of its centroid is a circle concentric with the homothetic pair of ellipses. \end{corollary} \begin{figure} \centering \includegraphics[width=.8\textwidth]{pics/160_ponc_homot.png} \caption{Phenomena manifested by objects in our basic construction over homothetic triangles (blue): (i) the focal equilateral (orange) has fixed sidelength and (ii) its centroid moves along a circle (black); (iii) the centroids $X_2$ of the 3 flank triangles move along a first ellipse (green) concentric with the homothetic pair; (iv) the centroids $O_a,O_b,O_c$ of the three regular hexagons (purple) move along a second ellipse (dotted purple) which is a $90^\circ$-rotated copy of (iii).} \label{fig:ponc-homot} \end{figure} Still referring to \cref{fig:ponc-homot}, experimentally, we observe: \begin{observation} Over the homothetic family, (i) the locus of the barycenters of the three flanks is an ellipse concentric and axis-aligned with the homothetic pair, though of distinct aspect ratio, and (ii) the locus of the centroids of the three regular hexagons erected on $\mathcal{T}$ is an ellipse which is a $90^\circ$-rotated copy of (i).
\end{observation} \subsection*{Brocard porism phenomena} Referring to \cref{fig:ponc-broc}, consider our basic construction such that the reference triangle $\mathcal{T}$ is one in the Brocard porism. Let $\mathcal{A}=S/2$ denote the area of a reference triangle. Rewrite the expression for $s^2$ in \cref{thm:focal} as $s^2=(3/8)\left(\cot{\omega}-\sqrt{3}\right)\mathcal{A}$. Referring to the last column of \cref{tab:homot-broc}, note the Brocard porism conserves $\omega$ (though not area), therefore: \begin{corollary} In a (dynamic) grid controlled by triangles $\mathcal{T}$ in the Brocard porism, the focal equilateral rotates about a fixed centroid $X_{16}$. Its area is variable and proportional to the area of $\mathcal{T}$. \end{corollary} \begin{figure} \centering \includegraphics[width=\textwidth]{pics/150_ponc_broc.png} \caption{The isodynamic points $X_{15}$ and $X_{16}$ of Brocard porism triangles (blue) are stationary. This entails that, over the porism, the $X_{16}$ of the flank triangles (green) will also be stationary. Also shown are corresponding focal equilaterals, whose area is variable and proportional to the area of porism triangles.} \label{fig:ponc-broc} \end{figure} Since $X_6$ is stationary over the Brocard porism, recalling \cref{prop:x6}: \begin{corollary} Over the Brocard porism, the triangle whose vertices are the $X_{15}$ is perspective with $\mathcal{T}$ at a fixed point ($X_6$). \end{corollary} \subsection*{Open Questions} \begin{itemize} \item What dynamic properties underlie the fact that certain sequences of hexagonal vertices are spanned by parabolas? \item Does the sequence of hexagons and flank triangles tend to regular shapes away from the foci of the parabolas? \item What are new or different properties of both the basic and grid constructions if hexagons are erected inwardly upon each side of $\mathcal{T}=ABC$?
\item Depending on the amount of self-intersection (\cref{fig:self-inter}) for one or more triangles in the grid, a certain condition is crossed such that the second isodynamic point wanders away from its fixed common locations. What is that condition? Would using $X_{15}$ be correct? \item Is there an $N$ other than $3,4,6$ such that interesting properties of similar constructions can be found? \item What happens if self-intersected hexagons are erected, e.g., with vertices common with a simple regular hexagon? \end{itemize} \subsection{A-parabola} It is given implicitly by: \begin{verbatim} 2*(a^8-4*a^6*b^2+6*a^4*b^4-4*a^2*b^6+b^8- 4*a^6*c^2+13*a^4*b^2*c^2-14*a^2*b^4*c^2+5*b^6*c^2+ 6*a^4*c^4-23*a^2*b^2*c^4+15*b^4*c^4-4*a^2*c^6+ 14*b^2*c^6+c^8)*x*y-3*c^2*(2*a^6-8*a^4*b^2+ 10*a^2*b^4-4*b^6-5*a^4*c^2+18*a^2*b^2*c^2- 11*b^4*c^2+4*a^2*c^4-8*b^2*c^4-c^6)*y^2+2*(a^8-4*a^6*b^2+ 6*a^4*b^4-4*a^2*b^6+b^8-4*a^6*c^2+13*a^4*b^2*c^2- 23*a^2*b^4*c^2+14*b^6*c^2+6*a^4*c^4-14*a^2*b^2*c^4+ 15*b^4*c^4-4*a^2*c^6+5*b^2*c^6+c^8)*x*z+2*(4*a^8- 13*a^6*b^2+15*a^4*b^4-7*a^2*b^6+b^8-13*a^6*c^2+ 31*a^4*b^2*c^2-35*a^2*b^4*c^2+17*b^6*c^2+15*a^4*c^4- 35*a^2*b^2*c^4+36*b^4*c^4-7*a^2*c^6+17*b^2*c^6+c^8)*y*z- 3*b^2*(2*a^6-5*a^4*b^2+4*a^2*b^4-b^6-8*a^4*c^2+ 18*a^2*b^2*c^2-8*b^4*c^2+10*a^2*c^4-11*b^2*c^4-4*c^6)*z^2+ 2*sqrt(3)*S*(2*(a^6-3*a^4*b^2+3*a^2*b^4-b^6-5*a^4*c^2+ 9*a^2*b^2*c^2-4*b^4*c^2+7*a^2*c^4-7*b^2*c^4-3*c^6)*x*y+ (a^2-b^2-c^2)*(2*a^4-4*a^2*b^2+2*b^4-10*a^2*c^2+ 8*b^2*c^2+5*c^4)*y^2+2*(a^6-5*a^4*b^2+7*a^2*b^4-3*b^6- 3*a^4*c^2+9*a^2*b^2*c^2-7*b^4*c^2+3*a^2*c^4-4*b^2*c^4-c^6)*x*z+ 2*(2*a^6-7*a^4*b^2+8*a^2*b^4-3*b^6-7*a^4*c^2+17*a^2*b^2*c^2- 12*b^4*c^2+8*a^2*c^4-12*b^2*c^4-3*c^6)*y*z+(a^2-b^2-c^2)*(2*a^4- 10*a^2*b^2+5*b^4-4*a^2*c^2+8*b^2*c^2+2*c^4)*z^2)=0 \end{verbatim} The $B$- and $C$-parabolas can be obtained by cyclic permutations, i.e., $(a,b,c)\to(b,c,a)$, and $(a,b,c)\to(c,a,b)$, respectively. 
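Cyclic substitution is mechanical to automate; for objects given in barycentrics one also cycles the coordinates $(x,y,z)\to(y,z,x)$ in tandem with the sidelengths. A toy sketch using the $A$-median (a much shorter object than the parabola polynomial above; the helper names are ours):

```python
def a_median(a, b, c, x, y, z):
    """Implicit barycentric equation of the A-median: y - z = 0."""
    return y - z

def cycle(f):
    """B-analogue of an A-object: substitute (a,b,c,x,y,z) -> (b,c,a,y,z,x)."""
    return lambda a, b, c, x, y, z: f(b, c, a, y, z, x)

b_median = cycle(a_median)      # vanishes on x = z
c_median = cycle(b_median)      # vanishes on x = y

# Check: B = (0:1:0) and the midpoint of CA = (1:0:1) both lie on the B-median.
assert b_median(3, 4, 5, 0, 1, 0) == 0
assert b_median(3, 4, 5, 1, 0, 1) == 0
```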
\subsection{The two ``skip-1'' parabolas} The skip-1 parabola through $B$ is given by: \begin{verbatim} (sqrt(3)*(2*a^6 - 5*a^4*b^2 + 4*a^2*b^4 - b^6 - 7*a^2*b^2*c^2 + 3*b^4*c^2 - 3*a^2*c^4 + 3*b^2*c^4 + c^6) + 6*(6*a^4 - 5*a^2*b^2 + b^4 - 4*a^2*c^2 + 2*b^2*c^2 + c^4)*S)*x^2 + (sqrt(3)*(3*a^6 - 7*a^4*b^2 + 5*a^2*b^4 - b^6 + 6*a^4*c^2 - 10*a^2*b^2*c^2 + 2*b^4*c^2 - 3*a^2*c^4 + 5*b^2*c^4) + 2*(13*a^4 - 8*a^2*b^2 + b^4 - 17*a^2*c^2 + 7*b^2*c^2 + 4*c^4)*S)*x*y + (sqrt(3)*(4*a^6 - 14*a^4*b^2 + 13*a^2*b^4 - 3*b^6 - 4*a^4*c^2 - 13*a^2*b^2*c^2 + 4*b^4*c^2 - a^2*c^4 + 4*b^2*c^4 + c^6) + 2*(22*a^4 - 14*a^2*b^2 + b^4 - 8*a^2*c^2 + 7*b^2*c^2 + c^4)*S)*x*z + (sqrt(3)*(5*a^6 - 7*a^4*b^2 + 2*a^2*b^4 + 6*a^4*c^2 + 3*a^2*b^2*c^2 - 2*b^4*c^2 - 6*a^2*c^4 + b^2*c^4 + c^6) - 2*(11*a^4 - 10*a^2*b^2 + 2*b^4 - a^2*c^2 + 2*b^2*c^2 - c^4)*S)*y*z + (sqrt(3)*(a^6 - 15*a^4*b^2 + 9*a^2*b^4 - b^6 - 2*a^4*c^2 - 3*a^2*b^2*c^2 - b^4*c^2 + a^2*c^4 + 2*b^2*c^4) + 6*(3*a^4 + 2*a^2*b^2 - b^4 - a^2*c^2)*S)*z^2 = 0 \end{verbatim} The skip-1 parabola through $C$ is obtained with a bicentric substitution, i.e., $(a,b,c,x,y,z)\to(a,c,b,x,z,y)$. 
\subsection{Center (at infinity) of the A-parabola group} The $A$-parabola and $BC$ skip-1 pair of parabolas have parallel axes through $f_a,f_b,f_c$, therefore their axes will cross the line at infinity at the same point given by the following barycentrics: \begin{verbatim} x=8*a^6 - 8*a^4*b^2 + a^2*b^4 - b^6 - 8*a^4*c^2 + 6*a^2*b^2*c^2 + b^4*c^2 + a^2*c^4 + b^2*c^4 - c^6 + 2*sqrt(3)*(b^2 - c^2)^2*S y=-4*a^6 + a^4*b^2 + a^2*b^4 + 2*b^6 + 7*a^4*c^2 - 3*a^2*b^2*c^2 - 5*b^4*c^2 - 2*a^2*c^4 + 4*b^2*c^4 - c^6 - 2*sqrt(3)*(a^2 - c^2)*(b^2 - c^2)*S z=-4*a^6 + 7*a^4*b^2 - 2*a^2*b^4 - b^6 + a^4*c^2 - 3*a^2*b^2*c^2 + 4*b^4*c^2 + a^2*c^4 - 5*b^2*c^4 + 2*c^6 + 2*sqrt(3)*(a^2 - b^2)*(b^2 - c^2)*S \end{verbatim} \subsection{Directrix of the A-parabola} It is the line given by: \begin{verbatim} (a^4 - 8*a^2*b^2 + 12*b^4 - 8*a^2*c^2 + 27*b^2*c^2 + 12*c^4 - 2*sqrt(3)*(2*a^2 - 5*b^2 - 5*c^2)*S)*x + (6*a^4 - 23*a^2*b^2 + 22*b^4 - 30*a^2*c^2 + 58*b^2*c^2 + 39*c^4 + 2*sqrt(3)*(a^2 - 3*b^2 - 2*c^2)*S)*y + (6*a^4 - 30*a^2*b^2 + 39*b^4 - 23*a^2*c^2 + 58*b^2*c^2 + 22*c^4 + 2*sqrt(3)*(a^2 - 2*b^2 - 3*c^2)*S)*z = 0 \end{verbatim} The other two directrices can be obtained via cyclic substitution. \subsection{Directrix equilateral} The barycentrics $x,y,z$ of the $A$-vertex of the directrix equilateral (\cref{prop:dir-equi}) are given by: \begin{verbatim} x = -sqrt(3)*(12*a^6 - 13*a^4*b^2 - a^2*b^4 + 2*b^6 - 13*a^4*c^2- 8*a^2*b^2*c^2 - 2*b^4*c^2 - a^2*c^4 - 2*b^2*c^4 + 2*c^6)- 6*(3*a^2*b^2 + 3*a^2*c^2 + 2*b^2*c^2)*S; y = sqrt(3)*(5*a^6 + 2*a^4*b^2 - 10*a^2*b^4 + 3*b^6 - 7*a^4*c^2 - 10*a^2*b^2*c^2 - a^2*c^4 - 6*b^2*c^4 + 3*c^6) - 6*(a^4 - 5*a^2*b^2 + b^4 - 3*b^2*c^2 - c^4)*S; z = sqrt(3)*(5*a^6 - 7*a^4*b^2 - a^2*b^4 + 3*b^6 + 2*a^4*c^2 - 10*a^2*b^2*c^2 - 6*b^4*c^2 - 10*a^2*c^4 + 3*c^6) - 6*(a^4 - b^4 - 5*a^2*c^2 - 3*b^2*c^2 + c^4)*S. \end{verbatim}
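To plot any of the above, barycentrics $(x:y:z)$ are converted to Cartesian coordinates by normalizing and forming the weighted average of the triangle's vertices. A minimal sketch (the triangle below is an arbitrary example):

```python
def bary_to_cartesian(bary, A, B, C):
    """Normalize barycentrics (x:y:z) and map to Cartesian via (x*A + y*B + z*C)/(x+y+z)."""
    x, y, z = bary
    s = x + y + z
    return ((x*A[0] + y*B[0] + z*C[0]) / s,
            (x*A[1] + y*B[1] + z*C[1]) / s)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
# The barycenter X_2 = (1:1:1) maps to the average of the vertices.
gx, gy = bary_to_cartesian((1, 1, 1), A, B, C)
assert abs(gx - 5/3) < 1e-12 and abs(gy - 1.0) < 1e-12
```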
\def\partial{\partial} \def\lwr #1{\lower 5pt\hbox{$#1$}\hskip -3pt} \def\rse #1{\hskip -3pt\raise 5pt\hbox{$#1$}} \def\lwrs #1{\lower 4pt\hbox{$\scriptstyle #1$}\hskip -2pt} \def\rses #1{\hskip -2pt\raise 3pt\hbox{$\scriptstyle #1$}} \def\bmatrix#1{\left[\matrix{#1}\right]} \def\<#1{\left\langle{#1}\right\rangle} \def\subinbn{{\subset\hskip-8pt\raise 0.95pt\hbox{$\scriptscriptstyle\subset$}}} \def\Square{\hbox{\hskip 6pt\vrule width 5pt height4pt depth 1pt\hskip 1pt}} \def\llvdash{\mathop{\|\hskip-2pt \raise 3pt\hbox{\vrule height 0.25pt width 1.5cm}}} \def\lvdash{\mathop{|\hskip-2pt \raise 3pt\hbox{\vrule height 0.25pt width 1.5cm}}} \def\fakebold#1{\leavevmode\setbox0=\hbox{#1}% \kern-.025em\copy0 \kern-\wd0 \kern .025em\copy0 \kern-\wd0 \kern-.025em\raise.0333em\box0 } \font\msxmten=msxm10 \font\msxmseven=msxm7 \font\msxmfive=msxm5 \newfam\myfam \textfont\myfam=\msxmten \scriptfont\myfam=\msxmseven \scriptscriptfont\myfam=\msxmfive \mathchardef\ggarrow="7010 \mathchardef\rhookupone="7016 \mathchardef\BECAUSE="702A \mathchardef\ldh="700D \mathchardef\leg="7053 \mathchardef\ANG="705E \mathchardef\lcu="7070 \mathchardef\rcu="7071 \mathchardef\leseq="7035 \mathchardef\qeeg="703D \mathchardef\qeel="7036 \mathchardef\blackbox="7004 \mathchardef\bbx="7003 \mathchardef\simsucc="7025 \def{\fam=\myfam\BECAUSE}{{\fam=\myfam\BECAUSE}} \def\,{\fam=\myfam\qeeg}\,{\,{\fam=\myfam\qeeg}\,} \def{\fam=\myfam\leseq}{{\fam=\myfam\qeel}} \def{\fam=\myfam\ldh}{{\fam=\myfam\ldh}} \def{\fam=\myfam \rhookupone}{{\fam=\myfam \rhookupone}} \def\mathrel{\fam=\myfam\simsucc}{\mathrel{\fam=\myfam\simsucc}} \def{\fam=\myfam\leg}{{\fam=\myfam\leg}} \def{\fam=\myfam\leseq}{{\fam=\myfam\leseq}} \def{\fam=\myfam\lcu}{{\fam=\myfam\lcu}} \def{\fam=\myfam\rcu}{{\fam=\myfam\rcu}} \def{\fam=\myfam\blackbox}{{\fam=\myfam\blackbox}} \def{\fam=\myfam\bbx}{{\fam=\myfam\bbx}} \def{\fam=\myfam\ANG}{{\fam=\myfam\ANG}} \def{\fam=\myfam\ggarrow}{{\fam=\myfam\ggarrow}} \font\tencaps=cmcsc10 \def\tencaps{\tencaps}
\def\author#1{\bigskip\centerline{\tencaps #1}\medskip} \def\time#1{\par\smallskip\hang\indent\llap{\hbox to \parindent{#1\hfil\enspace}}\ignorespaces} \def\mathop\sqcap{\mathop\sqcap} \def\sqcp\limits{\mathop\sqcap\limits} \font\cmssbx=cmssbx10 \font \bbrrm=cmbx10 at 14pt \def\bbrrm{\bbrrm} \def\noindent\hang{\noindent\hang} \def\lineskip=2pt\baselineskip=12pt\lineskiplimit=0pt{\lineskip=2pt\baselineskip=12pt\lineskiplimit=0pt} \def\dotsq{\ooalign{\hfil\raise 1pt\hbox{$\cdot$}\hfil\cr\cr\hbox{${\fam=\myfam\bbx}$}}} \def\mathop{{\int\!\!\!\!\int\!\!\!\!\int}}{\mathop{{\int\!\!\!\!\int\!\!\!\!\int}}} \def\mathop{{\int\!\!\!\!\int}}{\mathop{{\int\!\!\!\!\int}}} \def{\smallcaps Geometric and Functional Analysis}{{\tencaps Geometric and Functional Analysis}} \def\frac #1#2{{#1\over #2}} \def\ssheading#1{\smallskip\goodbreak\noindent$\underline{\hbox {{#1}}}$.\enspace} \def{\backslash}{{\backslash}} \def\mathop{\rm span}{\mathop{\rm span}} \def\endproclaim{\par\rm\ifdim\lastskip<\medskipamount\removelastskip \penalty55\medskip\fi} \def\mathrel{\hbox{$\big/\!\!\!\!R$}}{\mathrel{\hbox{$\big/\!\!\!\!R$}}} \def\leaders\hbox to 1em{\hss.\hss}\hfill{\leaders\hbox to 1em{\hss.\hss}\hfill} \def\upddots{\mathinner{\mkern 1mu\raise 1pt \hbox{.}\mkern 2mu \mkern 2mu \raise 4pt\hbox{.}\mkern 1mu \raise 7pt\vbox {\kern 7 pt\hbox{.}}} } \def{\rm spec}{{\rm spec}} \def\mathop{\rm sup\ ess}{\mathop{\rm sup\ ess}} \def\mathop{\rm sign}{\mathop{\rm sign}} \def\mathop{\rm supp}{\mathop{\rm supp}} \def\mathop{\rm Supp}{\mathop{\rm Supp}} \def\mathop{\rm Sup}{\mathop{\rm Sup}} \def\mathop{\rm Sym}{\mathop{\rm Sym}} \def\mathop{\rm tr}{\mathop{\rm tr}} \def\mathop{\rm Tor}\nolimits{\mathop{\rm Tor}\nolimits} \def\mathop{\rm Var}{\mathop{\rm Var}} \def\mathop{\rm Vol}\nolimits{\mathop{\rm Vol}\nolimits} \def\mathop{\rm vol}{\mathop{\rm vol}} \def\twolongrightarrow{\ \hbox{$\longrightarrow\hskip -17pt \longrightarrow$}\ } \def\sqr#1#2{{\vcenter{\hrule height.#2pt\hbox{\vrule width.#2pt 
height#1pt \kern#1pt \vrule width.#2pt}\hrule height.#2pt}}} \def\mathchoice{\sqr34}{\sqr34}{\sqr{2.1}3}{\sqr{1.5}3}{\mathchoice{\sqr34}{\sqr34}{\sqr{2.1}3}{\sqr{1.5}3}} \def\mathchoice{\sqr{4.1}5}{\sqr{4.1}5}{\sqr{2.1}3{\mathchoice{\sqr{4.1}5}{\sqr{4.1}5}{\sqr{2.1}3} {\sqr{1.5}3}} \def\buildrul#1\under#2{\mathrel{\mathop{\null#2}\limits_{#1}}} \def{\widetilde x}{{\widetilde x}} \def\boxit#1{\vbox{\hrule\hbox{\vrule\kern3pt\vbox{\kern3pt#1 \kern3pt}\kern3pt\vrule}\hrule}} \def\mathop{\bigcup\hskip -7.5pt\cdot}{\mathop{\bigcup\hskip -7.5pt\cdot}} \def\mathop{\bigcup\hskip -7.5pt\circ}{\mathop{\bigcup\hskip -7.5pt\circ}} \def\mathrel{\vcenter{\offinterlineskip{\hbox{$<${\mathrel{\vcenter{\offinterlineskip{\hbox{$<$} \hbox{$\sim$}}}}} \def\mathrel{\vcenter{\offinterlineskip{\hbox{$>${\mathrel{\vcenter{\offinterlineskip{\hbox{$>$} \hbox{$\sim$}}}}} \def\subsetsim{\mathrel{\vcenter{\baselineskip 6pt{\hbox{$ \subset$}\hbox{$\sim$}}}}} \def\sum\limits{\sum\limits} \def\prod\limits{\prod\limits} \def\int\limits{\int\limits} \def{\textstyle{1\over 2}}{{\textstyle{1\over 2}}} \def{\textstyle{1\over 4}}{{\textstyle{1\over 4}}} \def{\textstyle{3\over 4}}{{\textstyle{3\over 4}}} \def{\textstyle{1\over 3}}{{\textstyle{1\over 3}}} \def\mathop{{\int\!\!\!\!\int}}{\mathop{\int\!\!\!\int}} \def\sucsim {\mathrel{\vcenter{\offinterlineskip\hbox{$\scriptstyle\succ$}\hbox {$\scriptstyle\sim$}}}} \def{\mathrel{\vcenter{\offinterlineskip\hbox{$\sim$}\hbox{$\to${{\mathrel{\vcenter{\offinterlineskip\hbox{$\sim$}\hbox{$\to$} }}}} \def\mathrel{\vcenter{\offinterlineskip\hbox{$\scriptstyle\ge${\mathrel{\vcenter{\offinterlineskip\hbox{$\scriptstyle\ge$} \hbox{$\vphantom t\scriptstyle<$}}}} \def\mathrel{\vcenter{\offinterlineskip\hbox{$\scriptstyle\le${\mathrel{\vcenter{\offinterlineskip\hbox{$\scriptstyle\le$} \hbox{$\vphantom t\scriptstyle>$}}}} \def\baselineskip=5pt\vcenter{\hbox{$\subset$}\hbox{$\ne$}}{\baselineskip=5pt\vcenter{\hbox{$\subset$}\hbox{$\ne$}}} 
\def\subheading#1{\medskip\goodbreak\noindent{\bf #1.}\quad} \def\sect#1{\goodbreak\bigskip\centerline{\bf#1}\medskip} \def\smallskip\noindent{\bf Proof:\quad}{\smallskip\noindent{\bf Proof:\quad}} \def\longmapright #1 #2 {\smash{\mathop{\hbox to #1pt {\rightarrowfill}}\limits^{#2}}} \def\longmapleft #1 #2 {\smash{\mathop{\hbox to #1pt {\leftarrowfill}}\limits^{#2}}} \def\references#1{\goodbreak\bigskip\par\centerline{\bf References}\medskip\parindent=#1pt} \def\ref#1{\par\smallskip\hang\indent\llap{\hbox to \parindent{#1\hfil\enspace}}\ignorespaces} \def\noalign{\hrule}{\noalign{\hrule}} \def{\raise 2.5pt\hbox{$\,\scriptscriptstyle\backslash\,$}}{{\raise 2.5pt\hbox{$\,\scriptscriptstyle\backslash\,$}}} \def{\raise 1pt\hbox{${\scriptstyle\bigcirc}$}}{{\raise 1pt\hbox{${\scriptstyle\bigcirc}$}}} \def\hbox{---}\!\!\!\!\!\!\!\displaystyle\int{\hbox{---}\!\!\!\!\!\!\!\displaystyle\int} \def\jarrow{\vcenter{\offinterlineskip\hbox{$\hskip2.4pt \Big\uparrow$}\hbox{$\cup$ }}} \def\mathop{\ooalign{\hbox{$\,\bigcirc ${\mathop{\ooalign{\hbox{$\,\bigcirc $} \cr\cr \hbox{$\displaystyle\int\!\!\!\!\!\int$} \cr}}}
\nopagenumbers \vsize=24.5truecm \null \sect{ A NEW APPROACH TO INVESTIGATION OF EVOLUTION} \vskip-0.5truecm \sect{DIFFERENTIAL EQUATIONS IN BANACH SPACES} \sect{Ya.I. Alber\footnote*{\ninerm This research is supported in part by the Ministry of Science Grant 3481-1-91 and by the Ministry of Absorption, Center for Absorption in Science.}} \centerline{Department of Mathematics } \centerline{Technion -- Israel Institute of Technology } \centerline{Haifa 32000, Israel} \vskip0.75truecm \sect{Introduction} Known investigations of nonlinear evolution equations $$ {dx\over dt} + A(t)x(t) = f(t)\ ,\quad x(t_{0}) = x^{0}\ ,\quad t_{0} \le t < \infty\ , \eqno(0.1)$$ with monotone operators $A(t)$ acting from a reflexive Banach space $B$ to its dual space $B^*$, usually assume that along with $B$ and $B^*$ there is a Hilbert space $H$ and a continuous imbedding $B \hookrightarrow H$ in the triplet $$ B \hookrightarrow H \hookrightarrow B^*\ ; \eqno(0.2)$$ and that $B$ is dense in $H$. The stabilization of solutions of evolution equations has been proven either in the sense of weak convergence in $B$ or in the norm of the space $H$, and only asymptotic estimates of the stabilization rate have been obtained [15]. In the present paper we consider equations of type (0.1) without conditions (0.2) and establish stabilization with both asymptotic and nonasymptotic rate estimates in the norm of $B$. Our research is based on a new technique using Banach space geometry, parallelogram inequalities, estimates of duality mappings, nonstandard Lyapunov functionals in Banach space, and estimates of solutions to differential inequalities. \bigskip \sect{1.
Differential Inequalities} In this section differential inequalities $$ {d\lambda (t)\over dt} \le - \alpha (t)\psi \big(\lambda (t)\big) + \gamma (t)\ ,\quad \lambda (t_{0}) = \lambda _{0}\ ,\quad t \ge t_{0} \eqno(1.1)$$ are investigated for a nonnegative function $\lambda (t)$. Such inequalities are the most powerful tool, in many cases the only one, to prove convergence and stability and to establish estimates of the convergence rate of solutions of evolution equations. The results that will be presented play an auxiliary role. However, they have their own importance as well (see also [16]). \headline{\hfill\ninerm\folio\hfill} \proclaim Lemma 1. If, in inequality $(1.1)$, $\psi (\lambda )$ is a positive continuous function for all $\lambda > 0$, $\psi (0) = 0$ and, in addition, $$ \lim_{t\rightarrow \infty } {\gamma (t)\over \alpha (t)} = 0\ , $$ $$ \int^{\infty }_{t_{0}}\alpha (\tau)d\tau = \infty \ , $$ then $\lambda (t) \rightarrow 0$ as $t \rightarrow \infty$. \smallskip\noindent{\bf Proof:\quad} Let us consider the alternative $$ H_{1}: \psi \big(\lambda (t)\big) < S(t)\ ,\quad H_{2}: \psi \big( \lambda (t)\big) \ge S(t)\ , $$ $$ S(t) = {1\over \int^{t}_{t_{0}}\alpha (\tau)d\tau} + {\gamma (t)\over \alpha (t)}\ . $$ Introduce the sets $$ \eqalignno{&T^{i}_{1} =\big\{t_{0} \le t \in [t_{i},{\overline t}_{i}] \subseteq R^{+}: H_{1}\hbox{ is true }\big\}\ ,\quad T_{1} = \bigcup T^{i}_{1}\ ,\quad i = 1,2,\ldots &(1.2)\cr &T^{j}_{2} =\big\{t_{0} \le t \in [t_{j},{\overline t}_{j}] \subseteq R^{+}: H_{2}\hbox{ is true }\big\}\ ,\quad T_{2} = \bigcup T^{j}_{2}\ ,\quad j = 1,2,\ldots . &(1.3)\cr}$$ It is obvious that $T^{i}_{1}$ and $T^{j}_{2}$ alternate and $T = [t_{0},\infty ) = T_{1} \cup T_{2}$. We will show $T_{1}$ to be unbounded. Let the opposite be true.
Then there is some $t=\tau_{1}$ such that for all $t \ge \tau_{1}$ the hypothesis $H_{2}$ occurs and the differential inequality $$ {d\lambda (t)\over dt} \le - {\alpha (t)\over \int^{t}_{t_{0}}\alpha (\tau)d\tau} \eqno(1.4)$$ is realized. Hence, $$ \lambda (t) \le \lambda (\tau_{1}) - \int^{t}_{\tau_{1}}{\alpha (\tau)\over A(\tau)}d\tau\ , \quad A(\tau) = \int^{\tau}_{t_{0}}\alpha (s)ds\ . $$ By virtue of the Cauchy integral criterion, $$ \int^{\infty }_{\tau_{1}}{\alpha (\tau)\over A(\tau)}d\tau = \infty $$ because $$ \int {\alpha (t)\over A(t)}dt = \ln A(t) \rightarrow \infty\quad \hbox{ as }\quad t \rightarrow \infty \ . $$ It follows from this that, beginning with some $t = \tau_{2}$, the function $\lambda (t)$ becomes negative, but this is in contradiction with the conditions of the lemma. Thus $\lambda (t)$ is estimated from above by a function which is monotonically decreasing and vanishes on the set $T_{1}$. Because of the $H_{2}$-hypothesis, $\lambda (t)$ is also estimated on every interval $T^{j}_{2}$ by a decreasing function of the kind (1.4). The lemma is proved. \subheading{Remark 1} The assertion of Lemma 1 for the linear case $\big(\psi (\lambda )=\lambda \big)$ has been obtained previously in [1], using the formula $$ \lambda (t) \le \lambda (t_{0})e^{-\int^{t}_{t_0}\alpha (\tau)d\tau} + \int^{t}_{t_{0}}\gamma (\tau)e^{-\int^{t}_{\tau}\alpha (s)ds}d\tau\ . $$ Now we will introduce stronger requirements concerning the functions $\psi (\lambda ), \alpha (t)$ and $\gamma (t)$, that enable us to prove stabilization of $\lambda (t)$ to zero, as well as to establish both asymptotic and nonasymptotic estimates of the stabilization rate. First of all we will formulate two general statements about inequality (1.1).
Let $\gamma (t)$ and $\alpha (t)$ be continuous, nonnegative and nonincreasing functions, $\psi (\lambda )$ a continuous positive strictly increasing function with $\psi (0) = 0$, and let $F(t)$ and $\phi (\lambda )$ be antiderivatives of the functions $\alpha (t)$ and $1/\psi (\lambda )$, respectively. These functions are assumed to be defined, $\phi ^{-1}(z)$ to exist and be single-valued on the corresponding set $Z$, and $F(t) > 0$ for all $t \ge t_{0}$. We introduce the following notations [4]: \item{(1)}$v(t) = \psi ^{-1}\big(c_{0}{\gamma (t)\over \alpha (t)}\big)$, $t \ge t_{0}$, where $c_{0} > 1$ is an arbitrary parameter and $\psi ^{-1}(\cdot )$ is the function inverse to $\psi (\cdot )$, $v_{0} = v(t_{0})$; $$ u(t,C) = \phi ^{-1}\big[\phi (C)-a(F(t)-F(t_{0}))\big]\ ,\quad a = c^{-1}_{0}(c_{0}-1) > 0\ ,\quad C \ge 0\ . \leqno(2)$$ Next, we introduce the main property $(P)$: on the interval $[t_{0},\infty )$ the functions $v(t)$ and $$ w\big(t,v(z)\big) = \phi ^{-1}\big[\phi (v(z))-a(F(t)-F(z))\big]\ , $$ where $z \in [t_{0},\infty )$ is fixed, either coincide completely or intersect at no more than two points (the points of tangency are not regarded as intersection points of the functions except, possibly, for $t = t_{0})$. \proclaim Lemma 2. Suppose that $(1)$ $(P)$ holds for $z = t_{0}$, $(2)$ $u(t,v_{0}) > v(t)$ as $t\rightarrow \infty $, and $(3)$ $c_{0}$ is chosen such that $u(t,v_{0}) \ge v(t)$ as $t \rightarrow t_{0}$. Then the solution $\lambda (t)$ of inequality $(1.1)$ tends to zero and for all $t \ge t_{0}$ satisfies the estimate $$ \lambda (t) \le u(t,C)\ ,\quad C = \max \{\lambda _{0},v_{0}\}\ . \eqno(1.5)$$ \subheading{Remark 2} Condition 2 is understood in the sense that there exists a number $N$ such that $u(t,v_{0}) \ge v(t)$ for all $t \ge N$. This also applies to similar conditions contained in Lemmas 2 and 3.
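As a concrete illustration of estimate $(1.5)$ (a worked example of ours, not taken from the text): let $\psi (\lambda )=\lambda $, $\alpha \equiv 1$, $\gamma (t)=e^{-2t}$, $\lambda _{0}=1$ with $t_{0}=0$ and $c_{0}=2$, so that $a=1/2$, $\phi (\lambda )=\ln \lambda $, $F(t)=t$, $v(t)=2e^{-2t}$, $v_{0}=2$, and the majorant is $u(t,C)=Ce^{-t/2}$ with $C=\max \{\lambda _{0},v_{0}\}=2$. Integrating the equality case of $(1.1)$ numerically confirms the bound:

```python
import math

# Equality case of (1.1): dlam/dt = -alpha*psi(lam) + gamma with
# alpha = 1, psi(lam) = lam, gamma(t) = exp(-2t), lam(0) = 1, c0 = 2 (a = 1/2).
# Lemma 2's majorant: u(t, C) = C*exp(-a*t), C = max{lam0, v0} = 2.
def u(t, C=2.0, a=0.5):
    return C * math.exp(-a * t)

lam, t, dt = 1.0, 0.0, 1e-4
while t < 10.0:
    lam += dt * (-lam + math.exp(-2.0 * t))   # forward Euler step
    t += dt
    assert lam <= u(t) + 1e-9                 # estimate (1.5) holds throughout
```

(The exact solution here is $\lambda (t)=2e^{-t}-e^{-2t}$, visibly below $2e^{-t/2}$.)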
\smallskip\noindent{\bf Proof:\quad} Consider the following alternative $$ \eqalignno{&H_{1}: \lambda (t) < \psi ^{-1}\big(c_{0}\gamma (t)/\alpha (t)\big) &(1.6)\cr &H_{2}: \lambda (t) \ge \psi ^{-1}\big(c_{0}\gamma (t)/\alpha (t) \big)\cr} $$ and the intervals $T^{i}_{1}$ and $T^{j}_{2}$ with respect to (1.2) and (1.3). Assume, at first, that $T_{1} = \emptyset $. In that case, for all $t \in T = [t_{0},\infty )$, we have $\gamma (t) \le c^{-1}_{0}\alpha (t)\psi (\lambda (t))$ and the inequality $$ {d\lambda (t)\over dt} \le -a\alpha (t)\psi (\lambda (t))\ , \quad \lambda (t_{0}) = \lambda _{0}\ . $$ Its solution is estimated by the known formula $$ \lambda (t) \le \phi ^{-1}\big[\phi (\lambda _{0})-a(F(t)-F(t_{0}))\big]\ ,\quad t \ge t_{0}\ . \eqno(1.7)$$ Now let $T_{2}= \emptyset$. Then, estimate (1.6) is true. If $T_{1}$ or $T_{2}$ is finite, then the estimates (1.6) or (1.7) hold, starting from some $\widetilde{t} \ge t_{0}$, and a situation arising under $t \le \widetilde{t}$ is described below. Finally, assume $T_{1}$ and $T_{2}$ to be infinite sets at the same time. Let us choose an arbitrary interval $T^{i}_{1} \in T_{1}$. This determines the next interval $T^{i}_{2}$ (if $t_{0} \in T_{1}$) or $T^{i+1}_{2}$ (if $t_{0} \in T_{2}$). It is obvious that (1.6) is fulfilled for all $t \in T^{i}_{1}$, but that on the interval $T^{i}_{2}$ $$ \lambda (t) \le \phi ^{-1}\bigg[\phi\big(\lambda ({\overline t}_{i})\big )-a \int^{t}_{{\overline t}_{i}}\alpha (s)ds\bigg]\ . \eqno(1.8)$$ One should consider the following two cases: \item{(a)} $\lambda _{0} \ge v(t_{0})$. Here $t_{0} \in T_{2}$ and the function $u(t,\lambda _{0})$ is majorant. Indeed, the function $u(t,\lambda _{0})$ cannot intersect $v(t)$ at any value of $t$ because, in the opposite case, at least three intersection points would appear, contradicting property $(P)$. \item{(b)}$\lambda _{0} < v(t_{0})$. Let us construct $u\big(t,v(t_{0})\big)$.
According to the assumptions of the lemma, $u(t,v_{0})$ majorizes $v(t)$ both as $t \rightarrow t_{0}$ and as $t \rightarrow \infty $, and by virtue of the property $$ u(t,R_{1}) \ge u(t,R_{2})\ ,\quad \forall\, R_{1} \ge R_{2} \eqno(1.9)$$ it majorizes $u(t,\lambda _{0})$ along with the function $\phi ^{-1}(\cdot )$ in (1.8), because $\lambda ({\overline t}_{i})=v({\overline t}_{i})$. We remark that $u(t,\lambda _{0})$ can intersect $v(t)$ only once, so $i = 1$. Hence, the general estimate is described by formula (1.5). The lemma is proven. \proclaim Lemma 3. Suppose that $(1)$ property $(P)$ holds for all $z \in [t_{0},\infty )$; $(2)$ $u(t,v_{0}) \le v(t)$ as $t \rightarrow \infty $. Then $\lambda (t) \rightarrow 0$ and the following assertions are true: \item{\rm (i)} If $\lambda _{0} \le v_{0}$ and $c_{0}$ is chosen so that $u(t,v_{0}) \le v(t)$ as $t \rightarrow t_{0}$, then, for the solution of inequality $(1.1)$, the estimate $$ \lambda (t) \le v(t)\ ,\quad t \ge t_{0} $$ is valid; \item{\rm (ii)} In all the remaining cases, $$ \eqalignno{&\lambda (t) \le u(t,C)\ ;\quad C = \max \big\{\lambda _{0},v(t_{0})\big\}\ ,\quad t_{0} \le t \le {\overline t}& (1.10)\cr & \lambda (t) \le v(t)\ ,\quad t \ge {\overline t}&(1.11)\cr}$$ where ${\overline t}$ is the unique root of the equation $u(t,C) = v(t)$ on the interval $[t_{0},\infty )$. \smallskip\noindent{\bf Proof:\quad} Let assumption (i) be valid. By (2) and property $(P)$, the function $u(t,v_{0})$ is no greater than $v(t)$ for all $t \ge t_{0}$; otherwise they would have at least three intersection points on the interval $[t_{0},\infty )$. If $\lambda _{0} \le v_{0}$, then the function $w\big(t,v(z)\big)$ is also majorized by the function $v(t)$, since, in the opposite case, either three intersection points must appear, or one intersection point $t > z$ and one point of tangency $t = z$. Neither is possible by virtue of property (1.9), since a point of tangency splits into two intersection points under an arbitrarily small perturbation.
Thus the function $v(t)$ is a majorant. Now let $\lambda _{0} \le v_{0}$ but $u(t,v_{0}) > v(t)$ as $t \rightarrow t_{0}$. It is clear that in this case exactly one intersection point of $v(t)$ and $u(t,v_{0})$ exists. The same applies to $\lambda _{0} > v_{0}$. The final estimates are given by relations (1.10) and (1.11). The lemma is proven. We now present the most important special cases of Lemmas 2 and 3 (cf.~[10]). \proclaim Lemma 4. In inequality $(1.1)$ let $\lambda (t) \ge 0$, $\psi (\lambda ) = \lambda $, $$ \alpha (t) = {b\over t}\ ,\quad \gamma (t) = {d\over t^{n}}\ , \quad n > 1\ ,\quad b > 0\ ,\quad d \ge 0\ . $$ Then $\lambda (t) \rightarrow 0$ as $t \rightarrow \infty $ and, also: \item{$ (1)$} Assume $ab > n-1$, i.e., $c_{0} > b/(b-n+1)$. Then, the following are true: \itemitem{\rm (i)}If $\lambda _{0} \le c_{0}db^{-1}t^{1-n}_{0}$, then, for all $t \ge t_{0}$, $$ \lambda (t) \le {c_{0}d\over b}\left({1\over t}\right)^{n-1}\ ; $$ \itemitem{\rm (ii)} If $\lambda _{0} \ge c_{0}db^{-1}t^{1-n}_{0}$, then, the estimates $$ \eqalign{&\lambda (t) \le \lambda_0\left({t_{0}\over t}\right) ^{ab}\ ,\quad t_{0} \le t \le {\overline t}\cr &\lambda (t) \le {c_{0}d\over b}\left({1\over t}\right)^{n-1}\ , \quad t >{\overline t}\ ,\quad {\overline t}= (c^{-1}_{0}d^{-1}b\lambda _{0}t^{ab}_{0} )^{1/(ab+1-n)}\cr} $$ are satisfied. \item{$(2)$} Assume $ab \le n-1$, i.e., $c_{0} \le b/(b-n+1)$. Then, for all $t \ge t_{0}$ $$\lambda (t) \le C\left({t_{0}\over t}\right)^{ab}\ ,\quad C = \max \left\{\lambda _{0}, {c_{0}d\over b} t^{1-n}_{0}\right\}\ .$$ \proclaim Lemma 5. In inequality $(1.1)$ let $\lambda (t) \ge 0$, $\psi (\lambda ) = \lambda $, $$ \alpha (t) = {b\over t^m}\ ,\quad 0 \le m < 1\ ,\quad \gamma (t) = {d\over t^n}\ ,\quad n > m\ ,\quad b > 0\ ,\quad d \ge 0\ .
$$ Then $\lambda (t) \rightarrow 0$ as $t \rightarrow \infty $ and, also: \item{$(1)$} If $\lambda _{0} \le c_{0}db^{-1}t^{m-n}_{0}$ and $ab \ge t^{m-1}_{0}(n-m)$, i.e., $c_{0} \ge bt^{1-m}_{0}/(bt^{1-m}_{0}-n+m)$, then, for all $t \ge t_{0}$, $$ \lambda (t) \le {c_{0}d\over b}\left({1\over t}\right)^{n-m}\ ; $$ \item{$(2)$} In all the remaining cases $$ \eqalign{&\lambda (t) \le C~\exp \left[-{ab\over 1-m}(t^{1-m}-t^{1-m}_{0})\right]\ ,\quad t_{0} \le t \le {\overline t} \cr & \lambda (t) \le {c_{0}d\over b}\left({1\over t}\right)^{n-m}\ , \ t >{\overline t}\ ,\ C = \max \left\{\lambda _{0},{c_{0}d\over b} t^{m-n}_{0}\right\}\ ,\qquad\cr} $$ where ${\overline t}$ is the unique root of equation $$ C \exp \left[{ab\over m-1}(t^{1-m}-t^{1-m}_{0})\right] = {c_{0}d\over b}\left({1\over t}\right)^{n-m}\ . $$ The asymptotic estimate is $$ \lambda (t) = O(t^{-n}) $$ as $m = 0.$ \proclaim Lemma 6. In inequality $(1.1)$ let $\lambda \ge 0$, $\psi (\lambda ) = \lambda ^{\nu }$, $\nu > 1,$ $$ \alpha (t) = {b\over t}\ ,\quad \gamma (t) = {d\over t^{n}}\ , \quad n > 1\ ,\quad b > 0\ ,\quad d \ge 0\ . $$ Then $\lambda (t) \rightarrow 0$ as $t \rightarrow \infty $ and, if $$ ab\left({dc_{0}\over b}\right)^{\nu-1\over \nu }<{n-1\over\nu} t^{{(n-1)(\nu -1)\over \nu }}_{0} \eqno(1.12)$$ holds, then, for all $t \ge t_{0}$, $$ \lambda (t) \le \big[C^{1-\nu }+(\nu -1)ab\ln t/t_{0}\big]^{-1/(\nu -1)} $$ where $C = \max \big\{\lambda _{0},(c_{0}db^{-1})^{1/\nu }t^{(n-1)/\nu }_{0}\big\}$. \proclaim Lemma 7. In inequality $(1.1)$ let $\lambda (t) \ge 0$, $\psi (\lambda ) = \lambda ^{\nu }$, $\nu > 1,$ $$\alpha (t) = {b\over t^{m}}\ ,\quad 0 \le m < 1\ ,\quad \gamma (t) = {d\over t^{n}}\ ,\quad n > m\ ,\quad b > 0\ ,\quad d \ge 0\ . 
$$ Then $\lambda (t) \rightarrow 0$ as $t \rightarrow \infty $ and, also: \item{$(1)$} Assume $\nu ^{-1}(n-m) < (\nu -1)^{-1}(1-m)$ and \itemitem{\rm (i)}if $\lambda _{0} \le (c_{0}d b^{-1})^{1/\nu } t^{(m-n)/\nu }_{0}$ and $$ ab\left({c_{0}d\over b}\right)^{(\nu-1)/\nu}\ge{n-m\over\nu}t ^{m-p}_0\ ,\quad p = 1 - {(\nu -1)(n-m)\over \nu } \eqno(1.13)$$ then, for all $t \ge t_{0}$ $$ \lambda (t) \le \left({c_{0}d\over b}\right)^{1/\nu }\left({1\over t}\right)^{(n-m)/ \nu }\ . $$ \itemitem{\rm (ii)} In all the remaining cases $$ \eqalignno{&\lambda (t) \le \left[C^{1-\nu }+ab{\nu -1\over 1-m}( t^{1-m}-t^{1-m}_{0})\right]^{-1/(\nu -1)}\ ,\quad t_{0} \le t \le {\overline t}\cr &\lambda (t) \le \left({c_{0}d\over b}\right)^{1/\nu }\left( {1\over t}\right)^{(n-m)/ \nu }\ ,\ t >{\overline t}\ ,\ C = \max \left\{\lambda _{0},\left({c_{0}d\over b}\right)^{1/\nu } t^{(m-n)/ \nu }_{0}\right\}\ ,\qquad\qquad &(1.14)\cr}$$ where ${\overline t}$ is a unique root of the equation $$ \left[C^{1-\nu }+ab{\nu -1\over 1-m}(t^{1-m}-t^{1-m}_{0})\right] ^{-1/(\nu -1)} = \left({c_{0}d\over b}\right)^{1/\nu }\left( {1\over t}\right)^{(n-m)/ \nu }\ . $$ \item{$(2)$} Assume $(1-m)(\nu -1)^{-1} \le (n-m)\nu ^{-1}$. If an inequality opposite to $(1.13)$ occurs, then, for all $t > t_{0}$, $$ \lambda (t) \le \left[C^{1-\nu }+ab{\nu -1\over 1-m}(t^{1-m} -t^{1-m}_{0})\right]^{-1/(\nu -1)} $$ where constant $C$ coincides with $(1.14)$. The asymptotic estimate is $$ \lambda (t) = O(t^{-p})\ ,\quad p = \min \left\{{n\over\nu},{1\over \nu -1}\right\} $$ as $m = 0.$ \proclaim Lemma 8. In inequality $(1.1)$ let $\lambda (t) \ge 0$, $\psi (\lambda ) = \lambda ^{\nu }$, $0 < \nu < 1$, $$ \alpha (t) = {b\over t}\ ,\quad \gamma (t) = {d\over t^{n}}\ ,\quad n > 1\ ,\quad b > 0\ ,\quad d \ge 0\ . 
$$ Then $\lambda (t) \rightarrow 0$ as $t \rightarrow \infty $ and, also: \item{$(1)$} If $\lambda _{0} < (c_{0}db^{-1})^{1/\nu } t^{(1-n)/\nu }_{0}$ and $c_{0}$ is chosen from $(1.12)$, then, for all $t \ge t_{0}$, $$ \lambda (t) \le\left({c_{0}d\over b}\right)^{1/\nu }\left({1\over t} \right)^{(n-1)/ \nu }\ . $$ \item{$(2)$} In all the remaining cases the estimates $$ \eqalign{&\lambda (t) \le C\left[1-C^{\nu -1}(1-\nu )ab \ln {t\over t_{0}}\right]^{1/(1-\nu )}\ ,\quad t_{0} \le t \le {\overline t}\ ,\cr &\lambda (t) \le \left({c_{0}d\over b}\right)^{1/\nu }\left( {1\over t}\right)^{(n-1)/ \nu }\ ,\qquad t >{\overline t} \ ,\cr} $$ are satisfied, where $C = \max \big\{\lambda _{0},(c_{0}db^{-1}) ^{1/\nu }t^{(1-n)/\nu }_{0}\big\}$ and ${\overline t}$ is a unique root of equation $$ \left({c_{0}d\over b}\right)^{1/\nu }\left({1\over t}\right)^{( n-1)/ \nu } = C\left[1-(1-\nu )C^{\nu -1}ab \ln{t\over t_{0}} \right]^{1/(1-\nu )} $$ belonging to the interval $\big[t_{0},t_{0}\exp \big\{C^{1-\nu }/ab(1-\nu )\big\}\big]$. \proclaim Lemma 9. In inequality $(1.1)$ let $\lambda (t) \ge 0$, $\psi (\lambda ) = \lambda ^{\nu }$, $0 < \nu < 1$, $$ \alpha (t) = {b\over t^{m}}\ ,\quad 0 \le m < 1\ ,\quad \gamma (t) = {d\over t^{n}}\ ,\quad n > m\ ,\quad b > 0\ ,\quad d \ge 0\ . $$ Then $\lambda (t) \rightarrow 0$ as $t \rightarrow \infty $ and, also: \item{$(1)$} If $\lambda _{0} \le (c_{0}db^{-1})^{1/\nu } t^{(m-n)/\nu }_{0}$ and $c_{0}$ is chosen from $(1.13)$, then, for all $t \ge t_{0}$, $$ \lambda (t) \le \left({c_{0}d\over b}\right) ^{1/\nu } \left( {1\over t}\right)^{(n-m)/ \nu }\ . 
$$ \item{$(2)$} In all the remaining cases $$ \eqalign{&\lambda (t) \le C\left[1+{1-\nu \over m-1}ab\, C^{\nu -1}(t^{1-m}-t^{1-m}_{0})\right]^{1/(1-\nu )}\ ,\quad t_{0} \le t \le {\overline t}\ ,\cr &\lambda (t) \le \left({c_{0}d\over b}\right)^{1/\nu }\left( {1\over t}\right)^{(n-m)/ \nu }\ ,\quad t >{\overline t}\ ,\cr} $$ where $C = \max \big\{\lambda _{0},(c_{0}db^{-1})^{1/\nu } t^{(m-n)/\nu }_{0}\big\}$ and ${\overline t}$ is a unique root of the equation $$ \left[C^{\nu -1}+{1-\nu \over m-1}ab (t^{1-m}-t^{1-m}_{0})\right] ^{1/(1-\nu )} =\left({c_{0}d\over b}\right)^{1/\nu}\left({1\over t} \right)^{(n-m)/ \nu } $$ belonging to interval $\big[ t_{0},\big(C^{1-\nu }(1-m)(1-\nu ) ^{-1}(ab)^{-1}+t^{1-m}_{0}\big)^{1/(1-m)}\big] $. \medskip The asymptotic estimate is $$ \lambda (t) = O(t^{-n/\nu })\ , $$ as $m = 0.$ \proclaim Lemma 10. Let $\lambda (t) \ge 0$ satisfy the inequality $$ {d\lambda (t)\over dt} \le - b\lambda (t) +de^{-nt}\ ,\quad t \ge t_{0}\ ,\quad b > 0\ ,\quad d \ge 0\ ,\quad \lambda (t_{0}) = \lambda _{0}\ . $$ Then $\lambda (t) \rightarrow 0$ as $t \rightarrow \infty $ and, also: \item{$(1)$} If $n \le ab$ and $\lambda _{0} \le c_{0}db^{-1}e^{-nt_{0}}$, then, for all $t \ge t_{0}$, $$ \lambda (t) \le {c_{0}d\over b} e^{-nt}\ . $$ \item{$(2)$} If $n \le ab$ and $\lambda _{0} > c_{0}db^{-1}e^{-nt_{0}}$, then, for all $t \ge t_{0}$, $$ \eqalign{&\lambda (t) \le \lambda _{0}e^{-ab(t-t_{0})}\ ,\quad t_{0} \le t \le {\overline t}\cr &\lambda (t) \le {c_{0}d\over b} e^{-nt}\ ,\quad t >{\overline t}\ ,\quad {\overline t}= {\ln c_{0}d(b\lambda _{0})^{-1}-abt_{0}\over n-ab} \ .\cr} $$ \item{$(3)$} If $n > ab$, then, for all $t \ge t_{0}$ $$ \lambda (t) \le Ce^{-ab(t-t_{0})}\ ,\quad C = \max \{\lambda _{0},c_{0}db^{-1}e^{-nt_{0}}\}\ . $$ The asymptotic estimate is $$ \lambda (t) = O(e^{-\beta t})\ ,\quad \beta = \min \{ab,n\} $$ as $t \rightarrow \infty$. \bigskip \sect{\bf 2. 
Evolution Equations with Uniformly Monotone Operators} Let $B$ be a real uniformly convex and uniformly smooth (hence reflexive) Banach space, $(\varphi ,y)$ the pairing (dual product) between elements $\varphi \in B^*$ and $y \in B$, and $\|\cdot\|$ and $|\cdot| $ the norms in $B$ and $B^*$, respectively. We consider stabilization, and estimates of the stabilization rate, for evolution equations of type (0.1), where $A(t)$ is a nonlinear monotone operator from $B$ to $B^*$ for all $t \ge t_{0}$. Let us suppose first that the operator $A(t)$ tends to its limit $A$ and the function $f(t)$ tends to $f$ as $t \rightarrow \infty $, that the limit equation $$ Ax = f\ ,\quad A: B \rightarrow B^*\ ,\quad f \in B^*\ ,\quad x \in B \eqno(2.1)$$ has a solution ${\overline x}$, and that scalar functions $\omega _{1}(t)$ and $\omega _{2}(t)$ are such that $$ \big|A(t)x-Ax\big|\le \omega _{1}(t)g\big(\|x\|\big) \ ,\quad\big| f(t)-f\big| \le \omega _{2}(t)\ ,\eqno (2.2)$$ where $\omega _{1}(t) \rightarrow 0$, $\omega _{2}(t) \rightarrow 0,$ and $g(\xi )$ is a continuous nonnegative function for $\xi \ge 0$, bounded on bounded sets. We remark that relation (0.2), in fact, reduces equation (0.1) to a single space ($B^*$ or $H$) and removes the principal difficulties connected with the fundamental distinction between the spaces $B$ and $B^*$ [17,22]. The first difficulty in our situation is that we can no longer use the traditional Lyapunov functional $V_{1}(x) = 2^{-1}\|x-{\overline x} \|^{2}$. Indeed, if we want the functional $V_{1}\big(x(t)\big)$ to decrease for each value $t \ge t_{0}$, we should require $$ \big(\mathop {\rm grad}\nolimits V_{1}\big(x(t)\big),A(t)x(t)\big) \ge 0\ , \eqno(2.3)$$ because in a Banach space (see, for instance, [2]) $$ {dV_{1}\big(x(t)\big)\over dt} = \left(U\big(x(t)-{\overline x}\big),{dx(t)\over dt}\right) $$ and $$\mathop {\rm grad}\nolimits V_{1}(x) = U(x-{\overline x} )\ . $$ Here $U: B \rightarrow B^*$ is the normalized duality mapping.
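For orientation, the normalized duality mapping can be written out in the model case of ${\bf R}^{k}$ with the $p$-norm (an illustrative assumption; the text works with a general uniformly convex and uniformly smooth $B$): there $(Ux)_{i} = \|x\|^{2-p}_{p}|x_{i}|^{p-2}x_{i}$, so that $(Ux,x) = \|x\|^{2}_{p}$ and $|Ux|_{q} = \|x\|_{p}$, $1/p + 1/q = 1$. A short numerical check:

```python
# Normalized duality mapping for R^k with the p-norm (illustrative
# finite-dimensional model; this explicit formula is an assumption of
# the sketch, not taken from the text):
#   (Ux)_i = ||x||_p**(2 - p) * |x_i|**(p - 2) * x_i,   x != 0, p > 1.

def pnorm(z, p):
    return sum(abs(c) ** p for c in z) ** (1.0 / p)

def pair(f, x):
    # the pairing (f, x) between f in B* and x in B
    return sum(fc * xc for fc, xc in zip(f, x))

def U(x, p):
    s = pnorm(x, p)
    return [s ** (2.0 - p) * abs(c) ** (p - 2.0) * c for c in x]

p = 3.0
q = p / (p - 1.0)             # conjugate exponent
x = [1.0, -2.0, 0.5]
ux = U(x, p)
print(abs(pair(ux, x) - pnorm(x, p) ** 2) < 1e-9,   # (Ux, x) = ||x||^2
      abs(pnorm(ux, q) - pnorm(x, p)) < 1e-9)       # |Ux|_q = ||x||_p
```

These two identities are exactly the defining properties of a normalized duality mapping.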
In the case $B \neq H$, (2.3) makes no sense, since $A(t)x(t) \in B^*$ and $\mathop {\rm grad}\nolimits V_{1}(x) \in B^*$ as well, so the pairing in (2.3) is not defined. The second difficulty is the necessity of giving equation (0.1) a suitable form, since in (0.1) $x^\prime (t) \in B$. These two difficulties are closely connected. Consider the evolution equation $$ {d\varphi (t)\over dt} + A(t)x(t) = f(t)\ ,\quad x(t_{0}) = x^{0}\ ,\quad t_{0} < t < \infty \ ,\quad \varphi (t)= Ux(t) \eqno(2.4)$$ in the dual space $B^*$. Let us assume that the operator $A(t)$ is uniformly monotone, i.e., for all $t \ge t_{0}$, $x \in B$, $y \in B$, $$\big(A(t)x-A(t)y,x-y\big) \ge c(t)\psi _{1}\big(\| x-y\|\big)\ , \quad c(t) \ge b > 0 \ ,\eqno (2.5)$$ where $\psi _{1}(\xi )$ is a continuous positive function for all $\xi \ge 0$, $\psi _{1}(0) = 0,$ and $$ \overline{\lim_{\xi\to\infty}} {\psi _{1}(\xi )\over \xi } = \infty\ . \eqno(2.6)$$ Let us introduce the Lyapunov functional in the Banach space by the formula $$ V(\varphi ,y) = 2^{-1}\big(|\varphi |^{2} -2(\varphi ,y)+\| y\|^{2} \big) \ ,\quad\varphi = Ux\ .\eqno (2.7) $$ It is a nontrivial functional because it is determined on elements of the primal and dual spaces at the same time. $V(Ux,y)$ is a convex, nonnegative functional, differentiable with respect to $\varphi $ and $y$, and $$ \eqalignno{&\mathop {\rm grad}\nolimits_{\varphi }V(Ux,y) = x-y \in B\ ,\hbox{ as $y$ is fixed}\ ,\cr &\mathop {\rm grad}\nolimits_{y}V(Ux,y) = Uy-Ux \in B^*\ , \hbox{ as $x$ is fixed}\ ,\cr &V(Ux,y) \ge 2^{-1}\big(\| x\| -\| y\| \big)^{2}& (2.8)\cr &V(Ux,y) \le 2^{-1}\big(\| x\|+\| y\| \big)^{2}\ .& (2.9)\cr}$$ Therefore, $\mathop {\rm grad}\nolimits_{\varphi }V(Uy,y) = 0$, $\mathop {\rm grad}\nolimits_{\varphi }V(Ux,x) = 0$, $V(Uy,y) = 0$, $V(Ux,x)=0$, $\mathop {\rm grad}\nolimits_{y}V(Uy,y) = 0,$ and $\mathop {\rm grad}\nolimits_{y}V(Ux,x) = 0$. From (2.8) and (2.9) it follows that, for fixed $y$, $V(Ux,y) \rightarrow \infty $ as $\left\Vert{x}\right\Vert \rightarrow \infty $ (and, for fixed $x$, as $\left\Vert{y}\right\Vert \rightarrow \infty $), and vice versa.
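In the same finite-dimensional $p$-norm model (the explicit duality-mapping formula is an assumption of the sketch, as before), the two-sided bounds (2.8)-(2.9) and the vanishing of $V$ on the diagonal can be checked numerically:

```python
# Numerical check of (2.8)-(2.9) and of V(Ux, x) = 0 for the Lyapunov
# functional (2.7), in R^k with the p-norm (illustrative model; p > 2
# is assumed so the duality-mapping formula is smooth at zero entries).
import random

def pnorm(z, p):
    return sum(abs(c) ** p for c in z) ** (1.0 / p)

def pair(f, x):
    return sum(fc * xc for fc, xc in zip(f, x))

def U(x, p):
    s = pnorm(x, p)
    return [s ** (2.0 - p) * abs(c) ** (p - 2.0) * c for c in x]

def V(x, y, p):
    # V(Ux, y) = (|Ux|^2 - 2(Ux, y) + ||y||^2)/2, using |Ux|_q = ||x||_p
    return 0.5 * (pnorm(x, p) ** 2 - 2.0 * pair(U(x, p), y) + pnorm(y, p) ** 2)

p = 3.0
random.seed(1)
ok = True
for _ in range(200):
    x = [random.uniform(-2.0, 2.0) for _ in range(4)]
    y = [random.uniform(-2.0, 2.0) for _ in range(4)]
    nx, ny = pnorm(x, p), pnorm(y, p)
    vxy = V(x, y, p)
    ok = ok and 0.5 * (nx - ny) ** 2 - 1e-9 <= vxy <= 0.5 * (nx + ny) ** 2 + 1e-9
    ok = ok and abs(V(x, x, p)) < 1e-9
print(ok)
```

Both bounds follow from the Hölder inequality $|(Ux,y)| \le |Ux|_{q}\|y\|_{p} = \|x\|_{p}\|y\|_{p}$, which is what the check exercises.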
In Hilbert space $V(Ux,y) = V_{1}(x,y) = 2^{-1}\left\Vert{x-y}\right\Vert^{2}$. It turns out that in a Banach space the functional $V(Ux,y)$ is connected with $V_{1}(x,y)$ by means of the geometric characteristics of the space. Let us show this. In [6-8] we established the following inequalities for the duality mapping: for all $x,y \in B$ $$ \eqalignno{&(Ux-Uy,x-y) \le 8\| x-y\|^{2} + C_{1}\rho _{B}\big(\|x-y\|\big)&(2.10)\cr &(Ux-Uy,x-y) \le 8|Ux-Uy|^{2} + C_{1}\rho _{B^*}\big( |Ux-Uy|\big)&(2.11)\cr &(Ux-Uy,x-y) \ge (2L)^{-1}\delta _{B}\big(\|x-y\|/C_{2}\big)\ , \quad 1 < L < 3.18\ .&(2.12)\cr &(Ux-Uy,x-y) \ge (2L)^{-1}\delta _{B^*}\big(|Ux-Uy|/C_{2}\big) &(2.13)\cr &|Ux-Uy|\le C_{2}g^{-1}_{B^*}\big(2C_{2}L \|x-y\|\big)\ ,\quad g_{B}(\epsilon ) = \delta _{B}(\epsilon ) /\epsilon &(2.14)\cr &\|x-y\|\le C_{2}g^{-1}_{B}\big(2C_{2}L|Ux-Uy|\big)\ ,\quad g^{-1}_{B}g_{B}(\epsilon ) = \epsilon . &(2.15)\cr}$$ In (2.10)-(2.15), $\rho _{B}(\tau)$ is the modulus of smoothness and $\delta _{B}(\epsilon )$ the modulus of convexity of the space $B$; the constants $C_{1}$ and $C_{2}$, in general, depend on $\left\Vert{x}\right\Vert$ and $\left\Vert{y}\right\Vert$. However, if $\left\Vert{x}\right\Vert \le R, \left\Vert{y}\right\Vert \le R$, then $$ C_{1} = 8\max \{L,R\},\quad C_{2} = 2\max \{1,R\},\quad 1 < L < 3.18. $$ In this case (2.10) and (2.12) give a quantitative description of a well-known fact: the duality mapping is uniformly continuous on each bounded set in a uniformly smooth Banach space and uniformly monotone on each bounded set in a uniformly convex Banach space. The estimates (2.10)-(2.13) are derived from the parallelogram inequalities in a Banach space [8] $$ 2\|x\|^{2} + 2\|y\|^{2} - \|x+y\|^{2} \le 4\|x-y\|^{2} + C_{1} \rho _{B}(\|x-y\|)$$ $$ 2\|x\|^{2} + 2\|y\|^{2} - \|x+y\|^{2} \ge (4L)^{-1}\delta _{B} (\|x-y\|/C_{2})$$ and (2.14), (2.15) are obtained from (2.12), (2.13) (cf.\ [21]). Let $y$ be an arbitrary fixed point in $B$.
By virtue of the uniform convexity of $V_{1}(x,y)$ we have $$ \|2^{-1}(x+y)\|^{2}-\|x\|^{2} \ge (Ux,y)-(Ux,x) + (2L)^{-1} \delta _{B}\big(\|x-y\|/2C_{2}\big). $$ Now, using the identity $(Ux,x) = \left\Vert{x}\right\Vert^{2}$, the following relation $$ (Ux,y) \le \|2^{-1}(x+y)\|^{2}-(2L)^{-1}\delta _{B}\big(\|x-y\|/2C_{2} \big) $$ is obtained. Since $g_{B}(\epsilon )$ is a nondecreasing function, we have $\delta _{B}(\epsilon /2) \le 2^{-1}\delta _{B}(\epsilon )$. Hence, one can write $$ \eqalign {V(\varphi ,y) &= 2^{-1}\|x\|^{2} - (Ux,y) + 2^{-1}\|y\|^{2} \ge\cr &\ge 2^{-1}\|x\|^{2} + 2^{-1}\|y\|^{2} -\|2^{-1}(x+y)\|^{2}+ (2L)^{-1}\delta _{B}\big(\|x-y\|/2C_{2}\big) \ge\cr &\ge (4L)^{-1}\delta _{B}\big(\|x-y\|/C_{2}\big) + (2L)^{-1}\delta _{B} \big(\|x-y\|/2C_{2}\big) \ge\cr &\ge L^{-1}\delta _{B}\big(\|x-y\|/2C_{2}\big)\ .\cr}$$ On the other hand, by the convexity of $V(\varphi ,y)$, $$ V(\varphi ,y) \le V(Uy,y) + (Ux-Uy,x-y) = (Ux-Uy,x-y) \eqno(2.16)$$ holds. Finally, this implies $$ V(Ux,y) \ge L^{-1}\delta _{B}\big(\|x-y\|/2C_{2}\big) \eqno(2.17)$$ and $$ V(Ux,y) \le 8\|x-y\|^{2} + C_{1}\rho _{B}\big(\|x-y\|\big) = \mu _{B}\big(\|x-y\|\big) . \eqno(2.18)$$ $\left\Vert{x-y}\right\Vert$ and $B$ in the last two inequalities can be replaced by $|Ux-Uy|$ and $B^*$. The functions $\delta _{B}(\epsilon )$ and $\rho _{B}(\tau)$ are strictly increasing, therefore $$ \eqalignno { &\|x-y\| \le 2C_{2}\delta ^{-1}_{B}\big(LV(Ux,y)\big) &(2.19)\cr &\|x-y\|\ge \mu ^{-1}_{B}\big(V(Ux,y)\big) . &(2.20)\cr}$$ There is a connection between the functional $V(Ux,y)$ and the Young-Fenchel transformation. Indeed, let $\phi (x)$ be a functional on $B$. Consider the conjugate functional $\phi^*(x^*)$ in the space $B^*$ determined by $$ \phi^*(x^*) = \sup_x\big((x^*,x)-\phi (x)\big) . $$ If $\phi (x)$ is differentiable and $x^* = \phi'(x)$, then $$ \phi^*(x^*) = (x^*,x)-\phi (x) $$ or, otherwise, if $x^* \neq \phi'(x)$, then $$ \phi^*(x^*) > (x^*,x)-\phi (x).
$$ Consequently, the functional $$ V(x^*,y) = \phi^*(x^*)-(x^*,y)+\phi (y) \ge 0 $$ for all $x^* \in B^*$, $y \in B$, and $V(x^*,y) = 0$ only if $x^* = \phi'(y)$. $\phi^*(x^*)$ is well known to be convex. Let $\phi^*(x^*)$ be differentiable. Then we have, for each fixed $y \in B$, $$ V'_{x^*}(x^*,y) = \phi^{*\prime}(x^*) - y \in B\ . $$ This shows that the functional $V(x^*,y)$ is convex and its point of minimum is determined by the relation $$ \phi^{*\prime}(x^*) = y \ , \eqno(2.21)$$ i.e., $x^* = [\phi^{*\prime}]^{-1}y$ (if $\phi^{*\prime\prime}(x^*) \neq 0)$. By analogy, one can write $$ V'_y(x^*,y) = \phi ^\prime (y) - x^* \in B^* $$ for each fixed $x^* \in B^*$. It gives $$ \phi ^\prime (y) = x^*\ , \eqno(2.22)$$ i.e., $y = [\phi ^\prime ]^{-1}x^*$ (if $\phi ^{\prime\prime}(y) \neq 0)$. Thus, at the minimum point $({\overline x}^*,{\overline y} )$, $$\phi ^\prime \phi^{*\prime}({\overline x}^ *) = {\overline x}^*\ ,\quad \phi^{*\prime} \phi ^\prime ({\overline y} ) = {\overline y} $$ hold. The problem of global minimization of $V(x^*,y)$ in the form $${\overline V} (x^*) \rightarrow\min_{x^*\in B^*}$$ is formulated in [13], where $${\overline V}(x^*) = \min_{y\in B}V(x^*,y)\ . $$ If $\phi^*(x^*)$ is differentiable, then $$ {\overline V}^\prime _{x^*}(x^*) = \phi^{*\prime}(x^*)-[\phi ^\prime ]^{-1}(x^*) $$ and the condition for a minimum of ${\overline V}(x^*)$ is written as $$ \phi^{*\prime}(x^*) = [\phi ^\prime ]^{-1}(x^*)\ . \eqno(2.23)$$ It is easy to see that (2.21), (2.22) and (2.23) coincide. In view of the above, the functional $V(x^*,y)$ can be used as a general Lyapunov functional in Banach spaces. Our functional (2.7) is a particular case of $V(x^*,y)$, for which $$ \phi (y) = 2^{-1}\|y\|^{2}\ ,\quad \phi^*(x^*) = 2^{-1}|Ux|^{2}\ , \quad V'_{x^*}(x^*,y) = x-y\ , $$ $$ V'_y(x^*,y) = Uy-Ux\ , \quad\big(\partial _{y}V(\varphi ,y),\partial _{\varphi }V(\varphi ,y)\big) = -(x-y,Ux-Uy) \le 0 $$ are satisfied. Consider now functions $x(t),y(t): [t_{0},\infty ) \rightarrow B$.
Consider the Lyapunov functional $$ V\big(\varphi (t),y(t)\big) = 2^{-1}\big(|\varphi (t)|^{2}-2( \varphi (t),y(t)) +\|y(t)\|^2\big)\ ,\quad \varphi (t) = Ux(t)\ . \eqno(2.24)$$ \proclaim Lemma 11. Let $x(t)$ and $y(t)$ be strongly continuous functions, and let $y(t)$ and $\varphi (t)$ be weakly differentiable on the interval $[t_{0},\infty )$. Then the functional $(2.24)$ is differentiable and the equality $$ {dV\big(\varphi (t),y(t)\big)\over dt} = \left({d\varphi (t)\over dt}, x(t)-y(t)\right) + \left(Uy(t)-Ux(t),{dy(t)\over dt}\right) \eqno(2.25)$$ holds. \subheading{Proof {\rm (cf.\ [12])}} Since the spaces $B$ and $B^*$ are uniformly smooth, their norms are differentiable and $${\partial V\big(\varphi (t),y(t)\big)\over \partial \varphi } = x(t)-y(t)\ ,\quad {\partial V\big(\varphi (t),y(t)\big)\over \partial y} = Uy(t)-Ux(t)\ . $$ Because of the convexity of $V(Ux,y)$, one has for $t_{0} < s < t$ that $$\eqalignno{& V\big(\varphi (t),y(t)\big)\ge V\big(\varphi (s),y(t)\big) + \left({\partial V\big(\varphi (s),y(t)\big)\over \partial \varphi }, \varphi (t)-\varphi (s)\right)\cr \noalign{\hbox{and}} &V\big(\varphi (s),y(t)\big) \ge V\big(\varphi (s),y(s)\big) + \left({\partial V\big(\varphi (s),y(s)\big)\over \partial y},y(t)-y(s) \right)\cr} $$ are true. Therefore, the inequalities $$\eqalign{ V\big(\varphi (t),y(t)\big) \ge V\big(\varphi (s),y(s)\big)&+ \big(x(s)-y(t),\varphi (t)-\varphi (s)\big)\cr & + \big(Uy(s)-Ux(s),y(t)-y(s)\big)\cr}$$ and $$\eqalign{ V\big(\varphi (s),y(s)\big) \ge V\big(\varphi (t),y(t)\big)&+ \big(x(t)-y(s),\varphi (s)-\varphi (t)\big)\cr &+ \big(Uy(t)-Ux(t),y(s)-y(t)\big)\cr} $$ hold.
Combining them, one obtains $$\eqalignno{& \left(x(t)-y(s),{\varphi (t)-\varphi (s)\over t-s}\right) + \left(Uy(t)-Ux(t),{y(t)-y(s)\over t-s}\right) \ge\cr &\qquad \ge {V\big(\varphi (t),y(t)\big)-V\big(\varphi (s),y(s)\big)\over t-s}\ge&(2.26)\cr &\qquad\ge \left(x(s)-y(t),{\varphi (t)-\varphi (s)\over t-s}\right) + \left(Uy(s)-Ux(s),{y(t)-y(s)\over t-s}\right)\ .\cr} $$ Since $B$ is a uniformly smooth space, $U$ is a continuous mapping. Using the conditions of the lemma, let us pass to the limit in (2.26) as $s \rightarrow t$. We immediately get $$\matrix{\displaystyle \left(x(t)-y(t),{d\varphi (t)\over dt}\right) + \left(Uy(t)-Ux(t),{dy(t)\over dt}\right) \ge {dV(Ux(t),y(t))\over dt} \ge\cr \displaystyle\ge \left(x(t)-y(t),{d\varphi (t)\over dt}\right) + \left(Uy(t)-Ux(t),{dy(t)\over dt}\right)\ .\cr} $$ From this we conclude (2.25), i.e., $$ {dV(\varphi ,y)\over dt} = \left({\partial V\over \partial \varphi },{d\varphi \over dt}\right) + \left({\partial V\over \partial y},{dy\over dt}\right)\ .$$ The lemma is proven. Returning to the evolution differential equation (2.4), we will assume the existence of its weak solution. \subheading{Definition} A function $x(t)$ from $[t_{0},\infty )$ to $B$ is said to be a weak solution of equation (2.4) if $x(t)$ is strongly continuous on $[t_{0},\infty )$, $\varphi (t)$ is weakly differentiable, and for each $f(t)$ given in $B^*$ and $x(t_{0}) = x^{0}$ given in $B$ the equality $$ \left({dUx(t)\over dt},w\right) + \big(A(t)x(t),w\big) = \big(f(t),w \big) $$ is satisfied for all $w \in B$. Let us set in (2.24) $y(t) \equiv {\overline x} $ (cf.\ [11,12]). \proclaim Theorem 1. Assume that conditions $(2.2), (2.5), (2.6)$ are satisfied. If the weak solution $x(t)$ of $(2.4)$ exists, then the relation $$\big\|x(t)-{\overline x} \big\| \rightarrow 0\eqno (2.27)$$ holds as $t \rightarrow \infty $.
\smallskip\noindent{\bf Proof:\quad} In fact, from (2.4) and (2.25) we have $$ {dV\big(\varphi (t),{\overline x}\big )\over dt} = \left({dUx(t)\over dt},x(t) -{\overline x}\right) = - \big(A(t)x(t)-A(t){\overline x},x(t)-{\overline x}\big)$$ $$ + \big(A{\overline x}-A(t){\overline x},x(t)-{\overline x}\big)-\big(A{\overline x}-f(t),x(t)-{\overline x}\big)\ . $$ Utilizing the condition of uniform monotonicity (2.5), one obtains $${dV\big(\varphi (t),{\overline x}\big)\over dt} \le -c(t)\psi _{1} \big(\|x(t)-{\overline x} \|\big) + \big(\omega _{1}(t)g(\|{\overline x}\|)+ \omega _{2}(t)\big)\|x(t)-{\overline x} \|\ . $$ Because of (2.6), the functional $V\big(\varphi (t),{\overline x}\big)$ is bounded (this fact is easily proved by contradiction). Let $V\big(\varphi (t),{\overline x}\big) \le R_{0}$. Then $\big\Vert x(t)\big\Vert \le \| {\overline x} \| + 2\sqrt{R_0}= R$ follows from (2.8). Now we can use (2.20) with $C_{1} = \max \{L,R\}$. We get $$ {dV\big(\varphi (t),{\overline x}\big)\over dt} \le -c(t)\psi \big(V(\varphi (t),{\overline x})\big) + \gamma (t)\ ,\qquad t_{0} \le t < \infty\ ,\quad V\big(\varphi (t_{0}),{\overline x}\big) = V_{0} $$ where $$\gamma (t) = \big(\omega _{1}(t)g(\|{\overline x}\|)+\omega _{2}(t)\big) (R+\|{\overline x} \|) \eqno (2.28) $$ $$ \psi (V) = \psi _{1}\big(\mu ^{-1}_{B}(V)\big)\ ,\quad C_{1} = \max \{L,R\}\ . \eqno(2.29)$$ Denoting $\lambda (t) = V\big(Ux(t),{\overline x}\big)$, we arrive at the differential inequality $$ {d\lambda (t)\over dt} \le -c(t)\psi (\lambda ) + \gamma (t)\ , \quad\lambda (t_{0}) = V_{0}\ ,\quad t \ge t_{0}\ . \eqno(2.30)$$ By Lemma 1, $V\big(Ux(t),{\overline x}\big)\rightarrow 0$ as $t \rightarrow \infty $, because $\gamma (t) \rightarrow 0$ and $c(t)\ge b$. Finally, (2.27) follows from (2.19). The theorem is proven. \proclaim Theorem 2. Assume that $(2.2), (2.5)$ and $(2.6)$ hold.
Then the solution $x(t)$ of the differential equation $(2.4)$ is bounded $\big(\Vert x(t) \Vert \le R\big)$, and \item{\rm(a)} under the conditions of Lemma 2, $x(t)$ tends to ${\overline x}$ and is estimated as $$\big\|x(t)-{\overline x} \big\|\le 2C_{2}\delta ^{-1}_{B}\big(Lu(t,C)\big)$$ for all $t \ge t_{0}$; \item{\rm(b)} under the conditions of Lemma 3, $x(t)$ tends to ${\overline x}$ and either $$\eqalignno{& \big\|x(t)-{\overline x}\big\|\le 2C_{2}\delta ^{-1}_{B}\left(L\left(\psi ^{-1} \left( c_{0}{\gamma (t)\over c(t)}\right)\right)\right)\ , \qquad t \ge t_{0}\cr \noalign{\hbox{or}} &\big\|x(t)-{\overline x}\big\|\le 2C_{2}\delta ^{-1}_{B}\big(Lu(t,C)\big)\ , \quad t_0 \le t<{\overline t}\cr &\big\|x(t)-{\overline x}\big\|\le 2C_{2}\delta ^{-1}_{B}\left(L\left(\psi ^{-1} \left( c_{0}{\gamma (t)\over c(t)}\right)\right)\right)\ , \qquad t \ge {\overline t}\cr}$$ holds. {\sl Here $$ u(t,C) = \phi ^{-1}\bigg[ \phi (C) - a \int^{t}_{t_{0}}c(\tau) d\tau\bigg] $$ $$ C = \max \left\{V_{0},\psi ^{-1}\left( c_{0} {\gamma (t_{0})\over c(t_{0})} \right)\right\}\ ,\qquad C_{2} = \max \{L,R\}\ , $$ the functions $\gamma (t)$ and $\psi (V)$ are determined by $(2.28),(2.29)$, and ${\overline t}$ is the unique root of the scalar equation} $$ \phi ^{-1}\bigg[\phi (C) - a \int^{t}_{t_{0}}c(\tau)d\tau \bigg] = \psi ^{-1}\bigg(c_{0}{\gamma (t)\over c(t)}\bigg)\ . $$ The proof is obtained from inequality (2.30), Lemmas 2 and 3, and estimate (2.19). Since $c(t) \ge b$ and $\psi (\lambda )/\lambda \rightarrow \infty $ as $\lambda \rightarrow \infty $, the estimates of the convergence rate are also determined by Lemmas 5 and 7 with $m = 0$. \proclaim Theorem 3. Assume that conditions $(2.2), (2.5)$ and $(2.6)$ are satisfied.
Then the solution $x(t)$ of the differential equation $$ {dUx(t)\over dt} + \alpha (t)\big(A(t)x(t)-f(t)\big) = 0\ , \qquad x(t_{0}) = x^{0}\ , \qquad t_{0} \le t < \infty \ , \eqno(2.31)$$ where $$ \alpha (t) \rightarrow 0\ ,\qquad \int^{t}_{t_{0}}\alpha (\tau)d\tau \rightarrow \infty \ ,\quad{\rm as }\quad t \rightarrow \infty \ , $$ is bounded $\big(\Vert{x(t)}\Vert \le R\big)$, and the assertions of Theorem 2 hold with $$ u(t,C) = \phi ^{-1}\bigg[\phi (C) - a \int^{t}_{t_{0}} c(\tau)\alpha (\tau)d\tau \bigg] $$ $$ C = \max \left\{V_{0},\psi ^{-1}\left( c_{0} {\gamma (t_{0})\over c(t_{0})} \right)\right\}\ ,\qquad C_{2} = \max \{L,R\} \ . $$ Consider equation (2.1) with a uniformly monotone operator $$ Ax = Fx + \alpha Ux\ ,\qquad \alpha =\mathop{\rm const} > 0\ , $$ where $F$ is a proper monotone operator, i.e., it satisfies $$ (Fx-Fy,x-y) \ge 0 $$ and this condition cannot be strengthened. The corresponding evolution differential equation (2.4) is $$ {dUx(t)\over dt} + Fx(t) + \alpha Ux(t) = 0\ ,\quad x(t_{0}) = x^{0}\ ,\quad t \ge t_{0}\ . \eqno(2.32)$$ An equation of this kind has been studied previously in Hilbert spaces and in Banach spaces $B$ when the operator $A$ is accretive and acts from $B$ to $B$. For the solution $x(t)$ of the equation $$ {dx(t)\over dt} + Fx(t)+\alpha x(t) = 0\ ,\quad x(t_{0}) = x^{0}\ ,\quad t \ge t_{0} $$ the estimate $$\big\|x(t)-{\overline x} \big\|\le \big\|x(t_{0})-{\overline x} \big\| e^{- \alpha (t-t_0)}$$ was obtained, where $F{\overline x}+ \alpha {\overline x} = 0$. We establish a similar estimate for equation (2.32). \proclaim Theorem 4. Suppose that the weak solution $x(t)$ of equation $(2.32)$ exists, $Fx$ is a proper monotone operator, and ${\overline x}$ is the solution of the equation $Fx + \alpha Ux = 0$. Then $x(t)$ approaches ${\overline x}$ strongly as $t \rightarrow \infty $ and $$ \big\|x(t)-{\overline x} \big\|\le 2C_{2}\delta ^{-1}_{B}(LV_{0} e^{-\alpha (t-t_0)})\ .
$$ \smallskip\noindent{\bf Proof:\quad} Using property (2.16), we have $$ \eqalign{{dV\big(Ux(t),{\overline x}\big)\over dt}&= -\big(Fx(t)+\alpha Ux(t), x(t)-{\overline x}\big)\cr &= -\big(Fx(t)-F{\overline x} , x(t)-{\overline x} \big)-\alpha\big (Ux(t)-U{\overline x} ,x(t)-{\overline x}\big )\cr &\le - \alpha V\big(Ux(t),{\overline x}\big)\ .\cr} $$ From here the inequality $$ V\big(Ux(t),{\overline x}\big) \le V(Ux^{0},{\overline x})e^{-\alpha (t-t_0)} = V_{0}e^{-\alpha (t-t_0)} $$ follows, and the final estimate is obtained from (2.19). \hfill$\mathchoice{\sqr34}{\sqr34}{\sqr{2.1}3}{\sqr{1.5}3}$ \medskip Let us set in (2.25) $y(t) \equiv 0.$ We obtain the identity $$ {dV(\varphi (t))\over dt} = {1\over 2} {d\big\|x(t)\big\|^2\over dt} = \left({dUx(t)\over dt},x(t)\right)\ . \eqno(2.33)$$ This allows us to obtain, as a corollary, a statement analogous to [15]. \proclaim Theorem 5. (a) Let the assumptions $\big(A(t)x,x\big) \ge 0$ and $f(t) \in L_{2}(t_{0},T;B^*)$ be satisfied. Then $x(t) \in L_{\infty }(t_{0},\infty ;B)$.\hfill\break \indent (b) If $\big(A(t)x,x\big) \ge \psi \big(\|x\|\big )$ and $f(t) \in L_{1}(t_{0},T;B^*)$, then $x(t) \rightarrow 0$ in $B$ as $t \rightarrow \infty $. From (2.33) we obtain $$\big\| x(t)\big\| ^{2}_{B} \le \big\|x(s)\big\|^{2}_{B} - 2 \int^{t}_{s}\big(A(\tau)x(\tau)-f(\tau),x(\tau)\big)d\tau\ , $$ just as in the Hilbert space case. It is obvious that $$ \eqalignno{&\left({dUx(t)\over dt},x(t)\right) = \left(Ux(t),{dx\over dt}\right)\cr \noalign{\hbox{and}} &\left({d^{2}Ux(t)\over dt^{2}},x(t)\right) = \left( Ux(t), {d^{2}x\over dt^{2}}\right)\ .\cr} $$ \bigskip \sect{3. Evolution Equations with Proper Monotone Operators} Evolution equations $$ {dx(t)\over dt} + Ax(t) = f\ ,\qquad x(t_{0}) = x^{0}\ ,\qquad t \ge t_{0} $$ with proper monotone operators are unstable with respect to perturbations of the operator $A$ and of the right-hand side $f$.
It is known that one cannot prove convergence of the solutions $x(t)$ to the solution ${\overline x}$ of the stationary equation (2.1) in the class of nonlinear operators even for unperturbed equations, all the more so for equations of the kind (2.4). Therefore, in such a situation one has to apply a stabilizing operator. In the case of $A: B \rightarrow B$ being an accretive operator in a Banach space (or a monotone operator in a Hilbert space), the one-parameter operator $\alpha (t)I$ has been used for stabilization, where $\alpha (t) \rightarrow 0$ as $t \rightarrow \infty $ and $I$ is the identity operator. Under these conditions, convergence of solutions of the evolution equation $$ {dx(t)\over dt} + A(t)x(t) + \alpha (t)x(t) = f(t) $$ has been proven in [2,12,14]. In the present section we shall consider the evolution equation $$ {dUx(t)\over dt} + A(t)x(t) + \alpha (t)Ux(t) = f(t)\ ,\qquad x(t_{0}) = x^{0}\ ,\qquad t \ge t_{0}\ . \eqno(3.1)$$ Let us introduce the auxiliary equation $$ Ay(t) + \alpha (t)Uy(t) = f\ . \eqno(3.2)$$ We assume the solution $y(t)$ to be continuous and strongly differentiable for all $t \ge t_{0}$. \proclaim Theorem 6.
Assume the following: \item{\rm (i)} A weak solution of $(3.1)$ exists and is bounded, $\big\Vert{x(t)}\big\Vert \le R_{1}$; \item{\rm (ii)} The operator $A(t)$ tends to the limit operator $A$ as $t \rightarrow \infty $, and $f(t)$ tends to the limit function $f$, with the estimates (2.2); \item{\rm (iii)} The solution ${\overline x}$ of the stationary equation $Ax = f$ exists; \item{\rm (iv)} $\alpha (t)$ is a positive, continuous and differentiable function satisfying the following conditions as $t \rightarrow \infty $: $$ \alpha (t) \rightarrow 0\ ,\quad \int^{t}_{t_{0}}\alpha (\tau)d\tau \rightarrow \infty \ ,\quad {\big| \alpha ^\prime (t)\big| \over \alpha ^{2}(t)} \rightarrow 0\ ,\quad {\omega _{1}(t)+\omega _{2}(t)\over \alpha (t)} \rightarrow 0\ ; $$ \item{\rm (v)} Let also either $\delta (\epsilon ) = O(\epsilon ^{2})$ and $y(t)$ be differentiable; or \item{\rm (vi)} $ \delta (\epsilon ) = O(\epsilon ^{p})$, $p > 2$, and $$ \left({dUy(t)\over dt},{dy\over dt}\right) \ge\left\| {dy\over dt} \right\| \psi \left(\left\Vert{dy\over dt}\right\Vert\right)\ , $$ where the function $\psi (\xi )$ is strictly increasing and continuous for $\xi >0$, $\psi (0)=0$, and the operator $A$ is monotone and strongly differentiable with respect to $x$. Then $$\lim_{t\to\infty}\big\|x(t)-{\overline x} \big\| = 0\ . $$ \smallskip\noindent{\bf Proof:\quad} It is known [3] that $y(t) \rightarrow {\overline x}$ and $\big\|y(t) \big\|\le R_{2}$.
On account of (2.2), (2.12), (3.2), Lemma 11 and the monotonicity of $A(t)$, we have $$\eqalignno{& {dV\big(Ux(t),y(t)\big)\over dt} = \left({d\varphi \over dt}, {dV\over d\varphi }\right) + \left({dV\over dy},{dy\over dt}\right) =& (3.3)\cr &\eqalign{= - \big(A(t)x(t)&-A(t)y(t),x(t)-y(t)\big) - \alpha (t)\big(Ux(t)-Uy(t),x(t)-y(t)\big)-\cr &- \big(Ay(t)+\alpha Uy(t)-f,x(t)-y(t)\big) + \big(f(t)-f,x(t)-y(t)\big)+\cr &+ \big(Ay(t)-A(t)y(t),x(t)-y(t)\big) + \big(Uy(t)-Ux(t),dy(t)/dt\big)\le\cr}\cr &\eqalign{\le - \alpha (t)(2L)^{-1}&\delta _{B}\big(\|x(t)-y(t)\| /C_{2}\big) + \big(\omega _{1}(t)g(R_{2})+\omega _{2}(t)\big)\|x(t)-y(t)\|+\cr &+\big\|Ux(t)-Uy(t)\big\|\big\|dy(t)/dt\big\|\ ,\quad C_{2} = \max \{1,R_{1},R_{2}\}\ .\cr}\cr} $$ Now $\big\| dy(t)/dt\big\|$ has to be estimated. It is shown in [5] that $$ \big\|y(t_{1})-y(t_{2})\big\|\le R_{3}g^{-1}_{B}\bigg({R_{3} \big| \alpha (t_{1})-\alpha (t_{2})\big| \over \alpha (t_{1})}\bigg)\ ,$$ where $R_{3} = 2\max\big \{1,\| {\overline x} \| \big\}$. If $\delta _{B}(\epsilon ) = O(\epsilon ^{2})$ and $y(t)$ is strongly differentiable, then $$ \left\| {dy(t)\over dt}\right\| \le R_{4} {\big| \alpha ^\prime (t)\big| \over \alpha (t)}\ . $$ Further, using (2.20) and (2.14) we obtain $$\eqalign{ {d\over dt}V\big(Ux(t),y(t)\big)&\le - \alpha (t)(2L)^{-1}\delta _{B}\big(\mu ^{-1}_B\big(V(Ux(t),y(t)) \big)/C _2\big)+\cr &\quad+ \big(\omega _{1}(t)g(R_{2})+\omega _{2}(t)\big)(R_{1}+R_{2}) + \mu _{B}(R_{1}+R_{2})| \alpha ^\prime (t)| /\alpha (t)\ .\cr} $$ One can easily see that the function $\psi (\xi ) = \delta _{B}\mu ^{-1}_{B}(\xi )$ is continuous and positive for all $\xi > 0$ and $\psi (0) = 0$. From the conditions of the theorem and from Lemma 1, $$ V\big(Ux(t),y(t)\big) \rightarrow 0\hbox{ as }t \rightarrow \infty\ . $$ Further, if condition (vi) is true, then the identity $$ \left(A'_y{dy\over dt},w\right) + \alpha ^\prime (t)\big(Uy(t),w \big) + \alpha (t)\left({dUy(t)\over dt},w\right) = 0 $$ is also true. 
Considering this identity with $w = dy/dt$ and using the monotonicity of the operator $A$, one obtains $$ \alpha (t)\left({dUy(t)\over dt},{dy\over dt}\right) \le \big| \alpha ^\prime (t)\big| \big\|y(t)\big\|\left\| {dy\over dt}\right\| \ . $$ From here and from (vi), $$ \alpha (t)\psi \left(\left\Vert{dy\over dt}\right\Vert\right) \le \big| \alpha ^\prime (t)\big| R_{2} $$ follows. That means $$ \left\| {dy\over dt}\right\| \le \psi ^{-1}\left(R_{2}{\big| \alpha ^\prime (t)\big| \over \alpha (t)}\right)\ . $$ Now the statement $V\big(Ux(t),y(t)\big) \rightarrow 0$ follows again from Lemma 1. If we apply estimate (2.19), we conclude $\big\Vert{x(t)-y(t)}\big\Vert \rightarrow 0$ as $t \rightarrow \infty $. According to the remark given at the beginning of the proof, we assert $\lim\limits_{t\to\infty}\big\|x(t)-{\overline x}\big\| = 0$. \hfill$ \mathchoice{\sqr34}{\sqr34}{\sqr{2.1}3}{\sqr{1.5}3} $ \subheading{Remark 3} In [19] the estimate $$ {1\over 2}{d^{2}\big\|y(t)\big\|^{2}\over dt^{2}} \ge \left( {d^{2}y\over dt^{2}},Uy \right) + m \left\|{dy\over dt} \right\| ^{2} $$ is obtained under the condition $y^{\prime\prime}(t) \in L^{1}(t_{0},\infty ,B)$ and $$ (Ux-Uy,x-y) \ge m\|x-y\|^{2}\ ,\qquad m > 0\ . $$ This corresponds to $\delta (\epsilon ) = O(\epsilon ^{2})$ if $\big\Vert{x(t)}\big\Vert \le R$, $\big\Vert{y(t)}\big\Vert \le R$. From this we obtain the inequality (cf.\ condition (vi)) $$ \left({dUy(t)\over dt},{dy\over dt}\right) \ge m \left\|{dy\over dt}\right\|^{2}\ . $$ \subheading{Remark 4} If in Theorem 6, $$ \left\| {dy\over dt} \right\|\bigg/ \alpha (t) \rightarrow 0\hbox{ as }t \rightarrow \infty \ , $$ then the assumption about the monotonicity of the operator $A(t)$ and the differentiability of $A$ can be omitted.
In this case (3.3) should be rewritten as $$ \eqalign{&\eqalign{{dV\big(Ux(t),y(t)\big)\over dt} \le& - \big(Ax(t)-Ay(t),x(t)-y(t)\big) - \alpha (t)\big(Ux(t)-Uy(t),x(t)-y(t)\big)-\cr & - \big(Ay(t)+\alpha (t)Uy(t)-f,x(t)-y(t)\big) + \big(f(t)-f,x(t)-y(t)\big)+\cr & + \big(Ax(t)-A(t)x(t),x(t)-y(t)\big) + \big(Uy(t)-Ux(t),dy(t)/dt\big)\le\cr}\cr &\eqalign{ \le - \alpha (t)(2L)^{-1}\delta _{B}&\big(\|x(t)-y(t)\|/C_{2}\big) + \big(\omega _{2}(t)+\omega _{1}(t)g(R_{1})\big)\|x(t)-y(t)\|+\cr &+ \big\| Uy(t)-Ux(t)\big\| \big\| dy(t)/dt\big\|\ .\cr}\cr} $$ \subheading{Remark 5} If $\big\| x(t)\big\|\le R_{1}$ is not known a priori, the theorem remains true, but the proof becomes rather complicated (cf.\ [9]). \subheading{Remark 6} Lemmas 2-10 give estimates of the convergence rate of $\big\Vert x(t)-y(t)\big\Vert$ to zero. \subheading{Remark 7} If equation (2.1) has a solution set $N$, then $x(t)\rightarrow{\overline x}^*$, where $\|{\overline x}^*\| = \min\limits_{{\overline x}\in N}\|{\overline x}\|$ [3]. \subheading{Remark 8} Let us define the integral solution of (3.4) as a continuous function $x(t)$ on $[t_{0},\infty )$ such that $$ V\big(Ux(t),y\big) \le V\big(Ux(s),y\big) + \int^{t}_{s}\big( f(\tau)-Ay,x(\tau)-y\big)d\tau $$ for all $y \in B$, $t_{0} \le s \le t < \infty $, and a given value $x(t_{0}) = x^{0}$. For this solution the above theorems are valid [20]. Thus it has been shown that the stabilization and the stabilization rate of solutions of the evolution equations (3.1), (2.4), (2.31), (2.32) depend not only on the structure and smoothness of the problem's operators, but also on the geometric characteristics of the Banach spaces. \subheading{Remark 9} Trajectories in dual spaces for problems of functional minimization were also considered in [18,9]. \subheading{Remark 10} As approximation methods, one can use the methods described in [6,7].
\bigskip \sect{References} \item{[1]} Ya.I.~Alber, A continuous regularization of linear operator equations in Hilbert spaces, Math.\ Zametki 4 (1968), 503-509 (Russian). \item{[2]} Ya.I.~Alber, The solution by the regularization method of operator equations of the first kind with accretive operators in Banach space, Differential Equations 11 (1975), 2242-2248. \item{[3]} Ya.I.~Alber, The solution of nonlinear equations with monotone operators in Banach space, Siberian Math.\ J.\ 16 (1975), 1-8. \item{[4]} Ya.I.~Alber, Recurrence relations and variational inequalities, Soviet Math.\ Dokl.\ 27 (1983), 511-517. \item{[5]} Ya.I.~Alber, Iterative regularization in Banach spaces, Izvestiya Vuzov.\ Math.\ 30 (1986), 1-8. \item{[6]} Ya.I.~Alber, A.I.~Notik, Geometric properties of Banach spaces and approximate methods for solving nonlinear operator equations, Soviet Math.\ Dokl.\ 29 (1984), 611-615. \item{[7]} Ya.I.~Alber, A.I.~Notik, On minimization of functionals and solution of variational inequalities in Banach spaces, Soviet Math.\ Dokl.\ 34 (1987), 296-300. \item{[8]} Ya.I.~Alber, A.I.~Notik, Parallelogram inequalities in Banach spaces and some properties of a duality mapping, Ukrainian Math.\ J.\ 40 (1988), 650-652. \item{[9]} Ya.I.~Alber, A.I.~Notik, Iterative method for solving unstable variational inequalities on approximately given sets (in preparation). \item{[10]} Ya.I.~Alber, S.V.~Shilman, Recurrent numerical and differential inequalities, III. \break Preprint, NIRFI, No.\ 134, 1980. \item{[11]} H.~Brezis, On some degenerate nonlinear parabolic equations, Proc.\ Symp.\ Pure Math.\ 18, Part I, Amer.\ Math.\ Soc., Providence, 1970, 28-38. \item{[12]} F.E.~Browder, Nonlinear operators and nonlinear equations of evolution in Banach space, Proc.\ Symp.\ Pure Math.\ 18, Part II, Amer.\ Math.\ Soc., Providence, 1970.
\item{[13]} M.~Dietrich, On the convexification procedure for nonconvex and nonsmooth infinite dimensional optimization problems, J.\ Math.\ Anal.\ Appl.\ 161 (1991), 28-34. \item{[14]} M.~Israel, S.~Reich, Asymptotic behaviour of solutions of a nonlinear evolution equation, J.\ Math.\ Anal.\ Appl.\ 83 (1981), 43-53. \item{[15]} J.~Kacur, Stabilization of solutions of abstract parabolic equations, Czechoslovak Math.\ J.\ 30 (1980), 539-555. \item{[16]} V.~Lakshmikantham, S.~Leela, Differential and Integral Inequalities, Theory and Applications, Academic Press, New York and London, 1969, Vol.\ I and II. \item{[17]} J.L.~Lions, Quelques m\'ethodes de r\'esolution des probl\`emes aux limites non lin\'eaires, Dunod, Gauthier-Villars, Paris, 1969. \item{[18]} A.S.~Nemirovskii, D.B.~Yudin, Problem Complexity and Method Efficiency in Optimization, Wiley-Interscience, 1983. \item{[19]} E.I.~Poffald, S.~Reich, An incomplete Cauchy problem, J.\ Math.\ Anal.\ Appl.\ 113 (1986), 514-543. \item{[20]} E.~Schechter, Perturbations of regularizing maximal monotone operators, Israel J.\ of Math.\ 43 (1982), 49-61. \item{[21]} Zong-Ben Xu, G.F.~Roach, Characteristic inequalities of uniformly convex and uniformly smooth Banach spaces, J.\ Math.\ Anal.\ Appl.\ 157 (1991), 189-210. \item{[22]} E.~Zeidler, Nonlinear Functional Analysis and its Applications II/B: Nonlinear Monotone Operators, Springer-Verlag, New York, 1990. \end
\section{Introduction} Visual Object Tracking (VOT) is an important component in many video surveillance applications to localize objects and persons, and possibly regroup their images, for further processing in applications such as scene understanding, action recognition, person re-identification and expression recognition~\cite{Miguel, Dewan}. Some of the challenges faced by VOT in such real-world applications are changes in pose, illumination, occlusion, deformation, motion blur and complex backgrounds. Additionally, in real-world applications, the VOT system must periodically interact with an object detector to initiate new tracks, or to validate and/or update the object template with new detector bounding boxes. The quality of bounding boxes produced using a state-of-the-art CNN-based detector can vary, which has an impact on tracking accuracy. VOT techniques are mainly classified as either generative or discriminative, depending on whether they track by detecting the target or by discriminating the target from the background~\cite{salti2012}. For robust discriminative tracking, adaptive trackers update the target model representation as the object's appearance changes over time. An adaptive tracker should therefore periodically initiate new tracks (e.g., every second) and update its target representations over time as the object appearance changes. \begin{figure}[b] \centering \includegraphics[width=1.0\linewidth]{trackingFinal2} \caption{Illustration of the tracker-detector interaction to construct facial trajectories (set of ROIs captured for the same high quality track) in a video surveillance system. Trajectories can be used for further processing, like spatio-temporal person recognition.} \label{fig:dt} \end{figure} Some techniques have been proposed to combine detection and tracking, and to initiate and drop tracks as target objects respectively appear in and leave the scene.
For example, Fast-DT~\cite{fastdt} relies on detector confidence to drop a track, by efficiently applying the object detector on individual tracker output. Unlike Fast-DT, SORT~\cite{sort},~\cite{MOT_trajectory} focuses mostly on data association and multi-target tracking. Recently, methods such as the deep Siamese-FC network~\cite{bertinetto2016fully, SiamRPN} have been proposed to exploit the expressive power of deep learning in VOT. These SiamFC trackers can effectively learn to represent a target object, but since they do not update the target appearance representations, there is a risk of target drifting due to changes over time in object appearance. These trackers locate the object by finding the maximum-score location in the output heat map. Hence, when the appearance changes abruptly, or the object is occluded or partially leaves the search region, the SiamFC tracker temporarily drifts to a location that has a high response map score. Recently, the DaSiamRPN~\cite{DaSiam} tracker, which builds on the SiamFC tracking technique and incorporates distractor awareness, has produced state-of-the-art results on various tracking benchmarks. Robust tracking can be achieved by combining a deep detector and tracker. CNN-based object detectors~\cite{8310113} currently provide state-of-the-art accuracy in object detection. However, tracks may drift if the bounding boxes provided by these detectors are noisy, due to changes in appearance, background and occlusion. Moreover, given the computational complexity of CNN-based detectors, a key to efficient VOT is the management of detector-tracker interactions. A deep Siamese tracker for real-time video surveillance applications should minimize the number of interactions with the detector for track initiation and update. In the literature (e.g.,~\cite{vot2018, otb}), VOT is typically evaluated by initialising the tracker with an initial ground truth target bounding box.
These bounding boxes are often tightly bound around the object, without much background noise. In some evaluation methods, bounding boxes are generated with random noise to simulate a practical scenario for tracker initialisation. However, such noisy bounding boxes cannot fully mimic a real-world scenario. In this paper, the interaction between deep learning models for detection and tracking is analysed with a proposed adaptive tracker. A change detection mechanism is integrated within this Siamese tracker to detect gradual and abrupt changes in a target's appearance in each frame, based on features extracted by the deep Siamese network. In response to an abrupt change, the tracker triggers the object detector in order to update an evolving set of templates. Given a gradual change, templates stored in memory are applied to the search region, and the resulting response maps are integrated to precisely locate the target. The proposed Siamese tracker allows for real-time adaptation of templates, while avoiding target model corruption. The performance of our adaptive Siamese tracker is compared against baseline Siamese FC trackers, where tracks are initialized and updated with ground truth bounding boxes (ideal object detector) and with the YOLOv3 detector. They are evaluated over several operating conditions on video-surveillance-like cases from the OTB-100~\cite{otb} benchmark, where videos contain persons or vehicles. \begin{figure*}[t!] \centering \includegraphics[width=.7\linewidth]{Adaptive_siam} \caption{Architecture of the adaptive Siamese tracker that integrates appearance change detection to manage detector-tracker interactions.} \label{fig:siam} \end{figure*} \section{Tracking Objects in Video Surveillance} In video surveillance, VOT interacts with an object detector. Fig.~\ref{fig:dt} shows an example of the detector-tracker interactions employed to produce facial trajectories or tracklets.
In this case, the face-head detector initiates a new track, and defines a new target representation or template with an initial bounding box. Then, the tracker generates ROIs in subsequent frames. The tracker performs local search, learns the object's appearance online, adapts to appearance changes, and outputs the object's location. The detector can also be used locally (on search regions) to validate the tracker's output. In a real-time surveillance application, the detector searches globally (over entire frames), so it is often too computationally expensive to call the detector on every frame. In contrast, the tracker searches locally, and is thereby very efficient compared to the detector. The main challenges of VOT in real-time applications are~\cite{fastdt, 6671560}: (1) tracked objects tend to drift with time due to the continuous integration of noise in the target appearance; (2) it is difficult to verify a tracker's state due to the lack of reliability of the tracker's confidence; (3) the appearance of targets changes with time; (4) occlusions are difficult for the tracker to detect, as there is a risk of learning the occlusion as a part of the target. Relying on a tracker that is continuously adapting does not guarantee a high-quality trajectory. Trackers that update their template on every frame have a high probability of drifting. Outlier filtering may be employed in order to detect and remove samples that are notably different from the actual target. The Siamese Fully-Convolutional (SiamFC) tracker~\cite{bertinetto2016fully} uses an AlexNet-based Siamese network for feature extraction. The network takes two inputs -- a target template image $z$ and a search image $x$ -- where $|x| = 2|z|$. The embedding $\varphi$ for $z$ is hence smaller than that of $x$. To localize the object, template features are cross-correlated with the search features to obtain a score map.
The location of the maximum value in the score map gives the location of the object in the search region $x$. The tracker was trained on the ILSVRC~\cite{ILSVRC15} dataset with a logistic loss function. During tracking, the correlation map $f(z, x)$, obtained after cross-correlation of the target template embedding $\varphi(z)$ with the search image embedding $\varphi(x)$, is defined by: \begin{equation} \label{eqn:siam} f(z, x)=\varphi(z) * \varphi(x)+b \end{equation} In this paper, we focus on deep SiamFC trackers due to their robustness and potential for template adaptation. Other variants of the SiamFC tracker, such as SiamRPN~\cite{SiamRPN}, SA-Siam~\cite{SA-Siam}, DaSiamRPN~\cite{DaSiam} and MEMTrack~\cite{MEMtrack}, have outperformed the baseline SiamFC~\cite{bertinetto2016fully} tracker. They propose various improvements to the original SiamFC tracker, such as attention mechanisms and region proposals, to improve overall accuracy and online model adaptation. It is, however, important in video surveillance to benefit from interactions with the object detector, and to adapt to changing appearance. \section{Adaptive Tracking Using Change Detection} In our proposed method, a real-time detector is leveraged to initialise and update a deep Siamese tracker with its object bounding box. In practice, the detector cannot produce a tight bounding box around the object; it is expected to produce bounding boxes that also contain some background context. Noisy bounding boxes may cause the tracker to quickly drift, and hence it is the tracker template that should be updated when the target appearance begins to change or the tracker starts drifting. As shown in Fig~\ref{fig:siam}, integrating change detection into the Siamese tracker makes it possible to manage detector-tracker interactions. \begin{figure}[t!] \centering \includegraphics[width=1.0\linewidth]{ChangeDetectionModule_updated} \caption{Change detection module with a separate Siamese network from the tracker.
The template image is the target image cropped with the initial target bounding box, and the image to be compared is updated with the current frame cropped by the tracker's output bounding box.} \label{fig:change} \end{figure} \paragraph{Target template similarity measure.} As a first step towards track quality measurement, a Siamese network similar to SiamFC is employed to predict the similarity between the current track output and the target appearance. The features extracted by the Siamese convolutional layers ($\psi$) are concatenated, and an FC layer, as shown in Fig.~\ref{fig:change} (Change Detection Module), is trained to predict a similarity score between 0 and 1. The network is trained with positive and negative pairs of object templates from the same video and from a random video. Unlike the SiamFC tracker, the inputs to the similarity measurement network are cropped along the target bounding box alone, without any background context. The feature extraction layer is similar to the one used in the SiamFC tracker, with AlexNet features. The extraction of features for the target template occurs just once, and the result is stored in the database. During tracking, the tracker outputs a bounding box location $Bbox_t$, which is used to crop the tracked object image from the current input video stream. This template is then cropped to the CNN input size. Hence, the problem of having excess background context, as discussed earlier, is avoided. In the rest of the paper, this similarity score is referred to as the track quality measure. \paragraph{Change detection with CUSUM.} Changes in a data stream can be classified as gradual or abrupt, depending on the timing and magnitude of the change. In our case, changes in the target's appearance are detected by the template similarity module described above, and classified as gradual or abrupt. To detect changes, we propose to use the adaptive CUmulative SUM (CUSUM) algorithm~\cite{cusum}.
In Eq.~\ref{eqn:mean}, $y_i$ is the track quality (similarity) measure discussed in the previous section, and in Eq.~\ref{eqn:statistic}, $g_{i}$ is the test statistic. It is initialised to $0$ at the start of tracking and whenever an alarm is raised. A change is detected when $ g_{i} > \beta $, where $\beta$ is a set threshold. With a white-noise input, the test statistic may drift away; hence a small $\nu$ is subtracted in Eq.~\ref{eqn:statistic} to help control the drift. Therefore $\nu$ and $\beta$ are the design parameters. \begin{equation} \label{eqn:mean} \hat{\theta}_{i}=\frac{1}{i-i_{0}} \sum_{j=i_{0}+1}^{i} y_{j} \end{equation} \vspace{-3mm} \begin{equation} \label{eqn:statistic} g_{i}=\max \left(g_{i-1}-(y_{i}-\hat{\theta}_{i-1})-\nu, 0\right) \end{equation} Using CUSUM for change detection is reliable since it measures changes in the similarity score using an evolving statistical model, rather than comparing each raw score to a fixed threshold. It allows measuring tracker performance and recognising events. This differs from using a fixed deterministic similarity threshold, because a suitable threshold depends on the object being tracked, and it is difficult to set a common one. \paragraph{Model update strategy.} As opposed to the original SiamFC tracker, the adaptive tracker updates the model dynamically upon change detection. A list of tracker models is maintained. These models are selected online based on track quality scores, and are cropped from search-region feature maps. The search-region feature maps are larger than the target template feature maps; hence, during tracking, new models are extracted by cropping feature maps from the search region using location information from the tracker output. When a gradual change is detected, i.e., an indication of a drifting model, the target models (features) stored in memory are fetched sequentially and matched against the current frame's search region to obtain score maps.
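As an illustration, the CUSUM update of Eqs.~\ref{eqn:mean} and \ref{eqn:statistic} can be sketched as follows; the drift term $\nu$ and the two thresholds used here are placeholder values chosen for the example, not the parameters used in our experiments.

```python
class CusumDetector:
    """Adaptive CUSUM on a stream of similarity scores y_i in [0, 1].

    Mirrors Eqs. (mean) and (statistic): a running mean theta of the
    scores since the last (re)initialisation, and a test statistic g
    that accumulates drops of y_i below that mean.  The values of nu,
    beta_low and beta_high are placeholder assumptions.
    """

    def __init__(self, nu=0.01, beta_low=0.1, beta_high=0.3):
        self.nu = nu                  # drift-control term
        self.beta_low = beta_low      # gradual-change threshold
        self.beta_high = beta_high    # abrupt-change threshold
        self.reset()

    def reset(self):
        """Re-initialise the statistic, as done at start-up and on alarm."""
        self.g = 0.0       # test statistic g_i
        self.theta = None  # running mean \hat{theta}_{i-1}
        self.n = 0         # number of scores seen since reset

    def update(self, y):
        """Consume one similarity score; return 'abrupt', 'gradual' or None."""
        if self.theta is not None:
            # g_i = max(g_{i-1} - (y_i - theta_{i-1}) - nu, 0)
            self.g = max(self.g - (y - self.theta) - self.nu, 0.0)
        # incremental form of the running mean \hat{theta}_i
        self.n += 1
        self.theta = y if self.theta is None else self.theta + (y - self.theta) / self.n
        if self.g > self.beta_high:
            self.reset()
            return "abrupt"
        if self.g > self.beta_low:
            return "gradual"
        return None
```

On a stream of similarity scores, the statistic stays near zero while the appearance is stable, accumulates when the score falls below its running mean, and is re-initialised after each alarm.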
The final object location is derived from the summation of the score maps, as shown in Algorithm~\ref{Alg:gradual}. \vspace{5mm} \begin{algorithm}[t] \label{Alg:gradual} \KwData{$C$ Memory, search region $ \varphi(x)$} \KwResult{$C_{updated} \leftarrow$ final adapted model} $integratedScore\leftarrow 0$\; $index\leftarrow 0$\; \While{ $index <$ length($C$)}{ set current target model as $C[index]$\; $f(C[index], x)\leftarrow\varphi(C[index]) * \varphi(x)+b$, track current frame $f_i$\; $integratedScore$ = $integratedScore$ + $f(C[index], x)$\; $index=index+1$\; } $BBox_{i} \leftarrow$ get refined bounding box by argmax $integratedScore$\; $C_{updated} \leftarrow$ Crop current template from search region using $BBox_{i}$\; $\varphi_{best} \leftarrow \varphi(C_{updated})$\; \vspace{2mm} \caption{Model update for gradual change.} \end{algorithm} Abrupt changes are often caused by occlusion, fast-moving objects and, in the worst case, even a complete loss of track. In these cases, it is helpful to reset the search region location with the object detector. Since the change is abrupt, the target template must be completely re-initialised by the object detector. \paragraph{Object tracking.} Algorithm~\ref{Alg:full system} shows the full adaptive Siamese tracking system working with the change detection module. In the first frame, the tracker is initialised by the object detector. The similarity measurement network is initialised by a cropped target template. During tracking, the change detection module, which includes similarity measurement and CUSUM, continuously predicts scores and monitors them for changes. We apply lower and higher thresholds to CUSUM (see Fig.~\ref{fig:change}). The lower threshold detects a gradual change, and the higher threshold detects an abrupt change. In the case of a gradual change (see Algorithm~\ref{Alg:gradual}), the tracker model is adapted.
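As a sketch of the score-map integration in Algorithm~\ref{Alg:gradual}: a naive single-channel NumPy cross-correlation stands in for the learned embedding correlation $f(z,x)$ of Eq.~\ref{eqn:siam}, which in the actual tracker operates on multi-channel CNN feature maps.

```python
import numpy as np

def xcorr2d(template, search):
    """Naive 'valid' cross-correlation of a 2-D template over a search map.

    Stand-in for the embedding correlation f(z, x) = phi(z) * phi(x) + b;
    the actual tracker correlates multi-channel deep feature maps.
    """
    th, tw = template.shape
    sh, sw = search.shape
    out = np.empty((sh - th + 1, sw - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(template * search[i:i + th, j:j + tw])
    return out

def locate_with_memory(templates, search):
    """Sum the score maps of all stored templates (the memory C) and
    return the argmax of the integrated map as the refined location."""
    integrated = sum(xcorr2d(t, search) for t in templates)
    return np.unravel_index(np.argmax(integrated), integrated.shape)
```

Summing the maps before taking the argmax lets templates from different moments of the track vote on the object location, which is what makes the gradual update robust to a single drifting model.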
In the case of an abrupt change, indicating drift or a drastic change in the target's appearance, the object detector is triggered to detect the object and produce bounding boxes to re-initialise the tracking. Algorithm~\ref{Alg:gradual} is called to update the memory and adapt the target features. Adaptation based on gradual changes is helpful to correct tracking that has just begun drifting, or to learn features from the latest frames. \begin{algorithm}[h!] \label{Alg:full system} \KwData{Image stream $I$, Initial target position $BBox_1$ } \KwResult{Estimated target position $BBox_i$} $i\leftarrow 0$\; $scoreMap \leftarrow 0$\; $C \leftarrow 0$ \; Crop target Image $z$ from $I_0$ and $BBox_0$\; Extract target features $\varphi(z)$\; $\varphi_{best} \leftarrow \varphi(z)$\; \While{ $i < length(I)$}{ $i\leftarrow i+1$\; Extract search region $x_i$ from $I_{i}$ and $BBox_{i-1}$\; Extract search embedding $\varphi(x_i)$ from $x_i$\; $f(z, x)=\varphi_{best} * \varphi(x)+b$, track current frame $I_i$\; $scoreMap = f(z, x)$\; estimate track quality measure $y_i$ \; \If{$y_i > \alpha$}{ Extract $\varphi(z_i)$ from $\varphi(x_i)$ \; Add $\varphi(z_i)$ to memory $C_i$\; } Apply CUSUM on $y_i$ and estimate $g_i$\; \If{$g_i > \beta_{low}$}{ Report gradual change\; $C_{updated} \leftarrow $ from Algorithm~\ref{Alg:gradual}\; $\varphi_{adapted} \leftarrow C_{updated}$, Update target model\; } \If{$g_i > \beta_{high}$}{ Report abrupt change detected\; $BBox_{detector} \leftarrow$ from Object detector\; $\varphi_{detector} \leftarrow$ get embedding from $BBox_{detector}$ and image $I_i$\; $\varphi_{temp}\leftarrow \varphi_{detector}$, reset tracking\; $C \leftarrow \varphi_{temp}$, update memory\; $\varphi_{best} \leftarrow$ from Algorithm~\ref{Alg:gradual}\; } \If{$length(C) > Budget$}{ delete $C[1]$, remove the oldest entry but keep the first\; } $f(z, x)=\varphi_{best} * \varphi(x)+b$\, $scoreMap = f(z, x)$\; $BBox_{i} \leftarrow scoreMap$, get final tracker bounding box } \caption{Algorithm for the overall
Adaptive Siamese tracking change detection and model update} \end{algorithm} \section{Experimental Methodology} Proposed and baseline trackers have been evaluated on a subset of videos of OTB-100~\cite{otb} dataset, each one containing person and vehicle being tracked in a video surveillance application. \begin{table*}[h] \begin{center} \scalebox{0.64} { \begin{tabular}{|l||r|r|r|r||r|r|r|r||r|r|r|r|} \hline & \multicolumn{12}{|c|}{\bf{Average Overlap (\%)}} \\ \bf{Video} & \multicolumn{4}{|c||}{\bf{w/o periodic update}} & \multicolumn{4}{|c||}{\bf{w/ periodic update}} & \multicolumn{4}{|c|}{\bf{w/ periodic update}} \\ & \multicolumn{4}{|c||}{} & \multicolumn{4}{|c||}{\bf{(every 30 frames)}} & \multicolumn{4}{|c|}{\bf{(every 60 frames)}} \\ & & & Adaptive &Adaptive & & & Adaptive&Adaptive& & & Adaptive & Adaptive \\ & SiamFC& DaSiam & Siamese &DaSiam & SiamFC &DaSiam & Siamese& DaSiam &SiamFC &DaSiam & Siamese & DaSiam \\ \hline \hline Basketball & 37.95 & 61.27 & 40.27 & 77.49 & 66.19 &71.24 & 41.76& 77.36 &65.30 &78.16 & 69.12 & 73.24 \\ \hline BlurBody & 66.72 & 78.28 & 67.01 & 78.30 & 69.64 &78.12 & 67.12& 67.12 &84.59 &74.67 & 82.11 & 78.11 \\ \hline BlurCar1 & 84.39 & 84.20 & 78.61 & 84.22 & 85.26 &84.15 & 86.73& 84.22 &80.28 &83.50 & 84.35 & 84.15 \\ \hline BlurCar2 & 84.72 & 83.83 & 84.66 & 83.86 & 85.47 &83.83 & 87.35& 84.03 &81.43 &84.03 & 83.12 & 83.83 \\ \hline BlurCar3 & 82.89 & 81.73 & 80.52 & 81.79 & 83.40 &85.56 & 88.01& 83.86 &85.73 &83.51 & 82.15 & 85.56 \\ \hline BlurCar4 & 84.76 & 85.89 & 85.03 & 86.05 & 88.12 &85.99 & 89.15& 85.99 &70.92 &84.48 & 76.72 & 85.99 \\ \hline Bolt & 1.17 & 69.68 & 0.64 & 69.64 & 29.95 &77.71 & 80.57& 80.57 &40.68 &76.17 & 44.17 & 77.99 \\ \hline Bolt2 & 39.77 & 40.15 & 64.14 & 40.47 & 69.25 &69.19 & 84.07& 84.07 &80.43 &65.15 & 87.56 & 77.16 \\ \hline Car1 & 78.99 & 73.18 & 51.11 & 72.32 & 83.25 &77.27 & 79.69& 79.69 &75.18 &76.35 & 76.11 & 83.88 \\ \hline Car2 & 88.81 & 82.97 & 88.96 & 84.27 & 88.82 &83.36 & 88.55& 
88.55 &78.10 &83.97 & 79.63 & 85.92 \\ \hline Car24 & 84.63 & 85.18 & 77.56 & 85.21 & 87.18 &85.23 & 88.30& 88.30 &77.66 &79.52 & 77.45 & 83.08 \\ \hline Car4 & 45.32 & 83.65 & 75.38 & 83.92 & 78.95 &83.72 & 86.64& 86.64 &55.67 &82.91 & 57.31 & 78.21 \\ \hline CarDark & 80.96 & 70.27 & 80.55 & 77.03 & 82.96 &77.08 & 82.47& 82.47 &66.53 &75.46 & 68.43 & 77.51 \\ \hline CarScale & 65.95 & 72.50 & 75.68 & 75.64 & 70.80 &77.39 & 74.77& 74.77 &66.74 &75.62 & 65.12 & 67.13 \\ \hline Couple & 71.49 & 60.49 & 72.25 & 59.74 & 74.46 &59.16 & 76.87& 76.87 &74.82 &56.35 & 77.91 & 71.43 \\ \hline Crowds & 63.74 & 63.74 & 7.64 & 67.06 &64.42 &66.63 & 76.64& 76.64 &71.67 &67.36 & 73.83 & 70.45 \\ \hline David3 & 50.68 & 70.45 & 70.70 & 70.70 & 72.36 &70.45 & 66.04& 66.04 &58.00 &69.14 & 57.25 & 56.30 \\ \hline Diving & 10.80 & 53.64 & 23.29 & 57.09 & 23.35 &56.89 & 35.01& 35.01 &65.30 &56.13 & 67.15 & 71.75 \\ \hline Girl2 & 64.26 & 64.48 & 61.32 & 61.28 & 69.95 &64.46 & 75.24& 75.24 &67.66 &61.76 & 66.87 & 76.48 \\ \hline Human2 & 75.25 & 76.58 & 70.32 & 76.68 & 79.10 &76.48 & 77.44& 77.44 &69.01 &74.74 & 74.78 & 76.08 \\ \hline Human3 & 1.41 & 69.60 & 20.60 & 71.79 & 59.44 &76.08 & 62.46& 62.46 &69.59 &75.81 & 77.66 & 76.63 \\ \hline Human4 & 33.45 & 36.63 & 32.58 & 37.48 & 56.79 &36.63 & 61.70& 61.70 &76.45 &41.44 & 62.37 & 56.46 \\ \hline Human5 & 51.33 & 78.36 & 75.34 & 79.17 & 77.88 &78.29 & 82.57& 82.57 &2.41 &78.78 & 73.47 & 80.68 \\ \hline Human6 & 74.93 & 73.77 & 74.02 & 75.81 & 77.65 &73.85 & 75.56& 75.56 &74.31 &71.71 & 66.85 & 73.48 \\ \hline Human7 & 74.77 & 74.43 & 75.11 & 75.94 & 79.60 &74.97 & 81.33& 81.33 &64.09 &33.77 & 77.74 & 69.08 \\ \hline Human8 & 6.00 & 70.12 & 76.75 & 72.18 & 41.93 &69.84 & 76.75& 76.75 &78.38 &67.43 & 74.92 & 69.84 \\\hline Human9 & 69.94 & 71.25 & 69.01 & 72.48 & 73.77 &71.53 & 81.95& 81.95 &70.61 &82.14 & 73.51 & 71.53 \\ \hline Jogging-1 & 69.78 & 58.75 & 68.89 & 74.34 & 71.22 &58.96 & 78.12& 78.12 &73.50 &71.26 & 48.86 & 68.96 \\ 
\hline Jump & 23.55 & 49.05 & 13.08 & 49.81 & 25.16 &49.03 & 40.64& 40.64 &31.38 &51.60 & 76.32 & 49.03 \\ \hline RedTeam & 68.58 & 74.71 & 65.96 & 74.17 & 74.93 &74.45 & 74.15& 74.15 &69.62 &40.06 & 82.44 & 79.45 \\ \hline Singer1 & 77.13 & 71.97 & 77.52 & 72.59 & 79.30 &71.87 & 83.48& 83.48 &79.01 &77.98 & 65.94 & 77.87 \\ \hline Singer2 & 3.59 & 30.20 & 30.45 & 59.32 & 63.21 &38.16 & 79.32& 79.32 &65.94 &37.87 & 79.71 & 55.16 \\ \hline Skater & 64.38 & 70.69 & 64.05 & 71.55 & 60.54 &70.69 & 62.89& 62.89 &76.22 &60.26 & 61.85 & 70.69 \\ \hline Skater2 & 59.21 & 70.14 & 60.08 & 72.28 & 61.49 &70.11 & 66.10& 66.10 &67.19 &68.47 & 64.18 & 70.11 \\ \hline Skating1 & 23.72 & 68.71 & 36.87 & 64.09 & 39.41 &68.38 & 42.18& 42.18 &80.65 &65.38 & 39.15 & 68.38 \\ \hline Subway & 17.47 & 35.14 & 18.49 & 37.64 & 66.95 &63.91 & 68.77& 68.77 &48.58 &61.81 & 66.71 & 63.91 \\ \hline Suv & 64.55 & 66.38 & 64.65 & 66.71 & 77.51 &46.68 & 83.16& 83.16 &79.64 &74.78 & 65.92 & 76.68 \\ \hline Walking & 75.49 & 69.95 & 71.68 & 71.12 & 73.92 &69.94 & 74.93& 74.93 &28.19 &71.93 & 75.69 & 73.94 \\ \hline Walking2 & 49.36 & 26.90 & 33.65 & 29.98 & 79.20 &77.46 & 77.49& 77.49 &65.95 &77.57 & 71.66 & 77.46 \\ \hline Woman & 13.10 & 56.93 & 43.02 & 59.83 & 51.80 &64.18 & 70.74& 70.74 &67.11 &63.45 & 70.19 & 64.18 \\\hline\hline \bf{Average} & 54.32\ &66.59 & 57.93 & 69.02 & 67.13 &70.75 & 73.96& 75.94 &68.92 &69.06 & 71.18 & 73.73 \\ \hline \end{tabular} } \end{center} \caption{Average overlap of the original SiamFC, DaSiamRPN(DaSiam) and proposed Adaptive Siamese trackers on OTB-100 subset videos. Results are shown with and without periodic update (every 30 and 60 frames) of the tracker template using ground truth. The templates have been initialised on the first frame using the ground truth bounding box. 
In the periodic cases, ground truth bounding boxes of the tracked object are also used to update the template.} \label{tab:gt_init} \end{table*} For the trackers, the original training and implementation of SiamFC~\cite{bertinetto2016fully} and DaSiamRPN~\cite{DaSiam} were reproduced. The track quality measurement network was trained on the ILSVRC2015~\cite{ILSVRC15} dataset, as for SiamFC. It was trained as a similarity measurement network that compares the tracked object image with the template image that initialised the tracker. Each training batch consisted of 8 pairs of positive and negative samples. Positive samples were generated as in SiamFC, and negative samples were either images of other objects or background patches around the target. The network was trained with a logistic loss function. A YOLOv3~\cite{yolov3} object detector trained on the COCO dataset was used to initialise the trackers. Two sets of experiments were performed on the Siamese trackers: one initialised from the ground truth bounding box, and the other from the object detector's bounding box. In a typical surveillance system, as shown in Fig.~\ref{fig:dt}, the object detector would be called every $N$ frames to discover possible appearances of new objects and begin tracking them. At the same time, the detector output would contain objects corresponding to existing tracks, and these detections would be used to update the tracks. We call this a periodic update. Common strategies to associate detector bounding boxes with existing track bounding boxes include Intersection Over Union (IOU) matching and motion models. Hence, in our implementation, we propose to use IOU together with a constant-velocity motion model and a Kalman filter, similar to~\cite{sort}, both to update a tracker with detection bounding boxes and to search for an object with the detector.
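The IOU-based association between detector and track bounding boxes can be sketched as follows (a minimal, hypothetical illustration: the function names, the greedy matching and the $0.3$ threshold are our own choices, and the Kalman-filter motion model of~\cite{sort} is omitted for brevity):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def associate(track_boxes, det_boxes, iou_thresh=0.3):
    """Greedily match each track to the unused detection with highest IOU
    above a (hypothetical) threshold; returns (track_idx, det_idx) pairs."""
    matches, used = [], set()
    for t_idx, t_box in enumerate(track_boxes):
        best, best_iou = None, iou_thresh
        for d_idx, d_box in enumerate(det_boxes):
            if d_idx in used:
                continue
            score = iou(t_box, d_box)
            if score > best_iou:
                best, best_iou = d_idx, score
        if best is not None:
            used.add(best)
            matches.append((t_idx, best))
    return matches
```

Matched detections would then be used to update the corresponding track templates, while unmatched detections may start new tracks.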
\begin{table*}[t] \begin{center} \scalebox{0.65} { \begin{tabular}{|l||c||r|r|r|r||r|r|r|r||r|r|r|r|} \hline & \multicolumn{13}{|c|}{\bf{Average Overlap (\%)}} \\ \bf{Video} &{\bf{Detector}} & \multicolumn{4}{|c||}{\bf{w/o periodic update}} & \multicolumn{4}{|c||}{\bf{w/ periodic update}} & \multicolumn{4}{|c|}{\bf{w/ periodic update}} \\ &{\bf{Performance}} & \multicolumn{4}{|c||}{} & \multicolumn{4}{|c||}{\bf{(every 30 frames)}} & \multicolumn{4}{|c|}{\bf{(every 60 frames)}} \\ & & & & Adaptive &Adaptive & & & Adaptive&Adaptive& & & Adaptive & Adaptive \\ & & SiamFC & DaSiam & Siamese & DaSiam & SiamFC & DaSiam & Siamese & DaSiam & SiamFC & DaSiam & Siamese & DaSiam \\ \hline \hline Basketball & 30.08 & 1.47 & 21.11 & 8.98 & 26.46 & 2.27 & 26.49 & 3.76 & 32.46 & 1.34 & 14.63 & 2.12 & 16.53 \\ \hline BlurBody & 76.43 & 74.47 & 72.18 & 77.95 & 77.41 & 72.16 & 76.52 & 75.30 & 79.41 & 74.29 & 77.90 & 79.62 & 76.15 \\ \hline BlurCar1 & 85.94 & 81.01 & 82.41 & 78.59 & 82.98 & 77.76 & 83.46 & 79.52 & 85.59 & 78.47 & 79.38 & 74.83 & 84.15 \\ \hline BlurCar2 & 34.69 & 36.12 & 41.55 & 82.31 & 56.86 & 70.20 & 63.86 & 40.63 & 64.09 & 78.76 & 59.34 & 59.25 & 62.78 \\ \hline BlurCar3 & 21.79 & 26.15 & 31.76 & 76.12 & 37.43 & 71.96 & 38.43 & 25.69 & 41.26 & 70.07 & 35.82 & 34.74 & 37.14 \\ \hline BlurCar4 & 89.41 & 82.21 & 81.53 & 81.70 & 82.55 & 81.42 & 82.78 & 82.00 & 84.00 & 80.47 & 82.00 & 83.01 & 83.29 \\ \hline Bolt & 60.78 & 1.04 & 18.33 & 1.04 & 22.78 & 8.90 & 24.12 & 18.09 & 27.14 & 1.04 & 16.06 & 12.95 & 12.76 \\ \hline Bolt2 & 62.18 & 37.52 & 37.21 & 56.04 & 39.95 & 48.40 & 39.95 & 37.11 & 42.02 & 39.11 & 27.85 & 42.86 & 38.13 \\ \hline Car1 & 87.25 & 71.25 & 72.44 & 73.78 & 73.21 & 72.33 & 73.21 & 77.72 & 74.41 & 72.75 & 65.58 & 80.08 & 71.17 \\ \hline Car2 & 70.94 & 70.49 & 74.92 & 81.49 & 78.34 & 79.99 & 78.34 & 72.38 & 78.26 & 77.75 & 72.26 & 80.08 & 75.21 \\ \hline Car24 & 75.86 & 48.98 & 53.83 & 84.57 & 58.47 & 86.79 & 67.47 & 42.58 & 68.91 & 84.81 & 42.03 & 
79.18 & 63.4 \\ \hline Car4 & 92.63 & 63.48 & 67.65 & 83.00 & 71.26 & 81.73 & 71.26 & 73.78 & 73.78 & 80.44 & 73.78 & 76.18 & 68.21 \\ \hline CarDark & 28.91 & 17.11 & 32.12 & 1.72 & 22.46 & 29.18 & 22.46 & 26.72 & 23.60 & 27.35 & 17.97 & 11.62 & 12.38 \\ \hline CarScale & 92.10 & 63.54 & 61.66 & 69.14 & 66.64 & 68.45 & 69.64 & 62.70 & 74.63 & 69.26 & 66.67 & 69.85 & 67.78 \\ \hline Couple & 47.69 & 53.20 & 54.57 & 36.20 & 59.74 & 54.69 & 59.74 & 57.35 & 61.58 & 54.54 & 52.75 & 57.66 & 58.17 \\ \hline Crowds & 33.12 & 1.33 & 12.53 & 1.33 & 15.83 & 5.48 & 46.83 & 54.64 & 53.39 & 1.33 & 23.29 & 54.51 & 51.23 \\ \hline David3 & 2.78 & 0.45 & 7.35 & 0.62 & 17.67 & 2.28 & 17.67 & 6.16 & 19.67 & 0.63 & 19.74 & 4.02 & 23.31 \\ \hline Diving & 72.05 & 8.44 & 14.57 & 15.05 & 18.72 & 15.41 & 28.72 & 19.25 & 15.80 & 12.91 & 15.60 & 13.13 & 12.27 \\ \hline Girl2 & 41.11 & 0.07 & 16.64 & 31.86 & 44.16 & 21.07 & 44.16 & 20.79 & 45.35 & 23.49 & 27.68 & 20.13 & 41.76 \\ \hline Human2 & 84.20 & 72.96 & 71.42 & 22.77 & 74.63 & 45.06 & 74.63 & 47.22 & 75.17 & 43.11 & 33.40 & 42.57 & 71.44 \\ \hline Human3 & 12.46 & 0.09 & 4.12 & 17.99 & 16.75 & 15.73 & 22.75 & 41.45 & 23.47 & 0.08 & 37.19 & 38.59 & 22.72 \\ \hline Human4 & 33.13 & 1.09 & 5.25 & 11.58 & 11.37 & 26.85 & 11.37 & 31.27 & 14.29 & 8.52 & 27.84 & 4.62 & 12.29 \\ \hline Human5 & 56.42 & 0.14 & 3.66 & 0.14 & 7.62 & 3.69 & 7.62 & 58.17 & 16.00 & 0.14 & 41.19 & 55.62 & 15.78 \\ \hline Human6 & 71.84 & 1.49 & 12.59 & 43.67 & 18.31 & 57.94 & 18.31 & 51.82 & 27.72 & 39.72 & 49.57 & 17.83 & 22.16 \\ \hline Human7 & 88.21 & 76.56 & 72.17 & 81.10 & 73.72 & 73.97 & 73.72 & 21.96 & 74.62 & 75.25 & 76.64 & 78.02 & 71.41 \\ \hline Human8 & 89.48 & 5.51 & 7.46 & 46.56 & 11.39 & 20.91 & 44.39 & 77.64 & 45.81 & 24.52 & 77.09 & 75.49 & 44.79 \\ \hline Human9 & 48.13 & 40.38 & 32.75 & 79.52 & 45.35 & 45.39 & 69.35 & 43.22 & 71.52 & 73.99 & 67.63 & 45.73 & 64.11 \\ \hline Jogging-1 & 11.85 & 12.86 & 22.93 & 57.98 & 66.11 & 21.77 & 62.17 & 
42.90 & 62.95 & 47.87 & 39.77 & 41.52 & 59.37 \\ \hline Jump & 20.07 & 2.62 & 1.44 & 2.35 & 15.54 & 15.57 & 14.62 & 32.78 & 15.92 & 2.35 & 25.87 & 28.66 & 14.26 \\ \hline RedTeam & 73.82 & 66.61 & 71.31 & 43.87 & 71.76 & 41.10 & 74.79 & 28.58 & 75.38 & 40.67 & 29.17 & 52.46 & 73.32 \\ \hline Singer1 & 65.39 & 2.06 & 17.72 & 48.49 & 25.16 & 52.03 & 42.16 & 66.33 & 43.71 & 0.35 & 67.11 & 1.08 & 33.23 \\ \hline Singer2 & 29.20 & 1.59 & 17.31 & 33.43 & 28.11 & 11.92 & 37.11 & 21.23 & 38.98 & 10.71 & 32.18 & 29.90 & 33.47 \\ \hline Skater & 44.39 & 49.70 & 61.74 & 60.21 & 63.83 & 61.12 & 67.83 & 49.50 & 68.35 & 62.84 & 18.14 & 51.16 & 55.22 \\ \hline Skater2 & 59.66 & 58.66 & 61.38 & 65.29 & 65.28 & 25.05 & 44.28 & 50.30 & 46.30 & 61.06 & 50.30 & 54.59 & 37.35 \\ \hline Skating1 & 6.12 & 2.20 & 5.73 & 44.59 & 7.84 & 49.76 & 39.84 & 58.21 & 41.71 & 18.45 & 54.82 & 21.43 & 40.28 \\ \hline Subway & 36.45 & 8.90 & 11.25 & 10.29 & 18.27 & 14.99 & 18.27 & 5.05 & 19.06 & 8.90 & 10.05 & 10.28 & 14.71 \\ \hline Suv & 71.85 & 59.92 & 61.36 & 38.78 & 64.86 & 60.46 & 58.11 & 59.43 & 57.86 & 39.41 & 53.03 & 58.75 & 46.38 \\ \hline Walking & 24.17 & 0.24 & 3.11 & 3.44 & 8.12 & 1.22 & 28.17 & 10.24 & 30.25 & 0.60 & 16.24 & 0.24 & 31.72 \\ \hline Walking2 & 39.26 & 1.39 & 11.9 & 22.55 & 19.94 & 21.83 & 13.56 & 12.44 & 18.45 & 19.23 & 7.06 & 0.70 & 16.66 \\ \hline Woman & 55.84 & 54.58 & 50.33 & 62.39 & 56.53 & 47.32 & 53.53 & 56.00 & 58.47 & 62.43 & 55.80 & 57.98 & 62.36 \\ \hline \hline \bf{Average}&- & 31.45 & 36.53 & 43.46 & 43.09 & 41.61 & 47.30 & 43.55 & 49.28 & 39.23 & 43.51 & 42.58 & 44.97 \\ \hline \end{tabular} } \end{center} \caption{Average overlap of the original SiamFC, DaSiamRPN (DaSiam) and proposed Adaptive Siamese trackers on OTB-100 subset videos. Results are shown with and without periodic update of the tracker template using object detector. 
The templates have been initialised on the first frame using the YOLOv3 object detector, as in a real-world video surveillance scenario (instead of the ground truth, unlike in OTB evaluations and in Tab.~\ref{tab:gt_init}).} \label{tab:detector_init} \end{table*} \begin{figure*}[h] \centering \begin{tabular}{ll} \includegraphics[scale=0.3]{AdpSiam_car2} & \includegraphics[scale=0.3]{AdpSiam_skating2} \end{tabular} \caption{\textbf{Left}: Accuracy versus update rate for different update strategies of the tracker template on the car2 video from the OTB-100 dataset, where the detector initialisation has a good IOU w.r.t.\ the ground truth (70\%). \textbf{Right}: same as left, but for the Skating2 video from the OTB-100 dataset.} \label{Fig:analysis} \end{figure*} The measures used to evaluate performance are: (1) the OTB benchmark measure (percentage of frames with IOU $>$ 0.5), and (2) the average overlap $\overline{\phi}$ between ground truth and tracked bounding boxes over all $N$ frames of a video sequence: \begin{equation} \label{eqn:IOU} \overline{\phi} = \frac{1}{N} \sum_{t} \phi_{t} = \frac{1}{N} \sum_{t} \frac{|R_{t}^{G} \cap R_{t}^{T}|}{|R_{t}^{G} \cup R_{t}^{T}|} \end{equation} where $R_{t}^{G}$ and $R_{t}^{T}$ are the ground truth and tracked bounding box regions, and $t = 1, 2, ..., N$ indexes the frames of a video. \section{Results and Discussion} Two sets of experiments were performed -- in the first case the trackers interact with an ideal detector (ground truth bounding boxes of the object), and in the second case with a real-world YOLOv3~\cite{yolov3} object detector. The corresponding results are shown in Tabs.~\ref{tab:gt_init} and~\ref{tab:detector_init}. Column 2 of Tab.~\ref{tab:detector_init} shows the IOU between the object detector's bounding box and the ground truth bounding box of the object in the first frame.
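The two evaluation measures defined above can be sketched as follows (a minimal illustration; the function names and the axis-aligned $(x_1, y_1, x_2, y_2)$ box format are our own choices):

```python
import numpy as np


def average_overlap(gt_boxes, tracked_boxes):
    """Average overlap: mean IOU between ground truth and tracked boxes
    over the N frames of a sequence. Boxes are (x1, y1, x2, y2)."""
    scores = []
    for g, t in zip(gt_boxes, tracked_boxes):
        ix1, iy1 = max(g[0], t[0]), max(g[1], t[1])
        ix2, iy2 = min(g[2], t[2]), min(g[3], t[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((g[2] - g[0]) * (g[3] - g[1])
                 + (t[2] - t[0]) * (t[3] - t[1]) - inter)
        scores.append(inter / union if union > 0 else 0.0)
    return float(np.mean(scores))


def otb_success(gt_boxes, tracked_boxes, thresh=0.5):
    """OTB benchmark measure: percentage of frames with IOU > thresh."""
    ious = [average_overlap([g], [t]) for g, t in zip(gt_boxes, tracked_boxes)]
    return 100.0 * sum(i > thresh for i in ious) / len(ious)
```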
From the tables, it can be observed that videos with a poor detector IOU (with respect to the ground truth) at initialisation have an overall lower performance compared to the videos initialised from the ground truth. The VOT performance of deep Siamese trackers declines significantly when the real-world object detector is used. However, adaptive Siamese tracking can improve tracking performance by enabling effective detector-tracker interactions, especially in videos where the ground truth error is high. The average speed of our Adaptive DaSiamRPN is $71 \pm 15$ fps and that of the Adaptive Siamese is $55 \pm 12$ fps, i.e.\ both exhibit real-time performance. The horizontal axis of Fig.~\ref{Fig:analysis} represents the average update rate for a video, obtained by changing the threshold of the change detection module to be more or less sensitive to changes. This alters the average rate at which the tracker is updated over an entire video, and allows us to compare different update strategies in terms of accuracy and complexity, the latter measured as the average number of times the detector is called over the entire length of a video. In order to analyse the interaction between noisy detections and the tracker, two special cases were selected from the OTB-100 sequences. In the first case, the car2 sequence, the detector bounding box has a good IOU with the ground truth bounding box of the object, i.e.\ 70\%. Here, the proposed adaptive tracker (AdpSiam) and SiamFC with a periodic update perform almost identically, with a very marginal advantage for AdpSiam. This is due to the low initialisation error, and to the fact that car2 is an easy sequence with steady object motion. The second case (skating2) is a more complex video from the OTB-100 dataset, where the detector bounding box has only a 21\% IOU with the ground truth bounding box. Note that this is a person tracking video, and hence the aspect ratio is greater than one.
The target template therefore contains a greater amount of noise than in the car2 sequence. This causes SiamFC with a periodic update to perform poorly compared to AdpSiam, both with and without a periodic update. The video itself is also complex, with a few instances of occlusion and background clutter. \begin{figure*}[h!] \centering \includegraphics[width=0.74\linewidth]{human8_final} \caption{Frame-by-frame analysis of tracker performance (IOU over time) of the trackers on the Human8 video of the OTB-100 dataset for each model update strategy.} \label{fig:hum} \end{figure*} A frame-by-frame analysis of tracking performance is shown in Fig.~\ref{fig:hum}. The horizontal axis shows the frame count, and the vertical axis shows the corresponding tracker accuracy in terms of IOU with the ground truth. The performance of the original SiamFC is poor on this particular video (Human8) of the OTB-100 dataset: the tracker drifts completely at frame eighteen. The sequence was also evaluated with the SiamFC tracker under a periodic update of sixty frames, indicated by the inverted triangle on the horizontal axis. The performance of the periodic update is lower than that of the adaptive update because a fixed period of 60 frames fails to stop the tracker drift at the eighteenth frame. Hence, adaptive detection and tracking optimise the update rate for a given video. Note that the number of updates for the adaptive tracker is higher in this case due to the complexity of the video. With the adaptive tracker, it is possible to dynamically adjust the update rate without a threshold on the number of updates. The same holds for a video with fewer appearance changes: the adaptive tracker reduces the number of updates issued during tracking, whereas a periodic update would refresh the tracker at fixed intervals irrespective of whether an update is necessary.
\section{Conclusion} In this paper, the interaction between deep learning models for detection and tracking is analysed. In practice, object detectors are noisy, and tracks initialised with their bounding boxes tend to drift rapidly. To address this, an adaptive Siamese tracker has been introduced that leverages a change detection mechanism to manage its interactions with a detector in real-world video surveillance applications. When an abrupt appearance change is detected, the proposed tracker relies on the object detector to re-initialise the track template, while upon detection of a gradual change, the detector is used to update an evolving set of templates. Results on videos from the OTB-100 dataset highlight the importance of detection in long-term VOT -- tracking performance can decline considerably even with the state-of-the-art deep YOLOv3 detector. In all cases, there is a clear benefit in updating templates on a periodic basis. The proposed adaptive Siamese tracker consistently outperforms the original SiamFC and DaSiamRPN trackers, especially in cases where the tracker initialisation is associated with a high object detection error. Using change detection allows the update rate to adapt to the challenges encountered during tracking, as opposed to using only a fixed periodic update rate. This enables detector-tracker interactions that do not rely on heuristics to update the tracker templates. In future research, track quality measurements will be explored to further improve performance, by training with realistic noisy samples similar to those produced by an object detector, so as to improve robustness to noisy detector initialisation. \FloatBarrier {\small \bibliographystyle{ieee}
\section{Introduction} \label{sec:intro} Ants (\textit{Hymenoptera: Formicidae}), renowned for their remarkable diversity and ecological significance \cite{wilson1999diversity}, typically display extraordinary collective behavior \cite{holldobler2009superorganism}. A key question in evolutionary neurobiology concerns how ant sociality, ecology, and the ability to make accurate group decisions have impacted their brain structure. The emergence of eusociality and social complexity are major novelties likely involving rapid behavioral changes that might be reflected in the anatomy of the brain \cite{ott2010gregarious,amador2015specialization}, although this idea has been controversial \cite{farris2013evolution,farris2016insect}. The remarkable evolutionary and ecological success of ants is hypothesized to be due to their social organization, which features division of labor and collective behavior \cite{wilson1987causes}. Workers in ant colonies are so intrinsically interdependent that colonies are considered superorganisms. The ``brain'' of such a superorganism evolved at two levels: individual workers must respond adaptively, acting independently of other workers, while colonies must behave as decision-making groups to cope with the multiple challenges of sociality (e.g., coordinated foraging, task specialization, communication, social interactions and nestmate recognition). The Social Brain Hypothesis, originally postulated for primates, posits that individual members of larger groups require bigger brains to adaptively process social information \cite{dunbar2007evolution}. However, the degree to which this hypothesis can be meaningfully applied to eusocial insects has been debated \cite{lihoreau2012exploration}.
Brain evolution in ants, for example, must have proceeded under the constraint of body size, and therefore of miniaturization of the nervous system. In addition, collective intelligence and division of labor may have relaxed individual cognitive challenges \cite{gronenberg2009social}. However, it is unclear whether social selection favored the evolution of allometrically smaller or larger brains, as both patterns have been described \cite{riveros2012evolution,kamhi2016social}. The ant brain is a mosaic of different subregions (neuropils) that serve different functions \cite{strausfeld2012atlas}: sensory perception (antennal and optic lobes), motor control and navigation (central body and subesophageal ganglion), and multi-sensorial integration, learning and memory (mushroom bodies). Using confocal imaging and manual annotations of brain regions, Muscedere \textit{et al.} demonstrated that minor and major workers of different ages of three species of \textit{Pheidole} have distinct patterns of brain size variation \cite{muscedere2012division}. These differences in subregion sizes and scales reflect the intra-colony division of labor and the sociobiological characteristics of these species. However, all these results come at the cost of allocating significant time to manually recording the volumes of functionally specialized brain compartments, which may introduce a bias. Recent advances in image processing, inspired by techniques developed to study the human brain, have enabled outputs of unprecedented quality and throughput in neuroanatomical studies of honeybees \cite{rybak2012digital} and fruit flies \cite{rein2002drosophila,Costa2016}, among other insects \cite{Menzel2012}. These approaches combine multiple brains into a single model or template, which statistically represents the whole species. Replication is necessary to avoid biases originating in the fixation and imaging processes of the brains, as well as to account for inter-individual variability.
Template brains have a dual function. Transforming all samples to the same reference space allows normalizing the information from brains imaged under different conditions or image modalities, and the anatomical regions of reference brains are usually annotated, which produces an automatic segmentation of registered samples. Although many strategies have been proposed and evaluated over the last decades for the construction of brain templates in mammals \cite{talairach1988co, evans19933d, mazziotta1995probabilistic, chen2006neuroanatomical, dogdas2007digimouse, shattuck2008construction}, only a few of them have been applied to insect brains, most of them to \textit{Drosophila} data \cite{jefferis2007comprehensive, Yu2010, cachero2010sexual,Costa2016}. However, these results have not yet been translated to the ant brain community. This can be partially explained by the lack of expert-made anatomical labels and the larger morphological variability of the ant brain, which substantially hinders the registration process. To address these issues, we propose a two-step co-registration solution that allows the construction of atlases of intra- and inter-caste individuals and the identification of specific differences between anatomical regions. Moreover, we have evaluated our approach on a total of $50$ labeled brains of four species of \textit{Pheidole}, a hyperdiverse genus of ants that exhibits striking morphological differentiation and division of labor: complete dimorphism or ``trimorphism'' in the worker caste. \section{Materials and Methods} \label{sec:materials} \subsection{Ant brains dataset} \subsubsection{Ant species} \textit{Pheidole}, the most diverse and species-rich ant genus \cite{wilson1985ants}, is characterized by worker polymorphism (minor workers, major workers and, in some species, supersoldiers). Four \textit{Pheidole} species, courtesy of Dr. Diana Wheeler's lab at the University of Arizona, have been selected for this study: \textit{P. spadonia}, \textit{P.
rhea}, \textit{P. tepicana} and \textit{P. obtusospinosa}. \subsubsection{Brain imaging and labeling} The immunohistochemical staining and imaging protocol for ant brain neuropil was slightly modified from \cite{ott2008confocal, muscedere2012division}. We imaged 50 brains at a resolution of $\sim0.7\times0.7\times5 \mu$m/voxel: 10 minor worker brains from each of the four species mentioned and 10 major worker brains from \textit{P. spadonia}. Right brain hemispheres were manually labeled by an expert into $8$ anatomical regions: optic lobes (OL), antennal lobes (AL), mushroom body medial calyx (MB-MC), mushroom body lateral calyx (MB-LC), mushroom body peduncle (MB-P), central body (CB), subesophageal ganglion (SEG) and rest of the brain (ROCB). Fig.~\ref{fig:brains-and-labels} shows a 3D representation of the labels and brain samples of each type. (Image size: $\sim600\times600\times80$ pixels.) \begin{figure}[htb] \centering \centerline{\includegraphics[width=12cm]{brains-and-labels}} \caption{Examples of brain samples and labeled regions. From left to right and from top to bottom: 3D view of anatomical regions (A) and central sections of \textit{P. spadonia} minor (B), \textit{P. spadonia} major (C), \textit{P. tepicana} (D), \textit{P. obtusospinosa} (E) and \textit{P. rhea} (F) samples. Scale bar: $100\mu m$.} \label{fig:brains-and-labels} \end{figure} \subsection{Image registration and template generation} Group-wise templates were constructed using an algorithm that builds an average-shaped brain within the diffeomorphic space. The approach uses symmetric diffeomorphic image registration (SyN) \cite{avants2008symmetric} with mutual information and cross-correlation to register a group of brain images to one another. The co-registration process is refined using a two-step strategy. First, all of the images are registered to one brain using only an affine transformation model and mutual information as the similarity measure to optimize.
The resulting images are then averaged to form an initial blurry reference brain image. Second, the original brain images are non-linearly registered to this average to create a new average that maximizes the cross-correlation of the intensities of all brains. In this second step, the registration is improved gradually at different (in the present case, four) resolution levels and the result is an optimal average template. For combining the co-registered images, we experimented first with a normalized voxel-wise average followed by sharpening with a Laplacian kernel (state-of-the-art in MRI). However, we found experimentally that an alternative strategy in which the template intensity image was generated by computing a voxel-wise median over the co-registered images produced slightly better results. The anatomical label image of the template was obtained by applying to each individual label image the diffeomorphic transformations computed from the corresponding confocal image, followed by a per-voxel majority voting over all warped label images. Individual brain images were registered against the templates using the same two-step strategy, which performs an initial affine registration with mutual information as similarity metric followed by non-rigid registration with SyN and cross-correlation as similarity measure. The first registration is crucial in order to compensate for the large disparities in size among the different ant species and subcastes, while the second one locally finds an optimal solution. All methods are implemented within the Advanced Normalization Tools (ANTs) software \cite{avants2011reproducible}. \subsection{Evaluation metrics} To evaluate the template performance, we registered test brains (not used in the template construction) against the template and transformed the template labels onto the test brain space. 
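The combination steps described above (voxel-wise median for the template intensity and per-voxel majority voting for the consensus labels) can be sketched with NumPy as follows; the function and variable names are illustrative, and the diffeomorphic co-registration itself is assumed to have been performed beforehand with ANTs:

```python
import numpy as np


def build_template(coregistered, warped_labels, n_labels):
    """Combine co-registered intensity images by a voxel-wise median, and
    fuse the warped label images by per-voxel majority voting."""
    stack = np.stack(coregistered, axis=0)          # (n_brains, z, y, x)
    template = np.median(stack, axis=0)

    labels = np.stack(warped_labels, axis=0).astype(np.int64)
    # count votes for each label value at every voxel, then take the argmax
    votes = np.zeros((n_labels,) + labels.shape[1:], dtype=np.int64)
    for k in range(n_labels):
        votes[k] = (labels == k).sum(axis=0)
    consensus = votes.argmax(axis=0)
    return template, consensus
```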
We quantified the overlap ratio of the labels in the test brain space using the Dice similarity index, which provides a normalized measure of the overlap between two labels $L_i^A$ and $L_i^B$. The Dice index is defined as $$\mathrm{Dice}(L_i)=2\frac{|L_i^A\cap L_i^B|}{|L_i^A|+|L_i^B|}$$ where $|L_i^A|$ and $|L_i^B|$ are, respectively, the number of voxels of label $i$ in brains $A$ and $B$. To quantify the shape and boundary errors, we measure the mean symmetric Euclidean distance between the surfaces of the labels. For each label $L_i$ in the pair of brains $A$ and $B$, we calculated the mean Euclidean distance $d_i^{A,B}$ between each surface point on $L_i^A$ and the closest surface point on $L_i^B$. The symmetric distance $d_i^{B,A}$ was calculated in an analogous way. The mean symmetric Euclidean distance was defined as $$\mathrm{Mean\ Symmetric\ Euclidean\ distance}(L_i) = \frac{d_i^{A,B}+d_i^{B,A}}{2}$$ Finally, to measure the maximal boundary and shape differences between the original brain labels and the registered template labels, we calculated the symmetric Hausdorff distance. The Hausdorff distance $h_i^{A,B}$ of labels $L_i^A$ and $L_i^B$ is defined as the longest distance between any point on the surface of $L_i^A$ and the closest point on the surface of $L_i^B$. By computing $h_i^{B,A}$ in an analogous way, the symmetric Hausdorff distance can be calculated as $$\mathrm{Symmetric\ Hausdorff\ distance}(L_i) = \frac{h_i^{A,B}+h_i^{B,A}}{2}$$ Note that both distance metrics are expressed in absolute distance units. \section{Results} \subsection{Building an ant brain template} As a proof of concept of our methodology, we first attempted to build intra-species and intra-subcaste templates. To this end, we chose the $10$ minor and $10$ major worker samples from \textit{P. spadonia}. Here we realized that an initial affine pre-registration was needed due to the variability in volume and imaging conditions.
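These evaluation metrics can be sketched as follows (an illustrative implementation on binary masks and surface point arrays; the brute-force pairwise distances are adequate only for small surfaces):

```python
import numpy as np


def dice(mask_a, mask_b):
    """Dice similarity index of two binary label masks."""
    a, b = np.asarray(mask_a, dtype=bool), np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())


def directed_hausdorff(pts_a, pts_b):
    """Longest distance from any surface point of A to its closest point of B."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return d.min(axis=1).max()


def symmetric_hausdorff(pts_a, pts_b):
    """Mean of the two directed Hausdorff distances, as defined in the text."""
    return 0.5 * (directed_hausdorff(pts_a, pts_b)
                  + directed_hausdorff(pts_b, pts_a))


def mean_symmetric_distance(pts_a, pts_b):
    """Mean symmetric Euclidean distance between two label surfaces."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```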
Both templates were successfully built and evaluated based on how well their consensus labels represented the sample population (see Fig. \ref{fig:minor-volume}). \begin{figure}[htb] \centering \centerline{\includegraphics[width=6.5cm]{minor-volume}} \caption{Evaluation of volume ($\times10^4 \mu m^3$) per anatomical region of the minor \textit{P. spadonia} template. Blue lines represent the standard deviation of the volume of the original manual labels, while the red dots are the template volume values.} \label{fig:minor-volume} \end{figure} \subsection{Building and evaluating hybrid templates} After analyzing the morphological differences of the sample populations based on their anatomical labels using the open-source toolbox MorphoLibJ \cite{legland2016morpholibj} (see Fig. \ref{fig:morphology-measures}), we decided to build and evaluate hybrid templates mixing minor samples of the different species. More specifically, we constructed one template (RTO) using all minor species except \textit{P. spadonia} (with 3 brains per species) and another template (SRTO) that includes \textit{P. spadonia} samples as well. All samples not used as part of the templates were used for testing their performance. Fig. \ref{fig:template-results} shows the evaluation results per label for the $4$ templates we created (\textit{P. spadonia} major, \textit{P. spadonia} minor, RTO and SRTO). The only template built with major samples performs notably worse than the other $3$ on both overlap and distance metrics, especially in the OL and CB. It is remarkable that the \textit{P. spadonia} minor template performs only slightly worse than RTO and SRTO, even though not a single \textit{P. spadonia} sample was used for testing. \begin{figure}[htb] \centering \centerline{\includegraphics[width=12cm]{morphology-measures}} \caption{Morphological differences between species and subcastes.
From top to bottom: volume, surface area and sphericity measurements.} \label{fig:morphology-measures} \end{figure} \begin{figure}[htb] \centering \centerline{\includegraphics[width=12cm]{template-results}} \caption{Evaluation of template performance per label. From top to bottom: Dice coefficient, Euclidean distance and symmetric Hausdorff distance. Distances are expressed in microns.} \label{fig:template-results} \end{figure} \subsection{Evaluating worker polymorphisms and brain structure} One advantage of having templates of a single type of brain is that they allow us to study the main morphological differences between species and/or subcastes. Following a methodology previously validated on fly brains \cite{manton2014combining}, we can, for instance, register our \textit{P. spadonia} minor and major templates to each other, and calculate the volume change of each voxel via the Jacobian determinant. Once the difference in size is compensated for by the affine transform, the local non-linear deformations can be visualized as a heatmap (see Fig. \ref{fig:jacobian}), emphasizing the regions of large differences. \begin{figure}[htb] \centering \centerline{\includegraphics[width=12cm]{jacobian}} \caption{Inter-type deformation-based morphometry. From left to right: central view of the \textit{P. spadonia} minor template, Jacobian determinant of the deformation from the minor to the major template, and the \textit{P. spadonia} major template. Scale bar: $100\mu m$.} \label{fig:jacobian} \end{figure} \section{Conclusions and Future work} We present a groupwise 3D registration strategy to build bias-free ant brain atlases that enable the efficient quantification of inter- and intra-specific variation in brain organization, as evident in compartmental substructuring, through automatic segmentation. We numerically evaluated template performance using expert-made manual annotations to validate that the atlases can be used to accurately study brain anatomy.
To the best of our knowledge, this is the first time that automated atlases have been used to quantify ant brain volumes. Applying the current work to questions in evolutionary neurobiology that require extensive datasets adequately sampling species-rich taxa will expedite the study of ant brain structure in relation to the ecological and evolutionary success of ants and its association with division of labor and collective organization. The ability to accurately and rapidly collect volumetric neuroanatomical data will greatly expand our ability to test theories of social brain evolution in diverse clades such as ants. Combined with phylogenetic analysis, immunohistochemistry, respirometry, high-performance liquid chromatography and other techniques, brain templates can help elucidate macroevolutionary and microevolutionary patterns of brain evolution, as well as support mechanistic studies of the energetic cost of functionally specialized brain regions and of the nature of aminergic control systems. This will allow us to better understand regional brain investment in relation to the behavioral ecology of individual workers and their task specializations, and the impact of social processes operating at the colony level. \section*{Acknowledgements} SA was supported by a Marie Skłodowska-Curie Individual Fellowship (BrainiAnts-660976). DGG, AHP and JFAT were supported by NSF grant IOS 1354291. \bibliographystyle{IEEEbib}
\section{Introduction} \noindent In many complex networks, nodes cluster and form relatively dense groups---often called communities~\cite{Fortunato2010,Porter2009}. Such a modular structure is usually not known beforehand. Detecting communities in a network is therefore an important problem. One of the best-known methods for community detection is called modularity~\cite{Newman2004Finding}. This method tries to maximise the difference between the actual number of edges in a community and the expected number of such edges. We denote by $e_c$ the actual number of edges in community $c$. The expected number of edges can be expressed as $\frac{K_c^2}{2m}$, where $K_c$ is the sum of the degrees of the nodes in community $c$ and $m$ is the total number of edges in the network. This way of defining the expected number of edges is based on the so-called configuration model. Modularity is given by \begin{equation} \pazocal{H} = \frac{1}{2m} \sum_c \left(e_c - \gamma \frac{K_c^2}{2m} \right), \end{equation} where $\gamma > 0$ is a resolution parameter~\cite{Reichardt2006Statistical}. Higher resolutions lead to more communities, while lower resolutions lead to fewer communities. Optimising modularity is NP-hard~\cite{Brandes}, and consequently many heuristic algorithms have been proposed, such as hierarchical agglomeration~\cite{Clauset2004}, extremal optimisation~\cite{Duch2005}, simulated annealing~\cite{Reichardt2006Statistical,Guimera2005Functional} and spectral~\cite{Newman2006Finding} algorithms. One of the most popular algorithms to optimise modularity is the so-called Louvain algorithm~\cite{Blondel2008}, named after the location of its authors. It was found to be one of the fastest and best performing algorithms in comparative analyses~\cite{Lancichinetti2009,Yang2016}, and it is one of the most-cited works in the community detection literature. Although originally defined for modularity, the Louvain algorithm can also be used to optimise other quality functions. 
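The quality function above is simple to evaluate directly. The following sketch (illustrative only, not the reference implementation; function and variable names are our own) computes modularity for an undirected graph given as an edge list and a community assignment. Note that, for the formula to match the expected term $K_c^2/(2m)$, internal edges are counted from both endpoints, i.e. $e_c = \sum_{i,j \in c} A_{ij}$.

```python
from collections import defaultdict

def modularity(edges, community, gamma=1.0):
    """H = (1/2m) * sum_c (e_c - gamma * K_c^2 / (2m)).

    edges: list of undirected (u, v) pairs; community: dict node -> label.
    e_c counts each internal edge twice (once per endpoint), matching the
    configuration-model expectation K_c^2 / (2m).
    """
    m = len(edges)
    e = defaultdict(int)   # e_c: internal edge endpoints per community
    K = defaultdict(int)   # K_c: total degree per community
    for u, v in edges:
        K[community[u]] += 1
        K[community[v]] += 1
        if community[u] == community[v]:
            e[community[u]] += 2
    return sum(e[c] - gamma * K[c] ** 2 / (2 * m) for c in K) / (2 * m)
```

For example, two triangles joined by a single bridge edge, partitioned into one community per triangle, give $\pazocal{H} = 5/14 \approx 0.357$ at $\gamma = 1$, while placing all six nodes in one community gives $\pazocal{H} = 0$.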
An alternative quality function is the Constant Potts Model (CPM)~\cite{Traag2011}, which overcomes some limitations of modularity. CPM is defined as \begin{equation} \pazocal{H} = \sum_c \left[e_c - \gamma \binom{n_c}{2} \right], \label{eq:CPM_simple} \end{equation} where $n_c$ is the number of nodes in community $c$. The interpretation of the resolution parameter $\gamma$ is quite straightforward. The parameter functions as a sort of threshold: communities should have a density of at least $\gamma$, while the density between communities should be lower than $\gamma$. Higher resolutions lead to more communities and lower resolutions lead to fewer communities, similarly to the resolution parameter for modularity. In this paper, we show that the Louvain algorithm has a major problem, for both modularity and CPM. The algorithm may yield arbitrarily badly connected communities, over and above the well-known issue of the resolution limit~\cite{Fortunato:2007p183} (Section~\ref{sec:disconnected}). Communities may even be internally disconnected. To address this important shortcoming, we introduce a new algorithm that is faster, finds better partitions and provides explicit guarantees and bounds (Section~\ref{sec:leiden}). The new algorithm integrates several earlier improvements, incorporating a combination of smart local move~\cite{Waltman2013}, fast local move~\cite{Ozaki,Bae2014} and random neighbour move~\cite{Traag2015a}. We prove that the new algorithm is guaranteed to produce partitions in which all communities are internally connected. In addition, we prove that the algorithm converges to an asymptotically stable partition in which all subsets of all communities are locally optimally assigned. The quality of such an asymptotically stable partition provides an upper bound on the quality of an optimal partition. Finally, we demonstrate the excellent performance of the algorithm for several benchmark and real-world networks (Section~\ref{sec:analysis}). 
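The CPM quality function can be evaluated analogously. The sketch below (again with illustrative names) counts each internal edge once, as in Eq.~\eqref{eq:CPM_simple}, and compares it with $\gamma \binom{n_c}{2}$:

```python
from collections import defaultdict

def cpm(edges, community, gamma=1.0):
    """H = sum_c [e_c - gamma * n_c * (n_c - 1) / 2].

    edges: list of undirected (u, v) pairs; community: dict node -> label.
    e_c counts internal edges once; n_c is the community size.
    """
    e = defaultdict(int)
    n = defaultdict(int)
    for u, v in edges:
        if community[u] == community[v]:
            e[community[u]] += 1
    for node, c in community.items():
        n[c] += 1
    return sum(e[c] - gamma * n[c] * (n[c] - 1) / 2 for c in n)
```

The threshold interpretation of $\gamma$ is visible here: for two triangles joined by a bridge at $\gamma = 0.5$, splitting into one community per triangle scores $3$ (each triangle has density $1 \geq \gamma$), while merging everything scores $-0.5$ (overall density $7/15 < \gamma$).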
To ensure readability of the paper to the broadest possible audience, we have chosen to relegate all technical details to appendices. The main ideas of our algorithm are explained in an intuitive way in the main text of the paper. We name our algorithm the \emph{Leiden algorithm}, after the location of its authors. \section{Louvain algorithm} \begin{figure}[tb] \begin{center} \includegraphics{illustration/louvain/louvain_algo} \end{center} \caption{\textbf{Louvain algorithm}. The Louvain algorithm starts from a singleton partition in which each node is in its own community (a). The algorithm moves individual nodes from one community to another to find a partition (b). Based on this partition, an aggregate network is created (c). The algorithm then moves individual nodes in the aggregate network (d). These steps are repeated until the quality cannot be increased further. } \label{fig:louvain_illustration} \end{figure} \noindent The Louvain algorithm~\cite{Blondel2008} is very simple and elegant. The algorithm optimises a quality function such as modularity or CPM in two elementary phases: (1) local moving of nodes; and (2) aggregation of the network. In the local moving phase, individual nodes are moved to the community that yields the largest increase in the quality function. In the aggregation phase, an aggregate network is created based on the partition obtained in the local moving phase. Each community in this partition becomes a node in the aggregate network. The two phases are repeated until the quality function cannot be increased further. The Louvain algorithm is illustrated in Fig.~\ref{fig:louvain_illustration} and summarised in pseudo-code in Algorithm~\ref{algo:louvain} in Appendix~\ref{sec:code_notation}. Usually, the Louvain algorithm starts from a singleton partition, in which each node is in its own community. However, it is also possible to start the algorithm from a different partition~\cite{Waltman2013}. 
In particular, in an attempt to find better partitions, multiple consecutive iterations of the algorithm can be performed, using the partition identified in one iteration as starting point for the next iteration. \subsection{Badly connected communities} \label{sec:disconnected} \begin{figure}[bt] \begin{center} \includegraphics{disconnected_community} \end{center} \caption{\textbf{Disconnected community.} Consider the partition shown in (a). When node 0 is moved to a different community, the red community becomes internally disconnected, as shown in (b). However, nodes 1--6 are still locally optimally assigned, and therefore these nodes will stay in the red community. } \label{fig:disconnected_community} \end{figure} We now show that the Louvain algorithm may find arbitrarily badly connected communities. In particular, we show that Louvain may identify communities that are internally disconnected. That is, one part of such an internally disconnected community can reach another part only through a path going outside the community. Importantly, the problem of disconnected communities is not just a theoretical curiosity. As we will demonstrate in Section~\ref{sec:analysis}, the problem occurs frequently in practice when using the Louvain algorithm. Perhaps surprisingly, iterating the algorithm aggravates the problem, even though it does increase the quality function. In the Louvain algorithm, a node may be moved to a different community while it may have acted as a bridge between different components of its old community. Removing such a node from its old community disconnects the old community. One may expect that other nodes in the old community will then also be moved to other communities. However, this is not necessarily the case, as the other nodes may still be sufficiently strongly connected to their community, despite the fact that the community has become disconnected. 
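Whether a community is internally connected can be checked directly, by running a breadth-first search restricted to intra-community edges and testing whether it reaches every member. The following sketch (illustrative names, not part of either algorithm) performs this check:

```python
from collections import defaultdict, deque

def disconnected_communities(edges, community):
    """Return labels of communities whose induced subgraph is disconnected.

    edges: list of undirected (u, v) pairs; community: dict node -> label.
    """
    members = defaultdict(set)
    for node, c in community.items():
        members[c].add(node)
    adj = defaultdict(list)   # adjacency restricted to intra-community edges
    for u, v in edges:
        if community[u] == community[v]:
            adj[u].append(v)
            adj[v].append(u)
    bad = []
    for c, nodes in members.items():
        start = next(iter(nodes))
        seen = {start}
        queue = deque([start])
        while queue:            # BFS within community c only
            x = queue.popleft()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        if seen != nodes:       # some members were unreachable internally
            bad.append(c)
    return bad
```

Applied to a miniature version of the situation in Fig.~\ref{fig:disconnected_community}(b), with two triangles assigned to the same community but joined only through an external bridge node, the check reports that community as disconnected.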
To elucidate the problem, we consider the example illustrated in Fig.~\ref{fig:disconnected_community}. The numerical details of the example can be found in Appendix~\ref{sec:disconnected_example}. The thick edges in Fig.~\ref{fig:disconnected_community} represent stronger connections, while the other edges represent weaker connections. At some point, the Louvain algorithm may end up in the community structure shown in Fig.~\ref{fig:disconnected_community}(a). Nodes 0--6 are in the same community. Nodes 1--6 have connections only within this community, whereas node 0 also has many external connections. The algorithm continues to move nodes in the rest of the network. At some point, node 0 is considered for moving. When a sufficient number of neighbours of node 0 have formed a community in the rest of the network, it may be optimal to move node 0 to this community, thus creating the situation depicted in Fig.~\ref{fig:disconnected_community}(b). In this new situation, nodes 2, 3, 5 and 6 have only internal connections. These nodes are therefore optimally assigned to their current community. On the other hand, after node 0 has been moved to a different community, nodes 1 and 4 have not only internal but also external connections. Nevertheless, depending on the relative strengths of the different connections, these nodes may still be optimally assigned to their current community. In that case, nodes 1--6 are all locally optimally assigned, despite the fact that their community has become disconnected. Clearly, it would be better to split up the community. Nodes 1--3 should form a community and nodes 4--6 should form another community. However, the Louvain algorithm does not consider this possibility, since it considers only individual node movements. Moreover, when no more nodes can be moved, the algorithm will aggregate the network. When a disconnected community has become a node in an aggregate network, there are no more possibilities to split up the community. 
Hence, the community remains disconnected, unless it is merged with another community that happens to act as a bridge. Obviously, this is a worst case example, showing that disconnected communities may be identified by the Louvain algorithm. More subtle problems may occur as well, causing Louvain to find communities that are connected, but only in a very weak sense. Hence, in general, Louvain may find arbitrarily badly connected communities. This problem is different from the well-known issue of the resolution limit of modularity~\cite{Fortunato:2007p183}. Due to the resolution limit, modularity may cause smaller communities to be clustered into larger communities. In other words, modularity may ``hide'' smaller communities and may yield communities containing significant substructure. CPM does not suffer from this issue~\cite{Traag2011}. Nevertheless, when CPM is used as the quality function, the Louvain algorithm may still find arbitrarily badly connected communities. Hence, the problem of Louvain outlined above is independent from the issue of the resolution limit. In the case of modularity, communities may have significant substructure both because of the resolution limit and because of the shortcomings of Louvain. In fact, although it may seem that the Louvain algorithm does a good job at finding high quality partitions, in its standard form the algorithm provides only one guarantee: the algorithm yields partitions for which it is guaranteed that no communities can be merged. In other words, communities are guaranteed to be well separated. Somewhat stronger guarantees can be obtained by iterating the algorithm, using the partition obtained in one iteration of the algorithm as starting point for the next iteration. When iterating Louvain, the quality of the partitions will keep increasing until the algorithm is unable to make any further improvements. At this point, it is guaranteed that each individual node is optimally assigned. 
In this iterative scheme, Louvain provides two guarantees: (1) no communities can be merged and (2) no nodes can be moved. Contrary to what might be expected, iterating the Louvain algorithm aggravates the problem of badly connected communities, as we will also see in Section~\ref{sec:analysis}. This is not too difficult to explain. After the first iteration of the Louvain algorithm, some partition has been obtained. In the first step of the next iteration, Louvain will again move individual nodes in the network. Some of these nodes may very well act as bridges, similarly to node $0$ in the above example. By moving these nodes, Louvain creates badly connected communities. Moreover, Louvain has no mechanism for fixing these communities. Iterating the Louvain algorithm can therefore be seen as a double-edged sword: it improves the partition in some way, but degrades it in another way. The problem of disconnected communities has been observed before in the context of the label propagation algorithm~\cite{Raghavan2007}. However, so far this problem has never been studied for the Louvain algorithm. Moreover, the deeper significance of the problem was not recognised: disconnected communities are merely the most extreme manifestation of the problem of arbitrarily badly connected communities. Trying to fix the problem by simply considering the connected components of communities~\cite{Luecken2016,Wolf2018,Raghavan2007} is unsatisfactory because it addresses only the most extreme case and does not resolve the more fundamental problem. We therefore require a more principled solution, which we will introduce in the next section. \begin{figure*}[tb] \begin{center} \includegraphics[width=\textwidth]{illustration/leiden/leiden_algo} \end{center} \caption{\textbf{Leiden algorithm}. The Leiden algorithm starts from a singleton partition (a). The algorithm moves individual nodes from one community to another to find a partition (b), which is then refined (c). 
An aggregate network (d) is created based on the refined partition, using the non-refined partition to create an initial partition for the aggregate network. For example, the red community in (b) is refined into two subcommunities in (c), which after aggregation become two separate nodes in (d), both belonging to the same community. The algorithm then moves individual nodes in the aggregate network (e). In this case, refinement does not change the partition (f). These steps are repeated until no further improvements can be made. } \label{fig:leiden_illustration} \end{figure*} \section{Leiden algorithm} \label{sec:leiden} \noindent We here introduce the Leiden algorithm, which guarantees that communities are well connected. The Leiden algorithm is partly based on the previously introduced smart local move algorithm~\cite{Waltman2013}, which itself can be seen as an improvement of the Louvain algorithm. The Leiden algorithm also takes advantage of the idea of speeding up the local moving of nodes~\cite{Ozaki,Bae2014} and the idea of moving nodes to random neighbours~\cite{Traag2015a}. We consider these ideas to represent the most promising directions in which the Louvain algorithm can be improved, even though we recognise that other improvements have been suggested as well~\cite{Rotta2011}. The Leiden algorithm consists of three phases: (1) local moving of nodes, (2) refinement of the partition and (3) aggregation of the network based on the refined partition, using the non-refined partition to create an initial partition for the aggregate network. The Leiden algorithm is considerably more complex than the Louvain algorithm. Fig.~\ref{fig:leiden_illustration} provides an illustration of the algorithm. The algorithm is described in pseudo-code in Algorithm~\ref{algo:leiden} in Appendix~\ref{sec:code_notation}. In the Louvain algorithm, an aggregate network is created based on the partition $\mathcal{P}$ resulting from the local moving phase. 
The idea of the refinement phase in the Leiden algorithm is to identify a partition $\mathcal{P}_\text{refined}$ that is a refinement of $\mathcal{P}$. Communities in $\mathcal{P}$ may be split into multiple subcommunities in $\mathcal{P}_\text{refined}$. The aggregate network is created based on the partition $\mathcal{P}_\text{refined}$. However, the initial partition for the aggregate network is based on $\mathcal{P}$, just like in the Louvain algorithm. By creating the aggregate network based on $\mathcal{P}_\text{refined}$ rather than $\mathcal{P}$, the Leiden algorithm has more room for identifying high-quality partitions. In fact, by implementing the refinement phase in the right way, several attractive guarantees can be given for partitions produced by the Leiden algorithm. The refined partition $\mathcal{P}_\text{refined}$ is obtained as follows. Initially, $\mathcal{P}_\text{refined}$ is set to a singleton partition, in which each node is in its own community. The algorithm then locally merges nodes in $\mathcal{P}_\text{refined}$: nodes that are on their own in a community in $\mathcal{P}_\text{refined}$ can be merged with a different community. Importantly, mergers are performed only within each community of the partition $\mathcal{P}$. In addition, a node is merged with a community in $\mathcal{P}_\text{refined}$ only if both are sufficiently well connected to their community in $\mathcal{P}$. After the refinement phase is concluded, communities in $\mathcal{P}$ often will have been split into multiple communities in $\mathcal{P}_\text{refined}$, but not always. In the refinement phase, nodes are not necessarily greedily merged with the community that yields the largest increase in the quality function. Instead, a node may be merged with any community for which the quality function increases. The community with which a node is merged is selected randomly (similar to~\cite{Traag2015a}). The larger the increase in the quality function, the more likely a community is to be selected. The degree of randomness in the selection of a community is determined by a parameter $\theta > 0$. 
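This randomised selection can be sketched as follows. The sketch assumes, as one natural choice, that an eligible community is selected with probability proportional to $\exp(\Delta\pazocal{H}/\theta)$; the names and the interface are illustrative only, not the reference implementation:

```python
import math
import random

def choose_community(gains, theta, rng=random):
    """Pick a candidate community for a node, given quality gains.

    gains: dict community -> quality increase dH of merging the node there.
    Only merges with dH >= 0 are eligible.  An eligible community is chosen
    with probability proportional to exp(dH / theta): larger gains are more
    likely, and theta -> 0 approaches a purely greedy choice.
    """
    eligible = {c: dh for c, dh in gains.items() if dh >= 0}
    if not eligible:
        return None                       # node stays where it is
    top = max(eligible.values())          # shift for numerical stability
    weights = {c: math.exp((dh - top) / theta) for c, dh in eligible.items()}
    r = rng.random() * sum(weights.values())
    for c, w in weights.items():
        r -= w
        if r <= 0:
            return c
    return c                              # guard against rounding
```

With a small $\theta$ the choice is nearly greedy, while larger values of $\theta$ spread probability mass over all quality-increasing merges; moves that would decrease the quality function are never selected.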
Randomness in the selection of a community allows the partition space to be explored more broadly. Node mergers that cause the quality function to decrease are not considered. This contrasts with optimisation algorithms such as simulated annealing, which do allow the quality function to decrease~\cite{Guimera2005Functional,Reichardt2006Statistical}. Such algorithms are rather slow, making them ineffective for large networks. Excluding node mergers that decrease the quality function makes the refinement phase more efficient. As we prove in Appendix~\ref{sec:nondecreasing_move_sequences}, even when node mergers that decrease the quality function are excluded, the optimal partition of a set of nodes can still be uncovered. This is not the case when nodes are greedily merged with the community that yields the largest increase in the quality function. In that case, some optimal partitions cannot be found, as we show in Appendix~\ref{sec:greedy_move_sequences}. Another important difference between the Leiden algorithm and the Louvain algorithm is the implementation of the local moving phase. Unlike the Louvain algorithm, the Leiden algorithm uses a fast local move procedure in this phase. Louvain keeps visiting all nodes in a network until there are no more node movements that increase the quality function. In doing so, Louvain keeps visiting nodes that cannot be moved to a different community. In the fast local move procedure in the Leiden algorithm, only nodes whose neighbourhood has changed are visited. This is similar to ideas proposed recently as ``pruning''~\cite{Ozaki} and in a slightly different form as ``prioritisation''~\cite{Bae2014}. The fast local move procedure can be summarised as follows. We start by initialising a queue with all nodes in the network. The nodes are added to the queue in a random order. 
We then remove the first node from the front of the queue and we determine whether the quality function can be increased by moving this node from its current community to a different one. If we move the node to a different community, we add to the rear of the queue all neighbours of the node that do not belong to the node's new community and that are not yet in the queue. We keep removing nodes from the front of the queue, possibly moving these nodes to a different community. This continues until the queue is empty. For a full specification of the fast local move procedure, we refer to the pseudo-code of the Leiden algorithm in Algorithm~\ref{algo:leiden} in Appendix~\ref{sec:code_notation}. Using the fast local move procedure, the first visit to all nodes in a network in the Leiden algorithm is the same as in the Louvain algorithm. However, after all nodes have been visited once, Leiden visits only nodes whose neighbourhood has changed, whereas Louvain keeps visiting all nodes in the network. In this way, Leiden implements the local moving phase more efficiently than Louvain. \begin{table}[tb] \caption{Overview of the guarantees provided by the Louvain algorithm and the Leiden algorithm.} \label{tbl:guarantees} \begin{tabular}{lrcc} \toprule & & Louvain & Leiden \\ \midrule \multirow{2}{2cm}{Each iteration} & $\gamma$-separation & \ding{51} & \ding{51} \\ & $\gamma$-connectivity & & \ding{51} \\ \midrule \multirow{2}{2cm}{Stable iteration} & Node optimality & \ding{51} & \ding{51} \\ & Subpartition $\gamma$-density & & \ding{51} \\ \midrule \multirow{2}{2cm}{Asymptotic} & Uniform $\gamma$-density & & \ding{51} \\ & Subset optimality & & \ding{51} \\ \bottomrule \end{tabular} \end{table} \subsection{Guarantees} We now consider the guarantees provided by the Leiden algorithm. The algorithm is run iteratively, using the partition identified in one iteration as starting point for the next iteration. 
We can guarantee a number of properties of the partitions found by the Leiden algorithm at various stages of the iterative process. Below we offer an intuitive explanation of these properties. We provide the full definitions of the properties as well as the mathematical proofs in Appendix~\ref{sec:guarantees}. After each iteration of the Leiden algorithm, it is guaranteed that: \begin{enumerate} \item All communities are $\gamma$-separated. \item All communities are $\gamma$-connected. \end{enumerate} In these properties, $\gamma$ refers to the resolution parameter in the quality function that is optimised, which can be either modularity or CPM. The property of $\gamma$-separation is also guaranteed by the Louvain algorithm. It states that there are no communities that can be merged. The property of $\gamma$-connectivity is a slightly stronger variant of ordinary connectivity. As discussed in Section~\ref{sec:disconnected}, the Louvain algorithm does not guarantee connectivity. It therefore does not guarantee $\gamma$-connectivity either. An iteration of the Leiden algorithm in which the partition does not change is called a stable iteration. After a stable iteration of the Leiden algorithm, it is guaranteed that: \begin{enumerate}[resume] \item All nodes are locally optimally assigned. \item All communities are subpartition $\gamma$-dense. \end{enumerate} Node optimality is also guaranteed after a stable iteration of the Louvain algorithm. It means that there are no individual nodes that can be moved to a different community. Subpartition $\gamma$-density is not guaranteed by the Louvain algorithm. A community is subpartition $\gamma$-dense if it can be partitioned into two parts such that: (1) the two parts are well connected to each other; (2) neither part can be separated from its community; and (3) each part is also subpartition $\gamma$-dense itself. Subpartition $\gamma$-density does not imply that individual nodes are locally optimally assigned. 
It only implies that individual nodes are well connected to their community. In the case of the Louvain algorithm, after a stable iteration, all subsequent iterations will be stable as well. Hence, no further improvements can be made after a stable iteration of the Louvain algorithm. This contrasts with the Leiden algorithm. After a stable iteration of the Leiden algorithm, the algorithm may still be able to make further improvements in later iterations. In fact, when we keep iterating the Leiden algorithm, it will converge to a partition for which it is guaranteed that: \begin{enumerate}[resume] \item All communities are uniformly $\gamma$-dense. \item All communities are subset optimal. \end{enumerate} A community is uniformly $\gamma$-dense if there are no subsets of the community that can be separated from the community. Uniform $\gamma$-density means that no matter how a community is partitioned into two parts, the two parts will always be well connected to each other. Furthermore, if all communities in a partition are uniformly $\gamma$-dense, the quality of the partition is not too far from optimal, as shown in Appendix~\ref{sec:bound_on_optimality}. A community is subset optimal if all subsets of the community are locally optimally assigned. That is, no subset can be moved to a different community. Subset optimality is the strongest guarantee that is provided by the Leiden algorithm. It implies uniform $\gamma$-density and all the other above-mentioned properties. An overview of the various guarantees is presented in Table~\ref{tbl:guarantees}. \section{Experimental analysis} \label{sec:analysis} \begin{table}[tb] \caption{Overview of the empirical networks and of the maximal modularity after $10$ replications of $10$ iterations each, both for the Louvain and for the Leiden algorithm.} \label{tbl:real_networks} \begin{tabular}{lrrrr} \toprule & & & \multicolumn{2}{c}{Max. 
modularity}\\ \cmidrule{4-5} & Nodes & Degree & Louvain & Leiden \\ \midrule DBLP\footnotemark[1] & $317\,080$ & $6.6$ & $0.8262$ & $0.8387$ \\ Amazon\footnotemark[1] & $334\,863$ & $5.6$ & $0.9301$ & $0.9341$ \\ IMDB\footnotemark[2] & $374\,511$ & $80.2$ & $0.7062$ & $0.7069$ \\ Live Journal\footnotemark[1] & $3\,997\,962$ & $17.4$ & $0.7653$ & $0.7739$ \\ Web of Science\footnotemark[3] & $9\,811\,130$ & $21.2$ & $0.7911$ & $0.7951$ \\ Web UK\footnotemark[4] & $39\,252\,879$ & $39.8$ & $0.9796$ & $0.9801$ \\ \bottomrule \end{tabular} \footnotetext[1]{\url{https://snap.stanford.edu/data/}} \footnotetext[2]{\url{https://sparse.tamu.edu/Barabasi/NotreDame_actors}} \footnotetext[3]{Data cannot be shared due to license restrictions.} \footnotetext[4]{\url{http://law.di.unimi.it/webdata/uk-2005/}} \end{table} In the previous section, we showed that the Leiden algorithm guarantees a number of properties of the partitions uncovered at different stages of the algorithm. We also suggested that the Leiden algorithm is faster than the Louvain algorithm, because of the fast local move approach. In this section, we analyse and compare the performance of the two algorithms in practice\footnote{We implemented both algorithms in Java, available from \href{https://github.com/CWTSLeiden/networkanalysis}{github.com/CWTSLeiden/networkanalysis} and deposited at Zenodo~\cite{code}. Additionally, we implemented a Python package, available from \href{https://github.com/vtraag/leidenalg}{github.com/vtraag/leidenalg} and deposited at Zenodo~\cite{codePython}.}. All experiments were run on a computer with 64 Intel Xeon E5-4667v3 2GHz CPUs and 1TB internal memory. In all experiments reported here, we used a value of $0.01$ for the parameter $\theta$ that determines the degree of randomness in the refinement phase of the Leiden algorithm. However, values of $\theta$ within a range of roughly $[0.0005, 0.1]$ all provide reasonable results, thus allowing for some, but not too much randomness. 
We use six empirical networks in our analysis. These are the same networks that were also studied in an earlier paper introducing the smart local move algorithm~\cite{Waltman2013}. Table~\ref{tbl:real_networks} provides an overview of the six networks. First, we show that the Louvain algorithm finds disconnected communities, and more generally, badly connected communities in the empirical networks. Second, to study the scaling of the Louvain and the Leiden algorithm, we use benchmark networks, allowing us to compare the algorithms in terms of both computational time and quality of the partitions. Finally, we compare the performance of the algorithms on the empirical networks. We find that the Leiden algorithm commonly finds partitions of higher quality in less time. The difference in computational time is especially pronounced for larger networks, with Leiden being up to $20$ times faster than Louvain in empirical networks. \subsection{Badly connected communities} We study the problem of badly connected communities when using the Louvain algorithm for several empirical networks. For each community in a partition that was uncovered by the Louvain algorithm, we determined whether it is internally connected or not. In addition, to analyse whether a community is badly connected, we ran the Leiden algorithm on the subnetwork consisting of all nodes belonging to the community.\footnote{ We ensured that modularity optimisation for the subnetwork was fully consistent with modularity optimisation for the whole network~\cite{Traag2011}.} The Leiden algorithm was run until a stable iteration was obtained. When the Leiden algorithm found that a community could be split into multiple subcommunities, we counted the community as badly connected. Note that if Leiden finds subcommunities, splitting up the community is guaranteed to increase modularity. 
Conversely, if Leiden does not find subcommunities, there is no guarantee that modularity cannot be increased by splitting up the community. Hence, by counting the number of communities that have been split up, we obtained a lower bound on the number of communities that are badly connected. The count of badly connected communities also included disconnected communities. For each network, we repeated the experiment $10$ times. We used modularity with a resolution parameter of $\gamma = 1$ for the experiments. \begin{figure}[tb] \centering \includegraphics{real_networks_subclusters} \caption{ \textbf{Badly connected communities}. Percentage of communities found by the Louvain algorithm that are either disconnected or badly connected compared to percentage of badly connected communities found by the Leiden algorithm. Note that communities found by the Leiden algorithm are guaranteed to be connected. } \label{fig:subcluster} \end{figure} As can be seen in Fig.~\ref{fig:subcluster}, in the first iteration of the Louvain algorithm, the percentage of badly connected communities can be quite high. For the Amazon, DBLP and Web UK networks, Louvain yields on average respectively $23\%$, $16\%$ and $14\%$ badly connected communities. The percentage of disconnected communities is more limited, usually around $1\%$. However, in the case of the Web of Science network, more than $5\%$ of the communities are disconnected in the first iteration. Later iterations of the Louvain algorithm only aggravate the problem of disconnected communities, even though the quality function (i.e. modularity) increases. The second iteration of Louvain shows a large increase in the percentage of disconnected communities. In subsequent iterations, the percentage of disconnected communities remains fairly stable. The increase in the percentage of disconnected communities is relatively limited for the Live Journal and Web of Science networks. 
Other networks show an almost tenfold increase in the percentage of disconnected communities. The percentage of disconnected communities even jumps to $16\%$ for the DBLP network. The percentage of badly connected communities is less affected by the number of iterations of the Louvain algorithm. Presumably, many of the badly connected communities in the first iteration of Louvain become disconnected in the second iteration. Indeed, the percentage of disconnected communities becomes more comparable to the percentage of badly connected communities in later iterations. Nonetheless, some networks still show large differences. For example, after four iterations, the Web UK network has $8\%$ disconnected communities, but twice as many badly connected communities. Even worse, the Amazon network has $5\%$ disconnected communities, but $25\%$ badly connected communities. The above results show that the problem of disconnected and badly connected communities is quite pervasive in practice. Because the percentage of disconnected communities in the first iteration of the Louvain algorithm usually seems to be relatively low, the problem may have escaped attention from users of the algorithm. However, focussing only on disconnected communities masks the more fundamental issue: Louvain finds arbitrarily badly connected communities. The high percentage of badly connected communities attests to this. Besides being pervasive, the problem is also sizeable. In the worst case, almost a quarter of the communities are badly connected. This may have serious consequences for analyses based on the resulting partitions. For example, nodes in a community in biological or neurological networks are often assumed to share similar functions or behaviour~\cite{Bullmore2009}. However, if communities are badly connected, this may lead to incorrect attributions of shared functionality. 
Similarly, in citation networks, such as the Web of Science network, nodes in a community are usually considered to share a common topic~\cite{Waltman2012,Klavans2017}. Again, if communities are badly connected, this may lead to incorrect inferences of topics, which will affect bibliometric analyses relying on the inferred topics. In short, the problem of badly connected communities has important practical consequences. The Leiden algorithm has been specifically designed to address the problem of badly connected communities. Fig.~\ref{fig:subcluster} shows how well it does compared to the Louvain algorithm. The Leiden algorithm guarantees that all communities are connected, but it may yield badly connected communities. In terms of the percentage of badly connected communities in the first iteration, Leiden performs even worse than Louvain, as can be seen in Fig.~\ref{fig:subcluster}. Crucially, however, the percentage of badly connected communities decreases with each iteration of the Leiden algorithm. Starting from the second iteration, Leiden outperforms Louvain in terms of the percentage of badly connected communities. In fact, if we keep iterating the Leiden algorithm, it will converge to a partition without any badly connected communities, as discussed in Section~\ref{sec:leiden}. Hence, the Leiden algorithm effectively addresses the problem of badly connected communities. \begin{figure*}[tb] \begin{center} \includegraphics{benchmark_nodes} \end{center} \caption{ \textbf{Scaling of benchmark results for network size}. Speed and quality of the Louvain and the Leiden algorithm for benchmark networks of increasing size (two iterations). For larger networks and higher values of $\mu$, Louvain is much slower than Leiden. For higher values of $\mu$, Leiden finds better partitions than Louvain.
} \label{fig:benchmark_nodes} \end{figure*} \begin{figure*}[tb] \begin{center} \includegraphics{benchmark_all_itr} \end{center} \caption{ \textbf{Runtime versus quality for benchmark networks}. Speed and quality for the first $10$ iterations of the Louvain and the Leiden algorithm for benchmark networks ($n=10^6$ and $n=10^7$). The horizontal axis indicates the cumulative time taken to obtain the quality indicated on the vertical axis. Each point corresponds to a certain iteration of an algorithm, with results averaged over $10$ experiments. In general, Leiden is both faster than Louvain and finds better partitions. } \label{fig:benchmark_all_itr} \end{figure*} \begin{figure}[tb] \begin{center} \includegraphics{benchmark_mixing} \end{center} \caption{ \textbf{Scaling of benchmark results for difficulty of the partition}. Speed of the first iteration of the Louvain and the Leiden algorithm for benchmark networks with increasingly difficult partitions ($n=10^7$). In the most difficult case ($\mu=0.9$), Louvain requires almost $2.5$ days, while Leiden needs fewer than $10$ minutes. } \label{fig:benchmark_mixing} \end{figure} \subsection{Benchmark networks} To study the scaling of the Louvain and the Leiden algorithm, we rely on a variant of a well-known approach for constructing benchmark networks~\cite{Lancichinetti2008a}. We generated benchmark networks in the following way. First, we created a specified number of nodes and we assigned each node to a community. Communities were all of equal size. A community size of $50$ nodes was used for the results presented below, but larger community sizes yielded qualitatively similar results. We then created a certain number of edges such that a specified average degree $\langle k \rangle$ was obtained. For the results reported below, the average degree was set to $\langle k \rangle = 10$. 
Edges were created in such a way that an edge fell between two communities with a probability $\mu$ and within a community with a probability $1 - \mu$. We applied the Louvain and the Leiden algorithm to exactly the same networks, using the same seed for the random number generator. For both algorithms, $10$ iterations were performed. We used the CPM quality function. The value of the resolution parameter was determined based on the so-called mixing parameter $\mu$~\cite{Traag2011}. We generated networks with $n=10^3$ to $n=10^7$ nodes. For each set of parameters, we repeated the experiment $10$ times. Below, the quality of a partition is reported as $\frac{\pazocal{H}}{2m}$, where $\pazocal{H}$ is defined in Eq.~(\ref{eq:CPM_simple}) and $m$ is the number of edges. As shown in Fig.~\ref{fig:benchmark_nodes}, for lower values of $\mu$ the partition is well defined, and neither the Louvain nor the Leiden algorithm has a problem in determining the correct partition in only two iterations. Hence, for lower values of $\mu$, the difference in quality is negligible. However, as $\mu$ increases, the Leiden algorithm starts to outperform the Louvain algorithm. The differences are not very large, which is probably because both algorithms find partitions for which the quality is close to optimal, related to the issue of the degeneracy of quality functions~\cite{Good2010}. The Leiden algorithm is clearly faster than the Louvain algorithm. For lower values of $\mu$, the correct partition is easy to find and Leiden is only about twice as fast as Louvain. However, for higher values of $\mu$, Leiden becomes orders of magnitude faster than Louvain, reaching $10$--$100$ times faster runtimes for the largest networks. As can be seen in Fig.~\ref{fig:benchmark_mixing}, whereas Louvain becomes much slower for more difficult partitions, Leiden is much less affected by the difficulty of the partition. 
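The benchmark construction described above can be sketched as follows. The function and parameter names are ours, and the generator is a simplified stand-in for the one used in the experiments (uniform degrees in expectation, no self-loops); the quality function assumes the standard CPM form $\sum_c \big[ e_c - \gamma \binom{n_c}{2} \big]$, where $e_c$ is the number of intra-community edges and $n_c$ the community size.

```python
import random

def benchmark_network(n, comm_size, avg_degree, mu, seed=0):
    """Planted-partition benchmark: each edge falls between two distinct
    communities with probability mu and inside one community otherwise."""
    rng = random.Random(seed)
    membership = {v: v // comm_size for v in range(n)}
    n_comm = n // comm_size
    target_edges = n * avg_degree // 2
    edges = set()
    while len(edges) < target_edges:
        if rng.random() < mu:
            c1, c2 = rng.sample(range(n_comm), 2)  # inter-community edge
        else:
            c1 = c2 = rng.randrange(n_comm)        # intra-community edge
        u = rng.randrange(c1 * comm_size, (c1 + 1) * comm_size)
        v = rng.randrange(c2 * comm_size, (c2 + 1) * comm_size)
        if u != v:
            edges.add((min(u, v), max(u, v)))
    return membership, sorted(edges)

def cpm_quality(edges, membership, gamma):
    """CPM quality: sum over communities of e_c - gamma * n_c * (n_c - 1) / 2."""
    intra, sizes = {}, {}
    for node, comm in membership.items():
        sizes[comm] = sizes.get(comm, 0) + 1
    for u, v in edges:
        if membership[u] == membership[v]:
            c = membership[u]
            intra[c] = intra.get(c, 0) + 1
    return sum(intra.get(c, 0) - gamma * s * (s - 1) / 2
               for c, s in sizes.items())
```

For example, generating a network with $n=100$, community size $50$, $\langle k \rangle = 10$ and $\mu = 0.1$ yields $500$ edges, of which roughly one tenth run between the two communities.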
Fig.~\ref{fig:benchmark_all_itr} presents total runtime versus quality for all iterations of the Louvain and the Leiden algorithm. As can be seen in the figure, Louvain quickly reaches a state in which it is unable to find better partitions. On the other hand, Leiden keeps finding better partitions, especially for higher values of $\mu$, for which it is more difficult to identify good partitions. A number of iterations of the Leiden algorithm can be performed before the Louvain algorithm has finished its first iteration. Later iterations of the Louvain algorithm are very fast, but this is only because the partition remains the same. With one exception ($\mu=0.2$ and $n=10^7$), all results in Fig.~\ref{fig:benchmark_all_itr} show that Leiden outperforms Louvain in terms of both computational time and quality of the partitions. \subsection{Empirical networks} Analyses based on benchmark networks have only a limited value because these networks are not representative of empirical real-world networks. In particular, benchmark networks have a rather simple structure. Empirical networks show a much richer and more complex structure. We now compare how the Leiden and the Louvain algorithm perform for the six empirical networks listed in Table~\ref{tbl:real_networks}. Our analysis is based on modularity with resolution parameter $\gamma = 1$. For each network, Table~\ref{tbl:real_networks} reports the maximal modularity obtained using the Louvain and the Leiden algorithm. \begin{figure}[tb] \begin{center} \includegraphics{real_networks_1st_itr_speed} \end{center} \caption{ \textbf{First iteration runtime for empirical networks}. Speed of the first iteration of the Louvain and the Leiden algorithm for six empirical networks. Leiden is faster than Louvain especially for larger networks. 
} \label{fig:real_network_1st_itr_speed} \end{figure} As can be seen in Fig.~\ref{fig:real_network_1st_itr_speed}, the Leiden algorithm is significantly faster than the Louvain algorithm for empirical networks as well. In the first iteration, Leiden is roughly $2$--$20$ times faster than Louvain. The speed difference is especially large for larger networks. This is similar to what we have seen for benchmark networks. For the Amazon and IMDB networks, the first iteration of the Leiden algorithm is only about $1.6$ times faster than the first iteration of the Louvain algorithm. However, Leiden is more than $7$ times faster for the Live Journal network, more than $11$ times faster for the Web of Science network and more than $20$ times faster for the Web UK network. In fact, for the Web of Science and Web UK networks, Fig.~\ref{fig:real_networks_all_itr} shows that more than $10$ iterations of the Leiden algorithm can be performed before the Louvain algorithm has finished its first iteration. As shown in Fig.~\ref{fig:real_networks_all_itr}, the Leiden algorithm also performs better than the Louvain algorithm in terms of the quality of the partitions that are obtained. For all networks, Leiden identifies substantially better partitions than Louvain. Louvain quickly converges to a partition and is then unable to make further improvements. In contrast, Leiden keeps finding better partitions in each iteration. The quality improvement realised by the Leiden algorithm relative to the Louvain algorithm is larger for empirical networks than for benchmark networks. Hence, the complex structure of empirical networks creates an even stronger need for the use of the Leiden algorithm. Leiden keeps finding better partitions for empirical networks even after the first $10$ iterations of the algorithm. This contrasts with benchmark networks, for which Leiden often converges after a few iterations.
For empirical networks, it may take quite some time before the Leiden algorithm reaches its first stable iteration. As can be seen in Fig.~\ref{fig:real_networks_n_itr}, for the IMDB and Amazon networks, Leiden reaches a stable iteration relatively quickly, presumably because these networks have a fairly simple community structure. The DBLP network is somewhat more challenging, requiring almost $80$ iterations on average to reach a stable iteration. The Web of Science network is the most difficult one. For this network, Leiden requires over $750$ iterations on average to reach a stable iteration. Importantly, the first iteration of the Leiden algorithm is the most computationally intensive one, and subsequent iterations are faster. For example, for the Web of Science network, the first iteration takes about $110$--$120$ seconds, while subsequent iterations require about $40$ seconds. \begin{figure}[tb] \begin{center} \includegraphics{real_networks_all_itr} \end{center} \caption{ \textbf{Runtime versus quality for empirical networks}. Speed and quality for the first $10$ iterations of the Louvain and the Leiden algorithm for six empirical networks. The horizontal axis indicates the cumulative time taken to obtain the quality indicated on the vertical axis. Each point corresponds to a certain iteration of an algorithm, with results averaged over $10$ experiments. Leiden is both faster than Louvain and finds better partitions. } \label{fig:real_networks_all_itr} \end{figure} \begin{figure}[tb] \begin{center} \includegraphics[width=\columnwidth]{real_networks_n_itr}% \end{center} \caption{ \textbf{Number of iterations until stability}. Number of iterations before the Leiden algorithm has reached a stable iteration for six empirical networks. In a stable iteration, the partition is guaranteed to be node optimal and subpartition $\gamma$-dense. 
} \label{fig:real_networks_n_itr} \end{figure} \section{Discussion} Community detection is an important task in the analysis of complex networks. Finding communities in large networks is far from trivial: algorithms need to be fast, but they also need to provide high-quality results. One of the most widely used algorithms is the Louvain algorithm~\cite{Blondel2008}, which is reported to be among the fastest and best performing community detection algorithms~\cite{Lancichinetti2009,Yang2016}. However, as shown in this paper, the Louvain algorithm has a major shortcoming: the algorithm yields communities that may be arbitrarily badly connected. Communities may even be disconnected. To overcome the problem of arbitrarily badly connected communities, we introduced a new algorithm, which we refer to as the Leiden algorithm. This algorithm provides a number of explicit guarantees. In particular, it yields communities that are guaranteed to be connected. Moreover, when the algorithm is applied iteratively, it converges to a partition in which all subsets of all communities are guaranteed to be locally optimally assigned. In practical applications, the Leiden algorithm convincingly outperforms the Louvain algorithm, both in terms of speed and in terms of quality of the results, as shown by the experimental analysis presented in this paper. We conclude that the Leiden algorithm is strongly preferable to the Louvain algorithm.
\section{Introduction} Secure multiparty computation (MPC) allows a set of mutually distrusting parties to compute a publicly known function on their secret inputs without revealing their inputs to each other. This is done through the execution of a cryptographic protocol which guarantees that the protocol participants learn only the function output on their secret inputs and nothing else. {MPC}\xspace has made rapid strides - from being a theoretical concept three decades ago \cite{yao,gmw}, to now being on the threshold of having real-world impact. One of the most compelling use cases for MPC is machine learning (ML) - e.g. being able to do secure ML inference when the model and the query are private inputs belonging to different parties. There has been a flurry of recent works aimed at running inference securely with MPC such as SecureML~\cite{secureml}, MinioNN~\cite{minionn}, ABY$^3$~\cite{aby3}, CHET~\cite{chet}, SecureNN~\cite{securenn}, Gazelle~\cite{gazelle}, Delphi~\cite{delphi}, and so on. Unfortunately, these techniques are not easy to use by ML developers and have only been demonstrated on small deep neural networks (DNNs) on tiny datasets such as MNIST and CIFAR. However, in order for MPC to be truly ubiquitous for secure inference tasks, it must be both easy to use by developers with no background in cryptography and capable of scaling to the DNNs used in practice. In this work, we present \textsc{CrypTFlow}, a system that converts {{TensorFlow}}\xspace \cite{tensorflow} inference code into {MPC}\xspace protocols at the push of a button. By converting code in standard {{TensorFlow}}\xspace, a ubiquitous ML framework that is used in production by various technology companies, to {MPC}\xspace protocols, \textsc{CrypTFlow}\ significantly lowers the entry barrier for ML practitioners and programmers to use cryptographic {MPC}\xspace protocols in real-world applications.
We make the following contributions: First, for the developer frontend, we provide a compiler, called {\em Athos}, from {{TensorFlow}}\xspace to a variety of secure computation protocols (both 2 and 3 party) while preserving accuracy. The compiler is designed to be modular and it provides facilities for plugging in different {MPC}\xspace protocols. To demonstrate this modularity, we have integrated Athos with the following backends: ABY-based~\cite{aby} 2-party computation (2PC), SCI-based 2PC~\cite{cryptflow2}, Aramis-based malicious secure 3-party computation~\cite{cryptflow}, and Porthos-based semi-honest secure 3-party computation (3PC). Second, for the cryptographic backend, we provide a semi-honest secure 3-party computation protocol, {\em Porthos}, that outperforms all prior protocols for secure inference and enables us to execute, for the first time, cryptographically secure inference of ImageNet scale networks. Prior work in the area of secure inference has been limited to small networks over tiny datasets such as MNIST or CIFAR. We have evaluated {\textsc{CrypTFlow}}\xspace on secure inference over DNNs that are at least an order of magnitude larger than the state-of-the-art~\cite{delphi,chet,chameleon,securenn,secureml,gazelle,ezpc,minionn,aby3,nhe,xonn,nitin}. Even on MNIST/CIFAR, Porthos has lower communication complexity and is more efficient than prior works~\cite{securenn,aby3,chameleon}. Third, we demonstrate the ease-of-use, efficiency and scalability of \textsc{CrypTFlow}\ by evaluating on {\textsc{ResNet50}}\xspace~\cite{resnet} for ImageNet classification, {\textsc{DenseNet121}}\xspace~\cite{densenet} for detection of lung diseases from chest X-ray images and 3D-UNet~\cite{unet} for segmentation of raw 3D CT images. Our toolchain is publicly available\footnote{\url{https://github.com/mpc-msri/EzPC}}. 
This paper briefly reviews the original {\textsc{CrypTFlow}}\xspace paper~\cite{cryptflow}; its main addition is the secure segmentation evaluation (Section~\ref{sec:unet}). \section{Athos} Athos compiles {{TensorFlow}}\xspace inference code to secure computation protocols. The transformations implemented in Athos are sensitive to the performance of {MPC}\xspace protocols. For performance reasons all efficient secure computation protocols perform computation over fixed-point arithmetic - i.e., arithmetic over integers or arithmetic with fixed precision. Athos automatically converts {{TensorFlow}}\xspace code over floating-point values into code that computes the same function over fixed-point values. This compilation is done while {\em matching} the inference accuracy of floating-point code. Prior works (\cite{secureml,minionn,gazelle,aby3,securenn,delphi}) in the area of running ML securely have performed this task by hand with significant losses in accuracy over floating-point code. Athos represents a 32-bit floating-point number $r$ by a 64-bit integer $\lfloor r\cdot 2^s\rfloor$ for a precision or scale $s$. Then operations on 32-bit floating-point numbers are simulated by operations on 64-bit integers. For example, $r_1\times r_2$ is simulated as $\frac{\lfloor r_1\cdot 2^s\rfloor\times\lfloor r_2\cdot 2^s\rfloor}{2^{s}}$. A large $s$ causes integer overflows and a small $s$ leads to accuracy loss. To obtain a suitable scale $s$ (all variables have the same precision in Athos output), Athos works by ``sweeping through'' various precision levels to estimate the best precision~\cite{cryptflow}. \section{Porthos} Porthos is an improved semi-honest 3-party secure computation protocol (tolerating one corruption) that builds upon SecureNN~\cite{securenn}. Porthos makes two crucial modifications to SecureNN. First, SecureNN reduces convolutions to matrix multiplications and invokes the Beaver triples~\cite{beaver} based matrix multiplication protocol.
When performing a convolution with filter size $f\times f$ on a matrix of size $m\times m$, the communication is roughly $2q^2f^2+2f^2+q^2$ elements in the ring $\mathbb{Z}_{2^{64}}$, where $q = m-f+1$. Porthos computes these Beaver triples by appropriately reshaping $m\times m$ and $f\times f$ matrices. This reduces the communication to roughly $2m^2+2f^2+q^2$ ring elements. Typically the filter size, $f$, is between 1 and 11 and the communication of Porthos can be up to two orders of magnitude less than SecureNN. Additionally, in SecureNN, the protocols for non-linear layers (such as ReLU and MaxPool) require the third party to send secret shares to the first two parties. In Porthos, we cut this communication in half by eliminating the communication of one of these shares~\cite{cryptflow}. This reduces the communication in the overall ReLU and MaxPool protocols by 25\%. \section{Motivating Example}\label{sec:toolchain} In this section, we describe the end-to-end working of \textsc{CrypTFlow}\ through an example of logistic regression. The toolchain is shown in Figure \ref{fig:cryptflowtoolchain}. \begin{figure} \centering \begin{minipage}{.7\textwidth} \centering \includegraphics[width=\linewidth]{end-to-end-new.pdf} \caption{\textsc{CrypTFlow}: End-to-end toolchain} \label{fig:cryptflowtoolchain} \end{minipage}% \begin{minipage}{.3\textwidth} \centering\small \begin{Verbatim} # x is (1,784) MNIST image. # W and b are model parameters. print(tf.argmax(tf.matmul(x, W) + b, 1)) \end{Verbatim} \caption{Logistic Regression: TensorFlow snippet} \label{fig:lrtf} \end{minipage} \end{figure} \textsc{CrypTFlow}\ takes as input a pre-trained floating-point {{TensorFlow}}\xspace model. For example, consider the code snippet for logistic regression over the MNIST dataset in {{TensorFlow}}\xspace as shown in Figure \ref{fig:lrtf}. Our compiler first generates the {{TensorFlow}}\xspace graph dump as well as metadata containing the dimensions of all the tensors.
Next, the {{TensorFlow}}\xspace graph dump is compiled into a high-level intermediate language HLIL. The code snippet for logistic regression in HLIL is shown in Figure \ref{fig:lrseedot}. Next, Athos' float-to-fixed converter translates the floating-point HLIL code to fixed-point code in a low-level intermediate language LLIL. This step requires Athos to compute the right precision to be used for maximum accuracy. Figure \ref{fig:lrezpc} shows the LLIL code snippet for logistic regression. The operation $\mathtt{ScaleDown(X,s)}$ divides each 64-bit integer entry of tensor $X$ by $2^s$. The function calls in the LLIL code can be implemented with a variety of secure computation backends - e.g. ABY~\cite{aby} for the case of 2-party secure computation, Porthos for the case of semi-honest 3-party secure computation, and Aramis~\cite{cryptflow} for the malicious secure variant. Different backends provide different security guarantees and hence vary in their performance. For this example, the three backends take 227ms, 6.5ms, and 10.2ms respectively.
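The float-to-fixed scheme and the $\mathtt{ScaleDown}$ operation used in this example can be sketched in a few lines. The helper names below are ours, and the sketch covers only non-negative values; Athos additionally handles sign, overflow analysis and the automatic precision sweep.

```python
def to_fixed(r, s):
    """Encode a non-negative float r as the integer floor(r * 2^s)."""
    return int(r * (1 << s))

def to_float(x, s):
    """Decode a scale-s fixed-point integer back to a float."""
    return x / (1 << s)

def fixed_mul(x, y, s):
    """Multiply two scale-s values; the raw product has scale 2s,
    so a ScaleDown by s (right shift) restores scale s."""
    return (x * y) >> s
```

With $s=15$, multiplying the encodings of $1.5$ and $2.25$ and scaling down yields the encoding of $3.375$ exactly, since both inputs are representable at that scale.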
\begin{SaveVerbatim}{HLIL_LR_Verbatim} xW = MatMul(x, W); xWb = MatAdd(xW, b); output(ArgMax(xWb)); \end{SaveVerbatim} \begin{SaveVerbatim}[]{LLIL_LR_Verbatim} xW = MatMul(x, W); ScaleDown(xW, 15); //15 bit precision xWb = MatAdd(xW, b); output(ArgMax(xWb)); \end{SaveVerbatim} \begin{figure} \centering \resizebox{0.47\columnwidth}{!}{ \begin{subfigure}{0.49\columnwidth} \centering \setlength{\fboxsep}{1.7mm} \fbox{\BUseVerbatim[fontsize=\small]{HLIL_LR_Verbatim}} \caption{} \label{fig:lrseedot} \end{subfigure} } \resizebox{0.47\columnwidth}{!}{ \begin{subfigure}{0.49\columnwidth} \centering \setlength{\fboxsep}{1.6mm} \fbox{\BUseVerbatim[fontsize=\small]{LLIL_LR_Verbatim}} \caption{} \label{fig:lrezpc} \end{subfigure} } \caption{Logistic Regression in (a) floating-point: HLIL syntax (b) fixed-point: LLIL syntax} \end{figure} \section{Experiments}\label{sec:experiments} \noindent\textbf{Overview.} First, in Section \ref{subsec:bigbenchmarks}, we use {\textsc{CrypTFlow}}\xspace for secure classification on ImageNet using the following pre-trained {{TensorFlow}}\xspace models: {\textsc{ResNet50}}\xspace\footnote{\url{https://github.com/tensorflow/models/tree/master/official/r1/resnet}} and {\textsc{DenseNet121}}\xspace\footnote{\url{https://github.com/pudae/tensorflow-densenet}}. We show that the fixed-point MPC protocols generated by Athos matches the accuracy of cleartext floating-point {\textsc{ResNet50}}\xspace and {\textsc{DenseNet121}}\xspace. We also show how the optimizations in Porthos help it outperform prior works in terms of communication complexity and overall execution time. Finally, we discuss two case-studies of running \textsc{CrypTFlow}\ on DNNs for medical image analysis. The compilation time of {\textsc{CrypTFlow}}\xspace is around 5 sec for {\textsc{ResNet50}}\xspace, 35 sec for {\textsc{DenseNet121}}\xspace and 2 minutes for 3D UNet. 
\subsection{Secure Inference on ImageNet}\label{subsec:bigbenchmarks} These experiments are in a LAN setting on 3.7GHz machines, each with 4 cores and with 16 GB of RAM. The measured bandwidth between each of the machines was at most 377 MBps and the latency was sub-millisecond. {\textsc{ResNet50}}\xspace takes 25.9 seconds and 6.9 GB of communication; {\textsc{DenseNet121}}\xspace takes 36 seconds and 10.5 GB of communication. We measure communication as total communication between all $3$ parties - each party roughly communicates a third of this value. We show that Athos-generated fixed-point code matches the accuracy of floating-point code on {\textsc{ResNet50}}\xspace and {\textsc{DenseNet121}}\xspace in Table~\ref{tab:fixed-accuracy}. \begin{table} \parbox{.45\linewidth}{ \centering \begin{tabular}{|c|c|c|c|c|} \hline Benchmark & Float & Fixed & Float & Fixed \\ & Top 1 & Top 1 & Top 5 & Top 5 \\ \hline ${\textsc{ResNet50}}\xspace$ & 76.47 & 76.45 & 93.21 & 93.23 \\ \hline ${\textsc{DenseNet121}}\xspace$ & 74.25 & 74.33 & 91.88 & 91.90 \\ \hline \end{tabular} \caption{Accuracy of fixed- vs floating-point.} \label{tab:fixed-accuracy} } \hfill \parbox{.45\linewidth}{ \centering \begin{tabular}{|c|c|c|c|} \hline SecureNN & Porthos& SecureNN & Porthos \\ (s) & (s) & Comm. (GB) & Comm. (GB) \\ \hline $38.36$ & $25.87$& $8.54$& $6.87$ \\ \hline $53.99$ & $36.00$& $13.53$ & $10.54$ \\ \hline \end{tabular} \caption{Porthos vs SecureNN.} \label{tab:porthosvssecurenn} } \end{table} Detailed comparisons of {\textsc{CrypTFlow}}\xspace with prior works on secure inference can be found in~\cite{cryptflow}. However, since Porthos builds on SecureNN, we compare them on ImageNet-scale benchmarks in Table \ref{tab:porthosvssecurenn}. For this purpose, we add the code of SecureNN available at~\cite{securenncode} as another backend to {\textsc{CrypTFlow}}\xspace.
These results show that Porthos improves upon the communication of SecureNN by a factor of roughly 1.2X--1.5X and the runtime by a factor of roughly 1.4X--1.5X. \subsection{Lung diseases from 2D chest X-Ray images} In \cite{chestxray2018}, the authors train a {{\textsc{DenseNet121}}\xspace} to predict lung diseases from chest X-ray images. They use the publicly available NIH dataset of chest X-ray images and end up achieving an average AUROC score of 0.845 across 14 possible disease labels. These DNNs are available as pre-trained Keras models. We converted them into {{TensorFlow}}\xspace using~\cite{kttf} and compiled the automatically generated {{TensorFlow}}\xspace code with {\textsc{CrypTFlow}}\xspace. During secure inference, we observed no loss in classification accuracy and the latency is similar to the runtime of {\textsc{DenseNet121}}\xspace for ImageNet. \subsection{Segmenting tumors and organs at risk from 3D CT images} \label{sec:unet} Half a million cancer patients receive radiotherapy each year~\cite{demand}. Personalized radiation treatments require segmenting tumors and organs at risk from 3D volumetric images. Currently, this segmentation is a manual process where an oncologist draws contours along regions of interest slice-by-slice across the whole volume. This process often takes several hours per image, which ML-based automation~\cite{stan,shuai} can reduce to minutes. We consider a 3D-UNet model~\cite{unet} that takes as input a raw 3D image obtained via Computed Tomography (CT) scans of the pelvic region and delineates tumor volumes and organs at risk. This model's accuracy is within the inter-observer variability seen among clinical experts~\cite{innereye} and requires 1.87 Teraflops per inference. Since this model is implemented in PyTorch, we first export it to ONNX and then use {\textsc{CrypTFlow}}\xspace's ONNX frontend. For our secure inference setup, each party has 32 cores running at 2.4GHz, no GPUs, and 128GB RAM.
The parties are connected on a network with ping latency 0.2s and 625MBps bandwidth. On this setup, secure inference incurs a latency of 1 hour and 57 minutes and 557GB of communication. The most expensive operators in this computation are 3D transposed convolutions (or deconvolutions) and, to the best of our knowledge, {\textsc{CrypTFlow}}\xspace is the only secure inference tool that supports these operations. In our experience, it takes a couple of days for a scan to reach the oncologist for review and hence this latency overhead can be acceptable. \section{Related work and conclusion} Other related systems for converting PyTorch/Tensorflow to MPC protocols~\cite{crypten,tfe,pysyft,quantizednn} only support 3PC, whereas {\textsc{CrypTFlow}}\xspace additionally supports 2PC backends. {\textsc{CrypTFlow}}\xspace provides the first implementation and evaluation of a system for secure segmentation. With {\textsc{CrypTFlow}}\xspace, data scientists, with no background in cryptography, can obtain secure inference implementations for their trained models at the push of a button. \begingroup \small
\section{Introduction} We answer in the affirmative a conjecture posed by Hegselmann and Krause about the long-term behavior of opinions in a finite group of individuals, some of them attracted to the truth, the so-called \emph{truth seekers}. Our contribution: Under mild assumptions, the opinions of all truth seekers converge to the truth, despite being distracted by individuals not attracted to the truth, the \emph{ignorants}. The underlying model for opinion dynamics is the \emph{bounded-confidence model}: Opinions, which themselves are represented by real numbers in the unit interval, are influenced by the opinions of others by means of averaging, but only if not too far away. This bounded-confidence model (formal definitions below) was first suggested by Krause in 1997. It received a considerable amount of attention in the artificial societies and social simulation community \cite{lorenz_survey,hegselmann_new, hegselmann2006,hegselmann2002,15.1021F,7291M}. The concept of truth seekers was invented in 2006 by Hegselmann and Krause~\cite{hegselmann2006}, along with a philosophical discussion about the scientific context with respect to the notion of truth. We set aside the philosophical discussion here and focus on the resulting dynamical system, governed by difference equations that we find interesting in their own right. The opinions of \emph{truth seekers} are not only attracted by opinions of others; they are additionally attracted by a constant number, the truth. The resulting opinion is a weighted average of the result of the original bounded-confidence dynamics and the truth. Individuals not attracted by the truth in this sense are \emph{ignorants}. In their paper, Hegselmann and Krause show that if all individuals are truth seekers -- no matter how small the weight --, then (the opinions of) all the individuals converge to consensus on the truth value.
The question we answer in this paper arises when some of the individuals are ignorants, i.e.{}, the weight of the influence of the truth is zero for them. Numerous simulation experiments led Hegselmann and Krause to the conjecture that the opinions of all truth seekers still end up at the truth. However, a proof of this fact has been missing so far. Evidence by simulation only, however, bears the risk of numerical artefacts -- very much so in the non-continuous bounded-confidence model. Therefore, it is desirable to provide mathematically rigorous proofs of structural properties of bounded-confidence dynamics. Although the conjecture may seem self-evident at first glance because of the contraction property of the system dynamics for truth seekers, a second look at the situation reveals that the conjecture and its confirmation in this paper are far from trivial: several innocent-looking generalizations of the conjecture are actually false, as we will show below in the technical parts of the paper. Relying on intuition only is dangerous. Even in the affirmative cases, convergence turns out to be quite slow in general and far from monotone. The main difficulty is the following: the convergence of truth seekers heavily depends on their long-term influence on ignorants. Depending on the configuration of ignorants and the parameters of the system, there are arbitrarily many iterations in which the truth seekers deviate from the truth. The crucial observation is that, during these iterations, the configuration of ignorants is somehow ``improved'' because the truth seekers attract them. In the end, the proof is elementary but extremely technical. We introduce some structures, such as the \emph{confidence graph}, that might prove useful also in other contexts. Other structures we need are rather special, probably with limited use beyond this paper. It would, therefore, be desirable to find a more elegant proof, revealing the reason why the conjecture is true.
One could, for example, try to find a suitable Lyapunov function. The examples we give as we go along in the proof, however, indicate that a certain amount of complexity has to be captured by the arguments because the line between true and false conjectures is extremely thin. \section{Formal problem statement} \label{sec:form-probl-stat} Suppose there is a set $[n]:=\{1,\dots,n\}$ of individuals with opinions $x_i(t)\in[0,1]$ at time $t$ for all $i\in[n]$, $t\in\mathbb{N}$. The abstract truth is modeled as a constant over time, denoted by $h\in[0,1]$. The opinion of an individual $i\in[n]$ is influenced in a time step $t$ only by those individuals that have a similar opinion, more precisely, an opinion in the \textit{confidence interval} of $x_i(t)$. \begin{definition} For $x\in [0,1]$ and a parameter $\varepsilon\ge 0$ we define the \emph{confidence set of value} $x$ \emph{at time} $t$ as $$ I_x^\varepsilon(t):=\{j\in[n]\mid |x-x_j(t)|\le \varepsilon\}. $$ As a shorthand we define $I_i^\varepsilon(t):=I_{x_i(t)}^\varepsilon(t)$ for any $i\in[n]$. \end{definition} The update of the opinions is modeled as a weighted arithmetic mean of opinions in the confidence set and a possible attraction towards the truth.
\begin{definition} A \emph{weighted arithmetic mean symmetric bounded confidence opinion system} (WASBOCOS) is a tuple \begin{equation*} (n, h, \varepsilon, \alpha, \beta; \alpha_i(t), \beta_{ij}(t), x_i(0)), \end{equation*} where \begin{itemize} \item $n\in\mathbb{N}$ is the number of \emph{individuals}, \item $h\in[0,1]$ is the \emph{truth}, \item $\varepsilon\in[0,1]$ is the \emph{bounded confidence radius}, \item $\alpha\in(0,1]$ is a lower bound for the weight of the truth for truth seekers, \item $\beta\in(0,\frac{1}{2}]$ is a lower bound for the weight of opinions in the bounded confidence interval, \item $\alpha_i(t)\in[\alpha,1]$ or $\alpha_i(t)=0$ for all $t\in\mathbb{N}$ is the actual weight of the truth for truth seeker~$i$ at time step~$t$, \item $\beta_{ij}(t)\in[\beta,1-\beta]$ with $\sum_{j=1}^n \beta_{ij}(t) = 1$ for all $i\in[n]$ and for all $t\in\mathbb{N}$ is the weight of opinion~$j$ in the view of agent~$i$, \item $x_i(0)\in[0,1]$ is the starting opinion of Individual~$i$. \end{itemize} The \emph{bounded confidence dynamics} on such a system is defined by simultaneous updates of the opinions in the following form: \begin{equation} \label{eq_update} x_i(t+1):=\alpha_i(t)\cdot h+\Big(1-\alpha_i(t)\Big)\frac{\sum\limits_{j\in I_i^\varepsilon(t)}\beta_{ij}(t)x_j(t)} {\sum\limits_{j\in I_i^\varepsilon(t)}\beta_{ij}(t)}. \end{equation} \emph{Individuals} are members of the index set $[n]$. \emph{Truth seekers} are members of the set $K:=\{k\in[n]\mid \alpha_k(t)\ge \alpha\, \forall t\in\mathbb{N}\}$. All other individuals, i.e.{}, those with $\alpha_i(t)=0$ for all $t$, are called \emph{ignorants}; their set is denoted by~$\overline{K}$. \end{definition} See Figure~\ref{fig:example-0} for a sketch of a typical set of trajectories. Remark: The term \emph{symmetric} in the notion of a (WASBOCOS) refers to the confidence radius, not to the weights that individuals assign to other individuals' opinions.
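The update rule of Equation (\ref{eq_update}) is straightforward to put into code. The following Python sketch performs one simultaneous update of all opinions; the function name and the representation of the weights as a full matrix are our illustrative choices, not part of the model definition.

```python
def wasbocos_step(x, h, eps, alpha, W):
    """One simultaneous update of all opinions: each individual i
    averages the opinions in its confidence set I_i^eps(t) with the
    weights W[i][j]; a truth seeker (alpha[i] > 0) additionally mixes
    the result with the truth h."""
    n = len(x)
    new = [0.0] * n
    for i in range(n):
        conf = [j for j in range(n) if abs(x[j] - x[i]) <= eps]  # I_i^eps(t)
        wsum = sum(W[i][j] for j in conf)
        avg = sum(W[i][j] * x[j] for j in conf) / wsum
        new[i] = alpha[i] * h + (1.0 - alpha[i]) * avg
    return new

# A lonely truth seeker with alpha_1 = 1/2 moves halfway towards h = 0.2:
y = wasbocos_step([0.8], 0.2, 0.1, [0.5], [[1.0]])  # y[0] -> 0.5
```

As in Equation (\ref{eq_update}), the weights are renormalized over the confidence set in every step.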
\begin{figure} \begin{center} \includegraphics[width=0.65\linewidth]{Example_6.pdf} \caption{A typical set of trajectories of a (WASBOCOS).} \label{fig:example-0} \end{center} \end{figure} The main result we wish to prove is the following: \begin{theorem}(Generalized Hegselmann-Krause Conjecture) \label{main_result} All truth seekers in a (WASBOCOS) $\Omega$ converge to the truth~$h$. Formally, for each $\gamma>0$ and each $\Omega$ there exists a $T(\gamma,\Omega)$ so that we have $|x_k(t)-h|<\gamma$ for all $k\in K$ and all $t\ge T(\gamma,\Omega)$. \end{theorem} Note that we use $\gamma>0$ in the statement of convergence instead of~$\varepsilon>0$ because $\varepsilon$ is traditionally used for the bounded confidence radius. It is important to note that convergence is not simply implied by the contraction property of the dynamics with the ignorants ignored. The presence of ignorants, and where their opinions lie, makes a huge difference (see Figure~\ref{fig:example-1} for an example). \begin{figure} \centering \includegraphics[width=0.65\linewidth]{Example_1} \caption{The only truth seeker is located at the bottom on the truth; it gets attracted away from the truth for quite some time; eventually, the ignorants either are ``converted'' or left behind. Thus, we cannot expect monotone convergence of the truth seeker farthest from the truth; in particular, reaching the truth once does not imply convergence to the truth.} \label{fig:example-1} \end{figure} It would be nice if one could derive a bound on the speed of convergence, i.e., a bound on $T(\gamma,\Omega)$, in terms of the structural parameters $\varepsilon$, $\alpha$, $\beta$, and $n$. Unfortunately, this is not possible. The speed of convergence is not determined by the structural parameters alone. This can be seen in the following simple example.
\begin{example} \label{ex_interupted_convergent} Consider a (WASBOCOS) with truth $h=\varepsilon$, $\varepsilon>0$, $\alpha_1(t)=\alpha$, $\alpha_2(t)=0$, $\beta_{ij}(t)=\frac{1}{2}$, $\beta=\frac{1}{2}$, $x_1(0)=2\varepsilon$, $x_2(0)=\tilde{\varepsilon}$, where $\varepsilon>\tilde{\varepsilon}>0$. Let $T\in\mathbb{N}$ be the smallest integer so that $(1-\alpha)^T\varepsilon\le\tilde{\varepsilon}$. Then by induction we have $x_1(t)=\varepsilon+(1-\alpha)^t\varepsilon$ and $x_2(t)=x_2(0)=\tilde{\varepsilon}$ for all $t\le T$. So truth seeker $1$ seems to monotonically converge to the truth, but at time $T+1$ we have $x_1(T+1)=\alpha\varepsilon+\frac{1-\alpha}{2}\left(\varepsilon+(1-\alpha)^T\varepsilon+\tilde{\varepsilon}\right) \le \frac{1+\alpha}{2}\cdot\varepsilon+\tilde{\varepsilon}$. \end{example} See Figure~\ref{fig:example-2} for a sketch of the situation. \begin{figure} \centering \includegraphics[width=0.65\linewidth]{Example_2} \caption{A sketch of interrupted convergence: a lonely truth seeker starting at $2\varepsilon$ seems to monotonically converge to the truth right away, but suddenly its confidence interval picks up the ignorant on the other side of the truth, and the truth seeker gets distracted. However, finally the ignorant gets distracted itself, and convergence is eventually established.} \label{fig:example-2} \end{figure} Since we may choose $\tilde{\varepsilon}$ arbitrarily small, we find the following: in general, we cannot expect that for every $\gamma>0$ there is a $T(\gamma,\varepsilon,\alpha,\beta,n)$ such that for all $t\ge T(\gamma,\varepsilon,\alpha,\beta,n)$ we have $|x_k(t)-h|<\gamma$ for each truth seeker $k\in K$.
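Example \ref{ex_interupted_convergent} is easily reproduced numerically. The following sketch uses the concrete values $\varepsilon=0.1$, $\alpha=0.3$, and $\tilde{\varepsilon}=0.01$ (our choices; any values with $\varepsilon>\tilde{\varepsilon}>0$ behave analogously), for which $T=7$.

```python
# Simulation of the interrupted-convergence example; eps = 0.1,
# alpha = 0.3, eps_t = 0.01 are our illustrative parameter choices.
eps, alpha, eps_t = 0.1, 0.3, 0.01
h = eps
x = [2 * eps, eps_t]          # x_1: truth seeker, x_2: ignorant
a = [alpha, 0.0]              # truth weights alpha_i
traj = [x[:]]
for _ in range(120):
    new = []
    for i in range(2):
        conf = [x[j] for j in range(2) if abs(x[j] - x[i]) <= eps]
        avg = sum(conf) / len(conf)      # beta_ij = 1/2: plain mean
        new.append(a[i] * h + (1 - a[i]) * avg)
    x = new
    traj.append(x[:])

T = 7  # smallest T with (1 - alpha)**T * eps <= eps_t for these values
```

For $t\le T$ the trajectory matches the closed form $x_1(t)=\varepsilon+(1-\alpha)^t\varepsilon$ while the ignorant does not move; at $t=T+1$ the truth seeker is pulled away from the truth, and only afterwards do both opinions converge to $h$.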
But we may have \emph{interrupted convergence}: In a first phase, the truth seekers come arbitrarily close to the truth in time only dependent on the structural parameters; then, they may temporarily get distracted at some point; finally, they converge to the truth in time depending only on the structural parameters \emph{and the time of distraction}. This can be formalized as follows: \begin{definition} \label{def_interupted_convergent} Given $\varepsilon$, $\alpha$, $\beta$, $n$, we say that truth seekers $k\in K$ are (1-fold) \emph{interrupted convergent to the truth}, if for each $\gamma > 0$ there exist two functions $T_1^s(\gamma,\varepsilon,\alpha,\beta,n)$ and $T_2^s(\gamma,\varepsilon,\alpha,\beta,n,T_1^e)$, so that for each (WASBOCOS) $\Omega$, with structural parameters $\varepsilon$, $\alpha$, $\beta$ and $n$, there exists a $T_1^e\in\mathbb{N}$ satisfying \begin{eqnarray*} &&\forall k\in K,\,\forall t\in[T_1^s(\gamma,\varepsilon,\alpha,\beta,n),T_1^e]:\, |x_k(t)-h|<\gamma,\\ &&\forall k\in K,\,\forall t\ge T_2^s(\gamma,\varepsilon,\alpha,\beta,n,T_1^e):\,|x_k(t)-h|<\gamma. \end{eqnarray*} \end{definition} Theorem \ref{main_result} is now a corollary of the following substantially strengthened theorem: \begin{theorem} \label{thm_interupted_convergent} All truth seekers in a (WASBOCOS) $\Omega$ are (1-fold) interrupted convergent to the truth. \end{theorem} Originally, Hegselmann and Krause considered the (WASBOCOS) model for $\alpha_{i}(t)\in\{0,\alpha\}$ and $\beta_{ij}(t)=\frac{1}{n}$. In the case of complete absence of truth seekers they already proved that the opinion of each individual converges, although, as can be expected, not necessarily to the truth. In fact, in general the individuals form several clusters, where two individuals of different clusters converge to different opinions. We give an example without truth seekers where the individuals will converge to five different clusters.
\begin{example} \label{ex_1} Consider a (WASBOCOS) with $\alpha_i(t)=0$ (no truth seekers), $\beta=\beta_{ij}(t)=\frac{1}{n}$, $n=12$; the values of~$\alpha$ and~$h$ do not matter. The starting positions are given by \begin{eqnarray*} &x_1(0)=x_2(0)=0,\, x_3(0)=\varepsilon,\, x_4(0)=2\varepsilon,\, x_5(0)=3\varepsilon,\, x_6(0)=x_7(0)=4\varepsilon,\,\\ &x_8(0)=5\varepsilon,\, x_9(0)=6\varepsilon,\, x_{10}(0)=7\varepsilon,\, \text{and}\, x_{11}(0)=x_{12}(0)=8\varepsilon, \end{eqnarray*} see Figure \ref{fig_ex_1}. \medskip In Table \ref{table_ex_1} we give the complete dynamics of the opinions of all $12$ individuals over time until the opinion of every individual has converged. For brevity we write $x_i$ instead of $x_i(t)$. After three time steps, see Figure~\ref{fig:example-5} for the dynamics, we have reached a stable state, see Figure \ref{fig_ex_1_2} for the resulting positions of the individuals. \end{example} \begin{figure}[htp] \begin{center} \setlength{\unitlength}{1.0cm} \begin{picture}(8,1.4) \put(0,1){\circle{0.3}} \put(0,1.4){\circle{0.3}} \put(1,1){\circle{0.3}} \put(2,1){\circle{0.3}} \put(3,1){\circle{0.3}} \put(4,1){\circle{0.3}} \put(4,1.4){\circle{0.3}} \put(5,1){\circle{0.3}} \put(6,1){\circle{0.3}} \put(7,1){\circle{0.3}} \put(8,1){\circle{0.3}} \put(8,1.4){\circle{0.3}} \put(-1,0.5){\line(1,0){10}} \multiput(0,0)(1,0){9}{\put(0,0.4){\line(0,1){0.2}}} \put(-0.2,0){$0\varepsilon$} \put(0.8,0){$1\varepsilon$} \put(1.8,0){$2\varepsilon$} \put(2.8,0){$3\varepsilon$} \put(3.8,0){$4\varepsilon$} \put(4.8,0){$5\varepsilon$} \put(5.8,0){$6\varepsilon$} \put(6.8,0){$7\varepsilon$} \put(7.8,0){$8\varepsilon$} \end{picture} \caption{Starting positions of the individuals in Example \ref{ex_1}.} \label{fig_ex_1} \end{center} \end{figure} \begin{table}[htp] \begin{center}\renewcommand{\arraystretch}{1.5} \begin{tabular}{r@{\qquad}rrrrrrrrr} \toprule $t$ & $x_1=x_2$ & $x_3$ & $x_4$ & $x_5$ & $x_6=x_7$ & $x_8$ & $x_9$ & $x_{10}$ & $x_{11}=x_{12}$ \\ \midrule 0 
& $0\varepsilon$ & $1\varepsilon$ & $2\varepsilon$ & $3\varepsilon$ & $4\varepsilon$ & $5\varepsilon$ & $6\varepsilon$ & $7\varepsilon$ & $8\varepsilon$ \\[0.9mm] 1 & $\frac{1}{3}\varepsilon$ & $\frac{3}{4}\varepsilon$ & $2\varepsilon$ & $\frac{13}{4}\varepsilon$ & $4\varepsilon$ & $\frac{19}{4}\varepsilon$ & $6\varepsilon$ & $\frac{29}{4}\varepsilon$ & $\frac{23}{3}\varepsilon$ \\[1.6mm] 2 & $\frac{17}{36}\varepsilon$ & $\frac{17}{36}\varepsilon$ & $2\varepsilon$ & $\frac{15}{4}\varepsilon$ & $4\varepsilon$ & $\frac{17}{4}\varepsilon$ & $6\varepsilon$ & $\frac{271}{36}\varepsilon$ & $\frac{271}{36}\varepsilon$ \\[1.6mm] 3 & $\frac{17}{36}\varepsilon$ & $\frac{17}{36}\varepsilon$ & $2\varepsilon$ & $4\varepsilon$ & $4\varepsilon$ & $4\varepsilon$ & $6\varepsilon$ & $\frac{271}{36}\varepsilon$ & $\frac{271}{36}\varepsilon$\\ \bottomrule \end{tabular} \medskip \caption{The dynamics of Example \ref{ex_1} in numbers.} \label{table_ex_1} \end{center} \end{table} \begin{figure}[htbp] \centering \includegraphics[width=0.65\linewidth]{Example_5} \caption{The dynamics in Example \ref{ex_1}.} \label{fig:example-5} \end{figure} \begin{figure}[!ht] \begin{center} \setlength{\unitlength}{1.0cm} \begin{picture}(8,2.6) \put(0.4722,1){\circle{0.3}} \put(0.4722,1.4){\circle{0.3}} \put(0.4722,1.8){\circle{0.3}} \put(2,1){\circle{0.3}} \put(4,1){\circle{0.3}} \put(4,1.4){\circle{0.3}} \put(4,1.8){\circle{0.3}} \put(4,2.2){\circle{0.3}} \put(6,1){\circle{0.3}} \put(7.5277,1){\circle{0.3}} \put(7.5277,1.4){\circle{0.3}} \put(7.5277,1.8){\circle{0.3}} \put(-1,0.5){\line(1,0){10}} \multiput(0,0)(1,0){9}{\put(0,0.4){\line(0,1){0.2}}} \put(-0.2,0){$0\varepsilon$} \put(0.8,0){$1\varepsilon$} \put(1.8,0){$2\varepsilon$} \put(2.8,0){$3\varepsilon$} \put(3.8,0){$4\varepsilon$} \put(4.8,0){$5\varepsilon$} \put(5.8,0){$6\varepsilon$} \put(6.8,0){$7\varepsilon$} \put(7.8,0){$8\varepsilon$} \end{picture} \caption{Final positions of the individuals in Example \ref{ex_1}.} \label{fig_ex_1_2} 
\end{center} \end{figure} We remark that for symmetric weights $\beta_{ij}(t)=\beta_{ji}(t)$ one can easily show that in the absence of truth seekers the dynamics becomes stable after a finite number of time steps. In the case of asymmetric weights $\beta_{ij}(t)\neq\beta_{ji}(t)$ we still have convergence, but a stable state need not be reached after any finite number of time steps, as illustrated in the following example. \begin{example} \label{ex_2} Consider a (WASBOCOS) with $\alpha_i(t)=0$ (no truth seekers), $n=2$, $x_1(0)=0$, $x_2(0)=\varepsilon$, $\beta_{11}(t)=\frac{2}{3}$, $\beta_{12}(t)=\frac{1}{3} = \beta$, $\beta_{21}(t)=\frac{1}{2}$, $\beta_{22}(t)=\frac{1}{2}$; the values of $\alpha$ and~$h$ do not matter. \medskip One can easily verify, e.g.{}, by induction, that we have $$ x_1(t)=\left(\frac{2}{5}-\frac{2}{5\cdot 6^t}\right)\cdot\varepsilon\text{ and } x_2(t)=\left(\frac{2}{5}+\frac{3}{5\cdot 6^t}\right)\cdot\varepsilon $$ for all $t\in\mathbb{N}$. So we have $|x_1(t)-x_2(t)|=\frac{1}{6^t}\cdot\varepsilon>0$ but clearly the opinions of the two individuals converge to $\frac{2}{5}\varepsilon$. \end{example} All the insights stated so far concerning the absence of truth seekers were already known. It becomes a bit more interesting if we allow truth seekers, i.e.{}, if we consider a general (WASBOCOS). \begin{example} \label{ex_3} Consider a (WASBOCOS) with $\alpha_1(t)=\alpha$, $\alpha_i(t)=0$ for $i\neq 1$, $\beta_{ij}(t)=\frac{1}{n}$, $h=\frac{1}{2}\varepsilon$, $x_1(0)=0$, and $x_i(0)=\varepsilon$ for $i\neq 1$. The opinion $u_t$ of the truth seeker $1$ at time $t$ and the opinion $v_t$ of the other ignorants at time $t>0$ are given by \begin{eqnarray*} u_t&=&\left[\frac{1}{2}+(1-\alpha) \left(\frac{1}{2}-\frac{1}{n}\right)\left(1-\frac{\alpha}{n}\right)^{t-1}\right]\varepsilon,\\ v_t&=&\left[\frac{1}{2}+ \left(\frac{1}{2}-\frac{1}{n}\right)\left(1-\frac{\alpha}{n}\right)^{t-1}\right]\varepsilon \end{eqnarray*} respectively. Note that for $n\ge 3$ the opinion of the truth seeker lies above the truth already after one time step and then approaches it from above.
This can be verified, e.g.{}, by induction. We see that the opinion of the truth seeker and, here, also the opinions of the ignorants converge to the truth $h=\frac{1}{2}\varepsilon$. Note that the opinions of ignorants may in general fail to converge to the truth, as one can see by adding some further ignorants with $\tilde{x}_i(0)=3\varepsilon$. \end{example} As our analytical investigation of the previous example was rather technical, we also depict the situation for the special values $n=6$ and $\alpha=\frac{2}{3}$ in Figure \ref{fig_ex_3}. We sketch the truth seeker by a filled circle and the ignorants by empty circles. \begin{figure}[h] \begin{center} \includegraphics[width=0.65\linewidth]{Example_4} \caption{The dynamics in Example \ref{ex_3}.} \label{fig_ex_3} \end{center} \end{figure} One can easily imagine configurations more complicated than the one in Example \ref{ex_3}, where one has little chance of describing the situation analytically. Our main result, Theorem \ref{main_result}, states that -- whatever the parameters of a (WASBOCOS) are -- the opinions of the truth seekers converge to the truth. This settles an open conjecture of Hegselmann and Krause. \section{The crucial objects} To get a first impression of what we may expect in terms of convergence we consider a lonely truth seeker, i.e.{}, $n=1$. \begin{lemma} \label{lemma_lonely} For a lonely truth seeker $i=1$ we have $$ |x_i(t+r)-h|\le |x_i(t)-h|\cdot(1-\alpha)^r. $$ \end{lemma} \begin{proof} $$ |x_i(t+1)-h|=|x_i(t)-h|\cdot(1-\alpha_i(t))\le |x_i(t)-h|\cdot(1-\alpha). $$ Iterating this estimate $r$ times yields the claim. \end{proof} Clearly this bound is tight. The case $\varepsilon=0$ is similar to this very special situation of a lonely truth seeker, so that we now assume $\varepsilon>0$ for the remaining part of this article. In order to describe the states of the discrete time dynamical system with more than one truth seeker, we look at the truth seekers with the most extreme opinions.
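As an aside, the closed form of Example \ref{ex_3} is easy to check numerically. The following sketch simulates the system for the values $n=6$ and $\alpha=\frac{2}{3}$ of Figure \ref{fig_ex_3}, with $\varepsilon=1$ as our concrete choice of scale; since truth seeker and ignorants average over the same confidence set here, the update rule also yields the relation $u_t-h=(1-\alpha)(v_t-h)$ for $t\ge 1$, which can be checked on the computed trajectories.

```python
# Simulation of the configuration of the example above for n = 6,
# alpha = 2/3 (the values of the figure); eps = 1 is our choice.
n, alpha, eps = 6, 2.0 / 3.0, 1.0
h = eps / 2
x = [0.0] + [eps] * (n - 1)   # truth seeker 1 at 0, ignorants at eps
a = [alpha] + [0.0] * (n - 1)
us, vs = [x[0]], [x[1]]
for _ in range(60):
    new = []
    for i in range(n):
        conf = [x[j] for j in range(n) if abs(x[j] - x[i]) <= eps]
        new.append(a[i] * h + (1 - a[i]) * sum(conf) / len(conf))
    x = new
    us.append(x[0])
    vs.append(x[1])
```

Both trajectories approach $h=\frac{1}{2}\varepsilon$, and $v_t$ follows the stated closed form to machine precision.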
\begin{definition} \label{def_lower_upper} We define $\tilde{u}(t)\in K$ as the lexicographically smallest truth seeker which fulfills $x_{\tilde{u}(t)}(t)\ge h$ and $x_{\tilde{u}(t)}(t)\ge x_k(t)$ for all $k\in K$. If there is no truth seeker with opinion greater than or equal to the truth $h$ we set $\tilde{u}(t)=0$. In order to avoid case distinctions, we define $x_0(t'):=h$ for all $t'\in\mathbb{N}$. Similarly, we define $\tilde{l}(t)$ as the lexicographically smallest truth seeker that fulfills $x_{\tilde{l}(t)}(t)\le h$ and $x_{\tilde{l}(t)}(t)\le x_k(t)$ for all $k\in K$. Again, we set $\tilde{l}(t)=0$ if there is no such truth seeker. \end{definition} Due to the \emph{symmetrical} -- one could say \textit{fair} -- definition of the confidence set, the confidence structure between the individuals can be described as a simple graph. \begin{definition} The \emph{confidence graph} $\mathcal{G}(t)$ with vertex set $V(t)$ and edge set $E(t)$, of a configuration $x(t)=(x_1(t) , \dots , x_n(t))\in\mathbb{R}^n$ together with the additional truth vertex~$0$ (recall that $x_0(t)=h$), is defined as follows: \begin{eqnarray*} V(t):=[n]\cup\{0\},\\ E(t):=\{\{i,j\} \in \tbinom{V(t)}{2} \mid |x_i(t)-x_j(t)|\le\varepsilon\}. \end{eqnarray*} For $i\in V(t)$ let $C_i(t)$ be the set of vertices in the connectivity component of vertex $i$ in $\mathcal{G}(t)$. \end{definition} Because we want to keep track of the individuals which can influence the truth seekers in the future, we give a further definition for individuals, which is similar to Definition \ref{def_lower_upper} for truth seekers. \begin{definition} \label{def_extreme_individuals} We define $\hat{u}(t)\in C_{\tilde{u}(t)}(t)$ as the lexicographically smallest individual with $x_{\hat{u}(t)}(t)\ge x_c(t)\,\,\forall c\in C_{\tilde{u}(t)}(t)$ and $\hat{l}(t)\in C_{\tilde{l}(t)}(t)$ as the lexicographically smallest individual with $x_{\hat{l}(t)}(t)\le x_c(t)\,\,\forall c\in C_{\tilde{l}(t)}(t)$ for all $t\in\mathbb{N}$.
\end{definition} The opinions of $\hat{u}(t)$ and $\hat{l}(t)$ form an interval $[x_{\hat{l}(t)}(t),x_{\hat{u}(t)}(t)]$ called the \emph{hope interval} which is crucial for our further investigations. To prove the main theorem we will show that the length of this hope interval converges to zero. \medskip In Figure \ref{fig_def_borders}, we have depicted a configuration to illustrate Definition \ref{def_lower_upper} and Definition \ref{def_extreme_individuals}. In particular, we have $\tilde{l}=4$, $\tilde{u}=9$, $\hat{l}=2$, and $\hat{u}=12$. Individual~$1$ is \textit{lost} and not contained in the hope interval, because there is no path in $\mathcal{G}$ from $1$ to $\tilde{l}=4$. So we already know that the opinion of Individual~$1$ will not converge to the truth. \begin{figure}[htp] \begin{center} \setlength{\unitlength}{1.0cm} \begin{picture}(6,3.0) \put(-1,1){\circle{0.3}} \put(1.7,1){\circle{0.3}} \put(2.3,1){\circle{0.3}} \put(3,1){\circle*{0.3}} \put(3.35,1){\circle{0.3}} \put(3.7,1){\circle*{0.3}} \put(4,1){\circle*{0.3}} \put(4.5,1){\circle{0.3}} \put(5,1){\circle*{0.3}} \put(5,1.4){\circle*{0.3}} \put(5.4,1){\circle{0.3}} \put(5.8,1){\circle{0.3}} \put(-1.1,1.7){$\downarrow$} \put(-1.1,2.1){$1$} \put(1.6,1.7){$\downarrow$} \put(1.6,2.1){$2$} \put(2.2,1.7){$\downarrow$} \put(2.2,2.1){$3$} \put(2.9,1.7){$\downarrow$} \put(2.9,2.1){$4$} \put(3.25,1.7){$\downarrow$} \put(3.25,2.1){$5$} \put(3.6,1.7){$\downarrow$} \put(3.6,2.1){$6$} \put(3.9,1.7){$\downarrow$} \put(3.9,2.1){$7$} \put(4.4,1.7){$\downarrow$} \put(4.4,2.1){$8$} \put(4.9,1.7){$\downarrow$} \put(4.9,2.1){$9$} \put(4.75,2.5){$10$} \put(5.3,1.7){$\downarrow$} \put(5.15,2.1){$11$} \put(5.7,1.7){$\downarrow$} \put(5.65,2.1){$12$} \put(-1,0.5){\line(1,0){8}} \multiput(0,0)(1,0){8}{\put(-1,0.4){\line(0,1){0.2}}} \put(-1.2,0){$0\varepsilon$} \put(-0.2,0){$\frac{1}{2}\varepsilon$} \put(0.8,0){$1\varepsilon$} \put(1.8,0){$\frac{3}{2}\varepsilon$} \put(4.25,-0.2){\line(0,1){1.5}} \put(4.15,1.7){$\downarrow$} 
\put(4.13,2.1){$\mathbf{h}$} \put(2.8,0){$2\varepsilon$} \put(3.8,0){$\frac{5}{2}\varepsilon$} \put(4.8,0){$3\varepsilon$} \put(5.8,0){$\frac{7}{2}\varepsilon$} \end{picture} \caption{Illustration of Definition \ref{def_lower_upper} and Definition \ref{def_extreme_individuals}.} \label{fig_def_borders} \end{center} \end{figure} In the configuration depicted in Figure \ref{fig_def_borders_2} we have $\tilde{l}=2$, $\tilde{u}=0$, $\hat{l}=2$, and $\hat{u}=5$. \begin{figure}[htp] \begin{center} \setlength{\unitlength}{1.0cm} \begin{picture}(6,3.0) \put(-1,1){\circle{0.3}} \put(1.7,1){\circle*{0.3}} \put(4.5,1){\circle{0.3}} \put(5.4,1){\circle{0.3}} \put(5.8,1){\circle{0.3}} \put(-1.1,1.7){$\downarrow$} \put(-1.1,2.1){$1$} \put(1.6,1.7){$\downarrow$} \put(1.6,2.1){$2$} \put(4.4,1.7){$\downarrow$} \put(4.4,2.1){$3$} \put(5.3,1.7){$\downarrow$} \put(5.31,2.1){$4$} \put(5.7,1.7){$\downarrow$} \put(5.71,2.1){$5$} \put(-1,0.5){\line(1,0){8}} \multiput(0,0)(1,0){8}{\put(-1,0.4){\line(0,1){0.2}}} \put(-1.2,0){$0\varepsilon$} \put(-0.2,0){$\frac{1}{2}\varepsilon$} \put(0.8,0){$1\varepsilon$} \put(1.8,0){$\frac{3}{2}\varepsilon$} \put(4.25,-0.2){\line(0,1){1.5}} \put(4.15,1.7){$\downarrow$} \put(4.13,2.1){$\mathbf{h}$} \put(2.8,0){$2\varepsilon$} \put(3.8,0){$\frac{5}{2}\varepsilon$} \put(4.8,0){$3\varepsilon$} \put(5.8,0){$\frac{7}{2}\varepsilon$} \end{picture} \caption{Illustration of a special case in Definition \ref{def_lower_upper}.} \label{fig_def_borders_2} \end{center} \end{figure} Note that the weights $\beta_{ij}$ may be asymmetric. Thus, the ordering of the opinions of the individuals may change from one time step to the next. As an example, consider three ignorants with starting positions $x_1(0)=1\varepsilon$, $x_2(0)=\frac{3}{2}\varepsilon$, and $x_3(0)=2\varepsilon$.
The weights may be given as $\beta_{11}(0)=0.01$, $\beta_{12}(0)=0.01$, $\beta_{13}(0)=0.98$, $\beta_{21}(0)=0.98$, $\beta_{22}(0)=0.01$, $\beta_{23}(0)=0.01$, $\beta_{31}(0)=0.4$, $\beta_{32}(0)=0.4$, and $\beta_{33}(0)=0.2$. After one time step the new opinions are given by $x_1(1)=1.985\varepsilon$, $x_2(1)=1.015\varepsilon$, and $x_3(1)=1.4\varepsilon$. We remark that it is possible to achieve every ordering of the three opinions in one time step by choosing suitable weights $\beta_{ij}$ in this example. Nevertheless we have the following straightforward lemma: \begin{lemma} \label{lemma_interval_1} Let $i$ be an ignorant, $l\in I_i^\varepsilon(t)$ an individual with smallest opinion, and $u\in I_i^\varepsilon(t)$ an individual with largest opinion. Then we have $x_i(t+1)\in[x_l(t),x_u(t)]$. \end{lemma} \begin{proof} This follows directly from the system dynamics in Equation (\ref{eq_update}). \end{proof} For truth seekers we have a similar lemma: \begin{lemma} \label{lemma_interval_2} Let $i$ be a truth seeker, $l\in I_i^\varepsilon(t)$ be an individual with smallest opinion, and let $u\in I_i^\varepsilon(t)$ be an individual with largest opinion. For $x_i(t)\le h$ we have $x_i(t+1)\in[x_l(t),\max(h,x_u(t))]$ and for $x_i(t)\ge h$ we have $x_i(t+1)\in[\min(h,x_l(t)),x_u(t)]$. \end{lemma} The proof is analogous to that of Lemma \ref{lemma_interval_1}, taking the additional attraction towards the truth into account. Our goal is to prove that the length of the hope interval converges to zero. To this end, we first show that the length does not increase after an iteration of Equation (\ref{eq_update}). \begin{lemma} \label{lemma_decreasing} For all time steps $t\in\mathbb{N}$ we have $x_{\hat{u}(t+1)}(t+1)\le x_{\hat{u}(t)}(t)$ and $x_{\hat{l}(t+1)}(t+1)\ge x_{\hat{l}(t)}(t)$. \end{lemma} \begin{proof} We only prove the last inequality since the proof is symmetric for the first inequality. Due to Definition \ref{def_extreme_individuals}, we have $x_{\hat{l}(t+1)}(t+1)\le h$ and $x_{\hat{l}(t)}(t)\le h$.
By $\mathcal{L}(t)$ we denote the set of individuals with opinion strictly smaller than $x_{\hat{l}(t)}(t)$. That is, $\mathcal{L}(t'):=\{i\in[n]\mid x_i(t')<x_{\hat{l}(t')}(t')\}$ for all $t'\ge t$. We remark that, by definition, $\mathcal{L}(t')$ does not contain a truth seeker. We set $\mathcal{U}(t'):=[n]\backslash\mathcal{L}(t')$; this set contains the remaining individuals. \medskip Let $u$ be an individual in $\mathcal{L}(t)$ with the largest opinion. Note that no individual $j\in\mathcal{L}(t)$ has a member of $\mathcal{U}(t)$ in its confidence set: otherwise $x_{\hat{l}(t)}(t)$ would lie between the two opinions, so that $|x_j(t)-x_{\hat{l}(t)}(t)|\le\varepsilon$ and $j$ would belong to the connectivity component of $\tilde{l}(t)$, contradicting $x_j(t)<x_{\hat{l}(t)}(t)$. By applying Lemma \ref{lemma_interval_1} we therefore get $x_i(t+1)\le x_u(t)$ for all $i\in\mathcal{L}(t)$. Now let $l$ (e.g.{}, $l=\hat{l}(t)$) be an individual in $\mathcal{U}(t)$ with smallest opinion. Then, by applying Lemma \ref{lemma_interval_1} and Lemma \ref{lemma_interval_2}, we obtain $x_i(t+1)\ge x_l(t)$ for all $i\in\mathcal{U}(t)$. Thus, we have $\hat{l}(t+1)\in\mathcal{U}(t)$ and so $x_{\hat{l}(t+1)}(t+1)\ge x_{\hat{l}(t)}(t)$ follows. \end{proof} In the remaining part of this article we prove that the length of the hope interval $|x_{\hat{u}(t)}(t)-x_{\hat{l}(t)}(t)|$ converges (in some special sense) to zero, as $t$ tends to infinity. \section{Proof of the Generalized Hegselmann-Krause Conjecture} One difficulty in the proof arises from the fact that convergence happens in two phases: in a first phase, the hope interval becomes sufficiently small so that the confidence graph is the complete graph. Then, it may happen that truth seekers approaching the truth from one side get distracted to the other side of the truth. At that point, however, the confidence structure is so simple that all individuals in the hope interval converge to the truth. Since all truth seekers are in the hope interval at all times, this proves the theorem. Where exactly we split the phases is a technical decision.
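The combinatorial objects of the previous section are easy to compute, which is convenient for exploring such configurations numerically. The following Python sketch (the function name and the brute-force component search are our choices) determines $\tilde{l}$, $\tilde{u}$, $\hat{l}$, and $\hat{u}$; applied to opinions read off (approximately) from Figure \ref{fig_def_borders} it recovers the values $\tilde{l}=4$, $\tilde{u}=9$, $\hat{l}=2$, $\hat{u}=12$ stated there.

```python
def hope_interval(x, seekers, h, eps):
    """Return (l~, u~, l^, u^) for opinions x[0..n-1] of individuals
    1..n.  l~/u~ are the extreme truth seekers around the truth h (0 if
    no truth seeker lies on that side); l^/u^ are the extreme
    individuals of the connectivity components of l~ and u~ in the
    confidence graph.  Vertex 0 carries the truth, x_0 = h."""
    vals = [h] + list(x)
    m = len(vals)
    comp = [-1] * m                      # connectivity components via DFS
    c = 0
    for s in range(m):
        if comp[s] != -1:
            continue
        stack, comp[s] = [s], c
        while stack:
            v = stack.pop()
            for w in range(m):
                if comp[w] == -1 and abs(vals[v] - vals[w]) <= eps:
                    comp[w] = c
                    stack.append(w)
        c += 1
    lo_val = min(vals[k] for k in seekers)
    hi_val = max(vals[k] for k in seekers)
    l_t = min(k for k in seekers if vals[k] == lo_val) if lo_val <= h else 0
    u_t = min(k for k in seekers if vals[k] == hi_val) if hi_val >= h else 0
    mem_l = [k for k in range(1, m) if comp[k] == comp[l_t]]
    mem_u = [k for k in range(1, m) if comp[k] == comp[u_t]]
    l_h = min(mem_l, key=lambda k: (vals[k], k)) if mem_l else 0
    u_h = min(mem_u, key=lambda k: (-vals[k], k)) if mem_u else 0
    return l_t, u_t, l_h, u_h

# Opinions in units of eps, approximating the illustrated configuration:
x = [0.0, 1.35, 1.65, 2.0, 2.175, 2.35, 2.5, 2.75, 3.0, 3.0, 3.2, 3.4]
print(hope_interval(x, {4, 6, 7, 9, 10}, h=2.625, eps=1.0))  # (4, 9, 2, 12)
```

As in the definition of the confidence graph, the truth vertex $0$ participates in the component computation, and ties are broken lexicographically.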
First, we show that after a finite number $T_1$ of time steps, depending only on $n$, $\varepsilon$, $\alpha$, and $\beta$, the hope interval $[x_{\hat{l}(T_1)}(T_1),x_{\hat{u}(T_1)}(T_1)]$ is contained in the interval $[h-\varepsilon-\frac{\varepsilon\alpha\beta}{12},h+\varepsilon+\frac{\varepsilon\alpha\beta}{12}]$. To this end, we introduce the following notion. \begin{definition} A \emph{good iteration} is an iteration in which, for some $1\le r\le 3$, one of the following conditions is fulfilled: \begin{enumerate} \item[(1)] the number of individuals in the hope interval decreases, \item[(2)] the opinion of $\hat{l}(t+r)$ reaches or passes $h-\varepsilon-\frac{\varepsilon\alpha\beta}{12}$, \item[(3)] the opinion of $\hat{u}(t+r)$ reaches or passes $h+\varepsilon+\frac{\varepsilon\alpha\beta}{12}$, \item[(4)] $|x_{\hat{u}(t+r)}(t+r)-x_{\hat{u}(t)}(t)|\ge\frac{\varepsilon\alpha\beta^2}{12}$, \item[(5)] $|x_{\hat{l}(t+r)}(t+r)-x_{\hat{l}(t)}(t)|\ge\frac{\varepsilon\alpha\beta^2}{12}$. \end{enumerate} \end{definition} Clearly, there is only a finite number of good iterations. We may choose $T_1=3\cdot\left(n+2\cdot 1+2\cdot\frac{12}{\varepsilon\alpha\beta^2}\right)$. We formulate the next two lemmas only for the lower bound $x_{\hat{l}(t)}(t)$ because analogous arguments hold for $x_{\hat{u}(t)}(t)$. As a shorthand we define $d(i,j,t):=\left|x_i(t)-x_j(t)\right|$. For each point in time $t$ we define the sets \begin{eqnarray*} &&\mathcal{N}(t):=\left\{i\in[n]\mid d(\hat{l}(t),i,t)\in\Big[0,\frac{\varepsilon\alpha\beta}{12} \Big)\right\},\\ &&\mathcal{M}(t):=\left\{i\in[n]\mid d(\hat{l}(t),i,t)\in\Big[\frac{\varepsilon\alpha\beta}{12}, \varepsilon\Big]\right\}, \text{ and}\\ &&\mathcal{F}(t):=\left\{i\in[n]\mid x_i(t)-x_{\hat{l}(t)}(t)>\varepsilon\right\}. \end{eqnarray*} \begin{lemma} \label{lemma_middle} If $\mathcal{M}(t)\neq\emptyset$ then there is a good iteration after $1$ step.
\end{lemma} \begin{proof} We assume that there is an individual $j\in\mathcal{M}(t)$, i.e.{}, $d(\hat{l}(t),j,t)\in\left[\frac{\varepsilon\alpha\beta}{12},\varepsilon\right]$. For the evaluation of Equation (\ref{eq_update}) for elements of $\mathcal{N}(t)$, $\mathcal{M}(t)$, or $\mathcal{F}(t)$ we do not need to consider the opinion of individuals in $[n]\backslash(\mathcal{N}(t)\cup\mathcal{M}(t)\cup\mathcal{F}(t))$. Let $i$ be an element of $\mathcal{N}(t)$ with opinion $x_i(t)=x_{\hat{l}(t)}+\delta$, where $0\le\delta<\frac{\varepsilon\alpha\beta}{12}$. Let us first assume that $i$ is an ignorant. Due to Individual~$j$ we have \begin{eqnarray*} x_i(t+1)&\ge& x_i(t)-\underset{\text{individuals in }\mathcal{N}(t)\backslash\{i\}} {\underbrace{\delta\left(1-2\beta\right)}} +\underset{i}{\underbrace{0\cdot\beta}} +\underset{j}{\underbrace{ \left(\frac{\varepsilon\alpha\beta}{12}-\delta\right)\cdot\beta}}\\ &\ge& x_{\hat{l}(t)}+\frac{\varepsilon\alpha\beta^2}{12}. \end{eqnarray*} For a truth seeker we similarly get \begin{eqnarray*} x_i(t+1)&\ge&x_i(t)+\alpha\varepsilon+(1-\alpha) \left(-\delta\left(1-2\beta\right)+\left(\frac{\varepsilon\alpha\beta}{12} -\delta\right)\cdot\beta\right)\\ &\ge&x_{\hat{l}(t)}+\frac{\varepsilon\alpha\beta^2}{12}. \end{eqnarray*} Now let $i$ be an element of $\mathcal{M}(t)\cup\mathcal{F}(t)$ with $x_i(t)=x_{\hat{l}(t)}+\delta$ where $\delta\ge\frac{\varepsilon\alpha\beta}{12}$. In any case ($i$ being a truth seeker or an ignorant) we have $$ x_i(t+1)\ge x_{\hat{l}(t)}+\delta-\underset{\text{individuals with smaller opinion than $i$}}{\underbrace{\delta(1-\beta)}}+\beta\cdot 0\ge x_{\hat{l}(t)}+\frac{\varepsilon\alpha\beta^2}{12}. $$ \end{proof} \begin{lemma} If $x_{\hat{l}(t)} < h-\varepsilon-\frac{\varepsilon\alpha\beta}{12}$ then after at most $3$ time steps we have a good iteration. \end{lemma} \begin{proof} Due to Lemma \ref{lemma_middle} we can assume $\mathcal{M}(t)=\mathcal{M}(t+1)=\mathcal{M}(t+2)=\emptyset$.
We can also assume \begin{eqnarray*} \left|x_{\hat{l}(t)}(t)-x_{\hat{l}(t+1)}(t+1)\right|&<&\frac{\varepsilon\alpha\beta^2}{12},\\ \left|x_{\hat{l}(t+1)}(t+1)-x_{\hat{l}(t+2)}(t+2)\right|&<&\frac{\varepsilon\alpha\beta^2}{12},\text{ and}\\ d\left(\hat{l}(t+1),0,t+1\right)&>&\varepsilon+\frac{\varepsilon\alpha\beta}{12} \end{eqnarray*} since otherwise we have a good iteration in at most $2$ time steps. First, we claim $\mathcal{N}(t+1)\cap K=\emptyset$. If at time $t$ there is a truth seeker $i\in\mathcal{N}(t)\cap K$ then we have \begin{eqnarray*} x_i(t+1)&\ge& x_{\hat{l}(t)}(t)+\alpha\varepsilon-\frac{(1-\alpha)(1-\beta)\varepsilon\alpha\beta}{12}\\ &\ge& x_{\hat{l}(t)}(t) +\frac{\varepsilon\alpha\beta^2}{12}+\frac{\varepsilon\alpha\beta}{12}\\ &\ge& x_{\hat{l}(t+1)}(t+1) +\frac{\varepsilon\alpha\beta}{12}. \end{eqnarray*} So the only truth seekers that have a chance to move into the set $\mathcal{N}(t+1)$ are those of the set $\mathcal{F}(t)$. So let truth seeker $i$ be in the set $\mathcal{F}(t)\cap K$, with $x_i(t)=x_{\hat{l}(t)}(t)+\delta$, where $\varepsilon<\delta<\varepsilon+\frac{\varepsilon\alpha\beta}{12}$. (Truth seekers with $\delta\ge \varepsilon+\frac{\varepsilon\alpha\beta}{12}$ are ruled out by Lemma \ref{lemma_interval_2}.) We have \begin{eqnarray*} x_i(t+1) &\ge& \underset{\le x_i(t)}{\underbrace{x_{\hat{l}(t)}(t)+\varepsilon}}-(1-\alpha)(1-2\beta)\varepsilon\\ &\ge& x_{\hat{l}(t)}(t)+\varepsilon\alpha\\ &\ge& x_{\hat{l}(t)}(t)+\frac{\varepsilon\alpha\beta^2}{12}+\frac{\varepsilon\alpha\beta}{12}\\ &\ge& x_{\hat{l}(t+1)}(t+1)+\frac{\varepsilon\alpha\beta}{12}. \end{eqnarray*} Similarly, we can deduce $\mathcal{N}(t+2)\cap K=\emptyset$. Now we can assume that the individuals of $\mathcal{N}(t+1)$, who are all ignorants, are in the hope interval at time $t+1$, since otherwise we would have a good iteration after $1$ time step.
So there exist individuals $i\in\mathcal{N}(t+1)$ and $j\in\mathcal{F}(t+1)$ with $\left|x_i(t+1)-x_j(t+1)\right|\le\varepsilon$. We set $x_i(t+1)=x_{\hat{l}(t+1)}(t+1)+\delta$, where $0\le \delta\le\frac{\varepsilon\alpha\beta}{12}$ and calculate \begin{eqnarray*} x_i(t+2) &\ge& x_i(t+1)-(1-2\beta)\delta+ \underset{i}{\underbrace{\beta\cdot 0}} +\underset{j}{\underbrace{\beta\left(\varepsilon-\frac{\varepsilon\alpha\beta}{12}\right)}}\\ &\ge& x_{\hat{l}(t+1)}(t+1)+\frac{\varepsilon\alpha\beta^2}{12}+\frac{\varepsilon\alpha\beta}{12}\\ &\ge& x_{\hat{l}(t+2)}(t+2)+\frac{\varepsilon\alpha\beta}{12}. \end{eqnarray*} For the other direction we have \begin{eqnarray*} x_i(t+2) &\le& x_i(t+1)-\underset{\hat{l}(t+1)}{\underbrace{\beta\delta}}+(1-2\beta)\varepsilon\\ &\le& x_{\hat{l}(t+1)}(t+1)+\frac{\varepsilon\alpha\beta}{12}+\varepsilon-2\beta\varepsilon\\ &\le& x_{\hat{l}(t+1)}(t+1)+\frac{\varepsilon\alpha\beta^2}{12}+\varepsilon\\ &\le& x_{\hat{l}(t+2)}(t+2)+\varepsilon. \end{eqnarray*} Thus, $i\in\mathcal{M}(t+2)$, which results in a good iteration in three time steps. \end{proof} Thus, we can conclude: \begin{corollary} After a finite number $T_1(\varepsilon,n,\alpha,\beta)$ of steps we have $x_{\hat{l}(T_1)}(T_1)\ge h- \varepsilon-\frac{\varepsilon\alpha\beta}{12}$ and $x_{\hat{u}(T_1)}(T_1)\le h+\varepsilon+\frac{\varepsilon\alpha\beta}{12}$. \end{corollary} Due to Lemma \ref{lemma_lonely} there cannot exist a general bound on the convergence that does not depend on $\alpha$. We consider the two side lengths $\ell_2(t):=|x_{\hat{u}(t)}(t)-h|$ and $\ell_1(t):=|x_{\hat{l}(t)}(t)-h|$ of the hope interval. Clearly $\ell_1(t)$ and $\ell_2(t)$ are not increasing due to Lemma \ref{lemma_decreasing}.
For $t\ge T_1$ we have $\ell_1(t),\ell_2(t)\le\varepsilon+\frac{\varepsilon\alpha\beta}{12}$. \begin{lemma} \label{lemma_epsilon_interval} If $\ell_1(t)+\ell_2(t)\le\varepsilon$ then we have $$ (\ell_1(t+2)+\ell_2(t+2))\le (\ell_1(t)+\ell_2(t))\cdot\left(1-\frac{\alpha\beta}{2}\right). $$ \end{lemma} \begin{proof} Let us assume, without loss of generality, that $\ell_1(t)\ge \ell_2(t)$. At first we consider the case $\ell_2(t)>0$. If $i$ is an ignorant with $x_i(t)=h-\ell_1(t)+\delta$ then we have \begin{eqnarray*} x_i(t+1) &\ge& h-\ell_1(t)+\delta -(1-2\beta)\delta+\beta(\ell_1(t)+\ell_2(t)-\delta)\\ &\ge& h-(1-\beta)\ell_1(t). \end{eqnarray*} For a truth seeker $i$ with $x_i(t)=h-\ell_1(t)+\delta$ we have \begin{eqnarray*} x_i(t+1) &\ge& h-\ell_1(t)+\delta-\alpha(\delta-\ell_1(t))-(1-\alpha)(1-2\beta)\delta+\\ &&(1-\alpha)\beta(\ell_1(t)+\ell_2(t)-\delta)\\ &\ge&h-\ell_1(t)+\beta\delta(1-\alpha)+\alpha \ell_1(t)(1-\beta)+\beta \ell_2(t)(1-\alpha)+\beta \ell_1(t)\\ &\ge&h-(1-\beta)\ell_1(t). \end{eqnarray*} Similarly we obtain $x_i(t+1)\le h+(1-\beta)\ell_2(t)$ in both cases. \medskip Next we consider the case $\ell_1(t)>\ell_2(t)=0$ and $\ell_1(t+1)>\ell_1(t)\cdot\left(1-\frac{\alpha}{2}\right)$. Let $i$ be an arbitrary truth seeker with opinion $x_i(t)=h-\ell_1(t)+\delta$. We have \begin{eqnarray*} x_i(t+1)&\ge& h-\ell_1(t)+\delta+\alpha(\ell_1(t)-\delta)-(1-\alpha)(1-\beta)\delta\\ &\ge & h-\ell_1(t)+\alpha \ell_1(t). \end{eqnarray*} Thus, we have $x_i(t+1)\ge h-\ell_1(t+1)+\frac{\alpha}{2}\cdot \ell_1(t)$. If $j$ is an ignorant with $x_j(t+1)=h-\ell_1(t+1)+\delta$, then we have \begin{eqnarray*} x_j(t+2)&\ge& h-\ell_1(t+1)+\delta-(1-2\beta)\delta+\beta\left(\frac{\alpha}{2}\cdot \ell_1(t)-\delta\right)\\ &\ge& h-\ell_1(t)+\frac{\alpha\beta \ell_1(t)}{2}. \end{eqnarray*} For an arbitrary truth seeker $j$ we have \begin{eqnarray*} x_j(t+2)&\ge& h-\ell_1(t+1)+\alpha \ell_1(t+1)\\ &\ge& h-\ell_1(t)+\frac{\alpha\beta \ell_1(t)}{2}.
\end{eqnarray*} \medskip \noindent Thus, in all cases we have $(\ell_1(t+2)+\ell_2(t+2))\le (\ell_1(t)+\ell_2(t))\cdot\left(1-\frac{\alpha\beta}{2}\right)$. \end{proof} This states that once the length of the hope interval becomes at most $\varepsilon$, it converges to zero. \begin{lemma} \label{lemma_one_step} Let $t\ge T_1$. If there exists an individual~$i$ with $\frac{\alpha\beta \ell_1(t)}{12}\le d(\hat{l}(t),i,t)\le\varepsilon$, then we have $\ell_1(t+1)\le \ell_1(t)\cdot \left(1-\frac{\alpha\beta^2}{12}\right)$. If there exists an individual $i$ with $\frac{\alpha\beta \ell_2(t)}{12}\le d(\hat{u}(t),i,t)\le\varepsilon$, then we have $\ell_2(t+1)\le \ell_2(t)\cdot \left(1-\frac{\alpha\beta^2}{12}\right)$. \end{lemma} \begin{proof} Due to symmetry it suffices to prove the first statement. Let $j$ be an ignorant with $x_j(t)=h-\ell_1(t)+\delta$, where $\delta\ge 0$. We have \begin{eqnarray*} x_j(t+1) &\ge& h-\ell_1(t)+\delta-(1-2\beta)\delta+\underset{i}{\underbrace{\beta\left( \frac{\alpha\beta \ell_1(t)}{12}-\delta\right)}}\\ &\ge& h-\left(1-\frac{\alpha\beta^2}{12}\right)\ell_1(t). \end{eqnarray*} For a truth seeker $j$ with $x_j(t)=h-\ell_1(t)+\delta$, $\delta\ge 0$ we have \begin{eqnarray*} x_j(t+1) &\ge& h-\ell_1(t)+\delta+\alpha(\ell_1(t)-\delta)-(1-\alpha)(1-2\beta)\delta+\\ &&\underset{i}{\underbrace{(1-\alpha)\beta\left(\frac{\alpha\beta \ell_1(t)}{12}-\delta\right)}}\\ &\ge& h-\ell_1(t)+\beta\delta(1-\alpha)+\alpha \ell_1(t)\left(1-\frac{\alpha\beta^2}{12}\right)+\frac{\alpha\beta^2 \ell_1(t)}{12}\\ &\ge& h-\left(1-\frac{\alpha\beta^2}{12}\right)\ell_1(t).
\end{eqnarray*} \end{proof} For transparency we introduce the following six sets: \begin{eqnarray*} \mathcal{N}_1(t):&=&\left\{i\in[n]\mid d(\hat{l}(t),i,t) <\frac{\alpha\beta \ell_1(t)}{12}\right\},\\ \mathcal{N}_2(t):&=&\left\{i\in[n]\mid d(\hat{u}(t),i,t) <\frac{\alpha\beta \ell_2(t)}{12}\right\},\\ \mathcal{M}_1(t):&=&\left\{i\in[n]\mid \frac{\alpha\beta \ell_1(t)}{12}\le d(\hat{l}(t),i,t)\le\varepsilon\right\},\\ \mathcal{M}_2(t):&=&\left\{i\in[n]\mid \frac{\alpha\beta \ell_2(t)}{12}\le d(\hat{u}(t),i,t)\le\varepsilon\right\},\\ \mathcal{F}_1(t):&=&\left\{i\in[n]\mid d(\hat{l}(t),i,t)>\varepsilon,\,x_i(t)\le h+\ell_2(t)\right\},\\ \mathcal{F}_2(t):&=&\left\{i\in[n]\mid d(\hat{u}(t),i,t)>\varepsilon,\,x_i(t)\ge h-\ell_1(t)\right\}.\\ \end{eqnarray*} With this, the individuals of the hope interval are partitioned into $$ \mathcal{N}_1(t)\cup\mathcal{M}_1(t)\cup\mathcal{F}_1(t)=\mathcal{N}_2(t)\cup\mathcal{M}_2(t)\cup\mathcal{F}_2(t). $$ \begin{lemma} \label{lemma_two_step} If for $k\in\{1,2\}$ and $t\ge T_1$ there exists an ignorant $i\in \mathcal{N}_k(t)$ and an individual $j\in\mathcal{F}_k(t)$ with $|x_i(t)-x_j(t)|\le\varepsilon$ then $\ell_k(t+2)\le \ell_k(t)\cdot \left(1-\frac{\alpha\beta^2}{12}\right)$. \end{lemma} \begin{proof} If $\ell_k(t+1)>\ell_k(t)\cdot \left(1-\frac{\alpha\beta^2}{12}\right)$, then it is easy to check that the influence of individual~$j$ suffices to put ignorant $i$ in set $\mathcal{M}_k(t+1)$. In this case we can apply Lemma \ref{lemma_one_step}. \end{proof} \begin{lemma} \label{lemma_no_near_truth_seeker} If $\mathcal{N}_k(t+1)\cap K\neq\emptyset$ and $t\ge T_1$ then $\ell_k(t+1)\le \ell_k(t)\cdot\left(1-\frac{\alpha}{2}\right)$ for $k\in\{1,2\}$. \end{lemma} \begin{proof} Due to symmetry it suffices to consider $k=1$. So let $i$ be a truth seeker with $i\in\mathcal{N}_1(t+1)$.
We set $x_i(t)=h-\ell_1(t)+\delta$ and calculate \begin{eqnarray*} x_i(t+1) &\ge& h-\ell_1(t)+\delta +\alpha(\ell_1(t)-\delta)-(1-\alpha)(1-\beta)\delta\\ &\ge& h-(1-\alpha)\ell_1(t). \end{eqnarray*} \end{proof} \begin{lemma} \label{lemma_one_shrinks} For $t\ge T_1$ we have $\ell_k(t+3)\le \ell_k(t)\cdot \left(1-\frac{\alpha\beta^2}{12}\right)$ for at least one $k\in\{1,2\}$. \end{lemma} \begin{proof} Due to Lemma \ref{lemma_no_near_truth_seeker} we can assume $\mathcal{N}_k(t+1)\cap K=\emptyset$. At time $t+1$ there must be a truth seeker $i$. Without loss of generality, we assume $x_i(t)\le h$ and $i=\tilde{l}(t+1)$. Due to Lemma \ref{lemma_one_step} we can assume $i\in\mathcal{F}_1(t+1)$. Now let $j_1$ be the ignorant with smallest opinion fulfilling $d(i,j_1,t+1)\le\varepsilon$. If $j_1\in\mathcal{N}_1(t+1)$ then we can apply Lemma \ref{lemma_two_step} with $j_1$ and $i$. Otherwise we let $j_2$ be the ignorant with smallest opinion fulfilling $d(j_1,j_2,t+1)\le\varepsilon$. So we have $d(j_2,i,t+1)>\varepsilon$ and $j_2\in\mathcal{N}_1(t+1)$. Thus, we can apply Lemma \ref{lemma_two_step} with $j_2$ and $j_1$. \end{proof} \begin{lemma} \label{lemma_greater_epsilon} If $\ell_k(t)>\varepsilon$ and $t\ge T_1$ then we have $\ell_k(t+3)-\varepsilon\le (\ell_k(t)-\varepsilon)\cdot \left(1-\frac{\alpha\beta^2}{12}\right)$ or $\ell_k(t+3)\le\varepsilon$. \end{lemma} \begin{proof} Due to Lemma \ref{lemma_no_near_truth_seeker}, we can assume $\mathcal{N}_k(t+1)\cap K=\emptyset$ and, due to Lemma \ref{lemma_one_step}, we can assume $\mathcal{M}_k(t+1)=\emptyset$. Due to symmetry, we only consider the case $k=1$. Let $i\in\mathcal{N}_1(t+1)$ be the ignorant with largest opinion $x_i(t+1)$, meaning that $d(\hat{l}(t+1),i,t+1)$ is maximal. If there exists an individual $j\in\mathcal{F}_1(t+1)$ with $d(i,j,t+1)\le\varepsilon$, then we can apply Lemma \ref{lemma_two_step}. If no such individual $j$ exists then we must have $d(i,0,t+1)\le\varepsilon$ or $\ell_1(t+1)=0$.
So only the first case remains. We set $\delta=d(\hat{l}(t+1),i,t+1)\ge \varepsilon-\ell_1(t+1)$. Let $m\in\mathcal{N}_1(t+1)$ be an ignorant with $x_m(t+1)=x_{\hat{l}(t+1)}(t+1)+\mu$, where $0\le\mu\le\delta$. For time $t+2$ we get \begin{eqnarray*} x_m(t+2) &\ge& x_{\hat{l}(t+1)}(t+1)+\mu -(1-2\beta)\mu+\beta(\delta-\mu)\\ &\ge& x_{\hat{l}(t+1)}(t+1)+\beta\delta\\ &\ge& x_{\hat{l}(t+1)}(t+1)+\beta(\varepsilon-\ell_1(t+1)). \end{eqnarray*} \end{proof} From Lemma \ref{lemma_one_shrinks} and Lemma \ref{lemma_greater_epsilon} we conclude: \begin{corollary} \label{lemma_everything_small} There exists a finite number $T_2(\varepsilon,n,\alpha,\beta)$ so that we have $$ \ell_1(t)+\ell_2(t)\le\varepsilon+\frac{\varepsilon\alpha^2\beta^3}{60},\quad\text{and}\quad \min(\ell_1(t),\ell_2(t))\le\frac{\varepsilon\alpha^2\beta^3}{60} $$ for all $t\ge T_2(\varepsilon,n,\alpha,\beta)$. \end{corollary} We would like to remark that, e.g.{}, $T_2(\varepsilon,n,\alpha,\beta)=T_1(\varepsilon,n,\alpha,\beta)+\frac{36}{\alpha^2\beta^4}$ suffices. \begin{lemma} \label{lemma_final} For each $t\ge T_2(\varepsilon,n,\alpha,\beta)$ we have $$ \ell_1(t+3)+\ell_2(t+3)\le\varepsilon $$ or $$ d(k,0,t)\le\frac{\varepsilon\alpha^2\beta^3}{60}\cdot\left(1-\frac{\alpha\beta}{2}\right)^{\left\lfloor\frac{t-T_2}{2}\right\rfloor} $$ for all truth seekers $k\in K$. \end{lemma} \begin{proof} Without loss of generality, we assume $\ell_1(t)\le\frac{\varepsilon\alpha^2\beta^3}{60}$ and prove the statement by induction on $t$. Due to Lemma \ref{lemma_one_step} and Lemma \ref{lemma_no_near_truth_seeker}, we can assume $\mathcal{M}_2(t+r)=\mathcal{N}_2(t+r)\cap K=\emptyset$ for $r\in\{0,1\}$ since otherwise we would have $\ell_1(t+3)+\ell_2(t+3)\le\varepsilon$. Thus, we have $d(k,0,t+r)\le\frac{\varepsilon\alpha^2\beta^3}{60}$ for all $k\in K$ and $K\subseteq \mathcal{F}_2(t+r)$.
Due to Lemma \ref{lemma_one_step} for $r\in\{0,1\}$, the individuals in $\mathcal{F}_2(t+r)$ are not influenced by the individuals outside $\mathcal{F}_2(t+r)$ since otherwise we would have $\ell_1(t+3)+\ell_2(t+3)\le\varepsilon$. Thus, we can apply Lemma \ref{lemma_epsilon_interval} for the individuals in $\mathcal{F}_2(t)$. \end{proof} From the previous lemmas we can conclude Theorem \ref{main_result} and Theorem \ref{thm_interupted_convergent}. \begin{proof}(Proof of Theorems~\ref{main_result} and~\ref{thm_interupted_convergent}.) After a finite time $T_2(\varepsilon,n,\alpha,\beta)$ we are in a \textit{nice} situation as described in Lemma \ref{lemma_everything_small}. If we have $\ell_1(T_2+3)+\ell_2(T_2+3)\le\varepsilon$ then we have an ordinary convergence of the truth seekers as described in Lemma \ref{lemma_epsilon_interval}. Otherwise we have $d(k,0,T_2)\le\frac{\varepsilon\alpha^2\beta^3}{60}$ for all truth seekers $k\in K$. Due to Lemma \ref{lemma_final} and Lemma \ref{lemma_epsilon_interval} either we have $$ d(k,0,t)\le\frac{\varepsilon\alpha^2\beta^3}{60}\cdot\left(1-\frac{\alpha\beta}{2}\right)^{\left\lfloor\frac{t-T_2}{2}\right\rfloor} $$ for all truth seekers $k\in K$ and all $t\ge T_2$, or there exists an $S\in\mathbb{N}$, such that we have \begin{enumerate} \item[(1)] $d(k,0,t)\le\frac{\varepsilon\alpha^2\beta^3}{60}\cdot\left(1-\frac{\alpha\beta}{2}\right)^{\left\lfloor\frac{t-T_2}{2}\right\rfloor}$ for all $T_2\le t\le S$, \item[(2)] $d(k,0,t)\le\varepsilon\left(1-\frac{\alpha\beta}{2}\right)^{\left\lfloor\frac{t-S-3}{2}\right\rfloor}$ for all $t\ge S+3$, \end{enumerate} for all $k\in K$. The latter case is $1$-fold interrupted convergence. Thus, the Hegselmann-Krause Conjecture is proven. \end{proof} \section{Remarks} In this section we would like to generalize the Hegselmann-Krause Conjecture and show which requirements cannot be weakened.
\begin{lemma} A finite number $n$ of individuals and symmetric confidence intervals are necessary for a convergence of the truth seekers. \end{lemma} \begin{proof} Infinitely many ignorants can clearly hinder a truth seeker from converging to the truth. If the confidence intervals are not symmetric then it is easy to design a situation where some ignorants influence a truth seeker while the truth seeker does not influence the ignorants, so that the truth seeker has no chance to converge to the truth. \end{proof} \begin{lemma} The condition $\beta_{ij}(t)\ge\beta>0$ is necessary for a convergence of the truth seekers. \end{lemma} \begin{proof} If we only require $\beta_{ij}(t)>0$, then we have the following example: $n=2$, $x_1(0)=1-\frac{1}{5}\varepsilon$, $x_2(0)=1-\varepsilon$, $\alpha_1(t)=\frac{1}{5}$, $\alpha_2(t)=0$, $\beta_{11}(t)=\left(\frac{1}{2}\right)^{t+1}$, $\beta_{12}(t)=1-\left(\frac{1}{2}\right)^{t+1}$, $\beta_{21}(t)=\left(\frac{1}{2}\right)^{t+1}$, $\beta_{22}(t)=1-\left(\frac{1}{2}\right)^{t+1}$, and $h=1$. By a straightforward calculation we find that $|x_1(t)-h|\ge\frac{1}{2}\varepsilon$ for $t\ge 1$. \end{proof} We remark that conditions like $\beta_{ij}(t)+\beta_{ij}(t+1)\ge 2\beta$ would also not force a convergence of the truth seekers in general. One might consider an example consisting of two ignorants with starting positions $h\pm\frac{7}{10}\varepsilon$ and a truth seeker $k$ with starting position $h-\frac{1}{5}\varepsilon$. We may choose suitable $\beta_{ij}(t)$ and $\alpha_i(t)$ so that we have $|h-x_k(t)|\ge\frac{1}{5}\varepsilon$ for all $t$, $h-x_k(t)\ge\frac{1}{5}\varepsilon$ for even $t$ and $x_k(t)-h\ge\frac{1}{5}\varepsilon$ for odd $t$. \medskip For the next lemma we need a generalization of Definition \ref{def_interupted_convergent}.
\begin{definition} Given $\varepsilon$, $\alpha$, $\beta$, $n$, we say that the truth seekers $k \in K$ are $r$-fold interrupted convergent, if for each $\gamma > 0$ there exist $r+1$ functions $T_i^s(\gamma,\varepsilon,\alpha,\beta,n,T_{i-1}^e)$, $i=1,\dots,r+1$, so that for each (WASBOCOS) $\Omega$ with structural parameters $\varepsilon$, $\alpha$, $\beta$ and $n$ there exist $T_i^e\in\mathbb{N}$, $i=1,\dots,r$ satisfying $$ \forall k\in K,\,\forall t\in[T_i^s(\gamma,\varepsilon,\alpha,\beta,n,T_{i-1}^e),T_i^e]:\, |x_k(t)-h|<\gamma $$ for $i=1,\dots,r$, where $T_0^e=0$, and $$ \forall k\in K,\,\forall t\ge T_{r+1}^s(\gamma,\varepsilon,\alpha,\beta,n,T_r^e):\,|x_k(t)-h|<\gamma. $$ \end{definition} \begin{lemma} The condition $\alpha_i(t)=0$ for all $i\in\overline{K}$ is necessary for Theorem \ref{thm_interupted_convergent}. If it is dropped then the truth seekers are not ($|\overline{K}|-1$)-fold convergent in general. \end{lemma} \begin{proof} At first we remark that it clearly suffices to have $\alpha_i(t)=0$ for all $i\in\overline{K}$ only for all $t\ge T$, where $T$ is a fixed integer. W.l.o.g.{} we assume $T=0$ and consider the following example: $h=1$, $x_i(0)=1-2i\varepsilon$, $1\in K$, $1\neq i\in\overline{K}$, $\beta_{ij}(t)=\beta$, $\alpha_i(t)=\alpha$ for the truth seekers, and $\alpha_i(t)=0$ for the ignorants until we say otherwise. Let $\gamma>0$ be given and sufficiently small. There exists a time $T_1$ with $x_1(T_1)<1-\gamma$. Up to this time no other individual has changed its opinion. After time $T_1+1$ we suitably choose $\alpha_2(t)$ so that at some time $\tilde{T}_1>T_1$ we have $\frac{1}{2}\varepsilon\le x_1(\tilde{T}_1)-x_2(\tilde{T}_1)\le\varepsilon$. So at time $\tilde{T}_1+1$ the convergence of truth seeker $1$ is interrupted for the first time. After that we may arrange it that $x_1$ and $x_2$ reach an equal opinion and will never differ in their opinions in the future.
Now there exists a time $T_2$ with $x_2(T_2)=x_1(T_2)<1-\gamma$, and we may apply our construction described above again. Thus, every ignorant $i\in\overline{K}$ may cause an interruption of the convergence of the truth seekers. \end{proof} \begin{conjecture} If we drop the condition $\alpha_i(t)=0$ for all $i\in\overline{K}$ in Theorem \ref{thm_interupted_convergent} then we have ($|\overline{K}|$)-fold convergence of the truth seekers. \end{conjecture} The Hegselmann-Krause Conjecture might be generalized to opinions in $\mathbb{R}^m$ instead of $\mathbb{R}$ if we use a norm instead of $|\cdot|$ in the definition of the update formula. Using our approach to prove this $m$-dimensional conjecture would become very technical, so new ideas and tools are needed. We give an even stronger conjecture: \begin{conjecture} The $m$-dimensional generalized Hegselmann-Krause Conjecture holds and there exists a function $\phi(\Omega,\gamma)$ so that the truth seekers in an arbitrary generalized (WASBOCOS) $\Omega$ are $\phi(\Omega,\gamma)$-fold interrupted convergent in $\varepsilon$, $\alpha$, $\beta$, and $n$.
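The qualitative behaviour analysed above is easy to observe numerically. The following minimal sketch is illustrative only: it assumes equal confidence weights over the $\varepsilon$-neighbourhood (so that $\beta_{ij}\ge\beta$ holds for any $\beta\le 1/n$) and a constant attraction $\alpha_i(t)=\alpha$ for the single truth seeker; the agent positions and parameter values are made up for the example and do not come from the text.

```python
def hk_step(x, seekers, h, eps, alpha):
    """One synchronous update of a bounded-confidence model with truth
    seekers: every agent moves to the equally weighted average of the
    opinions within distance eps of its own; a truth seeker additionally
    moves a fraction alpha towards the truth h."""
    out = []
    for i in range(len(x)):
        nbrs = [xj for xj in x if abs(x[i] - xj) <= eps]
        avg = sum(nbrs) / len(nbrs)
        out.append(alpha * h + (1 - alpha) * avg if i in seekers else avg)
    return out

# One truth seeker (index 0) and two ignorants; the truth sits at h = 1.
x, h, eps, alpha = [0.0, 0.2, 0.9], 1.0, 0.25, 0.1
for _ in range(1000):
    x = hk_step(x, {0}, h, eps, alpha)
# The seeker drags its eps-neighbourhood towards h, eventually
# absorbing the ignorant that started at 0.9 as well.
```

In this run all three opinions end up close to $h$, matching the qualitative picture of the convergence proof; with $\alpha_i(t)>0$ for ignorants, or with asymmetric confidence intervals, the run can instead stall or oscillate, as discussed in the remarks above.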
\section{Introduction} There is plenty of indirect cosmological evidence~\cite{ref::wimpcosmology} that the vast majority of the Universe's energy content is dark, with about 25\% being in the form of dark matter which builds large scale structures. Another 70\% is the mysterious dark energy, responsible for the accelerated expansion of the Universe, and only about 5\% is ``ordinary'', baryonic matter which forms stars, planets, and eventually us. There is no known particle in the Standard Model of Particle Physics which could be the dark matter particle; neutrinos, for example, are too light and fast. Hence the dark matter particle must come from new physics and is as yet unknown. One of the most favored candidates is the weakly interacting massive particle (WIMP)~\cite{ref::wimps}, which arises naturally in several extensions of the Standard Model, such as Supersymmetry, Universal Extra Dimensions, and little Higgs models. Many experiments aim at the direct detection of these particles by measuring nuclear recoils of target nuclei after they interact with a WIMP~\cite{ref::directdetect}. Sensitive detectors are placed in deep underground laboratories in order to suppress backgrounds induced by cosmic rays and their daughter particles. The expected WIMP interaction rate is less than 1~event per kg of target material and year, and the featureless recoil spectrum falls exponentially, with typical energies of only tens of~keV. These experiments use different targets and detection methods, which all have different pros and cons. Liquid noble gases such as xenon and argon, but possibly also neon, have played an important role in the field for about a decade and are currently placing the most stringent limits on spin-independent WIMP-nucleon cross-sections over a large range of WIMP masses~\cite{ref::xe100run08,ref::xe10s2only}. This article gives a brief review on these detectors.
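The numbers above can be made concrete with a toy calculation: for a recoil spectrum of the form $dR/dE \propto \exp(-E/E_0)$, the fraction of recoils above an analysis threshold $E_{thr}$ is simply $\exp(-E_{thr}/E_0)$. The characteristic energy $E_0=15$~keV used below is an assumed illustrative value, not tied to a specific WIMP mass or target.

```python
import math

def fraction_above_threshold(e_thr_kev, e0_kev):
    # For dR/dE proportional to exp(-E/E0), the integral above E_thr,
    # normalised to the total rate, is exp(-E_thr/E0).
    return math.exp(-e_thr_kev / e0_kev)

E0 = 15.0  # assumed characteristic recoil energy in keV (illustrative)
frac_5keV = fraction_above_threshold(5.0, E0)    # ~72% of all recoils
frac_20keV = fraction_above_threshold(20.0, E0)  # ~26% of all recoils
```

Lowering the threshold from 20~keV to 5~keV almost triples the accessible fraction of the spectrum, which is why detection thresholds feature so prominently in the discussion below.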
\section{Noble Gases as WIMP Targets} The noble gases neon (Ne), argon (Ar), and xenon (Xe), which in liquid phase are all used or being considered as target materials for WIMP searches, have boiling points between 27.1~K (Ne) and 165.0~K (Xe), see Table~\ref{tab::gases}. This makes operation easier than for cryogenic detectors, which have to be run at mK~temperatures. Xe and Ar can even be liquefied using liquid nitrogen. All three elements are excellent scintillators with very high light yields, and liquid xenon (LXe) and liquid argon (LAr) are very good ionizers as well, allowing for a direct measurement of the ionization signal induced by particle interactions. For this reason, mainly LAr and LXe are employed in current and future experiments, while neon is currently only considered as one option for one experiment (see CLEAN in Sect.~\ref{ref::experiments}). Hence, we restrict this summary to these two elements. \begin{table}[tb] \caption{\label{tab::gases} Selected properties of noble gases being used as WIMP targets. $W_{ph}$ and $W$ are the average energies to create a scintillation photon or an electron-ion pair.
Numbers taken from~$^6$.} \begin{center} \begin{tabular}{|l|lll|} \hline Element & Xenon & Argon & Neon \\ \hline Atomic Number $Z$& 54 & 18 & 10 \\ Atomic mass $A$ & 131.3 & 40.0 & 20.2 \\ Boiling Point $T_b$ [K] & 165.0 & 87.3 & 27.1 \\ Liquid Density @ $T_b$ [g/cm$^3$] & 2.94 & 1.40 & 1.21 \\ Fraction in Earth's Atmosphere [ppm] & 0.09 & 9340 & 18.2 \\ \hline Price & \$\$\$\$ & \$ & \$\$ \\ Scintillator & \checkmark & \checkmark & \checkmark \\ \ \ $W_{ph}$ ($\alpha,\beta$) [eV] & 17.9 / 21.6 & 27.1 / 24.4 & \\ \ \ Scintillation Wavelength [nm] & 178 & 128 & 78 \\ Ionizer & \checkmark & \checkmark & -- \\ \ \ $W$ ($E$ to generate e-ion pair) [eV] & 15.6 & 23.6 & \\ Experiments \footnotesize{[stopped, running, in preparation]} & $\sim 5$ & $\sim 5$ & 1/2 \\ \hline \end{tabular} \end{center} \end{table} More material properties are summarized in Table~\ref{tab::gases}. In particular, the possibility to build large, monolithic detectors makes cryogenic noble liquids interesting for WIMP searches, as it is considered to be somewhat easier to scale these detectors up to the ton scale and beyond. Compared to the expensive Xe, the price of Ar is rather modest. However, while Xe is intrinsically clean from the radioactive point of view (there are no long-lived Xe isotopes, and contaminations of the radioactive $^{85}$Kr can be removed by cryogenic distillation), radioactive $^{39}$Ar is present in natural argon at the 1~Bq/kg level. This leads to background and pile-up problems. Finally, the wavelength of the scintillation light is at 178~nm for LXe, which is observable with commercially available photocathodes, while LAr-based detectors need to employ wavelength shifters (such as TPB) to detect the light. Fig.~\ref{fig::scint} (left) shows how scintillation light is generated in liquid noble gases.
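As a rough consistency check of the $W$-values in Table~\ref{tab::gases}, one can estimate the maximum number of signal quanta produced by a given energy deposition. The sketch below ignores recombination, quenching, and field dependence, so the numbers are idealized upper bounds rather than measured yields.

```python
# Average energies (in eV) to create one scintillation photon (beta
# particles) or one electron-ion pair, taken from Table 1.
W_PH = {"Xe": 21.6, "Ar": 24.4}
W_EI = {"Xe": 15.6, "Ar": 23.6}

def max_quanta(energy_kev, element):
    """Idealized photon and electron-ion pair counts for a deposition."""
    e_ev = energy_kev * 1e3
    return e_ev / W_PH[element], e_ev / W_EI[element]

# A 10 keV electronic recoil in LXe:
n_photons, n_pairs = max_quanta(10.0, "Xe")  # ~460 photons, ~640 pairs
```

Only a fraction of these photons is detected in practice, which is what sets the effective energy thresholds of the detectors discussed below.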
\begin{figure}[tb] \begin{center} \includegraphics*[width=0.7\columnwidth]{scintillation.eps} \end{center} \caption{\label{fig::scint} (left) Particle interactions excite and ionize the target (Xe in this example, but Ar works exactly the same way). Excited atoms Xe$^*$ combine with a neutral atom and form an excimer state Xe$^*_2$ which decays under the emission of scintillation light. If ionization electrons are not removed from the interaction site (e.g., by an electric field in a TPC), they eventually recombine and also produce scintillation light. Therefore, the light and the charge signal are anti-correlated. (right) Expected nuclear recoil spectra from interactions of a 100~GeV/c$^2$ WIMP with LXe and LAr, assuming a cross-section of $\sigma=10^{-43}$~cm$^2$. The expected rate is higher in LXe at low energies, but form factor suppressed at higher energies, which is not the case for LAr. A low detection threshold is therefore necessary if LXe is used. Experimentally achieved thresholds are indicated by the colored areas.} \end{figure} The expected nuclear recoil energy $E_{nr}$ spectra from spin-independent WIMP-nucleon scattering interactions are featureless exponentials, and the interaction rate is expected to scale with $A^2$, see Fig.~\ref{fig::scint} (right). Therefore, the much heavier Xe is preferred. However, since the nucleus is also larger, coherence is lost for large momentum transfers, leading to a form factor suppression of the rate at higher $E_{nr}$. A low detector threshold is therefore particularly important for LXe. This is not the case for LAr; however, its overall interaction rate is always smaller. \paragraph{Background Discrimination} If they exist, WIMPs are feebly interacting particles and their expected interaction rates are much lower than the omnipresent backgrounds from natural radioactivity and induced by cosmic rays.
The latter can be decreased considerably (typically by factors $\sim10^{-6}$) by placing the experiments in deep underground laboratories, protected by several~km of rock overburden. Environmental radioactivity requires additional massive shields in which the detectors, built from selected radiopure materials, are being placed. The most critical background is from neutrons as these interact with the target nuclei and generate nuclear recoils, which makes them indistinguishable from WIMPs if they interact only once. However, neutrons have a rather large probability to interact several times inside a detector, which allows for their rejection if these individual interactions can be resolved. The liquids, in particular LXe, have a rather high stopping power which can be used for self-shielding if only the inner part of the detector is selected for analysis (``fiducialization''). This, however, requires that the interaction vertex can be fairly well reconstructed in 3~dimensions. The most abundant background for almost all dark matter experiments is from gamma and beta radiation, which generates electronic recoils. These have a different energy loss d$E$/d$x$ compared to nuclear recoils, leading to detectable differences in their signals which can be used for signal/background discrimination. The first possibility for noble liquids is the pulse shape of the scintillation signal. The excimers (see Fig.~\ref{fig::scint}) eventually emitting the light can be formed in singlet and triplet states which have different decay times. The relative population of the states depends on the particle interaction. In LAr, the lifetimes differ by about 3~orders of magnitude, with 0.005~$\mu$s and 1.6~$\mu$s for the singlet and triplet state, respectively, leading to a large slow component of the pulse for events from electronic recoil interactions. It has been demonstrated that this feature can be used to reject electronic recoils at the $3\times10^{-8}$ level~\cite{ref::LArDiscr}.
However, such high levels are mandatory in order to cope with the huge background from $^{39}$Ar. With 4~ns and 22~ns, the singlet and triplet lifetimes are very similar in LXe and only very moderate rejection levels ($\sim0.1$) can be achieved~\cite{ref::LXeDiscr}, hence it is not used by default in any experiment. If the charge and the light signal generated in an interaction are measured simultaneously for every event, one can exploit that the different d$E$/d$x$ of electronic recoil backgrounds and nuclear recoil signals produces a different charge/light ratio. The discrimination depends on the deposited energy and on the electric field strength applied to extract the charge signal. In LXe, it ranges between values of $5\times10^{-3}$ and $1\times10^{-4}$ at 50\% nuclear recoil acceptance~\cite{ref::xes2s1}. It is also used in LAr; however, its performance is much weaker than that of the pulse shape discrimination channel and would by itself not be sufficient to reduce the $^{39}$Ar background to the required low levels. \section{Detector Concepts} Detectors using liquid noble gases as WIMP targets are currently operated using two different concepts, which are illustrated in Fig.~\ref{fig::concepts} and are explained below. \begin{figure}[tb] \begin{center} \includegraphics*[width=0.8\columnwidth]{concepts2.eps} \end{center} \caption{\label{fig::concepts} The two detector concepts currently used for dark matter detectors based on liquid noble gases. (Left) Single phase detectors are essentially a large volume of a noble liquid which is viewed by many photosensors, usually PMTs, in order to detect the scintillation light S1. (Right) In a double phase detector the S1 signal is also detected by photosensors, but the ionization charge signal is measured as well since the detector is operated as a time projection chamber (TPC).
An electric field across the target volume removes the ionization electrons from the interaction site and drifts them towards the gas phase on top of the liquid. The electrons are extracted into the gas and generate proportional scintillation light S2, which is registered time-delayed by the drift time. } \end{figure} \paragraph{Single Phase Detectors} These detectors are conceptually very simple devices in which a large volume of a liquid noble gas is viewed by as many light sensors (usually PMTs) as possible in order to reduce the detection threshold, see Fig.~\ref{fig::concepts} (left). Since only rather short scintillation light signals have to be detected, this also allows for rather high event rates, as pile-up is hardly an issue. The chosen geometry is usually spherical in order to exploit self-shielding as much as possible. The $4\pi$~arrangement of the PMTs can be used for some rough event vertex reconstruction, with a resolution of typically several~cm. The reconstruction performance, however, depends on the number of detected photons and deteriorates close to threshold. Since only the light is detected, background discrimination via the charge/light ratio is not possible. Hence experiments have to rely on pulse shape discrimination or, in the case of LXe, on almost perfect background reduction by shielding. For this reason, most experiments will only use the innermost part of the detector as WIMP target and the outer part (which can be up to almost 90\% of the mass) as background shield. \paragraph{Double Phase Detectors, Time Projection Chambers} Time projection chambers (TPCs), see Fig.~\ref{fig::concepts} (right), provide much better 3-dimensional vertex reconstruction, with demonstrated $z$-resolutions below 1~mm and an $xy$-resolution of $\sim 3$~mm~\cite{ref::xe100run08}. This is achieved by measuring the scintillation light and the ionization charge signal simultaneously.
A particle interaction leads to scintillation and liberates ionization electrons which are removed from the interaction site by a strong electric field $E$ (``drift field'', typically around 1~kV/cm). The electrons drift towards the top of the cylindrical detector, where they are extracted into the gas phase above the liquid and generate a secondary light signal which is proportional to the charge~\cite{ref::tpc}. The light pattern on the top PMT array is used to derive the $xy$-position, and the time difference between the light (S1) and the charge (S2) signal to determine $z$. The excellent vertex detection capabilities allow for powerful background rejection via fiducialization and multi-scatter identification, accompanied by charge/light discrimination (plus pulse shape discrimination for LAr detectors). On the other hand, the optical coverage with photosensors is usually considerably smaller compared to single phase detectors, which might lead to an increased threshold. Additionally, one has to deal with the technical challenges related to the necessary high voltage system. \section{Current Experiments using Noble Liquids}\label{ref::experiments} In this section, we give a brief overview on most experimental efforts which currently employ liquid noble gases as WIMP targets or which will use them in the near future. We have collected this information to the best of our knowledge (using the experiments' presentations given recently~\cite{ref::ucladm}); it represents the status of May~2012. For space reasons, some projects have been omitted. Experimentally achieved WIMP exclusion limits (at 90\% CL) are shown as solid lines in Fig.~\ref{fig::limits} for spin-independent WIMP-nucleon scattering interactions. Projected sensitivities are indicated by dashed lines. The most stringent limit to date comes from the XENON100 experiment~\cite{ref::xe100run08} excluding cross sections above $7.0\times10^{-45}$~cm$^2$ for $m_\chi=50$~GeV/$c^2$.
Below $\sim 10$~GeV/$c^2$, the best limit is from XENON10~\cite{ref::xe10s2only}. \begin{figure}[h!] \begin{center} \includegraphics*[width=0.8\columnwidth]{limits.eps} \end{center} \caption{\label{fig::limits} Achieved 90\% exclusion limits (solid lines) and projected sensitivities (dashed) of various dark matter projects using liquid noble gases (with the exception of CDMS-II). Shown is the spin-independent WIMP-nucleon scattering cross section vs.~the WIMP mass. Not all existing or planned experiments are shown, and most of the current experimental constraints are omitted. The closed areas indicate theoretically preferred SUSY regions~$^{12}$.} \end{figure} \paragraph{ZEPLIN-III} was a 12~kg double-phase LXe TPC out of which 5.1~kg were used as WIMP target. The experiment was installed in Boulby mine, UK. The extremely flat TPC geometry allowed for a very high drift field of 3-4~kV/cm and therefore a charge/light background rejection of $\sim1\times10^{-4}$. In the last science run from 2010-2011~\cite{ref::zeplin}, 8~events were observed in the predefined WIMP search region, a number compatible with the background expectation, which therefore led to an exclusion limit. With this result, the long history of ZEPLIN experiments has come to an end. \paragraph{XENON100 and XENON1T} The current stage of the phased XENON program is XENON\-100, a double-phase LXe TPC with a total mass of 161~kg, located at Laboratori Nazionali del Gran Sasso (LNGS), Italy. 62~kg are inside the TPC and the remaining xenon surrounding the target in $4\pi$ is used as active veto. In the last science run~\cite{ref::xe100run08} of 100.9~days$\times$48~kg raw exposure, three events were observed, fully compatible with the expected background of $(1.8\pm0.6)$~events. A limit was placed which is currently setting the most stringent constraints for $m_\chi>10$~GeV/$c^2$.
The results of a new dataset with about twice the exposure, a lower background, and a lower trigger threshold will be published soon. The collaboration is already working on the next phase, XENON1T, which aims to explore cross sections down to $2\times10^{-47}$~cm$^2$ by 2017, after two years of data taking with a TPC of 1~ton LXe fiducial mass. XENON1T will also be installed at LNGS, inside a water shield of $\sim10$~m diameter which is operated as a Cerenkov muon veto and will suppress ambient gamma radiation and neutrons. \paragraph{XMASS} is a Japan-based single phase LXe detector which aims for sensitivities around $2 \times 10^{-45}$~cm$^2$ for $m_\chi = 100$~GeV/$c^2$~\cite{ref::xmass}. It employs a total of 800~kg LXe and uses about 100~kg as the WIMP target. Since the end of 2010 it has been installed and running in the Kamioka mine. A very high light yield has been achieved due to the large coverage with photosensors ($\sim60$\%). A first year of science data has already been collected; however, the collaboration has recently announced some issues with unexpected radioactive background from the PMTs~\cite{ref::xmassproblems}. XMASS is currently working to reduce the background. \paragraph{LUX} is a double-phase LXe TPC which will be installed at the Sanford Underground Research Facility (SURF, USA). The detector employs a total of 350~kg of LXe and aims for a 100~kg fiducial mass for the WIMP search~\cite{ref::lux}. It is currently operated above ground in order to have a fully working detector once the underground space is ready for occupation. It has already demonstrated a rather high light yield, and underground science is expected to start at the end of 2012, aiming at 300~days of data taking. \paragraph{DarkSide} is a double-phase TPC which will use LAr as the WIMP target~\cite{ref::darkside}. The goal for the next years is to build and operate DarkSide-50 with about 50~kg target mass.
It will be located at LNGS (Italy), inside the Borexino counting test facility (CTF), a large water tank which is currently being refurbished for this purpose. Inside the water shield, DarkSide will be surrounded by a spherical boron-loaded liquid scintillator neutron veto, and it will use Ar which is depleted in $^{39}$Ar by a factor of $\sim100$. Commissioning is scheduled for the end of 2012, and two years of data taking are necessary to reach the final sensitivity around $10^{-45}$~cm$^2$. \paragraph{ArDM} is a double-phase LAr detector~\cite{ref::ardm} which has been installed and commissioned at CERN and is currently being moved underground to the Canfranc laboratory (Spain). It employs a large target mass of 850~kg of LAr in a TPC of 120~cm height and 80~cm diameter. The collaboration is developing novel ways to deal with the technical challenges of multi-ton LAr/LXe detectors. The high voltage to bias the TPC, e.g., is generated next to the field cage in a Greinacher circuit, and ArDM's final goal is to detect the charge signal with sub-mm precision in large micro-machined charge amplification detectors (large electron multipliers, LEMs). \paragraph{DEAP-3600 and MiniCLEAN} DEAP-3600 is a large single phase detector using 3.6~tons of LAr, with about 1000~kg being used as the WIMP target~\cite{ref::deap}. The LAr will be contained inside an acrylic vessel installed in a cryostat which itself is inside a water shield. Construction of the experiment is ongoing at SNOLab (Canada) and the first filling is expected around the end of 2013. The science goal is to reach the $10^{-46}$~cm$^2$ level after 3~years of operation. The large light collection in the single phase setup will allow for a very good rejection of electronic recoil background via pulse shape discrimination. The ``twin''-experiment MiniCLEAN~\cite{ref::miniclean} is being installed right next to DEAP-3600.
With 150~kg LAr fiducial mass (500~kg total) it is considerably smaller; however, the experiment is being designed such that it can also be operated with liquid neon (LNe). Initially, this was proposed in order to detect low energy neutrinos from the Sun and from supernovae~\cite{ref::clean}. However, if a signal is seen in LAr, it will be very useful to cross-check this finding using the same detector (and the same systematics) with another target nucleus. MiniCLEAN is expected to run from the end of 2012 to 2014. \paragraph{Ultimate WIMP Facilities} Even though experimental results with ton-scale detectors have not been realized yet, several collaborations have already started to study the ultimate WIMP facilities which will explore the parameter space around $\sigma_\chi=10^{-48}$~cm$^2$, where neutrinos will be a homogeneously distributed irreducible background. The existing proposals DARWIN~\cite{ref::darwin}, MAX~\cite{ref::max}, and LZ~\cite{ref::lz} are all double phase TPCs. At the current stage, all these projects are design and R\&D studies for multi-ton LXe and LAr detectors, which will likely not be built before 2020. \section{Summary and Outlook} Many experiments aim to directly detect WIMP dark matter by searching for nuclear recoils from elastic WIMP collisions inside very sensitive detectors with ultra-low backgrounds. A large number of projects employ the noble gases xenon or argon, cooled down and liquefied in order to obtain high-density targets. We have detailed why these elements are excellent WIMP targets and have explained the most common detector concepts. These are either single phase detectors measuring the scintillation light signal only, or double-phase detectors measuring the light and the charge signal (from ionization) in a TPC setup. At the time of writing, the most stringent exclusion limits for all WIMP masses are from LXe based detectors.
We have presented the current status of more than 10~experiments using noble liquids, all of which aim to reach even higher sensitivities. Their goal is to explore new regions in the cross section vs.~mass parameter space (see Fig.~\ref{fig::limits}) and to finally detect the dark matter particle with detectors of 100-1000~kg target mass or even beyond. \section*{Acknowledgments} We would like to thank the organizers of the Rencontres de Moriond Cosmology 2012 for their kind invitation to this great conference. \section*{References}
\section{Introduction} Over the last 30 years the Crab pulsar has been extensively studied. The reasons for this are clear -- it is the brightest pulsar seen in the optical, and it is nearby and young. However, the most popular contemporary theories of the Crab high-energy emission, the ``polar cap'' \citep{daugherty} and ``outer gap'' \citep{cheng} ones, cannot explain the whole set of observational data. One of the main properties of the Crab emission is the very high stability of its optical pulse shape despite the secular decrease of the luminosity related to the spin rate decrease \citep{pacini,nasuti}. At the same time, pulsars in general and the Crab itself are unstable. The instabilities manifest themselves as glitches, likely related to changes of the neutron star crust; timing noise, powered by collective processes in its superfluid interior; magnetospheric instabilities, reflected in the wisps around the pulsar; precession; etc. All these factors may influence the optical pulse structure and change it on various time scales, in both periodic and stochastic ways. However, it was found early on that the variations of the Crab optical light curve, in contrast with the radio ones, are governed by Poissonian statistics \citep{Kristian}. A number of observations show the absence of non-stationary effects in the structure, intensity and duration of the Crab optical pulses, and place restrictions on the regular and stochastic fine structure of its pulse on time scales from 3$\mu$s to 500$\mu$s \citep{beskin,percival} and on the fluctuations of the pulse intensity \citep{Kristian}. With the increase of observational time spans and measurement accuracy, small changes of the optical pulse intensity, synchronous with the giant radio pulses, have been detected \citep{shearer}. Also, evidence for short time scale precession of the pulsar has been obtained by studying its optical light curve \citep{cadez}.
All this raises the importance of monitoring the Crab optical emission with high time resolution. \section{Observations} \begin{table*}[t] \caption{Log of observations} \centering \label{table_observations} \begin{tabular}{lllll} \hline\noalign{\smallskip} Date & Telescope & Instrument & Duration, sec & Spectral range \\[3pt] \tableheadseprule\noalign{\smallskip} Dec 7, 1994 & BTA, Russia & Four-color photometer & 2400 & U+B+V+R \\ & & with photomultiplier & & \\ Dec 2, 1999 & WHT, Canary Islands & Avalanche photo-diode & 6600 & R \\ Nov 15, 2003 & BTA, Russia & Avalanche photo-diode & 1800 & R \\ Dec 29, 2005 - & BTA, Russia & Panoramic spectro-polarimeter & 48000 & 4000 - 7000 A \\ Jan 3, 2006 & & with position-sensitive detector & & \\ \noalign{\smallskip}\hline \end{tabular} \end{table*} We analyzed a sample of observational data obtained by our group over a time span of 12 years on different telescopes. The details of the observations are summarized in Table \ref{table_observations}. The equipment used included a standard four-color photometer with diaphragms based on photomultipliers, a fast photometer with avalanche photo-diodes \citep{shearer}, and a panoramic spectro-polarimeter based on a position-sensitive detector \citep{psd,mppp}. All devices provide 1$\mu$s time resolution. For each data set a list of photon arrival times was formed. All lists were processed in the same way, using the same software, to exclude systematic differences due to data analysis inconsistencies. Photon arrival times were corrected to the barycenter of the Solar System using an adapted version of the {\it axBary} code by Arnold Rots. The accuracy of this code has been tested with the detailed examples provided by \cite{lyne_bary_examples} and is found to be better than 2$\mu$s. The barycentered photon lists were then folded using both the Jodrell-Bank radio ephemerides \cite{pjb_ephem} and our own fast-folding based method of timing model fitting.
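The folding step just described can be illustrated with a minimal sketch (not the authors' software): each barycentered arrival time is converted to a rotational phase with a low-order timing model and histogrammed. The timing model $\varphi(t)=\nu t + \frac{1}{2}\dot\nu t^2$ and all numerical values below are illustrative only, not the actual Crab ephemeris.

```python
import random

# Hedged sketch: fold barycentered photon arrival times into a phased
# light curve. phi(t) = nu*t + 0.5*nudot*t^2; parameters are illustrative.
def fold(arrival_times, nu, nudot=0.0, nbins=5000):
    counts = [0] * nbins
    for t in arrival_times:
        phase = (nu * t + 0.5 * nudot * t * t) % 1.0   # phase in [0, 1)
        counts[int(phase * nbins)] += 1
    return counts

rng = random.Random(0)
times = [rng.uniform(0.0, 100.0) for _ in range(10000)]  # seconds since epoch
profile = fold(times, nu=29.7)  # ~Crab spin frequency in Hz, for illustration
```

In the actual analysis the timing model includes higher frequency derivatives and a base epoch; the sketch only shows the binning logic behind the 5000-bin profiles.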
The accuracy of the timing model proved to be better than several microseconds (see Figure \ref{fig_shift_1999}), which permits folding the light curve with 5000-bin (6.6 $\mu$s) resolution. \section{Phase stability} \begin{figure} \centering {\centering \resizebox*{1\columnwidth}{!}{\includegraphics{crab_shift_1999.eps}} \par} \caption{Timing residuals of the Crab pulsar after applying a second-order timing model (up to the second frequency derivative). They correspond to Gaussian noise with 4.1 $\mu$s rms.} \label{fig_shift_1999} \end{figure} \begin{figure} \centering {\centering \resizebox*{1\columnwidth}{!}{\includegraphics[angle=270]{crab_shift_2006.eps}} \par} \caption{Timing residuals of the Crab pulsar after applying a second-order timing model. Quasi-periodic behaviour with a characteristic time scale of 0.7 days is seen.} \label{fig_shift_2006} \end{figure} We performed a search for timing model residuals using the two longest continuous data sets, those of 1999 and 2005-2006. The data were divided into a number of subsets of fixed length, which were folded separately using the same base epoch. The sample light curves were then cross-correlated with the standard one (derived for each set separately by folding the whole data set), and their phase shifts were derived by fitting the maximum of the cross-correlation function with a Gaussian. The results for the 1999 set are shown in Figure \ref{fig_shift_1999}. No evidence for significant deviations from zero is seen; the phase is consistent with Gaussian noise with 4.1$\mu$s rms in the 10 s - 2 hr time range. The data of the last set, of 2005-2006, however, show significant quasi-periodic variations with $\sim 2.5\cdot10^{-3} P$ rms amplitude. The characteristic time scale of the variations is estimated to be roughly $0.7$ d.
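The phase-shift measurement described in this section can be sketched as follows. This is a hedged illustration, not the authors' code: it cross-correlates a sub-interval profile with the standard profile and locates the maximum of the circular cross-correlation function; where the paper fits a Gaussian to the CCF peak, this sketch uses simple parabolic interpolation around the maximum for brevity.

```python
import math

# Hedged sketch of the phase-shift estimate between a sub-interval profile
# and the standard (template) profile via circular cross-correlation.
def phase_shift(profile, template):
    n = len(template)
    ccf = [sum(profile[(i + lag) % n] * template[i] for i in range(n))
           for lag in range(n)]
    k = max(range(n), key=ccf.__getitem__)
    # parabolic interpolation around the CCF peak for sub-bin precision
    y0, y1, y2 = ccf[(k - 1) % n], ccf[k], ccf[(k + 1) % n]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return ((k + delta) % n) / n   # shift in units of phase

# Synthetic check: a Gaussian pulse shifted by 7 of 500 bins.
template = [math.exp(-0.5 * ((i - 250) / 15.0) ** 2) for i in range(500)]
shifted = template[-7:] + template[:-7]
est = phase_shift(shifted, template)   # ~7/500 = 0.014 in phase
```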
\section{Pulse shape} \begin{figure} \centering {\centering \resizebox*{1\columnwidth}{!}{\includegraphics{crab_all.eps}} \par} \caption{Phased light curves of the Crab pulsar for all observation sets, scaled to the same pulse height.} \label{fig_shape_all} \end{figure} \begin{figure} \centering {\centering \resizebox*{1\columnwidth}{!}{\includegraphics[angle=270]{crab_2006_nights.eps}} \par} \caption{Phased light curves of the Crab pulsar for the three nights of the Dec 2005 - Jan 2006 set.} \label{fig_shape_nights} \end{figure} Due to the presence of significant residuals relative to the timing model, the pulse profile of the 2005-2006 observations could not be derived by folding the whole data set directly. Instead, we divided the data set into one-hour segments and folded them separately, applying time shift corrections to compensate for the phase residuals. The intrinsic phase shift inside each block is less than $2\cdot10^{-4}$, so folding with 5000 bins is possible. The folded light curves were then co-added. All the other data were folded directly and shifted in phase to the same pulse position for ease of comparison. All pulse profiles are shown in Figure \ref{fig_shape_all}, with the off-pulse emission subtracted and the pulse height scaled to the same value. The profiles of 1994, 1999 and 2003 are in perfect agreement with each other. The profile of 2005-2006, however, deviates from them significantly -- the pulse retains the same FWHM while its skewness is much smaller, and its shape is nearly symmetric. We folded the data of this set for each of the three observational nights separately using the same method. These profiles are shown in Figure \ref{fig_shape_nights}. There is a significant variation of the shape from night to night.
Unfortunately, the small amount of data available does not permit tracking the profile shape change within each night to check whether it is smooth or whether the shape is correlated with the timing residuals. \section{Pulse fine structure} \begin{figure} \centering {\centering \resizebox*{1\columnwidth}{!}{\includegraphics{crab_123.eps}} \par} \caption{The main pulse peak of the sum of light curves of the 1994, 1999 and 2003 data.} \label{fig_pulse_123} \end{figure} \begin{figure} \centering {\centering \resizebox*{1\columnwidth}{!}{\includegraphics{crab_23.eps}} \par} \caption{The comparison of the peaks of 1999 and 2003. The peaks are shifted vertically by 0.03 for clarity.} \label{fig_pulse_23} \end{figure} For the first three observational sets, where the pulse profile is stable, we performed a search for fine structure of the main pulse. The data sets were reduced to the same phase base point with precision better than half a phase bin (less than 3.3$\mu$s) and the cumulative light curve was computed. Its peak region is shown in Figure \ref{fig_pulse_123}. No statistically significant deviations from the smooth peak shape are seen. However, the light curves of the 1999 and 2003 data sets alone (plotted in Figure \ref{fig_pulse_23}) each show evidence of fine structure at the level of 3-5 sigma (roughly 1\% of the intensity) with typical durations of 10-30 $\mu$s. Such details may give evidence of coherent generation of the optical emission if the emission region is deep enough (deeper than 0.1 of the light cylinder radius), since the brightness temperature would then exceed $10^{12}$~K. \section{Discussion} The results of the last data set differ significantly from all previous ones. We carefully studied the possibility that they result from some hardware or data processing problem.
The data of the last set were acquired using the panoramic spectro-polarimeter based on a position-sensitive photon counter \citep{psd,mppp} in low-resolution spectral mode. There is no difference in the pulse profile between different spectral bands (derived using different parts of the detected spectrum). The detector behaviour proved to be linear in the flux range used. The data acquisition system, ``Quantochron 4-48'', which records the arrival times of the detected photons, was checked for short time scale stability by recording the signal from a stable 100-Hz generator. This signal was processed and folded in the same way as the pulsar data (skipping the unnecessary barycentering step). It shows no distortion of the signal shape larger than 1$\mu$s. Large scale timing stability of the acquisition system is ensured by means of 1 Hz and 10 kHz frequency signals from a GPS receiver. The barycentering correction code passed the tests provided in \cite{lyne_bary_examples} with an accuracy of 1 $\mu$s. The correctness of the radio ephemerides was checked by performing the timing model fitting using our own software; the results of the phase shift and folding analysis agree with the ones based on the radio data. There are no short time scale (of the order of 100 seconds and larger) changes of the pulse profile inside the set with amplitude comparable to the difference between the last set and the previous ones. Taking into account all these arguments, we conclude that the pulse profile change and quasi-periodic phase shifts detected in this observational set are most likely not related to hardware or software problems of the equipment used. The detection of variations of both the pulse arrival times and its shape strongly supports a geometrical interpretation of the effect. It may be described as a quasi-periodic change of the pulsar beam orientation due to strong precession that commenced suddenly before these observations, but after the previous set.
It may be related to the very strong glitch of Feb-Mar 2004, or to another recent change in the neutron star state \citep{lyne_bary_examples}. \section{Conclusions} We analyzed the data of several sets of optical observations of the Crab pulsar with high temporal resolution, performed by our group over the last 12 years. No evidence for short time scale precession (like the 60-sec free precession discovered in \cite{cadez}) is detected at the level of $10^{-5}$ - $10^{-7}$ s$^{-1}$ pulsar frequency variation on the 10 s - 2 hour time scale on Dec 2, 1999 (see Figure \ref{fig_shift_1999}), which corresponds to a precession wobble angle of less than approximately $2\cdot10^{-3}$. Also, no signatures of short time scale timing noise are seen in this data set. No significant fine structure is detected in the integral pulse profile of the 1994, 1999 and 2003 data sets (see Figure \ref{fig_pulse_123}); however, each data set alone shows evidence of fine structure at the level of 3-5 sigma, which may point to its instability on the time scale of years, in contrast with the stability of the overall pulse shape on the same scale. We discovered a significant change of the time-averaged Crab pulse profile in the Dec 2005 - Jan 2006 set of observations. The pulse profile also shows variations between the nights. Also, quasi-periodic phase shifts with respect to the second-order timing solution (up to the second frequency derivative) have been detected in the data, with an amplitude of $\sim 100\mu$s and a characteristic time scale of 0.7 days. We have not found any hardware or software issue able to mimic such pulsar behaviour. These results may be interpreted as geometric effects due to Crab precession that started suddenly between our observations of 2003 and 2005-2006.
\section{Introduction} \label{sec:intro} The origin of asteroid orbit determination is closely tied to the discovery of Ceres by Giuseppe Piazzi on January 1, 1801. The new object, believed to be a comet, was followed briefly and lost after going through solar conjunction. To resolve the problem, Carl Friedrich Gauss developed a new method of orbit determination later the same year. With his prediction, the orbit of Ceres was determined and it was recovered at the end of 1801. Nowadays, more than 700,000 asteroid orbits are known and managed by the International Astronomical Union's Minor Planet Center (MPC). With the current CCD surveys, dedicated search telescopes, sub-arcsecond astrometry, and improved orbit determination techniques, the derivation of an initial orbit for a single object observed multiple times within a few days is relatively simple, with some known caveats. Subsequent follow-up and archival matching usually extends the observed arc to months or even years. However, new telescopes and deeper searches, mostly for Near-Earth Objects (NEOs), make contributions more challenging for the follow-up community. NEOs have been a focus of attention for at least two decades, mostly motivated by the goal of reducing the Earth impact hazard by cataloging the population, but lately also for sample-return missions, proposed asteroid mining and crewed missions. Therefore, the MPC's NEO Confirmation Page is being saturated, often leaving objects either unconfirmed or with very short arcs, necessitating archival linking or searches. In 2022, the Large Synoptic Survey Telescope \citep[LSST,][]{Ivezic2014} will start to operate and provide unprecedented numbers of asteroid and comet detections at magnitudes beyond the reach of the current follow-up telescopes. LSST will operate in almost real-time in an automated mode, including identification of known moving objects and linking detections of newly discovered asteroids.
The complexity of the problem lies in the fact that instead of careful treatment of a single object, LSST will have to process millions of detections, including spurious and false ones, and successfully link them into orbits while rejecting false linkages. The latter part has yet to be demonstrated on a real asteroid survey comparable to LSST. The idea of automated asteroid detection, linkage and identification was implemented for Pan-STARRS \citep{2002SPIE.4836..154K,2004SPIE.5489..667H} in its Moving Object Processing System \citep[MOPS,][]{2013PASP..125..357D}. MOPS was developed as an end-to-end collection of algorithms and programs, able to process individual detections, link detections into single-night tracklets, combine tracklets into tracks and then derive multi-night orbits. Its features and capabilities include propagation and simulation of synthetic orbits, efficiency studies, identification of detections with a catalog of known or derived orbits, providing alerts, and an interactive web-interface. Its track-finding algorithm is based on scalable and variable kd-tree algorithms \citep{2007ASPC..376..395K}. The MOPS linking performance was tested in simulations, including all types of Solar System objects (150,000 orbits by \citet{2007ASPC..376..395K}) and even with expected false detection and asteroid density rates (15 million orbits by \citet{2013PASP..125..357D}). The resulting linking efficiency was close to 100\% and the high accuracy suggested that MOPS would work for an advanced asteroid survey. However, the Pan-STARRS project was never completed to its original 4-telescope design; its 3-year mission with a single telescope was not optimized for the Solar System and differed from the proposed and tested survey cadence, its limiting magnitude was below the predicted value, and the rates of spurious detections were orders of magnitude larger than predicted.
It was the false detection rates in particular that did not allow MOPS to derive orbits, due to the dramatic increase in the number of false tracks that overwhelmingly outnumbered the real ones. Notably, this experience might be a source of skepticism regarding LSST's ability to manage the large load of real and false detections in a working linking algorithm, which has not been demonstrated yet. Still, some components of MOPS are being successfully used by many projects (Pan-STARRS1, Pan-STARRS2, NEOWISE, DECAM), and MOPS is planned to be used with its full capabilities, including linking, for LSST. However, the effectiveness of MOPS is crucial. Without successful linking, the expected numbers of LSST-discovered Solar System objects could be significantly decreased. Our goal was to test MOPS for LSST with a realistic density of moving objects and false detections and to understand whether MOPS can handle the expected large number of false tracks. We emphasized the realistic approach by employing the baseline survey cadence, the exact shape of the field of view, several observational constraints and parameters, as well as the most recent population models of the Solar System objects. \section{Large Synoptic Survey Telescope} LSST is a next generation all-sky survey telescope currently being constructed atop Cerro Pach\'{o}n in Chile. Its first light is scheduled for 2020 and its 10-year nominal survey will start 2 years later. This 8.4-meter, wide-field telescope, with a 3.2 Gigapixel camera and real-time image processing, data acquisition and transport, is mainly motivated by the study of Dark Matter and Dark Energy. However, its nightly coverage of $6,000$ square degrees, mostly done in two visits to a given field in the same night, provides an excellent opportunity for a deep survey of the small bodies of the Solar System.
Because of its limiting magnitude, reaching 24.5 in the r-band in 30-second exposures, and the large load of detected asteroids and comets, the discovery and characterization of Solar System objects must be done automatically, by identification with known objects and correct linking of new objects. For our simulations, we selected one month of the 10-year \texttt{\detokenize{enigma_1189}}\xspace baseline survey (see \citet{2017Veres_1} for a description of \texttt{\detokenize{enigma_1189}}\xspace) created by the Operations Simulator \citep[OpSim,][]{2014SPIE.9150E..15D}. OpSim provides a list of fields with information on their positions, epochs, limiting magnitudes, filters, seeing, etc. Fields also avoid the Moon, and filters are sensitive to the phase of the Moon and its presence above the horizon. The selected dates covered the 28th observing cycle (OC 28) of \texttt{\detokenize{enigma_1189}}\xspace. An observing cycle is a MOPS-defined interval of time, from one full moon to the next. OC 28 spanned the month of May, when the ecliptic has the largest altitude above the horizon around midnight and the nights are the longest at the LSST site (Figure~\ref{fig.focal_plane}). Thus, the density of NEOs and Main Belt Asteroids (MBAs) is at its greatest. Some nights were removed by OpSim to simulate weather, resulting in 27 clear nights. A small fraction of fields were observed only once per night; these singletons were removed from our simulation. The mean and maximum limiting magnitudes of the selected observing cycle, as well as the time spent in individual filters, are listed in Table~\ref{tab.surveys_mag}. The survey spends most of its time in the r, i and z-bands, and only 3\% of its time in the u-band.
\begin{figure}[tbh] \centering \includegraphics[width=0.7\textwidth]{oc28.png} \caption{LSST coverage of the sky in OC 28 in equatorial coordinates.} \label{fig.focal_plane} \end{figure} \begin{table}[tbh] \small \begin{center} \caption{SNR=5 limiting magnitudes ($m_5$) of one month of the \texttt{\detokenize{enigma_1189}}\xspace survey and fraction of time spent in individual filters.} \begin{tabular}{c|ccc} \tableline\tableline Filter & Average $m_5$ & Max $m_5$ & Time spent (\%)\\ \hline u&23.54$\pm$0.32&24.16&3\\ g&24.69$\pm$0.31&25.36&9\\ r&24.32$\pm$0.30&24.93&25\\ i&23.65$\pm$0.36&24.34&26\\ z&22.30$\pm$0.39&23.57&20\\ y&21.49$\pm$0.23&21.94&17\\ \tableline \end{tabular} \label{tab.surveys_mag} \end{center} \end{table} The LSST camera consists of 21 platforms called rafts, each consisting of a $3\times3$ array of 9 CCD chips, yielding a total of 189 CCDs. Each chip comprises $4096\times4096$ 10-micron pixels, so the total number of active pixels is 3,170,893,824. Because there are gaps between chips within the $3\times3$ rafts and also between the rafts, some fraction of the focal plane is not usable. The total active area is $9.50\,\mathrm{deg}^2$, whereas the total raft area is $10.45\,\mathrm{deg}^2$, resulting in a fill factor of 0.9089. Gaps can be simulated by an exact mask or by a statistical treatment of detections. The pixel mask approach is computationally more expensive, because it requires building and matching up the fields with the 3.2 billion pixels. This work used the probabilistic approach, where the fill factor represents the probability of a potential detection being found in a single frame. To simulate the field, we employed a square layout of 25 rafts with an area of $12.445\,\mathrm{deg}^2$ and then applied a mask for the four corner rafts to obtain the above-mentioned $10.45\,\mathrm{deg}^2$. Finally, 90.89\% of detections were randomly selected to form the detection list.
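The focal-plane bookkeeping above can be checked in a few lines; this is a sketch reproducing the quoted numbers, with the areas taken directly from the text:

```python
# Reproducing the focal-plane arithmetic quoted above.
rafts = 21
ccds_per_raft = 9
pixels_per_side = 4096

n_ccds = rafts * ccds_per_raft            # 189 CCDs
n_pixels = n_ccds * pixels_per_side ** 2  # 3,170,893,824 active pixels

active_area_deg2 = 9.50   # deg^2, from the text
raft_area_deg2 = 10.45    # deg^2, from the text
fill_factor = active_area_deg2 / raft_area_deg2   # ~0.909
```

In the probabilistic treatment, each simulated detection is then simply kept with probability equal to `fill_factor`.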
LSST utilizes an altitude-azimuth mount and the camera is able to rotate, and thus the fields are not generally aligned with the local RA-DEC frame. In fact, due to the desired dithering, each exposure is observed in a randomized field orientation. The field rotation affects the probability of a detection being visible in multiple visits, because some of the detections can hit the masked area in the second visit. This aspect of the survey is fully modeled in our simulations. \section{Field density} \subsection{Asteroid detections} We generated synthetic detections for NEO and MBA population models by propagating the orbits to the epochs of the OpSim fields. The propagation used JPL's small body codes with the DE405 planetary ephemerides, where all planets, plus Pluto and the Moon, were perturbing bodies. We did not use any asteroids as perturbers. Only detections inside the rotated field were analyzed and filtered, based on the field limiting magnitude and other selected parameters of the detection model. Some details of the detection model are described in \citet{2017Veres_1}. We utilized the \citet{2016Natur.530..303G} NEO population containing 801,959 Keplerian orbits with absolute magnitudes down to $H<25$. The distribution of its orbital elements is roughly similar to the earlier work by \citet{2002Icar..156..399B}; however, the \citet{2016Natur.530..303G} population is size-dependent and its size-frequency distribution covers the $H>22$ space better than the previous work, which underestimated the count (see Figure \ref{MB_model}). The orbital and size-frequency distribution properties of ``Granvik's'' NEO population were derived from an analysis of NEO observations by the Catalina Sky Survey and Mt. Lemmon telescopes. Our NEO population is artificially deficient in large NEOs with $H<17$; however, these are believed to be essentially all discovered, and with only about 500 of them they are a negligible fraction of the detections in an LSST field of view.
Initially, we were also using the earlier NEO model by \citet{2002Icar..156..399B}, which we denote as ``Bottke's'', that also contains objects down to $H<25$; however, its total number is only $268,896$ and is thus deficient in small objects, particularly for $H>22$. MBAs will dominate the number density of moving objects in the LSST field of view, and they represent a source of background noise and possible confusion for NEO identification. In our LSST simulations, we used the \citet{2011PASP..123..423G} model of the main-belt population (see Figure \ref{MB_model}). This population contains 13,883,361 orbits and is the most robust population model available to date. In the Grav MBA model, the cumulative distribution slope is equal to $\alpha=0.28\pm0.01$ for H between 16 and 20. However, the population was created for a Pan-STARRS4-like survey with a limiting magnitude of $m_{V}=24.5$, and so it is truncated to remove MBAs that are fainter than $m_{V}=24.5$ when at perihelion and at opposition. This truncation results in an artificial break, seen in Figure~\ref{MB_model}, in the Grav population size-frequency distribution at $H\simeq21$. To investigate how this break affects the areal density of MBAs in the LSST survey simulation, we compared the simulated MBA density in LSST fields to the predicted number density by \citet{2009Icar..202..104G} who had observed MBAs with the 3.8-meter Mayall telescope at Kitt Peak National Observatory in 2001 within the so-called SKADS survey. SKADS detected asteroids in a fixed 8.4 deg$^2$ patch of sky around the opposition point in the Johnson-Cousins R-band down to limiting magnitude of 23.0--23.5 on six nights spanning an 11-night baseline. Based on \citet{2009Icar..202..104G}, the debiased cumulative number of MBAs follows the equation \begin{equation} N(>H)\propto10^{\alpha H} \end{equation} where $\alpha=0.30\pm0.02$. This slope was derived for H in the range 15--18, with assumed validity to at least H=20. 
\citet{2009Icar..202..104G} derived the areal density of MBAs as \begin{equation} \label{eq.gladman} N(<m_{R})=210\times10^{0.27(m_{R}-23)} \end{equation} where $N(<m_{R})$ is the cumulative number of asteroids brighter than $m_{R}$ per square degree. The derived detection efficiency was $98\%$ at $m_{R}=17$. \begin{figure}[tbh] \centering \epsscale{0.6} \plotone{MB_NEO_sfd.png} \caption{Comparison of MBA \citep{2011PASP..123..423G} and NEO \citep{2016Natur.530..303G} size-frequency distributions, where the model MBA slope change at $H\sim21$ is an artifact of designing a population of Pan-STARRS4-accessible MBAs.} \label{MB_model} \end{figure} To compare with our modeled number density of MBAs, we selected LSST fields with solar elongation greater than $178\degree$ and within one degree of the ecliptic from OC 28, yielding 27 fields. This simulation was run with a fill factor of $\epsilon_{0}=90.89\%$, fading, and a color transformation assuming all asteroids are of spectroscopic S-type (see \citet{2017Veres_1}). The definition of detection efficiency differed slightly between SKADS and our model. Our modeled detections are subject to so-called fading that reduces the detection efficiency as \begin{equation} \label{eq.fading1} \epsilon(m) = \frac{\epsilon_{0}}{1+e^{\frac{m-m_5}{w}}} \end{equation} where $\epsilon(m)$ is the detection efficiency, $m$ the apparent magnitude, $m_5$ the limiting magnitude defined for $\mathrm{SNR}=5$, and $w=0.1$ the width of the fading function. SKADS defined its detection efficiency by the similar relation \begin{equation} \label{eq.fading2} \epsilon(m) = \frac{\eta_{0}-c(m-17)^{2}}{1+e^{\frac{m-m_5}{w}}} \end{equation} where, based on observations, $\eta_{0}\approx0.98$ and $c\approx0.005$. Here $c$ measures the strength of the quadratic drop and the remaining parameters are the same as in the previous equation.
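As a concrete check of the two efficiency models, the following sketch (Python; function names are ours, parameter values as quoted above) evaluates both fading relations; in each, the logistic factor halves the numerator at $m=m_5$:

```python
import math

def lsst_eff(m, m5, eps0=0.9089, w=0.1):
    """Detection efficiency of Eq. (eq.fading1): logistic fading around m5,
    scaled by the fill factor eps0."""
    return eps0 / (1.0 + math.exp((m - m5) / w))

def skads_eff(m, m5, eta0=0.98, c=0.005, w=0.1):
    """SKADS efficiency of Eq. (eq.fading2): same logistic drop, with an
    additional quadratic decline away from m = 17."""
    return (eta0 - c * (m - 17.0) ** 2) / (1.0 + math.exp((m - m5) / w))

m5 = 23.5  # a SKADS-like limiting magnitude
print(lsst_eff(m5, m5))        # half of eps0
print(skads_eff(17.0, m5))     # ~0.98: bright end of the SKADS model
```

The narrow width $w=0.1$ mag means both efficiencies fall from near their plateau to near zero over roughly half a magnitude around $m_5$.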
Additionally, there are a number of sources of uncertainty that must be considered in the estimate of the MBA density: \begin{enumerate}[a)] \item A different population slope: $\alpha=0.28$ and $0.30$ for Grav and Gladman, respectively. \item The transformation from the LSST bands and the SKADS R-band to V-band. The $V-R$ term in SKADS was $0.37\pm0.15$ mag, leading to a relative uncertainty of about 9\% in areal density when transforming to V-band. \item The scaling of the detection efficiency. This work used a different fading model than SKADS. \end{enumerate} Figure~\ref{MB_comp} shows the number density of MBAs near opposition as a function of the limiting magnitude of the field in V-band based on the SKADS survey and the simulated LSST survey with the synthetic Grav MBA population. Note that at $m_5>24.5$ the simulated MBA density drops because of the artificial truncation of Grav's population. In \texttt{\detokenize{enigma_1189}}\xspace, 14\% of the fields have a limiting magnitude fainter than 24.5 in V-band. Depending on the limiting magnitude and the elongation from the ecliptic and from opposition, the MBA density in our simulation was underestimated by up to 12\% in those fields. However, few of these fields were taken at opposition near the ecliptic, and so the effect of the truncation in Grav's MBA population is presumed negligible. The density of MBAs decreases significantly as a function of ecliptic latitude (Figure~\ref{MB_density}).
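The $\sim$9\% figure quoted in item b) follows from propagating the $V-R$ color uncertainty through the cumulative slope of Eq.~(\ref{eq.gladman}); since $N\propto10^{0.27\,m}$, a magnitude error $\sigma_{V-R}$ produces
\[
\frac{\sigma_N}{N}=\ln(10)\times0.27\times\sigma_{V-R}\approx2.30\times0.27\times0.15\approx0.09 .
\]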
\begin{figure}[tbh] \centering \epsscale{0.6} \plotone{MB_comp.png} \caption{Number of MBAs per square degree at opposition on the ecliptic based on \citet{2009Icar..202..104G} and the \citet{2011PASP..123..423G} population used in this work.} \label{MB_comp} \end{figure} \begin{figure}[tbh] \centering \epsscale{0.6} \plotone{densityOC28-30.png} \caption{Number of detected MBAs per LSST field as a function of limiting magnitude (V) and ecliptic latitude.} \label{MB_density} \end{figure} \subsection{Measurement errors} Each ephemeris-based position in the field was altered by adding realistic astrometric and photometric errors based on the computed signal-to-noise ratio (SNR). The limiting magnitude of the field $m_5$ is defined for SNR=5. The SNR of a detection \citep{2009AAS...21346003I} is computed from the difference between the computed magnitude $m$ and $m_5$ as \begin{equation} \label{eq.snr} \mathrm{SNR} = \frac{1}{\sqrt{(0.04-\gamma)\chi+\gamma \chi^2}} \end{equation} where $\gamma=0.038$ and $\chi=10^{0.5(m-m_5)}$. Then, the photometric uncertainty is derived as \begin{equation} \sigma_m=2.5\log_{10}{\left(1+\frac{1}{\mathrm{SNR}}\right)}, \end{equation} and the computed $m$ is combined with an error drawn from a normal distribution with a mean of zero and variance $\sigma_{m}^2$. We have assumed that LSST astrometry is measured relative to a post-Gaia star catalog, so that absolute systematic errors are negligible while relative errors are expected at a floor level of 10\,mas. The astrometric error $\sigma_{\mathrm{astr}}$ for any detection is therefore computed as the quadrature combination of $10\,\mathrm{mas}$ and the ratio of the seeing $\Theta$ to the SNR: \begin{equation} \sigma_{\mathrm{astr}}^2=(10\,\mathrm{mas})^{2}+\left({\frac{\Theta}{\mathrm{SNR}}}\right)^2. \label{eq.astr} \end{equation} Asteroids are moving targets and so, depending on the rate of motion, their shape deviates from a stellar PSF and is in fact a convolution of the motion and the PSF.
The faster the object moves, the larger the astrometric error. Therefore, if the trail length $L>\Theta$, the seeing term $\Theta$ in Eq.~\ref{eq.astr} is replaced by the geometric mean of the seeing and the trail length, $\Theta'=\sqrt{\Theta L}$. To obtain realistic astrometry, we combine the computed position with an astrometric error term drawn from a normal distribution with zero mean and variance $\sigma_{\mathrm{astr}}^2$. Figure~\ref{fig.unc} shows histograms of the astrometric uncertainties, in both linear and logarithmic scale. The latter shows that there are two populations of NEA detections: those with high SNR and therefore low uncertainty, around $10\,\mathrm{mas}$, and another, centered around $100\,\mathrm{mas}$, from low-SNR detections, which presumably also includes most of the objects with relatively fast rates of motion. The median astrometric error obtained for NEOs is $47\,\mathrm{mas}$. \begin{figure}[tbh] \centering \includegraphics[width=0.49\textwidth]{unc_norm.png} \includegraphics[width=0.49\textwidth]{unc_log.png} \caption{Distribution of astrometric uncertainties of NEOs: linear scale (left) and logarithmic scale (right).} \label{fig.unc} \end{figure} To simulate observational constraints and limitations of the LSST processing pipeline and CCD effects, we employed a set of filters that determined whether a detection that fulfilled the limiting magnitude was still visible. We included vignetting, which reduces sensitivity to detections that are far from the optical axis of the field. The LSST optical design minimizes vignetting, with only 7\% of the collecting area having a penalty above 0.1 mag. In CCD surveys the limiting magnitude does not behave like a step function that strictly determines visibility. In fact, the detection limit follows a fading function, e.g., Eq.~(\ref{eq.fading1}), that defines the limiting magnitude as a 50\% probability of detection. In our work, this value is taken at SNR=5 and denoted as $m_5$.
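The error model above condenses into a few lines (Python; function names are ours, constants as in the equations above), and provides a useful sanity check: at the limiting magnitude, $\chi=1$ and the SNR is exactly 5.

```python
import math

GAMMA = 0.038  # photometric-model parameter from the SNR equation above

def snr(m, m5, gamma=GAMMA):
    """SNR of a detection of magnitude m in a field with limiting magnitude m5."""
    chi = 10.0 ** (0.5 * (m - m5))
    return 1.0 / math.sqrt((0.04 - gamma) * chi + gamma * chi ** 2)

def mag_sigma(s):
    """Photometric uncertainty for a given SNR."""
    return 2.5 * math.log10(1.0 + 1.0 / s)

def astr_sigma_mas(s, seeing_mas, trail_mas=0.0, floor_mas=10.0):
    """Astrometric error: 10 mas floor in quadrature with (seeing or
    trail-corrected seeing) divided by SNR, as described in the text."""
    theta = seeing_mas
    if trail_mas > seeing_mas:  # trailed: geometric mean of seeing and trail
        theta = math.sqrt(seeing_mas * trail_mas)
    return math.sqrt(floor_mas ** 2 + (theta / s) ** 2)

# At the limiting magnitude chi = 1, so SNR = 1/sqrt(0.04) = 5 exactly:
print(snr(24.5, 24.5))  # ~5.0
```

The trail correction only ever increases the effective seeing term, reproducing the behavior that faster movers carry larger astrometric errors.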
The fading function is multiplied by a fill factor, simulating the focal-plane gaps. Because of the sidereal tracking rate, all asteroids will move, and fast-moving NEOs in particular will appear trailed. The detected trails are described by a convolution of a point-spread function with the motion vector. The longer the trail, the fainter the peak signal and the lower the SNR. This loss effectively decreases the magnitude of asteroids as a function of their on-sky rate of motion. We assumed that all NEOs and MBAs are S-types for the purpose of transforming the ephemeris-computed V-band magnitude to the LSST filter system. The details of this detection model are discussed by \citet{2017Veres_1}. \subsection{False detections} The LSST transient detection data stream will include many detections that are not associated with solar system objects, and the objective of linking only real LSST detections of moving objects to form tracks and orbits represents a significant challenge. There are three broad categories of non-solar system transients that are expected from LSST. The first category of LSST transient detections arises from real astrophysical phenomena (e.g., variable stars, supernovae, etc.) that appear in the same location in multiple instances. Such astrophysical transients will be filtered out of the MOPS input stream by virtue of their stationary appearance and thus will not affect the asteroid linking problem. The remaining two categories of non-solar system transients consist of spurious detections arising from either random noise or image differencing artifacts, both of which will enter the MOPS input stream. The first source of false detections, from random fluctuations in the sky background and from detector noise, is driven by Gaussian statistics at the individual pixel level.
The number $N_{>\eta}$ of these \textit{random} sources above a given signal-to-noise threshold $\eta$ in a CCD image where Gaussian noise is convolved with a Gaussian PSF follows the formula of \citet{Kaiser04} \begin{equation} N_{>\eta}=\frac{S}{2^{5/2}\pi^{3/2}\sigma_g^2}\eta e^{-\eta^2/2}, \label{eq.Kaiser}\end{equation} where $S$ is the total number of pixels in the focal plane array, $\sigma_g\simeq\Theta/2.35$, and $\Theta$ is the FWHM seeing measured in pixels. The number of random false positives depends strongly on the seeing (Figure~\ref{fig.NS_noise}): the better the seeing, the larger the number of random false positives. The average \texttt{\detokenize{enigma_1189}}\xspace seeing of 0.80 arcseconds leads to 650 random false positives with $\mathrm{SNR}>5$ in one LSST image. We generated random false positives following Equation~\ref{eq.Kaiser} at random x-y positions in the field. The number of random false positives for a given field was drawn from a normal distribution with mean and variance equal to $N$ from Equation~\ref{eq.Kaiser}. Then, magnitudes were assigned to the generated random noise as follows: we generated a random number $p$ from a uniform distribution [0,1], corresponding to the normalized cumulative distribution $N(>\eta)/N_{TOTAL}$. Then $\eta=\sqrt{\eta_{0}^{2}-2\log(1-p)}$, which can be directly transformed to a magnitude as $V=V_{LIM}-2.5\log(\eta/\eta_{0})$, where $V_{LIM}$ is the $m_5$ limiting magnitude at $\eta_{0}=5$. The number density of random false positives has a strong dependence on SNR; therefore, most of the random noise sources will be near the limiting magnitude (Figure~\ref{fig.hist_noise}). \begin{figure}[tbh] \centering \includegraphics[width=0.6\textwidth]{false_dets_LSSTcam.png} \includegraphics[width=0.6\textwidth]{FD_model.png} \caption{(top) The number of random noise sources in an LSST field as a function of signal-to-noise ratio; the dashed line demonstrates the similarity to a normal distribution.
(bottom) The theoretical and generated numbers of random noise sources in LSST fields.} \label{fig.NS_noise} \end{figure} The second source of false detections comes from difference-image artifacts, which arise from differencing a field image with a fiducial image of the static sky that has been derived from a stack of several (or a great many) images of the same field over some time period. This differencing removes stationary objects so that only transient sky phenomena, including moving objects, appear as detections in the difference image. However, registration errors across the field can leave dipole-shaped artifacts in the difference image at the location of a static source. Artifacts may also originate from a poor convolution kernel, variable seeing across the field, stray light in the optical system, or reflections in the lenses. Artifacts are often concentrated around bright sources due to saturation or diffraction spikes, and masking around these sources can be an efficient means of substantially reducing the rate of artifacts. Although an improved optical configuration and machine learning can remove many of these false artifacts, some fraction will always remain in the data stream. For this work we assumed the estimated density of differencing artifacts derived by \citet{Slater}, who used actual imagery obtained by the Dark Energy Camera (DECam) on Cerro Tololo \citep{2015AJ....150..150F} and processed it with a nascent version of the LSST image processing pipeline. \citet{Slater} report that the primary result of their study is that ``the LSST pipeline is capable of producing a clean sample of difference image detections, at roughly the 200--400 per square degree level.'' This is their final result, but our work used a preliminary estimate as the point of departure for our linking simulations.
This earlier estimate allowed for roughly 90--380 artifacts per square degree, and we took the geometric mean of this range as the starting point, which leads to $185/\mathrm{deg}^2$, or 1777 artifacts per LSST field. \citet{Slater} did find far higher concentrations of artifacts near bright stationary sources, which they eliminated by masking the area around them, thus allowing the reported low artifact density. Following their result, we modeled bright-source masking by reducing the effective fill factor by 1\%. To seed the detection list with artifacts, we selected the number of artifacts in each field according to a Gaussian distribution with mean and variance 1777 and distributed them randomly across the field. Thus our artifact rate was roughly $3\times$ the rate from random noise in typical seeing (Figure~\ref{fig.scatter_noise}), and about half of the upper bound derived by \citet{Slater} from processing actual DECam data. Our model for difference artifacts is independent of observing conditions such as seeing and field density. However, we note that the most dense regions of the galactic plane are relegated to the Galactic Plane proposal observations in \texttt{\detokenize{enigma_1189}}\xspace, which happen to be mostly covered by a single-visit-per-night cadence and anyway constitute only a few percent of observing time. If we remove all Galactic Plane proposal fields from \texttt{\detokenize{enigma_1189}}\xspace there is a negligible effect on NEO completeness. Thus our linking and completeness results do not require or assume operation in star fields with extreme density. Based on the \citet{Slater} report, we model the SNR distribution of differencing artifacts as following $\propto\mathrm{SNR}^{-2.5}$. The algorithm computes the SNR $\eta$ as $\eta=\eta_{0}{(1-p)}^{-2/3}$, where $p$ is a randomly generated number from a uniform distribution [0,1]. (See Figure~\ref{fig.hist_noise}.)
The magnitude of a simulated artifact is then derived according to $V=V_{LIM}-2.5\log(\eta/\eta_{0})$, where $V_{LIM}$ is the $m_5$ limiting magnitude at $\eta_{0}=5$. Artifacts have a much shallower dependence on $\eta$, and therefore tend to be far brighter than random noise sources. Roughly half of the modeled artifacts have $\mathrm{SNR}>10$, while virtually none of the random false detections had $\mathrm{SNR}>7$. The brightness distribution of artifacts suggests that at least some potential false tracklets that include artifacts can be immediately eliminated by enforcing consistency in the photometry. However, according to Figure~\ref{fig.hist_noise}, about 90\% of artifacts have $\mathrm{SNR}<20$, and if a bright artifact with $\mathrm{SNR}=20$ is paired with a faint asteroid detection having $\mathrm{SNR}=5$, the magnitude difference will be $\Delta m = 2.5\log_{10}\frac{20}{5}\simeq 1.5\,\mathrm{mag}$. As it happens, MOPS limits the photometric variation among tracklet components to $\Delta m < 1.5\,\mathrm{mag}$ by default, which suggests that few false tracklets in our simulation were eliminated in this way. This criterion could be made stricter, which would reduce the false tracklet rate at the risk of removing real objects that are actually more interesting by virtue of a large light-curve amplitude. Thus, as a rule, the photometric consistency requirement should be relaxed as much as feasible in order to avoid eliminating real tracklets. We suspect that this requirement can be dropped altogether without significantly impacting linking performance.
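Both false-detection models reduce to a closed-form count plus inverse-CDF sampling. The following minimal sketch (Python; function names are ours) implements Eq.~(\ref{eq.Kaiser}) together with the noise and artifact SNR sampling relations quoted above; the 3.2-gigapixel focal plane and 0.2 arcsec/pixel plate scale are assumed nominal LSST camera values, not taken from the text.

```python
import math
import random

def kaiser_n(eta, npix=3.2e9, seeing_arcsec=0.80, plate_scale=0.2):
    """Random false positives above SNR eta (Eq. eq.Kaiser).
    npix and plate_scale are assumed nominal LSST camera values."""
    sigma_g = (seeing_arcsec / plate_scale) / 2.35  # Gaussian PSF width in pixels
    return (npix / (2.0 ** 2.5 * math.pi ** 1.5 * sigma_g ** 2)
            * eta * math.exp(-0.5 * eta ** 2))

def noise_snr(eta0=5.0, rng=random):
    """Random-noise SNR via the inverse CDF eta = sqrt(eta0^2 - 2 ln(1-p))."""
    return math.sqrt(eta0 ** 2 - 2.0 * math.log(1.0 - rng.random()))

def artifact_snr(eta0=5.0, rng=random):
    """Artifact SNR for the assumed SNR^-2.5 differential distribution:
    eta = eta0 * (1-p)^(-2/3)."""
    return eta0 * (1.0 - rng.random()) ** (-2.0 / 3.0)

def to_magnitude(eta, m5, eta0=5.0):
    """Convert a sampled SNR to magnitude: V = m5 - 2.5 log10(eta/eta0)."""
    return m5 - 2.5 * math.log10(eta / eta0)

# ~650 random false positives per image in 0.80 arcsec seeing, as quoted:
print(round(kaiser_n(5.0)))
# Artifacts are far brighter: the fraction with SNR > 10 is (10/5)^-1.5 ~ 0.35,
# while random-noise sources with SNR > 7 are essentially absent.
```

With these assumed camera values the Kaiser formula reproduces the quoted $\sim$650 random false positives per image, and the sampled artifact SNRs confirm their much brighter distribution relative to random noise.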
\begin{figure}[tbh] \centering \includegraphics[width=0.49\textwidth]{noise1.png} \includegraphics[width=0.49\textwidth]{noise2.png} \caption{Histogram (left) and cumulative distribution (right) of random noise and artifacts on one night of the LSST survey.} \label{fig.hist_noise} \end{figure} \begin{figure}[tbh] \centering \includegraphics[width=0.6\textwidth]{noise3.png} \caption{Random noise and artifact counts per individual field as a function of seeing during one month of the LSST survey.} \label{fig.scatter_noise} \end{figure} We note that our work neglects the possibility that artifacts are spatially correlated in RA-DEC, which could introduce difficulties in the linking process whereby artifacts could reappear near the same RA-DEC location and mimic the motion of asteroids. RA-DEC correlation among artifacts could arise from two causes: camera defects or stationary sources. For LSST, the rotational dithering of the camera serves to break the correlation from any defects in the instrument, most of which would already be masked in processing, and the masking of bright stationary sources serves to remove them as a source of artifacts. \citet{jones2017} found that the rate of correlated detections in the DECam data stream was low enough to be negligible for our purposes, only $\sim2/\mathrm{deg}^2$. This no-correlation assumption is at variance with the Pan-STARRS1 experience, but appears to be well justified for LSST. \section{Moving Object Processing System} A central question for this work is whether the linking of tracklets into tracks and orbits will prove successful with real LSST data. LSST MOPS will receive full-density lists of detections of moving and transient targets, including NEOs, MBAs, and false detections. From these inputs MOPS must create tracklets, tracks, and orbits, despite the fact that the data stream is contaminated by potentially large numbers of false detections, which lead to high rates of false tracklets.
Our simulation synthesized detections in the LSST fields from a full-density NEO model ($\sim850,000$ orbits), an MBA model ($\sim11$ million orbits) and false detections (both random noise and differencing artifacts). The final detection lists were submitted to the MOPS \texttt{\detokenize{makeTracklets}}\xspace routine, and tracklets were created. Finally, tracklets were submitted to the linking stage, the most challenging step. \subsection{Tracklets} The list of detections for a given field that has been visited multiple times in a night is submitted to the \texttt{\detokenize{makeTracklets}}\xspace part of MOPS. A tracklet is created for a detection in the first image if there is a second detection in its vicinity in the second image. The radius of the search circle is defined by the lower and upper velocity thresholds of \texttt{\detokenize{makeTracklets}}\xspace, which were set to 0.05$\degree$/day and 2.0$\degree$/day, respectively, in this study. If there are more possible connections in the circle, then in addition to the ``CLEAN'' tracklet, consisting of detections of one object, a ``MIXED'' tracklet consisting of detections of two objects or a ``BAD'' tracklet that includes a false detection is created as well. Increasing the upper velocity limit increases the search area and thus the number of false tracklets. In some simulations, for velocities of 1.2--2.0$\degree$/day, we used the information on the trail length to limit the search area for companion detections. At 1.2$\degree$/day, a detection will have a non-PSF shape and its length will be 1.8 times the PSF for the average 0.86 arcsec seeing, and so its length and orientation can be determined. Thus, instead of a large circular search area around trailed detections, smaller regions consistent with the anticipated velocity and direction of the trails are searched, and any matching detections must have a compatible trail length and direction. See Figure~\ref{fig.tracklet_cartoon} for a graphical depiction.
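The circular search criterion can be sketched as a brute-force pairing (Python; function and variable names are ours, a flat-sky small-angle approximation is used, and the $\sim$30-minute visit separation is an assumed nominal value; the real \texttt{\detokenize{makeTracklets}}\xspace is heavily optimized):

```python
import math

V_MIN, V_MAX = 0.05, 2.0  # deg/day, the velocity thresholds used in this study

def make_tracklets(dets1, dets2, dt_days, v_min=V_MIN, v_max=V_MAX):
    """Pair detections (ra, dec in deg) from two exposures dt_days apart
    whose implied sky-plane rate lies between the velocity thresholds.
    Brute force O(n^2) for clarity only."""
    pairs = []
    for i, (ra1, dec1) in enumerate(dets1):
        for j, (ra2, dec2) in enumerate(dets2):
            dra = (ra2 - ra1) * math.cos(math.radians(0.5 * (dec1 + dec2)))
            rate = math.hypot(dra, dec2 - dec1) / dt_days  # deg/day
            if v_min <= rate <= v_max:
                pairs.append((i, j))
    return pairs

dt = 30.0 / 1440.0  # two visits ~30 minutes apart (assumed)
# A mover displacing 0.02 deg between visits has a rate of ~0.96 deg/day and
# is paired; a stationary source (rate 0) falls below V_MIN and is rejected.
print(make_tracklets([(10.0, 0.0)], [(10.02, 0.0), (10.0, 0.0)], dt))  # [(0, 0)]
```

The lower threshold is what rejects re-detections of stationary transients, while the upper threshold bounds the search area and hence the combinatorial growth of false pairings.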
\begin{figure}[tb] \centering \epsscale{0.7} \plotone{tracklet_cartoon.png} \caption{Schematic diagram of tracklet generation. Dots represent detections from the first image, x signs from the second one. The large circle represents the upper velocity limit for creating tracklets without rate information (up to $1.2\degree$/day). The arrows in that circle are all possible tracklets, connecting the first detection with all detections from the second image within reach. Every detection in the image has such a circle and a corresponding set of tracklets. If the detection is faster than 1.2$\degree$/day it will be trailed (on the right), and information on the trail length and orientation can be used to search a smaller area for its counterpart in the second image (in two separate regions, because the direction of motion is unknown). The matching detection must also be a trail with similar length and orientation.} \label{fig.tracklet_cartoon} \end{figure} \begin{figure}[tb] \centering \epsscale{0.9} \plotone{noisefig_new.png} \caption{An example of a high-density LSST field from the \texttt{\detokenize{enigma_1189}}\xspace survey. The depicted field is number 1891 from night 3052, taken in the r filter with $m_5=24.79$, seeing 0.63 arcsec, airmass 1.04, and field center at opposition-centered ecliptic coordinates (Lat., Long.) $=(2.91\degree, 1.26\degree)$. Thus the field is near opposition in excellent conditions. The various types of detections referenced in the legend are ``MB''---main-belt asteroids, ``NEO''---near-Earth objects, ``NS''---false detections from random noise, and ``FD''---false detections arising from image differencing artifacts.} \label{fig.density} \end{figure} The number of tracklets depends on the density of detections, which can be large (Figure~\ref{fig.density}). To understand the feasibility of the simulation we gradually increased the number of detections in OC 28.
The following steps are also summarized as Cases A--F in Table~\ref{Tab.confusion_FF}. \begin{enumerate} \item Initially, we used only NEO orbits from Bottke's model (Case A, Table~\ref{Tab.confusion_FF}). Switching to Granvik's NEO model increased the number of detections by 35\% and the number of tracklets by 55\% (Case B). Because Granvik's NEO model is more current and has many more objects, we used that population in the simulations. At this stage, with only NEO orbits, nearly all tracklets were CLEAN, with only 4 MIXED tracklets (99.97\% tracklet purity). \item Adding the MBA population to Granvik's NEOs (Case C) increased the number of detections in one month to 15 million and the number of tracklets to 6 million. Most of the tracklets belonged to MBAs; however, about 17\% of tracklets were MIXED, i.e., derived from different objects. The large number of MIXED tracklets was substantially reduced by taking advantage of trail-related velocity information in the velocity range 1.2--$2.0\degree/\mathrm{day}$ (Case D). In this dual-velocity mode of \texttt{\detokenize{makeTracklets}}\xspace, $1.2\degree/\mathrm{day}$ is the upper threshold for creating a tracklet by searching in a circle. If the detection is trailed and the trail length implies a velocity $>1.2\degree/\mathrm{day}$, then its matching pair in the second image must be in a predicted location, based on the time between exposures and the position and velocity of the first detection (Figure~\ref{fig.tracklet_cartoon}). Thus, the number of randomly linked detections in a large circle decreased dramatically. This increased the number of good NEO tracklets by 20\% and decreased the number of MIXED tracklets by a factor of 5. \item The next step added false detections from random noise to the full-density NEO and MBA detection list (Case E). This doubled the number of detections to 30 million, so the ratio of synthetic to false detections was about 1:1.
However, the number of tracklets only increased from 6 million in Case C to 7.5 million in Case E. In this scenario tracklets were created up to the $2\degree$/day limit without the use of velocity information. In addition to 1 million MIXED tracklets, the simulation generated about 700,000 BAD tracklets (i.e., those with both synthetic and false detections) and 600,000 NONSYNTH tracklets consisting solely of false detections. \item The final, full-density simulation was achieved by also injecting differencing artifacts, which more than doubled the total number of detections again, to 66 million (Case F). Now over 77\% of detections were false, so the ratio of synthetic to false detections was about 1:3.5. NEOs represented only 0.07\% of the detection list. The full-density simulation was challenging for the tracklet stage. Therefore, we used trail-derived velocity information for tracklets created in the velocity range of 1.2--$2.0\degree$/day. Still, the total number of tracklets was very large, $\sim11.9$ million. Of this sample, about 57\% of tracklets were erroneous in some way, either including at least one false detection or combining detections of different objects. This simulation revealed that false detections from differencing artifacts create the majority of the linking challenge. Though we did not directly test it, the use of trail-related velocity information presumably leads to a dramatic reduction in the false tracklet rate for the full-density simulation. \end{enumerate} \begin{sidewaystable}[ht!] \small \setlength\tabcolsep{1.5pt} \caption{Number of detections and tracklets for OC 28 in various simulations.
MIXED tracklets include detections from at least two distinct objects, BAD tracklets include detections from both false detections and moving objects, and NONSYNTH tracklets consist entirely of false detections.} \begin{center} \begin{tabular}{l|lcc|cccc|cccccc} \tableline \tableline & & & &\multicolumn{4}{c} {Detections} & \multicolumn{6}{c} {Tracklets}\\ Case & NEO & MBA & False Det &Total &\%NEO & \%MBA & \%False & Total &\%NEO & \%MBA & \%MIXED & \%BAD & \%NONSYNTH \\ \hline A &Bottke &No &None &36k &100 &0 &0 &11k &100 &0 &0 &0 &0\\ B &Granvik &No &None &49k &100 &0 &0 &17k &100 &0 &0 &0 &0\\ C &Granvik &Yes &None &15M &0.3 &99.7 &0 &6.2M &0.23 &82.8 &16.9 &0 &0\\ D \footnote{\label{1sttablefoot}Tracklet generation used rate information from 1.2--2.0$\degree$/day. Otherwise rate information was ignored over entire range 0.05--2.0$\degree$/day.} &Granvik &Yes &None &15M &0.3 &99.7 &0 &5.4M &0.31 &94.8 &4.9 &0 &0\\ E &Granvik &Yes &Random only &30M &0.2 &50.6 &49.2 &7.5M &0.19 &68.2 &14 &9.7 &7.9\\ F \footref{1sttablefoot} & Granvik &Yes &Random + artifacts &66M &0.1 &22.6 &77.3 &12M &0.14 &42.7 &2.2 &6.1 &48.8\\ \hline \end{tabular} \end{center} \label{Tab.confusion_FF} \end{sidewaystable} \subsubsection{The Linking Process} Automated linking of tracklets is a crucial element of LSST's NEO discovery pipeline. Without an automated linking stage, the NEO discovery rate would suffer and would rely heavily on follow-up observers, which will be impractical given the faint limit and volume of the LSST detections. The MOPS linking algorithm connects tracklets from three distinct nights into candidate tracks that are subsequently tested through orbit determination. The process consists of the following four distinct steps: \begin{enumerate} \item {\em Assemble tracklet list.} The first step collects, for a given field, all of the tracklets from the last $N$ nights for which the earlier position and velocity project into the destination field. 
The forward mapping of tracklets is based on linear motion, and acceleration that leads to nonlinear motion is not accounted for. Thus some NEO tracklets may be neglected, especially those very near the Earth with a rapid change in geometry and observable acceleration. \hspace{3mm} The combinatorics of linking strongly favor small $N$, but the objective of NEO completeness favors large $N$, which allows more missed detection opportunities. For LSST, $N$ usually ranges from 12--20, though 30 has been contemplated as a stretch goal. This work used $N=12$ days for linking tests, consistent with our objective of understanding whether linkage could be at all successful in the presence of large numbers of false detections. NEO linkage of nearby objects is not likely to succeed for large $N$ unless MOPS is extended so that some plane-of-sky acceleration is allowed when assembling the field-by-field tracklet lists. This would likely lead to a modest increase in the NEO discovery rate at the expense of many more false tracklets and increased linking overhead. \item {\em Assemble candidate track list.} The second step in linkage generates a list of candidate tracks based on the input tracklets. Generally, there are hundreds of available fields per night, each being processed in parallel. The \texttt{\detokenize{linkTracklets}}\xspace algorithm is based on a kd-tree search \citep{2007ASPC..376..395K} that reduces the number of potential tracks to be tested from $n^2$ to $n\log n$, where $n$ is the number of tracklets available for linking on the given field. This saves a significant amount of computational resources, but the problem remains challenging. 
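The tracklet-assembly step can be sketched as a simple linear projection test (Python; names and the flat-sky approximation are ours, and the 1.75-deg field radius is an assumed nominal LSST value, not taken from the text):

```python
import math

def projects_into_field(tracklet, field_center, t_field, field_radius_deg=1.75):
    """Propagate a tracklet's position linearly (no acceleration, as in the
    text) to the field epoch and test whether it lands inside the field.
    tracklet = (ra, dec, vra, vdec, t0) in deg, deg/day, and days."""
    ra, dec, vra, vdec, t0 = tracklet
    dt = t_field - t0
    ra_p = ra + vra * dt
    dec_p = dec + vdec * dt
    dra = (ra_p - field_center[0]) * math.cos(math.radians(field_center[1]))
    return math.hypot(dra, dec_p - field_center[1]) <= field_radius_deg

# A 0.5 deg/day mover observed 6 nights earlier projects close to this field:
trk = (12.0, 5.0, 0.5, 0.0, 1000.0)
print(projects_into_field(trk, (15.2, 5.0), 1006.0))  # True
```

Because the projection is strictly linear, a nearby NEO whose plane-of-sky motion curves appreciably over $N$ nights can project outside the field and drop out of the candidate list, which is exactly the failure mode described above.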
\hspace{3mm} \texttt{\detokenize{linkTracklets}}\xspace has multiple tunable parameters, such as the minimum number of nights, the minimum number of detections, the minimum and maximum velocities, and several kd-tree linking parameters (\texttt{\detokenize{vtree_thresh}}\xspace, \texttt{\detokenize{pred_thresh}}\xspace, \texttt{\detokenize{plate_width}}\xspace). The ``vtree'' stage finds two compatible tracklets from which to estimate the endpoints of the track. The initial search pruning is done with respect to a maximum error denoted \texttt{\detokenize{vtree_thresh}}\xspace. The track is confirmed when additional ``support tracklets'' are found; \texttt{\detokenize{pred_thresh}}\xspace is a threshold on the goodness of fit of the support tracklets to the model estimated from the two initial tracklets. \texttt{\detokenize{plate_width}}\xspace, in days, flattens the tracklet epochs to a common time if they fall within this margin. Different parameter values led to vastly different CPU and memory requirements, and markedly different numbers of candidate tracks. However, optimization of this stage is complex. The ideal parameter settings depended on the number of detections and varied from field to field. For instance, experiments with only synthetic NEO orbits led to 99\% linking efficiency. Adding noise and MBAs and running tests for selected target fields and tracks while varying the \texttt{\detokenize{linkTracklets}}\xspace parameters led to inconclusive results, because the correct parameters depend on the field and optimizing on a full lunation was infeasible. We explored the optimization of the kd-tree parameters on a single, dense field in the middle of OC28.
The total number of candidate tracks increased as a function of \texttt{\detokenize{vtree_thresh}}\xspace and \texttt{\detokenize{pred_thresh}}\xspace, and there was only a weak dependence on \texttt{\detokenize{plate_width}}\xspace, at least for $\texttt{\detokenize{plate_width}}\xspace<0.01$ (Figure~\ref{fig.KD_total}). However, the largest number of correct NEO tracks was obtained for $\texttt{\detokenize{plate_width}}\xspace=0.003$ and $\texttt{\detokenize{vtree_thresh}}\xspace=0.003$ (Figure~\ref{fig.KD_neo}). Pushing the kd-parameters to obtain as many NEOs as possible led to an extreme increase in the number of false candidate tracks (Figures~\ref{fig.KD_ratio1}--\ref{fig.KD_ratio4}). The memory and CPU load also increased dramatically (Figures~\ref{fig.KD_cpu}--\ref{fig.KD_memory}). \hspace{3mm} This work was conducted on a single 8-core Linux workstation with 96 GB of memory (upgraded from 32 GB during the course of the work), and a crucial part of the challenge of linking was avoiding out-of-memory crashes. The final values utilized for the main linking simulation in this work were therefore a combination of feasibility and available computational resources: $(\texttt{\detokenize{vtree_thresh}}\xspace, \texttt{\detokenize{pred_thresh}}\xspace, \texttt{\detokenize{plate_width}}\xspace)= (0.001, 0.001, 0.003)$. This corresponds to the lower left corner of the upper right plot in Figures~\ref{fig.KD_total}--\ref{fig.KD_memory}. Better performance could have been obtained for, say, $(\texttt{\detokenize{vtree_thresh}}\xspace, \texttt{\detokenize{pred_thresh}}\xspace, \texttt{\detokenize{plate_width}}\xspace)= (0.003, 0.003, 0.003)$, but this would require a large cluster with more memory per core, something that will be readily available to LSST. \item {\em Derive preliminary orbit.} The third step took the candidate tracks derived by \texttt{\detokenize{linkTracklets}}\xspace and submitted them for Initial Orbit Determination (IOD).
MOPS uses Gauss' method to generate potential initial orbits from the astrometry, and for each track the best fitting IOD is selected. Most false tracks were eliminated at this stage because no valid IOD could be found. \item {\em Perform differential corrections.} The fourth stage was Orbit Determination (OD), which used JPL OD routines to obtain converged orbits. This included sophisticated fall-back logic to try to obtain 4- or 5-parameter fits if the 6-parameter orbit fit diverged. MOPS filtered out some false tracks at this stage based on rudimentary screening of post-fit residual statistics. As discussed below, MOPS's built-in orbit quality filtering is not strict and is agnostic regarding the expected errors in the astrometry, and thus relatively few false orbits were rejected at this stage. All orbits that passed the MOPS default quality screening were added to the MOPS derived object table, which was the basis for understanding the overall linking performance. \end{enumerate} \clearpage \begin{figure}[t] \centering \includegraphics[width=0.35\textwidth]{KD_0001total.png} \includegraphics[width=0.35\textwidth]{KD_0003total.png} \includegraphics[width=0.35\textwidth]{KD_001total.png} \includegraphics[width=0.35\textwidth]{KD_003total.png} \caption{Total number of candidate tracks derived for a single, dense field as a function of the \texttt{\detokenize{vtree_thresh}}\xspace, \texttt{\detokenize{pred_thresh}}\xspace and \texttt{\detokenize{plate_width}}\xspace kd-tree linking parameters.} \label{fig.KD_total} \end{figure} \begin{figure}[tbh] \centering \includegraphics[width=0.35\textwidth]{KD_0001neo.png} \includegraphics[width=0.35\textwidth]{KD_0003neo.png} \includegraphics[width=0.35\textwidth]{KD_001neo.png} \includegraphics[width=0.35\textwidth]{KD_003neo.png} \caption{Total number of CLEAN NEO tracks derived for a single, dense field as a function of the \texttt{\detokenize{vtree_thresh}}\xspace, \texttt{\detokenize{pred_thresh}}\xspace and
\texttt{\detokenize{plate_width}}\xspace kd-tree linking parameters.} \label{fig.KD_neo} \end{figure} \begin{figure}[tbh] \centering \includegraphics[width=0.35\textwidth]{KD_0001totalfrac.png} \includegraphics[width=0.35\textwidth]{KD_0003totalfrac.png} \includegraphics[width=0.35\textwidth]{KD_001totalfrac.png} \includegraphics[width=0.35\textwidth]{KD_003totalfrac.png} \caption{Ratio of the total number of tracks to the number of CLEAN tracks derived for a single, dense field as a function of the \texttt{\detokenize{vtree_thresh}}\xspace, \texttt{\detokenize{pred_thresh}}\xspace and \texttt{\detokenize{plate_width}}\xspace kd-tree linking parameters.} \label{fig.KD_ratio1} \end{figure} \begin{figure}[tbh] \centering \includegraphics[width=0.35\textwidth]{KD_0001badfrac.png} \includegraphics[width=0.35\textwidth]{KD_0003badfrac.png} \includegraphics[width=0.35\textwidth]{KD_001badfrac.png} \includegraphics[width=0.35\textwidth]{KD_003badfrac.png} \caption{Ratio of the number of BAD tracks to the number of CLEAN tracks derived for a single, dense field as a function of the \texttt{\detokenize{vtree_thresh}}\xspace, \texttt{\detokenize{pred_thresh}}\xspace and \texttt{\detokenize{plate_width}}\xspace kd-tree linking parameters.} \label{fig.KD_ratio2} \end{figure} \begin{figure}[tbh] \centering \includegraphics[width=0.35\textwidth]{KD_0001nonsynthfrac.png} \includegraphics[width=0.35\textwidth]{KD_0003nonsynthfrac.png} \includegraphics[width=0.35\textwidth]{KD_001nonsynthfrac.png} \includegraphics[width=0.35\textwidth]{KD_003nonsynthfrac.png} \caption{Ratio of the number of NONSYNTH tracks to the number of CLEAN tracks derived for a single, dense field as a function of the \texttt{\detokenize{vtree_thresh}}\xspace, \texttt{\detokenize{pred_thresh}}\xspace and \texttt{\detokenize{plate_width}}\xspace kd-tree linking parameters.} \label{fig.KD_ratio3} \end{figure} \begin{figure}[tbh] \centering \includegraphics[width=0.35\textwidth]{KD_0001mixedfrac.png}
\includegraphics[width=0.35\textwidth]{KD_0003mixedfrac.png} \includegraphics[width=0.35\textwidth]{KD_001mixedfrac.png} \includegraphics[width=0.35\textwidth]{KD_003mixedfrac.png} \caption{Ratio of the number of MIXED tracks to the number of CLEAN tracks derived for a single, dense field as a function of the \texttt{\detokenize{vtree_thresh}}\xspace, \texttt{\detokenize{pred_thresh}}\xspace and \texttt{\detokenize{plate_width}}\xspace kd-tree linking parameters.} \label{fig.KD_ratio4} \end{figure} \begin{figure}[tbh] \centering \includegraphics[width=0.35\textwidth]{KD_0001CPUtotal_s.png} \includegraphics[width=0.35\textwidth]{KD_0003CPUtotal_s.png} \includegraphics[width=0.35\textwidth]{KD_001CPUtotal_s.png} \includegraphics[width=0.35\textwidth]{KD_003CPUtotal_s.png} \caption{CPU time for running \texttt{\detokenize{linkTracklets}}\xspace on a single, dense field as a function of the \texttt{\detokenize{vtree_thresh}}\xspace, \texttt{\detokenize{pred_thresh}}\xspace and \texttt{\detokenize{plate_width}}\xspace kd-tree linking parameters.} \label{fig.KD_cpu} \end{figure} \begin{figure}[tbh] \centering \includegraphics[width=0.35\textwidth]{KD_0001Memory_kb.png} \includegraphics[width=0.35\textwidth]{KD_0003Memory_kb.png} \includegraphics[width=0.35\textwidth]{KD_001Memory_kb.png} \includegraphics[width=0.35\textwidth]{KD_003Memory_kb.png} \caption{Memory usage of \texttt{\detokenize{linkTracklets}}\xspace for a single, dense field as a function of the \texttt{\detokenize{vtree_thresh}}\xspace, \texttt{\detokenize{pred_thresh}}\xspace and \texttt{\detokenize{plate_width}}\xspace kd-tree linking parameters.} \label{fig.KD_memory} \end{figure} \clearpage \subsubsection{Linking Performance} Linking tests were conducted on observing cycle 28 of the \texttt{\detokenize{enigma_1189}}\xspace baseline survey, with Granvik's NEO model, MBAs and the full false detection lists (Case F, Table~\ref{Tab.confusion_FF}).
The NEO linking efficiency is defined as the number of unique NEOs present in the post-linking, derived-object catalog divided by the number of unique NEOs with possible 12-day tracks in the detection list. The linking efficiency was 93.6\% for $H<22$ NEOs and 84.0\% for all NEOs (i.e., $H<25$). These numbers were lower than in the case without false detections, where we achieved $>99\%$ linking efficiency, similar to previous work \citep{2007ASPC..376..395K,2013PASP..125..357D}. The lower efficiency for all NEOs arises from the fact that the vast majority of NEOs were of the smallest diameters, e.g., $23<H<25$. Also, smaller objects tend to have faster rates and greater acceleration because they are seen at closer geocentric distances, and they tend to have shorter observability windows. Note that the derived linking efficiency was obtained for a single set of selected kd-tree parameters on a single 8-core workstation. With more powerful computational facilities and a more optimized kd-tree search (possibly on a per-field basis), there is excellent reason to believe that the linking efficiency can be significantly improved. Many derived NEO orbits stemmed from objects in the MBA input catalog. Table~\ref{Tab.orbit_accuracy} shows the makeup of the 5348 NEO orbits (defined by $q<1.3\,\mathrm{au}$) derived from OC 28 alone. Among these orbits, 2222 originated from CLEAN linkages of actual NEOs, 1896 were CLEAN orbits associated with MBAs and 1230 were erroneous (``Not CLEAN'') linkages. Nearly all of the erroneous linkages combined detections of different MBAs to form an NEO orbit; few were contaminated by false detections. At first blush this implies a purity of 77.0\% in the NEO catalog, but we describe below why this apparently low accuracy is mostly a manifestation of the ineffective orbit quality screening applied by MOPS. Correct interpretation of the orbits and improved screening increases the accuracy to 96\%.
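The efficiency and purity figures quoted above reduce to simple set and ratio arithmetic; a minimal sketch (the function names and arguments are ours, not MOPS table columns):

```python
def linking_efficiency(derived_ids, findable_ids):
    """Unique objects present in the derived-object catalog divided by
    the unique objects with possible tracks in the detection list."""
    findable = set(findable_ids)
    return len(set(derived_ids) & findable) / len(findable)

def catalog_purity(n_clean, n_total):
    """Fraction of catalog entries that are correct (CLEAN) linkages."""
    return n_clean / n_total
```

For example, the 77.0\% figure above is catalog_purity(4118, 5348), counting both CLEAN NEO and CLEAN MBA linkages as correct.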
In contrast to the NEO orbits, Table~\ref{Tab.orbit_accuracy} reveals that the MBA catalog has 99.8\% purity already at this stage, without more refined filtering on orbit quality. Only 6 NEOs appear in the non-NEO orbit catalog, and most of these are borderline cases where $q\simeq1.3\,\mathrm{au}$. \begin{table}[tbh] \small \caption{Accuracy of derived orbits from OC 28. The ``Incorrect Class.'' column indicates the number of objects for which the source object and the derived object had a different classification based on perihelion distance $q$. ``Not CLEAN'' indicates erroneous linkage of observations from either false detections or multiple objects.} \begin{center} \begin{tabular}{c|cccc} \tableline \tableline Derived Classification& All & Incorrect Class. & Not CLEAN & Accuracy\\ \hline NEO ($q\le1.3\,\mathrm{au}$) & 5348 & 1896 Non-NEO & 1230 &77.0\%\\ Non-NEO ($q>1.3\,\mathrm{au}$) & 765,833 & 6 NEO & 1635 & 99.8\%\\ \hline \end{tabular} \end{center} \label{Tab.orbit_accuracy} \end{table} \subsubsection{Orbit Quality Filtering} The large fraction of erroneous linkages that appear in the NEO orbit catalog stem from a weak orbit quality filter implemented by MOPS, which requires the post-fit RMS of astrometric residuals to be less than 0.4 arcsec, a criterion that is too readily met for astrometry with a median error less than 0.05 arcsec. Moreover, because the RMS is not normalized by the reported astrometric uncertainty, it fails to take into account the varying quality of astrometry within and between tracklets in a candidate track. The upshot of this approach is that most such erroneous linkages show residuals clearly inconsistent with the astrometric uncertainty, and yet they pass the MOPS quality control test. Rather than modifying MOPS and re-running the simulation, we post-processed the post-fit astrometric residuals, with their associated uncertainties, to derive the sum of squares of the normalized residuals for each orbit in the NEO catalog. 
This provided the so-called $\chi^2$ of residuals, from which classical statistics readily yield the probability $p_\mathrm{val}$ that the fit is valid, which is to say, the likelihood of obtaining a higher value of $\chi^2$ by chance. A higher post-fit $\chi^2$ naturally leads to a lower $p_\mathrm{val}$ because the increased residuals reflect a poorer fit that has a lower probability. Figure~\ref{fig.neo_quality} depicts the distribution of $p_\mathrm{val}$ among the 5348 cataloged NEO orbits. The histogram reveals that few erroneous linkages appear for $p_\mathrm{val}>0.25$ and that few NEOs appear for $p_\mathrm{val}<0.25$, thus we selected 25\% as the $p_\mathrm{val}$ cutoff for acceptable orbits. This criterion led to rejection of 7\% of clean and 87\% of not clean orbits. Most of the clean orbits that were filtered out were MBAs mis-classified as NEOs, of which 14\% were rejected. Only 2\% of clean NEO orbits were removed by this filter. As tabulated in Table~\ref{Tab.MBconfusion}, more aggressive $p_\mathrm{val}$ filtering---at the 50\% or 90\% level---yields diminishing returns in removing erroneous linkages, while the loss of clean NEOs becomes unacceptable. Thus a modest modification of MOPS is necessary to allow a more statistically rigorous orbit quality filtering, but the rudimentary approach described here leads to a 96\% purity (3816/3979, see Table~\ref{Tab.MBconfusion}) in the NEO catalog. In the context of accuracy, the clean MBAs that appear in the NEO orbit catalog are counted as correctly linked, which is, in fact, the case. \begin{figure}[tbh] \centering \plotone{p_val_hist.png} \caption{Histogram of post-fit residual statistics of derived NEO orbits. In most cases, Not CLEAN NEO candidates can be easily distinguished.} \label{fig.neo_quality} \end{figure} \begin{table}[tbh] \small \caption{The number of cataloged NEO orbits of various classifications for varying values of the $p_{val}$ orbit quality filter.
Here ``Non-NEO'' refers to MBAs that appear in the derived NEO catalog with $q<1.3\,\mathrm{au}$.} \begin{center} \begin{tabular}{l|rrrr} \tableline \tableline & \multicolumn{4}{c} {$p_{val}$ cutoff}\\ Classification & 0\% & 25\% & 50\% & 90\% \\ \hline All&5348&3979&3636&2314\\ CLEAN&4118&3816&3532&2279\\ Not CLEAN&1230&163&104&35\\ w/False Detection&35&3&1&1\\ CLEAN NEO&2222&2180&2062&1375\\ CLEAN MBA&1896&1636&1470&904\\ Not CLEAN NEO&2&0&0&0\\ Not CLEAN MBA&1228&163&104&35\\ \hline \hline \end{tabular} \end{center} \label{Tab.MBconfusion} \end{table} The rate of contamination of NEO orbits by false positives is extremely low, despite the large numbers of false positives injected into the detection stream. As shown in Table~\ref{Tab.confusion_noise}, after filtering at $p_{val}>25\%$, only 5 false detections appear in the NEO catalog. This can be compared to the total of over 29,000 detections that form the NEO catalog and the 51M false detections polluting the data stream. This result demonstrates that NEOs can be successfully linked with high efficiency and high accuracy when surveying with the baseline LSST cadence, even in the presence of significant numbers of false detections. \begin{table}[tbh] \small \caption{Number of detections of various classifications from OC 28. 
The total number in the input detection list and the number that were linked into the derived NEO catalog are shown.} \begin{center} \begin{tabular}{l|rrr} \tableline \tableline &Total&\multicolumn{2}{c}{---Derived NEO Catalog---}\\ & &All&$p_{val}>25\%$\\ \hline Total &65,900,928&39,188&29,288\\ MBA&14,899,279&20,680&11,868\\ NEO &48,628&18,446&18,060\\ False&50,953,021&62&5\\ \% False&77.3\%&0.16\%&0.02\%\\ \hline \hline \end{tabular} \end{center} \label{Tab.confusion_noise} \end{table} \subsubsection{Confusion from MBAs} To better understand the large fraction of NEO orbits stemming from correctly linked non-NEO objects, we used systematic ranging to explore the full orbit determination problem for these cases. Systematic ranging is an orbit estimation technique designed to analyze poorly constrained orbits, typically with only one or a few nights of astrometry, for which conventional least squares orbit determination can fail due to nonlinearity \citep{2015Icar..258...18F}. We tested hundreds of cases and found that nearly all showed a characteristic ``V''-shaped orbital uncertainty pattern in $e$ vs. $q$ that allowed both NEO and MBA orbits (left panel, Figure~\ref{fig.MB_conf3}). In some cases the ``V'' shape was broken at the vertex so that there were two distinct orbital solutions (center panel, Figure~\ref{fig.MB_conf3}). The systematic ranging technique affords a statistically rigorous estimate of the probability that the track represents an NEO orbit, and for these correctly linked MBAs that appear with NEO orbits, few have high NEO probabilities, reflecting the fact that the data are compatible with the non-NEO (truth) orbits (Figure~\ref{fig.neo_prob}). It is also important to note that most of these MBAs that appear as NEOs are detected far from opposition.
Figure~\ref{fig.opp-dist} shows that only $\sim10\%$ of these cases are found within $60\degree$ of opposition, and that about half are detected at $80\degree$ or farther from opposition. This merely reflects the classical finding that orbital ambiguities arise from three-night orbits of objects far from opposition. It is an unavoidable feature of observing at low solar elongations, and is generally corrected after a fourth night of data is obtained. However, as described below, the current MOPS configuration does not efficiently attribute a fourth night of data to the already cataloged orbit, and so the ambiguity is often not resolved in our simulations. We note also that this confusion is an artifact of simulating only a single observing cycle. In actual operations, MBAs seen at low solar elongation would eventually move into the opposition region and appear even brighter there. These MBAs would be readily cataloged with their correct orbits because there is little ambiguity in the opposition region, at which point it becomes straightforward to link them to the ambiguous orbits arising from near-sun detections. \begin{figure}[tbh] \centering \epsscale{1.2} \plotone{MB_conf3.png} \caption{Examples of typical uncertainty regions for misclassified or erroneous linkages in the derived NEO orbit catalog. The plots depict Monte Carlo samples from systematic ranging that reflect the extent of possible solutions in perihelion distance $q$ and eccentricity $e$. The plots show the typical case of an MBA discovery (left) where the data are compatible with orbits spanning the NEO and MBA orbital regimes. In some such cases two disjoint solutions are present, one NEO and one MBA (center).
Erroneous linkages of two different MBAs often lead to NEO orbits with a small uncertainty, though many such cases are also hyperbolic.} \label{fig.MB_conf3} \end{figure} \begin{figure}[tbh] \epsscale{0.7} \centering \plotone{neo_prob.png} \caption{Histogram of the computed probability that a track derived from MBA tracklets corresponds to an NEO orbit, as derived from systematic ranging analyses.} \label{fig.neo_prob} \end{figure} \begin{figure}[tbh] \epsscale{0.7} \centering \plotone{opp-dist.png} \caption{Cumulative distribution of opposition distance for MBAs that appear in the NEO orbit catalog with $p_{val}>25\%$. The distribution shows that this main-belt confusion is largely limited to detections made far from opposition, i.e., at low solar elongation.} \label{fig.opp-dist} \end{figure} We also conducted systematic ranging analyses on some of the erroneous linkages leading to NEO orbits, almost all of which were erroneous MBA-MBA linkages, and these revealed a very different characteristic pattern in the $e$ vs. $q$ uncertainty space (right panel, Figure~\ref{fig.MB_conf3}). The uncertainty region was typically very small, leading to a high computed probability that the orbit is that of an NEO (``Not Clean'' in Figure~\ref{fig.neo_prob}). In these cases, the uncertainty regions were also elongated, with a sharp cutoff on one side. In many such cases the heliocentric orbits were hyperbolic. This suggests that more effective screening tests can be developed to eliminate these false MBA-MBA linkages, despite the fact that some pass even strict orbit quality tests. For example, Table~\ref{Tab.MBconfusion} shows that even for $p_{val}>90\%$ a few dozen erroneous linkages remain in the NEO catalog. However, most of these erroneous MBA-MBA linkages are readily repaired when the individual MBAs are eventually re-observed at other epochs and correctly linked through other tracklets.
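The $\chi^2$-based $p_\mathrm{val}$ screening described above can be sketched in a few lines. This is our stand-alone reconstruction, not MOPS code; it assumes two normalized residuals (RA and Dec) per detection and a six-parameter orbit fit, so the number of degrees of freedom is even and the chi-square survival function has an exact closed form.

```python
import math

def p_val(normalized_residuals, n_fit_params=6):
    """Probability of obtaining a chi-square value this large or larger
    by chance, given post-fit residuals divided by their reported
    astrometric uncertainties (two per detection: RA and Dec)."""
    chi2 = sum(r * r for r in normalized_residuals)
    dof = len(normalized_residuals) - n_fit_params
    if dof <= 0 or dof % 2:
        raise ValueError("expected an even, positive number of dof")
    # For even dof: sf(x; k) = exp(-x/2) * sum_{i<k/2} (x/2)^i / i!
    half = chi2 / 2.0
    term, total = 1.0, 1.0
    for i in range(1, dof // 2):
        term *= half / i
        total += term
    return math.exp(-half) * total
```

An orbit would then be accepted when, say, p_val(residuals) > 0.25, mirroring the 25\% cutoff adopted above.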
\subsubsection{Duplicate Orbits} Table~\ref{Tab.orbit_accuracy} indicates that there were 4118 clean linkages in the NEO catalog, but not all of these are unique. Table~\ref{Tab.orbit_types} shows that 8.7\% of them are actually duplicate entries of the same object. In Figure~\ref{fig.duplicates} we see that the duplicate NEO entries have almost identical orbits, with 95\% of duplicates matching in both eccentricity and perihelion distance (in au) to within 0.02. The non-NEO catalog has an even greater rate of duplication (17.3\%). \begin{table}[tbh] \small \caption{Duplication among derived orbits.} \begin{center} \begin{tabular}{c|cccc} \tableline \tableline Class & Clean & Unique & Duplicates & Fraction\\ \hline NEO & 4118 & 3758 & 360 & 8.7\%\\ Non-NEO & 764,198 & 632,298 & 131,900 & 17.3\%\\ \hline \hline \end{tabular} \end{center} \label{Tab.orbit_types} \end{table} Virtually all of these duplicates are readily linked with standard orbit-to-orbit identification techniques \citep{2000Icar..144...39M}, which are already part of MOPS. Most duplicates can be avoided altogether with a more efficient application of the MOPS attribution algorithm \citep{2001Icar..151..150M}. Within the linking process, a tracklet is first checked to see if it can be attributed to an object already in the catalog. If so, it is linked to that object and removed from the tracklet list so that it is not passed along to kd-tree linking. The fact that so many objects in our simulation are linked into multiple independent tracks in a single observing cycle implies, first, that there are at least six tracklets in the lunation, indicating a very solid discovery, and second, that the attribution algorithm can easily be tuned to attribute these extra tracklets before they are even linked into tracks.
Not only would such a re-tuning keep the orbit catalog cleaner, it would also cut down on the computational expense of kd-tree searches by removing tracklets from the search that are associated with already discovered objects. The problem of duplicate orbits is likely to be easily resolved through testing and tuning of existing MOPS functionality. \begin{figure}[tbh] \centering \plottwo{dup_sep.png}{sep_cdf.png} \caption{(left) Scatter plot of $\Delta q$ and $\Delta e$ between duplicate NEO orbits. (right) Cumulative distribution of duplicate separation in the $q$ and $e$ phase space. } \label{fig.duplicates} \end{figure} \section{Discussion} We performed a high-fidelity simulation of linking NEO and MBA detections into orbits at realistic detection densities, including false detections, under the constraints of one observing cycle of the LSST survey. Tracklet generation produced tracklets of which 57\% were false. This rate would be larger if the information on trail length and orientation were neglected when creating tracklets; we used this velocity information for the velocity range of 1.2--2.0 deg/day. Optimizing the kd-tree parameters to maximize the number of clean tracks also produces a large number of false tracks, and the optimum varies from field to field. The optimization is also CPU and memory intensive, though this can be managed by distributed, multi-core, or cloud computing. On a single-lunation, full-density simulation, with NEOs, MBAs and false detections, we obtained a linking efficiency of 93.6\% for $H<22$ NEOs with 12-day tracks. Linking efficiency on the full population down to $H<25$ was lower. We believe that, with modest revision and tuning of the MOPS linking algorithms and an appropriate allocation of computational resources, this number can be significantly increased, probably to 99\% or more. On the same simulation, the derived NEO catalog comprised 96\% correct linkages.
The remaining 4\% of linkages were almost exclusively incorrect MBA-MBA links, most of which should be eliminated over a longer duration simulation. Less than 0.1\% of orbits in the derived NEO catalog included false detections. Some enhancements to MOPS are needed in the linking stage to eliminate duplicate and false orbits. This includes improving the orbit quality filter and tuning of the attribution, precovery\footnote{Here ``precovery'' refers to a search of the MOPS database for tracklets observed previously that did not form a derived object because not enough tracklets were observed at the time. It is similar to attribution of new detections, but operates on past observations.} and orbit-orbit identification modules. Together with optimization of the kd-tree track search, this would increase the linking efficiency and thus increase the number of cataloged NEOs. The linking efficiency directly affects the discovery completeness as discussed in \citet{2017Veres_1}. \acknowledgements \begin{center} {\em Acknowledgments} \end{center} The Moving Object Processing System was crucial to the successful completion of this study. This tool is the product of a massive development effort involving many contributors, whom we do not list here but may be found in the author list of \citet{2013PASP..125..357D}. This report identifies a few deficiencies in MOPS, but our remarks should not be viewed in a pejorative sense. The software has so far never been fielded for its designed purpose, and we expect that minor improvements and tuning can resolve the issues that we have mentioned. We thank Larry Denneau (IfA, Univ. Hawaii) for his tremendous support in installing and running the MOPS software. This study benefited from extensive interactions with Zeljko Ivezic, Lynne Jones and Mario Juric, all from the University of Washington. As members of the LSST project, they provided vital guidance in understanding the performance and operation of LSST. 
They also provided important insight into the expected interpretation and reliability of LSST data. And they reviewed with us their early results on DECam image processing, which allowed us to include credible image differencing artifacts in the simulated LSST detection stream. Davide Farnocchia (JPL) supported the systematic ranging analyses of linking products described in this report. Mikael Granvik (Univ. Helsinki) kindly provided an early version of the \citet{2016Natur.530..303G} NEO population model, which was used extensively in this study. This research was conducted at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. \vspace{.3cm} \noindent\copyright\ Copyright 2017 California Institute of Technology. Government sponsorship acknowledged.
\section{Introduction}\label{s_intro} Consider the input/output linear system \begin{equation}\label{system_td}\left\{\begin{array}{l} x_k=\mathrm q x_{k-1}+\mathrm w u_{k-1}~~k=1,\dots,K\\ y_k=c x_k+n_k\\ \end{array}\right. \end{equation} with $K\in\mathds{N}$ (possibly tending to infinity), $u_k\in\{0,1\}$ for $k=0,\dots, K-1$, $x_k\in\mathds{R}$ for $k=0,\dots,K$, $y_k,n_k\in\mathds{R}$ for $k=1,\dots,K$, $\mathrm q,\mathrm w,c\in\mathds{R}$, and $\mathrm q\in (0,1)$ to preserve stability. Our aim is to recover the binary input $u_k$, in an online fashion, given the output $y_k$ corrupted by noise $n_k$. To this end, we revisit a low-complexity algorithm introduced in \cite{s09} and discussed in \cite{s10,s11}, and we propose a comprehensive theoretical analysis of its performance. As a result of the analysis, we will be able to evaluate the performance as a function of the system's parameters. The digital signal reconstruction problem is a paradigm in data transmission, where signals arising from finite alphabets are sent over noisy continuous channels, and in hybrid frameworks, where digital and analog signals have to be merged in the same system. In \cite{s09}, a particular instance of model \eqref{system_td} was derived as a time discretization of a convolution system, and the input estimation was cast as a deconvolution problem. The same can be achieved for model \eqref{system_td}: if we consider the system \begin{equation}\label{system}\left\{\begin{array}{l} x'(t)=ax(t)+b u(t)~~~~ t \in [0,T]\\ y(t)=cx(t)+n(t)~~~~x(0)=x_0\\ u(t), x(t), y(t),~ a, b, c \in \mathds{R}, a<0 \end{array}\right.
\end{equation} we have \begin{equation}\label{convolution} x(t)= e^{t a}x_0+ b\int_0^t e^{a(t-s)} u(s) \mathrm{d} s.\end{equation} Given \begin{equation}\label{bin_input}u(t)=\sum_{k=0}^{K-1} u_k \mathds{1}_{[k\tau,(k+1)\tau[}(t),~~~u_k \in \mathcal{U}=\{0,1\},~\tau>0\end{equation} we can discretize in the following way: by defining \begin{equation}\begin{split} x_k&:=x(k \tau )~~~ \text{ for } k=0,1,\dots, K=T/\tau\\ \mathrm q&:=e^{\tau a}\in (0,1)\\ \mathrm w&:=b\frac{1-e^{\tau a}}{-a}=-\frac{b}{a}(1-\mathrm q) \end{split}\end{equation} we obtain \begin{equation}\label{enc}\begin{split} x_k &=\mathrm q^k x_0+b\mathrm q^k \int_0^{k\tau}e^{-as}\sum_{h=0}^{K-1} u_h \mathds{1}_{[h\tau,(h+1)\tau[}(s)\mathrm{d} s\\ &=\mathrm q^k x_0+b\mathrm q^k\sum_{h=0}^{k-1} u_h\int_{h\tau}^{(h+1)\tau}e^{-as}\mathrm{d}s\\ &=\mathrm q^k x_0+\frac{b}{-a}\mathrm q^k\sum_{h=0}^{k-1} u_h e^{-a(h+1)\tau}(1-e^{a\tau})\\ &=\mathrm q^k x_0+\mathrm w\sum_{h=0}^{k-1} u_h \mathrm q^{k-1-h}\\ \end{split}\end{equation} from which we have the recursive formula \begin{equation}\label{rec_encoding} x_k=\mathrm q x_{k-1}+\mathrm w u_{k-1}. \end{equation} In system \eqref{system}, recovering $u(t)$ consists essentially in inverting the convolution integral $ y(t)= c e^{t a}x_0+ cb\int_0^t e^{a(t-s)} u(s) \mathrm{d}s+n(t)$ (where $n(t)$ represents an additive noise), which is a long-standing ill-posed mathematical problem: small observation errors may produce defective solutions. Several estimation approaches have been studied in the last fifty years and the literature on deconvolution is vast: we refer the reader to the early papers \cite{tik63, tik77} and to later works \cite{ary78,spa96,sta02,fag02}, which also show possible applications in geophysics, astronomy, image processing and biomedical systems. For more references, see \cite{SoThesis}. Most known deconvolution methods exploit the regularity of the input function to provide good estimates.
This work, instead, contributes to deconvolution in the case of discontinuous input functions. The binary alphabet, chosen mainly to keep the analysis straightforward, is consistent with many applications: the outputs of several digital devices, such as computers and detection devices \cite{s10}, are binary. Nevertheless, the algorithm and the analysis presented in this paper could be generalized to larger alphabets without much effort. In \cite{s09}, low-complexity decoding algorithms were introduced, derived from the optimal BCJR \cite{BCJR} algorithm, and applied to perform the deconvolution of the system \eqref{system} with $a=0$ and $b=c=1$. In this work, we apply the simplest of those algorithms, the so-called One State Algorithm (OSA for short), to the system \eqref{system_td}. We then describe the performance in terms of Mean Square Error (MSE) for long-time transmissions, through a probabilistic analysis arising from the Markovian behavior of the algorithm. The scheme of the analysis is the same as that proposed in \cite{s09}, but leads to completely different scenarios: while for $a=0$, $b=c=1$ standard ergodic theorems for denumerable Markov Processes were sufficient to compute the MSE, in the present case the denumerable model does not provide the expected results, and more sophisticated arguments are used, arising from Markov Processes, Iterated Random Functions (IRF for short) and sequences of probability measures. The paper is organized as follows. In Section \ref{s_prob} we complete the description of the system, giving some observations and probabilistic assumptions; in Sections \ref{s_alg} and \ref{s_sim}, we present our algorithm and some simulations. The core of the paper is the performance analysis provided in Section \ref{s_analysis}. Finally, we propose some concluding observations. Notice that Sections \ref{s_prob} and \ref{s_alg} mainly review the model presented in \cite{s09} and \cite{s10}.
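The discretized model \eqref{system_td} is straightforward to simulate numerically. The following sketch (ours; the parameter values passed in are placeholders) generates the state and output sequences with Bernoulli$(1/2)$ inputs and additive Gaussian noise, matching the probabilistic assumptions introduced below:

```python
import random

def simulate(q, w, c, sigma, K, seed=0):
    """Forward simulation of x_k = q*x_{k-1} + w*u_{k-1},
    y_k = c*x_k + n_k, with Bernoulli(1/2) inputs u_k and
    Gaussian noise n_k ~ N(0, sigma^2), starting from x_0 = 0."""
    rng = random.Random(seed)
    x = 0.0
    us, xs, ys = [], [x], []
    for _ in range(K):
        u = rng.randrange(2)            # u_{k-1} in {0, 1}
        x = q * x + w * u               # state recursion
        y = c * x + rng.gauss(0.0, sigma)
        us.append(u)
        xs.append(x)
        ys.append(y)
    return us, xs, ys
```

With $\mathrm q=\mathrm w=1/2$ the noiseless states fall in $[0,1)$, reflecting the binary-expansion structure of the state space discussed in the next section.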
\subsection{Notation} We use the following notation throughout the paper: $\Prob$ indicates a discrete probability, while $P(\cdot,\cdot)$ is the transition probability kernel of a Markov Process; $\mathbb{E}$ is the stochastic mean. $\mathcal{B}(\mathcal{S})$ indicates the Borel $\sigma$-field of a space $\mathcal{S}$. Given a bounded measurable function $v$ defined on a space $\mathcal{S}$, $Pv(x)=\int_{\mathcal{S}}v(y)P(x,\mathrm{d}y)$. For every measure $\mu$ on $(\mathcal{S},\mathcal{B}(\mathcal{S}))$ and $F \in \mathcal{B}(\mathcal{S})$, $\mu P(F)=\int_{\ms}P(x,F)\mu(\mathrm{d}x)$. The complementary error function $\text{erfc}$ is defined as $\text{erfc}(x)=\frac{2}{\sqrt{\pi}}\int_x^{+\infty} e^{-t^2}\mathrm{d}t$, $x\in\mathds{R}$; the indicator function $\mathds{1}_A(x)$ is equal to one if $x\in A$, and zero otherwise. Moreover, we often use the following acronyms: OSA for One State Algorithm, MSE for Mean Square Error, IRF for Iterated Random Functions, MAP for Maximum a Posteriori. \section{Problem Statement }\label{s_prob} Let us develop a deeper understanding of the problem and specify some assumptions. First notice that, given $ x_k=\mathrm q x_{k-1}+\mathrm w u_{k-1}$, we have \begin{equation*} x_k=\mathrm q^k x_0+\mathrm w\sum_{h=0}^{k-1}u_{k-h-1}\mathrm q^h\end{equation*} which shows that each $x_k$ is determined by the initial state $x_0$ and by a binary polynomial of degree $k-1$ in $\mathrm q$. From now on, let $x_0=0$, so that, for any $k=0,1,\dots,K$, \begin{equation} x_k\in\mathcal{X}=\mathrm w \left \{ \sum_{h=0}^{K} \mu_h \mathrm q^h,~\mu_h\in\{0,1\} \right\}.\end{equation} Moreover, let us introduce some prior probabilistic information: \emph{Assumption 1:} the additive noise $n_k$ is white Gaussian, that is, $n_1,\dots, n_K$ are realizations of independent Gaussian random variables $N_1,\dots, N_K$, with zero mean and variance $\sigma^2$.
\emph{Assumption 2:} the binary inputs $u_0,\dots, u_{K-1} $ are realizations of independent Bernoulli random variables $U_0, \dots, U_{K-1}$ with parameter $\frac{1}{2}$. Inputs and noise are also assumed to be mutually independent. Under these assumptions the system can be rewritten in probabilistic terms as follows (capital letters are used instead of small letters to indicate random quantities): for $ k=1,\dots, K$, \begin{equation} \left\{\begin{array}{l} U_{k-1}\sim \text{Ber}\left(1/2\right)\\ N_k\sim \mathcal{N}(0,\sigma^2)\\ X_k=\mathrm q X_{k-1}+\mathrm w U_{k-1}~~(X_0=0)\\ Y_k=c X_k+N_k.\\ \end{array}\right.\end{equation} While Assumption 1 is realistic in physical terms, Assumption 2 is less motivated by applications, where source bits are often not independent (for example, they may be governed by a Markov Chain). We have, however, imposed it for simplicity of treatment, although extensions to more sophisticated prior distributions do not require much effort. Similarly, we have chosen to propose a one-dimensional problem to make the analysis more readable, while the structure would be essentially the same for multidimensional problems (see \cite{s10}). Given this setting, we aim at providing a method to decode the bit $u_{k-1}$ at each time step $k=1,\dots, K$, based on the current measurement $y_k$ and the probabilistic properties of the system. In order to perform this online recovery, the algorithm is allowed to store only a small amount of information (a single real value) about the state $x_k$. \section{Binary Input Reconstruction}\label{s_alg} The One State Algorithm (OSA for short) introduced in \cite{s09} fits the requirements described in the previous section. The OSA is a suboptimal version of the Bahl, Cocke, Jelinek, and Raviv algorithm (better known as BCJR \cite{BCJR}), a prominent decoding algorithm used for convolutional codes. 
The BCJR performs a maximum a posteriori (MAP) estimation of the input bit sequence by evaluating the probabilities of the states of the encoder, through a forward and a backward recursion; it is optimal in the sense that it minimizes the Mean Square Error: \begin{equation} \text{MSE}(\RU,\RUE):=\frac{1}{K}\sum_{k=0}^{K-1}\mathds{E}(U_k-\UE_{k})^2 \end{equation} where $\RU=(U_0,\dots, U_{K-1})$ is the input sequence and $\RUE=(\UE_0,\dots, \UE_{K-1})$ is the estimated input sequence. In the binary case, this is equivalent to \begin{equation*} \text{MSE}(\RU,\RUE)=\frac{1}{K}\sum_{k=0}^{K-1}\Prob(U_k\neq\UE_{k}).\end{equation*} As shown in \cite{s09}, the BCJR can be adapted to the binary input deconvolution problem with optimal results, but with complexity drawbacks when the transmission is long. This motivated the introduction of the OSA, a BCJR-based method that consists only of a forward recursion and stores only one state at each iteration step. More precisely, the OSA pattern is as follows: \begin{enumerate} \item Initialization of the state estimate $\xe_0$; \item For $k=1,\dots,K$: given $y_k \in \mathds{R}$ and $\xe_{k-1}$, \begin{equation}\label{OSApattern}\begin{split}&\ue_{k-1}= \left\{\begin{array}{l} 0\text{ if } |y_k-c\mathrm q\xe_{k-1}|\leq |y_k-(c\mathrm q\xe_{k-1}+c\mathrm w)| \\ 1\text{ otherwise.} \end{array} \right.\\ &\xe_{k}=\mathrm q\xe_{k-1}+\mathrm w \ue_{k-1}.\end{split}\end{equation} \end{enumerate} We typically assume that $x_0$ is known, so that we can initialize correctly by setting $\xe_0=x_0$. This point will be discussed later in Section \ref{s_analysis}. 
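For concreteness, the OSA pattern \eqref{OSApattern} can be sketched in a few lines of Python; this is a minimal illustration of ours, not code from \cite{s09}, and the parameter values are hypothetical:

```python
import numpy as np

def osa(y, q, w, c, x0=0.0):
    """One State Algorithm: decode u_{k-1} from the measurements y_k.

    At each step the bit is chosen by comparing y_k with the two candidate
    noiseless outputs c*q*xhat and c*q*xhat + c*w, and the single stored
    state estimate xhat is updated accordingly.
    """
    xhat = x0
    u_hat = np.empty(len(y), dtype=int)
    for k, yk in enumerate(y):
        d0 = abs(yk - c * q * xhat)            # distance if u = 0
        d1 = abs(yk - (c * q * xhat + c * w))  # distance if u = 1
        u_hat[k] = 0 if d0 <= d1 else 1
        xhat = q * xhat + w * u_hat[k]
    return u_hat

# Simulate the system and estimate the empirical MSE (bit-error rate).
rng = np.random.default_rng(0)
q, w, c, sigma, K = 0.5, 1.0, 1.0, 0.3, 10_000  # hypothetical values
u = rng.integers(0, 2, size=K)                  # U_{k-1} ~ Ber(1/2)
x, y = 0.0, np.empty(K)
for k in range(K):
    x = q * x + w * u[k]                        # X_k = q X_{k-1} + w U_{k-1}
    y[k] = c * x + rng.normal(0.0, sigma)
mse = np.mean(u != osa(y, q, w, c))             # empirical MSE
```

Note that the decoder stores only the scalar `xhat`, which is precisely the one-state property that makes the algorithm causal and low-complexity.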
Given the probabilistic setting previously introduced, the OSA can also be written as: \begin{equation}\left\{\begin{array}{l} \UE_{k-1}=\mathds{1}_{(c\mathrm q \XE_{k-1}+\frac{c\mathrm w}{2},+\infty)}(Y_k)\\ \XE_k=\mathrm q \XE_{k-1}+\mathrm w \UE_{k-1}.\\ \end{array}\right.\end{equation} While the BCJR estimates the probabilities of all the possible states at each step, the OSA identifies the most likely state and assumes it to be the correct one; on the basis of this state estimate, it decides on the current bit. The decoding is performed with a MAP decision on the current bit, which in our probabilistic setting (Bernoulli input and Gaussian noise) reduces to the comparison between two Euclidean distances. The OSA is suboptimal, but has two main advantages: (a) it is low-complexity, both in number of computations and in storage locations; (b) it is causal, that is, it uses only past and present information to decode the current bit. Therefore, (a) it can be applied to our case, in which the number of states is (uncountably) infinite, and (b) it can be used online, so that deconvolution can start before the transmission is complete; this feature is fundamental for studying long-time transmissions. In \cite{s09}, we introduced other causal algorithms: the Causal BCJR, which is a version of the BCJR performing only the forward recursion, and the Two States Algorithm (TSA), which works essentially like the OSA, but estimates the two best states at each step, together with their probabilities of being the correct one. The TSA is thus oriented to soft decoding; in the differentiation case $a=0$, $b=c=1$, it was shown to perform similarly to the Causal BCJR and better than the OSA. Nevertheless, neither the Causal BCJR nor the TSA is efficient for system \eqref{system_td}. The first one, in fact, presents complexity drawbacks due to the number of states. 
The second one, instead, performs too similarly to the OSA, despite its higher complexity: owing to the structure of the state space $\mathcal{X}$, the two best states turn out to be very close to each other, which does not enhance the information provided by the OSA. \subsection{Similar algorithms} We notice that our setting and decoding procedure \eqref{OSApattern} are very similar to the Decision Feedback Equalizer introduced to mitigate the effects of channels' intersymbol interference (ISI, see \cite{pulf_thesis} for a complete review). As in our case, the model considered in channel equalization is a linear system with digital input, and the goal is input recovery for equalization purposes. Various methods have been proposed in the literature, and much effort has been devoted to complexity reduction (see, e.g., \cite{eyu88, due89, wil92, que07}). Typically, complexity is reduced by collecting information only from fixed time blocks, which is also our approach; more precisely, we consider only the current measurement and an estimate of the previous state, that is, ``minimal'' blocks of length one, but extensions to larger time blocks are possible in order to improve the performance. The recovery techniques in the cited works present many analogies with ours. For example, the method introduced in \cite{que07}, if restricted to minimal blocks, differs from ours only in the introduction of a prior distribution on the state $x_k$. Nevertheless, a substantial difference lies in the model: channel equalization exploits the input estimate to provide a feedback equalizer to the system, while our final aim is just the input recovery. Given these several connections, in future work we will study possible implementations of our low-complexity algorithms, derived from the BCJR, for channel equalization and propose more detailed comparisons. 
\section{Simulations}\label{s_sim} We now show a few simulation results, obtained from 2000 Monte Carlo runs of 320-bit transmissions. \begin{figure} \begin{center} \includegraphics[width=8cm]{out_simu-crop.pdf} \caption{Mean Square Error as a function of the Signal-To-Noise Ratio $\frac{c^2\mathrm w^2}{\sigma^2}$ (in dB); $b=c=1$, $a=-2,-1,-0.5,-0.3$.}\label{figF1} \includegraphics[width=8cm]{out_simu_zoom-crop.pdf} \caption{A zoom that highlights the gain obtained by decreasing $a$.}\label{figF1b}\end{center} \end{figure} We consider the system derived from \eqref{system}, with $\tau=1$, $\mathrm q=e^{ a}$, $\mathrm w=-\frac{b}{a}(1-\mathrm q)$. We show the behavior of the MSE with respect to $\frac{c^2\mathrm w^2}{\sigma^2}$, which can be interpreted as the Signal-To-Noise Ratio (SNR for short) of the transmission: since for each $k$ the transmitted signal is $cx_k\in\{c\mathrm q x_{k-1},c\mathrm q x_{k-1}+c\mathrm w\}$, $c^2\mathrm w^2$ is proportional to the signal power. As expected, the MSE tends to zero when the SNR is large, while for small SNR it tends to $\frac{1}{2}$. If we fix $b=c=1$ and let $a$ vary, we obtain a slight gain (that is, a lower MSE curve) by decreasing $a$, as shown in Figures \ref{figF1}-\ref{figF1b}. In other terms, more stable systems are preferable. This phenomenon will be revisited in Section \ref{s_analysis}. \section{Analysis of the Algorithm}\label{s_analysis} For simplicity, in what follows we will assume $\mathrm w>0$, the analysis in the case $\mathrm w<0$ being analogous. The goal of this section is the analytic evaluation of the Mean Square Error for the One State Algorithm, in the case of long-time transmissions. 
The analysis starts from the definition of the following Markov Process: \begin{equation}\left\{\begin{array}{l}D_k=\XE_k-X_k=\mathrm q D_{k-1}+\mathrm w(\UE_{k-1}-U_{k-1})~~~~~k=1,2,\dots\\ D_0=\alpha\\ \end{array}\right.\end{equation} For any $k=1,2,\dots$, if $D_{k-1}=z$ then $D_{k}\in\{\mathrm q z,\mathrm q z+\mathrm w,\mathrm q z -\mathrm w\}$, and the transition probabilities are: \begin{equation}\label{tpk}\begin{split} P(z,\mathrm q z+\mathrm w)&=\Prob(\UE_k=1,U_k=0|D_k=z)\\ &=\frac{1}{4}\text{erfc}\left(\frac{c\mathrm q z+c\mathrm w/2}{\sigma\sqrt{2}} \right)\\ P(z,\mathrm q z-\mathrm w)&=\Prob(\UE_k=0,U_k=1|D_k=z)\\ &=\frac{1}{4}\text{erfc}\left(\frac{-c\mathrm q z+c\mathrm w/2}{\sigma\sqrt{2}} \right)\\ P(z,\mathrm q z)&=1-P(z,\mathrm q z+\mathrm w)-P(z,\mathrm q z-\mathrm w).\\ \end{split}\end{equation} Since for any $k\in\mathds{N}$, $D_k\in\left \{\mathrm w \sum_{h=0}^{k-1} \mu_h \mathrm q^h,~\mu_h\in\{-1,0,1\} \right\}$, if we fix $D_0=0$, $(D_k)_{k\in\mathds{N}}$ is a Markov Process on the denumerable state space \begin{equation}\label{denumstsp} \left \{ \mathrm w \sum_{h=0}^{\infty} \mu_h \mathrm q^h,~\mu_h\in\{-1,0,1\} \text{ such that } (\mu_h)_{h\in\mathds{N}} \text{ is eventually null}\right\}.\end{equation} By eventually null, we mean that for any $D_k$ the coefficients $\mu_h$ with $h\geq k$ are null. This set is denumerable since any element can be seen as a (signed) ternary representation of an integer. Notice that fixing $D_0=0$ just means that $x_0$ is known. The key point of the analysis is that, for large $k$, the MSE of the OSA can be computed using the ergodic properties of the Markov Process $(D_k)_{k\in\mathds{N}}$; more precisely, we require the existence of a stationary distribution. 
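The transition probabilities \eqref{tpk} are straightforward to evaluate numerically. The following minimal sketch (the helper name and parameter values are ours, purely illustrative) computes the three probabilities and the per-step probability that the decoded and true bits differ:

```python
import math

def transition_probs(z, q, w, c, sigma):
    """Transition probabilities (tpk) of the error process D_k from state z.

    Returns (P(z, qz+w), P(z, qz-w), P(z, qz))."""
    s = sigma * math.sqrt(2.0)
    p_plus = 0.25 * math.erfc((c * q * z + c * w / 2) / s)    # Uhat=1, U=0
    p_minus = 0.25 * math.erfc((-c * q * z + c * w / 2) / s)  # Uhat=0, U=1
    return p_plus, p_minus, 1.0 - p_plus - p_minus

# Per-step error probability from state z is P(z,qz+w) + P(z,qz-w);
# at z = 0 it equals (1/2) erfc(c w / (2 sqrt(2) sigma)).
q, w, c, sigma = 0.5, 1.0, 1.0, 0.3   # hypothetical values
pp, pm, p0 = transition_probs(0.0, q, w, c, sigma)
err0 = pp + pm
```

The symmetry of the two erfc arguments at $z=0$ reflects the fact that, with a correct state estimate, the two error events $(\UE,U)=(1,0)$ and $(0,1)$ are equally likely.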
In what follows, we will propose two different ways to study the stationary distribution: the first one does not depend on the initial state $D_0\in \ms$ (where $\ms$ is a compact set that will be defined shortly), but requires some contractive properties; the second one is valid even in the non-contractive case, but depends on the initial state. For both methods, the setting presented so far is not yet adequate for studying the possible stationary distributions, since the states of $(D_k)_{k\in\mathds{N}}$ are transient: when the process visits a state, it leaves it forever (except for a negligible set of states with a periodic ternary representation, for example $0$ and $\pm\mathrm w/(1-\mathrm q)$); moreover, the process is not irreducible, since there is no reciprocal communication between the states (see \cite{s09} for more details). Thus, the hypotheses needed to apply the standard ergodic results for denumerable Markov Processes are not fulfilled (see \cite{s09}). In other terms, if a stationary distribution exists, it does not concentrate on single states. This suggests considering $(D_k)_{k\in\mathds{N}}$ on a non-denumerable state space. In particular, we can extend \eqref{denumstsp} to \begin{equation}\label{extstsp}\left \{ \mathrm w \sum_{h=0}^{\infty} \mu_h \mathrm q^h,~\mu_h\in\{-1,0,1\}\right\}.\end{equation} It is interesting to note that if $\mathrm q\geq\frac{1}{3}$, then the set \eqref{extstsp} coincides with the closed interval $\left[-\frac{\mathrm w}{1-\mathrm q},+\frac{\mathrm w}{1-\mathrm q}\right]$, while if $\mathrm q<\frac{1}{3}$ it is a Cantor set included in $\left[-\frac{\mathrm w}{1-\mathrm q},+\frac{\mathrm w}{1-\mathrm q}\right]$ (for a proof of this fact, see \cite{s10}). Let us then consider as state space \begin{equation*} \ms=\left[-\frac{\mathrm w}{1-\mathrm q},+\frac{\mathrm w}{1-\mathrm q}\right]\end{equation*} and study the ergodic properties of the Markov Process $(D_k)_{k\in\mathds{N}}$ on $\ms$. 
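The interval/Cantor dichotomy can be observed numerically: truncating the sums in \eqref{extstsp} at depth $n$ and measuring the largest gap between consecutive attainable values, the gap shrinks like $\mathrm q^n$ when $\mathrm q\geq 1/3$, while for $\mathrm q<1/3$ a central gap of width approximately $\mathrm w(1-3\mathrm q)/(1-\mathrm q)$ persists. A quick sketch of ours (an illustration, not the proof in \cite{s10}):

```python
from itertools import product

def max_gap(q, w=1.0, n=10):
    """Largest gap between consecutive truncated sums w*sum(mu_h q^h),
    mu_h in {-1, 0, 1}, h < n.  There are 3**n values, so keep n small."""
    sums = sorted(w * sum(m * q**h for h, m in enumerate(mu))
                  for mu in product((-1, 0, 1), repeat=n))
    return max(b - a for a, b in zip(sums, sums[1:]))

gap_dense = max_gap(0.4)    # q >= 1/3: gap bounded by 2*w*q**n/(1-q), tiny
gap_cantor = max_gap(0.25)  # q < 1/3: central gap ~ w(1-3q)/(1-q) = 1/3
```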
Before continuing the analysis, let us introduce some rigorous notions that will be used in what follows. Let $\mathcal{B}(\ms)$ be the Borel $\sigma$-field of $\ms$. We call \textit{transition probability kernel} (see, e.g., \cite[Section 3.4.1]{mey93}) a map $P:\ms \times \mathcal{B}(\ms)\to[0,1]$ such that\\ (i) for each $F \in\mathcal{B}(\ms)$, $P(\cdot, F)$ is a non-negative measurable function;\\ (ii) for each $x \in \ms$, $P(x, \cdot)$ is a probability measure on $(\ms,\mathcal{B}(\ms))$. Given a bounded measurable function $v$ on $\ms$, we denote by $Pv$ the bounded measurable function defined as \begin{equation*} Pv(x)=\int_{\ms}v(y)P(x,\mathrm{d}y).\end{equation*} Furthermore, let $\mu$ be a measure on $(\ms,\mathcal{B}(\ms))$: we define the measure $\mu P$ by \begin{equation*}\mu P(F)=\int_{\ms}P(x,F)\mu(\mathrm{d}x)\;\;\;\;\;F \in \mathcal{B}(\ms).\end{equation*} We finally define the $n$-th power of the transition kernel $P$ by $P^1(x,F)=P(x,F)$ and $P^n(x,F)=\int_{\ms}P(x,\mathrm{d}y)P^{n-1}(y,F)$. It is easy to see that the $P^n(x,F)$ are transition kernels, too. At this point, we can make explicit the relationship between the MSE and $(D_k)_{k\in\mathds{N}}$. In what follows, we will always take $D_0=0$, unless otherwise specified. Given the transition probability kernel $P$ of $(D_k)_{k\in\mathds{N}}$, defined by \eqref{tpk}, we have \begin{equation}\begin{split}\label{acheserveilMP}\text{MSE}&(\RU,\RUE) =\frac{1}{K}\sum_{k=0}^{K-1}\Prob(\UE_k\neq U_k)=\\ &=\frac{1}{K}\sum_{k=0}^{K-1}\int_{\ms}\Prob(\UE_k\neq U_k|D_k=z)P^k(\alpha,\mathrm d z)=\frac{1}{K}\sum_{k=0}^{K-1}P^k g(\alpha) \end{split}\end{equation} where \begin{equation}\label{def_g}g(z)=\Prob(\UE_k\neq U_k|D_k=z)\end{equation} and $D_0=\alpha$ is any initial state in $\ms$. Therefore $P^k g$ (and in particular its behavior for large $k$) will be the object of our further analysis. 
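Relation \eqref{acheserveilMP} can be evaluated exactly for small $K$ by expanding the ternary tree of successors of $D_0=\alpha$. The following sketch is ours, with hypothetical parameter values; its cost grows like $3^K$, which is precisely the complexity obstruction that motivates the asymptotic analysis:

```python
import math

def _probs(z, q, w, c, sigma):
    # Transition probabilities (tpk): to qz+w, qz-w and qz respectively.
    s = sigma * math.sqrt(2.0)
    pp = 0.25 * math.erfc((c * q * z + c * w / 2) / s)
    pm = 0.25 * math.erfc((-c * q * z + c * w / 2) / s)
    return pp, pm, 1.0 - pp - pm

def mse_exact(K, q, w, c, sigma, alpha=0.0):
    """(1/K) * sum_{k<K} P^k g(alpha), expanding the 3^K-leaf tree of D_k."""
    dist = [(alpha, 1.0)]       # exact distribution of D_k, starting at k = 0
    total = 0.0
    for _ in range(K):
        new_dist = []
        for z, p in dist:
            pp, pm, p0 = _probs(z, q, w, c, sigma)
            total += p * (pp + pm)   # g(z) = P(z,qz+w) + P(z,qz-w)
            new_dist += [(q * z + w, p * pp),
                         (q * z - w, p * pm),
                         (q * z, p * p0)]
        dist = new_dist
    return total / K

mse10 = mse_exact(10, q=0.5, w=1.0, c=1.0, sigma=0.3)  # hypothetical values
```

As a sanity check, decreasing the noise variance decreases the computed value, in agreement with the simulated MSE curves.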
In the sequel, we will distinguish two main scenarios: when $(D_k)_{k\in\mathds{N}}$ has some \emph{contractive} properties and when it does not. In the first scenario, we can exploit the theory of Iterated Random Functions to prove that $P^k g$ converges, while in the second one we will use known results on the convergence of probability measures. \subsection{Contractive case} Let $ l\text{-Lip}(\ms)$ be the set of all the Lipschitz functions on $\ms$ with Lipschitz constant equal to $l$. We define the Kantorovich (or Wasserstein) distance $d_W$ between probability measures (see \cite[Section 2.1, Example 3.2.2]{rac91}) as \begin{equation}\label{Kantorovich} d_W(\mu,\nu)=\sup_{f\in 1\text{-Lip}(\ms)}\bigg|\int_{\ms} f\mathrm d (\mu- \nu) \bigg|.\end{equation} We can prove the following \begin{theorem}\label{ERGT} If $\frac{c^2\mathrm w^2}{\sigma^2}>4$ and $\mathrm q <\frac{1}{3+\sqrt{\frac{2}{e\pi}}}$ or if $\frac{c^2\mathrm w^2}{\sigma^2}\leq 4$ and $\mathrm q\leq \frac{1}{1+\sqrt{\frac{2}{e\pi}}}$, then \begin{equation}\label{ERG} \lim_{K\to\infty}\text{MSE}(\RU,\RUE)=\int_{\ms}g \mathrm d\mu~~~\text{ for any } D_0\in \ms\end{equation} where $\mu$ is the unique probability measure such that $\sup_{x\in\ms} d_W(P^k(x,\cdot),\mu(\cdot)) \stackrel{k\to\infty}{\longrightarrow}0$, $P$ being the kernel of $(D_k)_{k\in \mathds{N}}$.\end{theorem} Notice that $g(z)$ is time-invariant (i.e., it does not depend on $k$) and can be analytically computed. In fact, given $D_k=z$, $D_{k+1}=\mathrm q z$ if and only if $\UE_k=U_k$, $D_{k+1}=\mathrm q z+\mathrm w$ if and only if $\UE_k=1$ and $U_k=0$, and $D_{k+1}=\mathrm q z-\mathrm w$ if and only if $\UE_k=0$ and $U_k=1$, so that \begin{equation} g(z)=P(z,\mathrm q z+\mathrm w)+P(z,\mathrm q z -\mathrm w).\end{equation} Furthermore, the probability measure $\mu$ can be numerically evaluated. Recall that, as already noticed, $\frac{c^2\mathrm w^2}{\sigma^2}$ can be interpreted as the Signal-To-Noise Ratio. 
Moreover, the bounds on $\mathrm q$ can be interpreted as requiring a stronger stability for convergence. In order to prove the theorem, let us introduce some elements of the theory of Iterated Random Functions. \subsubsection{Iterated Random Functions}\label{IRF} Let $(\ms, d)$ be a complete metric space and $\mis$ be a measurable space. Consider a measurable function $w: \ms \times \mis \to \ms$ and, for each fixed $s \in \mis$, let $w_s(x):=w(x,s)$, $x\in\ms$. Let $(I_k)_{k\in\mathds{N}}$ be a stochastic sequence in $\mis$ such that $I_0, I_1, \dots$ are independent and identically distributed. Then, the set $\{w_{I_k}(x),~k\in\mathds{N}\}$ is a family of random functions. The systems obtained by iterating such random functions, called Iterated Random Functions (IRF), are studied for diverse purposes: for example, IRF with contractive properties are used to construct fractal sets, see \cite{hut81, dia99}. More relevant for our study is the use of IRF to analyze Markov Processes. Given an IRF and a starting state $x\in\ms$, we can define the induced Markov Process $(Z_k(x))_{k\in\mathds{N}}$ as \begin{equation} Z_k(x):=w_{I_{k-1}}\circ w_{I_{k-2}}\circ w_{I_{k-3}}\circ\cdots\circ w_{I_{0}}(x)~~~~(k\geq 1)\end{equation} and analyze its asymptotic behavior through the properties of $w_{I_k}(x), k\in\mathds{N}$. It has been proved that if the $w_{I_k}(x)$ have \emph{some} contractive properties, the transition probability kernel of $Z_n(x)$ converges to a probability measure that is unique for all initial states $x\in \ms$. The required contractive properties may be slightly different: \cite{dia99} studied the case of Lipschitz functions $w_{I_k}(x)$ ``contracting on average'', while similar results have been obtained by \cite{sten01} without the continuity requirement on $w_{I_k}(x)$, by \cite{stein99} for ``locally contractive'' functions, and by \cite{jar01} for ``non-separating on average'' functions. 
A useful survey on the topic has recently been provided in \cite{ios09}. Let us show how to exploit the IRF theory in our framework. The evolution of $(D_k)_{k\in\mathds{N}}$ can be modeled by IRF. We consider the complete metric space $\ms=\left[-\frac{\mathrm w}{1-\mathrm q},+\frac{\mathrm w}{1-\mathrm q}\right]$ naturally endowed with the Euclidean metric $d$ of $\mathds{R}$, the measurable space $\mis=\{0,1\}\times \mathds{R}$ and the stochastic process $$I_k=(U_k,N_{k+1}),~~k\in\mathds{N}$$ on $\mis$, and we define the random function \begin{equation} w_{I_k}(x)=\mathrm q x+\mathrm w \mathds{1}_{(c\mathrm q x+c\mathrm w\left(\frac{1}{2}- U_k\right),+\infty)}(N_{k+1})-\mathrm w U_k,~~x\in\ms\end{equation} that describes the dynamics of $(D_k)_{k\in\mathds{N}}$. The key result for our purpose is the following theorem (here stated for compact spaces), which does not require continuity: \textbf{Stenflo's Theorem} \cite[Theorem 1]{sten01} Suppose that there exists a constant $l<1$ such that \begin{equation}\label{av_conv} \mathbb{E}[d(w_{I_0}(x),w_{I_0}(y))]\leq l~d(x,y)\end{equation} for all $x,y\in\ms$, $(\ms,d)$ being a compact metric space. Then there exist a unique probability measure $\mu$ and a positive constant $\gamma_{\ms}$ such that \begin{equation} \sup_{x\in\ms} d_W(P^n(x,\cdot),\mu(\cdot))\leq \frac{\gamma_{\ms}}{1-l}l^n~~~~n\geq 0\end{equation} where $P^n(x,\cdot)$ is the $n$-step transition probability kernel of the Markov Process $Z_n(x)$.\\ Theorem \ref{ERGT} can now be proved by applying Stenflo's Theorem. \subsubsection{Proof of Theorem \ref{ERGT}} Let us analyze condition (\ref{av_conv}). Consider $x,y\in\ms$ with $x>y$ (recall that $\mathrm q>0, \mathrm w>0$). 
Let $H=H(x,y,I_0)$ and $\mathcal{I}_u$ be defined respectively as \begin{equation*}\begin{split}& H:= \mathds{1}_{\left(c\mathrm q y+c\mathrm w\left(\frac{1}{2}- U_0\right),c\mathrm q x+c\mathrm w\left(\frac{1}{2}- U_0\right)\right)}(N_1)\\ &\mathcal{I}_u:=\frac{1}{\sqrt{2\pi}\sigma}\int_{c\mathrm q y+c\mathrm w\left(\frac{1}{2}- u\right)}^{c\mathrm q x+c\mathrm w\left(\frac{1}{2}- u\right)}e^{-\frac{n^2}{2\sigma^2}}\mathrm d n\\ &=\frac{1}{2}\text{erfc}\left(c\frac{\mathrm q y+\mathrm w\left(\frac{1}{2}- u\right)}{\sigma\sqrt{2}}\right)-\frac{1}{2}\text{erfc}\left(c\frac{\mathrm q x+\mathrm w \left(\frac{1}{2}- u\right)}{\sigma\sqrt{2}}\right).\\ \end{split}\end{equation*} Hence, \begin{equation}\label{my_contr} \begin{split} &\mathbb{E}\left[|w_{(U_0,N_1)}(x)-w_{(U_0,N_1)}(y)|\right] =\mathbb{E}\left[ |\mathrm q(x-y)-\mathrm w H | \right]\\ &=\sum_{u\in\{0,1\}}\Prob(U_0=u) \frac{1}{\sqrt{2\pi}\sigma}\int_{\mathds{R}}e^{-\frac{n^2}{2\sigma^2}}|\mathrm q(x-y)-\mathrm w H | \mathrm{d} n\\ &=\frac{1}{2}\sum_{u\in\{0,1\}}\frac{1}{\sqrt{2\pi}\sigma}\int_{\mathds{R}}e^{-\frac{n^2}{2\sigma^2}}|\mathrm q(x-y)-\mathrm w H |\mathrm{d} n\\ &=\frac{1}{2}\sum_{u\in\{0,1\}}\left[|\mathrm q(x-y)-\mathrm w|\mathcal{I}_u+\mathrm q(x-y)(1-\mathcal{I}_u)\right].\\ \end{split}\end{equation} If $\mathrm q(x-y)>\mathrm w$, then $\mathbb{E}\left[|w_{(U_0,N_1)}(x)-w_{(U_0,N_1)}(y)|\right]<\mathrm q(x-y)$ and the contraction would be proved with $l=\mathrm q$. This is never the case when $\mathrm q<\frac{1}{3}$, since then $|x-y|\leq 2\frac{\mathrm w}{1-\mathrm q}< \frac{\mathrm w}{\mathrm q}$. Let us then consider $\mathrm q(x-y)<\mathrm w$. 
We can write \begin{equation}\begin{split}\mathbb{E}&\left[|w_{(U_0,N_1)}(x)-w_{(U_0,N_1)}(y)|\right]\\ &=\frac{1}{2}\sum_{u\in\{0,1\}}\left[(\mathrm w-\mathrm q(x-y))\mathcal{I}_u+\mathrm q(x-y)(1-\mathcal{I}_u)\right]\\ &\leq\frac{1}{2}\sum_{u\in\{0,1\}}\mathrm w\mathcal{I}_u+\mathrm q(x-y).\\ \end{split}\end{equation} The last expression is obtained by neglecting $-\sum_{u\in\{0,1\}}\mathrm q(x-y)\mathcal{I}_u$, which is the sum of two second-degree terms in $(x-y)$, since, by the integral mean value theorem, \begin{equation} \mathcal{I}_u= \frac{1}{\sqrt{2\pi}\sigma}c\mathrm q (x-y)e^{-\frac{n_0^2}{2\sigma^2}}\end{equation} for some $n_0\in\left(c\mathrm q y+c\mathrm w\left(\frac{1}{2}-u\right),c\mathrm q x+c\mathrm w\left(\frac{1}{2}-u\right) \right)$, with $n_0 \neq 0$. The remaining terms are of order one. Notice that \begin{equation}\frac{1}{2}\sum_{u\in\{0,1\}}\mathrm w\mathcal{I}_u+\mathrm q(x-y)=F(x)-F(y) \end{equation} where \begin{equation*} F(x)=\mathrm q x-\frac{\mathrm w}{4}\text{erfc}\left(\frac{c\mathrm q x+c\frac{\mathrm w}{2}}{\sigma\sqrt{2}}\right)-\frac{\mathrm w}{4}\text{erfc}\left(\frac{c\mathrm q x-c\frac{\mathrm w}{2}}{\sigma\sqrt{2}}\right).\end{equation*} Therefore, the claim follows if $F(x)$ is a contraction; since $F(x)$ is differentiable and monotone increasing, its Lipschitz constant is the maximum of its first derivative: \begin{equation}\label{F1}\begin{split}& F'(x)=\mathrm q+\\&\frac{c\mathrm w\mathrm q}{2\sigma\sqrt{2\pi}}\left[\exp\left(-\frac{\left(c\mathrm q x+c\frac{\mathrm w}{2}\right)^2}{2\sigma^2}\right) +\exp\left(-\frac{\left(c\mathrm q x-c\frac{\mathrm w}{2}\right)^2}{2\sigma^2}\right)\right].\\\end{split}\end{equation} In order to find the maximum of $F'(x)$, let us compute $F''(x)$: \begin{equation*}\begin{split}&F''(x)=\\ &=-\frac{c\mathrm w\mathrm q}{2\sigma\sqrt{2\pi}}\frac{2\left(c\mathrm q x+c\frac{\mathrm w}{2}\right)c\mathrm q}{2\sigma^2} \exp\left(-\frac{\left(c\mathrm q x+c\frac{\mathrm w}{2}\right)^2}{2\sigma^2}\right)+\\ &~~~-\frac{c\mathrm w\mathrm q}{2\sigma\sqrt{2\pi}}\frac{2\left(c\mathrm q x-c\frac{\mathrm w}{2}\right)c\mathrm q}{2\sigma^2} \exp\left(-\frac{\left(c\mathrm q x-c\frac{\mathrm w}{2}\right)^2}{2\sigma^2}\right)\\ \end{split}\end{equation*} which vanishes for $x$ satisfying \begin{equation}\label{exp_eq} \left(\mathrm q x+\frac{\mathrm w}{2}\right)\exp\left(-\frac{c^2\mathrm q\mathrm w x}{\sigma^2}\right)+ \left(\mathrm q x-\frac{\mathrm w}{2}\right)=0, \end{equation} a solution of which is $x=0$. Now, considering that $F'(x)$ is determined by a mixture of two Gaussians, two cases may occur: (a) $x=0$ is the maximum of $F'(x)$; (b) $x=0$ is a minimum for $F'(x)$ and there are two symmetric maxima ($F'(x)$ is an even function) at $x_0\in (0,\frac{\mathrm w}{1-\mathrm q}]$ and $-x_0$, but $x_0$ cannot be analytically computed from the exponential equation (\ref{exp_eq}). Let us study the sign of $F''(x)$ for $x\to 0$ in order to determine the nature of the point $x=0$ for $F'(x)$. 
Notice that \begin{equation}\begin{split}&F''(x)>0 \Leftrightarrow -\left(\mathrm q x+\frac{\mathrm w}{2}\right)\exp\left(-\frac{c^2\mathrm q\mathrm w x}{\sigma^2}\right)- \left(\mathrm q x-\frac{\mathrm w}{2}\right)>0.\\ \end{split}\end{equation} Moreover, if $x\to 0$, $\exp\left(-\frac{c^2\mathrm q\mathrm w x}{\sigma^2}\right)\sim 1 - \frac{c^2\mathrm q\mathrm w x}{\sigma^2} $ and \begin{equation}\begin{split}&-\left(\mathrm q x+\frac{\mathrm w}{2}\right)\exp\left(-\frac{c^2\mathrm q\mathrm w x}{\sigma^2}\right)- \left(\mathrm q x-\frac{\mathrm w}{2}\right)\sim-2\mathrm q x+\frac{c^2\mathrm q\mathrm w^2 x}{2\sigma^2}.\\ \end{split}\end{equation} Finally, if $ \frac{c^2\mathrm w^2}{\sigma^2}>4$, \begin{equation} -2\mathrm q x+\frac{c^2\mathrm q\mathrm w^2 x}{2\sigma^2}\to 0^+ \text{ for } x\to 0^+\end{equation} \begin{equation} -2\mathrm q x+\frac{c^2\mathrm q\mathrm w^2 x}{2\sigma^2} \to 0^- \text{ for } x\to 0^-.\end{equation} In conclusion, $x=0$ is a maximum point for $F'(x)$ if and only if $\frac{c^2\mathrm w^2}{\sigma^2}<4$, that is, only for large noise. Let us now study the case $\frac{c^2\mathrm w^2}{\sigma^2}>4$ (where $x=0$ is a minimum point) and state conditions that make $F(x)$ a contraction. In particular, consider $x>0$ and $\sigma^2$ close to zero: by (\ref{exp_eq}), $|x-\frac{\mathrm w}{2\mathrm q}|$ tends to zero more quickly than $\sigma^2$, hence $\exp\left(-\frac{\left(c\mathrm q x-c\frac{\mathrm w}{2}\right)^2}{2\sigma^2}\right)$ tends to one and the maximum of $F'(x)$ (see \eqref{F1}) may take very large values. More generally, we observe that the points $x=\pm\frac{\mathrm w}{2\mathrm q}$ are critical, as they are the only points where the OSA fails: for these values, the error probability given by (\ref{tpk}) is $\frac{1}{2}$, regardless of the noise variance. 
This ``singular'' phenomenon is more evident when the noise is small; in terms of $F(x)$, it causes large variations (hence the loss of contractivity) in a neighborhood of the points $\pm\frac{\mathrm w}{2\mathrm q}$, the radius of the neighborhood being larger for smaller $\sigma^2$. Let us restrict to the case $\mathrm q<\frac{1}{3}$, so that $\pm\frac{\mathrm w}{2\mathrm q}\notin \ms$. Under this assumption, for any $x\in\ms$, \begin{equation}\begin{split} &\exp\left(-\frac{\left(c\mathrm q x+c\frac{\mathrm w}{2}\right)^2}{2\sigma^2}\right)<\exp\left(-\frac{\left(-c\mathrm q\frac{\mathrm w}{1-\mathrm q}+c\frac{\mathrm w}{2}\right)^2}{2\sigma^2}\right)\\ &\exp\left(-\frac{\left(c\mathrm q x-c\frac{\mathrm w}{2}\right)^2}{2\sigma^2}\right)<\exp\left(-\frac{\left(c\mathrm q \frac{\mathrm w}{1-\mathrm q}-c\frac{\mathrm w}{2}\right)^2}{2\sigma^2}\right)\\ \end{split}\end{equation} hence \begin{equation} F'(x)\leq\mathrm q+\frac{c\mathrm w\mathrm q}{\sigma\sqrt{2\pi}} \exp\left(-\frac{ c^2 \mathrm w^2\left(\frac{1-3\mathrm q}{2(1-\mathrm q)}\right)^2}{2\sigma^2}\right).\end{equation} Since, for $t\geq 0$, $\max_t te^{-t^2}=\frac{1}{\sqrt{2e}}$, \begin{equation*} \frac{\mathrm q}{\sqrt{\pi}}\frac{2}{1-3\mathrm q}\frac{1}{\sqrt{2e}}<1 \Longrightarrow F'(x)<1.\end{equation*} In conclusion, a sufficient condition for average contractivity is \begin{equation*}\mathrm q<\frac{1}{3+\sqrt{\frac{2}{e\pi}}}.\end{equation*} Let us now study the case $\frac{c^2\mathrm w^2}{\sigma^2}\leq 4$ ($x=0$ is the maximum point): \begin{equation} \begin{split} F'(x)&\leq F'(0)=\mathrm q+2\frac{c\mathrm w\mathrm q}{2\sigma \sqrt{2\pi}}\exp\left(-\frac{c^2\mathrm w^2}{8\sigma^2}\right) \\ &\leq\mathrm q+2\frac{\mathrm q}{ \sqrt{2e\pi}}\\ \end{split} \end{equation} then $F'(x)<1$ when $$\mathrm q<\frac{1}{1+\sqrt{\frac{2}{e\pi}}}.$$ In conclusion, we have shown that if $\frac{c^2\mathrm w^2}{\sigma^2}>4$ and $\mathrm q<\frac{1}{3+\sqrt{\frac{2}{e\pi}}}$, or if $\frac{c^2\mathrm w^2}{\sigma^2}\leq 4$ and 
$\mathrm q\leq \frac{1}{1+\sqrt{\frac{2}{e\pi}}}$, then the hypotheses of Stenflo's Theorem are fulfilled. Now, let us prove the convergence of the Mean Square Error. Since $g\in L_g$-Lip$(\ms)$ where $L_g=\max_{z\in\ms} |g'(z)|$ and \begin{equation} |g'(z)|=\frac{c\mathrm q}{2\sigma\sqrt{2\pi}}\left|-e^{-\frac{(c\mathrm q z+c\mathrm w/2)^2}{2\sigma^2}} +e^{-\frac{(-c\mathrm q z+c\mathrm w/2)^2}{2\sigma^2}}\right|\leq\frac{c\mathrm q}{\sigma\sqrt{2\pi}} \end{equation} then $L_g\leq \frac{c\mathrm q}{\sigma\sqrt{2\pi}}$ is finite. Since for any $L>0$ \begin{equation*} \begin{split} &\sup_{f\in L\text{-Lip}(\ms)}\bigg|\int_{\ms} f \mathrm d (\mu- \nu)\bigg| = \sup_{f\in L\text{-Lip}(\ms)}L\bigg|\int_{\ms}\frac{1}{L} f \mathrm d (\mu- \nu)\bigg|\\ &\leq \sup_{f\in 1\text{-Lip}(\ms)}L\bigg|\int_{\ms} f \mathrm d (\mu- \nu)\bigg|= L~ d_W(\mu,\nu) \end{split} \end{equation*} we have \begin{equation} \begin{split} \sup_{x\in\ms}\bigg| P^k g(x)&-\int_{\ms} g\mathrm d \mu\bigg| =\sup_{x\in\ms}\bigg | \int_{\ms} g(z)P^k(x,\mathrm d z)-\int _{\ms} g\mathrm d \mu\bigg| \\ &\leq \sup_{x\in\ms} \sup_{f\in L_g\text{-Lip}(\ms)}\bigg|\int_{\ms} f(z)P^k(x,\mathrm d z)-\int_{\ms} f\mathrm d \mu\bigg|\\ &= \sup_{x\in\ms} L_g d_W(P^k(x,\cdot), \mu(\cdot))\stackrel{k\to\infty}{\longrightarrow}0.\\ \end{split} \end{equation} The convergence is then assured also for the Cesàro mean, for any initial state $D_0=\alpha\in\ms$: \begin{equation} \frac{1}{K}\sum_{k=0}^{K-1}P^k g(\alpha)\stackrel{K\to\infty}{\longrightarrow}\int_{\ms} g\mathrm d \mu~~~~\forall \alpha\in\ms.\end{equation} \qed Notice that the initial state does not affect the convergence value as long as it is contained in $\ms$; however, $\ms$ was obtained by fixing $D_0=0$, which may seem inconsistent. 
However, even if we consider $D_0=\alpha\notin \ms$, given the dynamics of $D_k$ ($\alpha$ is multiplied by $\mathrm q$ at each step), $\ms$ turns out to be the ``limit'' state space, and with high probability $D_k$ enters $\ms$ after some finite number of steps, so it makes sense to restrict to $\ms$. Further details are omitted here for brevity, but one should be convinced that taking the initial error in a compact set centered at $0$ is a suitable choice. \subsection{Non-contractive case} If the hypotheses of Theorem \ref{ERGT} are not fulfilled, we can prove the following \begin{theorem}\label{NonContrTheor} For any initial state $D_0=\alpha$, there exists a unique probability measure $\phi(\alpha,\cdot)$ such that \begin{equation} \label{NonContr} \lim_{K\to\infty}\text{MSE}(\RU,\RUE)=\phi g(\alpha) \end{equation} where $\phi g(\alpha)=\int g(z)\phi(\alpha,\mathrm{d} z) $. \end{theorem} We remark that although this result also holds in the contractive case, the IRF argument is preferable there, since the convergence value is independent of $\alpha$. \subsubsection{Proof of Theorem \ref{NonContrTheor}} Let $A_{x,n}$ be the set of points that $D_k$ can reach in $n$ steps starting from $x$, i.e., $A_{x,n}=\{\mathrm q^n x +\mathrm w \sum_{i=0}^{n-1}\alpha_i\mathrm q^i,~\alpha_i\in\{-1,0,1\} \}$. \begin{lemma}\label{equilip} For any $f\in l_f \text{-Lip}(\ms) $, there exists a positive constant $\mathrm{M}_f$ such that $P^n f \in \frac{\mathrm{M}_f}{1-\mathrm q}\text{-Lip}(\ms)$ for any $n\in\mathds{N}$. \end{lemma} \proof We have \begin{equation} \max_{x\in\ms}\max_{y\in A_{x,1}} \left|\frac{\mathrm{d}}{\mathrm{d}x}P(x,y)\right| \leq \frac{c\mathrm q}{\sigma\sqrt{2\pi}} \end{equation} hence the three functions $P(x,\mathrm q x)$, $P(x, \mathrm q x+\mathrm w)$ and $P(x,\mathrm q x-\mathrm w)$ are Lipschitz with constant $l:=\frac{c\mathrm q}{\sigma\sqrt{2\pi}}$ on $\ms$. 
For any $x,x_0 \in \ms$ and any $(y,y_0)\in\{(\mathrm q x,\mathrm q x_0)$, $(\mathrm q x+\mathrm w, \mathrm q x_0+\mathrm w), (\mathrm q x-\mathrm w, \mathrm q x_0-\mathrm w) \}$, \begin{equation} \begin{split} &\left|P(x,y)f(y)-P(x_0,y_0)f(y_0) \right|=\\ &~~~~=\left|P(x,y)f(y)\pm P(x_0,y_0)f(y) - P(x_0,y_0)f(y_0) \right| \\ &~~~~\leq ||f||_{\infty}|P(x,y)-P(x_0,y_0)|+ P(x_0,y_0)|f(y) -f(y_0)| \\ &~~~~\leq ||f||_{\infty} l |x-x_0|+ P(x_0,y_0)l_f|y-y_0| \\ &~~~~\leq \left(||f||_{\infty} l+ P(x_0,y_0)l_f \mathrm q\right) |x-x_0|. \end{split} \end{equation} Thus, \begin{equation} \begin{split} &\left|Pf(x)-Pf(x_0) \right|=\\&=|P(x,\mathrm q x)f(\mathrm q x)-P(x_0,\mathrm q x_0)f(\mathrm q x_0)+ P(x,\mathrm q x+\mathrm w)f(\mathrm q x+\mathrm w)\\&~~~~-P(x_0,\mathrm q x_0+\mathrm w)f(\mathrm q x_0+\mathrm w)+P(x,\mathrm q x-\mathrm w)f(\mathrm q x-\mathrm w)\\&~~~~-P(x_0,\mathrm q x_0-\mathrm w)f(\mathrm q x_0-\mathrm w)| \\ &\leq \left(3 l ||f||_{\infty}+l_f \mathrm q\right) |x-x_0|. \end{split} \end{equation} In conclusion, $Pf\in L_1\text{-Lip}(\ms)$ where $L_1=3 l||f||_{\infty} +l_f \mathrm q$.
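The constant in the last step comes from summing the per-branch estimate over the three pairs $(y,y_0)$ and using that the transition probabilities sum to one:

```latex
\begin{aligned}
\left|Pf(x)-Pf(x_0)\right|
&\leq \sum_{(y,y_0)}\left(||f||_{\infty}\, l
      + P(x_0,y_0)\, l_f\, \mathrm q\right)|x-x_0|\\
&= \Big(3\,||f||_{\infty}\, l
      + l_f\, \mathrm q \sum_{z_0\in A_{x_0,1}}P(x_0,z_0)\Big)|x-x_0|
 = \left(3\, l\, ||f||_{\infty} + l_f\, \mathrm q\right)|x-x_0|.
\end{aligned}
```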
Now, given any $n\in\mathds{N}$ and $x=x_0+\delta$, \begin{equation}\label{Pngeneral} \begin{split} &\left|P^nf(x)-P^n f(x_0) \right|= \left|\sum_{z\in A_{x,1}}P(x,z) P^{n-1}f(z)- \sum_{z_0\in A_{x_0,1}}P(x_0,z_0) P^{n-1}f(z_0) \right| \\ &=\bigg|\sum_{z\in A_{x,1}}P(x,z) P^{n-1}f(z) \pm \sum_{z_0\in A_{x_0,1}}P(x_0,z_0)P^{n-1}f(z_0+\mathrm q \delta)\\&~~~~ - \sum_{z_0\in A_{x_0,1}}P(x_0,z_0) P^{n-1}f(z_0) \bigg| \\ &= \bigg|\sum_{z\in A_{x,1}} \left[P(x,z) - P(x-\delta,z-\mathrm q \delta)\right] P^{n-1}f(z)\\&~~~~+ \sum_{z_0\in A_{x_0,1}}P(x_0,z_0)\left[ P^{n-1}f(z_0+\mathrm q \delta) - P^{n-1}f(z_0)\right] \bigg| \\ &\leq \sum_{z\in A_{x,1}} \left|P(x,z) - P(x-\delta,z-\mathrm q \delta)\right|~||f||_{\infty}\\&~~~~+ \sum_{z_0\in A_{x_0,1}}P(x_0,z_0)\left| P^{n-1}f(z_0+\mathrm q \delta) - P^{n-1}f(z_0)\right| \\ &\leq 3 l\delta ||f||_{\infty}+ \sum_{z_0\in A_{x_0,1}}P(x_0,z_0)\left| P^{n-1}f(z_0+\mathrm q \delta) - P^{n-1}f(z_0)\right|. \end{split} \end{equation} If $n=2$, \begin{equation*} \left|P^2f(x)-P^2 f(x_0) \right|\leq 3 l \delta ||f||_{\infty}+ \sum_{z_0\in A_{x_0,1}}P(x_0,z_0)L_1\mathrm q\delta\leq \left(3 l ||f||_{\infty}+L_1\mathrm q\right)\delta \end{equation*} that is, $P^2f\in L_2\text{-Lip}(\ms)$ where $L_2=3 l ||f||_{\infty}+L_1\mathrm q$. At this point, by iterating \eqref{Pngeneral}, we obtain that for any $n\in\mathds{N}$, $P^nf\in L_n\text{-Lip}(\ms)$ where $L_n=3 l ||f||_{\infty}+L_{n-1}\mathrm q$. Moreover, by recursion, \begin{equation*} L_n=3 l ||f||_{\infty}(1+\mathrm q+\dots+\mathrm q^{n-1})+\mathrm q^n l_f \leq \mathrm{M}_f\frac{1-\mathrm q^{n}}{1-\mathrm q}+\mathrm q^n \mathrm{M}_f=\mathrm{M}_f\frac{1-\mathrm q^{n+1}}{1-\mathrm q}\leq \frac{\mathrm{M}_f}{1-\mathrm q},~~~~~\mathrm{M}_f:=\max\{3 l ||f||_{\infty}, l_f \}. \end{equation*} \qed Let us recall that a sequence of measures $\{\mu_n\}_{n\in\mathds{N}}$ is said to be \emph{weakly convergent} to a measure $\mu$ if $\lim_{n\to\infty}\int f(x)\mathrm{d} \mu_n=\int f(x)\mathrm{d}\mu$ for every continuous and bounded function $f$. In what follows, we will denote weak convergence by $\mu_n\xrightarrow{~w~}\mu$.
\begin{lemma}\label{exist_subs} Let $\overline{P}_N(x,\cdot)=\frac{1}{N}\sum_{n=0}^{N-1}P^n(x,\cdot)$, $N\in\mathds{N}$. For any $x\in\ms$, there exist a subsequence $\overline{P}_{N_j}(x,\cdot)$, $j, N_j\in\mathds{N}$, and a probability measure $\phi(x,\cdot)$ such that $\overline{P}_{N_j}(x,\cdot)\xrightarrow{~w~} \phi(x,\cdot)$. \end{lemma} \proof This is a simple consequence of Prohorov's Theorem (see, e.g., \cite[Theorem 6.1]{bill68}; in our context tightness is trivial since the space $\ms$ is compact). \qed \begin{lemma}\label{all_subs} If all the convergent subsequences of $\overline{P}_{N}(x,\cdot)$ weakly converge to the same $\phi(x,\cdot)$, then also $\overline{P}_{N}(x,\cdot)$ weakly converges to $\phi(x,\cdot)$. \end{lemma} \proof Again this is a consequence of Prohorov's Theorem (see, e.g.,\cite[Theorem 2.3]{bill68} ) \qed Given Lemmas \ref{exist_subs} and \ref{all_subs}, to prove Theorem \ref{NonContrTheor} it is sufficient to show that all the convergent subsequences of $\overline{P}_{N}(x,\cdot)$ converge to $\phi(x,\cdot)$. Let us suppose that there exist a subsequence $\{M_i\}_{i\in \mathds{N}} \neq \{N_j\}_{j\in\mathds{N}}$ and a probability measure $\psi(x,\cdot)\neq \phi(x,\cdot)$ on $\ms$ such that $\overline{P}_{M_i}(x,\cdot )\xrightarrow{~w~} \psi(x,\cdot)$. First notice that for any $m\in\mathds{N}$, by applying the Dominated Convergence Theorem, \begin{equation*} \begin{split} P^m \phi f(x)&=\int_{y\in\ms}P^m(x,\mathrm{d}y)\phi f(y)\\&=\int_{y\in\ms}P^m(x,\mathrm{d}y)\lim_{j\to +\infty}\int_{z\in\ms}\frac{1}{N_j}\sum_{n=0}^{N_j-1}P^n(y,\mathrm{d}z)f(z)\\ &=\lim_{j\to +\infty}\frac{1}{N_j}\sum_{n=0}^{N_j-1}P^{n+m}f(x)\\&=\lim_{j\to +\infty}\frac{1}{N_j}\left(\sum_{n=0}^{N_j-1}P^{n}f(x)+\sum_{n=N_j}^{N_j-1+m}P^{n}f(x)-\sum_{n=0}^{m-1}P^{n}f(x)\right)\\&=\phi f(x).\\ \end{split} \end{equation*} Similarly, exploiting the continuity of $P^mf$, we obtain $$\phi P^m f(x)=\phi f (x).$$ The same can be clearly said for $\psi$. 
Now, for any $M_i\in\mathds{N}$ $$\overline{P}_{M_i}\phi f(x)=\phi f(x)$$ and $$\lim_{i\to\infty}\overline{P}_{M_i}\phi f(x)=\phi f(x).$$ If $f$ is $l_f$-Lip$(\ms)$, then $\overline{P}_{N_j}f(x)$ are equicontinuous by Lemma \ref{equilip}, and clearly also equibounded by $||f||_{\infty}$. Therefore, by the Arzelà--Ascoli Theorem, $\phi f(x)$ is continuous and $\lim_{i\to\infty}\overline{P}_{M_i}\phi f(x)=\psi \phi f(x)$. In conclusion, $$\phi f (x)=\psi \phi f (x).$$ Analogously, one can prove that $\psi f(x)=\phi \psi f(x)$ and, since by Dominated Convergence $\psi \phi f(x)=\phi \psi f(x), $ we obtain $\phi f(x)=\psi f(x)$. To summarize, we have proved that, for any $x\in \ms$, there exists a unique probability measure $\phi(x,\cdot)$ such that $$\lim_{N\to\infty}\overline{P}_N f(x)=\phi f(x)$$ for any $f\in l_f$-Lip($\ms$). The statement of Theorem \ref{NonContrTheor} follows by taking $f=g$.\qed The arguments used in this proof partially follow the proof of \cite[Theorem 12.4.1]{mey93}. \subsection{Simulations vs Theoretical Results} The convergence values $\int_{\mathcal{D}}g \mathrm d\mu$ and $\phi g(\alpha)$ can be numerically evaluated. \begin{figure} \begin{center} \includegraphics[width=5.7cm]{out2-crop.pdf} \includegraphics[width=5.7cm]{out1-crop.pdf} \includegraphics[width=5.7cm]{out05-crop.pdf} \includegraphics[width=5.7cm]{out03-crop.pdf} \caption{Simulations vs Theoretical Results: MSE for $b=c=1$, $a=-2,-1,-0.5,-0.3$.}\label{F2} \end{center} \end{figure} In Figure \ref{F2}, we show their consistency with the simulations previously presented. Notice that in the simulations the initial state $x_0$ is assumed to be known, so that $D_0=0$. Since the analytical results are asymptotic, while the simulation results are obtained by averaging over transmissions of 320 bits, we conclude that the rate of convergence is fast.
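To complement the figures, the initial-state insensitivity in the contractive regime can also be checked with a direct Monte Carlo sketch. The dynamics below, $D_{k+1}=\mathrm q D_k+\mathrm w\epsilon_k$ with $\epsilon_k$ i.i.d. uniform on $\{-1,0,1\}$, is a simplified stand-in for the actual state-dependent kernel $P$, so all parameters here are illustrative assumptions rather than the system of the paper.

```python
import random

def cesaro_average(g, x0, q=0.5, w=1.0, K=200_000, seed=0):
    """Long-run Cesàro average (1/K) * sum_k g(D_k) along one trajectory
    of the toy contraction D_{k+1} = q*D_k + w*eps_k, with eps_k drawn
    i.i.d. uniform on {-1, 0, 1} (illustrative stand-in for the kernel P)."""
    rng = random.Random(seed)
    d, total = x0, 0.0
    for _ in range(K):
        total += g(d)
        d = q * d + w * rng.choice((-1.0, 0.0, 1.0))
    return total / K

g = lambda z: abs(z)                   # a Lipschitz test function
a = cesaro_average(g, x0=0.0)          # start at D_0 = 0
b = cesaro_average(g, x0=5.0, seed=1)  # start outside the compact set
print(abs(a - b))                      # small: the limit forgets x_0
```

Since $|D_k|\leq \mathrm q^k|x_0|+\mathrm w/(1-\mathrm q)$, the second trajectory enters the bounded state space after a few steps, in line with the discussion following the contractive-case proof.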
\section{Conclusion}\label{s_concl} In this paper, we have proposed using the One State decoding algorithm to recover the binary input of a linear system, and we have analyzed its behavior. When the system has particular contractive properties, the analysis is based on Iterated Random Functions, while in the non-contractive case known results on the convergence of probability measures can be exploited. The theoretical results make it possible to predict the Mean Square Error of the One State Algorithm for long-time transmissions, given the parameters of the system and some prior probabilistic information. Simulations and theoretical results are consistent. The One State Algorithm could be extended to multi-dimensional problems and to the recovery of digital inputs arising from larger source alphabets and with different probability distributions. Moreover, its use for problems with feedback, such as channel equalization, should be further studied. \section{Acknowledgements} The author wishes to thank Prof. Fabio Fagnani, who suggested the problem and the possible solutions. \bibliographystyle{model1-num-names}
\section{Introduction} It is a well-known consequence of the amalgamated product structure of $\Aut({\mathbb A}^{2})$ that every reductive subgroup $G \subset \Aut({\mathbb A}^{2})$ is conjugate to a subgroup of $\GL_{2}({\mathbb C}) \subset \Aut({\mathbb A}^{2})$, i.e. there is a $\psi \in \Aut({\mathbb A}^{2})$ such that $\psi G \psi^{-1} \subset \GL_2({\mathbb C})$ (\cite{Ka1979Automorphism-group}, cf. \cite{Kr1996Challenging-proble}). The ``Linearization Problem'' asks whether the same holds in higher dimension. It was shown by Schwarz in \cite{Sch89} that this is not the case in dimensions $n \ge 4$. In this paper we consider the analogue of the Linearization Problem for Lie algebras. It is known that the Lie algebra $\Lie(\Aut({\mathbb A}^{2}))$ of the ind-group $\Aut({\mathbb A}^{2})$ is isomorphic to the Lie algebra $\VF^c({\mathbb A}^{2})$ of vector fields of constant divergence (\cite{Sh81}, cf. \cite{Kum02}). We will see that the Lie subalgebra $$ K(x^2 \frac{\partial }{\partial x} - 2xy \frac{\partial }{\partial y}) \oplus K (x\frac{\partial }{\partial x} - y \frac{\partial }{\partial y}) \oplus K\frac{\partial }{\partial x} \subset \VF^{c}({\mathbb A}^{2}) $$ is isomorphic to $\sltwo$, but not conjugate to the standard $\sltwo \subset \VF({\mathbb A}^{2})$ under $\Aut({\mathbb A}^{2})$ (Remark~\ref{example.rem}). On the other hand, for other subalgebras of $\VF^c({\mathbb A}^{2})$ the situation is different. Let $\Aff_2(K)\subset\Aut({\mathbb A}^{2})$ be the group of affine transformations and $\SAff_2(K)$ the subgroup of affine transformations with determinant equal to 1, and denote by $\aff_{2}$, resp. $\saff_{2}$, their Lie algebras. The first result we prove is the following (see Proposition~\ref{subVec2.prop}). For $f \in K[x,y]$ we set $D_{f}:=-f_{y}\frac{\partial}{\partial x} + f_{x}\frac{\partial}{\partial y} \in\VF({\mathbb A}^{2})$. \begin{prop*} Let $L\subset \VF^{c}({\mathbb A}^{2})$ be a Lie subalgebra isomorphic to $\aff_2$.
Then there is an \'etale map $\alpha=(f,g)$ such that $L=\alpha(\aff_2)$. More precisely, if $(D_{f},D_{g})$ is a basis of the radical of $[L,L]$, then $$ L = \langle D_{f},D_{g},D_{f^{2}}, D_{g^{2}}, fD_g,gD_f \rangle, $$ and one can take $\alpha=(f,g)$. \end{prop*} The analogous statements hold for Lie subalgebras isomorphic to $\saff_2$. As a consequence of this classification we obtain the next result (see Theorem~\ref{JC.thm} and Corollary~\ref{corollary}). Recall that a Lie subalgebra of $\VF({\mathbb A}^{2})$ is {\it algebraic\/} if it acts locally finitely on $\VF({\mathbb A}^{2})$. \begin{prop*} The following statements are equivalent: \begin{enumerate} \item[(i)] The Jacobian Conjecture holds in dimension 2; \item[(ii)] All Lie subalgebras $L \subset \VF^{c}({\mathbb A}^{2})$ isomorphic to $\aff_{2}$ are conjugate under $\Aut({\mathbb A}^{2})$; \item[(iii)] All Lie subalgebras $L \subset \VF^{c}({\mathbb A}^{2})$ isomorphic to $\saff_2$ are conjugate under $\Aut({\mathbb A}^{2})$; \item[(iv)] All Lie subalgebras $L \subset \VF^{c}({\mathbb A}^{2})$ isomorphic to $\aff_2$ are algebraic; \item[(v)] All Lie subalgebras $L \subset \VF^{c}({\mathbb A}^{2})$ isomorphic to $\saff_2$ are algebraic. \end{enumerate} \end{prop*} {\small\noindent \textbf{Acknowledgement}: The author would like to thank his thesis advisor Hanspeter Kraft for constant support and help during the writing of this paper.} \medskip \section{The Poisson algebra} \subsection*{Definitions} Let $K$ be an algebraically closed field of characteristic zero and let $P$ be the {\it Poisson algebra}, i.e. the Lie algebra with underlying vector space $K[x,y]$ and with Lie bracket $\{f,g\}:=f_{x}g_{y} - f_{y}g_{x}$ for $f,g\in P$. If $\Jac(f,g)$ denotes the {\it Jacobian matrix\/} and $j(f,g)$ the {\it Jacobian determinant}, $$ \Jac(f,g): = \begin{bmatrix} f_x & f_y \\ g_x &g_y \end{bmatrix},\quad j(f,g) := \det \Jac(f,g), $$ then $\{f,g\} = j(f,g)$.
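As a standalone sanity check (not part of the paper), the Poisson bracket and its agreement with the Jacobian determinant can be verified on sparse polynomial dictionaries; the representation and helper names below are illustrative choices.

```python
def diff(p, var):
    """Partial derivative of a polynomial {(i, j): coeff} in K[x, y];
    var = 0 differentiates in x, var = 1 in y."""
    out = {}
    for (i, j), c in p.items():
        e = (i, j)[var]
        if e:
            m = (i - 1, j) if var == 0 else (i, j - 1)
            out[m] = out.get(m, 0) + c * e
    return out

def mul(p, q):
    """Product of two sparse polynomials."""
    out = {}
    for (i, j), c in p.items():
        for (k, l), d in q.items():
            m = (i + k, j + l)
            out[m] = out.get(m, 0) + c * d
    return out

def sub(p, q):
    """Difference p - q, dropping zero coefficients."""
    out = dict(p)
    for m, c in q.items():
        out[m] = out.get(m, 0) - c
    return {m: c for m, c in out.items() if c}

def pbracket(f, g):
    """Poisson bracket {f, g} = f_x g_y - f_y g_x, which equals the
    Jacobian determinant j(f, g)."""
    return sub(mul(diff(f, 0), diff(g, 1)), mul(diff(f, 1), diff(g, 0)))

x3 = {(3, 0): 1}   # x^3
y2 = {(0, 2): 1}   # y^2
print(pbracket(x3, y2))   # {(2, 1): 6}, i.e. {x^3, y^2} = 6 x^2 y
```

The same helpers reproduce the bracket relations used later, e.g. $\{x,y^2\}=2y$ and $\{x^2,y^2\}=4xy$.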
Denote by $\VF({\mathbb A}^{2})$ the polynomial vector fields on affine 2-space ${\mathbb A}^{2} = K^{2}$, i.e. the derivations of $K[x,y]$: $$ \VF({\mathbb A}^{2}) := \{ p\partial_{x} + q\partial_{y} \mid p,q \in K[x,y]\} = \Der(K[x,y]). $$ There is a canonical homomorphism of Lie algebras $$ \mu\colon P \to \VF({\mathbb A}^{2}), \ h\mapsto D_{h}:=h_{x}\partial_{y} - h_{y}\partial_{x}, $$ with kernel $\ker\mu = K$. The next lemma collects some properties of the Lie algebra $P$. These results are essentially known, see e.g. \cite{NoNa1988Rings-of-constants}. If $L$ is any Lie algebra and $X \subset L$ a subset, we define the {\it centralizer\/} of $X$ by $$ \cent_{L}(X) := \{z \in L \mid [z,x]=0 \text{ for all } x\in X\}, $$ and we shortly write $\cent(L)=\cent_{L}(L)$ for the {\it center\/} of $L$. \begin{lem}\lab{lem1} \begin{enumerate} \item The center of $P$ are the constants $K$. \item Let $f,g\in P$ such that $\{f,g\}= 0$. Then $f,g\in K[h]$ for some $h\in K[x,y]$. \item If $f,g\in P$ such that $\{f,g\}\neq 0$, then $f,g$ are algebraically independent in $K[x,y]$ and $\cent_{P}(f) \cap \cent_{P}(g) = K$. \item $P$ is generated, as a Lie algebra, by $\{x, x^{3},y^{2}\}$. \end{enumerate} \end{lem} \begin{proof} (a) is easy and left to the reader. (b) Consider the morphism $\alpha=(f,g)\colon {\mathbb A}^{2} \to {\mathbb A}^{2}$. Then $C:=\overline{\alpha({\mathbb A}^{2})} \subset {\mathbb A}^{2}$ is an irreducible rational curve, and we have a factorization $$ \begin{CD} \alpha\colon {\mathbb A}^{2} @>h>> {\mathbb A}^{1} @>\eta>> C \subset {\mathbb A}^{2} \end{CD} $$ where $\eta$ is the normalization of $C$. It follows that $f,g \in K[h]$. (c) It is clear that $f,g$ are algebraically independent, i.e. $\tdeg_{K}K(f,g) = 2$. Equivalently, $K(x,y) / K(f,g)$ is a finite algebraic extension. Now assume that $\{h,f\}=\{h,g\}=0$. Then the derivation $D_{h}$ vanishes on $K[f,g]$, hence on $K[x,y]$. Thus $D_{h}=0$ and so $h\in K$. 
(d) Denote by $P_{d}:=K[x,y]_{d}$ the homogeneous part of degree $d$. Let $L \subset P$ be the Lie subalgebra generated by $\{x, x^{3},y^{2}\}$. We first use the equations $$ \{x,y\} = 1, \ \{x,y^{2}\}= 2y, \ \{x^{3},y\} = 3x^{2}, \ \{x^{2},y^{2}\}=4xy, \ \{x^{3},y^{2}\}=6x^{2}y $$ to show that $K\oplus P_{1}\oplus P_{2} \subset L$ and that $x^{2}y\in L$. Now the claim follows by induction from the relations $$ \{x^{n},x^{2}y\}=nx^{n+1} \text{ \ and \ } \{x^{r}y^{s},y^{2}\}= 2rx^{r-1}y^{s+1}. $$ \end{proof} \subsection*{Divergence} The next lemma should also be known. Recall that the {\it divergence\/} $\Div D$ of a vector field $D = p \partial_{x} + q \partial_{y}\in\VF({\mathbb A}^{2})$ is defined by $\Div D := p_{x}+q_{y} \in K[x,y]$. \begin{lem}\lab{derivation.lem} Let $D$ be a non-trivial derivation of $K[x,y]$. \begin{enumerate} \item The kernel $K[x,y]^{D}$ is either $K$ or $K[f]$ for some $f\in K[x,y]$. \item If $\Div D =0$, then $D=D_{h}$ for some $h\in K[x,y]$. \end{enumerate} Now assume that $D = D_{f}$ for some non-constant $f\in K[x,y]$ and that $D(g)=1$ for some $g\in K[x,y]$. \begin{enumerate}\setcounter{enumi}{2} \item Then $K[x,y]^{D}= K[f]$. \item If $D$ is locally nilpotent, then $K[x,y]=K[f,g]$. \end{enumerate} \end{lem} \begin{proof} (a) See \cite{NoNa1988Rings-of-constants} Theorem 2.8. (b) Write $D = p \frac{\partial }{\partial x} + q \frac{\partial }{\partial y}$; then $\Div D = p_x + q_y=0$ implies that there exists $h \in K[x,y]$ such that $p = -h_y$ and $q = h_x$, i.e. $D = D_{h}$. (c) It is obvious that $\ker(D) \supset K[f]$, hence by (a) one has $\ker(D) = K[h] \supset K[f]$. Thus $f = F(h)$ for some $F \in K[t]$ and then $D_f(g) = D_{F(h)}(g) = F'(h) D_h(g) = 1$, which implies that $F$ is linear, and hence $K[h] = K[f]$. (d) Let $G$ be an affine algebraic group, $X$ an affine variety and $\phi\colon X \to G$ a $G$-equivariant retraction. Then it is a general fact that $O(X) = \phi^*(O(G)) \otimes O(X)^G$.
In our case $O({\mathbb A}^{2}) =O(G) \otimes O({\mathbb A}^{2})^G = K[g] \otimes K[f]$. \end{proof} It follows from Lemma~\ref{derivation.lem}(b) above that the image of $\mu\colon P \to \VF({\mathbb A}^{2})$, $h\mapsto D_{h}$, is $\mu(P) = \VF^{0}({\mathbb A}^{2}):=\{D\in\VF({\mathbb A}^{2})\mid \Div D = 0\}$. We will also discuss the Lie subalgebra $\VF^{c}({\mathbb A}^{2}):=\{D\in\VF({\mathbb A}^{2})\mid \Div D \in K\}$. \subsection*{Automorphisms of the Poisson algebra} Denote by $\Aut_{\text{\it LA}}(P)$ the group of Lie algebra automorphisms of $P$. There is a canonical homomorphism $$ p\colon \Aut_{\text{\it LA}}(P) \to K^{*}, \quad \phi \mapsto \phi(1), $$ which has a section $s \colon K^{*}\to \Aut_{\text{\it LA}}(P)$ given by $s(t)|_{K[x,y]_{n}}:= t^{1-n} \id_{K[x,y]_{n}}$. Thus $\Aut_{\text{\it LA}}(P)$ is a semidirect product $\Aut_{\text{\it LA}}(P) = \SAut_{\text{\it LA}}(P) \rtimes K^{*}$ where $$ \SAut_{\text{\it LA}}(P) :=\ker p = \{\phi\mid \phi(1)=1\}. $$ \begin{lem}\lab{lem2} Every automorphism $\phi\in\Aut_{\text{\it LA}}(P)$ is determined by $\phi(1)$, $\phi(x)$ and $\phi(y)$, and $K[x,y] = K[\phi(x),\phi(y)]$. \end{lem} \begin{proof} Replacing $\phi$ by the composition $\phi\circ s(\phi(1)^{-1})$ we can assume that $\phi(1)=1$. We will show that $\phi(x^{n}) = \phi(x)^{n}$ and $\phi(y^{n}) = \phi(y)^{n}$ for all $n\geq 0$. Then the first claim follows from Lemma~\ref{lem1}(d). By induction, we can assume that $\phi(x^{j}) = \phi(x)^{j}$ for $j <n$. We have $\{x^{n},y\}=nx^{n-1}$ and so $\{\phi(x^{n}),\phi(y)\}=n\phi(x^{n-1}) = n\phi(x)^{n-1}$. On the other hand, we get $\{\phi(x)^{n},\phi(y)\} = n\phi(x)^{n-1}\{\phi(x),\phi(y)\} = n\phi(x)^{n-1}$, hence the difference $h:=\phi(x^{n})-\phi(x)^{n}$ belongs to the kernel of the derivation $D_{\phi(y)}\colon f \mapsto \{f,\phi(y)\}$. 
Since $D_{\phi(y)}$ is locally nilpotent, we get from Lemma~\ref{derivation.lem}(c)--(d) that $\ker D_{\phi(y)} = K[\phi(y)]$ and that $K[\phi(x),\phi(y)] = K[x,y]$. This already proves the second claim and shows that $h$ is a polynomial in $\phi(y)$. Since $\{\phi(x^{n}),\phi(x)\} = \phi(\{x^{n},x\})=0$ and $\{\phi(x)^{n},\phi(x)\}=n\phi(x)^{n-1}\{\phi(x),\phi(x)\}$ we get $\{h,\phi(x)\}=0$ which implies that $h\in K$. In the same way, using $\{x,xy\} = x$ and $\{y,xy\}=-y$, we find $\phi(xy)-\phi(x)\phi(y) \in K$. Hence $$ n\phi(x^{n}) = \{\phi(x^{n}),\phi(xy)\}=\{\phi(x)^{n},\phi(x)\phi(y)\}=n\phi(x)^{n}, $$ and so $\phi(x^{n}) = \phi(x)^{n}$. By symmetry, we also get $\phi(y^{n}) = \phi(y)^{n}$. \end{proof} \subsection*{Automorphisms of affine 2-space} Denote by $\Aut(K[x,y])$ the group of $K$-algebra automorphisms of $K[x,y]$. For $\alpha\in\Aut(K[x,y])$ we will use the notation $\alpha=(f,g)$ in case $\alpha(x)=f$ and $\alpha(y)=g$, which implies that $K[x,y]=K[f,g]$. There is a homomorphism $$ j\colon \Aut(K[x,y]) \to K^{*}, \quad \alpha\mapsto j(\alpha):= j(\alpha(x),\alpha(y)) = \det \Jac(\alpha(x),\alpha(y)) $$ which has a section $\sigma\colon t\mapsto (tx,ty)$. Hence, $\Aut(K[x,y])$ is a semidirect product $\Aut(K[x,y]) = \SAut(K[x,y]) \rtimes K^{*}$ where $$ \SAut(K[x,y]) :=\ker j = \{\alpha=(f,g)\mid j(f,g)=1\}. $$ We can consider $\Aut(K[x,y])$ and $\Aut_{\text{\it LA}}(P)$ as subgroups of the $K$-linear automorphisms $\GL(K[x,y])$. \begin{lem}\lab{aut.lem} As subgroups of $\GL(K[x,y])$ we have $\SAut_{\text{\it LA}}(P) = \SAut(K[x,y])$. \end{lem} \begin{proof} (a) Let $\alpha$ be an endomorphism of $K[x,y]$ and put $\Jac(\alpha):=\Jac(\alpha(x),\alpha(y))$. 
For any $f,g\in K[x,y]$ we have $\Jac(\alpha(f),\alpha(g)) = \alpha(\Jac(f,g)) \Jac(\alpha)$, because \begin{multline*} \frac{\partial}{\partial x}\alpha(f) = \frac{\partial f}{\partial x}(\alpha (x),\alpha(y)) \frac{\partial\alpha(x)}{\partial x} +\frac{\partial f}{\partial y}(\alpha (x),\alpha(y))\frac{\partial\alpha(y)}{\partial x} \\ = \alpha(\frac{\partial f}{\partial x})\frac{\partial\alpha(x)}{\partial x} + \alpha(\frac{\partial f}{\partial y})\frac{\partial\alpha(y)}{\partial x}. \end{multline*} It follows that $\{\alpha(f),\alpha(g)\} = \alpha(\{f,g\}) j(\alpha)$. This shows that $\SAut(K[x,y]) \subset \SAut_{\text{\it LA}}(P)$. (b) Now let $\phi\in\SAut_{\text{\it LA}}(P)$. Then $j(\phi(x),\phi(y)) = \{\phi(x),\phi(y)\} = \phi(1)=1$ and, by Lemma~\ref{lem2}, $K[\phi(x),\phi(y)] = K[x,y]$. Hence, we can define an automorphism $\alpha\in\SAut(K[x,y])$ by $\alpha(x):=\phi(x)$ and $\alpha(y) := \phi(y)$. From (a) we see that $\alpha\in\SAut_{\text{\it LA}}(P)$, and from Lemma~\ref{lem2} we get $\phi=\alpha$, hence $\phi \in\SAut(K[x,y])$. \end{proof} \begin{rem}\lab{homP.rem} The first part of the proof above shows the following. If $f,g\in P$ are such that $\{f,g\} = 1$, then the $K$-algebra homomorphism defined by $x\mapsto f$ and $y \mapsto g$ is an injective homomorphism of $P$ as a Lie algebra. (Injectivity follows, because $f,g$ are algebraically independent.) \end{rem} \subsection*{Lie subalgebras of $P$} The subspace $$ P_{\leq 2} := K\oplus P_{1}\oplus P_{2} = K \oplus Kx \oplus Ky \oplus Kx^{2} \oplus Kxy \oplus Ky^{2} \subset P $$ is a Lie subalgebra. This can be deduced from the following Lie brackets which we note here for later use. \begin{gather} \{x^{2},xy\} = 2x^{2}, \ \{x^{2},y^{2}\}=4xy, \ \{y^{2},xy\}= -2y^{2};\\ \{x^{2},x\}=0, \ \{xy,x\}=-x, \ \{y^{2},x\}=-2y, \\ \{x^{2},y\}=2x, \ \{xy,y\}=y, \ \{y^{2},y\}=0;\\ \{x,y\}=1. 
\end{gather} For example, $P_{2}=Kx^{2}\oplus Kxy \oplus Ky^{2}$ is a Lie subalgebra of ${P_{\leq 2}}$ isomorphic to $\sltwo$, and $P_{1}=Kx \oplus Ky$ is the two-dimensional simple $P_{2}$-module. From Remark~\ref{homP.rem} we get the following lemma. \begin{lem} Let $f,g\in K[x,y]$ such that $\{f,g\}=1$. Then $\langle 1,f,g,f^{2},fg,g^{2}\rangle \subset P$ is a Lie subalgebra isomorphic to $P_{\leq 2}$. An isomorphism is induced from the $K$-algebra homomorphism $P \to P$ defined by $x\mapsto f, y\mapsto g$. \end{lem} \begin{defn} For $f,g \in K[x,y]$ such that $\{f,g\} \in K^{*}$ we put $$ P_{f,g}:= \langle 1,f,g,f^{2},fg,g^{2}\rangle \subset P. $$ We have just seen that this is a Lie algebra isomorphic to ${P_{\leq 2}}$. Clearly, $P_{f,g}=P_{f_{1},g_{1}}$ if $\langle 1,f,g\rangle = \langle 1,f_{1},g_{1}\rangle$. Denoting by $\rad L$ the solvable radical of the Lie algebra $L$ we get $$ \rad P_{f,g} = \langle 1, f, g\rangle \text{ \ and \ } P_{f,g}/\rad P_{f,g} \simeq \sltwo. $$ \end{defn} \begin{prop}\lab{subP.prop} Let $Q \subset P$ be a Lie subalgebra isomorphic to ${P_{\leq 2}}$. Then $K \subset Q$, and $Q = P_{f,g}$ for every pair $f,g\in Q$ such that $\langle 1, f, g\rangle=\rad Q$. In particular, $\{f,g\}\in K^{*}$. \end{prop} \begin{proof} We first show that $\cent(Q) =K$. In fact, $Q$ contains elements $f,g$ such that $\{f,g\}\neq 0$. If $h\in\cent(Q)$, then $h \in \cent_{P}(f)\cap \cent_{P}(g) = K$, by Lemma~\ref{lem1}(c). Now choose an isomorphism $\phi\colon {P_{\leq 2}} \xrightarrow{\sim} Q$. Then $\phi(K)=K$, and replacing $\phi$ by $\phi\circ s(t)$ with a suitable $t\in K^{*}$ we can assume that $\phi(1)=1$. Setting $f:=\phi(x), g:=\phi(y)$ we get $\{f,g\} = 1$, and putting $f_{0}:=\phi(x^{2}), f_{1}:=\phi(xy), f_{2}:=\phi(y^{2})$ we find $$ \{f_{1},f\} = \phi \{xy,x\}=\phi(-x)=-f = \{fg,f\}. $$ Similarly, $\{f_{1},g\}=\{fg,g\}$, hence $fg = f_{1}+ c \in Q$, by Lemma~\ref{lem1}(c).
Next we have $$ \{f_{0},f\} =0 \text{ \ and \ } \{f_{0},g\}=\phi(\{x^{2},y\})=\phi(2x)=2f = \{f^{2},g\}. $$ Hence $f^{2}=f_{0}+d$, and thus $f^{2}\in Q$. A similar calculation shows that $g^{2}\in Q$, so that we finally get $Q = P_{f,g}$. \end{proof} \subsection*{Characterization of ${P_{\leq 2}}$} The following lemma gives a characterization of the Lie algebras isomorphic to ${P_{\leq 2}}$. \begin{lem}\lab{charP.lem} Let $Q$ be a Lie algebra containing a subalgebra $Q_{0}$ isomorphic to $\sltwo$. Assume that \begin{enumerate} \item $Q = Q_{0}\oplus V_{2}\oplus V_{1}$ as a $Q_{0}$-module where $V_{i}$ is simple of dimension $i$, \item $V_{1}$ is the center of $Q$, and \item $[V_{2},V_{2}] = V_{1}$. \end{enumerate} Then $Q$ is isomorphic to ${P_{\leq 2}}$. \end{lem} \begin{proof} Choosing an isomorphism of $P_{2}=\langle x^{2},xy,y^{2}\rangle$ with $Q_{0}$ we find a basis $(a_{0},a_{1},a_{2})$ of $Q_{0}$ with relations \[\tag{$1'$} [a_{0},a_{1}]= 2a_{0}, \ [a_{0},a_{2}]= 4a_{1}, \ [a_{2},a_{1}]= -2a_{2} \] (see (1) above). Since $V_{2}$ is a simple two-dimensional $Q_{0}$-module and $Kx\oplus Ky$ a simple two-dimensional $P_{2}$-module we can find a basis $(b,c)$ of $V_{2}$ such that \begin{gather*}\tag{$2'$} [a_{0},b]=0, \ [a_{1},b]=-b, \ [a_{2},b] = -2c, \\ \tag{$3'$} [a_{0},c]=2b, \ [a_{1},c]=c, \ [a_{2},c] = 0 \end{gather*} (see (2) and (3) above). Finally, the last assumption (c) implies that \[\tag{$4'$} d:=[b,c]\neq 0, \text{ hence } V_{1}=Kd. \] Comparing the relations (1)--(4) with ($1'$)--($4'$) we see that the linear map ${P_{\leq 2}} \to Q$ given by $x^{2}\mapsto a_{0}$, $xy\mapsto a_{1}$, $y^{2}\mapsto a_{2}$, $x\mapsto b$, $y\mapsto c$, $1 \mapsto d$ is a Lie algebra isomorphism.
\end{proof} \medskip \section{Vector fields on affine 2-space} \subsection*{The action of $\Aut(K[x,y])$ on vector fields} The group $\Aut(K[x,y])$ acts on the vector fields $\VF({\mathbb A}^{2})$ by conjugation: If $D$ is a derivation and $\alpha\in\Aut(K[x,y])$, then $\alpha(D) := \alpha\circ D \circ \alpha^{-1}$. Writing $D = p\partial_{x}+q\partial_{y}$ and $\alpha=(f,g)$, then \[\tag{$*$} \alpha(D) = \frac{1}{j(\alpha)}\left(\left(g_{y}\alpha(p)-f_{y}\alpha(q)\right)\partial_{x} + \left(-g_{x}\alpha(p)+f_{x}\alpha(q)\right)\partial_{y}\right) \] In fact, $\alpha(\Jac(\alpha^{-1})) \cdot \Jac(\alpha) = E$, hence $\alpha(\Jac(\alpha^{-1})) = \Jac(\alpha)^{-1}= \frac{1}{j(\alpha)} \begin{bmatrix} g_{y} & -f_{y}\\ -g_{x} & f_{x} \end{bmatrix}$. Thus we get for $h\in K[x,y]$ \begin{multline*} \alpha(D)(h) = \alpha(D(\alpha^{-1}(h))) = (h_{x},h_{y})\cdot \alpha(\Jac(\alpha^{-1})) \cdot\begin{bmatrix} \alpha(p) \\ \alpha(q) \end{bmatrix} =\\= (h_{x},h_{y})\cdot \frac{1}{j(\alpha)} \begin{bmatrix} g_{y} & -f_{y}\\ -g_{x} & f_{x} \end{bmatrix} \cdot \begin{bmatrix} \alpha(p) \\ \alpha(q) \end{bmatrix} \end{multline*} In particular, for $h=x$ or $h=y$, we find $$ \alpha(D)(x) = \frac{1}{j(\alpha)}(g_{y}\alpha(p) - f_{y}\alpha(q))\text{ \ and \ } \alpha(D)(y) = \frac{1}{j(\alpha)}(-g_{x}\alpha(p) + f_{x}\alpha(q)), $$ and the claim follows. \begin{rem}\lab{etale.rem} If $\alpha\colon K[x,y] \to K[x,y]$ is \'etale, i.e. $j(\alpha)\in K^{*}$, then formula $(*)$ still makes sense and defines a map $$ \alpha \colon \VF({\mathbb A}^{2}) \to \VF({\mathbb A}^{2}), \ D \mapsto \alpha(D), $$ which is an injective homomorphism of Lie algebras. 
In fact, we have by definition $\alpha(D)\circ\alpha = \alpha\circ D$, and so \begin{multline*} \alpha([D_{1},D_{2}])\circ \alpha = \alpha\circ D_{1}\circ D_{2}-\alpha\circ D_{2}\circ D_{1}= \alpha(D_{1})\circ \alpha\circ D_{2} - \alpha(D_{2})\circ \alpha\circ D_{1}=\\ =\alpha(D_{1})\circ\alpha(D_{2})\circ \alpha - \alpha(D_{2})\circ\alpha(D_{1})\circ \alpha = [\alpha(D_{1}),\alpha(D_{2})]\circ \alpha. \end{multline*} \end{rem} Recall that $\VF^{c}({\mathbb A}^{2}) \subset \VF({\mathbb A}^{2})$ are the vector fields $D$ with $\Div D \in K$. Clearly, the divergence $\Div\colon \VF^{c}({\mathbb A}^{2}) \to K$ is a character with kernel $\VF^{0}({\mathbb A}^{2})$, and we have the decomposition $$ \VF^{c}({\mathbb A}^{2}) = \VF^{0}({\mathbb A}^{2}) \oplus KE \text{ \ where \ } E:=x\partial_{x} + y\partial_{y} \text{ \ is the {\it Euler field}}. $$ \begin{lem}\lab{equiv.lem} If $\alpha\colon K[x,y] \to K[x,y]$ is \'etale, then $\alpha(D_{h}) = j(\alpha)^{-1}D_{\alpha(h)}$, and $\Div(\alpha(E)) = 2$. Hence $\alpha(\VF^{0}({\mathbb A}^{2})) \subset \VF^{0}({\mathbb A}^{2})$ and $\alpha(\VF^{c}({\mathbb A}^{2})) \subset \VF^{c}({\mathbb A}^{2})$ . In particular, the homomorphism $\mu\colon P \to \VF({\mathbb A}^{2})$ is equivariant with respect to the group $\SAut(K[x,y])=\SAut_{\text{\it LA}}(P)$. \end{lem} \begin{proof} We have $\alpha(D_{h})\circ\alpha = \alpha\circ D_{h}$, hence \begin{multline*} \alpha(D_{h})(\alpha(f)) = \alpha(D_{h}(f)) = \alpha(j(h,f)) = j(\alpha)^{-1}j(\alpha(h),\alpha(f)) =\\= j(\alpha)^{-1}D_{\alpha(h)}(\alpha(f)). \end{multline*} From formula $(*)$ we get $\alpha(E) = \frac{1}{j(\alpha)}\left((g_{y}f-f_{y}g)\partial_{x} + (-g_{x}f+f_{x}g)\partial_{y}\right)$ which implies that $\Div\alpha(E) = 2$. \end{proof} \subsection*{Lie subalgebras of $\VF({\mathbb A}^{2})$} Let $\Aff({\mathbb A}^{2})$ denote the group of {\it affine transformations\/} of ${\mathbb A}^{2}$, $x \mapsto Ax + b$, where $A\in\GL_{2}(K)$ and $b\in K^{2}$. 
The determinant defines a character $\det\colon \Aff({\mathbb A}^{2}) \to K^{*}$ whose kernel will be denoted by $\SAff({\mathbb A}^{2})$. For the corresponding Lie algebras we write $\saff_{2}:=\Lie \SAff({\mathbb A}^{2}) \subset \aff_{2}:=\Lie \Aff({\mathbb A}^{2})$. There is a canonical embedding $\aff_{2}\subset \VF({\mathbb A}^{2})$ which identifies $\aff_{2}$ with the Lie subalgebra $$ \langle \partial_{x},\partial_{y}, x\partial_{x} + y\partial_{y}, x\partial_{x} - y\partial_{y}, x\partial_{y}, y\partial_{x} \rangle \subset \VF^{c}({\mathbb A}^{2}), $$ and $\saff_{2}$ with $$ \mu(P_{x,y}) = \langle \partial_{x},\partial_{y}, x\partial_{x} - y\partial_{y}, x\partial_{y}, y\partial_{x} \rangle \subset \VF^{0}({\mathbb A}^{2}). $$ Note that the {\it Euler field\/} $E=x\partial_{x} + y\partial_{y} \in \aff_{2}$ is determined by the condition that $E$ acts trivially on $\sltwo$ and that $[E,D]=-D$ for $D\in \rad(\saff_{2})=K\partial_{x}\oplus K\partial_{y}$. We also remark that the centralizer of $\saff_{2}$ in $\VF({\mathbb A}^{2})$ is trivial: $$ \cent_{\VF({\mathbb A}^{2})}(\saff_{2})=(0). $$ In fact, $\cent_{\VF({\mathbb A}^{2})}(\{\partial_{x},\partial_{y}\})= K\partial_{x}\oplus K\partial_{y}$, and $(K\partial_{x}\oplus K\partial_{y})^{\sltwo} = (0)$. Let $\alpha=(f,g)\in\End(K[x,y])$ be \'etale, and assume, for simplicity, that $j(f,g)=1$. Then we get from formula $(*)$ \begin{gather*} \alpha(\partial_{x})=g_{y}\partial_{x} -g_{x}\partial_{y} = - D_{g}, \quad \alpha(\partial_{y})= -f_{y}\partial_{x} +f_{x}\partial_{y} = D_{f},\\ \alpha(x\partial_{y}) = f D_{f}=\textstyle{\frac{1}{2}}D_{f^{2}}, \quad \alpha(y \partial_{x}) = -g D_{g}= -\textstyle{\frac{1}{2}}D_{g^{2}}, \\ \alpha(x\partial_{x}) = -fD_{g}, \quad \alpha(y\partial_{y})=g D_{f}, \quad \alpha(x\partial_{x}-y\partial_{y}) = -D_{fg}.
\end{gather*} This shows that for an \'etale map $\alpha=(f,g)$ we obtain \begin{gather*} \alpha(\aff_{2}) = \langle D_{f},D_{g},D_{f^{2}},D_{g^{2}},fD_{g},gD_{f}\rangle, \\ \alpha(\saff_{2}) = \langle D_{f},D_{g},D_{f^{2}},D_{g^{2}},D_{fg}\rangle = \mu(P_{f,g}) \end{gather*} \begin{prop}\lab{subVec1.prop} Let $L\subset \VF^{c}({\mathbb A}^{2})$ be a Lie subalgebra isomorphic to $\saff_{2}$. Then there is an \'etale map $\alpha=(f,g)$ such that $L = \alpha(\saff_{2})$. More precisely, if $(D_{f},D_{g})$ is a basis of $\rad(L)$, then $L = \langle D_{f},D_{g},D_{f^{2}},D_{g^{2}},D_{fg}\rangle$, and one can take $\alpha=(f,g)$. \end{prop} \begin{proof} We first remark that $L \subset \VF^{0}({\mathbb A}^{2})$, because $\saff_{2}$ has no non-trivial character. By Proposition~\ref{subP.prop} it suffices to show that $Q:=\mu^{-1}(L)\subset P$ is isomorphic to ${P_{\leq 2}}$. We fix a decomposition $L =L_{0}\oplus \rad(L)$ where $L_{0}\simeq \sltwo$. It is clear that the Lie subalgebra $\tilde Q:=\mu^{-1}(L_{0})\subset P$ contains a copy of $\sltwo$, i.e. $\tilde Q= Q_{0}\oplus K$ where $Q_{0}\simeq \sltwo$. Hence, as a $Q_{0}$-module, we get $Q = Q_{0}\oplus V_{2}\oplus K$ where $V_{2}$ is a two-dimensional irreducible $Q_{0}$-module which is isomorphically mapped onto $\rad(L)$ under $\mu$. Since $\{\rad(L),\rad(L)\} = (0)$ we have $\{V_{2},V_{2}\}\subset K$. Now the claim follows from Lemma~\ref{charP.lem} if we show that $\{V_{2},V_{2}\}\neq (0)$. Assume that $\{V_{2},V_{2}\}= (0)$. Choose an $\sltwo$-triple $(e_0,h_0,f_0)$ in $Q_{0}$ and a basis $(f,g)$ of $V_{2}$ such that $\{e_0,f\}=g$ and $\{e_0,g\}=0$. Since $\{f,g\}=0$ we get from Lemma~\ref{lem1}(b) that $f,g\in K[h]$ for some $h\in K[x,y]$, i.e. $f=p(h)$ and $g=q(h)$ for some polynomials $p,q\in K[t]$. But then $0=\{e_0,g\}=\{e_0,q(h)\} = q'(h)\{e_0,h\}$ and so $\{e_0,h\}=0$. This implies that $g=\{e_0,f\}=\{e_0,p(h)\}=p'(h)\{e_0,h\}=0$, a contradiction.
\end{proof} \begin{rem}\lab{splitsaff.rem} The above description of the Lie subalgebras $L$ isomorphic to $\saff_{2}$ also gives a Levi decomposition of $L$. In fact, $(D_{f},D_{g})$ is a basis of $\rad(L)$ and $L_{0} := \langle D_{f^{2}},D_{g^{2}},D_{fg}\rangle$ is a subalgebra isomorphic to $\sltwo$. The following corollary shows that every Levi decomposition is obtained in this way. \end{rem} \begin{cor}\lab{subVec.cor} Let $L \subset \VF^{c}({\mathbb A}^{2})$ be a Lie subalgebra isomorphic to $\saff_{2}$, and let $L = \rad(L)\oplus L_{0}$ be a Levi decomposition. Then there exist $f,g\in K[x,y]$ such that $\rad(L) = \langle D_{f},D_{g}\rangle$ and $L_{0}= \langle D_{f^{2}},D_{fg},D_{g^{2}}\rangle$. Moreover, if $L'\subset \VF^{c}({\mathbb A}^{2})$ is another Lie subalgebra isomorphic to $\saff_{2}$ and if $L'\supset L_{0}$, then $L'= L$. \end{cor} \begin{proof} We can assume that $L = \saff_{2} =\langle D_{x},D_{y},D_{x^{2}},D_{y^{2}},D_{xy}\rangle$. Then every Lie subalgebra $L_{0}\subset L$ isomorphic to $\sltwo$ is the image of $\sltwo= \langle D_{x^{2}},D_{y^{2}},D_{xy}\rangle$ under conjugation with an element $\alpha$ of the solvable radical $R$ of $\SAff_{2}$. As a subgroup of $\Aut(K[x,y])$ the elements of $R$ are the translations $\alpha=(x+a,y+b)$, and we get $\rad(L)=\langle D_{x+a},D_{y+b}\rangle$ and $\alpha(\sltwo)=\langle D_{(x+a)^{2}},D_{(y+b)^{2}},D_{(x+a)(y+b)}\rangle$ as claimed. For the last statement, we can assume that $L' = \langle D_{f},D_{g},D_{f^{2}},D_{g^{2}},D_{fg}\rangle$ such that $\langle D_{f^{2}},D_{g^{2}},D_{fg}\rangle = \sltwo$. This implies that $\langle f^{2},g^{2},fg,1\rangle=\langle x^{2},y^{2},xy,1\rangle$, and the claim follows. \end{proof} \begin{prop}\lab{subVec2.prop} Let $M\subset \VF^{c}({\mathbb A}^{2})$ be a Lie subalgebra isomorphic to $\aff_{2}$. Then there is an \'etale map $\alpha$ such that $M = \alpha(\aff_{2})$.
More precisely, if $(D_{f},D_{g})$ is a basis of $\rad([M,M])$, then $M = \langle D_{f},D_{g},fD_{f},gD_{g},gD_{f},fD_{g}\rangle$, and one can take $\alpha=(f,g)$. \end{prop} \begin{proof} The subalgebra $M':=[M,M]$ is isomorphic to $\saff_{2}$, hence, by Proposition~\ref{subVec1.prop}, $M'=\alpha(\saff_{2})$ for an \'etale map $\alpha=(f,g)$ where we can assume that $j(\alpha)=1$. We want to show that $\alpha(\aff_{2})=M$. Consider the decomposition $M = J \oplus M_0 \oplus KD$ where $J = \rad(M')$, $M_0$ is isomorphic to $\sltwo$, and $D$ is the Euler element acting trivially on $M_0$. We have $\alpha(\aff_2)=M' \oplus KE$ where $E$ is the image of the Euler element of $\aff_2$. Since $\VF^{c}({\mathbb A}^{2}) = \VF^{0}({\mathbb A}^{2}) \oplus KD'$ for any $D' \in \VF^{c}({\mathbb A}^{2})$ with $\Div D'\neq 0$, we can write $D = aE + F$ with some $a\in K$ and $F \in \VF^{0}({\mathbb A}^{2})$, i.e. $F = D_{h}$ for some $h\in K[x,y]$. By construction, $F=D-aE$ commutes with $M_0$. Since $M_{0}= \langle D_{f^{2}},D_{g^{2}},D_{fg} \rangle$ we get $\{h,f^2\} = c$ where $c \in K$. Thus $c = \{h,f^2\} = 2f\{h,f\}$ which implies that $\{h,f\}=0$. Similarly, we find $\{h,g\}=0$, hence $h$ is in the center of $\mu^{-1}(M')=P_{f,g} \subset P$. Thus, by Lemma~\ref{lem1}(c), $h\in K$ and so $D_h=0$, which implies $D = aE$.
\item[(iv)] All Lie subalgebras of $\VF^{c}({\mathbb A}^{2})$ isomorphic to $\aff_{2}$ are conjugate under $\Aut(K[x,y])$. \end{enumerate} \end{thm} For the proof we need to compare the automorphisms of $P$ with those of the image $\mu(P)=\VF^{0}({\mathbb A}^{2}) \simeq P/K$. Since $K$ is the center of $P$, we have a canonical homomorphism $F\colon\Aut_{\text{\it LA}}(P) \to \Aut_{\text{\it LA}}(P/K)$, $\phi\mapsto \bar\phi$. \begin{lem} The map $F\colon \Aut_{\text{\it LA}}(P) \to \Aut_{\text{\it LA}}(P/K)$ is an isomorphism. \end{lem} \begin{proof} If $\phi\in\ker F$, then $\phi(x)=x+a, \phi(y)=y+b$ where $a,b\in K$. By Lemma~\ref{aut.lem}, the $K$-algebra automorphism $\alpha$ of $K[x,y]$ defined by $x\mapsto x+a$, $y \mapsto y+b$ is a Lie algebra automorphism of $P$, and $\phi=\alpha$ by Lemma~\ref{lem2}. But then $\phi(x^{2}) = (x+a)^{2}= x^{2}+2ax + a^{2}$, and so $\bar\phi(\overline{x^{2}}) = \overline{x^{2}} + 2a\overline{x}$. Therefore, $a=0$, and similarly we get $b=0$, hence $\phi=\id_{P}$. Put ${\bar P} := P/K$ and let $\rho\colon {\bar P} \xrightarrow{\sim} {\bar P}$ be a Lie algebra automorphism. Then $\bar L := \rho({\bar{P}_{\leq 2}})\subset {\bar P}$ is a Lie subalgebra isomorphic to $\saff_{2}$ and thus $L:=p^{-1}(\bar L)$ is a Lie subalgebra of $P$ isomorphic to ${P_{\leq 2}}$, by Proposition~\ref{subP.prop}. Choose $f,g\in L$ such that $\bar f=\rho(\bar x)$ and $\bar g = \rho(\bar y)$. Then $\langle 1,f,g \rangle = \rad(L)$, and so $L=P_{f,g}$, by Proposition~\ref{subP.prop}. It follows that the map $\phi\colon x \mapsto f, y \mapsto g$ is an injective endomorphism of $P$ (see Remark~\ref{homP.rem}), and $\bar\phi = \rho$. Since $\rho$ is an isomorphism, the same holds for $\phi$.
By (i) we get $K[x,y] = K[f,g]$, and so the endomorphism $x \mapsto f, y\mapsto g$ of $K[x,y]$ is an isomorphism of $P$, mapping ${P_{\leq 2}}$ to $L$. (ii)$\Rightarrow$(iii): If $\bar L \subset \VF^{c}({\mathbb A}^{2})$ is a Lie subalgebra isomorphic to $\saff_{2}$, then $\bar L = \mu(P_{f,g})$ for some $f,g \in K[x,y]$, by Proposition~\ref{subVec1.prop}. By (ii), $P_{f,g} = \alpha({P_{\leq 2}})$ for some $\alpha\in\SAut_{\text{\it LA}}(P)=\SAut(K[x,y])$. Hence $\bar L = \mu( \alpha({\bar{P}_{\leq 2}}))=\bar\alpha(\saff_{2})$, by Lemma~\ref{equiv.lem}. (iii)$\Rightarrow$(iv): Let $M \subset \VF^{c}({\mathbb A}^{2})$ be a Lie subalgebra isomorphic to $\aff_{2}$, and set $M':=[M,M]\simeq \saff_{2}$. By (iii) there is an automorphism $\alpha\in\Aut(K[x,y])$ such that $M'=\alpha(\saff_{2})$. It follows that $\alpha(\aff_{2}) = M$ since $M$ is determined by $\rad(M')$ as a Lie subalgebra, by Proposition~\ref{subVec2.prop}. (iv)$\Rightarrow$(i): Let $f,g\in K[x,y]$ such that $\{f,g\}=1$, and let $\alpha\colon K[x,y] \to K[x,y]$ be the \'etale homomorphism defined by $\alpha(x)=f$ and $\alpha(y)=g$. Then $M:=\alpha(\aff_{2})\subset \VF^{c}({\mathbb A}^{2})$ is a Lie subalgebra isomorphic to $\aff_{2}$, by Lemma~\ref{equiv.lem}. By assumption (iv), there is an automorphism $\beta \in\Aut(K[x,y])$ such that $\beta(\aff_{2}) = M$. It follows that $\beta^{-1}\circ \alpha$ is an \'etale homomorphism which induces an isomorphism of $\aff_{2}$, hence of $\saff_{2}$ and thus of $\rad(\saff_{2}) = K\partial_{x} \oplus K\partial_{y}$. This implies that $\beta^{-1}\circ\alpha$ is an automorphism, and so $K[f,g]=K[x,y]$. \qed \begin{rem}\lab{example.rem} It is not true that the Lie subalgebras of $P$ or of $\VF^{c}({\mathbb A}^{2})$ isomorphic to $\sltwo$ are all equivalent, respectively conjugate.
This can be seen from the example $S = Kx^{2}y\oplus Kxy \oplus Ky \subset P$ which is isomorphic to $\sltwo$, but not equivalent to $Kx^{2}\oplus Kxy \oplus Ky^{2}$ under $\Aut_{\text{\it LA}}(P)$. In fact, the element $x^{2}y$ does not act locally finitely on $P$. \end{rem} \subsection*{Algebraic Lie algebras} If an algebraic group $G$ acts on an affine variety $X$ we get a canonical anti-homomorphism of Lie algebras $\Phi\colon\Lie G \to \VF(X)$ defined in the usual way: $$ \Lie G \ni A \mapsto \xi_{A} \text{ with } (\xi_{A})_{x} := d\phi_{x}(A) \text{ for } x\in X, $$ where $\phi_{x}\colon G \to X$ is the orbit map $g\mapsto gx$. A Lie algebra $L \subset \VF(X)$ is called {\it algebraic\/} if $L$ is contained in $\Phi(\Lie G)$ for some action of an algebraic group $G$ on $X$. It is shown in \cite{CoDr2003From-Lie-algebras-} that $L$ is algebraic if and only if $L$ acts locally finitely on $\VF(X)$. With this result we get the following consequence of our Theorem 1. \begin{cor}\lab{corollary} The following statements are equivalent. \begin{enumerate} \item[(i)] The Jacobian Conjecture holds in dimension 2. \item[(ii)] All Lie subalgebras of $\VF^{c}({\mathbb A}^{2})$ isomorphic to $\saff_{2}$ are algebraic. \item[(iii)] All Lie subalgebras of $\VF^{c}({\mathbb A}^{2})$ isomorphic to $\aff_{2}$ are algebraic. \end{enumerate} \end{cor} \begin{proof} It is clear that the equivalent statements (i), (ii) or (iii) of Theorem 1 imply (ii) and (iii) of the corollary. It follows from the Propositions~\ref{subVec1.prop} and \ref{subVec2.prop} that every Lie subalgebra $L$ isomorphic to $\saff_{2}$ is contained in a Lie subalgebra $Q$ isomorphic to $\aff_{2}$, hence (iii) implies (ii). It remains to prove that (ii) implies (i). We will show that (ii) implies that every Lie subalgebra $L \subset \VF^{c}({\mathbb A}^{2})$ isomorphic to $\saff_{2}$ is conjugate to $\saff_{2}$. Then the claim follows from Theorem 1. By (ii), there is a connected algebraic group $G$ acting faithfully on ${\mathbb A}^{2}$ such that $\Phi(\Lie G)$ contains $L$.
Therefore, $\Lie G$ contains a subalgebra $\mathfrak{s}$ isomorphic to $\sltwo$, and so $G$ contains a closed subgroup $S$ such that $\Lie S = \mathfrak{s}$. Since every action of $\SLtwo$ on ${\mathbb A}^{2}$ is linearizable (see \cite{KrPo1985Semisimple-group-a}), there is an automorphism $\alpha$ such that $\alpha(\mathfrak{s}) = \sltwo = \langle x\partial_{y},y\partial_{x},x\partial_{x} - y\partial_{y} \rangle$. But this implies, by Corollary~\ref{subVec.cor}, that $\alpha(L) =\saff_{2}$. \end{proof}
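As a concrete sanity check of the identification $\alpha(\saff_{2}) = \langle D_{f},D_{g},D_{f^{2}},D_{g^{2}},D_{fg}\rangle$, the defining relations can be verified symbolically. The following sketch is ours and is not part of the formal argument; the pair $f = x$, $g = y + x^{2}$ is just one hypothetical \'etale map with $j(f,g)=1$, and the convention $D_{h} = -h_{y}\partial_{x}+h_{x}\partial_{y}$ is the one consistent with the formulas $\alpha(\partial_{x}) = -D_{g}$, $\alpha(\partial_{y}) = D_{f}$ above:

```python
import sympy as sp

x, y = sp.symbols('x y')

def pb(a, b):
    # Poisson bracket {a,b} = a_x b_y - a_y b_x
    return sp.expand(sp.diff(a, x)*sp.diff(b, y) - sp.diff(a, y)*sp.diff(b, x))

def D(h):
    # vector field D_h = -h_y d/dx + h_x d/dy, stored as a coefficient pair
    return (-sp.diff(h, y), sp.diff(h, x))

def lie(V, W):
    # Lie bracket [V,W] of vector fields given as coefficient pairs
    act = lambda U, p: U[0]*sp.diff(p, x) + U[1]*sp.diff(p, y)
    return tuple(sp.expand(act(V, W[i]) - act(W, V[i])) for i in range(2))

def eq(V, W):
    return all(sp.expand(a - b) == 0 for a, b in zip(V, W))

# hypothetical etale pair with Jacobian determinant j(f,g) = {f,g} = 1
f, g = x, y + x**2
assert pb(f, g) == 1

# [D_a, D_b] = D_{{a,b}}: the radical <D_f, D_g> is abelian, and
# D_{f^2}, D_{g^2}, D_{fg} close into the sl_2 relations
assert eq(lie(D(f), D(g)), D(pb(f, g)))              # D_1 = 0
assert eq(lie(D(f*g), D(f**2)), D(pb(f*g, f**2)))    # = -2 D_{f^2}
assert eq(lie(D(f*g), D(g**2)), D(pb(f*g, g**2)))    # = +2 D_{g^2}
assert eq(lie(D(f**2), D(g**2)), D(pb(f**2, g**2)))  # =  4 D_{fg}
print("saff_2 relations verified for f = x, g = y + x**2")
```

Since $\{f,g\}=1$, the brackets $\{fg,f^{2}\}=-2f^{2}$, $\{fg,g^{2}\}=2g^{2}$ and $\{f^{2},g^{2}\}=4fg$ exhibit $\langle D_{f^{2}},D_{g^{2}},D_{fg}\rangle$ as a copy of $\sltwo$, as used throughout.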
\section{Introduction : the notion of interaction confinement} Nanofluidic channels -- devices that allow for the well-controlled study of confined water and ion transport -- have experienced a tremendous scale reduction in recent years~\cite{Kavokine2021}. Only in the last few years has molecular-scale fluid confinement been achieved in all possible geometries: 0D nanopores~\cite{Feng2016}, 1D nanotubes~\cite{Tunuguntla2017} and 2D nano-slits~\cite{Radha2016}. As the size of experimentally accessible channels shrinks, the complexity of the theoretical tools required to describe them grows, in particular regarding the description of the solid-liquid interface. In microfluidic devices -- with channels on the few micrometer scale -- a wall typically provides a no-slip boundary condition. In nanofluidics -- with channel sizes smaller than 100 nm -- a wall needs to be described in terms of microscopic but still coarse-grained parameters: typically, the surface charge and the hydrodynamic slip length. At the even smaller scale of \emph{single-digit nanopores}~\cite{Faucher2019} -- channels with one dimension smaller than 10 nm -- we highlight in this paper that the wall needs to be described in terms of its electronic properties, since these affect the inter-particle interactions within the channel. We define \emph{interaction confinement} as the regime where the interactions between particles inside a channel are \PR{modified} by the \PR{presence and }nature of the channel walls. Without being named as such, interaction confinement has been known for many years in the theory of biological ion channels. It was realized as early as 1969 by Parsegian~\cite{Parsegian1969} that an ion faces an energy barrier when crossing a \PR{lipid membrane: as the dielectric screening of the ion's Coulomb potential is weaker in the membrane than in water, the ion acquires an additional dielectric self-energy when entering the channel.
This corresponds precisely to a modification of the Coulomb potential due to interaction confinement.} Yet, the equivalence between self-energy barrier and modified inter-particle interactions was realized much later, in the study of strongly confined ion transport~\cite{Cheng2005,Kamenev2006,Zhang2006,Zhang2005,Kaufman2015}. Initial studies focused on one-dimensional tube-like channels; both the channel wall material and the water inside were treated as local dielectric media. The \PR{contrast between the dielectric constants of }the two media was shown to result in the actual confinement of the electric field produced by an ion, with the electric field lines being forced to remain parallel to the channel walls (Fig. 1). The corresponding effective Coulomb interactions are stronger than in bulk water, their strength increasing with decreasing channel width~\cite{Kavokine2021}. In channels with diameters smaller than about 2 nm, these were predicted to cause strong ionic correlations, which in turn result in Bjerrum pairing and ion-exchange phase transitions~\cite{Nicholson2003,Kamenev2006,Zhang2006,Zhang2005}, leading to deviations from Ohm's law transport in the form of Wien effect conduction~\cite{Kavokine2019} and Coulomb-blockade-like phenomena~\cite{Kaufman2015,Kavokine2019}. The concept of reinforced Coulomb interactions due to a dielectric contrast was later extended to a two-dimensional nano-slit geometry~\cite{Robin2021}, where it was found to produce even more striking non-linear ion transport: under the effect of an electric field, the ions may undergo a dynamical phase transition and assemble into dense clusters termed Bjerrum polyelectrolytes~\cite{Robin2021,Zhao2021,Robin2022}. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figure1.pdf} \caption{Schematic view of interaction confinement for a positive ion in a two-dimensional channel.
The Coulomb potential produced by the ion is modified by the polarization charges it induces within the channel wall. The corresponding field lines are typically confined within the channel instead of pointing isotropically away from the ion.} \end{figure} The above-mentioned non-linear effects have been studied under the assumption that the channel wall material can be described as a local dielectric medium, with a permittivity much lower than that of bulk water. This is typically the case when the channel is embedded into a lipid membrane, or if it is made of an insulating material such as hexagonal boron nitride (hBN) or $\rm MoS_2$. However, the local dielectric assumption no longer holds for carbon-based materials -- graphene and graphite -- that are widely used to manufacture nanofluidic channels, as these may have conduction electrons \LB{and polarization effects can alter wall-ion interactions \cite{Misra2017,Misra2021}.} An opposite approximation, where the channel wall is treated as a perfect metal, has been proposed to study ion \LB{behavior} in carbon nanopores~\cite{Kondrat2011,Lee2014,Merlet2012}. The Thomas-Fermi model may be used to interpolate between a metallic and an insulating behavior~\cite{Mahan}, and computations of Coulomb interactions above a Thomas-Fermi surface have been reported~\cite{Vorotyntsev1980,Kornyshev1980,Kornyshev1982,Comtet2017,Kaiser2017,Scalfi2020,Schlaich2022}. But, to our knowledge, these have not been extended to a confined geometry. \LB{Further,} no practical way of accounting for a more complex dielectric response has been proposed. This is an important shortcoming, since the confined interactions directly control the non-linear ion transport and phase behavior in single-digit nanopores. As a recent example, it was shown experimentally that the confinement-induced shift in the freezing transition of an ionic liquid depends on the metallic or insulating nature of the solid wall \cite{Comtet2017}.
In this paper, we introduce a method for evaluating the confined Coulomb interactions in a two-dimensional channel with \PR{a slit geometry and} arbitrary wall material, described by its \emph{surface response function}. We derive the general expression \PR{of the confined Coulomb potential} in Sec. II, and in Sec. III we discuss how the surface response function is expressed in terms of a material's electronic properties. In Sec. IV, we evaluate explicitly the confined potential for different channel wall materials. Finally, in Sec. V, we implement confined interactions in Brownian dynamics simulations and show that the ionic conduction in a 2D channel can be adjusted from a Wien effect to an Ohm's law behavior through the electronic properties of the channel wall. \section{Evaluation of the confined potential} We consider a point charge $+e$ placed in the middle of a slit-like channel of height $h$ and infinite length and width (Fig. 1). The $z$ axis is perpendicular to the channel walls and we work in cylindrical coordinates $(\boldsymbol{\rho},z)$. The channel is filled with water, which we for now assume to have a local and isotropic dielectric permittivity $\epsilon_w$. The \PR{dielectric properties} of the channel wall are characterized by the \emph{surface response function} $g(q,\omega)$, which is a well-known quantity in the surface science and plasmonics literature~\cite{Liebsch,Pitarke2007}. It is phenomenologically defined as the reflection coefficient for evanescent plane waves. If an external potential \beq \phi_{\rm ext}(\boldsymbol{\rho},z,t) = \phi_0 e^{i(\mathbf{q} \boldsymbol{\rho} -\omega t)}e^{q(z+h/2)} \eeq acts, say, on the confining wall at $z<-h/2$, then the potential induced by the confining wall in the half-space $z>-h/2$ is \beq \phi_{\rm ind}(\boldsymbol{\rho},z,t) = - \phi_0 g(q,\omega) e^{i(\mathbf{q} \boldsymbol{\rho} -\omega t)}e^{-q(z+h/2)}. 
\label{def_g_ph} \eeq In this paper we will only be concerned with static potentials, hence we will use $g(q) \equiv g(q,\omega = 0)$, but general expressions valid for any frequency $\omega$ will still be given where relevant. We discuss in Sec. III how the surface response function is related to the microscopic properties of the wall material. For now, we use it to derive a formal expression for the electrostatic potential within the channel. The Coulomb potential inside the \PR{channel} is the sum of the ``external" potential produced by the test charge $+e$, and of the ``induced" potential produced by the polarisation charges in the two confining walls. The external potential is simply the 3D Coulomb potential $\phi_{\rm ext}(\mathbf{r}) = e/(4\pi\epsilon_0 \epsilon_w | \mathbf{r}|)$, which supports the following Fourier decomposition: \begin{equation} \phi_{\rm ext}(\rho,z) = \frac{e}{4\pi \epsilon_0\epsilon_w} \int \frac{\d \mathbf{q}}{(2\pi)^2} \frac{2\pi}{q} e^{-q|z|}e^{i \mathbf{q} \boldsymbol{\rho}}. \label{phiext} \end{equation} The induced potential may be determined separately for every wavevector $q$. Let $\phi_{\rm ext}^m(q)$ be the external potential acting on the wall at $z = -h/2$. It is the sum of the potential produced by the test charge, and of the induced potential created by the polarization charges in the medium at $z > h/2$, both screened by the water dielectric constant. By symmetry, the external potential is the same in the upper and lower dielectric medium. This yields the following self-consistent equation: \begin{equation} \phi_{\rm ext}^m (q) = \frac{e}{4\pi \epsilon_0 \epsilon_w} \frac{2\pi}{q} e^{-qh/2} - g(q) \phi_{\rm ext}^m(q) e^{-q h}. \label{ion2Dsc} \end{equation} We are interested in the total potential in the plane $z = 0$, which is \begin{equation} \phi_{\rm tot} (q,z = 0) = \phi_{\rm ext}(q,z = 0) -2 g(q) \phi_{\rm ext}^m (q) e^{-qh/2}. 
\end{equation} Making use of eq.~\eqref{ion2Dsc}, we obtain \begin{equation} \phi_{\rm tot}(q,0) = \frac{e}{4\pi \epsilon_0\epsilon_w} \frac{2\pi}{q} \left( 1- \frac{2 g(q) e^{-qh}}{1+g(q)e^{-qh}} \right). \label{main_result} \end{equation} The potential in real space is then obtained by inverse Fourier transformation, which thanks to the rotational symmetry reduces to \begin{equation} \phi_{\rm tot} (\rho,0) \equiv \phi(\rho) = \int_0^{+\infty} \frac{\d q}{2\pi} \, q J_0(q \rho) \phi_{\rm tot} (q,0), \label{invhankel} \end{equation} with $J_0$ the Bessel function of the first kind. \LB{These expressions establish the link between the confined charge-charge interactions and the surface response function of the materials (defined for a semi-infinite medium).} Eqs.~\eqref{main_result} and~\eqref{invhankel} constitute the main formal result of this paper. \section{Surface response functions} The result in Eqs.~\eqref{main_result} and~\eqref{invhankel} is of little use unless one is able to evaluate the surface response function for a given channel wall material. This is the purpose of the present section. Throughout this section, we will consider a single interface in the plane $z = 0$, with the solid material filling the half-space $z<0$ and the half-space $z>0$ filled with vacuum. We will subsequently extend our results to the case where the solid is in contact with water. \subsection{Case of a local dielectric} If the solid material is a local dielectric characterized by a permittivity $\epsilon_m$, it cannot contain any induced charges (except on the surface), and the potential therefore solves the Laplace equation $\nabla^2 \phi_m = 0$ inside the material. Upon Fourier transformation, the Laplace equation becomes \beq \PR{\frac{\partial^2 \phi_m}{\partial z^2} (q,z)- q^2 \phi_m(q,z) = 0.} \label{laplace} \eeq The solution of \eqref{laplace} that vanishes at $z \to -\infty$ is of the form $\phi_m(q,z) = \phi_m e^{qz}$.
Outside the material, the potential is the sum of the evanescent wave external potential and the induced potential. Since the Laplace equation holds, the outside potential reads \begin{equation} \phi(q,z) = \phi_{\rm ext} e^{qz} + \phi_{\rm ind} e^{-qz}. \end{equation} Now, we must enforce boundary conditions on the interface. These are given by continuity of the potential and of the displacement field $\mathbf{D} = -\epsilon_0 \epsilon \nabla \phi$, where $\epsilon$ is the relative permittivity (1 or $\epsilon_m$). The boundary conditions read \begin{equation} \begin{split} &\phi_{\rm ext} + \phi_{\rm ind} = \phi_m \\ &\phi_{\rm ext} - \phi_{\rm ind} = \epsilon_m \phi_m. \end{split} \label{BC_local} \end{equation} Hence we obtain $\phi_{\rm ind} = \phi_{\rm ext}(1-\epsilon_m)/(\epsilon_m+1)$, and, using the definition in eq.~\eqref{def_g_ph}, the expression of the surface response function: \begin{equation} g(q) = \frac{\epsilon_m-1}{\epsilon_m +1}. \end{equation} The surface response function thus appears as a generalization of the image charge formalism: in the local dielectric case, its expression corresponds to the magnitude of the image charge~\cite{Jackson}. \subsection{General case: microscopic expression} When no particular assumption can be made for the material's dielectric properties, the surface response function must be determined from its general microscopic expression. For any medium, we may define the density-density response function $\chi$ as the linear response function relating the induced charge density $\delta n$ to the externally applied potential $\phi_{\rm ext}$ (in energy units): \begin{equation} \delta n (\mathbf{r},t) = \int_{-\infty}^{+\infty} \d t' \int \d \mathbf{r}' \chi(\mathbf{r},\mathbf{r}',t-t') \phi_{\rm ext}(\mathbf{r}',t'). 
\label{chidef} \end{equation} In our interface geometry (with translational invariance parallel to the interface), we may define the space-time Fourier transform \begin{equation} \begin{split} \chi(q,z,z',\omega) &= \int \d(\boldsymbol{\rho}-\boldsymbol{\rho}') \int_{-\infty}^{+\infty} \d(t-t') e^{-i\mathbf{q} (\boldsymbol{\rho} - \boldsymbol{\rho}')} \\ &e^{-i \omega(t-t')} \chi(\boldsymbol{\rho}-\boldsymbol{\rho}',z,z',t-t'). \end{split} \end{equation} The surface response function is then expressed as \begin{equation} g(q,\omega) = - \frac{e^2}{2 \epsilon_0q} \int_{-\infty}^{0}\d z \d z' \, e^{q(z+z')} \chi(q,z,z',\omega). \label{gdef} \end{equation} We may check that this expression is consistent with the phenomenological definition in eq.~\eqref{def_g_ph}. \PR{Suppose the solid is subject to an evanescent plane wave at frequency $\omega$, of the form $\phi_{\rm ext}(\boldsymbol{\rho},z,t) = \phi_0 e^{i (\mathbf{q} \boldsymbol{\rho} - \omega t)} e^{q z} $. Its space-time Fourier transform is $\phi_{\rm ext}(z,q,\omega) = \phi_{0} e^{qz}$. Then, the induced charge density is \begin{equation} \delta n (q,z,\omega) = \phi_{0} \int_{-\infty}^0 \d z' \chi(q,z,z',\omega) e^{qz'}, \end{equation} and the induced potential at a distance $z$ above the medium is \begin{equation} \begin{split} \phi_{\rm ind}(q,z,\omega) &= \phi_{0} \int_{-\infty}^0 \d z' \d z'' \chi(q,z',z'',\omega) \frac{e^2}{2 \epsilon_0 q} e^{q(z'+z''-z)} \\ &= -g(q,\omega) \phi_{0} e^{-qz}, \end{split} \end{equation} or, in real space \beq \phi_{\rm ind}(\boldsymbol{\rho},z,t) = - \phi_0 g(q,\omega) e^{i (\mathbf{q} \boldsymbol{\rho}-\omega t)} e^{-q z}. \eeq } Computing the surface response function according to the microscopic expression in eq.~\eqref{gdef} requires the knowledge of the density response function $\chi(q,z,z',\omega)$ for a semi-infinite solid. There exist various analytical and numerical methods for its evaluation to varying degrees of precision.
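As a simple worked special case of eq.~\eqref{gdef} (an illustration we add here, not required for what follows), consider a strictly two-dimensional electron system located just below the surface plane, so that $\chi(q,z,z',\omega) = \chi_{\rm 2D}(q,\omega)\,\delta(z)\delta(z')$ with $\chi_{\rm 2D}$ the sheet's in-plane density response. The exponential factors then drop out and eq.~\eqref{gdef} reduces to
\begin{equation}
g(q,\omega) = -\frac{e^2}{2\epsilon_0 q}\, \chi_{\rm 2D}(q,\omega).
\end{equation}
For instance, with a statically screened response of the form $\chi_{\rm 2D}(q) = \chi^0_{\rm 2D}/(1-\frac{e^2}{2\epsilon_0 q}\chi^0_{\rm 2D})$ and the Thomas-Fermi value $\chi^0_{\rm 2D} = -\partial n/\partial \mu$, one obtains $g(q) = q_{\rm TF}/(q+q_{\rm TF})$, with $q_{\rm TF} = (e^2/2\epsilon_0)\,\partial n/\partial \mu$: the sheet responds like a perfect metal ($g \to 1$) at small $q$, and like vacuum ($g \to 0$) at large $q$.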
If the solid's polarisation is mainly of electronic origin, the simplest treatment (RPA, Random Phase Approximation) that takes into account electron-electron interactions requires solving the following integral (Dyson) equation~\cite{Pitarke2007}: \begin{equation} \begin{split} &\chi(q,z,z',\omega) = \chi^0(q,z,z',\omega) + \dots \\ &+ \int \d z_1 \d z_2 \, \chi^0(q,z,z_1,\omega) V_q(z_1-z_2) \chi (q,z_2,z',\omega), \end{split} \label{dysonchi} \end{equation} where $V_q(z) = (e^2/2\epsilon_0 q) e^{-q|z|}$ is the Fourier-transformed Coulomb potential. Here $\chi^0$ is the non-interacting density response function: it determines the electrons' response to an external potential $\phi_{\rm ext}$, the electron-electron interactions being switched off. It can in principle be computed if the eigenenergies $E_{\lambda}$ and eigenfunctions $\psi_\lambda (\mathbf{r})$ of the non-interacting system are known~\cite{rammer_ch6}: \begin{equation} \begin{split} \chi^0(\mathbf{r},\mathbf{r}',\omega) = \sum_{\lambda,\lambda'}\, \frac{n_{\rm F}(E_{\lambda})-n_{\rm F}(E_{\lambda'})}{E_{\lambda}-E_{\lambda'} + \hbar \omega + i \delta} \dots \\ \dots \psi^*_{\lambda}(\mathbf{r}) \psi_{\lambda'}(\mathbf{r}) \psi^*_{\lambda'}(\mathbf{r}') \psi_{\lambda}(\mathbf{r}'), \end{split} \label{chi0def} \end{equation} where $n_{\rm F}$ is the Fermi-Dirac distribution and $\delta \to 0^+$. In this way, the effective Coulomb interactions inside a nanoscale channel are directly related to the channel walls' electronic structure. \subsection{Specular reflection approximation} Even if $\chi^0$ is known, eq.~\eqref{dysonchi} must be solved numerically for every value of $q$ and $\omega$. A considerable simplification is achieved within the so-called specular reflection (SR) approximation~\cite{Griffin1976}, which allows one to solve~\eqref{dysonchi} analytically and express the surface response in terms of the bulk response.
The SR approximation sets \begin{equation} \chi^0(q,z,z',\omega) = \chi^0_{\rm B} (q,z-z',\omega) + \chi^0_{\rm B} (q,z+z',\omega), \label{specular} \end{equation} where $\chi^0_{\rm B}$ is the bulk system's non-interacting density response. This ansatz does not correspond to any particular form of the wavefunctions in eq.~\eqref{chi0def}. It imposes phenomenologically that in the presence of a surface, the points $z$ and $z'$ may either interact directly, or through a specular reflection from the surface at $z=0$. It can be shown that the SR approximation thus amounts to neglecting quantum interference between electrons impinging on and electrons reflected from the surface~\cite{Griffin1976}. Inserting eq.~\eqref{specular} into eq.~\eqref{dysonchi} and carrying out Fourier transforms along the vertical direction (the computation is detailed, for example, in ref.\cite{Griffin1976}), one obtains: \begin{equation} \begin{split} &g (q,\omega) = \frac{1-q\ell_q(\omega)}{1+ q\ell_q(\omega)}, ~~~ \rm with \\ &\ell_q(\omega) = \frac{2}{\pi} \int_0^{+\infty} \frac{\d q_z}{(q^2+q_z^2)\epsilon(q,q_z,\omega)} \end{split} \label{gspecular} \end{equation} where $\epsilon(q,q_z,\omega) = 1 - \frac{e^2}{\epsilon_0 (q^2+q_z^2)} \chi^0_{\rm B}(q,q_z,\omega)$ is the bulk system's dielectric function. 
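As a numerical illustration of eq.~\eqref{gspecular} (a sketch we add here; the Thomas-Fermi form of the bulk dielectric function, $\epsilon(q,q_z) = 1 + k_{\rm TF}^2/(q^2+q_z^2)$, and the value of the screening wavevector $k_{\rm TF}$ are model assumptions), the integral defining $\ell_q$ can be evaluated by standard quadrature and checked against the closed form $\ell_q = 1/\sqrt{q^2+k_{\rm TF}^2}$ that this particular $\epsilon$ admits:

```python
import numpy as np
from scipy.integrate import quad

# Static surface response g(q) in the specular reflection approximation,
# for an assumed Thomas-Fermi bulk dielectric function:
#   eps(q, qz) = 1 + kTF^2/(q^2 + qz^2),  kTF = screening wavevector.

def ell_q(q, kTF):
    # l_q = (2/pi) * int_0^inf dqz / [(q^2 + qz^2) * eps(q, qz)];
    # for this eps the integrand reduces to 1/(q^2 + qz^2 + kTF^2)
    val, _ = quad(lambda qz: 1.0/(q**2 + qz**2 + kTF**2), 0.0, np.inf)
    return 2.0/np.pi * val

def g_surface(q, kTF):
    ql = q * ell_q(q, kTF)
    return (1.0 - ql)/(1.0 + ql)

kTF = 1.0  # in units of inverse length (model parameter)
for q in [0.01, 0.1, 1.0, 10.0]:
    # closed form for this model: l_q = 1/sqrt(q^2 + kTF^2)
    exact = (np.sqrt(q**2 + kTF**2) - q)/(np.sqrt(q**2 + kTF**2) + q)
    assert abs(g_surface(q, kTF) - exact) < 1e-6
# metallic limit g -> 1 at small q, vacuum-like g -> 0 at large q
assert g_surface(1e-4, kTF) > 0.999
assert g_surface(100.0, kTF) < 0.01
```

The model interpolates, as expected, between a perfect-metal response ($g \to 1$) at wavevectors small compared to $k_{\rm TF}$ and a vacuum-like response ($g \to 0$) at large wavevectors.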
The bulk non-interacting density response function is obtained from the Fourier-transformed eq.~\eqref{chi0def}: \begin{equation} \begin{split} \chi^0_{\rm B} (q,q_z,\omega) = \sum_{\nu,\nu'} \int_{\rm BZ} \frac{\mathrm{d}^3k}{4\pi^3} |\langle \mathbf{k} + \mathbf{q},\nu | e^{i \mathbf{q \cdot r}}| \mathbf{k} ,\nu'\rangle|^2 \\ \frac{n_{\rm F}[E_{\nu}(\mathbf{k+q})]-n_{\rm F}[E_{\nu'}(\mathbf{k})]}{E_{\nu}(\mathbf{k}+\mathbf{q})-E_{\nu'}(\mathbf{k})-\hbar (\omega +i\delta) }, \end{split} \label{chi0bulk} \end{equation} where we have re-labeled the states $\lambda \mapsto (\mathbf{k},\nu)$, with $\nu$ a band index and $\mathbf{k}$ a vector within the (three-dimensional) first Brillouin zone. We report here an alternative derivation of eq.~\eqref{gspecular}, which has the advantage of being computationally simpler than the one reported in~\cite{Griffin1976}. It is based on the work of Ritchie and Marusak~\cite{Ritchie1966}, who first proposed the SR approximation in their study of surface plasmons. The idea is that, when eq.~\eqref{specular} is enforced, the \emph{shape} of the density response of the semi-infinite medium to the potential $\phi_{\rm ext}(q,z,\omega) = \phi_{\rm ext}e^{qz}$ is the same as the \emph{shape} of the density response of an infinite medium to a symmetrized potential $\phi_{\rm eff} (q,z,\omega) = \phi_{\rm eff} e^{-q|z|}$. The amplitude $\phi_{\rm eff}$ is a priori not known, and it is determined by enforcing Maxwell boundary conditions at the interface. \begin{widetext} In the following, we will drop the frequency $\omega$, which plays no role in the computation.
In response to the potential $\phi_{\rm eff}$, the induced charge density in the infinite medium reads \begin{align} \delta n (q,z) &= \phi_{\rm eff} \int_{-\infty}^{+\infty} \d z' \chi_{\rm B}(q,z-z') e^{-q|z'|}\\ & = \phi_{\rm eff}\frac{1}{2\pi} \int_{-\infty}^{+\infty} \d q_z \chi_{\rm B}(q,q_z) e^{iq_z z} \int_{-\infty}^{+\infty} \d z' e^{-q|z'|} e^{-i q_z z'} \\ & = \phi_{\rm eff} \frac{q}{\pi} \int_{-\infty}^{+\infty} \d q_z \frac{\chi_{\rm B}(q,q_z)}{q^2+q_z^2} e^{i q_z z}. \end{align} The induced potential $\phi_{\rm ind,m}$ (not to be confused with the induced potential $\phi_{\rm ind}e^{-qz}$ outside the medium) is \begin{align} \phi_{\rm ind,m} (q,z)& = \int_{-\infty}^{+\infty} \d z' \, \delta n (q,z') \frac{e^2}{4\pi \epsilon_0} \frac{2\pi}{q} e^{-q|z-z'|} \\ & = 2 \phi_{\rm eff} \frac{e^2}{4 \pi \epsilon_0} \int_{-\infty}^{+\infty} \d q_z \frac{\chi_{\rm B}(q,q_z)}{q^2+q_z^2} \int_{-\infty}^{+\infty} \d z' e^{-q |z-z'|} e^{i q_z z'} \\ & = 4 \phi_{\rm eff} \frac{e^2}{4\pi \epsilon_0} \int_{-\infty}^{+\infty} \d q_z \frac{q \chi_{\rm B}(q,q_z)}{(q^2+q_z^2)^2} e^{i q_z z}. \label{phiind} \end{align} \end{widetext} At this point, we may introduce the bulk dielectric function $\epsilon(q,q_z)$. For the bulk interacting density response function, the RPA Dyson equation~\eqref{dysonchi} reduces to \begin{equation} \chi_{\rm B}(q,q_z) = \frac{\chi^0_{\rm B}(q,q_z)}{1-\frac{e^2}{\epsilon_0 (q^2+q_z^2)}\chi^0_{\rm B} (q,q_z)}. \end{equation} The dielectric function being defined according to $\epsilon(q,q_z) = 1 - \frac{e^2}{\epsilon_0 (q^2+q_z^2)} \chi^0_{\rm B}(q,q_z)$, we have the relation \begin{equation} \chi_{\rm B}(q,q_z) = \frac{\epsilon_0 (q_z^2+q^2)}{e^2} \left( \frac{1}{\epsilon(q,q_z)}-1 \right).
\end{equation} When inserting this relation into eq.~\eqref{phiind}, we need to compute the integral \begin{equation} I(q) = \int_{-\infty}^{+\infty} \d q_z\, \frac{e^{i q_z z}}{q^2+q_z^2} = \frac{1}{q} \int_{-\infty}^{+\infty} \d u \, \frac{e^{iu qz}}{1+ u^2}. \end{equation} Specializing to the case $z<0$, and noticing that the integrand has poles at $u = i$ and $u = -i$, we may close the integration path in the lower complex plane, so that \begin{equation} I(q) = - \frac{2 i \pi}{q} \underset{u= -i}{\mathrm{Res}} \left[\frac{e^{iu qz}}{1+ u^2} \right] = \frac{\pi}{q} e^{qz}. \end{equation} Finally, \begin{equation} \phi_{\rm ind,m} (q,z) = \phi_{\rm eff} \left(\frac{q}{\pi} \int_{-\infty}^{+\infty} \d q_z \, \frac{e^{i q_z z}}{(q^2+q_z^2) \epsilon(q,q_z)} -e^{qz} \right), \end{equation} so that the total potential in the half-space $z<0$ is \begin{align} \phi_m(q,z) &= \phi_{\rm eff} e^{qz} + \phi_{\rm ind,m}(q,z) \\ &= \phi_{\rm eff} \, \frac{q}{\pi} \int_{-\infty}^{+\infty} \d q_z \, \frac{e^{i q_z z}}{(q^2+q_z^2) \epsilon(q,q_z)}. \end{align} We now need to determine $\phi_{\rm eff}$ in the actual semi-infinite medium by enforcing the boundary conditions at the surface, which are, as in the local dielectric case (section III.A), continuity of the potential and of the displacement field. Outside the medium, we may still express the potential as $\phi_{\rm ext} e^{qz} + \phi_{\rm ind} e^{-qz}$: the sum of the actual potential we are applying and the potential induced by the medium. The displacement field is produced only by the external charges, hence $\mathbf{D}(q,z) = - \epsilon_0 \nabla \left[ \phi_{\rm eff}\, e^{qz} \right]$ in the half-space $z<0$, so that the boundary conditions read: \begin{equation} \begin{split} &\phi_{\rm ext} + \phi_{\rm ind} = q \ell_q \phi_{\rm eff} \\ &\phi_{\rm ext} - \phi_{\rm ind} = \phi_{\rm eff}.
\end{split} \label{BC_specular} \end{equation} We deduce \begin{equation} \phi_{\rm ind} = \frac{q \ell_q-1}{q \ell_q +1} \phi_{\rm ext}, \end{equation} and from the definition of the surface response function in eq.~\eqref{def_g_ph}, we recover eq.~\eqref{gspecular}. \subsection{Water-solid interface} So far, we have discussed the surface response of a solid exposed to vacuum. However, in order to evaluate effective Coulomb interactions in nanochannels according to eq.~\eqref{main_result}, we require the response of the solid in contact with a water slab. The generalization is straightforward if the water is described as a local dielectric medium with permittivity $\epsilon_w$. Then, if the solid is also a local dielectric (see Sec. III.A), the boundary conditions in eq.~\eqref{BC_local} become \begin{equation} \begin{split} &\phi_{\rm ext} + \phi_{\rm ind} = \phi_m \\ &\epsilon_w(\phi_{\rm ext} - \phi_{\rm ind}) = \epsilon_m \phi_m, \end{split} \label{BC_local_mod} \end{equation} so that \beq g(q) = \frac{\epsilon_m - \epsilon_w}{\epsilon_m + \epsilon_w} . \label{glocal} \eeq For an arbitrary solid material in the SR approximation, the boundary conditions in eq.~\eqref{BC_specular} are modified in a similar way, and one obtains \beq g (q,\omega) = \frac{1-\epsilon_w q\ell_q(\omega)}{1+ \epsilon_w q\ell_q(\omega)}. \label{gSR_water} \eeq We may further extend these results to the case where water has anisotropic permittivity, as is typically the case in nanoscale confinement~\cite{Schlaich2016,Fumagalli2018,Bonthuis2011}. In the absence of polarization within the solid wall, the potential created by a point charge $+e$ in anisotropic water with a permittivity tensor $\overline{\overline \epsilon}$ satisfies the Poisson equation \begin{equation} \nabla \left( \overline{\overline \epsilon} \cdot \nabla \phi \right) = -\frac{e}{\epsilon_0} \delta (\mathbf{r} ). 
\end{equation} For a charge placed at $(\rho = 0, z = 0)$, this is solved by \begin{equation} \phi_{\rm ext} (\rho,z) = \frac{e}{4 \pi \epsilon_0\sqrt{\epsilon_{\parallel} \epsilon_{\perp}} \sqrt{\rho^2 + \frac{\epsilon_{\parallel}}{\epsilon_{\perp}} z^2}}, \label{phiext_aniso} \end{equation} where $\epsilon_{\perp}$ (resp. $\epsilon_{\parallel}$) is the component of the permittivity tensor in the confined (resp. non-confined) direction. Eq.~\eqref{phiext_aniso} becomes, after Fourier transformation, \begin{equation} \phi_{\rm ext} (q,z) = \frac{e}{4\pi \epsilon_0 \sqrt{\epsilon_{\parallel} \epsilon_{\perp}} } \frac{2\pi}{q} e^{-aq|z|}, \end{equation} with $a = \sqrt{\epsilon_{\parallel}/\epsilon_{\perp}}$: this now replaces eq.~\eqref{phiext} for the external potential applied on the confining walls. Hence, taking into account the dielectric anisotropy amounts to replacing $\epsilon_w \mapsto \sqrt{\epsilon_{\parallel} \epsilon_{\perp}}$, and introducing factors $a$ in all the exponentials of the type $e^{-qz}$. In particular, eq.~\eqref{main_result} for the total potential in the channel midplane becomes \begin{equation} \phi_{\rm tot}(q,0) = \frac{e}{4\pi \epsilon_0\sqrt{\epsilon_{\parallel} \epsilon_{\perp}} } \frac{2\pi}{q} \left( 1- \frac{2 g_m(q) e^{-aqh}}{1+g_m(q)e^{-aqh}} \right). \label{phitotaniso} \end{equation} The surface response function of the dielectric solid is modified according to \begin{equation} g(q) = \frac{\epsilon_m - \sqrt{\epsilon_{\parallel} \epsilon_{\perp}} }{\epsilon_m + \sqrt{\epsilon_{\parallel} \epsilon_{\perp}} }, \label{ganiso} \end{equation} and in the SR approximation for an arbitrary solid \beq g (q,\omega) = \frac{1-\sqrt{\epsilon_{\parallel} \epsilon_{\perp}} q\ell_q(\omega)}{1+ \sqrt{\epsilon_{\parallel} \epsilon_{\perp}} q\ell_q(\omega)}. 
\eeq \section{Confined potential for model wall materials} In this section we use the results of sections II and III to discuss explicitly the nature of the confined Coulomb interactions for different channel wall materials. \subsection{Local dielectric: quasi-2D Coulomb interactions} A situation typically encountered in nanofluidics is that of a channel with insulating walls that can be described by a local dielectric constant much lower than that of water. Since it applies to biological nanopores, such a configuration has been extensively studied in a 1D geometry~\cite{Teber2005,Levin2006,Loche2019,Kavokine2019}, and the results have recently been extended to a 2D geometry~\cite{Robin2021} through a direct solution of Poisson's equation. The formal result obtained in ref.~\cite{Robin2021} can be recovered without computation in the surface response function framework. By simply substituting eq.~\eqref{glocal} into eq.~\eqref{main_result}, we obtain the Fourier-transformed Coulomb potential as \beq \phi_{\rm tot} (q) = \frac{e}{2 \epsilon_0 \epsilon_w q} \left( 1- \frac{2 (\epsilon_m - \epsilon_w ) e^{-qh}}{\epsilon_m+\epsilon_w + (\epsilon_m - \epsilon_w) e^{-qh}} \right). \label{hankquasi2D} \eeq The real-space potential can then be obtained by \PR{performing the inverse Fourier transform} according to eq.~\eqref{invhankel}. \PR{Expanding the last term of eq.~\eqref{hankquasi2D} in a geometric series, we obtain \beq \phi_{\rm tot}(\rho) = \frac{e}{4 \pi \epsilon_0\epsilon_w} \left(\frac 1 \rho + 2\sum_{n=1}^{+ \infty} \left( \frac{\epsilon_w - \epsilon_m}{\epsilon_w + \epsilon_m}\right)^n\frac{1}{\sqrt{\rho^2 + h^2 n^2}} \right). \label{realspace_quasi2D} \eeq The infinite sum in the above equation can be interpreted in terms of the induced electrostatic potential $\phi_{\rm ind}(\rho)$, created by all the image charges.} The result of eq.~\eqref{realspace_quasi2D} is plotted in Fig. 2a for a channel height $h = 2~\rm nm$.
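The image-charge series of eq.~\eqref{realspace_quasi2D} is easy to evaluate numerically. The following sketch is our own illustration (not the authors' code), with the parameter values of Fig.~2a and lengths in nm, in units of $e/(4\pi\epsilon_0)$:

```python
# Illustrative evaluation of the image-charge series for the quasi-2D potential:
# phi(rho) = (1/eps_w) [1/rho + 2 sum_n x^n / sqrt(rho^2 + n^2 h^2)],
# with reflection factor x = (eps_w - eps_m)/(eps_w + eps_m).
import numpy as np

def phi_quasi2d(rho, h=2.0, eps_w=80.0, eps_m=2.0, nmax=100_000):
    x = (eps_w - eps_m) / (eps_w + eps_m)  # image-charge reflection factor
    n = np.arange(1, nmax + 1)
    series = np.sum(x**n / np.sqrt(rho**2 + (h * n)**2))
    return (1.0 / rho + 2.0 * series) / eps_w

# Short distances: water-like 1/(eps_w rho); long distances: wall-like 1/(eps_m rho).
print(phi_quasi2d(0.01) * 80.0 * 0.01)     # close to 1
print(phi_quasi2d(1000.0) * 2.0 * 1000.0)  # close to 1
```

The two printed numbers confirm the short- and long-distance regimes discussed in the text; the truncation at `nmax` terms is harmless because $x^n$ decays geometrically.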
For simplicity, we assume that the channel is filled with water that has an isotropic dielectric constant $\epsilon_w$, and the wall material has a permittivity $\epsilon_m = 2$. Similarly to the case of a 1D nanotube~\cite{Kavokine2021}, the behavior of the potential as a function of the distance $\rho$ from the charge can be split into three regions. At short distances $\rho \ll h$, the test charge only ``sees'' the dielectric response of water, and $\phi(\rho) \sim 1/(\epsilon_w \rho)$. Conversely, at large distances $\rho \gg h$, the potential is mostly screened by the walls, and $\phi(\rho) \sim 1/(\epsilon_m\rho)$. At intermediate distances, there is a regime where the electric field lines remain parallel to the channel walls due to the dielectric contrast $\epsilon_w \gg \epsilon_m$, and the potential has a logarithmic behavior as a function of distance. These three limiting regimes can be captured in the following analytical expression~\cite{Robin2021}: \begin{equation} \phi(\rho) = \frac{e \mathcal{K}}{2\pi \epsilon_0 \epsilon_w h} \log \left( \frac{\rho + \xi}{\rho} \right), \label{log_potential} \end{equation} where $\mathcal{K} \approx 1.1$ is a geometrical factor, and $\xi = \epsilon_w h /(2\epsilon_m)$ is a lengthscale that sets the transition between the intermediate distance logarithmic and the long-distance $1/\rho$ regimes. This expression is plotted in Fig.~2a and is in excellent agreement with the exact solution. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{figure2.pdf} \caption{Confined potential in a 2D channel ($h = 2~\rm nm$) with insulating walls ($\epsilon_m = 2$). \textbf{a.} Potential along the channel axis, compared to its limiting expressions at short and long distances. \textbf{b.} Self-energy barrier as a function of channel height.
The shaded region corresponds to channel heights for which the self-energy barrier is greater than $k_{\rm B} T$.} \label{interactions_2Dion} \end{figure*} The physical effects of interaction confinement with dielectric walls have been discussed, in the 1D channel case, in ref.~\cite{Kavokine2021}. We obtain a similar picture for a 2D channel, the essential point being that the effective potential in Fig. 2a is always larger than the bulk water Coulomb potential $\phi(\rho) = e/(4\pi \epsilon_0 \epsilon_w \rho)$: confinement leads to enhanced Coulomb interactions. The significance of this interaction enhancement can be assessed by computing the corresponding self-energy (discussed in the Introduction): it can be obtained as $\mathcal{E}_s = e\phi_{\rm ind}(0,0)/2$. The self-energy $\mathcal{E}_s$ is plotted as a function of channel height in Fig. 2b. It is found to be greater than $k_{\rm B}T$ for $h\lesssim 2~\rm nm$, which roughly establishes a threshold for the importance of interaction confinement effects in 2D geometry. \subsection{Perfect metal} We now consider the opposite limit for the dielectric behavior of the channel wall material: a perfect metal, defined by $\epsilon_m \to \infty$. Such a model has been applied to nanopores in carbon electrodes~\cite{Merlet2012,Lee2014}, and several computations of the corresponding effective Coulomb interactions, based on the solution of Poisson's equation, have been proposed both in 1D and 2D geometry~\cite{Weber1939,Kondrat2011,Loche2019}. In the framework of surface response functions, $\epsilon_m \to \infty$ implies $g(q) = 1$, so that eq.~\eqref{main_result} becomes \begin{equation} \phi_{\rm tot}(q,0) = \frac{e}{4\pi \epsilon_0\epsilon_w } \frac{2\pi}{q} \mathrm{tanh} (q h/2), \end{equation} and the real space potential, according to eq.~\eqref{invhankel}, is given by \begin{equation} \phi(\rho) = \frac{e}{4\pi \epsilon_0\epsilon_w } \int_0^{+\infty} \d q \, J_0(q \rho) \mathrm{tanh}(qh/2).
\end{equation} We may compute this integral as a series expansion. First, we introduce the notation \begin{equation} \phi(\rho) = \frac{e}{4\pi \epsilon_0\epsilon_w } \frac{2}{h} \, \mathcal{I}(\tilde \rho), \end{equation} with $\tilde \rho \equiv 2 \rho / h$, and \begin{equation} \mathcal{I}(\tilde \rho) = \int_0^{+\infty} \d q \, J_0(q\tilde \rho) \mathrm{tanh}(q). \label{ImetalLaplace} \end{equation} Then, we make use of the property \begin{equation} \mathcal{I}(\tilde \rho) = \int_0^{+\infty} \d s \, \mathcal{L} [J_0(q\tilde \rho)](s) \mathcal{L}^{-1}[\mathrm{tanh}(q)](s), \end{equation} where $\mathcal{L}$ is the Laplace transform. For the Bessel function, we have $\mathcal{L} [J_0(q\tilde \rho)](s) = 1/\sqrt{\tilde \rho^2 + s^2}$. The hyperbolic tangent has poles at $iq_n = i(2n+1)\pi/2, n \in \mathbb{Z}$ on the imaginary axis. Hence, its inverse Laplace transform is given by \begin{equation} \mathcal{L}^{-1}[\mathrm{tanh}(q)](s) = \frac{1}{2i\pi} \int_{\delta - i \infty}^{\delta + i \infty} \d q \, \tanh(q) e^{qs}, \end{equation} with $\delta >0$. This integral is computed by closing the integration path in the left complex plane, and making use of the Cauchy residue theorem: \begin{equation} \mathcal{L}^{-1}[\mathrm{tanh}(q)](s) = \sum_{iq_n} \mathrm{Res}_{q= iq_n}[\tanh(q) e^{qs}]. \end{equation} Since the residue of the hyperbolic tangent at the poles $iq_n$ is 1, we obtain \begin{align} \mathcal{L}^{-1}[\mathrm{tanh}(q)](s) &= \sum_{n=-\infty}^{+\infty} e^{i(2n+1)\pi s/2} \\ &= 2 \sum_{n=0}^{+\infty} \cos \left( \frac{2n+1}{2} \pi s \right). \end{align} Replacing this into eq.~\eqref{ImetalLaplace} yields \begin{equation} \mathcal{I}(\tilde \rho) = 2 \sum_{n=0}^{+\infty} \int_0^{+\infty} \d s \frac{\cos ((2n+1)\pi s/2)}{\sqrt{\tilde \rho^2+s^2}}.
\end{equation} Here, we may recognize the integral representation of $K_0$, the modified Bessel function of the second kind of order 0: \begin{equation} \mathcal{I}(\tilde \rho) = 2 \sum_{n=0}^{+\infty} K_0 \left( \frac{2n+1}{2} \pi \tilde \rho \right). \end{equation} Finally, we obtain for the potential in the perfect metal limit \begin{equation} \phi(\rho) = \frac{e}{\pi \epsilon_0\epsilon_w h } \sum_{n=0}^{+\infty} K_0 \left( (2n+1) \pi \frac{\rho}{h} \right). \end{equation} This result differs from the one given by Kondrat and Kornyshev~\cite{Kondrat2011}, which does not reduce to an unperturbed Coulomb potential in the limit $\rho \to 0$. However, in the (interesting) limit $\rho \gg h$, we recover the same asymptotic form as in ref.~\cite{Kondrat2011}: \begin{equation} \phi(\rho) \approx \frac{e}{\pi \epsilon_0\epsilon_w} \frac{e^{-\pi \rho/h}}{\sqrt{2 h \rho}}. \label{metal_scaling} \end{equation} This limiting expression reveals that the metallic walls produce exponential screening of the confined Coulomb potential. We leave a complete discussion of the confined potential until after we introduce the Thomas-Fermi model for the channel walls, which interpolates between a metallic and an insulating behavior. \subsection{Thomas-Fermi model} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{figure3.pdf} \caption{Coulomb potential created by an ion of charge $e$ inside a nano-slit, for different models of the confining walls' dielectric response. \textbf{a}. Slit of height $h = 2~\rm nm$ and water with isotropic permittivity ($\epsilon_w = 80$). \textbf{b}. Slit of height $0.7~\rm nm$ and water with anisotropic dielectric response ($\epsilon_{\parallel} = 80, \epsilon_{\perp} = 2$).
The background dielectric constant is set to $\epsilon_m = 2$ in all instances.} \label{ionic_interactions_TF} \end{figure*} In order to continuously explore the range of screening properties between metal and insulator, we make use of the Thomas-Fermi model, which introduces a non-local form of the bulk dielectric function for the channel wall material~\cite{Mahan}: \begin{equation} \epsilon(q,q_z) = \epsilon_m + \frac{q_{\rm TF}^2}{q^2+q_z^2}. \label{epsilon_TF} \end{equation} Here $\epsilon_m$ is the background dielectric constant that accounts for screening by high-energy optical transitions, and $q_{\rm TF}$ is the Thomas-Fermi wavevector. Qualitatively, the Thomas-Fermi model introduces a screening length for the potential: $\lambda_{\rm TF} = q_{\rm TF}^{-1}$. In an insulator, $q_{\rm TF} = 0$ and the screening length is infinite, while in a perfect metal $q_{\rm TF} \to \infty$ and the potential is screened over an infinitely short distance below the surface. The reported computations~\cite{Vorotyntsev1980,Kornyshev1980,Kornyshev1982,Kaiser2017} of Coulomb interactions above a single Thomas-Fermi surface -- which rely, again, on directly solving Poisson's equation -- are quite analytically involved, and, to our knowledge, no evaluation of the Coulomb potential in between two Thomas-Fermi walls has been proposed. The surface response function framework is particularly powerful for this purpose. Indeed, applying the SR approximation (eq.~\eqref{gSR_water}) to the dielectric function in eq.~\eqref{epsilon_TF} yields the surface response function in the Thomas-Fermi model: \begin{equation} g(q) = \frac{\epsilon_m f_{\rm TF}(q) - \epsilon_w}{\epsilon_m f_{\rm TF}(q) + \epsilon_w }, ~~~ f_{\rm TF}(q) = \sqrt{1+\frac{1}{\epsilon_m} \frac{q_{\rm TF}^2}{q^2}}. \end{equation} Substituting this into eq.~\eqref{main_result} directly yields the confined Coulomb potential in Fourier space.
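The interpolating character of this surface response function is easy to check numerically (our own minimal sketch, with illustrative parameter values): $g(q) \to 1$ (perfect metal) as $q \to 0$, and $g(q) \to (\epsilon_m - \epsilon_w)/(\epsilon_m + \epsilon_w)$ (local insulator) as $q \to \infty$.

```python
# Limiting behavior of the Thomas-Fermi surface response function in the
# SR approximation; parameter values (q_tf in 1/nm) are illustrative.
import numpy as np

def g_tf(q, q_tf=2.8, eps_m=2.0, eps_w=80.0):
    f = np.sqrt(1.0 + q_tf**2 / (eps_m * q**2))  # f_TF(q)
    return (eps_m * f - eps_w) / (eps_m * f + eps_w)

print(g_tf(1e-6))  # close to 1: metal-like response at long wavelengths
print(g_tf(1e6))   # close to (eps_m - eps_w)/(eps_m + eps_w): insulator-like
```

This mirrors the real-space picture given below: metallic screening at long distances, insulating behavior at short distances.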
No convenient analytical expression can be given in this case for the potential in real space, which is obtained in practice by numerical integration following eq.~\eqref{invhankel}. \LB{This potential can however be implemented in molecular simulations using an appropriate fit of the numerically estimated potential, see the next section and eq.~(\ref{eqn:yukamod}). } In Fig. 3, we plot the Coulomb potential created by an ion in a slit-like channel for different values of the Thomas-Fermi wavevector $q_{\rm TF}$ of the confining walls. We consider two different cases: a channel of height $h = 2~\rm nm$ filled with isotropic water (permittivity $\epsilon_w = 80$), and a channel of height $h = 0.7 ~\rm nm$ filled with anisotropic water ($\epsilon_{\parallel} = 80, \epsilon_{\perp} = 2$)~\cite{Schlaich2016}. In both cases, we assume a background dielectric constant $\epsilon_m = 2$. Note that in the anisotropic case, we generalize our results as explained in Sec. III D. As soon as the wall material has conduction electrons (that is, $q_{\rm TF}$ is non-zero), the potential becomes exponentially screened at long distances. If the channel height $h \gg \lambda_{\rm TF}$, the finite size of the screening cloud within the wall plays no significant role and the perfect metal limit applies: at distances $\rho \gtrsim h$, the potential is exponentially screened over the lengthscale $(h/\pi) \sqrt{\epsilon_{\parallel}/\epsilon_{\perp}}$, as given by eq.~\eqref{metal_scaling}. But if $h \lesssim \lambda_{\rm TF}$, the potential is screened over a longer lengthscale of order $\lambda_{\rm TF}$: as soon as the screening length is comparable to the channel size, the perfect metal model underestimates the confined Coulomb potential. At distances that are too short for the exponential screening to apply, the material behaves as an insulator with the background dielectric constant $\epsilon_m$.
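The numerical inversion itself can be sketched as follows (our own illustration, not the authors' code; isotropic water, lengths in nm, potential in units of $e/(4\pi\epsilon_0\epsilon_w)$). The bare $1/\rho$ term is split off so that the remaining integrand decays exponentially and the quadrature converges quickly:

```python
# Sketch of the inverse Hankel transform for the midplane potential between
# two Thomas-Fermi walls, using the SR-approximation surface response.
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def phi_confined(rho, h=2.0, q_tf=2.8, eps_m=2.0, eps_w=80.0):
    def g(q):  # Thomas-Fermi surface response in the SR approximation
        f = np.sqrt(1.0 + q_tf**2 / (eps_m * q**2))
        return (eps_m * f - eps_w) / (eps_m * f + eps_w)
    def induced(q):  # exponentially decaying (wall-induced) part of the integrand
        e = np.exp(-q * h)
        return j0(q * rho) * 2.0 * g(q) * e / (1.0 + g(q) * e)
    corr, _ = quad(induced, 0.0, np.inf, limit=200)
    return 1.0 / rho - corr

# Near-metallic walls screen the potential; insulating walls enhance it.
print(phi_confined(10.0, q_tf=1000.0))  # much smaller than the bare 1/rho = 0.1
print(phi_confined(10.0, q_tf=0.0))     # larger than 0.1
```

With $q_{\rm TF} = 0$ the surface response reduces to the local dielectric case, and the result coincides with the image-charge series of the quasi-2D potential.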
At large enough distances, the confined potential always becomes smaller than the bulk Coulomb potential. However, at small distances, Coulomb interactions may still be enhanced with respect to the bulk, depending on the screening length and dielectric anisotropy of water. This suggests a rich dependence of confined ion transport on the channel walls' electronic properties. \LB{As a last remark, } \PR{in the above discussion, we only considered the electrostatic potential within the midplane $z=0$ of the nanochannel. However, the potential can be evaluated numerically (and analytically in the dielectric case) for any value of $z$, and is found to vary by 10 \% at most across a channel of height $h = 7~\text{\normalfont \AA}$.} \section{Tuning Wien effect ion transport with interaction confinement} In this section, \LB{we illustrate the principle of interaction confinement by studying the transport of ions confined in 2D slits}. We demonstrate using molecular dynamics (MD) simulations that the electronic properties of a channel wall can impact the ion transport within the channel. Typically, the electronic properties of a solid can only be accounted for in computationally expensive ab initio MD; in the framework of classical MD, there are no electronic degrees of freedom (although methods for incorporating Thomas-Fermi screening within classical MD have been proposed~\cite{Scalfi2020,Schlaich2022}). Here, we instead make use of the much less expensive implicit-solvent Brownian dynamics simulations (Fig. 4a), where the effective ion-ion interactions computed in the previous section can be directly implemented. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figure4-bis.pdf} \caption{\textbf{a}. Illustration of the Brownian dynamics simulation setup. Point-like ions restricted to move in two dimensions interact through an effective potential and are subject to a random force from the implicit solvent. \textbf{b}.
Current-voltage characteristics as obtained from the Brownian dynamics simulations. The characteristics display a Wien effect non-linearity\PR{, except in the case of metallic walls, where the linear Ohm's law is recovered. The solid line corresponds to the analytical prediction for ionic conduction through a dielectric nanochannel, according to ref. \cite{Robin2021}.} } \end{figure} \subsection{Simulation methods} We use implicit solvent overdamped Langevin dynamics implemented in the open-source LAMMPS software~\cite{Plimpton1995}. Our simulation box typically contains 100 ions of each sign from a divalent salt with diffusion coefficient $D = 10^{-9} \, \si{m^2 \cdot s^{-1}}$, confined in a nanochannel with height $h = 0.7 \, \si{nm}$ and lateral dimensions $200 \, \si{nm} \times 200 \, \si{nm}$. Ions are restricted to move within the center plane $z=0$ of the channel, with periodic boundary conditions in both $x$ and $y$. We apply a constant electric field in the $x$ direction and measure the resulting ionic current, under a Langevin thermostat at $T = 298 \, \si{K}$. Our simulation procedure is summarized in Fig. 4a. The simulation is run for $1 \, \si{ns}$ of physical time. The ions interact through the effective confined interactions corresponding to channel walls described within the Thomas-Fermi model. Since there is no convenient analytical form for the confined potential in real space, we fit the result of the numerical integration in eq.~\eqref{invhankel} with the following analytical form \PR{\begin{equation} \mathcal U(\rho) = \frac{A}{\rho} \left( 1 - \alpha e^{-\kappa \rho}\right) e^{- q_e \rho}, \label{eqn:yukamod} \end{equation} where $A, \kappa, \alpha$ and $q_e$ are adjustable parameters.
In the case of insulating walls, there is no exponential screening of the potential at long distance, so that $q_e = 0$ and the above expression can be thought of as an interpolation between the two limiting behaviors of the potential: \begin{equation} \mathcal{U} \sim \begin{cases} \frac{e^2}{4 \pi \epsilon_0 \sqrt{\epsilon_\parallel \epsilon_\bot} \rho} \quad \text{for} \quad \rho \to 0\,\\ \frac{e^2}{4 \pi \epsilon_0 \epsilon_m \rho} \quad \text{for} \quad \rho \to + \infty. \end{cases} \end{equation} In practice, $A,\kappa$ and $\alpha$ are fitted so that the transition between the two regimes reproduces the exact numerical result for the potential. Ultimately, for a material with finite screening length, $q_e$ is fitted to reproduce the exponential decay of the potential at long distance. In the simulations, we also \LB{add} a \LB{short-distance} term, $A(1-\alpha) e^{-r/r_0}/r$, into the potential's \LB{expression} to avoid the divergence at short distance (with $r_0 = 1~\textrm{nm}$, of the order of the minimal approach distance between two ions). Numerically, we use $\epsilon_{\parallel} = 80$ and $\epsilon_{\perp} = 2$ for the components of the water dielectric permittivity and $\epsilon_m = 2$ for the wall's background dielectric constant. The corresponding values of the parameters are given in Table I. } \begin{table} \centering \begin{tabular}{ccccc} Material & $A~(\si{kcal \cdot \text{\normalfont \AA} \per \mole})$ & $\alpha$ & $\kappa~(\si{\nm^{-1}})$ & $q_e ~(\si{\nm^{-1}})$\\ \hline Dielectric & $134$ & $0.81$ & $0.16$& $0$\\ $q_\text{TF} = 1~\si{\nm^{-1}}$ & $88$ & $0.72$& $0.14$& $0.18$\\ $q_\text{TF} = 2.8~\si{\nm^{-1}}$ & $141$ & $0.80$& $0.05$& $0.28$ \\ Metal & $28.7$ & $0$ & --&$0.46$\\ \hline \end{tabular} \caption{Values of the parameters in eq.~\eqref{eqn:yukamod} used for our simulations.} \end{table} \subsection{Results} We present in Fig. 
4b the current-voltage characteristics of 2D channels ($h = 7~\rm \text{\normalfont \AA}$), as obtained from our Brownian dynamics simulations, \PR{for four different wall materials characterized by increasing values of the Thomas-Fermi screening length: $\lambda_{\rm TF} = 0$ (perfect metal), $\lambda_{\rm TF} = 0.36 ~\rm nm$, $\lambda_{\rm TF} = 1~\rm nm$ and $\lambda_{\rm TF} = \infty$ (insulator). We find that, in all but the perfect-metal case,} the measured current displays a non-linear dependence on the applied electric field and the channel conductance is \LB{hindered} with respect to the Ohm's law prediction: this is a signature of the second Wien effect. The Wien effect has been historically known to govern the conduction in weak electrolytes; it has recently been predicted to occur as well in strong electrolytes \LB{when confined} in nanoscale channels, because oppositely charged ions \LB{form Bjerrum pairs} due to the reinforced Coulomb interactions in confinement~\cite{Kavokine2019,Robin2021}. In refs.~\cite{Kavokine2019,Robin2021}, the Wien effect was studied in the case of insulating channel walls. Here, we find that \LB{the Wien effect can persist even in the presence of electronic screening and} the magnitude of the effect can be tuned by the channel wall's electronic properties. \LB{As shown in Fig.~4b, the current-voltage characteristics are found to depart from the linear Ohm's law at small applied electric fields, and the effect is largest when the TF screening length is larger than the channel height.} \LB{These results are further compared with the theoretical predictions from ref. \cite{Robin2021}, which are based on the modelling of the Bjerrum pair dynamics in 2D under the application of an electric field. The analytical result -- Eq.(5) in ref.
\cite{Robin2021} -- is shown as a solid line in Fig.~4b. We stress that this prediction includes no fitting parameters, as it only incorporates the values of the dielectric constant, channel height and ion concentration.} The conductance is found to be in very good agreement with the theoretical prediction when the TF screening length is larger than the channel height. But the non-linearity is weakened as soon as the screening length becomes comparable to the channel height: the screening then modifies the potential at short enough distances to affect the binding energy of the pairs. In the perfect metal case, the screening essentially destroys the Bjerrum pairs and the current collapses onto the Ohm's law prediction. \section{Discussion and conclusions} In this paper, we have introduced the broad notion of \emph{interaction confinement}, defined as the regime where the interactions between particles inside a channel are affected by the nature of the channel wall. \PR{Focusing} on a two-dimensional channel geometry, we developed a new theoretical framework that allowed us to compute these modified interactions, typically for the case of ions within a confined electrolyte. Our framework is based on \emph{surface response functions}, which describe a solid surface's response to an external potential. It is hence very general: the confined interactions can be computed given any channel wall material, provided that its surface response function is known. We evaluate in particular the confined interactions for channel walls described within the Thomas-Fermi model, for which no expression was available in the literature. \PR{While our approach is limited to the 2D geometry, it allows us to shed some light on the properties of confined ions in general. We expect most qualitative results to extend to other geometries, such as 1D tube-like channels.
In other words, 2D nanochannels represent a model platform to explore the consequences of interaction confinement in ion and water transport.} Our framework can be used to estimate Coulomb interactions in experimentally accessible nanofluidic channels. For instance, graphite can be described as a Thomas-Fermi conductor with $\lambda_{\rm TF} = 1.2~\rm nm$ (ref.~\cite{Miyazaki2008}) and $\epsilon_m \sim 4$ (ref.~\cite{Hwang2007}), and boron nitride as an insulator with $\epsilon_m = 6$ (ref.~\cite{Geick1966}). These estimates are important for predicting ion transport properties within such channels: we found indeed (Sec. V) that even simple observables such as current-voltage characteristics are affected by the channel wall's dielectric screening properties, which need to be treated beyond a simple local approximation. The limitation of our present results lies mainly in the local dielectric treatment for water. Although we do take into account the anisotropy induced by confinement, a more rigorous treatment involving the non-local dielectric response will be the subject of future work, as it may introduce corrections for the narrowest channels. Furthermore, our theory uses as an input the channel wall's surface response function \emph{in the presence of water}. If this response function is to be computed from first principles, renormalization of the wall's electronic properties by the presence of water should be taken into account, for example in the spirit of refs.~\cite{Misra2021,Robert2022}. Ultimately, we would like to emphasize that the notion of interaction confinement is not restricted to confined ion transport. For instance, the dynamics of confined water are determined by the Coulomb interactions between the water molecules.
These interactions are also subject to screening by the nearby solid wall and depend even more subtly on its electronic properties: indeed, molecular scale water fluctuations are faster than ionic motion, and these may couple not only to static, but also to dynamical screening properties. In this way, the fluctuation-induced quantum friction phenomenon~\cite{Kavokine2022} can be seen as an effect of interaction confinement on water: it results essentially from the Coulomb interactions in water being dynamically screened by the solid's electronic excitations. Altogether, interaction confinement appears as a fundamental feature of fluid transport at the nanoscale. \begin{acknowledgments} The Flatiron Institute is a division of the Simons Foundation. L.B. acknowledges funding from the EU H2020 Framework Programme/ERC Advanced Grant agreement number 785911-Shadoks. This work was granted access to the HPC resources of CINES under the allocation A0090710395 made by GENCI. \end{acknowledgments} \section*{Data Availability Statement} The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section*{References}\frenchspacing\small \begin{list}{[\arabic{enumi}]} {\usecounter{enumi}\parsep=2pt\topsep 0pt \settowidth{\labelwidth}{[#1]} \leftmargin=\labelwidth\advance\leftmargin\labelsep \rightmargin=0pt\itemsep=1pt\sloppy}}{\end{list}} \numberwithin{equation}{section} \title{\textbf{Doubling, T-Duality and Generalized Geometry: \\ a Simple Model} \\ \vspace{0.5cm}} \date{} \author[1,3]{Vincenzo E. Marotta} \author[2]{Franco Pezzella} \author[1,2]{Patrizia Vitale} \affil[ ]{} \affil[1]{\textit{\footnotesize Dipartimento di Fisica ``E. Pancini'', Universit\`a di Napoli Federico II, Complesso Universitario di Monte S. Angelo Edificio 6, via Cintia, 80126 Napoli, Italy.}} \affil[2]{\textit{\footnotesize INFN-Sezione di Napoli, Complesso Universitario di Monte S. Angelo Edificio 6, via Cintia, 80126 Napoli, Italy.}} \affil[3]{\textit{\footnotesize Department of Mathematics, Heriot-Watt University Colin Maclaurin Building, Riccarton, Edinburgh EH14 4AS, U.K.}} \affil[ ]{} \affil[ ]{\footnotesize e-mail: \texttt{vm34@hw.ac.uk, franco.pezzella@na.infn.it, patrizia.vitale@na.infn.it}} \begin{document} \maketitle \begin{abstract} \small A simple mechanical system, the three-dimensional isotropic rigid rotator, is here investigated as a 0+1 field theory, aiming at further investigating the relation between Generalized/Double Geometry on the one hand and Doubled World-Sheet Formalism/Double Field Theory, on the other hand. The model is defined over the group manifold of $SU(2)$ and a dual model is introduced having the Poisson-Lie dual of $SU(2)$ as configuration space. A generalized action with configuration space $SL(2,\mathbb{C})$, i.e. the Drinfel'd double of the group $SU(2)$, is then defined: it reduces to the original action of the rotator or to its dual, once constraints are implemented. The new action contains twice as many variables as the original. 
Moreover, its geometric structures can be understood in terms of Generalized Geometry.\\ {\it keywords: Generalized Geometry, Double Field Theory, T-Duality, Poisson-Lie symmetry} \end{abstract} \newpage \tableofcontents \section{Introduction} Generalized Geometry (GG) was first introduced by N. J. Hitchin in ref. \cite{hitchin1}. As the author himself states in his pedagogical lectures \cite{hitchin2}, it is based on two premises: the first consists in replacing the tangent bundle $T$ of a manifold $M$ with $T\oplus T^*$, a bundle with the same base space $M$ but fibers given by the direct sum of tangent and cotangent spaces. The second consists in replacing the Lie bracket on the sections of $T$, which are vector fields, with the Courant bracket, which involves vector fields and one-forms. The construction is then extended to general vector bundles $E$ over $M$ so as to have $E\oplus E^*$ and a suitable bracket for the sections of the new bundle. The formal setting of GG has recently attracted the interest of theoretical physicists in relation to Double Field Theory (DFT) \cite{HZ}. We shall propose in this paper a model whose analysis can help to establish more rigorously a possible bridge between the two through the doubled world-sheet formalism that generates DFT. DFT has emerged as a proposal to incorporate T-duality \cite{porrati, alvarez}, a peculiar symmetry of a compactified string on a $d$-torus $T^{d}$ in a $(G,B)$-background, as a manifest symmetry of the string effective field theory. In order to achieve this goal, the action of this field theory has to be generalized in such a way that the emerging carrier space of the dynamics be {\em doubled} with respect to the original. What makes T-duality a distinctive symmetry of strings is that these latter, as extended objects and differently from particles, can wrap non-contractible cycles.
Such a wrapping implies the presence of winding modes that have to be added to the ordinary momentum modes which take integer values along compact dimensions. T-duality is an $O(d,d;Z)$ symmetry of the dynamics of a closed string under, roughly speaking, the exchange of winding and momentum modes and establishes, in this way, a connection between the physics of strings defined on different target spaces. DFT is supposed to be an $O(d,d;Z)$ manifest space-time {\em effective} field theory description coming from a manifestly T-dual invariant formulation of a string world-sheet, i.e. from a {\em doubled world-sheet} \footnote{Let us observe here that we retain the name {\em doubled world-sheet} since this has become of common use, but actually it is the string target-space which is {\em doubled} and not the world-sheet.}. In fact, a formulation of the world-sheet action of the bosonic string, in which T-duality is manifest, was already initially proposed in refs. \cite{Tseytlin, Duff} and, later, in \cite{Hull, Berman, park, Copland} (see also more recent works in \cite{pezzella, Bandos, nibbelink, Ma}). This string action must contain information about windings and therefore it is based on two sets of coordinates: the usual ones $x^{a}(\sigma, \tau)$ and the ``dual'' coordinates $\tilde{x}_{a} (\sigma, \tau)$, $(a=1,...,d)$ conjugate to the winding modes. In this way the $O(d,d;Z)$ duality becomes a manifest symmetry of the world-sheet action. A corresponding doubling of {\em all} the $D$ space-time degrees of freedom (vielbeins in this case, not only relative to the compact dimensions) in the low-energy effective action first occurred in ref. \cite{siegel} where a manifestly $O(D,D;R)$ form of the target-space effective action was obtained, and such symmetry was realized linearly, even at the price of losing manifest Lorentz invariance (in target-space).
In a sense, this can be considered as a pioneering work on what would later be defined as Double Field Theory, where the coordinates of the carrier space-time, that are nothing but the {\em fields} on the string world-sheet, are doubled in order to have a T-duality symmetric field theory. Despite the preamble, which gives credit to the string-related literature for focusing on the geometrical content of the doubled world-sheet and DFT, the interest in the subject is relevant in the broad area of field theory when one deals with duality symmetries of the dynamics which are not manifest at the level of the action. A few remarks that clarify the philosophy of the paper are in order here. First of all, it is worth stressing again that, in the framework of string theory, the doubling takes place in the $D$-dimensional target space $M$ of the non-linear sigma model underlying the string action, by introducing new fields $\tilde x_i (\sigma,\tau)$, which are dual to $x^i(\sigma,\tau)$, with $i = 1, \dots, D$. From this point of view, a first analogy with Generalized Geometry is straightforward, by identifying $x^i, \tilde x_i $ with sections of a generalized bundle $E\oplus E^*$ over the world sheet of the string. Secondly, it is only when the target space is considered as the configuration space of the {\it effective field theory} we are going to deal with that the doubling is reinterpreted as a doubling of the configuration space. Actually, the original non-linear sigma model has no doubled coordinates, but what is doubled are the field coordinates. When the effective field theory derived from the Polyakov string action is considered, then the dual fields $x^i, \tilde x_i$ are seen as coordinates of the carrier space of the effective dynamics, which corresponds to the string target space.
DFT is thus formulated in terms of the background fields $G_{ij}$ (the target-space metric tensor) and $B_{ij}$ (the Kalb-Ramond field), with $i,j = 1, \dots, D$, in addition to a dilaton scalar field $\phi$. These fields depend, in that framework, on doubled coordinates $x^{i}$ and $\tilde{x}_{i}$ even if there is no doubling of their tensor indices. The gauge symmetry parameters for DFT are the vector fields $\xi^{i} (x, \tilde{x})$, which parametrize diffeomorphisms and are sections of the tangent bundle of the doubled manifold, together with the one-forms $\tilde{\xi}_{i} (x, \tilde{x})$, which describe gauge transformations of the Kalb-Ramond field $B_{ij}$ and are sections of the cotangent bundle of the doubled manifold. When considering vector fields and one-forms as components of a generalized (indeed doubled) vector field on the carrier space of the effective dynamics (which is itself doubled), one has, on the one hand, another instance of field doubling, and, on the other hand, a section of a generalized tangent bundle as in Generalized Geometry. The precise mathematical meaning of considering $\xi^{i} $ and $\tilde{\xi}_{i}$ on the same footing amounts to defining generalized Lie brackets, which encode a mutual non-trivial action of one onto the other \cite{HZ2}. These are the so-called $C$-brackets, first introduced, together with other relevant aspects of DFT, in ref. \cite{siegel}. $C$-brackets provide an $O(D,D)$ covariant, DFT generalization of Courant brackets. More precisely, it can be shown that they reduce to Courant brackets if one drops the dependence of the doubled fields on the coordinates $\tilde{x}_{i}$. The geometry of the effective dynamics is thus more appropriately renamed {\em Doubled Geometry} (DG).
To summarize, doubling can emerge at different stages: \begin{itemize} \item at the level of fields on a given configuration space, for example the sigma-model fields $x^i,\tilde x_i$ both depending on the world sheet coordinates $(\sigma,\tau)$; \item at the level of configuration space coordinates, with fields $\phi$ depending on twice the initial configuration space variables, $\phi= \phi(x^i, \tilde x_i)$; \item at the level of both fields and coordinates: an example is provided by the gauge fields $\xi^i(x, \tilde x), \tilde\xi_i(x, \tilde x)$. \end{itemize} There is therefore an interplay between GG and DG on the one hand and doubled world-sheet and DFT on the other hand which, within the framework we have sketched, emerges from the identification of the appropriate carrier space of the dynamics. Such interplay does not involve only the above mentioned T-duality, to which one usually refers as Abelian T-duality, but it could be enlarged also to the other two dualities connecting non-linear sigma models, the non-Abelian T-duality and the Poisson-Lie T-duality. The term ``Abelian T-duality" refers to the presence of global Abelian isometries in the target spaces of both the paired sigma-models \cite{buscher, rocek} while ``non-Abelian" refers to the existence of a global non-Abelian isometry on the target space of one of the two sigma-models and of a global Abelian isometry on the other \cite{quevedo}. The ``Poisson-Lie T-duality" generalizes the previous definitions to all the other cases, including the one of a dual pair of sigma models both having non-Abelian isometries in their target spaces \cite{Klim, Klim2}. More simply, the classification of T-dualities is given by the types of underlying Drinfel'd doubles: Abelian doubles for the Abelian T-duality, semi-Abelian doubles for the non-Abelian T-duality and non-Abelian doubles for the Poisson-Lie T-duality.
It is then clear that models whose carrier space is a Lie group $G$ can be very helpful in better understanding the above mentioned relation in all these cases, because the notion of dual of a Lie group is well established, together with that of double Lie group and the so-called Poisson-Lie symmetries \cite{drinfel'd, semenov}. The idea of investigating such geometric structures in relation to duality in field theory has already been applied to sigma models by Klim\v{c}\'ik and \v{S}evera in \cite{Klim} (see also \cite{sfetsos}, \cite{falceto}), where the authors first introduced the notion of Poisson-Lie T-duality. Since then, there has been an increasing number of papers in the literature focusing on Poisson-Lie dual sigma models (see for example ref. \cite{lledo}). On the other hand, in ref. \cite{AF}, the phase space $T^*G$ was already proposed as a toy model for discussing conformal symmetries of chiral models, in a mathematical framework which is very similar to the one adopted here. Double Field Theory on group manifolds, including its relation with Poisson-Lie symmetries, has been analyzed in \cite{hassler}. In the present paper, we propose a fresh look at the subject in relation to the recent developments in GG and DFT by studying a model, the three-dimensional isotropic rigid rotator (IRR), which provides a one-dimensional simplification of a sigma model that can be doubled in order to have a manifestly Poisson-Lie duality invariant doubled world-sheet. This is the first of a series of two papers. We study the IRR having as configuration space the group manifold of $SU(2)$ and introduce a model on the dual group $SB(2,\mathbb{C})$. Their properties under Poisson-Lie transformations are considered by means of an extended model on the double group, the so-called classical Drinfel'd double $SL(2,\mathbb{C})$, that is formulated in terms of a generalized action, which we shall refer to as the parent action.
In particular, we emphasize how a natural para-hermitian structure emerges on the Drinfel'd double and can be used to provide a ``doubled formalism'' for the pair of theories. An alternative description of the IRR model on the Drinfel'd double was already proposed in \cite{marmo:articolo1}, although no dual model was introduced there, the emphasis being on the possibility of describing the same dynamics on a different phase space, the group manifold $SL(2,\mathbb{C})$; this relies on the fact that the latter is symplectomorphic to the cotangent bundle of $SU(2)$ \cite{MI98}. Since our model describes an example of particle dynamics, the most appropriate doubling among those enumerated above is the doubling of the configuration space. For the same reason, we shall see that the model considered here is too simple to exhibit symmetry under duality transformations, although a generalization to field theory is possible. Indeed, we may look at the model as a $0+1$ field theory, thus paving the way for a genuine $1+1$ field theory, the $SU(2)$ principal chiral model, which, while being modeled on the IRR system, will exhibit interesting properties under duality transformations. This will be briefly discussed in the concluding section, while the model will be analyzed in detail in a forthcoming paper \cite{MPV2}. The paper is organized as follows. In Sect. \ref{rigidrot} the dynamics of the IRR on the group manifold of the group $SU(2)$ is reviewed. In Sect. \ref{drinfel'd} an account of the mathematical framework that is going to be used is given, with Poisson-Lie groups and their Drinfel'd doubles discussed in some detail. In Sect. \ref{dualrot} a model on the dual group of $SU(2)$, the group $SB(2,\mathbb{C})$, is introduced and its dynamics analyzed. The two models are seen to be dual to each other in a precise mathematical sense: their configuration spaces are dual partners in the description of the group $SL(2,\mathbb{C})$ as a Drinfel'd double.
Moreover, the role of the two groups can be exchanged in the construction of the double and each model exhibits a global symmetry with respect to the action of the dual group. In Sect. \ref{gensec}, a dynamical model on the Drinfel'd double is proposed: it has doubled configuration variables with respect to the original IRR coordinates, and doubled generalized momenta $(I_i,{\tilde{I}}^i)$ whose Poisson brackets can be related to Poisson-Lie brackets on the two dual groups. The full Poisson algebra of momenta is isomorphic to the algebra of $SL(2,\mathbb{C})$, namely a semisimple group, with each set of momenta underlying a non-Abelian algebra. That is why we refer to the two models as non-Abelian duals giving rise, according to the above mentioned definitions, borrowed from the existing literature, to a Poisson-Lie T-duality. In Sections \ref{standardlag}, \ref{recdu}, we address the problem of recovering the IRR model and its dual. The generalized, or parent, action exhibits global symmetries, which can be gauged, as is customary in DFT (see for example \cite{Hull, park}). It is proven that, once a parametrization of the group $SL(2,\mathbb{C})$ is chosen, gauging the left $SB(2,\mathbb{C})$ symmetry retrieves the IRR model, whereas gauging the right $SU(2)$ symmetry yields the dual model. In Sect. \ref{hamform}, we introduce the Hamiltonian formalism for the double model and in Sect. \ref{canform} we study in detail the full Poisson algebra, together with the Hamiltonian vector fields associated with the momenta $(I_i,{\tilde{I}}^i)$. The latter yield an algebra which is closed under Lie brackets and can be seen as {\it derived} C-brackets \cite{deser1, deser2}. In Sect. \ref{PLsym} we discuss in some detail to what extent the two models introduced exhibit Poisson-Lie symmetries. Finally, in Section \ref{concl} we outline the generalization to 1+1 dimensions for the principal chiral model and give our conclusions.
While completing the article we have become aware of the work in refs. \cite{lust, KS18}. In the first one, non-Abelian T-duality is analyzed within the same mathematical framework, whereas the second studies an interesting mechanical model, the electron-monopole system, within the DFT context. Their relation with the present work should be further investigated; we plan to come back to this issue in the future. \section{The Isotropic Rigid Rotator}\label{rigidrot} The Isotropic Rigid Rotator (IRR) provides a classical example of dynamics on a Lie group, the group being in this case $SU(2)$, with its cotangent bundle $T^*SU(2)$, the carrier space of the Hamiltonian formalism, carrying the group structure of a semi-direct product. In this section, the Lagrangian and Hamiltonian formulations of the model on the group manifold are reviewed. Although simple, the model captures relevant characteristics of the dynamics of many interesting physical systems, both in particle dynamics and in field theory, such as Keplerian systems, gravity in 2+1 dimensions in its first order formulation, with and without cosmological constant \cite{witten}, the Palatini action with Holst term \cite{Holst}, and principal chiral models \cite{gursey}. \subsection{The Lagrangian and Hamiltonian Formalisms} As carrier space for the dynamics of the three-dimensional rigid rotator in the Lagrangian [Hamiltonian] formulation we can choose the tangent [cotangent] bundle of the group $SU(2)$. We follow ref. \cite{marmosaletan} for the formulation of the dynamics over Lie groups.
A suitable action for the system is the following: \begin{equation} S_0= \int_\mathbb{R} L_0~dt=-\frac{1}{4} \int_\mathbb{R} \Tr ( g^{-1} d g\wedge * g^{-1} dg) =-\frac{1}{4}\int_\mathbb{R} \Tr (g^{-1}{\dot g})^2 dt \label{lag} \end{equation} with $g:t\in \mathbb{R}\rightarrow SU(2)$ the group-valued target space coordinates, so that \begin{equation} g^{-1} d g = i \alpha^k \sigma_k \nonumber \end{equation} is the Maurer-Cartan left-invariant one-form, which is Lie algebra-valued, $\sigma_k$ are the Pauli matrices, $\alpha^k$ are the basic left-invariant one-forms, $*$ denotes the Hodge star operator on the source space $\mathbb{R}$, such that $* dt = 1$, and $\Tr$ the trace over the Lie algebra. Moreover, $g^{-1}{\dot g}$ is the contraction of the Maurer-Cartan one-form with the dynamical vector field $\Gamma=d/dt$, $g^{-1}{\dot g}\equiv (g^{-1}{ d g}) (\Gamma)$. Let us recall that the Lagrangian is written in terms of the non-degenerate invariant scalar product defined on the $SU(2)$ manifold and given by $\langle a | b\rangle = \mbox{Tr}(ab)$ for any two group elements. The model can be regarded as a group-valued $(0+1)$-dimensional field theory. The group manifold can be parametrized with $\mathbb{R}^4$ coordinates, so that $g\in SU(2)$ can be read as $g= 2(y^0 e_0 +i y^i {e_i})$, with $(y^0)^2+ \sum_i (y^i)^2=1$, $e_0={\mathbb I}/2$, $e_i= \sigma_i/2$ the $SU(2)$ generators. One has then: \begin{equation} y^0= \Tr (g e_0), \;\;\; y^i=-{i} \Tr (g e_i) \; \; \; i=1, \dots ,3 \nonumber \end{equation} By observing that \begin{equation} g^{-1} \dot g = i (y^0 \dot y^i-y^i \dot y^0+ {\epsilon^{i}}_{jk} {y^j \dot y^k })\sigma_i = i \dot{Q}^{i} \sigma_{i} \label{qdot} \end{equation} we define the left generalized velocities $\dot Q^i$ as \begin{equation} \dot Q^i \equiv (y^0\dot y^i-y^i \dot y^0+ {\epsilon^{i}}_{jk} y^j \dot y^k) .
\label{genvel} \end{equation} $(Q^i, \dot Q^i)$, $i=1, \dots ,3$, are therefore tangent bundle coordinates, with $Q^i$ implicitly defined. Starting with a Lagrangian written in terms of right-invariant one-forms, one could define right generalized velocities in an analogous way. They give an alternative set of coordinates over the tangent bundle. The Lagrangian $L_{0}$ in eq. \eqn{lag} can be rewritten as: \begin{equation} L_0=\frac{1}{2} (y^0\dot y^i-y^i \dot y^0+ {\epsilon^{i}}_{kl} y^k \dot y^l)(y^0\dot y^j-y^j \dot y^0+ {\epsilon^{j}}_{mn} y^m \dot y^n)\delta_{ij} =\frac{1}{2} \dot Q^i \dot Q^j \delta_{i j}. \nonumber \end{equation} In the intrinsic formulation, which is especially relevant in the presence of non-invariant Lagrangians, the Euler-Lagrange equations of motion are represented by: \begin{equation} {\sf L}_\Gamma \theta_{L} -d L_0=0 \nonumber \end{equation} where \begin{equation} \theta_L= \frac{1}{2}\Tr[ g^{-1} \dot g \;g^{-1} d g]= \dot Q^i \alpha^j \delta_{i j } \nonumber \end{equation} is the Lagrangian one-form and ${\sf L}_{\Gamma}$ the Lie derivative with respect to $\Gamma$. By projecting along the basic left-invariant vector fields $X_i$ dual to $\alpha^i$, one obtains: \begin{equation} i_{X_i} [ {\sf L}_\Gamma \theta_{L} -d L_0]=0 \nonumber \end{equation} Since $ {\sf L}_{\Gamma}$ and $i_{X_i}$ commute over the Lagrangian one-form, one gets: \begin{equation} {\sf L}_\Gamma(\dot Q^j\, i_{X_i}\alpha^l )\delta_{jl} - {\sf L}_{X_i} L_0 = 0 \nonumber \end{equation} which implies \begin{equation} {\sf L}_\Gamma\dot Q^j \delta_{ji} - \dot Q^p \dot Q^q {\epsilon_{ip}}^k\delta_{qk} = {\sf L}_\Gamma\dot Q^j \delta_{ji}= 0 \label{eqmo} \end{equation} because of the rotational invariance of the product and the antisymmetry of the structure constants of $SU(2)$, a manifestation of the invariance of the Lagrangian under rotations.
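As an illustrative aside (not part of the original derivation), the decomposition in eq. \eqn{qdot} and the resulting quadratic form of $L_0$ can be checked numerically; the short script below does so with NumPy, for an arbitrarily chosen point on $S^3$ and an arbitrarily chosen tangent vector.

```python
import numpy as np

# Pauli matrices
s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex),
     np.array([[1, 0], [0, -1]], complex)]

# Arbitrary point y on S^3 (|y| = 1) and a tangent vector ydot (y . ydot = 0)
y = np.array([0.5, 0.5, 0.5, 0.5])
ydot = np.array([1.0, -1.0, 2.0, 0.0])
ydot = ydot - y * np.dot(y, ydot)            # project onto the tangent space

g = y[0] * np.eye(2) + 1j * sum(y[i + 1] * s[i] for i in range(3))
gdot = ydot[0] * np.eye(2) + 1j * sum(ydot[i + 1] * s[i] for i in range(3))
mc = g.conj().T @ gdot                       # g^{-1} g-dot, since g^{-1} = g-dagger in SU(2)

# Left generalized velocities, eq. (genvel):
# Qdot^i = y^0 ydot^i - y^i ydot^0 + eps^i_{jk} y^j ydot^k
Qdot = y[0] * ydot[1:] - ydot[0] * y[1:] + np.cross(y[1:], ydot[1:])

# eq. (qdot): g^{-1} g-dot = i Qdot^i sigma_i
assert np.allclose(mc, 1j * sum(Qdot[i] * s[i] for i in range(3)))

# Lagrangian: -1/4 Tr (g^{-1} g-dot)^2 = 1/2 |Qdot|^2
L0 = -0.25 * np.trace(mc @ mc).real
assert np.isclose(L0, 0.5 * np.dot(Qdot, Qdot))
print("decomposition and Lagrangian check passed")
```

Any normalized $y$ and tangent $\dot y$ would do; the check only uses the constraint $(y^0)^2+\sum_i(y^i)^2=1$ and its differential consequence $y\cdot\dot y=0$.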
Equivalently, the equations of motion can be rewritten as: \begin{equation} \frac{d}{dt} \left(g^{-1}\frac{dg}{dt}\right)=0\label{eom} \end{equation} where, from eq. \eqn{qdot}, \begin{equation} \delta_{ij} \dot{Q}^j= -{i}\Tr( g^{-1} \dot g \; e_i) . \end{equation} Cotangent bundle coordinates can be chosen to be $(Q^i, I_i)$, with the $I_i$'s denoting the left momenta: \begin{equation} I_i= \frac{\partial L_0}{\partial \dot Q^i}= \delta_{i j} \dot Q^j \nonumber \end{equation} An alternative set of fiber coordinates is represented by the right momenta, which are defined in terms of the right generalized velocities. The Legendre transform from $TSU(2)$ to $T^*SU(2)$ yields the Hamiltonian function: \begin{equation} H_0=[I_i \dot Q^i -L_{0}]_{\dot Q^i=\delta^{ij} I_j }= \frac{1}{2} \delta^{ij}I_i I_j \label{h0} \,\,. \end{equation} By introducing a dual basis $\{{e^i}^*\}$ in the cotangent space, such that $\langle{e^i}^*|e_j\rangle =\delta^i_j$, one can consider the linear combination: \begin{equation} I=i\; I_i {e^i}^*. \label{Iform} \end{equation} The dynamics of the IRR is thus obtained from the Hamiltonian \eqn{h0} and the following Poisson brackets \begin{eqnarray} \{y^i,y^j\}&=&0\label{pp}\\ \{I_i,I_j\}&=& {\epsilon_{ij\;}}^k I_k \label{xx}\\ \{y^i,I_j\}&=&\delta^{i}_j y^0 +{\epsilon^i}_{\; jk}y^k \;\;\; {\rm or ~ equivalently}\;\;\; \{g, I_j\}= 2 i g e_j \label{ij} \end{eqnarray} which are derived from the first-order formulation of the action functional \begin{equation} S_1= \int \langle I | g^{-1}\dot g\rangle dt - \int H_{0} \, dt \equiv \int \vartheta -\int H_{0} dt \nonumber \end{equation} with $\vartheta$ the canonical one-form. Indeed the symplectic form $\omega$ is readily obtained as \begin{equation} \omega= d \vartheta= d I_i \wedge \delta^{i}_j \alpha ^j - \frac{1}{2}I_i \delta_j^i {\epsilon^j}_{kl} \alpha^k\wedge \alpha^l \nonumber \end{equation} with $ d\alpha^k= \frac{i}{2}\,{\epsilon^{k}}_{ij}\alpha^i\wedge\alpha^j$.
By inverting $\omega$ one finds the Poisson algebra \eqn{pp}-\eqn{ij}. The fiber coordinates $I_i$ are associated with the angular momentum components, while the base space coordinates $(y^0, y^i)$ are associated with the orientation of the rotator. The resulting system is rotationally invariant since $ \{I_i, H_0\} = 0. $ The Hamilton equations of motion for the system are: \begin{equation} \dot I_i= 0,\;\;\; g^{-1}\dot g= 2 i I_i \delta^{ij} e_j. \nonumber \end{equation} Thus the angular momentum $I_i$ is a constant of motion, while $g$ undergoes a uniform precession. Since the Lagrangian and the Hamiltonian are invariant under both right and left $SU(2)$ action, the right momenta are, as is well known, conserved as well, the model being super-integrable. Let us remark here that, while the fibers of the tangent bundle $TSU(2)$ can be identified, as a vector space, with the Lie algebra of $SU(2)$, $\mathfrak{su}(2)\simeq \mathbb{R}^3$, with $\dot Q^i$ denoting vector field components, the fibers of the cotangent bundle $T^*SU(2)$ are isomorphic to the dual Lie algebra $\mathfrak{su}(2)^*$. As a vector space this is again $\mathbb{R}^3$, but the $I_i$'s are now components of one-forms. This remark is relevant in the next section, when the Abelian structure of $\mathfrak{su}(2)^*$ is deformed. As a group, $T^*SU(2)$ is the semi-direct product of $SU(2)$ and the Abelian group $\mathbb{R}^3$, with the corresponding Lie algebra given by: \begin{eqnarray} \left[L_i,L_j\right] &=&i {\epsilon_{ij}}^k L_k \label{JJ}\\ \left[T_i,T_j\right] &=& 0 \label{PP}\\ \left[L_i,T_j\right] &=&i {\epsilon_{ij}}^k T_k. \label{JP} \end{eqnarray} Then, the non-trivial Poisson bracket on the fibers of the bundle, \eqn{xx}, can be understood in terms of the coadjoint action of the group $SU(2)$ on its dual algebra $\mathfrak{su}(2)^*\simeq \mathbb{R}^3$ and it reflects the non-triviality of the Lie bracket \eqn{JJ}.
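The semi-direct product algebra \eqn{JJ}-\eqn{JP} can be checked explicitly in a matrix realization. The sketch below is illustrative only; the $4\times4$ realization (an assumption for the purpose of the check, not used in the paper) takes $L_k$ to act as $\mathfrak{so}(3)$ rotations on the $\mathbb{R}^3$ block and $T_k$ as nilpotent translation generators.

```python
import numpy as np

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

# Assumed 4x4 realization of the semi-direct sum su(2) (+) R^3:
# (L_k)_{ab} = -i eps_{kab} on the 3x3 block, T_k a nilpotent translation
L, T = [], []
for k in range(3):
    Lk = np.zeros((4, 4), complex)
    Lk[:3, :3] = -1j * eps[k]
    L.append(Lk)
    Tk = np.zeros((4, 4), complex)
    Tk[k, 3] = 1.0
    T.append(Tk)

def comm(a, b):
    return a @ b - b @ a

for i in range(3):
    for j in range(3):
        rot = 1j * sum(eps[i, j, k] * L[k] for k in range(3))
        tra = 1j * sum(eps[i, j, k] * T[k] for k in range(3))
        assert np.allclose(comm(L[i], L[j]), rot)   # eq. (JJ)
        assert np.allclose(comm(T[i], T[j]), 0.0)   # eq. (PP)
        assert np.allclose(comm(L[i], T[j]), tra)   # eq. (JP)
print("semi-direct product brackets verified")
```

This is the same block structure one uses for the Euclidean group in three dimensions, which makes the Abelian nature of the $T_k$'s and the covariance of eq. \eqn{JP} manifest.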
In this picture the Lie algebra generators $L_i$'s are identified with the linear functions on the dual algebra.\footnote{The group semi-direct product structure of the phase space $T^*SU(2)$ has been widely investigated in the literature in many different contexts. Besides classical, well known applications, some of which we have already mentioned in the introduction, let us mention here applications in noncommutative geometry in relation to the quantization of the hydrogen atom \cite{duflo}, to the electron-monopole system \cite{noi}, and, recently, to models on three-dimensional space-time with $\mathfrak{su}(2)$ type non-commutativity \cite{vitalewallet, kupr, wallet, zoupanos}}. Before concluding this short review of the canonical formulation of the dynamics of the rigid rotator, let us stress the main points which we are going to elaborate further: \begin{itemize} \item The carrier space of the Hamiltonian dynamics is represented by the semi-direct product of a non-Abelian Lie group, $SU(2)$, and the Abelian group $\mathbb{R}^3$, which is nothing but the dual of its Lie algebra. \item The Poisson brackets governing the dynamics are the Kirillov-Souriau-Kostant brackets induced by the coadjoint action. \end{itemize} It has been shown in ref. \cite{marmo:articolo1} that the carrier space of the dynamics of the rigid rotator can be generalized to the semisimple group $SL(2,\mathbb{C})$, which is obtained by replacing the Abelian subgroup $\mathbb{R}^3$ of the semi-direct product above with a non-Abelian group. The generalization is obtained by considering the {\it double} Lie group of $SU(2)$. In this paper such a generalization will be further pursued, giving rise to the simplest instance of a doubled dynamical model, together with its double geometry. The underlying mathematical construction of Drinfel'd double Lie groups and their relation with the structures of Generalized Geometry is the subject of the next section.
\section{Poisson-Lie Groups and the Double Lie Algebra $\mathfrak{sl}(2,\mathbb{C})$}\label{drinfel'd} In this section we briefly review the mathematical setting of Poisson-Lie groups and Drinfel'd doubles, see~\cite{semenov,alex,wein} for details, with the aim of introducing, in the forthcoming sections, new Lagrangian and Hamiltonian formulations of the IRR with a manifest symmetry under duality transformation. More precisely, in Section \ref{dualrot}, a model which is dual to the one described in Section \ref{rigidrot} is introduced, while in Section \ref{gensec} a new model is built with doubled dynamical variables and with a manifest symmetry under duality transformation. The dynamics derived from the new action describes two models, dual to each other, one being the ordinary rigid rotator, the other a ``rotator-like'' system, with the rotation group $SU(2)$ replaced by its Poisson-Lie dual, the group $SB(2,\mathbb{C})$ of Borel $2\times 2$ complex matrices. { A Poisson-Lie group is a Lie group $G$ equipped with a Poisson structure which makes the product $\mu :G\times G \rightarrow G$ a Poisson map if $G \times G$ is equipped with the product Poisson structure. Linearization of the Poisson structure at the unit $e$ of $G$ provides a Lie algebra structure over the dual algebra ${\gl g}^*=T^*_{e}(G)$ by the relation \begin{equation} \label{Liedual} [d\xi_{1}(e),d\xi_{2}(e)]^{*}=d\{\xi_{1},\xi_{2}\}(e) \end{equation} with $\xi_i\in C^\infty(G)$.
The compatibility condition between the Poisson and Lie structures of $G$ yields the relation: \begin{equation} \label{comp} \left< [X,Y],[v,w]^*\right>+\left<\hbox{ad}_v^*X,\hbox{ad}_Y^*w\right> -\left<\hbox{ad}_w^*X,\hbox{ad}_Y^*v\right> -\left<\hbox{ad}_v^*Y,\hbox{ad}_X^*w\right>+\left<\hbox{ad}_w^*Y,\hbox{ad}_X^*v\right>=0\, \end{equation} with $v,w\in\mathfrak{g}^*, X,Y\in \mathfrak{g}$ and $ \hbox{ad}_X^*, \hbox{ad}_v^*$ the coadjoint actions of the Lie algebras $\mathfrak{g}, \mathfrak{g}^*$ on each other. This allows one to define a Lie bracket in ${\gl g}\oplus {\gl g}^*$ through the formula: \begin{equation} \label{Liesuma} [X+\xi,Y+\zeta]=[X,Y]+[\xi,\zeta]^{*}-ad^{*}_{X}\zeta + ad^{*}_{Y}\xi + ad^{*}_{\zeta}X - ad^{*}_{\xi}Y \,\, . \end{equation} If $G$ is connected and simply connected,~(\ref{comp}) is enough to integrate $[\ ,\ ]^*$ to a Poisson structure on $G$ that makes it Poisson-Lie, and the Poisson structure is unique. The symmetry between ${\gl g}$ and ${\gl g}^*$ in~(\ref{comp}) implies that one also has a Poisson-Lie group $G^*$ with Lie algebra $({\gl g}^*,[\ ,\ ]^*)$ and a Poisson structure whose linearization at $e\in G^*$ gives the bracket $[\ ,\ ]$. $G^*$ is the dual Poisson-Lie group of $G$. The two Poisson brackets on $G$, $G^*$, which are dually related to the Lie algebra structure on ${\gl g}^*, {\gl g}$, respectively, when evaluated at the identity of the group, are nothing but the Kirillov-Souriau-Kostant brackets on coadjoint orbits of Lie groups. The Lie group $D$, associated with the Lie algebra ${\gl d}= {\gl g}\bowtie {\gl g}^*$, is the Drinfel'd double group of $G$ (or $G^*$, the construction being symmetric).\footnote{Properly speaking \cite{semenov}, Drinfel'd doubles are the quantum version of double groups; the latter are introduced below. The terminology classical and quantum Drinfel'd doubles is also used.
}\footnote{We denote with the symbol $\bowtie$ the Lie algebra structure of $\mathfrak{d}$, which is totally noncommutative, both Lie subalgebras being non-Abelian. } There is a dual algebraic approach to the picture above, mainly due to Drinfel'd \cite{drinfel'd}, which starts from a deformation of the semi-direct sum $\mathfrak{g}~\dot\oplus ~\mathbb{R}^n$, with $\mathbb{R}^n \simeq\mathfrak{g}^*$, into a fully non-Abelian Lie algebra, which coincides with $\mathfrak{d}$. The latter construction is reviewed below. To be specific to our problem, we focus on the group $SU(2)$, whose Drinfel'd double can be seen to be the group $SL(2,\mathbb{C})$ \cite{drinfel'd}. } An action can then be written on the tangent bundle of $D$, in such a way that the usual Lagrangian description of the rotator can be recovered by reducing the carrier manifold to the tangent bundle of $SU(2)$. \label{sl2} The structure of $\mathfrak{d}=\mathfrak{sl}(2,\mathbb{C})$ as a double algebra is briefly reviewed here. With this purpose, we start by recalling that the complex Lie algebra $\mathfrak{sl}(2)$ is completely defined by the Lie brackets of its generators: \begin{equation} [t_3,t_1]=2t_1; \quad [t_3,t_2]=-2t_2; \quad [t_1,t_2]=t_3; \end{equation} with \begin{equation} t_1= \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} ; \quad t_2= \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} ; \quad t_3= \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} .
\label{gensl} \end{equation} By considering complex linear combinations of the basis elements of $\mathfrak{sl}(2)$, say $e_i$, $b_i$, $i=1,2,3$, respectively given by: \begin{equation} \label{b1} e_1 =\frac{1}{2}(t_1+t_2)=\frac{\sigma_1}{2}, \; \;\; e_2 =\frac{i}{2}(t_2-t_1)=\frac{\sigma_2}{2}, \;\;\; e_3 =\frac{1}{2}t_3=\frac{ \sigma_3}{2} \end{equation} \begin{equation} b_i= i e_i \;\;\; i=1,2,3 \end{equation} the real algebra $\mathfrak{sl}(2,\mathbb{C})$ can be easily obtained with its Lie brackets: \begin{eqnarray} [e_i,e_j] &=& i{\epsilon_{ij}}^k e_k \label{su} \\ {[}e_i,b_j{]}&=& i{\epsilon_{ij}}^kb_k \\ {[}b_i,b_j{]}&=&-i{\epsilon_{ij}}^ke_k \end{eqnarray} with $\{e_i\}, i=1,2,3$, generating the $\mathfrak{su}(2)$ subalgebra. In a similar way, one can introduce the combinations: \begin{equation} \tilde e^1=it_1;\qquad \tilde e^2=t_1; \qquad \tilde e^3=\frac{i}{2} t_3, \label{basisdouble} \end{equation} which form the dual basis of the generators \eqref{b1} with respect to the scalar product naturally defined on $\mathfrak{sl}(2,\mathbb{C})$ as: \begin{equation} \Braket{u,v}=2\, {\rm Im}(\,Tr(uv)\,), \quad \forall u,v \in \mathfrak{sl}(2,\mathbb{C}). \label{psd} \end{equation} Indeed, it is easy to show that \begin{equation} \Braket{\tilde e^i, e_j}=2\, {\rm Im}(\,Tr(\tilde e^i e_j)\,)=\delta^i_j \label{eupedown}. \end{equation} Hence, $\{\tilde e^j\}$ is the dual basis of $\{e_i\}$ in the dual vector space $\mathfrak{su}(2)^*$. Such a vector space is in turn a Lie algebra, the special Borel subalgebra $\mathfrak{sb}(2,\mathbb{C})$, with the following Lie brackets: \begin{equation} [\tilde e^1,\tilde e^2]=0; \qquad [\tilde e^1,\tilde e^3]=- i \tilde e^1; \qquad [\tilde e^2,\tilde e^3]=-i \tilde e^2.
\end{equation} In a more compact form, the generators \eqref{basisdouble} can be written as: \begin{equation} \tilde e^i=\delta^{ij}(b_j+e_k{\epsilon^k}_{j3}), \end{equation} and the corresponding Lie brackets can be derived: \begin{equation} [\tilde e^i, \tilde e^j]= i {f^{ij }}_k \tilde e^k \label{sb} \end{equation} and \begin{equation} [\tilde e^i,e_j]= i\epsilon^i_{\,jk}\tilde e^k+ i e_k {f^{ki}}_j \label{liemis} \end{equation} with ${f^{ij}}_k=\epsilon^{ij l}\epsilon_{l3k}$. For future convenience we also note that: \begin{equation} \tilde e^i {\tilde{e}}^j= -\frac{1}{4} \delta^{i3}\delta^{j3}\sigma_0 +\frac{i}{2} {f^{ij}}_k {\tilde{e}}^k.\label{ee} \end{equation} The following relations can be easily checked: \begin{equation} \Braket{e_i,e_j}=\Braket{\tilde e^i,\tilde e^j}=0 \end{equation} so that both $\mathfrak{su}(2)$ and $\mathfrak{sb}(2,\mathbb{C})$ are maximal isotropic subspaces of $\mathfrak{sl} (2,\mathbb{C})$ with respect to the scalar product \eqref{psd}.\footnote{Notice that another splitting of the $\mathfrak{sl} (2,\mathbb{C})$ Lie algebra into maximally isotropic subspaces with respect to the same scalar product is represented by the span of $\{e_i\}, \{b_i\}, i=1,2,3$, with $\Braket{e_i, e_j}=\Braket{b_i,b_j}=0, \Braket{e_i,b_j}=\delta_{ij}$. However, the generators $\{b_i\}$ do not close a Lie subalgebra.} Therefore, the Lie algebra $\mathfrak{sl}(2,\mathbb{C})$ can be split into two maximally isotropic dual Lie subalgebras with respect to a bilinear, symmetric, non-degenerate form defined on it. The couple ($\mathfrak{su}(2)$, $\mathfrak{sb}(2,\mathbb{C})$), with the dual structure described above, is a Lie bialgebra. Since the role of $\mathfrak{su}(2)$ and its dual algebra can be interchanged, $(\mathfrak{sb}(2,\mathbb{C})$, $\mathfrak{su}(2)$) is a Lie bialgebra as well. The triple $(\mathfrak{sl}(2,\mathbb{C}), \mathfrak{su}(2), \mathfrak{sb}(2,\mathbb{C}))$ is called a {\it Manin triple} \cite{drinfel'd}.
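Though not part of the original construction, the algebraic statements above lend themselves to a direct numerical cross-check. The following sketch (assuming a Python environment with numpy, our choice of tool and not implied by the text) realizes the generators as $2\times 2$ complex matrices and verifies the duality \eqref{eupedown}, the isotropy of both subalgebras, and a sample $\mathfrak{sb}(2,\mathbb{C})$ bracket:

```python
import numpy as np

# Pauli matrices and the sl(2) generator t_1
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
t1 = np.array([[0, 1], [0, 0]], dtype=complex)

# su(2) basis e_i = sigma_i/2 and its dual sb(2,C) basis e~^i
e = [s1 / 2, s2 / 2, s3 / 2]
et = [1j * t1, t1, 1j / 2 * s3]

def pairing(u, v):
    """<u,v> = 2 Im Tr(uv), the scalar product of eq. (psd)."""
    return 2 * np.trace(u @ v).imag

# duality <e~^i, e_j> = delta^i_j, isotropy of su(2) and of sb(2,C)
dual = np.array([[pairing(et[i], e[j]) for j in range(3)] for i in range(3)])
iso_su = np.array([[pairing(e[i], e[j]) for j in range(3)] for i in range(3)])
iso_sb = np.array([[pairing(et[i], et[j]) for j in range(3)] for i in range(3)])
```

Running the check confirms that the Gram matrix `dual` is the identity while both restricted Gram matrices vanish, i.e. $\mathfrak{su}(2)$ and $\mathfrak{sb}(2,\mathbb{C})$ are isotropic and dual to each other, as stated.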
The total algebra $\mathfrak{d}= \mathfrak{g}\bowtie \mathfrak{g}^*$ which is the Lie algebra defined by the Lie brackets \eqn{su}, \eqn{sb}, \eqn{liemis}, with its dual $\mathfrak{d}^*$ is also a Lie bialgebra. The couple $(\mathfrak{d},\mathfrak{d}^*)$ is called the {\em double} of $(\mathfrak{g},\mathfrak{g}^*)$ \cite{semenov}. The {\em double group} $D$ is the Lie group of $\mathfrak{d}$, endowed with additional structures such as a Poisson structure on the group manifold compatible with the group structure; more details are given in the next section. The two partner groups, $SU(2)$ and $SB(2,\mathbb{C})$ with suitable Poisson brackets, are named {\it dual groups} and sometimes denoted by $G, G^*$. Their role can be interchanged, so that they share the same double group $D$. The splitting of $\mathfrak{sl}(2,\mathbb{C})$ is realized with respect to the scalar product \eqref{psd}. This is given by the {Cartan-Killing metric} $g_{ij}=\frac{1}{2}{c_{ip}}^q {c_{jq}}^p$, induced by the structure constants ${c_{ij}}^k$ of $\mathfrak{sl}(2,\mathbb{C})$ in its adjoint representation. But this is not the only decomposition of $\mathfrak{sl}(2,\mathbb{C})$ one can give. There is another non-degenerate, invariant scalar product, represented by \begin{equation} (u,v)=2\, {\rm Re}( \, Tr(u v) \,) \qquad \forall u,v \in \mathfrak{sl}(2,\mathbb{C}). \label{sp2} \end{equation} In this case, for the basis elements, one gets: \begin{equation} (e_i,e_j)=\delta_{ij}, \quad (b_i,b_j)=-\delta_{ij}, \quad (e_i,b_j)=0, \label{otherscalar} \end{equation} giving rise to a metric which is not positive-definite. With respect to the scalar product defined in eq. (\ref{sp2}), new maximal isotropic subspaces can be defined in terms of: \begin{equation} f_i^+=\frac{1}{\sqrt 2} (e_i+ b_i) \,\,\, \quad ; \,\,\, \quad f_i^-= \frac{1}{\sqrt 2}(e_i-b_i) \label{newb} \,\,\,.
\end{equation} It turns out that: \begin{equation} (f^+_i,f^+_j)= (f^-_i,f^-_j)=\,\,0 \quad ; \quad (f^+_i,f^-_j)=\delta_{ij} \end{equation} whereas \begin{equation} \Braket{f^+_i,f^+_j}= \delta_{ij}, \quad \Braket{f^-_i,f^-_j}=-\delta_{ij}, \quad \Braket{f^+_i,f^-_j}=0\,\,. \end{equation} Let us notice that neither of them spans a Lie subalgebra. By denoting by $C_+$ and $ C_-$ the two subspaces spanned by $\{e_i\}$ and $\{b_i\}$ respectively, one can notice \cite{gualtieri:tesi} that the splitting ${\mathfrak d}= C_{+} \oplus C_{-}$ defines a positive definite metric $\mathcal{H}$ on ${\mathfrak d}$ via: \begin{equation} \mathcal{H}= (\;,\;)_{C_+}- (\;,\;)_{C_-} \label{metricG} \end{equation} As in ref. \cite{gualtieri:tesi}, the inner product is here used to identify ${\mathfrak d}$ with its dual, so that the metric $\mathcal{H}$ may be viewed as an automorphism of ${\mathfrak d}$ which is symmetric and which squares to the identity, i.e. $\mathcal{H}^2 = 1$. Let us indicate the Riemannian metric with double round brackets. One has then: \begin{equation} ((e_i,e_j)) \equiv (e_i,e_j); ~~~~~ ((b_i,b_j)) \equiv -(b_i,b_j);~~~~~ ((e_i,b_j)) \equiv (e_i,b_j)=0 \,. \label{riem} \end{equation} In order to come back to the main subject of the paper, namely the relation between GG and DFT, it is very helpful to introduce the following notation for the $\mathfrak{sl}(2,\mathbb{C})$ generators: \begin{equation} e_I=\begin{pmatrix}e_i\\ e^i \end{pmatrix}, \qquad e_i \in \mathfrak{su}(2), \quad e^i \in \mathfrak{sb}(2,\mathbb{C}), \label{doubledb} \end{equation} with $I=1, \dots, 2d$ and $d= {\rm dim} \,\mathfrak{g}$. Then the scalar product \eqn{psd} becomes \begin{equation} \label{Lprod} \Braket{e_I,e_J}={\cal \eta}_{IJ}= \begin{pmatrix} 0 & \delta_i^j \\ \delta_j^i & 0 \end{pmatrix} . \end{equation} {This symmetric inner product has signature $(d,d)$ and therefore defines the non-compact orthogonal group $O(d,d)$, with $d=3$ in this case}.
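The signature statement can be illustrated concretely: assembling the Gram matrix of the doubled basis $e_I=(e_i,\tilde e^i)$ under the pairing \eqn{psd} reproduces the off-diagonal block form of ${\cal \eta}_{IJ}$, whose eigenvalues split into three positive and three negative ones. A numerical sketch (numpy being our tool of choice for illustration, not part of the paper):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
t1 = np.array([[0, 1], [0, 0]], dtype=complex)

# doubled basis e_I = (e_1, e_2, e_3, e~^1, e~^2, e~^3) of sl(2,C)
eI = [s1 / 2, s2 / 2, s3 / 2, 1j * t1, t1, 1j / 2 * s3]

def pairing(u, v):
    # <u,v> = 2 Im Tr(uv), eq. (psd)
    return 2 * np.trace(u @ v).imag

# Gram matrix of the doubled basis: the O(3,3) metric eta_{IJ}
eta = np.array([[pairing(eI[I], eI[J]) for J in range(6)] for I in range(6)])
eta_expected = np.block([[np.zeros((3, 3)), np.eye(3)],
                         [np.eye(3), np.zeros((3, 3))]])
eigs = np.linalg.eigvalsh(eta)   # split signature (3,3): eigenvalues ±1
```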
The Riemannian product \eqn{riem} yields instead: \begin{eqnarray} ((\tilde e^i, \tilde e^j))&=&\delta^{ip}\delta^{jq}\, ((\, b_p + e_l{\epsilon^l}_{p3}\, ,\, b_q+ e_k{\epsilon^k}_{q3}\, )) \nonumber \\ &=& \delta^{ij}+\epsilon^i_{\;l3} \delta^{lk} \epsilon^j_{\;k3} \,\,\,\,\, ; \label{sp2ee}\\ (( e_i, \tilde e^j))&=& [((e_i, b_q)) + {\epsilon^k}_{q3} ((e_i, e_k))] \delta^{jq}={\epsilon_{3i}}^j \,\,\, .\label{mixed} \end{eqnarray} Hence, one has: \begin{equation} \label{Rprod} ((e_I,e_J))={\cal H}_{IJ}= \begin{pmatrix} \delta_{ij} & {\epsilon_{3i}}^j \\ -{\epsilon^{i}}_{j3} & \delta^{ij}+ \epsilon^i_{\;l3} \delta^{lk}\epsilon^j_{\;k3} \end{pmatrix} . \end{equation} This metric satisfies the relation: \begin{equation} {\cal H}^T {\cal \eta} {\cal H}= \eta \label{compatibile} \end{equation} indicating that ${\cal H}$ is a pseudo-orthogonal $O(3,3)$ matrix. It is interesting to see how the metric ${\cal \eta}$ in eq. (\ref{Lprod}) and the metric ${\cal H}$ in eq. (\ref{Rprod}) naturally emerge in the framework under examination. They correspond, in the usual context of DFT, to the $O(d,d)$ invariant metric and to the so-called {\em generalized metric} \cite{Tseytlin, Hohm}, respectively. In particular, in the latter, the role of the graviton field is played by the Kronecker delta $\delta_{ij} $ while the role of the Kalb-Ramond field is played by the three-dimensional Levi-Civita symbol $\epsilon_{ij3}$ with one of the indices being fixed. \subsection{Para-Hermitian Geometry of $SL(2,\mathbb{C})$} The two non-degenerate scalar products of $SL(2,\mathbb{C})$, discussed above, have been widely applied in many physical contexts where the Lorentz group and its universal covering $SL(2,\mathbb{C})$ play a role, starting from the pioneering work by E. Witten \cite{witten}. While the first scalar product, i.e. the one defined in eq.
(\ref{psd}), is nothing but the Cartan-Killing metric of the algebra, the Riemannian structure ${\cal H}$ can be mathematically formalized in a way which clarifies its role in the context of Generalized Complex Geometry \cite{gualtieri:tesi, freidel} and gives a further example of doubled geometry \cite{hulled}. Let us briefly review the derivation. The splitting of $\mathfrak{sl}(2,\mathbb{C})$ in $\mathfrak{su}(2)$ and $\mathfrak{sb}(2,\mathbb{C})$ implies the existence of a $(1,1)$-tensor: \begin{equation} \mathcal{R}: \mathfrak{sl}(2,\mathbb{C}) \rightarrow \mathfrak{sl}(2,\mathbb{C}) \label{Rtensor} \end{equation} such that $\mathcal{R}^2=\mathds{1}$, with eigenspaces given by $\mathfrak{su}(2)$, with eigenvalue $+1$, and $\mathfrak{sb}(2,\mathbb{C})$, with eigenvalue $-1$. This can be seen as the local expression of a $(1,1)$-tensor on $SL(2,\mathbb{C})$ called \emph{product structure}, since it has integrable eigenbundles $TSU(2)$ and $TSB(2,\mathbb{C})$, that, at every point of $SL(2,\mathbb{C})$, are given by $\mathfrak{su}(2)$ and $\mathfrak{sb}(2,\mathbb{C})$ and are such that $TSL(2,\mathbb{C})=TSU(2) \oplus TSB(2,\mathbb{C}).$ These two eigenbundles are maximal isotropic with respect to the scalar product \eqn{psd} and, being integrable, they give rise to two transversal foliations of $SL(2,\mathbb{C})$, the one with $SU(2)$ as leaves, the other with $SB(2,\mathbb{C})$. Moreover, the tensor $\mathcal{R}$ is compatible with the scalar product \eqn{psd} meaning that the following equation holds: \begin{equation} \Braket{\mathcal{R}(X),Y}+\Braket{\mathcal{R}(Y),X}=0 \quad \forall X,Y \in \Gamma (TSL(2,\mathbb{C})), \nonumber \end{equation} where $\Gamma (TSL(2,\mathbb{C}))$ denotes the vector fields over the group manifold. In a more compact form, one has: $\mathcal{R}^T {\cal \eta} \mathcal{R}=-{\cal \eta}$, with ${\cal \eta}(X,Y)=\Braket{X,Y},\ \forall X,Y \in \Gamma (TSL(2,\mathbb{C}))$.
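In the doubled basis $e_I=(e_i,\tilde e^i)$ the tensor $\mathcal{R}$ acts as $+\mathds{1}$ on the first three generators and as $-\mathds{1}$ on the last three; the block-matrix realization below (our assumption, for illustration only) makes the compatibility condition just stated a two-line check:

```python
import numpy as np

I3 = np.eye(3)
Z3 = np.zeros((3, 3))

# O(3,3) metric eta in the basis e_I = (e_i, e~^i), eq. (Lprod)
eta = np.block([[Z3, I3], [I3, Z3]])
# product structure R: eigenvalue +1 on su(2), -1 on sb(2,C)
R = np.block([[I3, Z3], [Z3, -I3]])
```

The assertions $\mathcal{R}^2=\mathds{1}$ and $\mathcal{R}^T{\cal \eta}\mathcal{R}=-{\cal \eta}$ both hold for these matrices, mirroring the statement that the two eigenbundles are maximally isotropic.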
This condition implies that a $2$-form field can be defined as: $$\omega(X,Y)= \Braket{\mathcal{R}(X),Y}, \quad \forall X,Y \in \Gamma (TSL(2,\mathbb{C})).$$ In other words, a \emph{para-Hermitian structure} \cite{freidel} can be defined on the manifold $SL(2,\mathbb{C})$, where $\mathcal{R}$ is the product structure, \eqn{psd} is the scalar product compatible with the Lorentzian signature and $\omega$ is the fundamental two-form. In this sense, the scalar product \eqn{Rprod} can be read as a metric with Riemannian signature when considering the bases \eqn{newb}. In fact, expressing the bases \eqn{newb} as linear combinations of $\{e_i\}$ and $\{\tilde{e}^i\}$ yields the following: \begin{equation} f^+_i= \frac{1}{\sqrt{2}}\bigl( \delta_{ij}\tilde{e}^j+(\delta^j_i+{\epsilon_{3i }}^{j})e_j \bigr), \label{baseup} \end{equation} and \begin{equation} f^-_i= \frac{1}{\sqrt{2}}\bigl( -\delta_{ij}\tilde{e}^j+(\delta^j_i+{\epsilon_{3i }}^{j})e_j \bigr), \label{baseum} \end{equation} which generate, respectively, the subspaces $V_+$ and $V_-$ of $\mathfrak{sl}(2,\mathbb{C})$, maximal isotropic with respect to \eqn{sp2}. Then, given the splitting $\mathfrak{sl}(2,\mathbb{C})=V_+ \oplus V_-$, there exists the $(1,1)$-tensor $H$ such that \begin{equation} H (f^+_i)=f^+_i, \quad H(f^-_i)=-f^-_i, \label{locale} \end{equation} and $$H^2=\mathds{1},$$ implying that $V_+$ and $V_-$ are eigenspaces of $H$ with eigenvalues $+1$ and $-1$ respectively. One can immediately check that the Lie brackets of elements of $\{f^+_i\}$, and similarly of $\{f^-_i\}$, do not close within the respective spans, hence $V_+$ and $V_-$ are not Lie subalgebras of $\mathfrak{sl}(2,\mathbb{C})$. As described at the beginning of this section, eq.
\eqn{locale} can be read as the definition, at any point, of a $(1,1)$-tensor field $H$ as an \emph{almost product structure} on $SL(2,\mathbb{C})$, since the eigenbundles $\mathcal{V}_+$ and $\mathcal{V}_-$, obtained as distributions that, at any point, are $V_+$ and $V_-$ respectively, are not integrable. Pointwise, one still has the splitting $TSL(2,\mathbb{C})=\mathcal{V}_+ \oplus \mathcal{V}_-$. In order to write down the dual bases $\{f^{j*}_+\}$ and $\{f^{j*}_-\}$ of $\{f^+_j\}$ and $\{f^-_j\}$ respectively, using the duality relation between $\{e_i\}$ and $\{\tilde{e}^i\}$, the duality conditions have to be imposed: \begin{equation} f^+_i(f^{j*}_+)=\delta_i^j, \quad f^-_i(f^{j*}_+)=0 \nonumber \end{equation} and \begin{equation} f^-_i(f^{j*}_-)=\delta_i^j, \quad f^+_i(f^{j*}_-)=0 \nonumber \end{equation} which lead to \begin{equation} f^{i*}_+=\frac{1}{\sqrt{2}}\bigl(\tilde{e}^i+(\delta^{ik}+\epsilon^{ik3})e_k\bigr) \nonumber \end{equation} and \begin{equation} f^{i*}_-=\frac{1}{\sqrt{2}}\bigl(\tilde{e}^i-(\delta^{ik}+\epsilon^{ki3})e_k\bigr). \nonumber \end{equation} Therefore, the almost product structure $H$ turns out to be the following: \begin{equation} H=\delta^i_j f^+_i \otimes f_+^{j*} - \delta^i_j f^-_i \otimes f_-^{j*}, \nonumber \end{equation} which, in the bases $\{e_i\}, \ \{\tilde{e}^i\}$, becomes: \begin{equation} H=\delta_{ij}\tilde{e}^i \otimes \tilde{e}^j+ \delta_{ik}\epsilon^{kj3}\tilde{e}^i \otimes e_j + \delta_{kj}\epsilon^{ki3}e_i \otimes \tilde{e}^j + (\delta^{ij}+ \delta_{lk}\epsilon^{il3}\epsilon^{jk3})e_i \otimes e_j . \nonumber \end{equation} The metric \eqn{Lprod} can be used on $H$ for raising and lowering indices.
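This raising and lowering can be made fully explicit at the level of $6\times 6$ matrices. The sketch below (block matrices read off from eqs. \eqn{Lprod} and \eqn{Rprod}; numpy is our illustrative choice) checks that ${\cal \eta}^{-1}{\cal H}$ squares to the identity, that ${\cal H}$ is positive definite, and that it satisfies the $O(3,3)$ condition \eqn{compatibile}:

```python
import numpy as np

I3, Z3 = np.eye(3), np.zeros((3, 3))
# M[i, j] = epsilon_{ij3}: the Kalb-Ramond-like block with third index fixed
M = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 0.]])

eta = np.block([[Z3, I3], [I3, Z3]])              # O(3,3) metric, eq. (Lprod)
H_gen = np.block([[I3, M], [-M, I3 + M @ M.T]])   # generalized metric, eq. (Rprod)

# raise one index with eta: H^K_J = (eta^{-1} H_gen)^K_J
H = np.linalg.inv(eta) @ H_gen
```

Numerically one finds $H^2=\mathds{1}$, ${\cal \eta}H={\cal H}$, positive eigenvalues for ${\cal H}$, and ${\cal H}^T{\cal \eta}{\cal H}={\cal \eta}$, in agreement with the statements in the text.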
In fact, in the doubled formalism, one can write the matrix: \begin{equation} {\cal H}_{IJ}={\cal \eta}_{IK} H^K_{\ J}= \begin{pmatrix} \delta_{ij} & {\epsilon_{i}}^{j3} \\ -{\epsilon^{i}}_{j3} & \delta^{ij}+\delta_{lk}\epsilon^{il3}\epsilon^{jk3} \end{pmatrix} \label{splitpos} \end{equation} which is exactly the generalized metric \eqn{Rprod}, i.e. $\mathcal{H}= \eta H$. The metric \eqn{splitpos} is a representative element in the coset $O(3,3)/O(3) \times O(3),$ i.e. it is defined by $3^2$ independent elements. Thus, the form of the generalized metric depends on the choice of polarization (the splitting of the double Lie algebra in two maximal isotropic subspaces). In fact, as in Generalized Geometry, the metric \eqn{splitpos} gives a reduction of the structure group of $TSL(2,\mathbb{C})$ to $O(3)\times O(3)$ and we can interpret such a reduction, in this very specific case, as related to the other natural scalar product on $SL(2,\mathbb{C}).$ Moreover, the introduction of a generalized metric on Drinfel'd doubles is also discussed in \cite{hulled}, where it is made clear how, from different choices of polarization of the Drinfel'd double, the generalized metric takes a different form, which allows one to recover different backgrounds. This is shown to be very useful in the description and gauging of non-linear sigma models with doubled target space. It is also easy to verify that the metric with Lorentzian signature arising from \eqn{sp2} is given by: \begin{equation} K=\delta_{ij} f_+^{i*} \otimes f_-^{j*} + \delta_{ij} f_-^{i*} \otimes f_+^{j*} \end{equation} which takes the form \begin{equation} K_{IJ}= \begin{pmatrix} \delta_{ij} & {\epsilon_{i}}^{j3} \\ -{\epsilon^{i}}_{j3} & -\delta^{ij}+\delta_{lk}\epsilon^{il3}\epsilon^{jk3} \end{pmatrix} \nonumber \end{equation} and is compatible with $H$, i.e. $H^T K H=-K.$ The compatibility condition of $K$ and $H$ gives a closed two-form \begin{equation} \Omega = K H= \delta_{ij} f^{i*}_+ \wedge f^{j*}_-.
\end{equation} that, in the bases $\{e_i\}$ and $\{\tilde{e}^i\}$, takes the following expression: \begin{equation} \Omega=\delta^i_j e_i \otimes \tilde{e}^j - \delta_i^j \tilde{e}^i \otimes e_j . \nonumber \end{equation} This can be read as an explicit form of the product structure $\mathcal{R}$ and the fundamental two-form $\Omega$. In conclusion, it has been shown here that the natural scalar product \eqn{sp2} and the almost product structure $H$ define an \emph{almost para-Hermitian structure} on the manifold $SL(2,\mathbb{C}).$ Finally, let us describe the structure arising from \eqn{riem}. In the previous section, a positive definite metric $\mathcal{H}$ \eqn{metricG} on $SL(2,\mathbb{C})$ has been defined by the splitting $\mathfrak{sl}(2,\mathbb{C})=C_+ \oplus C_-.$ In order to explicitly write down the metric tensor $\mathcal{H}$, dual bases of $\{ e_i\}$ and $\{ b_i \}$ have been introduced. After noticing that $$b_i=\delta_{ij} \tilde{e}^j +\epsilon_i^{\ k3}e_k$$ and by imposing the conditions $$ e_i (b^{*j})=0\,\,\,\,\,, \ \ \ b_i (b^{j*})=\delta_i^j$$ and $$e_i (e^{j*})=\delta_i^j\,\,\,\,\,, \ \ \ b_i (e^{j*})=0\,\, , $$ one obtains: $$e^{i*}= \tilde{e}^i + \epsilon^{ik3}e_k\,\,\,\,\,, \qquad b^{i*}=\delta^{ij}e_j.$$ It is worth stressing that changing the splitting also changes the dual bases. Thus, the metric tensor of eq. \eqn{metricG} can be retrieved: \begin{equation} \mathcal{H}=\delta_{ij}e^{i*} \otimes e^{j*}+\delta_{ij}b^{i*} \otimes b^{j*}=(,)_{C_+}-(,)_{C_-}\,\,\,. \nonumber \end{equation} It takes the form \eqn{splitpos} in the bases ($\{e_i\}, \ \{\tilde{e}^i\}$), is symmetric and squares to the identity. Moreover, the metric $\mathcal{H}$ can be seen to be given by the composition of two {\it generalized complex structures} \cite{gualtieri:tesi}, $\mathcal{I}_J$ and $\mathcal{I}_{\omega}$, respectively defined by an almost complex structure $J$ and a symplectic structure $\omega$.
Therefore, one has a pair $(\mathcal{I}_J,\mathcal{I}_{\omega})$ of commuting generalized complex structures on $TSL(2,\mathbb{C})$ inducing a positive definite metric $\mathcal{H}$. This is usually called a \emph{generalized K\"ahler structure}. Consequently, it has been shown how the almost para-Hermitian structure and the generalized metric $\mathcal{H}$ on the manifold $SL(2,\mathbb{C})$ are related. The discussion just completed shows the existence of two non-degenerate, invariant scalar products on the total algebra $\mathfrak{d}$. In the forthcoming sections both products will be considered in order to define action functionals for the dynamical systems under examination. \section{The Dual Model}\label{dualrot} In the previous section the dual group of $SU(2)$, the group $SB(2,\mathbb{C})$, has been introduced as the partner of $SU(2)$ in a kind of Iwasawa decomposition of the group $SL(2,\mathbb{C})$. The latter has been regarded as a deformation of the cotangent bundle of $SU(2)$ with fibers $F\simeq \mathbb{R}^3$ replaced by the group $SB(2,\mathbb{C})$. It is then legitimate to reverse the paradigm and regard $SL(2,\mathbb{C})$ as a deformation of the cotangent bundle $T^*SB(2,\mathbb{C})$, with fibers $\tilde F\simeq\mathbb{R}^3$ now replaced by $SU(2)$. In this section a dynamical model on the configuration space $SB(2,\mathbb{C})$ is proposed with an action functional that is formally analogous and indeed dual to \eqn{lag}. The model is described with its symmetries in the Lagrangian and Hamiltonian formalisms. In Sect. \ref{gensec} a generalized action containing both the models is finally introduced on the whole group $SL(2,\mathbb{C})$. The Poisson algebra encoding the dynamics, as well as the algebra of generalized vector fields describing infinitesimal symmetries, turns out to be related to the so-called $C$-bracket of DFT.
\subsection{The Lagrangian and Hamiltonian Formalisms} As carrier space for the dynamics of the dual model in the Lagrangian (respectively Hamiltonian) formulation one can choose the tangent (respectively cotangent) bundle of the group $SB(2,\mathbb{C})$. A suitable action for the system is the following: \begin{equation} {\tilde S}_0= \int_\mathbb{R} {\tilde L} _0~dt= - \frac{1}{4} \int_\mathbb{R} {\mathcal Tr} [{\tilde{g}}^{-1} d {\tilde{g}}\wedge * {\tilde{g}}^{-1} d{\tilde{g}}] = - \frac{1}{4}\int_\mathbb{R} {\mathcal Tr} [({\tilde{g}}^{-1}{ \dot {\tilde{g}}})({\tilde{g}}^{-1}{ \dot {\tilde{g}}})] dt \label{dualag} \end{equation} with ${\tilde{g}}:t\in \mathbb{R}\rightarrow SB(2,\mathbb{C})$, the group-valued target space coordinates, so that \begin{equation} {\tilde{g}}^{-1} d {\tilde{g}}= i \beta_k {\tilde{e}}^k \nonumber \end{equation} is the Maurer-Cartan left invariant one-form on the group manifold, with $\beta_k$ the left-invariant basic one-forms, $*$ the Hodge star operator on the source space $\mathbb{R}$, such that $* dt = 1$. The symbol ${\mathcal Tr} $ is used here to represent a {\it suitable} scalar product on the Lie algebra $\mathfrak{sb}(2,\mathbb{C})$. Indeed, since the algebra is not semi-simple, there is no scalar product which is both non-degenerate and invariant. Therefore, one has two possible choices: the scalar products defined by the real and imaginary parts of the trace, given by Eqs. \eqn{psd} and \eqn{sp2}, which are $SU(2)$ and $SB(2,\mathbb{C})$ invariant, but degenerate; or the scalar product induced by the Riemannian metric \eqn{metricG}, which on the algebra $\mathfrak{sb}(2,\mathbb{C})$ takes the form \eqn{sp2ee} and is positive definite and non-degenerate, but only invariant under the left $SB(2,\mathbb{C})$ action and under $SU(2)$.
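The two alternatives can be compared concretely on the basis $\{\tilde e^i\}$: the product \eqn{psd} vanishes identically on $\mathfrak{sb}(2,\mathbb{C})$ (the subalgebra is isotropic), while the Riemannian product, computed here in the equivalent form $2\,{\rm Re}\,{\rm Tr}(u^\dagger v)$, reproduces the positive-definite matrix \eqn{sp2ee}. A brief numerical sketch (numpy is our choice for illustration):

```python
import numpy as np

t1 = np.array([[0, 1], [0, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
et = [1j * t1, t1, 1j / 2 * s3]      # sb(2,C) basis e~^i

# invariant product <u,v> = 2 Im Tr(uv), eq. (psd): degenerate on sb(2,C)
im_prod = np.array([[2 * np.trace(et[i] @ et[j]).imag
                     for j in range(3)] for i in range(3)])

# Riemannian product ((u,v)) = 2 Re Tr(u^dagger v): eq. (sp2ee) on sb(2,C)
riem = np.array([[2 * np.trace(et[i].conj().T @ et[j]).real
                  for j in range(3)] for i in range(3)])
```

The first Gram matrix vanishes identically, while the second equals ${\rm diag}(2,2,1)$, i.e. $\delta^{ij}+\epsilon^i_{\;l3}\delta^{lk}\epsilon^j_{\;k3}$, confirming degeneracy of the former and positive-definiteness of the latter.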
Indeed, by observing that the generators ${\tilde{e}}^i$ are not Hermitian, \eqn{sp2ee} can be verified to be equivalent to: \vspace{0.3cm} \begin{equation} ((u,v)) \equiv 2{\rm Re}\Tr [(u)^\dag v] \label{2ndprod} \end{equation} so that $(({\tilde{g}}^{-1}{ \dot {\tilde{g}}}, {\tilde{g}}^{-1}{ \dot {\tilde{g}}}))= 2{\rm Re}\Tr [({\tilde{g}}^{-1}{ \dot {\tilde{g}}})^\dag {\tilde{g}}^{-1}{ \dot {\tilde{g}}}]$ which is not invariant under right $SB(2,\mathbb{C})$ action, since ${\tilde{g}}^{-1}\ne {\tilde{g}}^\dag$. The associated dynamical models are obviously different. The non-degenerate scalar product defined in eq. \eqn{sp2ee} is used here, so that the Lagrangian \eqn{dualag} is only left/right $SU(2)$ and left-$SB(2,\mathbb{C})$ invariant, unlike the Lagrangian of the rigid rotator \eqn{lag}, which is invariant under both left and right actions of both groups. As in the previous case, the model can be regarded as a $(0+1)$-dimensional field theory which is group-valued. The group manifold can be parametrized with $\mathbb{R}^4$ coordinates, so that ${\tilde{g}}\in SB(2,\mathbb{C})$ reads ${\tilde{g}}= 2( u_0 {\tilde{e}}^0 + i u_i {\tilde{e}}^i)$, with $u_0^2- u_3^2=1$ and ${\tilde{e}}^0= {\mathbb I}/2$. One has then: \begin{equation} u_i=\frac{1}{4} (( i{\tilde{g}}, {\tilde{e}}^i)), \;i=1,2,~~~~ \; u_3=\frac{1}{2} (( i{\tilde{g}}, {\tilde{e}}^3)), ~~~~\; u_0= \frac{1}{2}(({\tilde{g}}, {\tilde{e}}^0)) \nonumber \;\;\; \end{equation} where the last product is defined as twice the real part of the trace, in order to be consistent with the others.
By observing that \begin{equation} {\tilde{g}}^{-1} \dot {\tilde{g}}=2 i (u_0\dot u_i-u_i \dot u_0+ {f_{i}}^{\,jk} {u_j \dot u_k }){\tilde{e}}^i \label{tiqdot} \end{equation} the Lagrangian in \eqn{dualag} can be rewritten as: \begin{equation} \tilde{{L}}_0= (u_0\dot u_i-u_i \dot u_0+ {f_{i}}^{\,jk} {u_j \dot u_k })(u_0\dot u_r-u_r \dot u_0+ {f_{r}}^{\,pq} {u_p \dot u_q })(({\tilde{e}}^i,{\tilde{e}}^r)) = \dot {\tilde{Q}}_i \dot {\tilde{Q}}_r h^{ir} \nonumber \end{equation} where \begin{equation} \dot {\tilde{Q}}_i \equiv u_0\dot u_i-u_i \dot u_0+ {f_{i}}^{\,jk} {u_j \dot u_k } \nonumber \end{equation} are the left generalized velocities and \begin{equation} h^{ir} \equiv (\delta^{i r}+ {\epsilon^i}_{l3}{\epsilon^r}_{s3}\delta^{ls}) \label{hir} \end{equation} is the metric defined by the scalar product. By repeating the analysis already performed for the IRR, one finds the equations of motion: \begin{equation} {\sf L}_\Gamma(\dot {\tilde{Q}}_j\, i_{{\tilde{X}}^i}\beta_l )h^{jl} - {\sf L}_{{\tilde{X}}^i} \tilde{L}_0 = 0. \label{LGamma} \end{equation} with ${\tilde{X}}^j$ being the left invariant vector fields associated with $SB(2,\mathbb{C})$. Differently from the IRR case, the Lagrangian is now not invariant under the right action; therefore, since the left invariant vector fields generate the right action, the l.h.s. of eq. (\ref{LGamma}) is not expected to vanish and a straightforward calculation yields: \begin{equation} {\sf L}_\Gamma\dot {\tilde{Q}}_j h^{ji} - \dot {\tilde{Q}}_p \dot {\tilde{Q}}_q {f^{ip}}_k h^{qk} = 0. \label{eqmodu} \end{equation} $({\tilde{Q}}_i, \dot {\tilde{Q}}_i)$ are therefore tangent bundle coordinates, with ${\tilde{Q}}_i$ implicitly defined.
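Equation \eqn{tiqdot} can be spot-checked numerically at a random point of $SB(2,\mathbb{C})$ with a velocity compatible with the constraint $u_0^2-u_3^2=1$. In the sketch below (our illustration, not part of the derivation) the structure constants ${f_{i}}^{\,jk}$ are read as ${f^{jk}}_i=\epsilon^{jkl}\epsilon_{l3i}$, with indices moved by the Kronecker delta:

```python
import numpy as np

rng = np.random.default_rng(1)
t1 = np.array([[0, 1], [0, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
et = [1j * t1, t1, 1j / 2 * s3]          # sb(2,C) basis e~^i

# Levi-Civita symbol and structure constants f^{jk}_l = eps^{jkm} eps_{m3l}
eps = np.zeros((3, 3, 3))
for p in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[p] = 1.0
for p in [(2, 1, 0), (0, 2, 1), (1, 0, 2)]:
    eps[p] = -1.0
f = np.einsum('jkm,ml->jkl', eps, eps[:, 2, :])

# random SB(2,C) point and a velocity respecting u0^2 - u3^2 = 1
u = rng.normal(size=3)
u0 = np.sqrt(1 + u[2] ** 2)
ud = rng.normal(size=3)
ud0 = u[2] * ud[2] / u0                  # from u0*ud0 = u3*ud3

g = u0 * I2 + 2j * sum(u[i] * et[i] for i in range(3))
gd = ud0 * I2 + 2j * sum(ud[i] * et[i] for i in range(3))

# left generalized velocities Qdot_l = u0*ud_l - u_l*ud0 + f^{jk}_l u_j ud_k
Qd = u0 * ud - u * ud0 + np.einsum('jkl,j,k->l', f, u, ud)
lhs = np.linalg.inv(g) @ gd
rhs = 2j * sum(Qd[i] * et[i] for i in range(3))
```

The two sides agree to machine precision, confirming the decomposition of the Maurer-Cartan form along the $\{\tilde e^i\}$ basis.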
Notice that, analogously to the IRR case, one could define right generalized velocities on the fibers starting from right invariant one-forms; differently from that case, however, the right invariant Lagrangian is not equivalent to the left invariant one, as already stressed. The cotangent bundle coordinates are $({\tilde{Q}}_i, {\tilde{I}}^i)$ with ${\tilde{I}}^i$ the conjugate left momenta \begin{equation} {\tilde{I}}^j= \frac{\partial {\tilde{{ L}}}_0}{\partial \dot {\tilde{Q}}_j}= \dot {\tilde{Q}}_r (\delta^{j r}+ \epsilon^j_{\,l3}\epsilon^r_{s3}\delta^{ls}) = \frac{i}{2} (({\tilde{g}}^{-1}\dot{\tilde{g}},{\tilde{e}}^i))\delta_{i}^j \,\,\, . \nonumber \end{equation} The latter is in turn invertible, yielding: \begin{equation} \dot{\tilde{Q}}_j= {\tilde{I}}^i(\delta_{ij}-\frac{1}{2}\epsilon_{ip3}\epsilon_{jq3}\delta^{pq}), \nonumber \end{equation} so that the Legendre transform from $TSB(2,\mathbb{C})$ to $T^*SB(2,\mathbb{C})$ leads to the Hamiltonian function: \begin{equation} \tilde{H}_0=[{\tilde{I}}^j \dot {\tilde{Q}}_j -\tilde L]_{\dot {\tilde{Q}}=\dot {\tilde{Q}}({\tilde{I}})}= \frac{1}{2}{\tilde{I}}^i (h^{-1})_{ij }{\tilde{I}}^j \,\,\, ,\label{h0du} \end{equation} where \begin{equation} (h^{-1})_{ij } \equiv (\delta_{ij}-\frac{1}{2}\epsilon_i^{\,p3}\epsilon_j^{\,q3}\delta_{pq}) \label{hinvij} \end{equation} is the inverse of eq. \eqn{hir}. Similarly to eq. \eqn{Iform}, the linear combination over the dual basis is introduced: \begin{equation} \tilde I= i \tilde I^j {{\tilde{e}}_j}^* \label{tIform} \end{equation} with $\langle {{\tilde{e}}_j}^* |{\tilde{e}}^i\rangle=\delta_j^i$.
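That \eqn{hinvij} indeed inverts \eqn{hir} is immediate: with the third Levi-Civita index fixed, both matrices are diagonal. A two-line numerical sketch (numpy, for illustration):

```python
import numpy as np

# M[i, l] = eps_{il3}: Levi-Civita with the third index fixed to 3
M = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 0.]])

h = np.eye(3) + M @ M.T              # h^{ir}, eq. (hir): equals diag(2, 2, 1)
h_inv = np.eye(3) - 0.5 * M @ M.T    # (h^{-1})_{ij}, eq. (hinvij)
```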
Then, the first order dynamics can be obtained from the Hamiltonian \eqn{h0du} and the following Poisson brackets: \begin{eqnarray} \{u_i,u_j\}&=&0\label{ppdu}\\ \{{\tilde{I}}^i,{\tilde{I}}^j\}&=& {f^{ij\;}}_k{\tilde{I}}^k \label{xxdu}\\ \{u_i,{\tilde{I}}^j\}&=&{\delta_{i}^j u_0 -{f_i}^{\; jk}u_k \;\;\; {\rm or ~ equivalently}\;\;\; \{{\tilde{g}}, {\tilde{I}}^j\}= 2 i {\tilde{g}} {\tilde{e}}^j }\label{ijdu} \end{eqnarray} which are derived from the first order formulation of the action functional. Since the results are slightly different from the IRR case, let us present the derivation in some detail. The first-order action functional in this case reads: \begin{equation} \tilde S_1= \int \langle {\tilde{I}}| {\tilde{g}}^{-1}d {\tilde{g}}\rangle - \int \tilde H dt \equiv \int \tilde \vartheta -\int \tilde H dt \, . \end{equation} Observing that \begin{equation} \langle {\tilde{I}}| {\tilde{g}}^{-1}d {\tilde{g}}\rangle =i {\tilde{I}}^i \delta^k_i \beta_k \,\, , \end{equation} the symplectic form $\tilde \omega$ reads: \begin{equation} \tilde \omega= d \tilde \vartheta= d {\tilde{I}}^j \wedge \beta_j - \frac{1}{2}{\tilde{I}}^j {f_j}^{lm} \beta_l\wedge \beta_m \label{tomega} \end{equation} where the relation $d [{\tilde{g}}^{-1}d {\tilde{g}}] = \frac{i}{2} \beta_i\wedge\beta_j { f^{ij}}_k{\tilde{e}}^k$ has been used. By inverting $\tilde \omega$, one finally finds the Poisson algebra \eqn{ppdu}-\eqn{ijdu}. Hamilton's equations are readily obtained from the Poisson brackets. In particular one gets: \begin{equation} \dot{\tilde{I}}^j= \{{\tilde{I}}^j,\tilde H\}= f^{jk}_l{\tilde{I}}^l{\tilde{I}}^r h^{-1}_{kr} \nonumber \end{equation} which is consistent with eq. \eqn{eqmodu} and is different from zero, expressing the non-invariance of the Hamiltonian under right action.
Vice versa, by introducing the right momenta ${\tilde{J}}^i$ as the Hamiltonian functions of right-invariant vector fields, which in turn generate the left action, and observing that left and right invariant vector fields commute, one readily obtains: \begin{equation} \dot{\tilde{J}}^j= \{{\tilde{J}}^j,\tilde H\}= 0 \end{equation} namely, right momenta are constants of the motion and the Hamiltonian is invariant under left action, as expected. By using \eqn{ijdu} it is possible to find: \begin{equation} {\tilde{g}}^{-1}\dot {\tilde{g}}= 2 i {\tilde{e}}^i (h^{-1})_{ij} {\tilde{I}}^j \nonumber \end{equation} consistently with eq. \eqn{tiqdot}. Right momenta are therefore conserved, as for the rigid rotator, while left momenta are not. Let us remark here that, while the fibers of the tangent bundle $TSB(2,\mathbb{C})$ can be identified, as a vector space, with the Lie algebra of $SB(2,\mathbb{C})$, $\mathfrak{sb}(2,\mathbb{C})\simeq \mathbb{R}^3$, with $\dot {\tilde{Q}}_i$ denoting vector field components, the fibers of the cotangent bundle $T^*SB(2,\mathbb{C})$ are isomorphic to the dual Lie algebra $\mathfrak{sb}(2,\mathbb{C})^*$. As a vector space this is again $\mathbb{R}^3$, but ${\tilde{I}}^j$ are now components of one-forms. This remark will be relevant in the next section where the Abelian structure of $\mathfrak{sb}(2,\mathbb{C})^*$ is deformed. As a group, $T^*SB(2,\mathbb{C})$ is the semi-direct product of $SB(2,\mathbb{C})$ and the Abelian group $\mathbb{R}^3$, with Lie algebra the semi-direct sum represented by \begin{eqnarray} \left[B_i,B_j\right] &=& i {f_{ij}}^k B_k \label{BB}\\ \left[S_i,S_j\right] &=& 0 \label{SS}\\ \left[B_i,S_j\right] &=&i {f_{ij}}^k S_k. \label{BS} \end{eqnarray} Then, as before, the non-trivial Poisson bracket on the fibers of the bundle, \eqn{xxdu}, can be understood in terms of the coadjoint action of the group $SB(2,\mathbb{C})$ on $\mathfrak{sb}(2,\mathbb{C})^*\simeq \mathbb{R}^3$, i.e.
its dual algebra, and it reflects the non-triviality of the Lie bracket \eqn{BB} with the Lie algebra generators $B_i$ identified with linear functions on the dual algebra. To summarize the results of this section, the model that has been introduced is dual to the Isotropic Rigid Rotator in the sense that the configuration space $SB(2,\mathbb{C})$ is dual, as a group, to $SU(2)$. Moreover, as we shall see, the Poisson brackets of the momenta $I_i, {\tilde{I}}^i$ are dually related. In the next section, a generalized action is constructed on the Drinfel'd double group and it encodes the duality relation between the two models and the global symmetries that have been discussed. \section{A New Formalism for the Isotropic Rotator: the Doubled Formulation}\label{gensec} In the previous sections, two dynamical models have been introduced with configuration spaces being Lie groups which are dually related. The Poisson algebras for the respective cotangent bundles, $T^*SU(2)$, $T^*SB(2,\mathbb{C})$, which we restate for convenience in the form: \begin{eqnarray} \{g,g\}&=&0,\;\;\;\;\{I_i,I_j\}= {\epsilon_{ij}}^k I_k , \;\;\;\;\; \{g, I_j\}= 2 i g e_j \label{sudue}\\ \{{\tilde{g}},{\tilde{g}}\}&=&0,\;\;\;\; \{{\tilde{I}}^i,{\tilde{I}}^j\}= {f^{ij}}_k{\tilde{I}}^k ,\;\;\;\; \{{\tilde{g}}, {\tilde{I}}^j\}= 2 i {\tilde{g}} {\tilde{e}}^j \,\,\, , \label{sbdue} \end{eqnarray} have both the structure of a semi-direct sum dualizing the semi-direct structure of the Lie algebras $\mathfrak{su}(2)\dot\oplus\mathbb{R}^3$ and $\mathfrak{sb}(2,\mathbb{C}) \dot \oplus \mathbb{R}^3$. By identifying the dual algebras $\mathbb{R}^3$, in both cases, with an Abelian Lie algebra, we have that each semi-direct sum has the form \eqn{Liesuma}, with $\mathbb{R}^3$ generators satisfying trivial brackets and with a trivial $ad^*$ action: \begin{equation} [X+\xi,Y+\zeta]=[X,Y]-ad^{*}_{X}\zeta + ad^{*}_{Y}\xi \label{sumatriv} \,\,\,.
\end{equation} To this, it is sufficient to expand the group variables, $g,{\tilde{g}}$ \begin{equation} g\simeq \mathds{1}+i \lambda J^i e_i + O(\lambda^2), \;\;\;\; {\tilde{g}}\simeq \mathds{1}+i \mu \tilde{J_i} {\tilde{e}}^i + O(\mu^2) \label{groupvariables} \end{equation} and compute the related Poisson brackets in Eqs. \eqn{sudue}, \eqn{sbdue} to first order in the parameters. One gets: \begin{eqnarray} \{J^i, J^j\}&=& 0,\;\;\;\;\{I_i,I_j\}= {\epsilon_{ij}}^k I_k , \;\;\;\;\;\{J^i, I_j\}= -{\epsilon^{i}}_{jk} J^k \label{suduee}\\ \{\tilde {J}_i, \tilde {J}_j\}&=& 0,\;\;\;\;\{{\tilde{I}}^i,{\tilde{I}}^j\}= {f^{ij}}_k{\tilde{I}}^k ,\;\;\;\;\{\tilde {J}_i, {\tilde{I}}^j\}= - \tilde{J}^k{f_{ki}}^j.\label{sbduee} \end{eqnarray} The new dynamical variables $J^i, \tilde {J}_i$ will be identified in the forthcoming section with $I^i, {\tilde{I}}_i$ by unifying the cotangent bundles $T^*SU(2), T^*SB(2,\mathbb{C})$ into the Drinfel'd double $SL(2,\mathbb{C})$. The brackets \eqn{suduee}, \eqn{sbduee} will then emerge naturally as appropriate limits of the Poisson-Lie brackets on the dual groups $G$, $G^*$, when evaluated at the identity of the respective groups as in eq. \eqn{Liesuma}. \subsection{The Lagrangian Formalism} We are now ready to introduce the new action for the Isotropic Rigid Rotator using the Lagrangian formalism on $TSL(2,\mathbb{C})$. As in the conventional formulation described above, its description can be read as a $(0+1)$-dimensional field theory which is group-valued, with $g(t)\in SU(2)$ now replaced by $\gamma:t\in \mathbb{R} \rightarrow \gamma(t)\in SL(2,\mathbb{C})$. The left invariant one-form on the group manifold is then: \begin{equation} \gamma^{-1} \mathrm{d}\gamma= \gamma^{-1} \dot \gamma\; dt \equiv \dot {\bf Q}^I e_I \mathrm{d}t \label{gammagamma} \end{equation} with $e_I=(e_i, \tilde e^i)$ the $\mathfrak{sl}(2,\mathbb{C})$ basis introduced in eq. \eqn{doubledb} and $ \dot {\bf Q}^I$, the left generalized velocities. 
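The components of $\dot{\bf Q}^I$ can be projected out of $\gamma^{-1}\dot\gamma$ by pairing with the dual generators via \eqn{psd}. A small numerical sketch of this projection (with a random algebra element standing in for $\gamma^{-1}\dot\gamma$, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
t1 = np.array([[0, 1], [0, 0]], dtype=complex)
e = [s1 / 2, s2 / 2, s3 / 2]             # su(2) part of the doubled basis
et = [1j * t1, t1, 1j / 2 * s3]          # sb(2,C) part of the doubled basis

# random element X = a^i e_i + b_i e~^i, standing in for gamma^{-1} gamma_dot
a, b = rng.normal(size=3), rng.normal(size=3)
X = sum(a[i] * e[i] + b[i] * et[i] for i in range(3))

# projections via <u,v> = 2 Im Tr(uv): duality isolates each component
a_rec = np.array([2 * np.trace(X @ et[i]).imag for i in range(3)])
b_rec = np.array([2 * np.trace(X @ e[i]).imag for i in range(3)])
```

Since each subalgebra is isotropic, pairing with $\tilde e^i$ returns the $\mathfrak{su}(2)$ components and pairing with $e_i$ the $\mathfrak{sb}(2,\mathbb{C})$ ones, exactly as in the component formulas for $A^i$ and $B_i$.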
By defining the decomposition $\dot {\bf Q}^I \equiv( A^i, B_i)$ one has: \begin{equation} \gamma^{-1} \dot \gamma \;dt= (A^i e_i + B_i \tilde e^i) dt \nonumber \end{equation} where, however, both components are tangent bundle coordinates for $SL(2,\mathbb{C})$\footnote{We could alternatively interpret $(A^i, B_i)$ as fiber coordinates of the generalized bundle $T\oplus T^*$, with base manifold $SU(2)$, so that the model is an instance of both Generalized Geometry and Doubled Geometry.}. By using the scalar product \eqn{psd}, the components of the generalized velocity can be explicitly obtained: \begin{equation} A^i= 2{\rm Im} \Tr ( \gamma^{-1}\dot \gamma \tilde e^i); \;\;\; B_i= 2{\rm Im} \Tr (\gamma^{-1}\dot \gamma e_i). \nonumber \end{equation} Since \begin{equation} *\gamma^{-1} \mathrm{d}\gamma= \dot {\bf Q}^I e_I \,\, , \nonumber \end{equation} with the Hodge operator defined as previously, namely $* dt = 1$, the proposed action is the following: \begin{equation} {S}= \int_{\mathbb{R}} {L} dt= \frac{1}{2}\int_{\mathbb{R}}\bigl( k_1\Braket{\gamma^{-1}\mathrm{d}\gamma\stackrel{\wedge}{,}* \gamma^{-1}\mathrm{d}\gamma} +k_2 ((\gamma^{-1}\mathrm{d}\gamma \stackrel{\wedge}{,} * \gamma^{-1}\mathrm{d} \gamma)) \bigr), \label{newac} \end{equation} where $k_1,k_2$ are real parameters, and $ \Braket{\gamma^{-1}\mathrm{d}\gamma \stackrel{\wedge}{,} * \gamma^{-1}\mathrm{d}\gamma}$ is defined in terms of the scalar product in eq. \eqn{Lprod} while $((\gamma^{-1}\mathrm{d}\gamma \stackrel{\wedge}{,} * \gamma^{-1}\mathrm{d}\gamma))$ is defined in terms of the scalar product in eq. \eqn{Rprod}, namely: \begin{eqnarray} \Braket{\gamma^{-1}\mathrm{d}\gamma \stackrel{\wedge}{,} * \gamma^{-1}\mathrm{d}\gamma} &=& \dot {\bf Q}^I \dot {\bf Q}^J \Braket {e_I,e_J}= \dot {\bf Q}^I \dot {\bf Q}^J \eta_{IJ}\\ ((\gamma^{-1}\mathrm{d}\gamma\stackrel{\wedge}{,} *\gamma^{-1}\mathrm{d}\gamma))&=& \dot {\bf Q}^I \dot {\bf Q}^J ((e_I,e_J))= \dot {\bf Q}^I \dot {\bf Q}^J \mathcal{H}_{IJ}.
\label{prodotto2} \end{eqnarray} Explicitly, in terms of the chosen splitting of the Drinfel'd double $\mathfrak{sl}(2,\mathbb{C})=\mathfrak{su}(2) \bowtie \mathfrak{sb}(2,\mathbb{C})$, one has, up to an overall constant: \begin{equation} \label{explito} {L}= \frac{1}{2} ( k\,\Braket{e_I,e_J}\dot {\bf Q}^I \dot {\bf Q}^J + ((e_I,e_J))\dot {\bf Q}^I \dot {\bf Q}^J )= \frac{1}{2} ( k\, {\cal \eta}_{IJ} + {\cal H}_{IJ})\dot {\bf Q}^I \dot {\bf Q}^J \end{equation} with \begin{equation} k\, {\cal \eta}_{IJ}+ {\cal H}_{IJ}= \begin{pmatrix} \delta_{ij}& k \delta_i^j + {\epsilon_{3i}}^{j} \\ -{\epsilon^i}_{j3}+k \delta_i^j& \delta^{ij}+ {\epsilon^i}_{l3} \epsilon^j_{k3}\delta^{lk} \end{pmatrix} \nonumber \end{equation} where we have set $k_1/k_2 \equiv k$. This leads to: \begin{equation} \label{newlag} {L} = \frac{1}{2} \bigl[ \delta_{ij} A^i A^j + ( k \delta_{i}^j+ {\epsilon_{i}}^{j3}) A^i B_j + ( k \delta^{i}_j- {\epsilon^{i}}_{j3}) B_i A^j + ( \delta^{ij}+\delta^{lk} \epsilon^i_{l3} \epsilon^j_{k3}) B_i B_j \bigr] . \end{equation} The Lagrangian one-form is therefore: \begin{equation} {\boldsymbol{\theta}}_{L}=(k\, {\cal \eta}_{IJ}+ {\cal H}_{IJ})\dot {\bf Q}^I {\boldsymbol{\alpha}}^J \end{equation} and the equations of motion read: \begin{equation} {\sf L}_\Gamma\dot {\bf Q}^I( k\, {\cal \eta}_{IJ}+ \, {\cal H}_{IJ}) -\dot {\bf Q}^P \dot {\bf Q}^Q C_{IP}^K ( k\, {\cal \eta}_{QK}+ \,{\cal H}_{QK}) =0 \label{eomd} \end{equation} where $C_{IP}^K$ are the structure constants of $\mathfrak{sl}(2,\mathbb{C})$. The matrix $ k\, {\cal \eta}_{IJ}+ \, {\cal H}_{IJ}$ is non-singular provided $k^2 \ne 1$, which will be assumed from now on. \subsection{Recovering the Standard Description}\label{standardlag} The standard dynamics of the isotropic rigid rotator is now shown to be recovered from the new Lagrangian.
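Before proceeding, the non-degeneracy condition $k^2\neq 1$ stated at the end of the previous subsection can be verified by brute force. The short \texttt{sympy} sketch below, which is only a verification aid, assembles the $6\times 6$ matrix $k\,\eta_{IJ}+\mathcal{H}_{IJ}$ of eq. \eqn{explito}; realizing the Levi-Civita symbols with one index fixed to $3$ as an explicit antisymmetric matrix is our (standard) convention:

```python
import sympy as sp

k = sp.symbols('k')
I3 = sp.eye(3)
# eps^i_{j3} realized as an explicit antisymmetric matrix (eps_123 = 1)
Jm = sp.Matrix([[0, 1, 0], [-1, 0, 0], [0, 0, 0]])
D = -Jm * Jm              # delta^{lk} eps^i_{l3} eps^j_{k3} = diag(1, 1, 0)

# the 6x6 matrix k*eta + H of the Lagrangian, block by block
M = sp.Matrix(sp.BlockMatrix([[I3, k * I3 + Jm],
                              [k * I3 - Jm, I3 + D]]))

# det = (1 - k^2)^3, hence invertibility holds exactly for k^2 != 1
assert sp.simplify(M.det() - (1 - k**2)**3) == 0

# block closed form of the inverse, regular away from k^2 = 1
Minv = sp.Matrix(sp.BlockMatrix([[I3 + D, -(k * I3 + Jm)],
                                 [-(k * I3 - Jm), I3]])) / (1 - k**2)
assert sp.simplify(M * Minv - sp.eye(6)) == sp.zeros(6, 6)
```

The determinant equals $(1-k^2)^3$, and the inverse retains an equally simple block structure.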
To be definite, let us fix a local decomposition for the elements of the double group $SL(2,\mathbb{C})$: $\gamma= \tilde{g}g$, with $g \in SU(2)$ and $\tilde{g} \in SB(2,\mathbb{C})$. From eq. \eqref{newac}, one can see that $ L$ is invariant under left and right action of the group $SU(2)$, but only under left action of the group $SB(2,\mathbb{C})$, given by \begin{equation} SB(2,\mathbb{C})_L: \gamma \rightarrow \tilde{h} \gamma=\tilde{h}\tilde{g}g, \quad \forall\tilde{h} \in SB(2,\mathbb{C}). \end{equation} In order to recover the usual description of the rotator, the $SB(2,\mathbb{C})_L$ invariance has to be promoted to a gauge symmetry. One has then: \begin{equation}\label{sbgauge} \gamma^{-1}d\gamma\rightarrow \gamma^{-1}D_{\tilde C}\gamma= (\gamma^{-1}\dot\gamma+\gamma^{-1}\tilde C\gamma) dt \end{equation} with \begin{equation} \tilde C= \tilde {C_i}(t)\tilde e^i \label{gaug} \end{equation} the gauge connection. The following split can be performed: \begin{equation} \gamma^{-1}\dot\gamma+\gamma^{-1}\tilde C\gamma=\gamma^{-1}\dot\gamma+ \tilde{C_i} \gamma^{-1} \tilde e^i \gamma= \mathcal{U}_i\tilde e^i + \mathcal{W}^i e_i \nonumber \,\,\,. \end{equation} Then, eq. \eqn{eupedown} implies: \begin{eqnarray} \mathcal{U}_i&=& 2{\rm Im}\Tr[(\gamma^{-1}\dot\gamma+\tilde{C_j} \gamma^{-1} \tilde e^j \gamma) e_i]= B_i+ \tilde{C_j} \;2{\rm Im}\Tr(\gamma^{-1} \tilde e^j \gamma e_i) \label{Ui} \\ \mathcal{W}^i&=& 2{\rm Im}\Tr[(\gamma^{-1}\dot\gamma+\tilde{C_j}\gamma^{-1} \tilde e^j \gamma) \tilde e^i]= A^i+ \tilde{C_j} \;2{\rm Im}\Tr(\gamma^{-1} \tilde e^j \gamma \tilde e^i) \,\,\, .\label{Vi} \end{eqnarray} Let us explicitly compute the two terms on the r.h.s. of Eqs. \eqn{Ui}, \eqn{Vi}, corresponding to the adjoint action of $SL(2,\mathbb{C})$, in the chosen parametrization.
After observing that the infinitesimal adjoint action of $g, \tilde g$ on $e_j, \tilde e^j$ is represented by the Lie brackets \eqn{su}, \eqn{sb}, \eqn{liemis}, one gets: \begin{equation} \Tr(\gamma^{-1} \tilde e^j \gamma e_i) = \Tr[(\tilde g^{-1} \tilde e^j \tilde g)(g e_i g^{-1})] = \Tr [(Ad_{\tilde g}\tilde e^j) (Ad_{g^{-1}} e_i )]= \Tr[( { {\rm a}({\tilde{g}})}^j_k\tilde e^k)({ {\rm h^{-1}}(g)}_{i}^ s e_s)] \nonumber \end{equation} so that, from \eqn{psd}, we have \begin{equation} 2{\rm Im}\Tr(\gamma^{-1} \tilde e^j \gamma e_i)= { {\rm a}^j}_k { {\rm h^{-1}}_i}^ s\delta_s^k \nonumber \end{equation} which yields: \begin{equation} \mathcal{U}_i= B_i+ \tilde{C_j} { {\rm a}}^j_k ( { {\rm h^{-1}}})_{i }^k. \end{equation} Analogously, one can compute: \begin{eqnarray} \Tr(\gamma^{-1} \tilde e^j \gamma \tilde e^i)&=& \Tr[(\tilde g^{-1} \tilde e^j \tilde g)(g \tilde e^i g^{-1})] = \Tr [(Ad_{\tilde g}\tilde e^j) (Ad_{g^{-1}} \tilde e^i )] \nonumber\\ &=& \Tr[( { {\rm a}({\tilde{g}})}^j_k\tilde e^k)({ {\rm b^{-1}}(g)}_s^i \tilde e^s+{ {\rm d^{-1}}(g)}^{i s} e_s)] \nonumber \end{eqnarray} and, from \eqn{psd}: \begin{equation} 2{\rm Im}\Tr(\gamma^{-1} \tilde e^j \gamma \tilde e^i)= { {\rm a}({\tilde{g}})}^j_k { {\rm d^{-1}}(g)}^{i s}\delta^k_s \nonumber \end{equation} that is \begin{equation} \mathcal{W}^i= A^i+ \tilde{C_j} { {\rm a}}^j_k { {\rm d^{-1}}}^{i k}.
\label{calU} \end{equation} After replacing the Lagrangian in \eqn{newac} with the gauged Lagrangian \begin{equation} L_{\tilde C}=\frac{1}{2}\bigl[ k \Braket{\gamma^{-1}\mathrm{D}\gamma\stackrel{\wedge}{,}* \gamma^{-1}\mathrm{D}\gamma} +((\gamma^{-1}\mathrm{D}\gamma \stackrel{\wedge}{,} * \gamma^{-1}\mathrm{D}\gamma)) \bigr] \,\,\, , \end{equation} one gets: \begin{equation} L_{\tilde C}=\frac{1}{2} (k \,{\cal \eta}_{IJ} + {\cal H}_{IJ})\dot {\bf {\widehat Q}}^I \dot {\bf \widehat Q}^J \end{equation} with \begin{equation} \dot {\bf \widehat Q}^I= ({\mathcal{W}^i, \mathcal{U}_i}) \end{equation} namely \begin{equation} L_{\tilde C}=\frac{1}{2} \bigl[ \delta_{ij} \mathcal{W}^i \mathcal{W}^j + 2 (k \delta_{i}^j+ \epsilon_{i}^{j3}) \mathcal{W}^i \mathcal{U}_j +( \delta^{ij}+\delta^{lk} \epsilon^i_{l3} \epsilon^j_{k3}) \mathcal{U}_i \mathcal{U}_j \bigr] \,\, . \end{equation} Let us now introduce the combination: \begin{equation}\label{tildeV} \widehat{\mathcal{ W}}^i= \mathcal{ W}^i+ (k \delta^{is} -\epsilon^{is}_3) \mathcal{U}_s \,\,\, , \end{equation} which allows us to rewrite the Lagrangian $L_{\tilde C}$ as follows: \begin{equation} L_{\tilde C}=\frac{1}{2} \bigl[ \delta_{ij} \widehat{\mathcal{W}}^i \widehat{\mathcal{W}}^j + (1-k^2)\delta^{ij}{\mathcal U}_i{\mathcal U}_j\bigr] \,\,\,. \end{equation} This can be used to write the partition function of the system under analysis as: \begin{equation} Z=\int \mathcal{D}g \mathcal{D}\tilde{g}\mathcal{D}{\tilde C} e^{-S_{\tilde C}} \label{partitt} \end{equation} and to integrate over the gauge potential. The integration with respect to $\tilde{C_i}$ can then be traded for the integration with respect to ${\mathcal U}_i$. The functional integral \eqref{partitt} can be performed by changing the integration variable.
Therefore, by inverting the relation \eqref{calU}, one can calculate $ \det\biggl(\frac{\delta \tilde{C_i}}{\delta {\mathcal U}_j}\biggr)$ and see that it is a constant, because the matrices involved in the definition of ${\mathcal U}_i$ are all invertible. Consequently, the functional integral in the partition function becomes: \begin{equation} Z=\int \mathcal{D}g \mathcal{D}\tilde{g} e^{-\frac{1}{2}\int_{\mathbb{R}}\mathrm{d}t ( \delta_{ij}\widehat{\mathcal{W}}^i \widehat{\mathcal{W}}^j)} \int \mathcal{D}{\mathcal U} e^{-\frac{1}{2}\int_{\mathbb{R}}\mathrm{d}t(1-k^2)\delta^{ij}{\mathcal U}_i {\mathcal U}_j}, \label{patint} \end{equation} where \begin{equation} \int \mathcal{D}{\mathcal U} e^{-\frac{1}{2}(1-k^2) \int_{\mathbb{R}}\mathrm{d}t\,\delta^{ij}{\mathcal U}_i {\mathcal U}_j}=(2\pi)^{\frac{3}{2}}\bigl(\det((1-k^2)\,\delta^{ij})\bigr)^{-\frac{1}{2}}. \label{fint} \end{equation} It is worth noticing that, in \eqn{tildeV}, the tensor $T^{ij}=k \delta^{ij}- \epsilon^{ij3}$ defines, for $k\neq 0,$ a constant invertible map $T: \mathfrak{sb}(2,\mathbb{C}) \rightarrow \mathfrak{su}(2),$ so one can introduce the endomorphism $E$ of $\mathfrak{d}= \mathfrak{su}(2) \oplus \mathfrak{sb}(2,\mathbb{C})$ which preserves the splitting, defined by the constant matrix: \begin{equation} E^I_J= \begin{pmatrix} \delta^i_j & T^{ij} \\ -(T^{-1})_{ij} & \delta_i^j \end{pmatrix} \label{endom} \end{equation} This acts on any element of $\mathfrak{d}$ in the following way: \begin{equation} \begin{pmatrix} \delta^i_j & T^{ij} \\ -(T^{-1})_{ij} & \delta_i^j \end{pmatrix} \begin{pmatrix} \mathcal{W}^j \\ \mathcal{U}_j \end{pmatrix} = \begin{pmatrix} \widehat{\mathcal{W}}^i \\ \widehat{\mathcal{U}}_i \end{pmatrix} \nonumber \end{equation} where $\widehat{\mathcal{W}}^i$ is given by \eqn{tildeV} and $\widehat{\mathcal{U}}_i=\mathcal{U}_i - (T^{-1})_{ij} \mathcal{W}^j.$ We can write down the left invariant forms $$g'^{-1} \mathrm{d} g' = \widehat{\mathcal{W}}^i e_i \mathrm{d}t$$ and $$\tilde{g}'^{-1} \mathrm{d}
\tilde{g}' = \widehat{\mathcal{U}}_i \tilde{e}^i \mathrm{d}t.$$ The constant endomorphism \eqn{endom} induces a map $\exp(E): SL(2,\mathbb{C}) \rightarrow SL(2,\mathbb{C})$ such that $\gamma= \tilde{g}g \rightarrow \gamma' = \tilde{g}' g'.$ Then, one can see that the path integral measure can be transformed giving $\mathcal{D}g \mathcal{D}\tilde{g} = \mathcal{D}g' \mathcal{D}\tilde{g}'$ up to a constant factor, i.e. the determinant of the constant map $\exp(E)$. Thus the path integral \eqn{patint} can be written, up to constant factors, as: \begin{equation} Z=\int \mathcal{D}\tilde{g}' \int \mathcal{D}g' e^{-\frac{1}{2} \int_{\mathbb{R}} \Tr[g'^{-1} \mathrm{d} g' \wedge *g'^{-1} \mathrm{d} g']} \end{equation} where the path integral over $\tilde{g}'$ gives a constant and the other integral is exactly the partition function of the isotropic rigid rotator (IRR), defined up to a constant factor. \subsection{Recovering the Dual Model}\label{recdu} The dual model described by the action functional \eqn{dualag} can be recovered along the same lines as in the previous section. We consider the parent action \eqn{newac}, with the same parametrization as before, namely $\gamma=\tilde g g$, and explore the global invariance under the right $SU(2)$ action \begin{equation} SU(2)_R: \gamma \rightarrow \gamma h=\tilde{g}g h , \quad \forall {h} \in SU(2). \end{equation} Hence, in complete analogy with eq. \eqn{sbgauge}, we gauge this symmetry by introducing the $\mathfrak{su}(2)$-valued connection one-form $ C= C^i (t) e_i$, so that \begin{equation}\label{sugauge} \gamma^{-1}d\gamma\rightarrow \gamma^{-1}D\gamma= (\gamma^{-1}\dot\gamma+\gamma^{-1} C\gamma) dt. \end{equation} Notice that in this case we could gauge the left $SU(2)$ action, in which case it would be convenient to use the other parametrization of $\gamma$ as $\gamma =k \tilde k , ~k\in SU(2), \tilde k\in SB(2,\mathbb{C})$.
From \eqn{sugauge} we have: \begin{equation} \gamma^{-1}D\gamma=\tilde {\mathcal U}_i {\tilde e}^i + \tilde {\mathcal W}^i e_i \end{equation} with \begin{eqnarray} \tilde{\mathcal U}_i&=& 2{\rm Im}\Tr[(\gamma^{-1}\dot\gamma+ C^j \gamma^{-1} e_j \gamma)e_i]= B_i+ C^j \;2{\rm Im}\Tr(\gamma^{-1} { e}_j \gamma e_i) \label{tUi} \\ \tilde{\mathcal W}^i&=& 2{\rm Im}\Tr[(\gamma^{-1}\dot\gamma+C^j \gamma^{-1} e_j \gamma) \tilde e^i]= A^i+ C^j \;2{\rm Im}\Tr(\gamma^{-1} e_j \gamma \tilde e^i) \,\,\, .\label{tVi} \end{eqnarray} By using the adjoint action of $SL(2,\mathbb{C})$ on $e_i, \tilde e^i$, we obtain: \begin{equation} \Tr(\gamma^{-1} e_j \gamma e_i) = \Tr[(\tilde g^{-1} e_j \tilde g)(g e_i g^{-1})] = \Tr [(Ad_{\tilde g}e_j) (Ad_{g^{-1}} e_i )]= \Tr[\left( { {\rm l}({\tilde{g}})}_j^k e_k + {\rm m }_{jk}(\tilde g)\tilde e^k\right)\left({ {\rm h^{-1}}(g)}_{i}^ s e_s\right)] \end{equation} so that, from \eqn{psd}, one gets: \begin{equation} 2{\rm Im}\Tr(\gamma^{-1} e_j \gamma e_i) = {\rm m}_{jk} {( {\rm h^{-1}})_i}^ k \nonumber \end{equation} which yields: \begin{equation} \tilde{\mathcal{U}}_i= B_i+ C^j {\rm m}_{jk} { ({\rm h^{-1}})_i}^ k.
\label{caltU} \end{equation} Analogously, one can compute: \begin{eqnarray} \Tr(\gamma^{-1}e_j \gamma \tilde e^i)&=& \Tr[(\tilde g^{-1} e_j \tilde g)(g \tilde e^i g^{-1})] = \Tr [(Ad_{\tilde g}e_j) (Ad_{g^{-1}} \tilde e^i )] \nonumber\\ &=& \Tr[( { {\rm l}({\tilde{g}})}_j^ke_k+{\rm m }_{jk}(\tilde g) \tilde e^k )({ {\rm b^{-1}}(g)}_s^i \tilde e^s+{ {\rm d^{-1}}(g)}^{i s} e_s)] \nonumber \end{eqnarray} and, from \eqn{psd}, \begin{equation} 2{\rm Im}\Tr(\gamma^{-1}e_j \gamma \tilde e^i)= {\rm l}_j^k ({\rm b^{-1}})_k^i + { {\rm m}}_j^k ( {\rm d^{-1}})^{i k} \nonumber \end{equation} that is \begin{equation} \tilde{ \mathcal{W}}^i= A^i+ C^j\left ( {\rm l}_j^k ({\rm b^{-1}})_k^i + { {\rm m}}_j^k ( {\rm d^{-1}})^{i k} \right). \label{caltW} \end{equation} The gauged Lagrangian then reads \begin{equation} { L}_{ C}= \frac{1}{2}\left(k {\cal \eta}_{IJ}+ {\cal H}_{IJ}\right){\dot{\tilde {\mathbf Q}}^I} {\dot{\tilde {\mathbf Q}}^J} \end{equation} with $\dot{\tilde {\mathbf { Q}}}^I \equiv \left( \tilde{\mathcal{W}}^i, \tilde{\mathcal{U}}_i\right) $ so that \begin{eqnarray} { L}_{ C}&=& \frac{1}{2}\left[\delta_{ij} \tilde{\mathcal{W}}^i \tilde{\mathcal{W}}^j + 2 (k \delta_i^j +{ \epsilon_i}^{j3}) \tilde{\mathcal{W}}^i \tilde{\mathcal{U}}_j + (\delta^{ij} + \delta^{lk}{ \epsilon^i}_{l3}{\epsilon^j}_{k3}) \tilde{\mathcal{U}}_i \tilde{\mathcal{U}}_j \right] \nonumber\\ &=& \frac{1}{2}\left[\delta_{ij} \tilde{\mathcal{W}}^i \tilde{\mathcal{W}}^j + 2 (k \delta_i^j +{ \epsilon_i}^{j3}) \tilde{\mathcal{W}}^i \tilde{\mathcal{U}}_j + h^{ij} \tilde{\mathcal{U}}_i \tilde{\mathcal{U}}_j \right].
\end{eqnarray} where $h^{ij} \equiv \delta^{ij} + \delta^{lk}{ \epsilon^i}_{l3}{\epsilon^j}_{k3}$ has been introduced. We can proceed as in the previous section and introduce \begin{equation}\label{tildeUdu} \breve{\mathcal{ U}}_i= \tilde{ \mathcal{ U}}_i+ \tilde{ \mathcal{W}}^s {T_s}^l (h^{-1})_{il} \,\,\, , \end{equation} with \begin{equation} {T_s}^l = k\delta_s^l+{\epsilon_s}^{l3} \end{equation} and the inverse metric \begin{equation} (h^{-1})_{il}= \delta_{il}-\frac{1}{2}\delta_{pq}{\epsilon_i}^{p3} {\epsilon_l}^{q3} \end{equation} which allows us to rewrite the Lagrangian ${L}_C$ as follows: \begin{equation} {L}_{ C}=\frac{1}{2} \left[ \left(\delta_{ij}- {T_i}^k {T_j}^l(h^{-1})_{kl} \right) \tilde{\mathcal{W}}^i \tilde{\mathcal{W}}^j +h^{ij} \breve{\mathcal{U}}_i \breve{\mathcal{U}}_j \right]\,\,\,. \end{equation} Thus we can write the partition function of the system under analysis as: \begin{equation} Z=\int \mathcal{D}g \mathcal{D}\tilde{g}\mathcal{D}{ C} e^{-S_{ C}} \label{partittdu} \end{equation} and integrate over the gauge potential. Let us stress that the difference with respect to the previous case is that now the gauge connection is an $SU(2)$ one, so that the integration over ${C}^i$ can be traded for the integration over $\tilde{\mathcal{W}}^i $. By repeating exactly the same steps as in Sec. \ref{standardlag}, one arrives at: \begin{equation} Z=\int \mathcal{D}g \mathcal{D}\tilde{g} e^{-\frac{1}{2}\int_{\mathbb{R}}\mathrm{d}t h^{ij}\breve{\mathcal U}_i \breve{\mathcal U}_j} \int \mathcal{D}\tilde{\mathcal{W}} e^{-\frac{1}{2}\int_{\mathbb{R}}\mathrm{d}t \left(\delta_{ij}- {T_i}^k {T_j}^l(h^{-1})_{kl} \right) \tilde{\mathcal{W}}^i \tilde{\mathcal{W}}^j}, \label{patintdu} \end{equation} and \begin{equation} \int \mathcal{D}\tilde{\mathcal{W}} e^{-\frac{1}{2}\int_{\mathbb{R}}\mathrm{d}t \left(\delta_{ij}- {T_i}^k {T_j}^l(h^{-1})_{kl} \right) \tilde{\mathcal{W}}^i \tilde{\mathcal{W}}^j}=(2\pi)^{\frac{3}{2}}\left(\det (\delta_{ij}- {T_i}^k {T_j}^l(h^{-1})_{kl} ) \right) ^{-\frac{1}{2}}.
\label{fintdu} \end{equation} It is worth noticing that, as in \eqn{tildeV}, also in \eqn{tildeUdu} the tensor $\breve{T}_{ij}=(h^{-1})_{il} T^l_j$ defines a constant invertible map $\breve{T}:\mathfrak{su}(2) \rightarrow \mathfrak{sb}(2,\mathbb{C}),$ so that we can use the split-preserving endomorphism $\breve{E}$ of $\mathfrak{d}= \mathfrak{su}(2) \oplus \mathfrak{sb}(2,\mathbb{C})$, defined below, to get: \begin{equation} \breve{E}\begin{pmatrix} \tilde{\mathcal{W}}^j \\ \tilde{\mathcal{U}}_j \end{pmatrix}=\begin{pmatrix} \delta^i_j & -(\breve{T}^{-1})^{ij} \\ \breve{T}_{ij} & \delta_i^j \end{pmatrix} \begin{pmatrix} \tilde{\mathcal{W}}^j \\ \tilde{\mathcal{U}}_j \end{pmatrix} = \begin{pmatrix} \breve{\mathcal{W}}^i \\ \breve{\mathcal{U}}_i \end{pmatrix} \end{equation} where $\breve{\mathcal{U}}_i$ is given by \eqn{tildeUdu} and $\breve{\mathcal{W}}^i=\tilde{\mathcal{W}}^i - (\breve{T}^{-1})^{ij} \tilde{\mathcal{U}}_j.$ The constant endomorphism $\breve{E}$ induces a map $\exp(\breve{E}): SL(2,\mathbb{C}) \rightarrow SL(2,\mathbb{C})$ which preserves the chosen parametrization, namely, $\exp(\breve{E}): \gamma= \tilde{g}g \rightarrow \gamma' = \tilde{g}' g'.$ Then, one can see that the path integral measure can be transformed giving $\mathcal{D}g \mathcal{D}\tilde{g} = \mathcal{D}g' \mathcal{D}\tilde{g}'$ up to a constant factor, i.e. the determinant of the constant map $\exp(\breve{E})$. 
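As a small sanity check of the formulas above, one can verify symbolically that $(h^{-1})_{il}$ indeed inverts $h^{ij}$, and that $\breve{T}$ has non-vanishing determinant for $k\neq 0$. The \texttt{sympy} sketch below (with the Levi-Civita symbols with a fixed index $3$ realized as an explicit matrix, which is an assumption on conventions) confirms both:

```python
import sympy as sp

k = sp.symbols('k')
Jm = sp.Matrix([[0, 1, 0], [-1, 0, 0], [0, 0, 0]])   # eps with third index 3
D = -Jm * Jm                                          # diag(1, 1, 0)

h = sp.eye(3) + D                # h^{ij} = diag(2, 2, 1)
h_inv = sp.eye(3) - D / 2        # claimed inverse metric
assert h * h_inv == sp.eye(3)

T = k * sp.eye(3) + Jm           # T_s^l = k delta_s^l + eps_s^{l3}
Tb = h_inv * T                   # breve{T} = h^{-1} T
# det(breve T) = k (k^2 + 1)/4, so breve{T} is invertible precisely for k != 0
assert sp.simplify(Tb.det() - k * (k**2 + 1) / 4) == 0
```

In particular $\det\breve{T}$ vanishes only at $k=0$, consistently with the invertibility statement above.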
Finally, by introducing the left invariant forms $$g'^{-1} \mathrm{d} g' = \breve{\mathcal{W}}^i e_i \mathrm{d}t$$ and $$\tilde{g}'^{-1} \mathrm{d} \tilde{g}' = \breve{\mathcal{U}}_i \tilde{e}^i \mathrm{d}t$$ the path integral \eqn{patintdu} can be written, up to constant factors, as: \begin{equation} Z=\int \mathcal{D}{g}' \int \mathcal{D}\tilde{g}' e^{-\frac{1}{2}\int_{\mathbb{R}}\Tr [{\tilde g}'^{-1} \mathrm{d} {\tilde g}' \wedge *\tilde{g}'^{-1} \mathrm{d}\tilde{ g}' ]} \end{equation} where the path integral over ${g}'$ gives a constant and the other integral is exactly the partition function of the dual model defined up to a constant factor. \subsection{The Hamiltonian Formalism}\label{hamform} In the doubled description introduced above, the left generalized momenta are represented by: \begin{equation} {\bf P}_I = \frac{\partial \widehat L}{\partial \dot {\bf Q}^I}= ( k\,{\cal \eta}_{IJ}+ {\cal H}_{IJ})\dot {\bf Q}^J \label{genP} \end{equation} The Hamiltonian then reads: \begin{equation} \widehat{H}= ({\bf P}_I \dot{\bf Q}^I - \widehat L)_{{\bf P}}= \frac{1}{2} [( k\,{\cal \eta}+ {\cal H})^{-1}]^{IJ} {\bf P}_I{\bf P}_J \nonumber \end{equation} with \begin{equation} [( k\,{\cal \eta} + {\cal H})^{-1}]^{IJ}= (1-k^2)^{-1} \begin{pmatrix} \delta^{ij}+ \epsilon^i_{l3} \epsilon^j_{k3}\delta^{lk}& -{\epsilon^i}_{j3}-k \delta^i_j \\ {\epsilon_i}^{j3}-k \delta_i^j& \delta_{ij} \end{pmatrix} \,\,. \nonumber \end{equation} From \eqn{genP} one can explicitly write the generalized momenta ${\bf P}_I$ in terms of the components of $\dot{\bf Q}^I\equiv(A^i, B_j)$, finding: \begin{equation} {\bf P}_I \equiv ( I_i, {\tilde{I}}^i)=\left(\delta_{ij} A^j+(k\delta_i^j+ \epsilon_i^{j3})B_j, (k \delta^i_j-\epsilon^i_{j3})A^j+[\delta^{ij}+\delta^{lk}\epsilon^i_{l3}\epsilon^j_{k3}]B_j\right).
\nonumber \end{equation} In terms of the components $I_i, {\tilde{I}}^j$, it turns out that: \begin{eqnarray} \widehat{H} &=&\frac{1}{2}(1-k^2)^{-1}\left( \delta^{ij} I_i I_j + \delta_{ij} {\tilde{I}}^i {\tilde{I}}^j + \epsilon^i_{l3} \epsilon^j_{k3}\delta^{lk} I_i I_j -2 k \delta^i_j I_i {\tilde{I}}^j + 2\epsilon_i^{j3} {\tilde{I}}^iI_j \right) \nonumber \\ &=& \frac{1}{2}(1-k^2)^{-1} \left ( (1-k^2) \delta^{ij} I_i I_j + \delta_{ij}({\tilde{I}}^i -I_s(k \delta^{si}+ {\epsilon^{si}}_3)) ({\tilde{I}}^j -I_r(k \delta^{rj}+ {\epsilon^{rj}}_3))\right) \nonumber \end{eqnarray} which can be rewritten as \begin{equation} {\widehat{H}} =\frac{1}{2}(1-k ^2)^{-1}\left((1-k ^2) \delta^{ij} I_i I_j + \delta_{ij}\tilde{\cal I}^i \tilde{\cal I}^j\right) \nonumber \end{equation} after having defined \begin{equation} \tilde{\cal I}^i \equiv {\tilde{I}}^i -I_s(k \delta^{si}+ {\epsilon^{si}}_3)= \delta^{ij} (1-k^2) B_j. \nonumber \end{equation} In order to obtain the Hamilton equations for the generalized model on the Drinfel'd double, one can proceed as in the previous section with the determination of Poisson brackets from the first-order action functional: \begin{equation} {\widehat{\mathcal{S}}}= \int \langle {\bf P} | \gamma^{-1}d \gamma\rangle - \int \widehat{H} dt \equiv \int {\boldsymbol{\theta}} -\int \widehat{H} dt \nonumber \end{equation} with \begin{eqnarray} {\bf P}&=& i\; {\bf P}_I {e^I}^*= i\;(I_i{e^i}^* + {\tilde{I}}_i {\tilde{e}}_i^*) \nonumber \\ \gamma^{-1}d\gamma&=& i\, {\boldsymbol{\alpha}}^J e_J= (\alpha^k e_k + \beta_k {\tilde{e}}^k) \nonumber \,\, . \end{eqnarray} We stress once again that ${\bf P_I}$, $ {\boldsymbol{\alpha}}^J$ are respectively generalized momenta and basis one-forms on the doubled configuration space $SL(2,\mathbb{C})$. 
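The completion of the square performed above in rewriting $\widehat H$ can be checked symbolically. In the \texttt{sympy} sketch below the two quadratic forms in $(I_i,{\tilde{I}}^i)$, stripped of the common prefactor $\frac{1}{2}(1-k^2)^{-1}$, are compared; the explicit matrix realization of the Levi-Civita symbols is, as elsewhere, our assumption on conventions:

```python
import sympy as sp

k = sp.symbols('k')
I1, I2, I3s, J1, J2, J3 = sp.symbols('I1 I2 I3 J1 J2 J3', real=True)
Iv = sp.Matrix([I1, I2, I3s])            # momenta I_i
Jv = sp.Matrix([J1, J2, J3])             # momenta tilde{I}^i
Jm = sp.Matrix([[0, 1, 0], [-1, 0, 0], [0, 0, 0]])   # eps with third index 3
D = -Jm * Jm

# expanded form of the quadratic part of hat{H}
expanded = (Iv.dot(Iv) + Jv.dot(Jv) + (Iv.T * D * Iv)[0]
            - 2 * k * Iv.dot(Jv) + 2 * (Jv.T * Jm * Iv)[0])

# completed-square form: (1-k^2) I.I + |tilde{I} - I (k delta + eps)|^2
v = Jv - (k * sp.eye(3) + Jm.T) * Iv
square = (1 - k**2) * Iv.dot(Iv) + v.dot(v)

assert sp.expand(expanded - square) == 0
```

The two forms coincide identically in $k$, confirming the definition of $\tilde{\cal I}^i$ as the shifted momentum.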
The symplectic form on $T^*SL(2,\mathbb{C})\simeq SL(2,\mathbb{C})\times \mathfrak{sl}(2,\mathbb{C})^*$ is therefore: \begin{eqnarray} {\boldsymbol{\omega}}= d {\boldsymbol{\theta}}&=& dI_i\wedge \alpha^i + d{\tilde{I}}^i\wedge \beta_i +\frac{1}{2}{\tilde{I}}^l\left(\alpha^j\wedge\beta_k {\epsilon^k}_{jl}- \beta_j\wedge \alpha^k {\epsilon^j}_{kl}- \beta_j\wedge\beta_k {f^{jk}}_l\right) \nonumber\\ &+& \frac{1}{2}I_l\left(-\alpha^j\wedge\alpha^k {\epsilon^l}_{jk}+ \alpha^j\wedge \beta_k {f^{lk}}_{j}- \beta_j\wedge\alpha^k {f^{lj}}_k\right) \nonumber \end{eqnarray} which yields for the generalized momenta the Poisson brackets: \begin{eqnarray} \label{remark} \{I_i, I_j\}&=& {\epsilon_{ij}}^k I_k \\ \{{\tilde{I}}^i, {\tilde{I}}^j\}&=& {f^{ij}}_k {\tilde{I}}^k\\ \{I_i, {\tilde{I}}^j\}&=& {\epsilon^j}_{il} {\tilde{I}}^l- I_l {f^{lj}}_i \;\;\;\;\{{\tilde{I}}^i, I_j\}= -{\epsilon^i}_{jl} {\tilde{I}}^l+ I_l {f^{li}}_j \label{remark3} \end{eqnarray} while the Poisson brackets between momenta and configuration space variables $g,{\tilde{g}}$ are unchanged with respect to $T^*SU(2), T^*SB(2,\mathbb{C})$. We shall come back to the Poisson algebra \eqn{remark} in the next subsection. In order to derive the Hamilton equations, it is sufficient to write, in compact form: \begin{equation} \{{\bf P}_I, {\bf P}_J\}= C_{IJ}^K {\bf P}_K \nonumber \end{equation} with $C_{IJ}^K$ the structure constants specified above. We have then: \begin{equation} \frac{d}{dt} {\bf P}_I= \{ {\bf P}_I, \widehat H\}= [( k\,{\cal \eta}+ {\cal H})^{-1}]^{JK} \{ {\bf P}_I, {\bf P}_J\} {\bf P}_K= [( k\,{\cal \eta}+ {\cal H})^{-1}]^{JK} C_{IJ}^L {\bf P}_L{\bf P}_K \nonumber \end{equation} which is not zero, consistently with \eqn{eomd}.
\subsection{The Poisson Algebra}\label{canform} The generalized formulation of the isotropic rotator is completed by discussing the Poisson brackets on the double group $SL(2,\mathbb{C})$, which correctly generalize those on the cotangent bundle stated in Eqs. \eqn{pp}-\eqn{ij} as well as in Eqs. \eqn{ppdu}-\eqn{ijdu}. These were introduced a long time ago in \cite{semenov, alex}, in the form \begin{equation} \{\gamma_1,\gamma_2\}= -\gamma_1\gamma_2 r^* -r \gamma_1\gamma_2 \label{gammagamma2} \end{equation} where $\gamma_1= \gamma\otimes 1, \gamma_2= 1\otimes \gamma$, while $r \in \mathfrak{d} \otimes \mathfrak {d}$ is the classical Yang-Baxter matrix: \begin{equation} r = e^i\otimes e_i \label{rmatrix} \end{equation} satisfying the modified Yang-Baxter equation \begin{equation} [r_{12},r_{13}+r_{23}]+ [r_{13},r_{23}]= h \nonumber \end{equation} with $r_{12 }=e^i\otimes e_i \otimes \mathds{1}$, $r_{13}=e^i\otimes\mathds{1}\otimes e_i $, $r_{23}= \mathds{1}\otimes e^i\otimes e_i $, and $h\in \mathfrak{d}\otimes \mathfrak{d}\otimes\mathfrak{d}$ an adjoint-invariant element in the enveloping algebra. The matrix \begin{equation} r^*= - e_i\otimes e^i \label{rmatrix2} \end{equation} is also a solution of the Yang-Baxter equation. The group $D$ equipped with the Poisson bracket \eqn{gammagamma2} is also called the Heisenberg double \cite{semenov,alex}. On writing $\gamma$ as $\gamma= \tilde g g$ it can be shown that \eqn{gammagamma2} is compatible with the following choice \begin{align} \{g_1,g_2\} &= [r^*,g_1g_2], \label{pbm1}\\ \{{\tilde g}_1,g_2\} &=- {\tilde g}_1r g_2 \label{finalpoi}\\ \{\tilde g_1,\tilde g_2\} &=-[r,\tilde g_1\tilde g_2], \label{pbm2} \end{align} with $g_1=g\otimes \mathds{1}$, $g_2=\mathds{1}\otimes g$, $\tilde g_1={\tilde g}\otimes \mathds{1}$ and ${\tilde g}_2= \mathds{1} \otimes {\tilde g}$. Eqs. \eqn{pbm1}, \eqn{pbm2} are the so-called Sklyanin brackets \cite{skly}. We also have $\{ { g}_1,{\tilde g}_2\} =- {\tilde g}_2 r^* g_1$.
Let us verify that we actually recover Eqs. \eqn{pp}-\eqn{ij}. In order to obtain the Poisson brackets on the fibers of the cotangent bundle $T^*SU(2)$, the matrix $r$ is rescaled by a real parameter $\lambda$ and the elements of $G^*$ are made dependent on the same parameter. By expanding up to the first order, one gets: \begin{equation} \tilde g(\lambda)=e^{i\lambda I_i e^i} = 1+i\lambda I_i e^i + \mathcal{O}(\lambda^2). \label{expansion1} \end{equation} Substituting this in \eqref{pbm2} yields, for the left-hand side: \begin{equation} \{\tilde g_1,\tilde g_2\}=\{\tilde g \otimes \mathds{1}, \mathds{1}\otimes \tilde g\}\simeq -\lambda^2 e^i\otimes e^j \{I_i,I_j\} + \mathcal{O}(\lambda^3), \nonumber \end{equation} and for the right-hand side: \begin{equation} \label{calc1} \begin{split} [r,\tilde g_1 \tilde g_2]\simeq & -\lambda \left([e^i, i \lambda I_j e_j] \otimes e_i + e^i \otimes [e_i, i \lambda I_j e^j]\right) + \mathcal{O}(\lambda^3) \\ = & -i \lambda^2 I_j \bigl([e^i,e_j]\otimes e_i + e^i \otimes [e_i,e^j]\bigr)+ \mathcal{O}(\lambda^3) \\ = & \lambda^2 I_j (f^{ij}_r e^r \otimes e_i - \epsilon^j_{ir}e^i\otimes e^r-f^{rj}_i e^i\otimes e_r)+ \mathcal{O}(\lambda^3) \\ = & - \lambda^2 I_k \epsilon^k_{\,ij}e^i\otimes e^j+ \mathcal{O}(\lambda^3). \end{split} \end{equation} By equating the two sides, in the limit $\lambda \rightarrow 0$, one obtains the Poisson bracket: \begin{equation} \{I_i,I_j\}=\epsilon^k_{\,ij}I_k. \label{first} \end{equation} Let us consider the second Poisson bracket, eq. \eqn{finalpoi}. In order to compute its l.h.s. we use for $g$ the parametrization $g= y^0 \sigma_0 + i y^i \sigma_i$. We have, up to the first order in $\lambda$: \begin{equation} \{\tilde g_1,g_2\}=2 i\lambda\left( \{I_i, y^0\} e^i\otimes e_0+i \{I_i, y^j\} e^i\otimes e_j\right)+ O(\lambda^2)\label{lhs2} \end{equation} while for the r.h.s.
\begin{eqnarray} -\tilde g_1 r g_2&\simeq& -2 \left((\mathds{1} + i\lambda I_i e^i )\otimes \mathds{1}\right)(\lambda e^k\otimes e_k)\left(\mathds{1}\otimes (y^0 e_0 + i y^j e_j)\right) \nonumber\\ &=& -2\lambda e^k\otimes e_k \left(\mathds{1}\otimes (y^0 e_0 + i y^j e_j)\right)+ O(\lambda^2) \nonumber\\ &=& -2 \lambda( \frac{1}{2}y^0 e^k\otimes e_k+ i y^j e^k\otimes e_k e_j)+ O(\lambda^2) \nonumber\\ &=& - \lambda( y^0 e^k\otimes e_k+ i y^j e^k\otimes (\delta_{kj}e_0+ i \epsilon_{kj}^i e_i))\label{rhs2} \end{eqnarray} After equating \eqn{lhs2} with \eqn{rhs2}, one finally gets, at order $\lambda$: \begin{eqnarray}\label{second} \{I_i, y^0\}&=& - y^j \delta_{ij} \nonumber \\ \{I_i, y^j\}&=& y^0 \delta_i^j - y^k \epsilon_{ki}^j \nonumber \end{eqnarray} where the first one is compatible with the second one, by using $(y^0)^2= 1- \sum_k y^k y^k$. Finally, let us consider \eqref{pbm1}. The l.h.s. yields: \begin{equation} \{{g}_1, {g}_2\}= \{y^0, y^j\} i (\sigma_0\otimes \sigma_j-\sigma_j\otimes \sigma_0) - \{y^i, y^j\} \sigma_i\otimes \sigma_j \label{lhsgg} \end{equation} which does not depend on $\lambda$. The r.h.s. instead reads: \begin{equation} [r^*, g_1g_2]= -\lambda[e_k\otimes e^k, g\otimes g]+O(\lambda^2)\label{rhsgg} \end{equation} which is at least of first order in $\lambda$. Therefore, by comparing \eqn{lhsgg} with \eqn{rhsgg}, one obtains: \begin{equation} \{y^0, y^j\} = \{y^i, y^j\} = 0 + O(\lambda) \,\,\, . \label{third} \end{equation} Thus, Eqs. \eqn{first}, \eqn{second}, \eqn{third} correctly reproduce the canonical Poisson brackets on the cotangent bundle in Eqs. \eqn{pp}-\eqn{ij}.
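The compatibility just invoked can be phrased as the statement that the two brackets together preserve the constraint $(y^0)^2+\sum_k y^k y^k=1$, i.e. $\{I_i,\,(y^0)^2+\sum_k y^k y^k\}=0$ by the Leibniz rule. A minimal \texttt{sympy} check:

```python
import sympy as sp
from sympy import KroneckerDelta, LeviCivita

y0, y1, y2, y3 = sp.symbols('y0 y1 y2 y3')
y = [y1, y2, y3]

def br_I_yj(i, j):
    """{I_i, y^j} = y^0 delta_i^j - y^k eps_{ki}^j (indices 0..2 here)."""
    return y0 * KroneckerDelta(i, j) - sum(
        y[m] * LeviCivita(m + 1, i + 1, j + 1) for m in range(3))

def br_I_y0(i):
    """{I_i, y^0} = -y^j delta_{ij}."""
    return -y[i]

for i in range(3):
    # Leibniz rule applied to (y^0)^2 + sum_k y^k y^k
    casimir_bracket = 2 * y0 * br_I_y0(i) + 2 * sum(
        y[j] * br_I_yj(i, j) for j in range(3))
    assert sp.expand(casimir_bracket) == 0
```

The $\epsilon$-term drops out by antisymmetry, so the sphere constraint defining $SU(2)$ is indeed preserved by the brackets.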
In order to underline the symmetric role played by the group $SU(2)$ and its dual, one can perform a slightly different analysis by considering $r^*$ as an independent solution of the Yang-Baxter equation \begin{equation} \rho= -\mu e_k\otimes e^k \label{altrmat} \end{equation} and expanding $g\in SU(2)$ as a function of the parameter $\mu$: \begin{equation} g= \mathds{1} + i \mu {\tilde{I}}^i e_i + O(\mu^2) \label{gexp} \,\,\, . \end{equation} By repeating the same analysis as above, it is straightforward to prove that the Poisson structure induced by $\rho$ is the one that correctly reproduces the canonical Poisson brackets on the cotangent bundle of $G^*=SB(2,\mathbb{C})$ derived in Eqs. \eqn{ppdu}-\eqn{ijdu}. Indeed, by substituting \eqn{gexp} in the LHS of \eqn{pbm1} one finds: \begin{equation} \{ g_1, g_2\}\simeq -\mu^2 e_i\otimes e_j \{{\tilde{I}}^i,{\tilde{I}}^j\} + \mathcal{O}(\mu^3), \nonumber \end{equation} and for the right-hand side: \begin{equation}\label{calcdu} \begin{split} [\rho, g_1 g_2]\simeq & -\mu \left([e_i, i \mu {\tilde{I}}^j e_j] \otimes e^i + e_i \otimes [e^i, i \mu {\tilde{I}}^j e_j]\right) + \mathcal{O}(\mu^3) \\ = & -i \mu^2 {\tilde{I}}^j \bigl([e_i,e_j]\otimes e^i + e_i \otimes [e^i,e_j]\bigr)+ \mathcal{O}(\mu^3) \\ = & \mu^2 {\tilde{I}}^j (\epsilon_{ij}^r e_r \otimes e^i + { f^{ri}}_j e_i\otimes e_r+{\epsilon^i}_{jr} e_i\otimes e^r)+ \mathcal{O}(\mu^3) \\ = & \mu^2 {\tilde{I}}^k {f^{\,ij}}_k e_i\otimes e_j+ \mathcal{O}(\mu^3). \end{split} \end{equation} By equating the two sides, in the limit $\mu \rightarrow 0$, one obtains the Poisson bracket: \begin{equation} \{{\tilde{I}}^i,{\tilde{I}}^j\}=f_k^{\,ij}{\tilde{I}}^k. 
\label{scnd} \end{equation} Last but not least, it is possible to consider a different Poisson structure on the double, given by \cite{semenov}: \begin{equation} \{\gamma_1,\gamma_2 \}= \frac{\lambda}{2}\left[\gamma_1(r^*-r) \gamma_2 - \gamma_2(r^*-r)\gamma_1\right].\label{gammargamma} \end{equation} This is the one that correctly dualizes the bialgebra structure on $\mathfrak{d}$ when evaluated at the identity of the group $D$. To see this, let us expand $\gamma\in D$ as $\gamma= \mathds{1}+ i\lambda I_i \tilde e^i+ i\lambda {\tilde{I}}^i e_i$ and rescale $r, r^*$ by the same parameter $\lambda$. It is straightforward to obtain, on the l.h.s. of eq. \eqn{gammargamma}, \begin{equation} \{\gamma_1,\gamma_2 \}= -\lambda^2 \left(\{I_i, I_j\} \tilde e^i\otimes \tilde e^j + \{{\tilde{I}}^i, {\tilde{I}}^j\} e_i\otimes e_j +\{I_i, {\tilde{I}}^j\} (\tilde e^i\otimes e_j- e_j \otimes \tilde e^i) \right) \nonumber \end{equation} while, on the r.h.s. of the same equation: \begin{equation} -\lambda^2 \left(I_s \epsilon^s_{\,ij} \tilde e^i\otimes \tilde e^j+ {\tilde{I}}^s f_s^{\,ij} e_i\otimes e_j+ I_s f_i^{\,sj}(\tilde e^i\otimes e_j-e_j\otimes \tilde e^i ) + {\tilde{I}}^s \epsilon^j_{\,si}(\tilde e^i\otimes e_j - e_j\otimes \tilde e^i)\right) \,\,. \nonumber \end{equation} By equating the two results one obtains: \begin{eqnarray} \{I_i, I_j\}&=& {\epsilon_{ij}}^k I_k \nonumber \\ \{{\tilde{I}}^i, {\tilde{I}}^j\}&=& {f^{ij}}_k {\tilde{I}}^k \nonumber \\ \{I_i, {\tilde{I}}^j\}&=& - {f_i}^{ jk}I_k - {\tilde{I}}^k {\epsilon _{ki}}^{ j} \nonumber \end{eqnarray} which is nothing but the Poisson bracket induced by the Lie algebra structure of the double \eqref{liemis}. By using the compact notation $I= i I_i {e^i}^*, {\tilde{I}} = i {\tilde{I}}_i {{\tilde{e}}_i}^*$, one can rewrite the Poisson algebra as follows: \begin{equation} \{I +{\tilde{I}}, J+\tilde{J}\}= \{I,J\} -\{J, {\tilde{I}}\}+ \{I,\tilde{J}\} + \{{\tilde{I}},\tilde{J}\}.
\label{cb} \end{equation} This is a very interesting structure, which represents a Poisson realization of the C-bracket for the generalized bundle $T\oplus T^*$ over $SU(2)$, once one considers the isomorphisms \begin{equation} TSL(2,\mathbb{C})\simeq SL(2,\mathbb{C})\times \mathfrak{sl}(2,\mathbb{C}) \nonumber \end{equation} with the fiber: \begin{equation} \mathfrak{sl}(2,\mathbb{C})\simeq \mathfrak{su}(2)\oplus \mathfrak{sb}(2,\mathbb{C})\simeq TSU(2)\oplus T^*SU(2). \nonumber \end{equation} That is, we recognize $I= i I_i {e^i}^*, J= i J_i {e^i}^*$ as one-forms, with ${e^i}^*$ being a basis over $T^*$, and ${\tilde{I}} = {\tilde{I}}^i {{\tilde{e}}_i}^*, \tilde J= \tilde J^i {{\tilde{e}}_i}^*$ as vector fields, with ${{\tilde{e}}_i}^*$ a basis over $T$. Namely, the pair $(I_i, {\tilde{I}}^i)$ identifies the fiber coordinates of the generalized bundle $T\oplus T^*$ of $SU(2)$. In order to complete the analysis, let us look at the Lie algebra of Hamiltonian vector fields associated with the momenta $I, J$. 
Hamiltonian vector fields are defined in terms of Poisson brackets in the standard way \begin{equation} X_f \equiv \{\cdot \;, f\} \nonumber \end{equation} so that, denoting by $X_i= \{\cdot \;, I_i\}, \tilde X^i= \{\cdot \;, \tilde I^i\}$ the Hamiltonian vector fields associated with $I_i$ and $\tilde I^i$, respectively, one has, after using the Jacobi identity, the following Lie algebra: \begin{eqnarray} [X_i, X_j] &=& \{\{\cdot \;, I_i\}, I_j\}-\{\{\cdot \;, I_j\}, I_i\}=\{\cdot \; ,\{I_i,I_j\}\}={ \epsilon_{ij}}^k\{\cdot \; ,I_k\}= {\epsilon_{ij}}^kX_k \nonumber \\ {[}{\tilde X}^i, {\tilde X}^j{] }&=& \{\{\cdot \;, {\tilde{I}}^i\}, {\tilde{I}}^j\}-\{\{\cdot \;, {\tilde{I}}^j\}, {\tilde{I}}^i\}=\{\cdot \; ,\{{\tilde{I}}^i,{\tilde{I}}^j\}\}={ f^{ij}}_k\{\cdot \; ,{\tilde{I}}^k\}= {f^{ij}}_k{\tilde{X}}^k \nonumber\\ {[}X_i,{\tilde X}^j{] }&=& \{\{\cdot \;, I_i\}, {\tilde{I}}^j\}-\{\{\cdot \;, {\tilde{I}}^j\}, I_i\}=\{\cdot \; ,\{I_i,{\tilde{I}}^j\}\}= - {f_i}^{ jk}\{\cdot \;,I_k\} - \{\cdot \;, {\tilde{I}}^k\} {\epsilon _{ki}}^{ j} \nonumber\\ &=& - {f_i}^{ jk} X_k -{\tilde X}^k {\epsilon _{ki}}^{ j} \nonumber \end{eqnarray} namely \begin{equation} [X+{\tilde{X}}, Y+{\tilde{Y}}]= [X,Y]+ {\sf L}_X {\tilde{Y}}-{\sf L}_Y {\tilde{X}} + [{\tilde{X}}, {\tilde{Y}}] \nonumber \end{equation} which shows that C-brackets can be obtained as derived brackets, in analogy with the ideas of Refs.~\cite{deser1, deser2}, with the remarkable difference that, in this case, they are derived from the canonical Poisson brackets of the dynamics. \subsection{Poisson-Lie symmetries}\label{PLsym} Let us explicitly address the nature of the symmetries of the dual models introduced in the previous sections. In particular, we want to discuss to what extent the models possess Poisson-Lie symmetries. We closely follow \cite{marmo:articolo1} for this subsection. 
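The closure relations derived above can be checked numerically in a finite-dimensional toy setting. The sketch below is our own illustration, not a computation from the text: it treats only the $\mathfrak{su}(2)$ half, with the Lie-Poisson bracket $\{x_a,x_b\}=\epsilon_{abk}x_k$ on $\mathbb{R}^3$ and the convention $X_i=\{x_i,\cdot\}$, so that $X_i$ is the linear vector field $x\mapsto A_i x$ with $(A_i)_{bk}=\epsilon_{ibk}$, and verifies $[X_i,X_j]={\epsilon_{ij}}^k X_k$.

```python
import numpy as np

# Levi-Civita symbol: structure constants of su(2)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

# X_i = {x_i, .} of the Lie-Poisson bracket {x_a, x_b} = eps_{abk} x_k
# is the linear vector field x -> A_i x with matrix (A_i)_{bk} = eps_{ibk}.
A = [eps[i] for i in range(3)]

def vf_commutator(Ai, Aj):
    # for linear fields x -> A x and x -> B x, [X_A, X_B] has matrix B A - A B
    return Aj @ Ai - Ai @ Aj

# closure: [X_i, X_j] = eps_{ij}^k X_k for all i, j
closure_ok = all(
    np.allclose(vf_commutator(A[i], A[j]),
                sum(eps[i, j, k] * A[k] for k in range(3)))
    for i in range(3) for j in range(3)
)
print(closure_ok)  # True
```

The same check extends to the full double once the $\mathfrak{sb}(2,\mathbb{C})$ structure constants $f^{ij}{}_k$ are supplied.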
Poisson-Lie symmetries are Lie group transformations implemented on the carrier space of the dynamics via group multiplication, which, in general, are not canonical transformations as they need not preserve the symplectic structure. However, if the Poisson structure is of the form \eqn{gammagamma2} with carrier space $D$ itself, or \eqn{pbm1}, \eqn{pbm2} if we are looking at $G$, $G^*$ respectively, Poisson brackets can be made invariant if the parameters of the group of transformations are required to have nonzero Poisson brackets with themselves. Group multiplication is then said to correspond to a Poisson map. We have, for example, for the right transformations of $G$ on $D$, \begin{equation}\label{rightGact} \gamma\rightarrow \gamma h \;,\; h \in G\;,\; \gamma \in D \end{equation} and the left action of $G^*$ on $D$, \begin{equation}\label{leftG*act} \gamma\rightarrow \tilde h \gamma \;,\; \tilde h \in G^*\;,\; \gamma \in D . \end{equation} In terms of the coordinates $(\tilde g,g)$ this implies \begin{equation} g\rightarrow gh\;, \quad \tilde g\rightarrow \tilde g \;, \end{equation} for the former and \begin{equation} g\rightarrow g\;, \quad \tilde g\rightarrow \tilde h \tilde g \;, \end{equation} for the latter. By themselves these transformations do not preserve the Poisson brackets \eqn{pbm1}-\eqn{pbm2}. But they can be made invariant if we require that the parameters of the transformation, $h$, have the following Poisson brackets \begin{equation}\label{hh} \{h_1,h_2\}=[\;r^*\;,\;h_1 h_2\;] \;, \end{equation} and zero Poisson brackets with $g$ and $\tilde g$. Then the $SU(2)$ right multiplication is a Poisson map and \eqn{rightGact} corresponds to a Poisson-Lie group transformation. 
For \eqn{leftG*act} to be a Poisson-Lie group transformation, $\tilde h$ must have the following Poisson bracket with itself \begin{equation}\label{hsthst} \{\tilde h_1,\tilde h_2\}=-[\;r\;,\;\tilde h_1 \tilde h_2\;] \;, \end{equation} and zero Poisson brackets with $g$ and $\tilde g$. Since the right-hand sides of \eqn{hh} and \eqn{hsthst} vanish in the limit $\lambda \rightarrow 0 $, the transformations \eqn{rightGact} and \eqn{leftG*act} become canonical in the limit. \\ Moreover, the Poisson brackets \eqn{pbm1}-\eqn{pbm2} are invariant under the simultaneous action of both $G$ and $G^*$ via \eqn{rightGact} and \eqn{leftG*act}, if we assume that \begin{equation} \{\tilde h_1,h_2\}=0\;. \end{equation} By comparing with Eq. \eqn{finalpoi} we conclude that the algebra of the observables $g$ and $\tilde g$ is different from the algebra of the symmetries parametrized by $h$ and $\tilde h$. Therefore, the dynamics on the group manifold of $SL(2,\mathbb{C})$ and on the two partner groups $SU(2)$ and $SB(2,\mathbb{C})$ possesses Poisson-Lie group symmetries, when endowed with the above-mentioned brackets. Let us go back to the symplectic structures of the IRR and the dual model, respectively given by Eqs. \eqn{xx} and \eqn{xxdu}. The former is obtained from \eqn{pbm2} while the latter is obtained from \eqn{pbm1}, for small (but non-zero) values of the parameters $\lambda$ and $\mu$, as we have shown in Sec.~\ref{canform} (see Eqs. \eqn{calc1}, \eqn{calcdu}). We can therefore conclude that the momentum variables of each model inherit their Poisson brackets from the Poisson-Lie structure of the dual group, which in turn exhibits Poisson-Lie symmetry in the sense elucidated above. \section{Conclusions and Outlook}\label{concl} Starting from an existing description of the dynamics of the Isotropic Rigid Rotator on Heisenberg doubles \cite{marmo:articolo1}, we have introduced a new dynamical model which is dual to the standard IRR. 
To this end, we have used the notions of Poisson-Lie groups and of the Drinfel'd double to understand the duality between the carrier spaces of the two models. Specifically, we have used the Drinfel'd double of the group $SU(2)$ as the target configuration space for the dynamics of a generalized model, with doubled degrees of freedom. This model exhibits non-Abelian duality and is an ideal arena in which to analyze, in a simple context, the physical meaning of generalized and doubled geometry structures. Moreover, we have shown that the usual description, with half the degrees of freedom, can be recovered from the generalized action by gauging one of its symmetries. The simple model of the IRR is especially interesting as a toy model for field theories with non-trivial target spaces such as Principal Chiral Models. In their original formulation \cite{gursey} these are nonlinear sigma models with the principal homogeneous space of the Lie group $SU(N)$ as their target manifold, where $N$ is the number of quark flavors. The dynamical fields of the model, the so-called currents, take values in the cotangent bundle of the Lie group, while the canonical formalism is described by a Poisson algebra which takes the form of a semi-direct sum. The analogy with the IRR is thus very close: the analysis we have performed can be readily generalized, starting from an alternative description of Principal Chiral Models given in Refs.~\cite{rajeev:bosonization}, \cite{vitale1, vitale2, delduc} (see also \cite{reid}, where Principal Chiral Models are analyzed in the DFT context). A Principal Chiral Model is a field theory with target space given by a Lie group $G$ and base space given by the two-dimensional space $\mathbb{R}^2$ endowed with the metric $h_{\alpha \beta}=\mathrm{diag}(-1,1)$. 
\\ It describes the dynamics of two dimensional fields $g: \mathbb{R}^{1,1}=(\mathbb{R}^2,h) \rightarrow G.$ The action may be written in terms of Lie algebra valued left-invariant one-forms \begin{equation} g^{-1}\mathrm{d}g=g^{-1}\partial_{t}g \mathrm{d}t+g^{-1}\partial_{\sigma}g \mathrm{d}\sigma \end{equation} so that \begin{equation} S=\frac{1}{2}\int_{\mathbb{R}^2}\Tr(g^{-1}\mathrm{d}g \wedge * g^{-1}\mathrm{d}g),\label{chimoac} \end{equation} where the trace is understood as the scalar product on the Lie algebra $\mathfrak{g}$. The Hodge operator exchanges the time and space derivatives \begin{equation} * (g^{-1}\mathrm{d}g)= *( \dot Q^i dt+{Q^i}' d\sigma ) e_i= ( \dot Q^i d\sigma-{Q^i}'dt ) e_i \end{equation} with $ \dot Q^i =\Tr g^{-1}\partial_{t}g e_i$, $ { Q^i}' =\Tr g^{-1}\partial_{\sigma}g e_i$. The action \eqn{chimoac} is the two-dimensional analogue of the IRR action. Notice that in this case the Hodge operator maps one-forms into one-forms while exchanging time and space derivatives. When passing to the Hamiltonian formalism, the momenta $I_i= \dot Q^j \delta_{ji}$ and the space derivatives $J^i:= {Q^i}'$ close a Poisson algebra, which, upon an equivalent reformulation of the model \cite{rajeev:bosonization}, \cite{vitale1, vitale2}, turns out to be isomorphic to the Kac-Moody algebra $\widehat{\mathfrak{sl}(2,\mathbb{C})}$. It is therefore natural to conceive of a dual model with the same underlying $\widehat{\mathfrak{sl}(2,\mathbb{C})}$ structure but with the roles of $I_i, J^i$ exchanged. The action of the dual model is the natural two-dimensional analogue of \eqn{dualag}, with $\tilde g=\tilde g(\sigma,t)$. Moreover, a parent action encoding both models can be introduced, which is in turn the analogue of \eqn{newac}, with $\gamma=\gamma(\sigma, t)$. The symmetries of the two models under duality transformations are addressed as well. 
Because of the presence of time and space derivatives that are exchanged by the Hodge operator, the structure is richer than the one exhibited by the particle dynamical systems considered here. We are completing the analysis and the results will be detailed in a forthcoming paper \cite{MPV2}. \noindent{\bf Acknowledgements} P. V. acknowledges support by COST (European Cooperation in Science and Technology) in the framework of COST Action MP1405 QSPACE. The authors are indebted to F. Ciaglia and G. Marmo for the many invaluable discussions and suggestions throughout the preparation of the manuscript. F. P. would like to thank Jeong-Hyuck Park for helpful discussions and Sogang University for their kind hospitality in an early stage of this work. V. Marotta is indebted to R. Szabo for enlightening discussions and useful indications about the existing literature.
\section{Introduction} Neutrino physics is closely connected to nuclear physics, a connection that goes well beyond the obvious link between neutrino detection and the nuclear structure of the target isotopes. For example, in a core-collapse supernova, understanding neutrino cooling of the newly formed proto-neutron star benefits from knowledge of the nuclear equation of state. In such environments, or in the merger of two neutron stars, neutrinos determine the neutron-to-proton ratio, the parameter controlling the yields of nucleosynthesis. An old problem in nuclear physics is to accurately calculate neutrino-nucleus cross sections and beta decay rates. A firm knowledge of the nuclear matrix elements for neutrinoless double beta decay is crucial to assess the experimental outlook for observing possible violation of lepton number, a fundamental symmetry of the Universe. For many aspects of supernova physics we need to know what happens when a 10 to 40 MeV neutrino hits a nucleus. Longstanding questions include the distribution of the Gamow-Teller and tensor strengths as well as the value of the effective axial-vector coupling strength, $g_A$. As the incoming neutrino energy increases, the contributions of hard-to-calculate expectation values increase, including first- and even second-forbidden transitions. Forbidden transitions may be the key to understanding the decays of isotopes in the nuclear fuel of power reactors and the resulting reactor neutrino spectra. Several recent experiments emphasize the need for better nuclear data in connection with fundamental science, whether exploring new physics beyond the Standard Model or probing astrophysical phenomena. 
For example, short-baseline reactor neutrino experiments successfully measured the neutrino parameters they set out to measure, but they also identified an excess of reactor antineutrinos with energies around 5 MeV as well as a reduction from the predicted value of the flux \cite{An:2016srz,An:2015nua,Abe:2015rcp,RENO:2015ksa}. This result raises some very interesting nuclear physics questions regarding neutrino interactions, some of which we discuss below. A key development during the last few decades has been the appreciation of the close relationship between neutrinos and nucleosynthesis, as physicists and astronomers ascertained that neutrino properties figure prominently in many astrophysical environments. Consequently, all the properties of neutrinos could significantly impact the description of astrophysical environments. Understanding where and how various nuclei are synthesized during the evolution of the Universe is one of the key questions of modern science. Element synthesis is thought to be a multi-site and multi-epoch process. Tackling the question of the origin of the elements requires a multitude of tools: high-quality observations of stellar spectra, laboratory atomic physics data, modeling of stellar photospheres, as well as theoretical and experimental investigations of the relevant nuclear processes. Typically, copious amounts of neutrinos are present in most nucleosynthesis sites. This feature makes neutrino physics and neutrino-nucleus interactions salient components of many nucleosynthesis scenarios. The interaction of neutrinos with ordinary matter is rather feeble except when the density is very large. Consequently, neutrinos can easily transfer a significant amount of energy and entropy over astronomical distances. (For example, almost the entire gravitational binding energy of a pre-supernova star is released as neutrinos.) 
Clearly such energy transfers could be very important in astrophysics and cosmology, making a thorough understanding of neutrino interactions crucial to explore many such phenomena. The status and challenges of neutrino-nucleus scattering for a wide range of energies were recently summarized in Ref. \cite{Alvarez-Ruso:2017oui}. In this proceedings contribution, the discussion is limited to a few examples of interactions of low-energy neutrinos (up to a few tens of MeV) with nuclei. \section{Some cross section calculations} In this section, calculations of three different neutrino-nucleus cross sections, chosen to illustrate three different techniques utilized to calculate such cross sections, are briefly discussed. Determining the interaction between two nucleons is a long-standing problem. During the last decade both the nuclear structure and nuclear reactions communities have increasingly made use of the effective field theory approach. With the advent of effective field theory methods there has been renewed interest in deriving the nucleon-nucleon interaction from the fundamental theory. In effective field theories describing low-energy physics one integrates over the degrees of freedom associated with physics coming into play at higher energies. However, one has to introduce counterterms to cancel divergences which may arise at higher orders. At energies below the pion threshold the nucleon-nucleon interaction is particularly simple: the $^3S_1 \rightarrow {}^1S_0$ transition dominates and one has to introduce a single counterterm, dubbed $L_{1A}$, characterizing the unknown isovector axial two-body current. The cross sections for the reactions \[ \nu_e + d \rightarrow p + p + e^- \] and \[ \overline{\nu}_e + d \rightarrow n + n + e^+ \] can then be calculated in a pionless effective field theory as a function of this unknown term \cite{Butler:1999sv,Butler:2000zp}. 
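Because the cross sections depend on a single counterterm, the error budget is easy to track. As a side illustration of ours (not a calculation from the text), the separately quoted uncertainties of the lattice QCD determination $L_{1A} = 3.9(0.1)(1.0)(0.3)(0.9)$ fm$^3$ \cite{Shanahan:2017bgi} can be combined in quadrature and compared with the phenomenological estimate $L_{1A}\sim 4$ fm$^3$:

```python
import math

# Lattice QCD determination of the axial two-body counterterm (fm^3):
# central value with four separately quoted uncertainties.
central = 3.9
uncertainties = [0.1, 1.0, 0.3, 0.9]

# Combine in quadrature (an illustrative choice; correlations are ignored).
total_unc = math.sqrt(sum(u**2 for u in uncertainties))  # ~1.38 fm^3

# Pull relative to the phenomenological estimate L_1A ~ 4 fm^3.
pull = abs(central - 4.0) / total_unc
print(f"L_1A = {central} +/- {total_unc:.2f} fm^3, pull = {pull:.2f}")
```

The pull is well below one standard deviation, consistent with the statement in the text that the lattice and phenomenological determinations agree.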
The resulting cross sections can be written as \[ \sigma (E_{\nu}) = \sigma_0 (E_{\nu}) + L_{1A} \> \sigma_1 (E_{\nu}) \] where the terms $\sigma_0 (E_{\nu})$ and $\sigma_1 (E_{\nu})$ can be easily evaluated. The value of $L_{1A}$ can be estimated either from reactor antineutrino deuteron breakup reactions \cite{Butler:2002cw} or from solar neutrino experiments \cite{Chen:2002pv,Balantekin:2003ep,Balantekin:2004zj}. From these considerations one obtains a value of $L_{1A} \sim 4$ fm$^3$. Very recently this parameter was calculated using lattice QCD at a renormalization scale set by the physical pion mass to be $L_{1A} = 3.9 (0.1) (1.0) (0.3) (0.9)$ fm$^3$ \cite{Shanahan:2017bgi}. Hence we have an accurate description of the weak breakup of the deuteron and of the reverse reaction of proton-proton fusion. The latter reaction cannot be directly measured, but is a crucial input to stellar models. Extending this program to heavier nuclei would quickly become impractical because of the need to introduce three- and four-body forces and multiple counterterms. Another interesting neutrino-nucleus reaction is coherent elastic neutrino scattering off nuclei, $\nu +A \rightarrow \nu +A$. This is a Standard Model process, but it has only recently been observed \cite{Akimov:2017ade}. The differential cross section for coherent elastic neutrino-nucleus scattering is given by \cite{Freedman:1977xn} \begin{equation} \label{coherent} \frac{d \sigma}{dT} (E_{\nu},T) = \frac{G_F^2}{8 \pi} M \left[ 2 - \frac{2T}{T_{\rm max}} + \left( \frac{T}{E_{\nu}} \right)^2 \right] Q_W^2 \left[ F(Q^2)\right]^2 \end{equation} where $E_{\nu}$ is the energy of the incoming neutrino, and $M$ and $T$ are the mass and the recoil energy of the target nucleus, respectively. $T_{\rm max}$ is the maximum value of $T$. The weak charge of the nucleus, \begin{equation} Q_W = N -(1-4 \sin^2\theta_W)Z, \nonumber \end{equation} primarily receives contributions from neutrons since $\sin^2 \theta_W \sim 1/4$. 
The form factor $F(Q^2)$, which is a function of the momentum transfer $Q$, corrects for contributions to the scattering that are not completely coherent as $E_{\nu}$ gets large. The contribution of the neutron density to this form factor is dominant, since the proton contribution is again suppressed by the smallness of the factor $1-4 \sin^2 \theta_W$. Indeed, this reaction was proposed as a tool to measure neutron densities inside nuclei \cite{Patton:2012jr,Patton:2013nwa}. It can also be useful in supernova detection \cite{Horowitz:2003cz}. Coherent elastic scattering of solar and atmospheric neutrinos is a background for experiments searching for particle dark matter by measuring the recoil of target nuclei after they are struck by dark matter particles. Integration of Eq. (\ref{coherent}) over nuclear recoil energies yields the total elastic cross section. If one ignores the nuclear form factor (i.e., $F(Q^2) =1$) this yields $\sigma \propto E_{\nu}^2$ as expected. However, inclusion of nuclear structure effects reduces the cross section from this maximal value. Hence a careful calculation of the nuclear structure effects is important if one would like to use this process as a probe to explore other physics, such as the flux loss due to active-sterile neutrino mixing \cite{Anderson:2012pn}. One should mention that there are also subdominant contributions to the coherent elastic neutrino-nucleus cross section, such as those coming from non-zero neutrino magnetic moments. In a minimally extended Standard Model these contributions are expected to be finite, but very small. However, new physics beyond the Standard Model may substantially increase them \cite{Balantekin:2013sda}. Most of the carbon in organic scintillators is in the form of $^{12}$C. Since the natural abundance of $^{13}$C is 1.07\%, a sizable detector would already contain a substantial amount of this isotope. 
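The integration over recoil energies described above is easy to carry out numerically. The following sketch evaluates Eq. (\ref{coherent}) in the $F(Q^2)=1$ limit and checks the $\sigma \simeq G_F^2 Q_W^2 E_\nu^2/4\pi$ behavior; the $^{133}$Cs target, the 30 MeV neutrino energy, the value $\sin^2\theta_W = 0.238$, and the kinematic choice $T_{\rm max}=2E_\nu^2/(M+2E_\nu)$ are our own illustrative assumptions, not inputs from the text.

```python
import math

GF = 1.1663787e-5       # Fermi constant, GeV^-2
SIN2W = 0.238           # assumed low-energy weak mixing angle
HBARC2 = 3.894e-28      # conversion factor, GeV^-2 -> cm^2

# Illustrative target: 133Cs
Z, N = 55, 78
M = 132.905 * 0.93149   # nuclear mass in GeV (rough conversion from amu)
QW = N - (1.0 - 4.0 * SIN2W) * Z

def dsigma_dT(E, T, Tmax):
    """Eq. (coherent) of the text with F(Q^2) = 1."""
    return GF**2 / (8.0 * math.pi) * M * (2.0 - 2.0 * T / Tmax + (T / E)**2) * QW**2

def sigma_total(E, steps=1000):
    """Midpoint-rule integral over the recoil energy T."""
    Tmax = 2.0 * E**2 / (M + 2.0 * E)
    dT = Tmax / steps
    return sum(dsigma_dT(E, (n + 0.5) * dT, Tmax) for n in range(steps)) * dT

E = 0.030                                             # 30 MeV neutrino
sigma = sigma_total(E)                                # GeV^-2
sigma_point = GF**2 * QW**2 * E**2 / (4.0 * math.pi)  # point-nucleus limit
print(f"sigma = {sigma * HBARC2:.2e} cm^2")           # ~2e-38 cm^2
```

For a heavy nucleus the integrated result agrees with the point-nucleus expression to well below a percent, and its $\sim 10^{-38}$ cm$^2$ size illustrates the coherent ($\propto Q_W^2$) enhancement over ordinary weak cross sections.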
The SFO Hamiltonian, which enhances the monopole terms of the matrix elements in the $p_{1/2}$ and $p_{3/2}$ orbitals, includes tensor components consistent with the general sign rule for the tensor-monopole terms \cite{Suzuki:2003,Otsuka:2005zz,Suzuki:2007zza}. A persistent problem for weak interactions in nuclei is the need to quench the axial-vector coupling strength $g_A$. Part of this quenching comes from the limited size of the model space and the effective interactions used. Calculations with this Hamiltonian reproduce the measured neutrino-$^{12}$C cross sections with a reduced quenching of $g_A$, as compared to the previous calculations \cite{Suzuki:2006qd}. These cross sections at reactor energies are calculated in Ref. \cite{Suzuki:2012aa}. It was found that, in a configuration space including up to $2\hbar \omega$ interactions and with a small (five percent) quenching of $g_A$ and the spin $g$ factor, this Hamiltonian considerably improves the cross sections as compared with the earlier treatments using Cohen-Kurath interactions \cite{Fukugita:1989wv}. \section{Reactor neutrino flux} Short-baseline reactor neutrino experiments successfully measured the neutrino parameters they set out to measure, but they also identified a shape distortion in the 5-7 MeV range as well as a reduction from the predicted value of the flux \cite{An:2015nua}. This result and some of the other anomalies observed in neutrino experiments can be interpreted as mixing of sterile neutrinos with active ones \cite{Abazajian:2012ys}. It was argued that there exists a discrepancy in reactor neutrino experiments between the observed antineutrino fluxes near the reactor core and the predicted values \cite{Mention:2011rk}. This anomaly can be fitted with additional sterile neutrino states. The sterile neutrino explanation of the reactor flux discrepancy is not, however, a universally agreed conclusion \cite{Balantekin:2016vjt}. 
A careful analysis concludes that the corrections that lead to the reactor antineutrino anomaly are uncertain for the 30\% of the flux that arises from forbidden decays \cite{Hayes:2013wra}. Very recently, the fluxes of antineutrinos coming from the fissions of $^{235}$U and $^{239}$Pu in the cores of the Daya Bay reactors were measured \cite{An:2017osx} and were found to be about 5\% less than the predictions of the models \cite{Huber:2011wv,Mueller:2011nm}. Uncertainties in the subdominant corrections to beta decay dominate the uncertainty in the reactor neutrino spectra \cite{Hayes:2016qnu}, the resolution of which would require measuring the fission products of many isotopes \cite{Sonzogni:2017wxy}. For example, the three beta decays of $^{92}$Rb, $^{96}$Y, and $^{142}$Cs contribute 43\% of the antineutrino flux emitted by nuclear reactors near 5.5 MeV. The latest measurement of these beta decays substantially modifies the feeding of $^{142}$Ba from $^{142}$Cs decays, increasing the discrepancy between the observed and the expected reactor antineutrino flux between 5 and 7 MeV \cite{Rasco:2016leq}. One way to estimate the reactor neutrino spectra is first to measure the electron spectra from thermal fission products and then convert them to neutrino spectra. In this method, many fission products are measured together in a single experiment. It was pointed out \cite{Sonzogni:2017wxy} that including a shape correction of about +6\% MeV$^{-1}$ in the conversion calculations fits the experimental Daya Bay spectrum better. The ultimate resolution of this issue from the neutrino side lies in further experiments, as one needs to precisely measure any relative distortion of the $\bar{\nu}_e$ spectrum as a function of both energy and baseline. PROSPECT, a precision oscillation and spectrum experiment located at the High Flux Isotope Reactor (HFIR) at ORNL, will measure the antineutrinos from a research reactor at a distance of less than 10 m to resolve these questions \cite{Ashenfelter:2015uxt}. 
From the nuclear side, one could envision measuring precise electron spectra of the 50 or so fission products that can contribute. The shape factors for at least some of these can, in principle, be explored at rare-isotope facilities such as the Facility for Rare Isotope Beams. \section{Experimental Outlook} Recent developments in experimental techniques have made it possible to measure charge-exchange reactions with unprecedented precision. This development enables nuclear experimentalists to make a very precise determination of the Gamow-Teller strength distributions. For example, the rate of the reaction $^{71}$Ga($\nu_e,e^-$) was recently deduced from the ($^3$He,$t$) charge-exchange reaction, leading to a slight change in the capture rate of the solar neutrinos coming from the pp reaction \cite{Frekers:2015wga}. Direct measurements of neutrino-nucleus cross sections are possible with intense neutrino sources. For relatively low energies, aside from nuclear power reactors, the list of such sources may include spallation neutron sources and beta-beam facilities. In spallation neutron sources one can obtain a rather intense neutrino flux. The pulsed nature of this neutrino flux can then be used to eliminate much of the background \cite{Bolozdynya:2012xv}. Indeed, such a facility was used to measure coherent elastic neutrino-nucleus scattering \cite{Akimov:2017ade}. Beta-beam facilities were proposed, but they are not currently under consideration. In such facilities, the beta decay of boosted radioactive nuclei can be used to obtain an intense, collimated, and pure neutrino beam. For low-energy neutrino-nucleus cross section measurements one can either use a low-energy beta beam \cite{Volpe:2003fi} or utilize lower-energy neutrinos off-axis from a high-energy beta beam \cite{Lazauskas:2007va}. \vskip 0.3cm This work was supported in part by the US National Science Foundation Grant No. PHY-1514695.
\section{Introduction} Uncovering the mechanism of electroweak symmetry-breaking (EWSB) will be a central goal of future experiments at the Large Hadron Collider (LHC) and the planned International Linear Collider (ILC) \cite{Heinemeyer:2005gs}. Although no direct evidence for the Standard Model Higgs boson exists and it is possible -- as in many models of EWSB -- that there exist additional scalar degrees of freedom, precision electroweak data favors at least one light scalar particle with properties akin to those of the SM Higgs boson. If it is discovered at the LHC, then measuring its properties will be an important part of the LHC and ILC program. If only a single Higgs scalar ($H$) is seen at the LHC, it is quite possible that its interactions will differ from those of the SM Higgs due to heavier degrees of freedom that are not directly accessible at the next generation of colliders. In this case, deviations of Higgs boson properties from SM expectations could provide indirect clues about the nature of physics above the TeV scale. This possibility has recently been analyzed in a model-independent way by the authors of Ref.~\cite{Barger:2003rs}, who considered the prospective effects of dimension-six ($n=6$), purely bosonic (scalar) operators on $H$ production at the ILC, and in Ref.~\cite{Manohar:2006gz}, where the potential impact of $n=6$ bosonic operators on $H$ production at the LHC was analyzed. In both cases, substantial deviations from SM expectations appear to be possible. For recent related work, see \cite{Grinstein:2007iv}. Here, we consider the possible impact of $n=6$ operators containing fermions on Higgs production at a 500 GeV or 1 TeV linear collider, following the spirit of Refs. \cite{Barger:2003rs,Manohar:2006gz}. 
Such operators can be generated when heavy degrees of freedom, associated with a scale $\Lambda$ lying well above the EWSB scale (given by the Higgs vacuum expectation value, $v\approx 246$ GeV), are integrated out of the larger theory in which the SM is ultimately embedded. In this case, physics at low scales is described by an effective Lagrangian \begin{equation} \label{eq:leff} {\cal L}_{\rm eff} = \sum_{n\geq 4,\, j} \, \frac{C_n^j}{\Lambda^{n-4}}\, {\cal O}_{n,j}\ \ \ , \end{equation} where the ${\cal O}_{n,j}$ are operators built entirely from SM fields (and possibly right-handed neutrino fields) and where the index \lq\lq $j$" runs over all independent operators of a given dimension. The operators with $n=4$ are just those of the SM (including a Dirac neutrino mass term), while the coefficients $C_n^j$ of the higher dimension operators are determined by the details of physics above the scale $\Lambda$. The effective theory described by Eq.~(\ref{eq:leff}) will be valid so long as $\Lambda \gg \sqrt{s}$. One may analyze the possible effects of $n>4$ operators by making rather gentle assumptions about the magnitude of the operator coefficients. In the case of the $n=6$ operators of interest here, we find it useful to consider the ratio of the $C_6^j/\Lambda^2$ to the Fermi constant, $G_F=1/(\sqrt{2}v^2)$, that characterizes the strength of $n=6$ effective operators in the SM. Assuming that the $n=6$ operators arise from one-loop amplitudes containing particles of mass $\Lambda$, one would expect $|C_6^j/G_F\Lambda^2|\lesssim v^2/16\pi^2\Lambda^2$ or $|C_6^j v^2/\Lambda^2|\lesssim 10^{-2}$ for $v\sim\Lambda$. Taking $|C_6^j v^2/\Lambda^2|\sim 10^{-2}$ thus gives a conservative benchmark for the magnitude of the operator coefficients\footnote{Since our effective theory is valid only when $\Lambda\gg\sqrt{s} > v$, one would expect it to be applicable only when the $|C_6^j v^2/\Lambda^2|$ are much smaller than $10^{-2}$ unless the $C_6^j$ are not loop suppressed.}. 
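The benchmark above is simple to make explicit. The numbers below are a sketch of ours (the choice $\Lambda = 1$ TeV is purely illustrative, not a value taken from the text):

```python
import math

v = 246.0        # Higgs vacuum expectation value, GeV
Lam = 1000.0     # illustrative new-physics scale, GeV (our assumption)

# One-loop suppression factor; for v ~ Lambda this is the ~1e-2 benchmark.
loop_factor = 1.0 / (16.0 * math.pi**2)

# Size of C_6 v^2 / Lambda^2 for a loop-induced coefficient at Lambda = 1 TeV.
effect = loop_factor * v**2 / Lam**2
print(f"1/(16 pi^2) = {loop_factor:.2e}, C6 v^2/Lambda^2 = {effect:.2e}")
```

The loop factor comes out at about $6\times 10^{-3}$, which rounds up to the $10^{-2}$ benchmark for $v\sim\Lambda$, while pushing $\Lambda$ to 1 TeV already suppresses the loop-induced effect to a few parts in $10^4$.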
In analyzing the general features of $n=6$ operator contributions to Higgs production in $e^+e^-$ annihilation, we will generally adopt this benchmark, bearing in mind that if the new physics involves strong dynamics, the $C_6^j$ could be considerably larger\footnote{This possibility was considered more broadly in Ref.~\cite{Barger:2003rs}. See also the discussion in Ref. \cite{Black:2002wh}}. Doing so will allow us to determine which operators may have the largest possible effects. After identifying the potentially most significant operators, we derive constraints on the $C_6^j v^2/\Lambda^2$ from electroweak precision observables (EWPO) and other considerations. It is well known that EWPO imply stringent bounds on operators that interfere with the SM amplitudes for $e^+e^-\to f{\bar f}$, and these bounds correspond to $\Lambda\gtrsim 10$ TeV or more for $C_6^j=1$ \cite{Barbieri:1999tm,Han:2004az}. Below, we update the limits obtained in Refs.~\cite{Barbieri:1999tm,Han:2004az} on the operators with the largest prospective effects on Higgs production in $e^+e^-$ annihilation. However, operators that contain right-handed neutrino fields do not interfere with the SM amplitudes for $e^+e^-\to f{\bar f}$, and their coefficients are not all constrained by EWPO. For such operators, we turn to other constraints, such as low-energy studies of weak decays and neutrino mass \lq\lq naturalness" arguments. From our study of the $n=6$ operators containing both scalar and fermion fields, we arrive at the following highlights: \begin{itemize} \item[(i)] In contrast to the situation with purely bosonic $n=6$ operators, we show that the effects of $n=6$ operators containing fermions are generally required to be smaller, due in large part to existing precision electroweak data that agrees with SM predictions and that constrains many of the relevant operators \cite{Barbieri:1999tm,Han:2004az}. 
As noted above, the latter constraints are particularly strong on operators that interfere with SM amplitudes for $e^+e^-\to Z^0\to f{\bar f}$. However, we find that substantial deviations from SM Higgs production cross-sections are possible in some cases. In particular, $n=6$ operators that contribute to the $e^+e^-\to HZ^0$ channel can generate large corrections to the SM Higgsstrahlung (HZ) cross-section at the energies considered here. The HZ cross-section can be separated from the gauge boson fusion process through an appropriate choice of final states or a study of the missing mass spectrum in $e^+e^- \to H \nu_e{\bar\nu}_e$. Thus, a dedicated study of HZ would provide the most sensitive probe of the operators considered here. \item[(ii)] Although operators containing right-handed neutrino fields have not been emphasized in earlier effective operator studies of collider physics \cite{Barbieri:1999tm,Han:2004az}, the observation of neutrino oscillations and the implication of non-vanishing neutrino mass motivate us to include RH neutrinos\footnote{In doing so, we consider only Dirac neutrinos, deferring the case of Majorana neutrinos to a future study.}. Direct experimental limits on operators containing RH neutrino fields leave room for appreciable effects in Higgs production in the missing energy (${\not\!\! E}$) channel, $e^+e^-\to H +\nu{\bar \nu}$. It is possible, however, to argue for more stringent limits on these effects by invoking neutrino mass \lq\lq naturalness" considerations \cite{Bell:2005kz,Erwin:2006uc}. Below, we argue that if the only particles lighter than the SM Higgs boson are other SM particles, then the observation of large deviations from SM expectations for Higgs production with missing energy without corresponding deviations in the $H q \bar{q}$ and $H \ell \bar{\ell}$ channels would imply fine tuning in order to be consistent with the small scale of neutrino mass. 
\item[(iii)] With the possible exception of operators which would give magnetic moments to the quarks, operators containing both Higgs and quark fields, which contribute directly only to the $e^+e^-\to H {\bar q} q$ channel, yield only small contributions, since they are kinematically suppressed relative to SM HZ at the energies of interest here and since their operator coefficients are strongly constrained by $Z^0$ pole precision observables (except for top quarks). While we do not directly constrain the coefficients of the quark magnetic moment operators, we find for reasonable values of these coefficients that their contributions to $e^+e^-\to H {\bar q} q$ would also be small. \item[(iv)] The possible effects of $n=6$ bosonic-fermionic operators are quite distinct from those associated with purely bosonic operators. Effects of the latter are rather generic to a variety of Higgs production channels in $e^+e^-$ annihilation, as they enter primarily through modifications of the Higgs self-couplings and Higgs coupling to gauge bosons~\cite{Barger:2003rs} and do not change the topology or analytic properties of the Higgs production amplitudes. Moreover, these modified couplings can enter strongly in both the HZ and gauge boson fusion cross-sections and can, in principle, substantially modify the $e^+e^-\to H {\bar q} q$, $H+{\not\!\! E}$, and $H{\ell}^+{\ell}^-$ channels. In contrast, the impact of the $n=6$ operators considered here is quite channel specific, with the largest effects arising in processes dominated by SM HZ. Moreover, the analytic structure and kinematic dependence of the amplitudes generated by the $n=6$ Higgs-fermion operators are distinct from those of the SM HZ and gauge boson fusion amplitudes, a feature not associated with the purely scalar operators.
Thus, a comprehensive program of Higgs production studies would provide an interesting way to disentangle the possible effects of purely bosonic and Higgs-fermion operators in Higgs production at a linear collider. \end{itemize} In the remainder of the paper, we provide details of the analysis leading to these observations. In Section \ref{sec:smhiggsprod} we briefly review Higgs production in the SM. While the latter is well-known, we include a short discussion here to provide a backdrop for discussion of possible deviations from SM expectations, as the impact of the operators we consider depends strongly on both the production mechanism and energy as well as on the mass of the $H$. Section \ref{sec:basis} contains a discussion of the $n=6$ operator basis. The heart of our study lies in Sections \ref{sec:newhiggs} and \ref{sec:oplimits} that contain, respectively, an analysis of prospective deviations from SM Higgs production due to the operators of Section \ref{sec:basis} and an evaluation of bounds on the corresponding operator coefficients obtained from various phenomenological considerations. In arriving at the latter, we follow a somewhat different procedure than used by the authors of Ref.~\cite{Barbieri:1999tm}, though the numerical differences are small. Section \ref{sec:conclusions} contains a discussion of our results and their implications. Before proceeding, we make a few additional comments about our analysis. \begin{itemize} \item[(a)] For simplicity we have considered the case of a linear collider with unpolarized beams, although the ILC will likely have one or both beams partially polarized (see Ref.~\cite{Moortgat-Pick:2005cw} and references therein). \item [(b)] We do not discuss changes in the Higgs production cross-section caused solely by modifications of the fermion-gauge boson vertices in the SM Higgs production amplitudes. 
Effects of this type do not entail any change in the analytic structure or kinematic dependence of the SM amplitudes, and the constraints implied by precision electroweak data and neutrino mass preclude the introduction of any significant deviations from SM Higgs production cross-sections due to changes in these couplings. \item[(c)] In principle, one should also consider modifications of the SM Higgs-gauge boson couplings due to contributions from $n=6$ fermionic operators to the $\mu$-decay amplitude. The $HWW$ coupling depends on both the SU(2)$_L$ gauge coupling, $g_2$, and $M_W$, while the $HZZ$ coupling depends on $g_2$, $M_Z$, and $\cos\theta_W$, where $\theta_W$ is the weak mixing angle. The $W$ boson mass, weak mixing angle, and $g_2$ are derived quantities that depend on the Fermi constant obtained from muon decay, corrected for $\mu$-decay dependent radiative corrections and possible new physics contributions to the muon decay amplitude. Thus, any $n=6$ operators that contribute to the $\mu$-decay amplitude will affect the $HWW$ and $HZZ$ couplings. In practice, the constraints implied by precision electroweak data are too strong to allow for observable effects in Higgs production cross-sections due to changes in the Higgs-gauge boson couplings generated by $n=6$ fermionic operator contributions to $\mu$-decay. \item[(d)] We concentrate on single Higgs production for simplicity, though the extension to $HH$ production is straightforward. \item[(e)] In this work, we do not consider operators that contain top quark fields. We direct the interested reader to Ref.~\cite{Han:1999xd}. \end{itemize} \section{Higgs Production in the Standard Model} \label{sec:smhiggsprod} In the Standard Model, the Higgs boson can be produced in $e^+ e^-$ collisions primarily by three mechanisms \cite{Gunion:1989we}. In the Higgsstrahlung process (HZ), the $H$ is produced with an accompanying $Z^0$ boson, which then decays to a fermion-antifermion pair.
In the WW-fusion (WWF) and ZZ-fusion (ZZF) processes, the $H$ is produced with an accompanying $\nu_e \bar{\nu}_e$ and $e^+ e^-$ pair, respectively. The cross-sections for these three processes are shown in Fig.~\ref{fig:smhiggs} for $\sqrt{s}=500$ GeV and $1$ TeV for a range of Higgs masses. At $\sqrt{s}=1$ TeV, the WW-fusion diagram dominates, while at $\sqrt{s}=500$ GeV, WW-fusion and Higgsstrahlung can be comparable. At lower energies (not shown here), Higgsstrahlung dominates. The ZZ-fusion cross-section is smaller than the WWF cross-section by about an order of magnitude at all energies. Thus, for $\sqrt{s}=1$ TeV, the Higgs is primarily produced in conjunction with missing energy. At lower $\sqrt{s}$ where HZ is important, however, one must consider final states corresponding to all possible $Z$ decay products: $q \bar{q}$ ($70\%$), missing energy ($20\%$), and charged leptons $\ell^+ \ell^-$ ($10\%$). In general, consideration of specific final state topologies associated with Higgs production and decay as well as $Z^0$-decay can be used to select the production mechanism. For 114 GeV $\leq m_H \lesssim 130$ GeV, the Standard Model Higgs decays primarily to $b \bar{b}$; for higher Higgs masses, the main decay channel is $W^+ W^-$. Thus, a final state with two $b$-jets and missing energy would arise either from WWF (high $\sqrt{s}$), HZ (low $\sqrt{s}$ with $Z^0\to\nu{\bar \nu}$ and $H\to b{\bar b}$), or a combination (intermediate $\sqrt{s}$), and the corresponding event topologies at a linear collider have been studied \cite{Desch:2001at} for light values of $m_H$. The analysis of Ref.~\cite{Desch:2001at} concluded that obtaining a measurement of $\sigma_{WWF}$ with $\sim 10\%$ precision or better would be feasible at a 500 GeV linear collider.
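The rounded $Z^0$ branching fractions quoted above can be checked against the measured partial widths. The sketch below uses approximate PDG-style width values, which are assumptions quoted for illustration rather than inputs taken from this analysis:

```python
# Approximate Z^0 partial widths in MeV (assumed, PDG-style values):
gamma_had = 1744.4        # Z -> hadrons
gamma_inv = 499.0         # Z -> nu nubar (invisible), summed over flavors
gamma_lep = 3 * 83.98     # Z -> e+e-, mu+mu-, tau+tau-

gamma_tot = gamma_had + gamma_inv + gamma_lep
frac_had = gamma_had / gamma_tot
frac_inv = gamma_inv / gamma_tot
frac_lep = gamma_lep / gamma_tot
print(f"hadrons: {frac_had:.1%}, invisible: {frac_inv:.1%}, leptons: {frac_lep:.1%}")
```

which reproduces the rounded $70\%$/$20\%$/$10\%$ split used in the event-topology discussion.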
When $H$ production is accompanied by a charged lepton-antilepton pair ($e^+ e^-$ or $\mu^+ \mu^-$ in the case of HZ and $e^+ e^-$ in the case of ZZF), the Higgs production cross-section and mass can be measured independently of its decay channel (including non-SM decays) \cite{Garcia-Abia:1999kv}. The mass can be reconstructed from the recoil mass of the $\ell^+ \ell^-$ system. The study of Ref.~\cite{Garcia-Abia:1999kv} considered the HZ process at $\sqrt{s}=350$ and $500$ GeV for 120 GeV $\leq m_H\leq$ 160 GeV and found that a measurement of the combined $He^+e^-$ and $H\mu^+\mu^-$ HZ cross-section with $\sim 3\%$ precision could be achieved. Studies have also been performed for the case of HZ where $Z\rightarrow q \bar{q}$ \cite{Garcia-Abia:2005mt,Meyer:2004ha}. In what follows, we assume that each of these event topologies can be identified experimentally, and we study the corresponding impact of $n=6$ operators assuming only SM decays of the $H$. We show that for some operators, deviations from the SM Higgs production cross-sections could be larger than the experimental error \lq\lq benchmarks" indicated above. \begin{figure}[h] \epsfxsize=2in \epsfig{figure=smhiggs_xs.eps,width=6.in} \caption{SM contributions to the Higgs production cross-section.} \label{fig:smhiggs} \end{figure} \section{Operator Basis} \label{sec:basis} The basis of $n=6$ operators containing the Standard Model fields has been enumerated in previous works \cite{Leung:1984ni,Buchmuller:1985jz,Barbieri:1999tm,Han:2004az,Manohar:2006gz,Bell:2005kz,Erwin:2006uc}. Here, we include only those containing 1) the SM Higgs doublet $\phi$ with hypercharge $Y=1$ and 2) SM fermion and/or RH neutrino fields.
It is useful to distinguish three classes of such operators: (A) mass operators; (B) operators containing only fields that transform non-trivially under SM gauge symmetries ({\em i.e.}, do not contain $\nu_R$ fields); and (C) operators containing right-handed neutrinos that are not mass operators.\\ \vskip 0.1in \noindent {\em Class A}. We begin with the mass operators, of which there are two: \begin{eqnarray} {\cal O}^{\ell}_{M,\, AB} &\equiv & (\bar{L}^A \phi \ell_{R}^B)(\phi^{+}\phi) + {\rm h.c.} \nonumber\\ {\cal O}^{\nu}_{M,\, AB} &\equiv & (\bar{L}^A\widetilde{\phi}\nu_{R}^B)(\phi^{+}\phi) + {\rm h.c.}\ \ \ , \nonumber \end{eqnarray} where $L^A$ and $\ell_R^A$ are the left-handed lepton doublet and right-handed charged lepton singlet fields, respectively, and $A$, $B$ are generation indices. (Mass operators for quark fields are analogous.) Operators containing a contracted pair of Pauli matrices, such as ${\bar L}\tau^a \phi \ell_R (\phi^\dag \tau^a \phi)$, can be related to the two operators above via a Fierz transformation. \begin{figure}[h] \epsfxsize=2in \epsfig{figure=massop1and2.eps,width=6.in} \caption{Contribution of Class A operators $(a)$ ${\cal O}^{\ell}_{M,\, AB}$ and $(b)$ ${\cal O}^{\nu}_{M,\, AB}$ to Higgs production.} \label{fig:massop1and2} \end{figure} The ${\cal O}^{\ell}_{M,\, AB}$ and ${\cal O}^{\nu}_{M,\, AB}$ can contribute to Higgs production via the diagrams shown in Fig.~\ref{fig:massop1and2}. In the absence of fine-tuning with the $n=4$ Standard Model mass operators, their coefficients $C_M^{\ell}$ and $C_M^{\nu}$ are tightly constrained by the $\ell$ and $\nu$ masses, respectively: \begin{eqnarray} \frac{\left| C_{M,ee}^{\ell} \right|}{\Lambda^2} &\lesssim & \frac{2 \sqrt{2}\, m_e}{v^3}\nonumber\\ \frac{\left| C_{M,AB}^{\nu} \right|}{\Lambda^2} &\lesssim & \frac{2 \sqrt{2}\, m_{\nu,AB}}{v^3}\ \ \ ,\nonumber \end{eqnarray} where $m_{\nu,AB}$ is an element of the neutrino mass matrix before diagonalization.
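To see how tight these constraints are, the bounds can be evaluated numerically. The sketch below assumes the dimensionally consistent reading $|C_M|/\Lambda^2 \lesssim 2\sqrt{2}\, m_f/v^3$, i.e. $|C_M| v^2/\Lambda^2 \lesssim 2\sqrt{2}\, m_f/v$, with $v=246$ GeV; the $0.1$ eV neutrino mass scale is a representative assumption:

```python
import math

v = 246.0         # Higgs vev in GeV
m_e = 0.511e-3    # electron mass in GeV
m_nu = 1.0e-10    # representative 0.1 eV neutrino mass, in GeV (assumption)

# Bound on the dimensionless combination C_M v^2/Lambda^2 <~ 2*sqrt(2)*m_f/v
bound_e = 2.0 * math.sqrt(2.0) * m_e / v    # ~ 6e-6
bound_nu = 2.0 * math.sqrt(2.0) * m_nu / v  # ~ 1e-12
print(f"electron mass operator: C v^2/Lambda^2 < {bound_e:.1e}")
print(f"neutrino mass operator: C v^2/Lambda^2 < {bound_nu:.1e}")
```

Both bounds lie far below the $10^{-2}$ benchmark, which is the quantitative sense in which the Class A contributions are negligible.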
In addition to this (large) suppression, the interference of these diagrams with the SM Higgs production diagrams is further mass-suppressed due to the fermion chiralities. Thus, the contributions of these two operators to Higgs production are negligible, and we will not consider them further.\\ \vskip 0.1in \noindent {\em Class B}. These operators contain only fields that are not SM singlets ({\em i.e.}, no $\nu_R$): \begin{eqnarray} {\cal O}_{VR,AB} &\equiv& i(\bar{f}_{R}^A\gamma^{\mu}f_{R}^B)(\phi^{+}D_{\mu}\phi) + {\rm h.c.} \nonumber \\ {\cal O}_{VL,AB} &\equiv& i(\bar{F}^A\gamma^{\mu}F^B)(\phi^{+}D_{\mu}\phi) + {\rm h.c.} \nonumber\\ {\cal O}_{VL\tau,AB} &\equiv& i(\bar{F}^A\gamma^{\mu}\tau^{a}F^B)(\phi^{+}\tau^{a}D_{\mu}\phi) + {\rm h.c.}\nonumber \\ {\cal O}_{{\tilde V},\, AB}^q &\equiv& i(\bar{d}_{R}^A\gamma^{\mu}u_{R}^B)(\phi^{+}D_{\mu}\widetilde{\phi}) + {\rm h.c.} \nonumber \\ {\cal O}_{W,AB}^{f} &\equiv& g_{2}(\bar{F}^A \sigma^{\mu\nu}\tau^{a}\phi)f_{R}^B W_{\mu\nu}^{a} + {\rm h.c.} \nonumber \\ {\cal O}_{B,AB}^{f} &\equiv& g_{1}(\bar{F}^A \sigma^{\mu\nu}\phi)f_{R}^B B_{\mu\nu} + {\rm h.c.}\ \ \ , \nonumber \end{eqnarray} where $F^A$ indicates either the left-handed lepton ($L$) or quark ($Q$) doublet for generation $A$ and $f^A$ indicates the RH fields for quarks or charged leptons of generation $A$. We have included the \lq\lq $R$" subscript on the latter for clarity. The fields $u_R^A$ and $d_R^A$ denote the up- and down-type RH quarks of generation $A$. The operator ${\cal O}_{{\tilde V},\, AB}^{q}$ does not contribute to Higgs production in $e^+e^-$ annihilation since it contains no neutral current component, so we will not discuss it further. \vskip 0.1in \noindent{\em Class C}. Lastly, we consider operators containing $\nu_R$ that are not mass operators and that contribute only to the missing energy channel: \begin{eqnarray} {\cal O}_{V\nu,\,AB} &\equiv& i(\bar{\nu}_{R}^A\gamma^{\mu}\nu_{R}^B)(\phi^{+}D_{\mu}\phi) + {\rm h.c.}
\nonumber \\ {\cal O}_{{\tilde V},\, AB} &\equiv& i(\bar{\ell}_{R}^A\gamma^{\mu}\nu_{R}^B)(\phi^{+}D_{\mu}\widetilde{\phi}) + {\rm h.c.} \nonumber \\ {\cal O}_{W,\,AB} &\equiv& g_{2}(\bar{L}^A \sigma^{\mu\nu}\tau^{a}\widetilde{\phi})\nu_{R}^B W_{\mu\nu}^{a} + {\rm h.c.} \nonumber \\ {\cal O}_{B,\,AB} &\equiv& g_{1}(\bar{L}^A \sigma^{\mu\nu}\widetilde{\phi})\nu_{R}^B B_{\mu\nu} + {\rm h.c.} \nonumber \end{eqnarray} For ${\cal O}_{{\tilde V},\, AB}$, ${\cal O}_{W,\,AB}$, and ${\cal O}_{B,\,AB}$, we follow the notation of Refs.~\cite{Bell:2005kz,Erwin:2006uc}. Due to the presence of the $\nu_R$ field, interference of tree-level diagrams containing these operators with the Standard Model Higgs production amplitudes is suppressed by the neutrino mass. Hence, we do not consider these interference effects here and compute only the contributions that are quadratic in their coefficients. As a result, their contributions can be appreciable only if the corresponding $C_6^j$ are not loop-suppressed. \section{Contributions to Higgs Production} \label{sec:newhiggs} \subsection{General Considerations} Before considering in detail the corrections to various production channels, we make a few general observations regarding the operators and amplitudes that one may expect to be largest. To that end, we show in Figure \ref{fig:nonur} the $H$ production amplitudes generated by the operators of Class B and in Figure \ref{fig:nur} those generated by Class C operators. The amplitudes in Figs. \ref{fig:nonur}(a,b) and \ref{fig:nur}(a) correspond to taking the SM HZ amplitude and contracting one of the two $Z^0$ propagators to a point. In SM HZ, the initial $Z^0$ is far off shell for the energies considered here, while the final $Z^0$ propagator is resonant. Thus, we expect the contributions associated with Figs. \ref{fig:nonur}(b) and \ref{fig:nur}(a) to be highly suppressed relative to the SM cross-section since they contain no resonant $Z^0$ propagator.
In contrast, the amplitude of Fig. \ref{fig:nonur}(a) contains a nearly on-shell $Z^0$ propagator but no off-shell $Z^0$ propagator. Consequently, it can be kinematically enhanced relative to the SM HZ amplitude and can generate an appreciable contribution to $H$ production, even in the presence of strong constraints on the corresponding operator coefficient (see Sec. \ref{sec:oplimits}). The corrections generated by the amplitudes of Figs. \ref{fig:nonur}(c,d) and \ref{fig:nur}(b,c) contribute to the $H\ell^A{\bar \ell}^B$ (where at least one of $A$ and $B$ equals $e$) and missing energy channels. For large $\sqrt{s}$, the $H+{\not\!\! E}$ channel is dominated by WWF wherein both $W$ bosons are off shell. Thus, the amplitudes of Figs. \ref{fig:nonur}(c,d) and \ref{fig:nur}(b,c) experience no kinematic suppression relative to the SM cross-section\footnote{This situation contrasts with that of Fig. \ref{fig:nonur}(b), which corresponds to shrinking the resonant $Z^0$ propagator in HZ to a point, thus leading to a kinematic suppression relative to the SM HZ amplitude.}. Even in the intermediate energy regime, where WWF and HZ yield comparable contributions, the effects of Figs. \ref{fig:nonur}(c,d) and \ref{fig:nur}(b,c) can, in principle, be appreciable. We reiterate, however, that for the operators containing $\nu_R$ fields, the amplitudes of Fig. \ref{fig:nur} do not interfere appreciably with the SM amplitudes, and their contributions can only be large when the operator coefficients are not loop-suppressed. We now turn to a detailed discussion of various operator effects.
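The kinematic enhancement invoked above can be made roughly quantitative: replacing the far off-shell $Z^0$ propagator $\sim 1/(s-M_Z^2)$ by a contact vertex gains a factor of order $(s-M_Z^2)/M_Z^2$ once the operator coefficient is expressed as $C v^2/\Lambda^2$. A short numerical sketch (with $M_Z \simeq 91.19$ GeV as an assumed input) gives:

```python
M_Z = 91.19  # Z boson mass in GeV (assumed input)

# Rough enhancement of a contact (four-point) amplitude relative to the
# SM amplitude carrying a far off-shell Z propagator ~ 1/(s - M_Z^2):
enhancements = {}
for sqrt_s in (500.0, 1000.0):
    s = sqrt_s**2
    enhancements[sqrt_s] = (s - M_Z**2) / M_Z**2
    print(f"sqrt(s) = {sqrt_s:.0f} GeV: (s - M_Z^2)/M_Z^2 ~ {enhancements[sqrt_s]:.0f}")
```

so even a coefficient as small as $C v^2/\Lambda^2 = 10^{-2}$ can yield an ${\cal O}(1)$ effect relative to SM HZ at these energies.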
\begin{figure}[h] \epsfxsize=2in \epsfig{figure=nonur.eps,width=5.in} \caption{Contribution of Class B operators to Higgs production.} \label{fig:nonur} \end{figure} \begin{figure}[h] \epsfxsize=2in \epsfig{figure=nur.eps,width=6.in} \caption{Contribution of Class C operators to Higgs production.} \label{fig:nur} \end{figure} \subsection{Class B Operators} Here, we discuss in detail the possible effects of operators in Class B, which contain only fields that transform non-trivially under SM symmetries. \vskip 0.1in \noindent{${\cal O}_{VR,AB}$} \vskip 0.1in The contributions from the operator ${\cal O}_{VR,AB}$ depend on its flavor indices $A$, $B$. For $A=B=e$, ${\cal O}_{VR,ee}$ contributes to all Higgs production channels via the diagram in Fig. \ref{fig:nonur}(a) and additionally to the $H e^+ e^-$ channel via the diagrams in Fig. \ref{fig:nonur}(b-d). In all cases, the exchanged gauge boson is a $Z^0$. As noted above, the analytic structure of the amplitude for Fig. \ref{fig:nonur}(a) differs from that of the SM HZ amplitude only by the absence of the off-shell $Z^0$ propagator. The ratio of its interference with the SM HZ amplitude to the SM HZ cross-section is, thus, given by \begin{equation} \frac{\sigma_{3(a)-HZ\, {\rm int}}}{\sigma_{HZ}} = - \frac{C v^2}{\Lambda^2} \frac{(s-M^2_Z)}{M^2_Z} \frac{\sin^2 \theta_W}{2(\sin^4 \theta_W- \frac{1}{2} \sin^2 \theta_W + \frac{1}{8})}\ \ \ , \label{eq:2lrhzint} \end{equation} where we have omitted the label on the operator coefficient for simplicity. For $C v^2/\Lambda^2 =10^{-2}$, this ratio is $\sim -0.54$ and $\sim -2.2$ for $\sqrt{s}=500$ GeV and $1$ TeV, respectively. The effect of $\sigma_{3(a)-HZ \, {\rm int}}$ relative to $\sigma_{HZ}$ can be large for the values of $\sqrt{s}$ studied here since in the SM HZ amplitude the initial $Z^0$ is far off shell with $M_Z \ll \sqrt{s}$; thus, the SM HZ amplitude contains a kinematic suppression of roughly $M_Z^2/s$ that does not enter the amplitude of Fig.
\ref{fig:nonur}(a). For any of the final states of $H f\bar{f}$ with $f=\mu$, $\tau$, $\nu_\mu$, $\nu_\tau$, or $q$, Eq. (\ref{eq:2lrhzint}) gives the ratio of the contribution of ${\cal O}_{VR,ee}$ to the SM cross-section. For the $H \nu_e \bar{\nu_e}$ final state, the SM also receives a contribution from the WWF process\footnote{Since the neutrinos in the missing energy channel are not detected, one may discuss the relative magnitudes of non-SM contributions using the neutrino flavor basis.}. Interference between WWF -- which involves only a LH (RH) initial state electron (positron) -- and diagram \ref{fig:nonur}(a) containing ${\cal O}_{VR,ee}$ requires a Yukawa coupling on each of the initial-state fermion lines, and is thus strongly suppressed. For the $H e^+ e^-$ production channel, we must include the interference of all of the diagrams shown in Fig. \ref{fig:nonur} with both SM HZ and ZZF. We have computed the contribution of ${\cal O}_{VR,ee}$ arising from interference with the SM amplitudes\footnote{Here, we neglect the contributions that are not due to interference with the SM; we will defer discussion of the non-interference terms to Section \ref{sec:conclusions}.} to the total $H$ production cross-section using the CalcHEP package \cite{Pukhov:1999gg,Pukhov:2004ca}. Results are shown in Fig. \ref{fig:2lree500}, where we give the ratio $\sigma_{{\rm int}}/{\sigma_{\rm SM}}$ as a function of the Higgs mass for different final-state topologies; here $\sigma_{{\rm int}}$ is the contribution to the cross-section of the interference between all of the diagrams in Fig. \ref{fig:nonur} and all of the relevant SM diagrams. We observe that for the $H f\bar{f}$ channels with $f=\mu$, $\tau$, $\nu_\mu$, $\nu_\tau$, or $q$, the ratio is independent of $m_H$, as implied by Eq.~(\ref{eq:2lrhzint}). In contrast, for the $He^+e^-$ and $H+{\not\!\! E}$ channels, the ratio varies with $m_H$ due to the additional contributions from the SM WWF and ZZF processes as well as other diagrams in Fig.~\ref{fig:nonur}. We also note that the effect of ${\cal O}_{VR,ee}$ can be large compared with the SM HZ cross-section. Thus, one could in principle discern the effects of this operator by analyzing events that cannot be produced by the WWF process, such as a dilepton pair and two $b$-jets or two $b$-jets and two other jets. In contrast, the relative effect of ${\cal O}_{VR,ee}$ on the $He^+e^-$ and $H+{\not\!\! E}$ channels is considerably smaller, due to the much larger SM ZZF and WWF contributions in these cases. \begin{figure}[h] \epsfxsize=2in \epsfig{figure=2lree.01_noqq.eps,width=6.in} \caption{Ratio of contribution of ${\cal O}_{VR,ee}$ to SM Higgs production cross-section for $(top)$ $\sqrt{s}=500$ GeV and $(bottom)$ $1$ TeV for $C_{VR,ee} v^2/{\Lambda^2}=10^{-2}$. For $\sqrt{s}=1$ TeV, the line for the $Hq\bar{q}$, $H\mu^+\mu^-$, and $H\tau^+\tau^-$ channels is not shown; it has the value of $-2.2$, independent of Higgs mass.} \label{fig:2lree500} \end{figure} In contrast to the situation with ${\cal O}_{VR,ee}$, the operator ${\cal O}_{VR,AA}$, $A = \mu$, $\tau$, $q$, contributes only through diagram \ref{fig:nonur}(b). This diagram interferes only with the HZ amplitude and contributes only to the $H\mu^+\mu^-$, $H\tau^+\tau^-$ and $Hq \bar{q}$ channels. The contribution of ${\cal O}_{VR,\mu\mu}$ to the $H\mu^+\mu^-$ channel -- relative to the SM cross-section -- is shown in Fig.~\ref{fig:2lrmumu500} as a function of $m_{H}$. The results for ${\cal O}_{VR,\tau\tau}$ are identical; those for ${\cal O}_{VR,qq}$ ($q\not=t$) differ from Fig.~\ref{fig:2lrmumu500} only due to the difference between the $Zqq$ and $Z\ell^+\ell^-$ SM couplings.
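The numerical values quoted after Eq.~(\ref{eq:2lrhzint}) can be cross-checked directly; the sketch below assumes $M_Z \simeq 91.19$ GeV and $\sin^2\theta_W \simeq 0.231$ as input values:

```python
M_Z = 91.19      # GeV (assumed input)
sin2w = 0.231    # sin^2(theta_W) (assumed input)
C = 1.0e-2       # benchmark value of C v^2/Lambda^2

# Interference-to-SM ratio of Eq. (eq:2lrhzint) for the O_{VR,ee} insertion
denom = 2.0 * (sin2w**2 - 0.5 * sin2w + 0.125)
ratios = {}
for sqrt_s in (500.0, 1000.0):
    s = sqrt_s**2
    ratios[sqrt_s] = -C * (s - M_Z**2) / M_Z**2 * sin2w / denom
print(ratios)  # ~ -0.53 at 500 GeV and ~ -2.19 at 1 TeV with these inputs
```

consistent with the $\sim -0.54$ and $\sim -2.2$ quoted in the text (the small residual differences reflect the choice of input values).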
As indicated in Fig.~\ref{fig:2lrmumu500}, the contribution from ${\cal O}_{VR,\mu\mu}$ to the $H\mu^+\mu^-$ channel is $\lesssim 10^{-3}$ of the SM cross-section, and we do not show the correspondingly small correction from ${\cal O}_{VR,qq}$ to the $Hq{\bar q}$ channel. Comparing the contributions of ${\cal O}_{VR,ee}$ and ${\cal O}_{VR,\mu\mu}$ to the $H\mu^+ \mu^-$ channel in Figs.~\ref{fig:2lree500} and \ref{fig:2lrmumu500}, we can see that the effects of diagram \ref{fig:nonur}(b) are strongly suppressed relative to those of diagram \ref{fig:nonur}(a). As noted above, this suppression is to be expected, since in the amplitude of Fig. \ref{fig:nonur}(b) the $Z^0$ is always off-shell ($M_Z \ll \sqrt{s}$), whereas for the values of $\sqrt{s}$ of interest here, on-shell production of both the $H$ and $Z^0$ can occur for the amplitude of Fig. \ref{fig:nonur}(a). As the same arguments will hold for ${\cal O}_{VL,AB}$ and ${\cal O}_{VL\tau,AB}$, we will not consider the case of $A=B=\mu,\tau$ for those operators below.\\ \begin{figure}[h] \epsfxsize=2in \epsfig{figure=2lrmumu.01.eps,width=6.in} \caption{Ratio of contribution of ${\cal O}_{VR,\mu \mu}$ to SM Higgs production cross-section for $(top)$ $\sqrt{s}=500$ GeV and $(bottom)$ $1$ TeV for $C_{VR,\mu \mu} v^2/\Lambda^2=10^{-2}$. Curves for ${\cal O}_{VR,\tau \tau}$ are identical.} \label{fig:2lrmumu500} \end{figure} \vskip 0.1in \noindent{${\cal O}_{VL,ee}$} \vskip 0.1in As with ${\cal O}_{VR,ee}$, the operator ${\cal O}_{VL,ee}$ contributes to Higgs production via the diagrams in Fig. \ref{fig:nonur}(a-d). In all four diagrams, the exchanged gauge boson is a $Z^0$. Diagram \ref{fig:nonur}(a) contributes to all channels, in analogy with ${\cal O}_{VR,ee}$ above.
The contribution from the interference of this diagram with HZ obeys \begin{equation} \label{eq:2llhzint} \frac{\sigma_{3(a)-HZ\, {\rm int}}}{\sigma_{HZ}} = \frac{Cv^2}{\Lambda^2} \frac{(s-M^2_Z)}{M^2_Z} \frac{(\frac{1}{2}-\sin^2 \theta_W)}{2(\sin^4 \theta_W- \frac{1}{2} \sin^2 \theta_W + \frac{1}{8})}\ \ \ . \end{equation} This expression gives the ratio of the contribution of ${\cal O}_{VL,ee}$-SM HZ interference to the SM cross-section for the final states of $Hf\bar{f}$ for $f=\mu$, $\tau$, $\nu_{\mu,\tau}$, and $q$. However, in contrast to the situation with ${\cal O}_{VR,ee}$, the insertion of this operator in diagram \ref{fig:nonur}(a) will also interfere with WWF without electron mass insertions (as well as with HZ and ZZF). Additionally, ${\cal O}_{VL,ee}$ contributes to the $H e^+ e^-$ channel through diagrams \ref{fig:nonur}(b-d), all of which interfere with HZ and ZZF, and to the $H \nu_e \bar{\nu_e}$ channel through diagram \ref{fig:nonur}(b) (although this latter contribution is strongly kinematically suppressed for the reasons discussed above). These contributions are summarized in Fig. \ref{fig:2Lee01} for $C v^2/\Lambda^2=10^{-2}$ as a function of $m_{H}$. As before, the relative effect on the $Hf\bar{f}$ cross-section is $m_H$-independent for $f=\mu$, $\tau$, $\nu_{\mu,\tau}$, and $q$, whereas for the $H e^+ e^-$ and $H+{\not\!\! E}$ channels, the relative importance decreases with $m_H$ owing to the increasing ZZF and WWF contributions. \begin{figure}[h] \epsfxsize=2in \epsfig{figure=2L.01_noqq.eps,width=6.in} \caption{Ratio of contribution of ${\cal O}_{VL,ee}$ to SM Higgs production cross-section for (top) $\sqrt{s}=500$ GeV and (bottom) $\sqrt{s}=1$ TeV for $C_{VL,ee} v^2/\Lambda^2=10^{-2}$.
For $\sqrt{s}=1$ TeV, the line for the $Hq\bar{q}$, $H\mu^+\mu^-$, and $H\tau^+\tau^-$ channels is not shown; it has the value of $2.6$, independent of Higgs mass.} \label{fig:2Lee01} \end{figure} As in the case of ${\cal O}_{VR,AA}$, the contribution from ${\cal O}_{VL,AA}$ for $A=\mu$, $\tau$, or $q$ arises only from Fig. \ref{fig:nonur}(b). Since the corresponding effects are highly suppressed, we do not discuss this case further. \vskip 0.1in \noindent{${\cal O}_{VL\tau,ee}$} \vskip 0.1in As in the previous cases, ${\cal O}_{VL\tau,ee}$ contributes to the Higgs production cross-section through all of the diagrams in Fig.~\ref{fig:nonur}. However, unlike the operators ${\cal O}_{VR,ee}$ and ${\cal O}_{VL,ee}$, ${\cal O}_{VL\tau,ee}$ also contains a charge-changing component. Thus, the gauge boson in diagrams \ref{fig:nonur}(c) and (d) can be either a $Z^0$ or a $W^\pm$, so the insertion of ${\cal O}_{VL\tau,ee}$ in these diagrams contributes to both the $He^+e^-$ and $H+{\not\!\! E}$ channels. Inserting ${\cal O}_{VL\tau,ee}$ in diagram \ref{fig:nonur}(a) contributes to all decay channels in the same manner as ${\cal O}_{VL,ee}$, yielding the same contribution to the HZ cross-section (see, {\em e.g.}, Eq.~(\ref{eq:2llhzint})). The insertion of ${\cal O}_{VL\tau,ee}$ in diagram \ref{fig:nonur}(a) also interferes with ZZF and WWF in the $H e^+ e^-$ and $H \nu_e \bar{\nu_e}$ channels, respectively. Additionally, ${\cal O}_{VL\tau,ee}$ contributes to these channels via diagrams \ref{fig:nonur}(b-d). The contributions of ${\cal O}_{VL\tau,ee}$ to the Higgs production cross-section are shown in Fig. \ref{fig:2Ltauee01} for $C v^2/\Lambda^2=10^{-2}$. \begin{figure}[h] \epsfxsize=2in \epsfig{figure=2Ltau.01_noqq.eps,width=6.in} \caption{Ratio of contribution of ${\cal O}_{VL\tau,ee}$ to SM Higgs production cross-section for (top) $\sqrt{s}=500$ GeV and (bottom) $1$ TeV for $C_{VL\tau,ee} v^2/\Lambda^2=10^{-2}$.
For $\sqrt{s}=1$ TeV, the line for the $Hq\bar{q}$, $H\mu^+\mu^-$, and $H\tau^+\tau^-$ channels is not shown; it has the value of $2.6$, independent of Higgs mass.} \label{fig:2Ltauee01} \end{figure} As in the case of ${\cal O}_{VR,AA}$, the contribution from ${\cal O}_{VL\tau,AA}$ for $A=\mu$, $\tau$, or $q$ arises only from Fig. \ref{fig:nonur}(b). Since the corresponding effects are highly suppressed, we do not discuss this case further. \vskip 0.1in \noindent{${\cal O}_{W,AB}^{f}$ and ${\cal O}_{B,AB}^{f}$} \vskip 0.1in The operators ${\cal O}_{W}^{f}$ and ${\cal O}_{B}^{f}$ contribute to the magnetic and electric dipole moments of the charged leptons. Stringent limits on the electric dipole moments and non-SM contributions to the magnetic moments exist for the cases $A=B=e$ and $A=B=\mu$ \cite{Yao:2006px}. Limits on the branching fractions $\mu \rightarrow e \gamma$, $\tau \rightarrow e \gamma$, and $\tau \rightarrow \mu \gamma$ tightly constrain the cases where $A$ and $B$ are lepton fields and $A\ne B$ \cite{Yao:2006px}. Thus, here we will only consider the possibilities $A=B=\tau$ and $A$, $B$ quark flavor indices. ${\cal O}_{W,\tau\tau}^{f}$ and ${\cal O}_{B,\tau\tau}^{f}$ will contribute only to the $H\tau^+ \tau^-$ final state; production occurs only through diagram \ref{fig:nonur}(b). Due to the derivative on the gauge boson field in each of these operators, the kinematic suppression of this diagram is not as severe as in the previous cases of ${\cal O}_{VR,AB}$, ${\cal O}_{VL,AB}$ and ${\cal O}_{VL\tau,AB}$. We have calculated the contributions of ${\cal O}_{W,\tau\tau}^{f}$ and ${\cal O}_{B,\tau\tau}^{f}$ to the $H\tau^+ \tau^-$ cross-section for $C^j v^2/\Lambda^2=10^{-2}$, neglecting the Yukawa-suppressed contribution to the cross-section due to the interference of diagram \ref{fig:nonur}(b) with the SM HZ process. We find that the contribution to the cross-section is generally less than $0.1\%$ for $\sqrt{s}=500$ GeV, and less than $2\%$ for $\sqrt{s}=1$ TeV.
We also find that the interference of diagram \ref{fig:nonur}(b) with other (tiny) SM processes which contain a Higgs insertion on one of the $\tau$ lines could give comparable contributions to the $H\tau^+ \tau^-$ cross-section. For the case where $A$ and $B$ are light quark fields ($u$, $d$, and $s$), interference with the SM diagrams can be neglected as these contributions are Yukawa-suppressed. There is a contribution to the $Hq^A\bar{q}^B$ cross-section that is $N_C=3$ times larger than the $A=B=\tau$ non-interference cross-section discussed above and is, thus, negligible. In the case where $A=B=b$ or $c$, interference with the SM diagrams can give additional contributions with magnitude comparable to the non-interference contributions. Current limits \cite{Yao:2006px} on the $\tau$ magnetic moment allow values for $C_{B,\tau\tau}^{f}v^2/\Lambda^2$ and $C_{W,\tau\tau}^{f}v^2/\Lambda^2$ of order unity. Somewhat improved limits, but still significantly weaker than $C_{B,W,\tau\tau}^{f}v^2/\Lambda^2=10^{-2}$, can be obtained from $\Gamma(Z\rightarrow \tau^+ \tau^-)$. Similarly weak limits on the quark magnetic moment operators can be obtained from $\Gamma(Z\rightarrow q^A \bar{q}^B)$. However, we will take $10^{-2}$ as an estimate of the upper bound for $C_{B,W}^f v^2/\Lambda^2$, as we do not expect new physics to make a contribution to the magnetic moments greater than the QED Schwinger term. Nevertheless, we do not rule out the possibility that the coefficients of these operators could be considerably larger due to strong dynamics above the scale $\Lambda$. \subsection{Class C Operators} All of the Class C operators contribute only to the missing energy channel since they contain $\nu_R$ fields. The Higgs production diagrams for these operators are shown in Fig.~\ref{fig:nur}. For each operator, the interference of any amplitude in Fig.~\ref{fig:nur} with the relevant SM amplitude is $m_\nu$-suppressed, so we do not include the interference contributions here.
The resulting corrections to the SM Higgs production cross-sections are, thus, quadratic in the operator coefficients. Since the final state neutrino-antineutrino pair is not observed, we do not require their flavors to be the same. As discussed above, the contribution from diagram \ref{fig:nur}(a) is kinematically suppressed due to the off-shell $Z^0$ boson, so we expect that only those operators contributing through diagrams \ref{fig:nur}(b) and (c) will be able to generate substantial contributions. A comparison of the contributions from these operators to the $H+{\not\!\! E}$ channel is given in Fig.~\ref{fig:nur500Gev} for $C v^2/\Lambda^2=10^{-2}$. For this value of the coefficient, the correction induced by the Class C operators is generally less than $10^{-3}$ of the SM cross-section. However, if these operators are generated by strong dynamics or tree-level gauge interactions, their relative effects could be substantially larger. In this respect, the operator ${\cal O}_{\tilde{V},AB}$ is particularly interesting, as an operator of this type could arise in models with mixing between LH and RH gauge bosons. Moreover, it is not as strongly constrained by precision electroweak data as the Class B operators, since it does not interfere with the SM amplitudes that contain only LH neutrino fields. In Section \ref{sec:oplimits} we discuss the various phenomenological and theoretical constraints on ${\cal O}_{\tilde{V},AB}$, including those implied by the scale of neutrino mass and naturalness considerations. \vskip 0.1in \noindent{${\cal O}_{V\nu,AB}$} \vskip 0.1in The operator ${\cal O}_{V\nu,AB}$ contributes to the missing energy channel only via the diagram in Fig. \ref{fig:nur}(a) where the exchanged gauge boson is a $Z^0$ and the final state contains a right-handed neutrino and a left-handed antineutrino.
Thus, the contribution of this operator is strongly kinematically suppressed, as reflected in Fig.~\ref{fig:nur500Gev}. \vskip 0.1in \noindent{${\cal O}_{\tilde{V},AB}$} \vskip 0.1in The gauge boson in ${\cal O}_{\tilde{V},AB}$ is always a $W^\pm$, and this operator contributes to the missing energy channel via the diagrams in Fig.~\ref{fig:nur}(b) and (c). The final state contains one right-handed neutrino and one right-handed antineutrino, in the case of \ref{fig:nur}(b), or a left-handed neutrino and antineutrino in the case of \ref{fig:nur}(c). As this operator contributes through diagrams (b) and (c), whose effect on the production cross-section is not kinematically suppressed relative to WWF, the relative importance of its contribution is larger than that of ${\cal O}_{V\nu,AB}$. \vskip 0.1in \noindent{${\cal O}_{W,AB}$ and ${\cal O}_{B,AB}$} \vskip 0.1in The neutrino dipole operators ${\cal O}_{W,AB}$ and ${\cal O}_{B,AB}$ contribute to Higgs production via diagram \ref{fig:nur}(a), wherein the exchanged gauge boson is either a $Z^0$ or a $\gamma$ and the final state contains a neutrino and an antineutrino that are either both right-handed or both left-handed. Diagrams \ref{fig:nur}(b) and (c) with an insertion of ${\cal O}_{W,AB}$ contain only the $W^\pm$ boson; they contribute to the same final states as does ${\cal O}_{\tilde{V},AB}$. Note that since ${\cal O}_{B,AB}$ contributes only through \ref{fig:nur}(a), its contribution will be suppressed relative to that of ${\cal O}_{W,AB}$. Again, this feature can be seen from Fig.~\ref{fig:nur500Gev}. \begin{figure}[h] \epsfxsize=2in \epsfig{figure=nur.01.eps,width=6.in} \caption{Contributions of operators containing $\nu_R$ to the Higgs missing energy final state for $\sqrt{s}=500$ GeV. Results are given as a fraction of the total Standard Model $H \nu \bar{\nu}$ cross-section, summed over the three flavors.
Curves are drawn for the case $C^j v^2/\Lambda^2=10^{-2}$.} \label{fig:nur500Gev} \end{figure} \subsection{Flavor Nonconserving Operators} \label{sec:flavc} Now, we consider the case $A\ne B$ for those operators having the potentially largest effects in the flavor conserving channels: ${\cal O}_{VR,AB}$, ${\cal O}_{VL,AB}$, and ${\cal O}_{VL\tau,AB}$. Here, we have two distinct cases: $A$ or $B=e$, and both $A$, $B\ne e$. The latter case can only contribute through diagram \ref{fig:nonur}(b), whose effect is kinematically suppressed. Hence, we ignore this case. For all three of these flavor nonconserving operators, Higgs production can occur through diagrams \ref{fig:nonur}(b), and (c) or (d), giving a final state containing $e^{\pm} \mu^{\mp}$ or $e^{\pm} \tau^{\mp}$. Although diagrams \ref{fig:nonur}(b) (in the case of ${\cal O}_{VL,AB}$ or ${\cal O}_{VL\tau,AB}$), (c), and (d) (for ${\cal O}_{VL\tau,AB}$ only) could also contribute to the missing energy final state, given the small number of events involved (to be seen in Section \ref{sec:oplimits}), we consider only the final states with charged leptons, due to their unique flavor-nonconserving signature. Results for the case $C v^2 / \Lambda^2 =10^{-2}$ are shown in Table \ref{table:fcncs} in units of $\mbox{ab}$. For a linear collider with $1 \, \mbox{ab}^{-1}$ of data, these numbers can be interpreted as numbers of events. \begin{table} \caption{Cross-sections for the flavor-nonconserving processes $e^+ e^- \rightarrow H e^{\pm} l^{\mp}$, $l=\mu,\tau$ for $Cv^2/\Lambda^2=10^{-2}$. Both charge combinations are included.
Results are in units of $10^{-6}$ pb.} \label{table:fcncs} \begin{tabular}{cccc|ccc } & & $\sqrt{s}=500$ GeV & & & $\sqrt{s}=1$ TeV & \\ $m_H$ & $100$ GeV & $250$ GeV & $400$ GeV & $100$ GeV & $300$ GeV & $500$ GeV\\ \hline ${\cal O}_{VR,e\ell}$ & $3.4$ & $0.72$ & $0.024$ & $28.$ & $14.$ & $4.2$\\ ${\cal O}_{VL,e\ell}$, ${\cal O}_{VL\tau,e\ell}$ & $3.2$ & $0.67$ & $0.023$ & $27.$ & $13.$ & $4.1$\\ \end{tabular} \end{table} \section{Limits on Operator Coefficients} \label{sec:oplimits} Precision electroweak data constrain the magnitude of many of the $C_6^j v^2/\Lambda^2$ to be considerably smaller than the $10^{-2}$ reference value used in Section \ref{sec:newhiggs}. Constraints on a subset of the Class B operator coefficients have been obtained using LEP $Z^0$-pole data\cite{Barbieri:1999tm} and a wider array of precision electroweak observables that includes studies at LEP2 and low-energy experiments\cite{Han:2004az}. Both analyses relied on the assumption of U(3)$^5$ symmetry, and \cite{Han:2004az} performed fits to EWPO including the effects of more than one operator simultaneously. Here, we update these earlier analyses in a way that focuses on the Class B and Class C operators with the potentially largest effects in Higgs production. For the Class B case, these operators are ${\cal O}_{VR,\, ee}$, ${\cal O}_{VL,\, ee}$, and ${\cal O}_{VL\tau,\, ee}$. For the Class C operators, the direct experimental limits on the coefficient of ${\cal O}_{{\tilde V},\, AB}$ are weaker than our reference value of $10^{-2}$. Since the effect of this operator is quadratic in the corresponding coefficient, any significant increase in its value could lead to a several percent effect in the missing energy channel. We discuss the direct experimental and indirect constraints on these operators below.
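Since the contribution of such an operator enters the cross-section only through $|C|^2$, its fractional effect rescales quadratically with the coefficient. A minimal numerical sketch in Python (the reference fraction $10^{-3}$ at $C v^2/\Lambda^2 = 10^{-2}$ is read off from the Class C discussion above; the function itself is our illustration, not part of any fitting code):

```python
# Quadratic rescaling of a correction that enters the cross-section
# only through |C|^2 (no interference with the SM amplitude).
def relative_correction(c, c_ref=1e-2, frac_ref=1e-3):
    """Fractional correction to the SM cross-section at coefficient c,
    given a reference fraction frac_ref measured at c_ref."""
    return frac_ref * (c / c_ref) ** 2

# At the reference point the effect is 0.1% of the SM rate ...
print(round(relative_correction(1e-2), 6))   # ~0.001
# ... but a five-fold larger coefficient already gives a few percent.
print(round(relative_correction(5e-2), 6))   # ~0.025
```

This is simply the statement that a coefficient a few times larger than the reference value produces a several-percent effect in the missing energy channel.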
In order to obtain constraints on ${\cal O}_{VR,\, ee}$, ${\cal O}_{VL,\, ee}$, and ${\cal O}_{VL\tau,\, ee}$, we have performed a fit to EWPO using the GAPP routine\cite{Erler:1999ug}. The observables included in this fit comprise the data collected from $Z^0$-pole studies at LEP and SLD and a variety of low-energy precision observables, including cesium atomic parity violation\cite{Wood:1997zq}, parity-violating M\o ller scattering\cite{Anthony:2003ub}, elastic neutrino-electron scattering\cite{Vilain:1994qy}, and deep inelastic neutrino-nucleus scattering\cite{Zeller:2001hh} (for a complete list of the EWPO used, see Ref.~\cite{Yao:2006px}). We have used the value $171.4 \pm 2.1$ GeV given in \cite{Brubaker:2006xn} for $M_t$. For each operator, we derive bounds on the corresponding $C_6^j v^2/\Lambda^2$ by including both the direct contributions to a given observable as well as indirect effects that enter through modifications of the SM input parameters. The operator ${\cal O}_{VL\tau,\, ee}$, for example, contains both neutral and charged current components. The neutral current component modifies the coupling of LH electrons to the $Z^0$ and enters all $e^+ e^-$ annihilation observables as well as those involving low-energy parity-violating processes. The charged current component contributes to the amplitude for muon decay. Inclusion of the latter contribution modifies the value of the Fermi constant, $G_\mu$, that is extracted from the experimental muon lifetime and used to normalize all electroweak amplitudes in the SM. It also indirectly affects the value of $\sin^2{\hat\theta}_W(M_Z)$, which is a derived quantity in the SM given $G_\mu$, $\alpha$, and $M_Z$ as inputs. Our procedure differs from that followed by Refs.~\cite{Barbieri:1999tm,Han:2004az} in a few respects. First, we do not assume a U(3)$^5$ symmetry that relates operators involving different fermion generations. For example, ${\cal O}_{VR,\, ee}$ and ${\cal O}_{VR,\, \mu\mu}$ are treated as distinct.
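To illustrate the last point, the tree-level relation $\sin^2\theta_W \cos^2\theta_W = \pi\alpha/(\sqrt{2}\, G_\mu M_Z^2)$ can be inverted numerically. This is a sketch only; GAPP of course includes the full radiative corrections, which shift the $\overline{\rm MS}$ value to $\approx 0.231$:

```python
import math

# Tree-level derivation of sin^2(theta_W) from the inputs (G_mu, alpha, M_Z):
#   s^2 c^2 = pi * alpha / (sqrt(2) * G_mu * M_Z^2)
alpha = 1.0 / 137.036          # fine-structure constant
G_mu  = 1.16637e-5             # Fermi constant, GeV^-2
M_Z   = 91.1876                # Z boson mass, GeV

A  = math.pi * alpha / (math.sqrt(2.0) * G_mu * M_Z**2)
s2 = 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * A))   # sin^2(theta_W), tree level
print(round(s2, 4))   # ~0.2122 at tree level
```

A shift in $G_\mu$ induced by the charged current component of the operator therefore propagates into $\sin^2{\hat\theta}_W(M_Z)$ and, through it, into every neutral current observable.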
Although it is quite reasonable to assume that flavor-dependent effects from physics above the scale $\Lambda$ are determined by Yukawa interactions (as in models with minimal flavor violation) and are, thus, suppressed, we will not make that assumption here. Second, the fits performed in Refs.~\cite{Barbieri:1999tm,Han:2004az} allowed for the simultaneous contribution from multiple effective operators and were correspondingly performed for a fixed value of $m_H$. Here, we instead include the effect of only one operator at a time and allow the value of $m_H$ to remain a fit parameter. The results for the three most important Class B operators are given in Table \ref{table:gapplimits}, where we show the $1\sigma$ results and 95\% C.L. ranges for the $C_6^j v^2/\Lambda^2$ in the second and third columns, respectively. In the last column, we give the fit results for $m_H$; for comparison, an SM fit, with the $C_6^j$ set to $0$, gives $m_H = 84^{+33}_{-24}$ GeV. We find that inclusion of the operator containing $e_R$ fields tends to lower the best fit value for $m_H$, although it still falls within $2\sigma$ of the direct search lower bound, $m_H=114.4$ GeV. In contrast, the two operators containing first generation lepton doublet fields increase the best fit value for $m_H$. \begin{table} \caption{Bounds on the coefficients $C_6^j$ of the $n=6$ leptonic operators implied by electroweak precision observables (EWPO). The first column lists the operator. The second column gives the result for $C_6^j v^2/\Lambda^2$ obtained from a fit to all EWPO using the GAPP routine\cite{Erler:1999ug}. The third column gives the 95\% C.L. range on $C_6^j v^2/\Lambda^2$, while the last column gives the corresponding fit values for the Higgs mass, $m_H$. } \label{table:gapplimits} \begin{tabular}{c|c|c|c} Operator & $C_6^j v^2/\Lambda^2$ & 95\% C.L.
range & $m_H$ \\ \hline $O_{VR,ee}$ & $-0.00037 \pm 0.00041$ & $-0.0012 \rightarrow 0.00044$ & $72^{+35}_{-24}$ GeV \\ $O_{VL,ee}$ & $0.00053\pm 0.00035$ & $-0.00015 \rightarrow 0.0012$ & $95^{+38}_{-28}$ GeV \\ $O_{VL\tau,ee}$ & $0.00039 \pm 0.00039$ & $-0.00036 \rightarrow 0.0011$ & $90^{+36}_{-26}$ GeV \\ \end{tabular} \end{table} We also observe that the constraints given in Table \ref{table:gapplimits} are somewhat weaker than those obtained in Ref.~\cite{Han:2004az}, presumably because we have not invoked a U(3)$^5$ symmetry and have allowed the value of $m_H$ to vary\footnote{In the notation of Ref.~\cite{Han:2004az}, the operators ${\cal O}_{VR,\, ee}$, ${\cal O}_{VL,\, ee}$, and ${\cal O}_{VL\tau,\, ee}$ correspond to ${\cal O}_{he}$, ${\cal O}_{h\ell}^s$, and ${\cal O}_{h\ell}^t$ when a U(3)$^5$ symmetry is assumed.}. The results of our fit -- together with the analysis of Section \ref{sec:newhiggs} -- thus indicate the largest possible effects that one might anticipate for Class B operators. We have also checked that EWPO do not allow the $|C_6^j v^2/\Lambda^2|$ to be larger than $10^{-2}$ for the other flavor-conserving Class B operators by considering the $Z^0$-pole observables alone and comparing SM predictions for a range of $m_H$ with the results obtained from LEP and SLD. To this end, we obtain the SM predictions using ZFITTER \cite{Bardin:1999yd,Arbuzov:2005ma}, which requires input values for $M_Z$, $M_t$, $m_H$, $\alpha_s(M_Z)$, and $\Delta \alpha^{(5)}_{had}$.
We take the following for our ZFITTER inputs: \begin{eqnarray} \label{eq:inputs} M_Z &=& 91.1876 \pm 0.0021 \,\mbox{GeV} \, \mbox{\cite{Yao:2006px}} \nonumber\\ M_t &=& 171.4 \pm 2.1 \,\mbox{GeV} \, \mbox{\cite{Brubaker:2006xn}} \nonumber\\ m_H &=& 200 \pm 100 \,\mbox{GeV} \\ \alpha_s(M_Z) &=& 0.1176 \pm 0.002 \,\mbox{\cite{Yao:2006px}} \nonumber\\ \Delta \alpha^{(5)}_{had} (\alpha_s(M_Z) = 0.1176) &=& 0.02772 \pm 0.0002 \nonumber \end{eqnarray} where the value for $\Delta \alpha^{(5)}_{had}$ is a linear interpolation of points given in \cite{Erler:1998sy}. The range on $m_H$ is chosen to be (possibly artificially) large to accommodate any possibility that the current upper bounds on $m_H$ could be evaded with the addition of the operators ${\cal O}_{6,j}$. The authors of \cite{Barbieri:1999tm} find, for a particular Higgs mass, ranges of the operator coefficients for which $\chi^2-\chi^2_{min}<3.85$, where $\chi^2_{min}$ is the $\chi^2$ of the SM fit with the operator coefficients set to zero. They find values of the coefficients of ${\cal O}_{VR}$ and ${\cal O}_{VL\tau}$ which satisfy this criterion for values of $m_H$ as high as $300$ GeV. Even when we include the error for this broad range of Higgs mass, we still find limits on the operator coefficients that are tighter than our reference value of $10^{-2}$. These yield the following predictions for the SM observables: \begin{eqnarray} \Gamma(Z\rightarrow \mbox{inv}) &=& 501.399^{+0.216}_{-0.201} \,\mbox{MeV} \nonumber \\ \Gamma(Z\rightarrow e^+ e^-) &=& 83.932^{+0.053}_{-0.044}\, \mbox{MeV} \nonumber \\ \Gamma(Z\rightarrow \mu^+ \mu^-) &=& 83.932^{+0.053}_{-0.044}\, \mbox{MeV} \nonumber \\ \Gamma(Z\rightarrow \tau^+ \tau^-) &=& 83.742^{+0.053}_{-0.044}\,\mbox{MeV}. \nonumber \end{eqnarray} The errors on these values were obtained by separately computing the errors due to the uncertainties on the input parameters given in Eq.~(\ref{eq:inputs}) and adding them in quadrature.
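The quadrature combination used above can be sketched as follows (the individual error components listed are hypothetical placeholders, not the actual parametric shifts from our ZFITTER runs):

```python
import math

def combine_in_quadrature(errors):
    """Total uncertainty from independent error components."""
    return math.sqrt(sum(e * e for e in errors))

# Hypothetical breakdown of the uncertainty on a Z partial width (MeV),
# one entry per input parameter (M_Z, M_t, m_H, alpha_s, delta-alpha-had):
components = [0.010, 0.020, 0.045, 0.008, 0.012]
total = combine_in_quadrature(components)
print(round(total, 4))   # dominated by the largest single component
```

Because the components add in quadrature, the total is dominated by the largest single parametric uncertainty, here the one associated with the broad $m_H$ range.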
The asymmetry in the errors is due to the dependence of the results on $\ln{m_H}$. These predictions are to be compared with the experimental values for the $Z$ widths and branching fractions \cite{Yao:2006px}: \begin{eqnarray} \Gamma(Z\rightarrow \mbox{inv}) &=& 499.0 \pm 1.5 \,\mbox{MeV} \nonumber \\ \Gamma(Z\rightarrow e^+ e^-) &=& 83.91 \pm 0.12 \, \mbox{MeV} \nonumber \\ \Gamma(Z\rightarrow \mu^+ \mu^-) &=& 83.99 \pm 0.18 \, \mbox{MeV} \nonumber \\ \Gamma(Z\rightarrow \tau^+ \tau^-) &=& 84.08 \pm 0.22 \, \mbox{MeV} \nonumber \\ BR(Z\rightarrow e^{\pm} \mu^{\mp}) &<& 1.7 \times 10^{-6} \, \mbox{at} \, 95\% \, \mbox{CL}\nonumber \\ BR(Z\rightarrow e^{\pm} \tau^{\mp}) &<& 9.8 \times 10^{-6} \, \mbox{at} \, 95\% \, \mbox{CL} \nonumber \end{eqnarray} The largest source of theoretical error in the SM predictions, as well as the asymmetry in the theoretical error, arises from the range taken for $m_H$. However, the experimental error dominates over the theoretical error for all of the above observables. The resulting bounds on the $Cv^2/\Lambda^2$ for the Class B operators are given in Table \ref{table:oplimits}. We do not include bounds on the ${\cal O}_{VR,ee}$, ${\cal O}_{VL,ee}$, and ${\cal O}_{VL\tau,ee}$ operators in this table because the GAPP fit provides significantly tighter limits than the $Z$ partial widths alone. From the limits on the branching fractions of the $Z$ to $e^{\pm} \mu^{\mp}$ and $e^{\pm} \tau^{\mp}$, we can deduce limits on the coefficients of ${\cal O}_{VR,AB}$, ${\cal O}_{VL,AB}$, and ${\cal O}_{VL\tau,AB}$, where $A\ne B$ and $A$ or $B=e$. We obtain \begin{eqnarray} \left| \frac{C_{e \mu} v^2}{\Lambda^2} \right| &<& 0.0071 \nonumber \\ \left| \frac{C_{e \tau} v^2}{\Lambda^2} \right| &<& 0.017 \end{eqnarray} at $95\%$ CL for all three operators.
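Because the flavor-nonconserving cross-sections scale as $|C v^2/\Lambda^2|^2$, these limits translate directly into maximal event yields. A rough numerical sketch, using the $\sqrt{s}=1$ TeV, $m_H=100$ GeV entry of Table \ref{table:fcncs} for ${\cal O}_{VR,e\ell}$ ($28\times 10^{-6}$ pb $= 28$ ab at $Cv^2/\Lambda^2=10^{-2}$) and an integrated luminosity of $1\,\mbox{ab}^{-1}$:

```python
# Events = sigma * integrated luminosity, with sigma proportional to |C|^2.
sigma_ref_ab = 28.0            # sigma(H e tau) in ab at C v^2/Lambda^2 = 1e-2
                               # (O_{VR,e tau}, sqrt(s) = 1 TeV, m_H = 100 GeV)
c_ref, c_max = 1e-2, 1.7e-2    # reference coefficient and 95% CL bound
lumi_abinv = 1.0               # integrated luminosity in ab^-1

sigma_max = sigma_ref_ab * (c_max / c_ref) ** 2   # ~81 ab at the bound
n_events  = sigma_max * lumi_abinv
print(round(n_events))         # a few tens of H e+- tau-+ events
```

This is consistent with the $\sim 80$ $H e^{\pm}\tau^{\mp}$ events quoted in the text.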
As these coefficients enter into the cross-sections for these processes quadratically, we can see from Table \ref{table:fcncs} that these limits allow, for example, as many as $\sim 80$ $H e^{\pm} \tau^{\mp}$ events for a Higgs in the low-mass region at a linear collider with $\sqrt{s}=1$ TeV. It will be interesting to explore the feasibility of observing these events at a linear collider. \begin{table} \caption{$95\%$ CL intervals on the coefficients $C_6^j$ of the $n=6$ leptonic operators, multiplied by $v^2/{\Lambda^2}$. In the case of ${\cal O}_{\nu_R,AB}$, the limit is instead on $\sum_{A,B} \left|C_{\nu_R}^{AB}\right|^2 v^4/\Lambda^4$.} \label{table:oplimits} \begin{tabular}{l|ll } Operator & $Min(\frac{C^j v^2}{\Lambda^2})$ & $Max(\frac{C^j v^2}{\Lambda^2})$ \\ \hline ${\cal O}_{VR,\mu\mu} $ & $-0.0027 $& $0.0020$ \\ ${\cal O}_{VR,\tau\tau} $ & $-0.0050$& $0.0007$\\ ${\cal O}_{VR,e\mu}$ & $-0.0071$ & $0.0071$ \\ ${\cal O}_{VR,e\tau}$ & $-0.017$ & $0.017$\\ ${\cal O}_{VL,\mu\mu} $ & $-0.0017$ & $0.0023$ \\ ${\cal O}_{VL,\tau\tau} $ & $-0.0006$ & $0.0043$ \\ ${\cal O}_{VL,e\mu}$ & $-0.0071$ & $0.0071$ \\ ${\cal O}_{VL,e\tau}$ & $-0.017$ & $0.017$\\ ${\cal O}_{VL\tau,\mu\mu} $ & $-0.0039$ & $0.0054$ \\ ${\cal O}_{VL\tau,\tau\tau} $ & $-0.0006$ & $0.0043$ \\ ${\cal O}_{VL\tau,e\mu}$ & $-0.0071$ & $0.0071$ \\ ${\cal O}_{VL\tau,e\tau}$ & $-0.017$ & $0.017$\\ ${\cal O}_{\nu_R,AB}$ & & $<0.0068$ \\ \end{tabular} \end{table} Some, but not all, of the Class C operators are also constrained by EWPO. To constrain $C_{V\nu,AB}$, we consider the contribution of ${\cal O}_{V\nu,AB}$ to the invisible width of the $Z$ boson, $\Gamma_{\rm inv}$.
Although the measured value of $\Gamma_{\rm inv}$ disagrees slightly with the SM prediction (the experimental value is $1.6\sigma$ below the SM expectation), ${\cal O}_{V\nu,AB}$ cannot explain this small discrepancy, as it does not interfere with the SM process and can only increase the rate for $Z\rightarrow \nu \bar{\nu}$. We calculate the limit on this operator using the procedure for obtaining one-sided confidence level intervals given in Ref.~\cite{Feldman:1997qc}. For the remaining operators, all of which contain $\nu_R$, we consider first the direct experimental constraints. For example, the operator ${\cal O}_{\tilde{V},eB}$ also contributes to the Michel spectrum for the decay of polarized muons. From the recent global analysis of muon decay measurements reported in Ref.~\cite{Gagliardi:2005fg} we obtain \begin{equation} \left| C_{{\tilde V},\, eB} v^2 / \Lambda^2\right| \leq 0.208 \end{equation} at 90\% C.L. In contrast to the situation with the Class B operators and ${\cal O}_{V\nu,AB}$, the direct constraints on ${\cal O}_{\tilde{V},eB}$ are considerably weaker than our benchmark $10^{-2}$ value for $C_6^j v^2/\Lambda^2$. Considerably more stringent bounds can be obtained by observing that ${\cal O}_{\tilde{V},eB}$ contributes to the $n=6$ neutrino mass operator ${\cal O}^\nu_{M,\, AB}$ through radiative corrections. A complete renormalization group analysis of the mixing between these operators was carried out in Ref.~\cite{Erwin:2006uc}. In order to avoid \lq\lq unnatural" fine tuning, the radiative contributions to the neutrino mass matrix element $m_\nu^{AB}$ due to ${\cal O}_{\tilde{V},eB}$ cannot be substantially larger than the scale of neutrino mass itself. Using an upper bound of 1 eV for this scale, we obtain the following naturalness bound on ${C_{\tilde{V},eB} v^2}/{\Lambda^2}$: \begin{equation} \left| \frac{C_{\tilde{V},eB} v^2}{\Lambda^2} \ln \frac{v}{\Lambda} \right| < (0.5-3) \times 10^{-3}.
\label{eq:mnubounds} \end{equation} where the range on $C_{\tilde{V},eB}$ corresponds to $114 \, \mbox{GeV} < m_H < 185 \, \mbox{GeV}$. The latter affects the renormalization group analysis since the entries in the anomalous dimension matrix depend on the Higgs boson quartic self coupling, $\lambda=m_H^2/2v^2$. The coefficients of the magnetic moment operators are bounded by upper limits on neutrino magnetic moments that range from $10^{-10}$ to $10^{-12}$ Bohr magnetons \cite{Raffelt:1999gv,Sutherland:1975dr,Xin:2005ky,Daraktchieva:2005kn,Liu:2004ny,Beacom:1999wx}. Taking the upper limit of these bounds implies that $|C_{W,AB} v^2/\Lambda^2|$ and $|C_{B,AB} v^2/\Lambda^2|$ are no larger than $\sim 10^{-5}$. Neutrino mass naturalness considerations imply bounds that are roughly four orders of magnitude more stringent than those obtained directly from magnetic moment limits. Either way, the effects of these operators on Higgs production will be unobservable. \section{Discussion and Conclusions} \label{sec:conclusions} The bounds we obtain on the operator coefficients generally satisfy $|Cv^2/\Lambda^2|<10^{-2}$, implying smaller corrections to the Higgs production cross-sections than those given in Figures \ref{fig:2lree500}-\ref{fig:nur500Gev}, for which we have used $Cv^2/\Lambda^2 = 10^{-2}$. Nevertheless, comparing the bounds on $|Cv^2/\Lambda^2|$ for ${\cal O}_{VR,ee}$, ${\cal O}_{VL,ee}$, and ${\cal O}_{VL\tau,ee}$ with the results in Figures \ref{fig:2lree500}, \ref{fig:2Lee01}, and \ref{fig:2Ltauee01}, we see that the interference with the SM HZ process can be substantial in the $Hf{\bar f}$ channel with $f=\mu$, $\tau$, or $q$, with corrections of more than 5\% (20\%) allowed for $\sqrt{s}=500$ GeV (1 TeV). The relative impact of these operators on the $He^+e^-$ and $H+{\not\!\! E}$ channels is considerably smaller, since the SM cross-section receives large WWF and ZZF contributions. 
Additionally, we have checked the non-interference contributions of these operators and find that, for $|C v^2/\Lambda^2|=10^{-3}$ (toward the upper end of the $95\%$ CL range), the non-interference terms can contribute an additional $3\%$ to the $Hf{\bar f}$ cross-section for $\sqrt{s}=1$ TeV. The contributions of the non-interference terms to the $Hf{\bar f}$ channel at $\sqrt{s}=500$ GeV and to the $H+{\not\!\! E}$ and $He^+ e^-$ channels at either $\sqrt{s}$ are all $< 1\%$. Conversely, despite the less stringent limits on their coefficients, the operators ${\cal O}_{VR, AA}$, ${\cal O}_{VL,AA}$, and ${\cal O}_{VL\tau,AA}$ for $A=\mu$, $\tau$, or $q$ cannot generate significant corrections to the $HA{\bar A}$ production cross-section, due to the kinematic suppression of the corresponding interference amplitude relative to SM HZ. In the case of the Class C operators, which contribute only to the $H+{\not\!\! E}$ channel, the magnitude of possible corrections is generally smaller than $10^{-3}$ of the SM cross-section, assuming $Cv^2/\Lambda^2=10^{-2}$. Amplitudes containing these operators do not interfere with SM amplitudes, as they contain RH neutrino states, so the quadratic dependence of their contribution to the cross-section on the operator coefficients can lead to considerable suppression. From our analysis of the limits in Section \ref{sec:oplimits}, we conclude that for ${\cal O}_{V\nu,AB}$, whose coefficient is constrained by the invisible width of the $Z^0$, the possible effect is negligible. A similar conclusion applies to ${\cal O}_{W}$ and ${\cal O}_{B}$, which are constrained by limits on neutrino magnetic moments.
For the operator ${\cal O}_{\tilde V,eB}$, the constraint on the coefficient implied by the $\mu$-decay Michel spectrum is more than an order of magnitude weaker than assumed in obtaining Figure \ref{fig:nur500Gev}, and would allow the corresponding correction to the missing energy channel to be of order 10\% or more (recall that the dependence on the coefficient is quadratic). On the other hand, the bound obtained from neutrino mass naturalness considerations is substantially smaller than $|Cv^2/\Lambda^2|=10^{-2}$, suggesting an unobservable contribution from this operator to the $H+{\not\!\! E}$ cross-section. Thus, the observation of a deviation in this channel without similar deviations in the $Hq\bar{q}$ and $H\ell\bar{\ell}$ channels -- though unlikely -- would imply the presence of fine tuning in order to avoid unacceptably large radiative contributions to neutrino mass. Summarizing the situation more broadly, we find that there exists considerably less room for effects on Higgs production from higher dimension operators containing fermions than from purely bosonic operators. Constraints from EWPO generally imply $|C v^2/\Lambda^2| \ll 10^{-2}$. The impact of this suppression can be overcome only in channels that are dominated by SM HZ, due to the absence of an off-shell $Z^0$-boson propagator in amplitudes containing any of the operators ${\cal O}_{VR,ee}$, ${\cal O}_{VL,ee}$, and ${\cal O}_{VL\tau,ee}$. In contrast, purely bosonic operators, such as $\partial^\mu(\phi^\dag\phi)\partial_\mu(\phi^\dag\phi)$, can lead to potentially significant deviations in a variety of channels simultaneously, since (a) they affect the couplings of the Higgs to gauge bosons and (b) the constraints from EWPO are weak\cite{Barger:2003rs}.
A comprehensive study of Higgs production in a variety of channels at a linear collider would allow one to disentangle possible effects from different classes of effective operators, thereby providing new clues about physics at high scales\footnote{Studies of polarization observables or angular distributions may also allow one to distinguish the effects of different effective operators, along the lines suggested in Ref. \cite{Barger:1993wt}. We thank V. Barger for bringing this possibility to our attention.}. \acknowledgments The authors are particularly indebted to J. Erler for making the GAPP code available and for several helpful discussions about fits to electroweak precision observables. We also thank V. Barger, N. Bell, V. Cirigliano, M. Gorshteyn, T. Han, P. Langacker, P. Vogel, and M. Wise for helpful discussions. This work was supported in part under U.S. Department of Energy contracts FG02-05ER41361 and DE-FG03-ER40701 and National Science Foundation award PHY-0555674. \bibliographystyle{h-physrev}
\section{Introduction} The task of 3D reconstruction from monocular video is a longstanding problem in computer vision. The state-of-the-art pipeline for dense monocular 3D reconstruction involves steps including Simultaneous Localization and Mapping (SLAM) or Structure from Motion (SfM) to obtain a semi-dense or sparse 3D reconstruction and camera pose estimates. Subsequently, multi-view stereo (MVS) methods are used to obtain dense 3D reconstructions. Despite the progress in current visual SLAM \cite{LSDSLAM,ORBSLAM,DTAM}, SfM \cite{COLMAP,wu2013towards,agarwal2011building,frahm2010building} and MVS algorithms \cite{schonberger2016pixelwise,furukawa2010towards,furukawa2010accurate}, this reconstruction pipeline still has some inherent limitations: it can only work in static scenes with rigid objects; it requires a sufficient motion baseline for the cameras; and it assumes static lighting conditions and Lambertian surface reflection. Our algorithm addresses all three limitations to enable highly flexible, video-based joint camera motion and dense geometry estimation. Recently, convolutional neural networks (CNNs) \cite{liu2015deep,eigen2015predicting,garg2016unsupervised,zhou2017unsupervised,ummenhofer2017demon} have begun to produce results of comparable quality to traditional geometric computer vision methods for depth estimation. However, most methods can take only a single frame or pair of frames as input, or report no benefit from additional frames. For example, Zhou \textit{et al.} \cite{zhou2017unsupervised} report that adding more frames does not improve the estimation accuracy of their technique, as their CNN can only capture the spatial relationships of the input. When their network receives stacked images as input, the temporal ordering is lost.
\begin{figure}[h] \centering \includegraphics[width=12.5cm]{teasor.png} \caption{Our proposed DenseSLAMNet takes successive video frames as input and outputs a high-quality depth map and camera pose for every input frame.} \end{figure} We have developed a recurrent neural network (RNN) for dense visual SLAM that simultaneously estimates the camera poses and dense depth maps from a video sequence taken by a monocular camera. In an RNN, the input to each layer includes information about the previous prediction, and thus explicitly takes temporal information into account. As far as we know, this is the first learning-based dense SLAM method that can estimate camera motion and dense depth maps in an unconstrained multi-view environment. We have improved upon existing deep single- and two-view stereo depth estimation methods by interleaving Long Short-Term Memory (LSTM) units with convolutional layers to effectively utilize multiple previous frames in each estimated depth map. In this paper, we present DenseSLAMNet, a network that can sequentially estimate depths and camera motion from monocular video. Our primary innovation is to incorporate LSTM units, commonly used in natural language processing, into a depth estimation network. These LSTM units allow the depth and camera motion estimation to become a multi-view process. We evaluate our network on several 3D benchmark datasets (SUN3D, RGBD-SLAM, NYUDepthV2, KITTI, and Make3D) and real patient endoscopic data. We analyze the effectiveness of our method on both deformable and rigid scenes. We summarize our contributions as follows: \begin{itemize} \item We introduce a new RNN architecture for depth estimation from multiple views. \item We show that our multi-view depth estimation outperforms existing single-view methods. \item We demonstrate the successful application of our framework on endoscopic videos, a particularly challenging data modality for depth estimation.
\end{itemize} \section{Related work} SfM and SLAM are the two most prevalent frameworks for sparse 3D reconstruction of rigid geometry from images. SfM is typically used for offline 3D reconstruction from unordered image collections, while visual SLAM aims for a real-time solution using a single camera \cite{MONOSLAM,PTAM}. Sch\"{o}nberger and Frahm \cite{COLMAP} review the state-of-the-art in SfM and propose an improved incremental SfM method. More recent works on sparse SLAM systems include ORB-SLAM \cite{ORBSLAM} and DSO \cite{DSO}. While sparse methods use detected feature points for reconstruction, dense (or semi-dense) methods attempt to reconstruct all pixels from the 2D image. LSD-SLAM \cite{LSDSLAM} is a semi-dense SLAM method that operates directly on image intensities both for tracking and mapping. The DTAM framework \cite{DTAM} creates a dense 3D surface model through direct dense image registration and immediately uses it for camera tracking. Our DenseSLAMNet falls into this category of dense reconstruction methods. Multi-view stereo (MVS) \cite{schonberger2016pixelwise,furukawa2010towards,furukawa2010accurate} is another dense reconstruction approach that generates dense depth maps using camera poses and raw image data. Frequently, MVS is used together with SfM. All of the above sparse and dense reconstruction methods require a static scene, constant illumination, and a sufficient camera motion baseline for accurate reconstruction. In this paper, we present a method that can perform single- and multi-view dense 3D reconstruction for both static and deformable scenes under either constant or varying lighting conditions. Recently, researchers have started to apply CNNs to the 3D reconstruction problem.
Eigen \textit{et al.} \cite{eigen2015predicting} and Liu \textit{et al.} \cite{liu2015deep} propose end-to-end networks, while other work has used CNNs for components of the pipeline, including correspondence matching \cite{yi2016lift,ilg2017flownet}, camera pose estimation \cite{kendall2015posenet}, and stereo \cite{luo2016efficient,kendall2017end}. Common output representations include depth maps, point clouds, and voxels. The advantage of these learning-based methods over the classical SfM-MVS pipeline is that we can leverage semantic supervision during the training process. This can lead to better reconstructions of texture-less or occluded surfaces and very thin structures, both of which are challenging for purely geometric techniques. A particular case of dense geometry estimation is monocular depth estimation. Monocular depth estimation has gained interest because regressing the depth representation is similar to the segmentation problem and thus the structure of CNNs can be easily adapted to the task of depth estimation \cite{FCN}. Eigen \textit{et al.} \cite{eigen2015predicting} proposed an early multi-scale, end-to-end, per-pixel depth estimation framework. Laina \textit{et al.} \cite{laina2016deeper} extended Eigen's work with a deeper residual network. More recently, incorporating elements of view synthesis \cite{zhou2016view} and Spatial Transformer Networks \cite{jaderberg2015spatial}, Godard \textit{et al.} \cite{godard2017unsupervised}, Garg \textit{et al.} \cite{garg2016unsupervised}, and Zhou \textit{et al.} \cite{zhou2017unsupervised} have trained end-to-end monocular depth estimation networks without ground truth. This was done by transforming the depth estimation problem into an image reconstruction problem where the depth is the intermediate product that integrates into the image reconstruction loss.
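The view-synthesis formulation can be made concrete with a toy one-dimensional example (all numbers hypothetical): for a rectified stereo pair, a pixel at depth $z$ shifts by the disparity $d = fB/z$, so a depth estimate is scored by how well the warped source view reconstructs the target view, with depth appearing only as an intermediate variable of the loss:

```python
def warp(src, disparity):
    """Shift a 1-D 'image' by an integer disparity (zeros fill the border)."""
    d = int(round(disparity))
    return [0.0] * d + src[:len(src) - d]

def photometric_loss(target, src, f, baseline, depth):
    """Mean absolute reconstruction error; depth is the intermediate product."""
    warped = warp(src, f * baseline / depth)
    return sum(abs(t - w) for t, w in zip(target, warped)) / len(target)

# Toy setup: focal length 100 px, baseline 0.1 m, true depth 5 m -> d = 2 px.
src    = [0, 0, 1, 2, 3, 0, 0, 0]
target = warp(src, 2)              # the "second view" of the same scene

good = photometric_loss(target, src, 100.0, 0.1, 5.0)   # correct depth
bad  = photometric_loss(target, src, 100.0, 0.1, 2.5)   # wrong depth
print(good, bad)   # the correct depth reconstructs the target exactly
```

Minimizing this reconstruction loss over the network's depth output is what allows such methods to train without ground-truth depth.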
Despite the fact that these unsupervised depth estimation methods eliminate the complication of obtaining ground-truth depth, none outperform the traditional SfM or SLAM methods \cite{COLMAP,agarwal2011building,DTAM,LSDSLAM}. Two-view or multi-view stereo methods have traditionally been the most common techniques for dense depth estimation. For the interested reader, Scharstein and Szeliski \cite{scharstein2002taxonomy} give a comprehensive review of two-view stereo methods. Newcombe \textit{et al.} \cite{DTAM} demonstrate that estimated depth accuracy becomes more precise as the number of views increases, even with small baseline motion. We leverage this result, explicitly learning correspondences between nearby frames, which yields a similar multi-view benefit. Recently, Ummenhofer \textit{et al.} \cite{ummenhofer2017demon} formulated two-view stereo as a learning problem. They showed that by explicitly incorporating dense correspondences estimated from optical flow into the two-view depth estimation, they can force the network to utilize stereo information on top of the single-view priors. There is currently a very limited body of CNN-based multi-view reconstruction methods. Choy \textit{et al.} \cite{choy20163d} use an RNN to reconstruct an object in the form of a 3D occupancy grid from multiple viewpoints. Rezende \textit{et al.} \cite{rezende2016unsupervised} introduced a family of generative models of 3D structures and recover these structures from 2D images via probabilistic inference. They learn the complex 3D-to-2D projection through a generative model in an unsupervised way. However, these methods target single-object reconstruction and fail for deformable objects. Our approach is most closely related to dense visual SLAM in that camera motion and depth maps are estimated from multiple views in a sequential manner.
Tateno \textit{et al.} \cite{tateno2017cnn} proposed CNN-SLAM, which predicts a depth map as an initial guess and subsequently refines it with a direct SLAM scheme relying on small-baseline stereo matching. Our DenseSLAMNet, as shown in Figure \ref{refine}, implicitly performs this small-baseline refinement via the information preserved across time steps by the hidden layers of the LSTM. \begin{figure}[h] \centering \includegraphics[width=11cm]{eg3} \caption{An example of the small-baseline refinement using LSTM layers.} \label{refine} \end{figure} \section{Network architecture} \begin{figure}[h] \centering \includegraphics[width=12cm]{overall} \caption{Overall network architecture of DenseSLAMNet.} \label{overall} \end{figure} Our DenseSLAMNet simultaneously estimates dense depth maps and camera poses from a monocular video sequence under different scenarios (indoor, outdoor, endoscopy). We incorporate recurrent units into a CNN to leverage temporal information in our depth estimation, making it more accurate for continuous video sequences. Unlike DeMoN \cite{ummenhofer2017demon}, which is restricted to two-view input, our DenseSLAMNet takes a single frame at a time as input but can operate over longer image sequences. It can also perform single-view depth estimation when required. Although our network incorporates temporal information through the recurrent units, its training loss is computed on each individual frame independently. This is contrary to Zhou \textit{et al.} \cite{zhou2017unsupervised}, Godard \textit{et al.} \cite{godard2017unsupervised}, and dense SLAM methods \cite{LSDSLAM,ORBSLAM,DTAM}, which rely on the relative geometry between frames. Therefore, our method is not restricted to static scenes or constant scene illumination.
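The frame-at-a-time, stateful operation described above can be sketched as follows. Here `dense_slam_step` is a toy stand-in for the trained network — its internals, names, and shapes are illustrative assumptions, not the actual model — and the point of the sketch is how a hidden state is threaded through an arbitrarily long sequence:

```python
import numpy as np

def dense_slam_step(frame, hidden):
    """One time step: consume frame I_t and the previous hidden state h_{t-1};
    return (depth map z_t, 6-DoF pose [Euler angles, translation], h_t).
    The body is a toy placeholder, not the trained network."""
    new_hidden = 0.5 * hidden + 0.5 * float(frame.mean())  # toy recurrent update
    depth = np.full(frame.shape[:2], 1.0 + new_hidden)     # toy depth map
    pose = np.zeros(6)                                     # [r_t | t_t]
    return depth, pose, new_hidden

# Feed a video sequence one frame at a time; no fixed input length at test time.
frames = [np.random.rand(192, 256, 3) for _ in range(10)]
hidden = 0.0                                               # zero-initialised state
outputs = []
for frame in frames:
    depth, pose, hidden = dense_slam_step(frame, hidden)
    outputs.append((depth, pose))
```

With a single frame in `frames`, the same loop degenerates to single-view depth estimation, which is why the architecture supports both modes.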
Figure \ref{endo} (a) shows an example of our method estimating depth from endoscopic videos, where the scene frequently deforms and the light source moves with the camera, changing the scene illumination throughout the video sequence. The overall architecture of our network is shown in Figure \ref{overall}. It takes a single RGB frame $I_t$ and the hidden states $h_{t-1}$ from the previous time step as input. The hidden states are transmitted internally through the LSTM units. The output of our network is the depth map $z_t$ and the camera pose $\{R_t, T_t\}$ of the current frame. Similar to single-view depth estimation networks, our DenseSLAMNet takes only a single frame at a time as input. Therefore, our network can perform both single-frame and multi-frame depth estimation. This makes our DenseSLAMNet more flexible than both CNN-based single-view depth estimation methods and visual SLAM methods. \begin{figure}[h] \centering \includegraphics[width=12.5cm]{detail.png} \caption{(Best viewed in color) Our network architecture at a single time step. We use the DispNet architecture. The width and height of each rectangular block indicate the size and number of feature maps at that layer. Each increase or decrease in size represents a factor of 2. The first convolutional layer has 32 feature maps. The kernel size for all convolution layers is 3, except for the first two convolution layers, which are 7 and 5, respectively.} \label{detail} \end{figure} Figure \ref{detail} shows our network at a single time step in more detail. Different colors encode the different units: yellow is a convolutional layer, red is an LSTM block, dark gray is a deconvolutional layer, and blue is an input/output layer. Our network uses a U-shaped architecture similar to DispNet \cite{mayer2016large}. The height of each rectangle in Figure \ref{detail} represents the size of its feature maps, where each smaller feature map is half the size of the preceding feature map.
The down-sampling from one layer to the next is done by a stride-2 convolution instead of max-pooling. The lines connecting corresponding layers in the encoder and decoder are skip-connections. We denote the size of our temporal window by $N$. In all experiments, we use $N=10$ as the length of our temporal sequence. Hence, the network in Figure \ref{detail} is replicated 10 times as shown in Figure \ref{overall}, with the temporal information being passed between the three LSTM blocks at each time step. \section{Training procedure} For ease of training and data preparation, we use a temporal window size of $N=10$; ideally, as in natural language processing, the network would accept sequences of arbitrary length for training. During training, we feed frames to the network and compute losses from all frames in a temporal window. However, there is no input length constraint at test time. Even though the network can only store information from up to ten frames, longer sequences can still yield better results because each prior frame has already been boosted by its previous ten frames. Figure \ref{training} shows an example of our training data. \begin{figure}[h] \centering \includegraphics[width=12.5cm]{training.png} \caption{Example training data with a temporal window size of ten.} \label{training} \end{figure} \subsection{Loss function} Our loss function is a composition of a point-wise depth loss, a camera pose loss, and a scale-invariant gradient loss. Similar to DeMoN \cite{ummenhofer2017demon}, we use disparity, the reciprocal of depth, $\xi=\frac{1}{z}$, as our direct estimation target because it can represent points at infinity and accounts for the growing localization uncertainty of points at increasing distance. For camera pose, we use the Euler angles $R$ and the translation vector $T$, giving 6 parameters in total in the pose parameterization.
Our point-wise depth loss is formulated as follows: \begin{equation} L_{depth} = \sum_{t}^N\sum_{i,j}|\xi_t(i,j)-\hat{\xi_t}(i,j)| \end{equation} where $i,j$ is the pixel location in a depth map and $t$ represents the time step. In this work we use a temporal window of $N=10$, so $t \in [0,9]$. For depth we use an $L_1$ loss due to its robustness to noise. The overall depth loss integrates over all pixels as well as all frames in a temporal window. To ensure the smoothness and sharpness of the estimated depth, we adopt a loss on a scale-normalized, gradient-like measurement, as introduced by Ummenhofer \textit{et al.} \cite{ummenhofer2017demon}. This loss is defined as \begin{equation} L_{grad} = \sum_t\sum_{h\in\{1,2,4,8,16\}}\sum_{i,j}||g_{h,t}(i,j)-\hat{g}_{h,t}(i,j)||_2 \end{equation} where $h$ is a spatial step size for computing $g_{h,t}$ at different scales. The vector $g_{h,t}$ is a scale-normalized, discretized measurement of the local changes of $\xi_t$, defined as \begin{equation} g_{h,t}(i,j) = \left(\frac{\xi_t(i+h,j)-\xi_t(i,j)}{|\xi_t(i+h,j)|+|\xi_t(i,j)|}, \frac{\xi_t(i,j+h)-\xi_t(i,j)}{|\xi_t(i,j+h)|+|\xi_t(i,j)|}\right)^T \end{equation} $L_{grad}$ in Eq. (2) emphasizes depth discontinuities, such as occlusion boundaries and sharp edges, as well as smoothness in homogeneous regions. This property encourages the estimated depth map to preserve more detail and reduce noise. Therefore, we put a high weight on this component of the loss. For the camera poses, the losses are weighted separately for rotation and translation, where $r_t$ and $t_t$ denote the predicted Euler angles and translation vector at time step $t$: \begin{equation} \begin{aligned} L_{rot} &= \sum_t||r_t-\hat{r_t}||_2\\ L_{trans} &= \sum_t||t_t-\hat{t_t}||_2 \end{aligned} \end{equation} The overall loss is a weighted sum of $L_{depth}$, $L_{grad}$, $L_{rot}$, and $L_{trans}$, where the weights are chosen empirically.
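As a concrete reading of the equations above, the loss terms can be transcribed in NumPy as below (a sketch, not the authors' implementation: the small `eps` guard against division by zero is our addition, and the default weights in `total_loss` are the empirically chosen values reported in the training details):

```python
import numpy as np

def depth_loss(xi, xi_gt):
    """Point-wise L1 loss on disparity (Eq. 1), for a single time step."""
    return np.abs(xi - xi_gt).sum()

def scale_inv_gradient(xi, h, eps=1e-8):
    """Scale-normalized discrete gradient g_h of a disparity map xi (Eq. 3).
    The valid region shrinks by h pixels in each direction; eps is our guard."""
    gx = (xi[h:, :-h] - xi[:-h, :-h]) / (np.abs(xi[h:, :-h]) + np.abs(xi[:-h, :-h]) + eps)
    gy = (xi[:-h, h:] - xi[:-h, :-h]) / (np.abs(xi[:-h, h:]) + np.abs(xi[:-h, :-h]) + eps)
    return np.stack([gx, gy])

def gradient_loss(xi, xi_gt, steps=(1, 2, 4, 8, 16)):
    """Scale-invariant gradient loss (Eq. 2): L2 norm of the difference of
    scale-normalized gradients, summed over pixels and spatial step sizes."""
    loss = 0.0
    for h in steps:
        diff = scale_inv_gradient(xi, h) - scale_inv_gradient(xi_gt, h)
        loss += np.sqrt((diff ** 2).sum(axis=0)).sum()
    return loss

def total_loss(l_depth, l_grad, l_rot, l_trans,
               w=(500.0, 1000.0, 500.0, 100.0)):
    """Weighted sum of the four loss terms (weights from the training details)."""
    return w[0] * l_depth + w[1] * l_grad + w[2] * l_rot + w[3] * l_trans
```

Because both numerator and denominator of $g_{h,t}$ scale with $\xi$, `gradient_loss` is (up to `eps`) invariant to a global rescaling of the disparity — the property that lets it focus on discontinuities rather than absolute magnitude.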
\textbf{Training details.} We set the weights for the depth loss, scale-invariant gradient loss, camera rotation loss, and camera translation loss to 500, 1000, 500, and 100, respectively. We use the Adam \cite{kingma2014adam} optimizer with $\beta_1=0.9$ and $\beta_2=0.999$. The initial learning rate is 0.0002 and decays exponentially every 10,000 steps by a factor of 0.9. For indoor scenes and endoscopic data, we resize the images to 192$\times$256; for outdoor scenes, to 128$\times$416. The image sizes are chosen both for computational efficiency and for consistency with existing methods. We train and evaluate our network on indoor and outdoor scenes separately. Different datasets have different camera intrinsic parameters, so we explicitly crop and resize images to ensure uniform intrinsic parameters. This step ensures that the non-linear mapping between color and depth is consistent across all training datasets. \section{Experiments} We evaluate our method on multiple datasets and compare against the state of the art in learning-based depth estimation. \subsection{Training datasets} \textbf{Indoor}. We use two publicly available datasets for indoor scenes. The first is \textbf{SUN3D} \cite{xiao2013sun3d}, a large dataset with ground-truth depth maps and camera poses. We selected 192 of its 354 scenes as our training data, then randomly selected 30 of the remaining 162 scenes for validation and testing. The second is \textbf{RGBD-SLAM} \cite{sturm12iros}, a smaller dataset but with higher camera pose accuracy. RGBD-SLAM provides a training and validation split, which we use directly. In addition, we use the \textbf{NYUDV2} \cite{Silberman:ECCV12} dataset to evaluate generalization. \textbf{Outdoor}. We use the KITTI dataset \cite{Geiger2013IJRR} for outdoor scenes.
To perform a consistent comparison with existing methods, we used the \textbf{Eigen Split} \cite{eigen2015predicting} to train and evaluate our network. The \textbf{Make3D} \cite{saxena2009make3d} dataset is used to evaluate generalization. \textbf{Endoscopy} (challenge dataset). We also explore the 3D reconstruction of inner body surfaces from endoscopic videos. For qualitative evaluation and training, we generated an endoscopic dataset containing 65,235 frames of video from 16 patients. We generate depth maps and camera poses using the SFMS method \cite{wang2017improving}. We train our model on 14 patients and test on 2 patients. \subsection{Evaluation metrics} At test time, our DenseSLAMNet runs in real time, at approximately 40 frames per second on a machine with a GeForce GTX1080 GPU. We evaluate DenseSLAMNet using five error metrics: \begin{equation} \text{sc-inv}(z,\hat{z}) = \sqrt{\frac{1}{n}\sum_id^2_i-\frac{1}{n^2}\Big(\sum_id_i\Big)^2} \end{equation} where $d_i=\log_{10}(z_i)-\log_{10}(\hat{z}_i)$. $\text{Sc-inv}$ is a scale-invariant error \cite{eigen2015predicting} that evaluates depth regardless of scale. \begin{equation} \text{Abs-rel}(z,\hat{z}) = \frac{1}{n}\sum_i\frac{|z_i-\hat{z}_i|}{\hat{z}_i}, \quad \text{Abs-inv}(z,\hat{z}) = \frac{1}{n}\sum_i\Big|\frac{1}{z_i}-\frac{1}{\hat{z}_i}\Big| \end{equation} $\text{Abs-rel}$ measures the relative difference between output predictions and the ground-truth depth; it emphasizes close objects in the ground truth. $\text{Abs-inv}$ also measures this relative difference, but emphasizes closer objects even further. \begin{equation} \text{RMSE}(z,\hat{z}) = \sqrt{\frac{1}{n}\sum_i(z_i-\hat{z}_i)^2}, \quad \text{RMSE-log}(z,\hat{z}) = \sqrt{\frac{1}{n}\sum_id_i^2} \end{equation} $\text{RMSE}$ and $\text{RMSE-log}$ are two of the most commonly used error measurements \cite{eigen2015predicting,zhou2017unsupervised,godard2017unsupervised,garg2016unsupervised,kuznietsov2017semi}, the first measuring absolute depth error and the second absolute log-depth error.
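These five metrics transcribe directly into NumPy (a sketch; the clamp to zero inside `sc_inv` guards against floating-point rounding of the variance and is our addition):

```python
import numpy as np

def sc_inv(z, z_hat):
    """Scale-invariant log-depth error."""
    d = np.log10(z) - np.log10(z_hat)
    n = d.size
    var = (d ** 2).sum() / n - (d.sum() / n) ** 2
    return np.sqrt(max(var, 0.0))  # clamp: rounding can make var slightly negative

def abs_rel(z, z_hat):
    """Relative depth error; emphasizes close objects in the ground truth."""
    return np.mean(np.abs(z - z_hat) / z_hat)

def abs_inv(z, z_hat):
    """Relative inverse-depth error; emphasizes close objects even further."""
    return np.mean(np.abs(1.0 / z - 1.0 / z_hat))

def rmse(z, z_hat):
    """Root-mean-square error on absolute depth."""
    return np.sqrt(np.mean((z - z_hat) ** 2))

def rmse_log(z, z_hat):
    """Root-mean-square error on log-depth."""
    d = np.log10(z) - np.log10(z_hat)
    return np.sqrt(np.mean(d ** 2))
```

For example, `sc_inv(z, 2.0 * z)` is numerically zero: a global scale change shifts every $d_i$ by the same constant, which the variance form removes — the property that makes the metric suitable for methods estimating depth only up to scale.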
\subsection{Comparison with Existing Methods}\label{sec:compare} \begin{figure}[h] \centering \includegraphics[width=12.5cm]{compare1.png} \caption{Visual comparison of our results vs. DeMoN's \cite{ummenhofer2017demon} on the SUN3D \cite{xiao2013sun3d} dataset. As can be seen in rows (a) and (d), our DenseSLAMNet performs better at large distances.} \label{compare} \end{figure} We compared our DenseSLAMNet to state-of-the-art CNN-based single- and two-view depth estimation methods: Eigen \textit{et al.} \cite{eigen2015predicting}, Liu \textit{et al.} \cite{liu2015deep}, and DeMoN \cite{ummenhofer2017demon}. Eigen \textit{et al.} and Liu \textit{et al.} are single-frame depth estimation methods, and DeMoN is a two-view method. We take their publicly available pre-trained models and test on our prepared testing data. The methods of Eigen \textit{et al.} and Liu \textit{et al.} are trained on the NYUDV2 dataset; DeMoN is trained on several indoor, outdoor, and synthetic datasets, including SUN3D and RGBD-SLAM. In order to evaluate the full capability of our network, we feed DenseSLAMNet video sequences of size $N=10$ during testing and report results on the last frame. When evaluating against DeMoN, we report the result for the pair of frames within each temporal window that gives the best score for their method. Table \ref{table:1} shows that our method outperforms the existing state of the art across every quantitative metric for indoor scenes. Figure \ref{compare} shows a visual comparison of our DenseSLAMNet with the other methods. It can be seen that our DenseSLAMNet produces sharper results than DeMoN, the second-best-performing method. Rows (a) and (d) in Figure \ref{compare} also demonstrate that we perform significantly better at larger distances. Figure \ref{range} shows a detailed comparison between DeMoN and our method at different depth ranges.
We measure the average $sc$-$inv$ error at different depth ranges, e.g., 0 to 1 meter, 1 to 2 meters, and so on, across all testing images. As can be seen, our method consistently performs better at all ranges, and especially at large distances. \begin{figure}[h] \centering \includegraphics[width=10cm]{range_plot.png} \caption{Comparison with DeMoN \cite{ummenhofer2017demon} at different ranges. We outperform DeMoN in all ranges, especially at large distances.} \label{range} \end{figure} \begin{table}[h!] \centering \begin{tabular}{ |p{3cm}||p{2cm}|p{2cm}|p{2cm}| } \hline Methods & Sc-inv & Abs-inv & Abs-rel \\ \hline Eigen \textit{et al.} \cite{eigen2015predicting} & 0.190 & 0.068 & 0.177\\ Liu \textit{et al.} \cite{liu2015deep} & 0.215 & 0.070 & 0.212\\ DeMoN \cite{ummenhofer2017demon} & 0.128 & 0.047 & 0.109\\ \textbf{DenseSLAMNet} & \textbf{0.112} & \textbf{0.039} & \textbf{0.091}\\ \hline \end{tabular} \caption{Quantitative comparison of our DenseSLAMNet with the state-of-the-art CNN-based methods on the SUN3D \cite{xiao2013sun3d} dataset. Lower numbers are better.} \label{table:1} \end{table} \begin{table}[h!] \centering \begin{tabular}{ |p{3cm}||p{1.2cm}||p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}| } \hline Methods & Dataset & Abs-rel & Sq-rel & RMSE& RMSE-log\\ \hline Eigen \textit{et al.} \cite{eigen2015predicting} & K & 0.203 & 1.548 & 6.307 & 0.282 \\ Liu \textit{et al.}
\cite{liu2015deep} & K & 0.202 & 1.614 & 6.523 & 0.275 \\ Kuznietsov \textit{et al.} \cite{kuznietsov2017semi} & K & 0.113 & 0.741 & \textbf{4.621} & 0.189\\ Zhou \textit{et al.} \cite{zhou2017unsupervised} & CS+K & 0.198 & 1.836 & 6.565 & 0.275 \\ Godard \textit{et al.} \cite{godard2017unsupervised} & CS+K & \textbf{0.097} & 0.896 & 5.093 & \textbf{0.176} \\ \textbf{DenseSLAMNet} & K & 0.129 & \textbf{0.704}& 4.743 & 0.199 \\ \hline \hline \textbf{DenseSLAMNet} & K & \textbf{0.058} & \textbf{0.205}& \textbf{2.538}& \textbf{0.087} \\ \hline \end{tabular} \caption{Quantitative comparison of our DenseSLAMNet with other state-of-the-art CNN-based methods on the KITTI \cite{Geiger2013IJRR} dataset using the Eigen Split \cite{eigen2015predicting}. The last row is our performance on continuous sequences. Lower numbers are better. K and CS stand for KITTI and Cityscapes \cite{Cordts2016Cityscapes}, respectively. All results are capped at 80 m depth.} \label{table:2} \end{table} \begin{figure}[h] \centering \includegraphics[width=12.5cm]{compare2.png} \caption{Visual comparison between the results of Eigen \textit{et al.} \cite{eigen2015predicting} and ours on the KITTI dataset \cite{Geiger2013IJRR}. Ground-truth depth is interpolated for visualization purposes.} \label{KITTI} \end{figure} Table \ref{table:2} shows a quantitative comparison on outdoor scenes. We compare to Eigen \textit{et al.} \cite{eigen2015predicting}, Garg \textit{et al.} \cite{garg2016unsupervised}, Godard \textit{et al.} \cite{godard2017unsupervised}, Liu \textit{et al.} \cite{liu2015deep}, Zhou \textit{et al.} \cite{zhou2017unsupervised}, and Kuznietsov \textit{et al.} \cite{kuznietsov2017semi}. To perform a consistent comparison to state-of-the-art methods, we use the 697 test images from the Eigen Split \cite{eigen2015predicting} for evaluation. However, these 697 images are randomly selected from 28 scenes and do not form a continuous sequence.
Therefore, they do not fully demonstrate the capability of our network. Despite this fact, Table \ref{table:2} shows that our method performs similarly to the state of the art on this test set. In the last row of the table, we show an evaluation of our method on continuous sequences randomly selected from the KITTI test dataset. It can be seen that our network gets a significant performance boost when dealing with continuous sequences, which we explore in more depth in Section \ref{sec:ablation}. We demonstrate DenseSLAMNet's ability to handle non-static scenes, moving light sources, and non-Lambertian surface reflections by training and testing it on an endoscopic dataset. Figure \ref{endo} shows a visual result of our DenseSLAMNet on this data modality. To perform a quantitative evaluation, we 3D-printed a textured phantom throat model using geometry extracted from CT scans and performed an endoscopy procedure on it to capture video. DenseSLAMNet obtains 0.271 in $sc$-$inv$, 0.216 in $abs$-$rel$, and 0.010 in $abs$-$inv$. Figure \ref{endo} (b) shows a visual result on the phantom dataset. \begin{figure}[h] \centering \includegraphics[width=12.5cm]{endo.png} \caption{Visual results on real patient and phantom endoscopic data. The vocal cord is visibly closing in the image sequence in (a). In both datasets, one can see the illumination changes due to the moving light source.} \label{endo} \end{figure} \subsection{Pose Estimation} We provide a qualitative evaluation of the pose estimation task by plotting the predicted poses together with the ground-truth camera poses. As can be seen from Figure \ref{cam}, our DenseSLAMNet can handle smooth and small camera motions very well, but fails on large sudden jumps and random camera motions. This is expected because, in the training data, the camera motions are small and smooth, causing the network to adapt to this specific type of camera motion.
To handle random camera motion and large-magnitude motion between frames, we suspect one could weight different types of camera motion differently, or explicitly generate a balanced set of camera motion sequences for training. We see this as an opportunity for future work. \begin{figure}[h] \centering \includegraphics[width=12.5cm]{cam_pose.png} \caption{Camera pose estimation evaluation. Red cameras represent the ground-truth camera positions and blue cameras represent our estimated camera positions. Here we only plot the point cloud of the last camera.} \label{cam} \end{figure} \subsection{Generalization to new data} \begin{table}[h!] \centering \begin{tabular}{ |p{3cm}||p{2cm}|p{2cm}|p{2cm}| } \hline Methods & Sc-inv & Abs-inv & Abs-rel\\ \hline DeMoN \cite{ummenhofer2017demon} & 0.203 & 0.079 & 0.201\\ \textbf{DenseSLAMNet} & \textbf{0.181} & \textbf{0.071} & \textbf{0.171}\\ \hline \end{tabular} \caption{Generalization comparison between the depth estimation of DeMoN and our DenseSLAMNet on the NYUDV2 dataset \cite{Silberman:ECCV12}.} \label{table:3} \end{table} We evaluate the generalization ability of our DenseSLAMNet on both indoor and outdoor scenes. For indoor scenes, we use the NYUDV2 \cite{Silberman:ECCV12} dataset. NYUDV2 does not provide ground-truth camera poses, so we cannot use it for training. Table \ref{table:3} shows the quantitative comparison results, and Figure \ref{compare} shows the visual results. Again, our method outperforms DeMoN across every quantitative metric. \begin{table}[h!] \centering \begin{tabular}{ |p{3.5cm}||p{2cm}|p{2cm}|p{2cm}|p{2cm}|} \hline Methods & Sq-rel & Abs-rel & RMSE& $log_{10}$ \\ \hline Godard \textit{et al.} \cite{godard2017unsupervised} & 11.990 & 0.535 & 11.513 & 0.156\\ Zhou \textit{et al.} \cite{zhou2017unsupervised} & 5.321 & 0.383 & 10.47 & 0.478 \\ Kuznietsov \textit{et al.}
\cite{kuznietsov2017semi} & - & 0.421& 8.237& 0.190\\ \textbf{DenseSLAMNet} & \textbf{2.404} & \textbf{0.275}& \textbf{6.476}& \textbf{0.102}\\ \hline \end{tabular} \caption{Generalization comparison on the Make3D dataset \cite{saxena2009make3d}. All results are capped at 70 m depth.} \label{table:4} \end{table} \begin{figure}[h] \centering \includegraphics[width=12.5cm]{general.png} \caption{Our prediction on an unseen outdoor dataset (Make3D).} \label{make3d} \end{figure} For outdoor scenes, we evaluate our DenseSLAMNet on the unseen Make3D dataset \cite{saxena2009make3d}. The Make3D dataset is very different from KITTI (used for training) in that its image resolution is 2272$\times$1702, whereas KITTI's is 375$\times$1280. Table \ref{table:4} shows the quantitative comparison results. Our method generalizes best among the state-of-the-art methods that are not trained on the Make3D dataset. From Figure \ref{make3d} we can see that our predicted depth maps preserve fine details like trees, cars, and pillars. \subsection{Ablation studies}\label{sec:ablation} To justify the effectiveness of the different components of our network architecture, we performed a series of ablation studies using the SUN3D test dataset. \begin{table}[h!] \centering \begin{tabular}{ |p{3cm}||p{2cm}|p{2cm}|p{2cm} | } \hline Methods & Sc-inv & Abs-inv & Abs-rel \\ \hline CNN-SINGLE & 0.131 & 0.049 & 0.103 \\ CNN-STACK & 0.144 & 0.060 & 0.124 \\ \textbf{DenseSLAMNet} & \textbf{0.112} & \textbf{0.039}& \textbf{0.091}\\ \hline \end{tabular} \caption{The use of LSTM in DenseSLAMNet gives the best depth estimation accuracy. Simply stacking up frames for training actually leads to worse performance than using just a single frame.} \label{table:5} \end{table} We compared three types of networks. We trained a CNN-SINGLE network that uses the network architecture in Figure \ref{detail} but without LSTM (RNN) units.
Then we trained another network, CNN-STACK, that uses the same network architecture as CNN-SINGLE but, instead of taking a single image as input, takes a stack of ten images. Table \ref{table:5} shows the quantitative results of our analysis. These results demonstrate that the LSTM units make an important contribution to preserving temporal information across a video sequence, which leads to better depth maps. \section{Conclusions} In this paper, we presented a real-time, RNN-based, multi-view dense SLAM method for depth and camera pose estimation from single or multiple frames. Our method effectively utilizes the temporal relationships between neighboring frames through LSTM units, which we show is more effective than simply stacking multiple frames together as input. Our DenseSLAMNet outperformed nearly all of the state-of-the-art CNN-based, single-frame depth estimation methods on both indoor and outdoor scenes and showed better generalization ability. It also predicted more accurate depth at large distances compared to the existing state of the art. In addition, we demonstrated its capability to estimate depth from especially difficult data: endoscopic videos with dynamic scene geometry and illumination. In the future, we would like to further investigate the camera pose estimation component to make our network robust to highly varied camera motion, as well as explore the possibility of training on variable-length temporal sequences. \clearpage \bibliographystyle{splncs}
\section{Field-directed polymerisation: growth mechanism} Field-directed polymerisation (FDP) allows for the growth of polymeric fibres with a dendritic morphology. The synapse formation is carried out with the area between two electrodes immersed in a solution of acetonitrile containing 50 mM 3,4-ethylenedioxythiophene (EDOT) and 1 mM tetrabutylammonium hexafluorophosphate (TBAPF). This recipe was suggested in ref. \cite{koizumi2016electropolymerization}, where a reducing agent is also added, which we found not strictly necessary. In addition, we employ much lower voltages (2 to 5 V). Examples of voltage-triggered polymerisation are reported in the literature, but rarely with an alternating signal \cite{eickenscheidt2019pulsed,gerasimov2019evolvable}, and never, to our knowledge, have the material properties, branching degree, and directed growth been used to build evolvable electronics or neuromorphic devices. The polymerisation is triggered by an AC bias applied across two arbitrary electrodes in the solution (see Fig. \ref{setup}a). The AC amplitude applied must be sufficient to sustain the radicalisation of the monomer (EDOT), which is stabilised by the dopant (PF$_6^-$). We suggest that the reaction is a nucleation-and-precipitation process: during the positive part of the periodic waveform, radicals form at the interface with the electrode, where they drift away under the field or are neutralised by a dopant anion PF$_6^-$. In the latter case, oligomerisation quickly occurs\cite{qiu1992electrochemically} and long, insoluble chains deposit at the interface (see Video S1). Fig. S3 shows a microscopic image of a fibre, with visible nucleation clusters. The time available for the reaction to occur is inversely proportional to the frequency of the signal, resulting in thin branches at high frequencies (Fig. \ref{setup}c).
Also important is the waveform: to obtain dendritic shapes, the waveform must include both positive and negative polarities, to attract and repel the ions sequentially. Interestingly, we can grow synapses using an amplified action potential waveform, as shown in Fig. S4. These promising results might inspire research and development into new materials that can oxidatively polymerise at reduced voltages to interact with neural signals, grow in response to neuronal spikes, and achieve neural interfacing.\\ The reaction always happens at the fibre extremity where, occasionally, bifurcation occurs. This stems from the higher local field (tip effect), which in turn accumulates more dopants. On the contrary, the field between two already-grown fibres is negligible (Faraday cage) and, even though monomers oxidise in this region, they drift away because no net dopant excess concentration exists. \section{Long-term and short-term plasticity} The dynamic plasticity of biological synaptic networks occurs over many orders of magnitude in time, an essential property for executing complex tasks such as perception, computation, filtering, or long-term memory storage. The uniqueness of FDP allows for the formation of organic semiconductor-based artificial neural networks that can mimic dynamic plasticity over more than 9 orders of magnitude in time, from spike-timing-dependent plasticity in the $\mu$s regime, through synaptic reinforcement at the second scale, to long-term depression occurring at the scale of weeks (Fig. S5). In Fig. \ref{fig:regrow}a, learning-induced synaptogenesis is mimicked: the conductance of the connection is enhanced by stimulation via action potentials. In the initial phase, the fibres have not yet bridged the gap between the electrodes and the measured current comes from the electrolytic solution. A sharp increase indicates the first connection.
Koizumi \textit{et al.} reported an abrupt termination of the growth upon contact\cite{koizumi2016electropolymerization}; however, we observe prolonged branch formation and use this effect to further increase the synapse conductance and emulate synaptic reinforcement, a key biological mechanism leading to memory encoding\cite{bailey2015structural} and long-term memory retention. The final saturation is reached when transport in solution becomes negligible compared to the current carried by the connecting fibres. The resulting S-shaped curve has potential applications in artificial neural networks, where self-limiting learning (saturation after synaptic reinforcement) is key to avoiding overtraining\cite{tetko1995neural}, and the initial plateau ensures the presence of an activation threshold. The fine-tuning of the electrical conductance through a structural/physical change of the fibres is further demonstrated in Fig. \ref{fig:regrow}d, which reports the OECT characterization of a dendritic connection after three sequential updates of its synaptic weight.\\ As we learn new skills, we forget old ones: for the brain, forgetting is as essential as learning. Hence, it is mandatory to implement a controlled forgetting/depression mechanism in brain-inspired hardware. A major process underlying the gradual erasure of long-term memories is synaptic decay\cite{hardt2013decay,sadeh2014we}. Here, we tune the decay time of a synapse by controlling the degree of synaptic reinforcement. In Fig. \ref{fig:regrow}b we grow and characterise the decay of two synapses with different degrees of synaptic reinforcement. A highly reinforced synapse with a dense fibre network ensures fault tolerance, increased conductance, and longer retention. In contrast, a weak connection undergoes a conductivity drop of 50\% in 48 hours.
We attribute the decay to the mechanical stress following the swelling of the PEDOT\cite{biessmann2018monitoring}: this causes the formation of large dopant domains with reduced doping efficiency. On shorter timescales (milliseconds to minutes\cite{zucker2002short}), short-term plasticity assists the brain in computational tasks and ensures quick adaptation. To achieve short-term depression at the millisecond scale, we employ a third electrode as a pre-synaptic neuron, with the source acting as the post-synaptic neuron. A pulsed signal is applied, causing cations in the solution to accumulate within the synapse (Fig. S6) and dedope the semiconducting polymer, thus modulating the source current. This effect, known as short-term depression (STD), is stronger if the pulses are applied in rapid sequence (analogous to paired-pulse depression\cite{van2017non}), and weak if the ions have enough time to redistribute in the solution, allowing the recovery of the original channel conductivity. Here, we vary the channel properties and morphology in order to control the time constant that governs STD. In Fig. 1b, STD is quantitatively reported as a function of the polymerisation frequency. For higher frequencies, the channel is easily depressed (its conductance is halved for pulses separated by 200 ms). Lower frequencies, on the other hand, form networks that must be subjected to more frequent pulses to undergo depression (for pulses separated by 200 ms, the conductance decreases by only 10\%).
Such modulation of the post-synaptic neuron's conductance based on the temporal pattern of the pre-synaptic firing is analogous to biological nerves, and we highlight the impact of achieving control over the neuromorphic timescales via material parameters, which can be helpful in growing synapses and networks with specific time-dependent learning rules.\\ As an essential element of short-term plasticity, we analyse the potentiation and depression of a synapse based on the relative spiking interval between the pre- and post-synaptic neurons. Such a mechanism, referred to in neuroscience as spike-timing-dependent plasticity (STDP), is at the heart of the potentiation/depression processes of a synapse. In simple words, if a pre-synaptic spike is likely to cause the post-synaptic neuron to fire, that synapse is reinforced; on the contrary, the synapse remains unaltered or weakens when such causality does not apply\cite{dan2004spike}. In OECTs, this mechanism is generally emulated by applying an action potential to the pre-synaptic neuron (gate), and another of the same amplitude, but displaced in time, to the post-synaptic neuron (drain) \cite{alibart2012memristive,fu2018flexible}. The pre-synaptic signal modifies the ion distribution within the channel and increases or decreases the conductivity depending on the temporal offset. Fig. \ref{fig:regrow}c reports this modulation of the post-synaptic signal, analogous to biological synapses. \begin{figure} \includegraphics[width=\linewidth]{images/pavlov2} \caption{\textbf{Classical conditioning: the example of Pavlov's dog.} \textbf{a}, The electrode $food$ is connected electrically to the electrode $salivation$, while a third electrode $bell$ is initially uncoupled. \textbf{b}, If the action potentials are applied to $bell$ alone, the conditioning is unsuccessful.
\textbf{c}, Only when the signal is applied to both $bell$ and $food$ with the right temporal pattern does the learning mechanism begin, which \textbf{d} results in the electrical contact between $bell$ and $salivation$. \textbf{e}, Key for a realistic emulation of the biological process is the period between 6 and 15 s: from 6 to 9 s, the synapse does not grow when the pulse is applied only to $bell$; from 9 to 12 s, action potentials are applied both to $bell$ and $food$ with a time offset of 10 ms, resulting again in unsuccessful training. The dog actually couples the $bell$ to the $salivation$ only when it is exposed to $bell$ and $food$ simultaneously, as happens between 12 and 15 s, in which the new synapse is generated.} \label{pavlov} \end{figure} \section{Pavlovian conditioning} More complex neuromorphic features are attained with multiple electrodes. In this way, interconnectivity is implemented and the electric field in the solution is shaped to better direct the growth: this is achieved by using multiple electrodes at a potential lower than V$_{ox}$. The growth between two selected electrodes is thereby allowed or prevented by the potential applied to neighbouring electrodes (see scheme in Fig. S8). Furthermore, the need for such a threshold potential ensures the presence of an activation function to begin the reaction; similarly to biological nerves, where the weighted sum of the inputs is passed through an activation function, here the summed effect of multiple electrodes can trigger or inhibit the learning/growth process at one electrode. We demonstrate the potential of such a feature by achieving classical conditioning: in 1903, Pavlov showed that the brain can pair a potent, hardwired, biological stimulus to an initially neutral stimulus in a learning process that leads to the same physiological response when the subject is exposed to either stimulus\cite{pavlov1926conditioned}. 
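The gating logic just described, i.e. growth only when the summed field of coincidently firing electrodes crosses the oxidation threshold, can be sketched in a minimal model (ours; the threshold, amplitudes and coincidence window below are illustrative assumptions, not measured values):

```python
V_OX = 2.5  # oxidation threshold of the monomer (V); illustrative value

def synapse_grows(amplitudes, offsets_ms, window_ms=1.0):
    """A new connection forms only if the firing electrodes are
    coincident within the temporal window (so their fields sum in the
    electrolyte) AND the summed field exceeds the oxidation threshold."""
    coincident = max(offsets_ms) - min(offsets_ms) <= window_ms
    return coincident and sum(amplitudes) >= V_OX

# bell alone (2 V): below threshold -> no conditioning
assert not synapse_grows([2.0], [0.0])
# bell + food but 10 ms apart: fields do not sum -> no conditioning
assert not synapse_grows([2.0, 2.0], [0.0, 10.0])
# simultaneous firing: fields sum, threshold crossed -> synapse grows
assert synapse_grows([2.0, 2.0], [0.0, 0.0])
```

The three assertions mirror the three phases of the training in Fig. 4e: neutral stimulus alone, offset firing, and simultaneous firing.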
To begin, an already existing connection conducts current between the "$food$" node and the "$salivation$" node, while the "$bell$" node is initially decoupled from the system (Fig. \ref{pavlov}a): before conditioning, the dog instinctively salivates at the smell of food but does not respond to the ring of the bell, and a signal to "$bell$" does not produce any output at "$salivation$" (0 to 3 s in Fig. \ref{pavlov}e). To pair the two stimuli, a connection between "$bell$" and "$salivation$" must be produced. The bell ring is here represented as an action potential (2 V, 50 Hz); this signal alone does not suffice for polymerisation as the voltage is below V$_{ox}$ (Fig. \ref{pavlov}b). We thus have an initial independence of the two inputs, i.e. no conditioning occurs if the dog is exposed to the bell ring without food. The conditioning is also unsuccessful when both "$food$" and "$bell$" fire, but with a time offset of 10 ms. These two scenarios in which no potentiation occurs are an essential feature of Pavlovian conditioning because they ensure that a precise temporal synchronization is necessary for the synapse to undergo potentiation. Conditioning is indeed successful when the firing is simultaneous, i.e., when the bell is rung while the food is served. This happens in panel c, where action potentials are applied between $bell$ and $salivation$, and between $food$ and $salivation$, causing the growth from $bell$ to $salivation$ (12-15 s). As a consequence of the training, a signal applied to either $bell$ or $food$ causes an output at $salivation$ (Fig. \ref{pavlov}d,e, 15 to 18 s). The time-dependence of the conditioning is consistent with the Hebbian learning rule: "cells that fire together wire together". The method described can be extended to a larger number of electrodes without requiring the complex circuitry needed for previously described systems\cite{ziegler2012electronic,van2017non}. Notably, the lateral connection of synapses, such as the one in Fig. 
\ref{pavlov}d, emulates another well-known biological process, rarely implemented in devices, known as heterosynaptic plasticity\cite{humeau2003presynaptic}. \begin{figure} \centering \includegraphics[width=\linewidth]{images/panel44} \caption{\textbf{Pattern recognition: a,} training: a signal corresponding to the pixel value is applied, causing the growth of synapses where the signal is applied. \textbf{b,} readout: the intensity of the signal correlates with the similarity to the trained input.} \label{fig:numberrecognition} \end{figure} \section{Pattern recognition} We show the potential of FDP for in-memory neuromorphic computing by demonstrating a new device concept to train and recognise patterns. Here, we use a $3\times 5$ pixel image as an input (the device can be extended to an arbitrary number of inputs/outputs). In the training phase, an action potential signal (50 Hz) is applied for 3 seconds to 15 input electrodes. The amplitude depends on the pixel value: 0 V if the pixel is void, and 3 V if the pixel is filled, as sketched in Fig. \ref{fig:numberrecognition}a for the number 5. The voltage triggers the reaction and polymeric fibres grow towards the output electrodes (see Fig. S9 and Methods for a detailed explanation).\\ The readout consists of the application of a constant bias or pulse to an arbitrary set of inputs while an aqueous solution of 1 mM NaCl immerses the network. The output current, corresponding to the recognition confidence, is integrated at the output nodes and reported normalised in Fig. \ref{fig:numberrecognition}b. The largest output is recorded when the input is equal to the trained pattern. One could add/remove pixels, thus reading out an incorrect or partially incorrect pattern with respect to the trained one. Indeed, single-pixel-error patterns (6 or 9 for instance) result in lower confidence, while dissimilar inputs yield even smaller values. 
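The train/readout scheme can be sketched as a toy model (ours; the conductance and inhibition values are arbitrary illustrative numbers, and the pixel glyphs are hypothetical bit patterns, not the ones used in the experiment):

```python
def train(pattern):
    """Learning epoch: a synapse grows at every input pad whose pixel is on."""
    return [bool(p) for p in pattern]

def readout(synapses, pattern, g_syn=1.0, inhibition=0.5):
    """Grown synapses conduct current to the output; a biased pad with
    no grown synapse instead acts as a gate that dedopes (inhibits)
    neighbouring connections, reducing the total output current."""
    current = 0.0
    for grown, pixel in zip(synapses, pattern):
        if pixel and grown:
            current += g_syn          # excitatory: conducts to the output
        elif pixel and not grown:
            current -= inhibition     # inhibiting gate: dedopes neighbours
    return current

five = [1,1,1, 1,0,0, 1,1,1, 0,0,1, 1,1,1]   # hypothetical 3x5 glyph "5"
six  = [1,1,1, 1,0,0, 1,1,1, 1,0,1, 1,1,1]   # one extra pixel ("6")
w = train(five)
assert readout(w, five) > readout(w, six) > 0
```

The single extra pixel of the "6" glyph lowers the confidence because the untrained pad subtracts from, rather than adds to, the integrated output.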
This new device concept stems from the unique ability of FDP to grow networks composed of conductive/excitatory and inhibiting synapses: if a connection grows, input and output will be in electrical contact; if no connection is present at a specific input pad after the learning epoch, and a potential is applied during the readout, the electrode acts as a gate and dedopes/inhibits the neighbouring synapses (Fig. S9c), hence reducing the output. \section{Conclusion} We propose field-directed polymerisation (FDP) as a novel approach to grow polymeric dendritic connections composed of many artificial synapses. FDP allows one to guide the polymerisation, tune the branching, and control both resistance and capacitance. The growth resembles the synaptogenesis process and the synapses show plastic responses triggered by time-dependent electrical stimuli. We demonstrate key features needed in synaptic electronics and neuromorphic computing, such as spike-timing-dependent plasticity, long-term potentiation, and depression from the millisecond to the week timescale, and employ FDP to achieve Pavlovian conditioning and Hebbian learning. We achieve this by controlling both the temporal pattern of excitations and the material properties of the neural networks: specifically, we change the growth parameters to influence the resistance and the capacitance, and we use them to affect the neuromorphic functions. Finally, we combine these features to show a new device concept capable of recognising numeric patterns. \section*{Supplementary Information} \begin{figure} \includegraphics[height=7cm]{images/transfer} \caption{\textbf{ | Characterisation of a PEDOT artificial synapse as an OECT:} Transfer curve relative to the device in Fig. 
2d after the third update of the synaptic weight (V$_{DS}$=500 mV).} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{images/resistance} \caption{\textbf{ | Directed growth and strengthening}: \textbf{a}, a synapse is grown between a selected input electrode and a neuron-like electrode with an AC signal of 50 Hz. \textbf{b}, A second and \textbf{c}, a third electrode are selected and synapses are grown at 50 Hz and 100 Hz respectively. \textbf{d}, An already existing synapse can be reinforced: in this case, the one grown in panel b. \textbf{e}, IV curve of the synapses. Scale bar is 200 $\mu$m.} \end{figure} \begin{figure} \includegraphics[height=7cm]{images/sem} \caption{\textbf{| Fibre morphology:} Scanning electron microscopy picture of a PEDOT:PF6 fibre. Nucleation sites are visible as spherical structures. Scale bar is 2 $\mu$m. } \end{figure} \begin{figure} \includegraphics[height=7cm]{images/actionpotential2} \caption{\textbf{ | Artificial synapse grown with an action potential-like waveform.} A signal resembling the action potential (5 V, 50 Hz) was used to grow an artificial synapse with FDP.} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{images/timeline} \caption{\textbf{ | Plasticity over different timescales.} We achieve plastic effects that span from short-term plasticity (ms range) up to the day and week scale (long-term plasticity).} \end{figure} \begin{figure} \includegraphics[height=7cm]{images/std.png} \caption{\textbf{ | Short-term depression}. 
By applying a positive pulsed signal (600 mV, 100 Hz, see inset) to a third electrode between 10 and 20 s, and between 35 and 40 s, the channel is dedoped and short-term plasticity at the millisecond scale is mimicked.} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{images/EIS2} \caption{\textbf{ | Impedance spectra of synapses grown at different frequencies.} The different morphology that the synapses acquire when varying the polymerisation frequency (see Figure 1c) is reflected in a different capacitance. \textbf{b} shows the circuit used to fit the data: the gate and electrolytic solution are kept fixed (within an error of 20\%). The top and middle panels of \textbf{b} show the data acquired with impedance spectroscopy and the relative fit. It follows that the devices have capacitances of 0.10, 0.39 and 1.93 $\mu$F for 10, 60 and 200 Hz respectively. This, in turn, affects the charging time ($\tau$=RC), as shown in the bottom panel.} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{images/triggered} \caption{\textbf{ | Schematics of the mechanism of triggered growth}: on the left, an AC signal is applied but, being insufficient for the oxidation of the monomer, it does not cause the formation of the fibre. The growth can be triggered without changing its potential, but rather by applying a voltage to neighbouring electrodes/neurons (right panel). Hence, it is possible to grow at low voltage and control the growth by shaping the field in the electrolyte.} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{images/patternSI.png} \caption{\textbf{| Pattern recognition system. a,} Pattern recognition device after the learning epoch for the number 0. \textbf{b}, Trained pattern corresponding to panel a. \textbf{c}, Output current for the exact pattern and for a single-pixel-error pattern (number 8): the extra input does not carry current to the output. 
Rather, it attracts negatively charged dopants in the solution and dedopes neighbouring synapses, hence reducing the total output.} \end{figure} \section*{Data Availability} The data that support the findings of this study are available from the corresponding author upon reasonable request. \end{document}
\section{#1}\renewcommand{\theequation} {\mbox{\arabic{section}.\arabic{equation}}}\setcounter{equation}{0}} \newcommand{\app}[1]{\section{Appendix: #1}\renewcommand{\theequation} {\mbox{\Alph{section}.\arabic{equation}}}\setcounter{equation}{0}} \renewcommand{\author}[1]{\begin{center}\Large #1\end{center}} \newcommand{\hostauthor}[1]{\begin{center}\Large #1\end{center}} \renewcommand{\date}[1]{\par\bigskip\par\sl\hfill #1\par\medskip\par\rm} \newcommand{\email}[1]{e-mail: \sl #1@alpha.science.unitn.it} \newcommand{\hostemail}[1]{e-mail: \sl stefan.steidl@uibk.ac.at} \newcommand{\femail}[1]{\footnote{\email{#1}}} \newcommand{\hostfemail}[1]{\footnote{\hostemail{#1}}} \newcommand{\pacs}[1]{\smallskip\noindent{\sl PACS number(s): \hspace{0.3cm}#1}\par\bigskip\rm} \newcommand{\hrule\par\begin{description}\item{Abstract: }\it}{\hrule\par\begin{description}\item{Abstract: }\it} \newcommand{\par\end{description}\hrule\par\medskip\rm}{\par\end{description}\hrule\par\medskip\rm} \newcommand{\ack}[1]{\par\section*{Acknowledgements} #1} \renewcommand{\vec}[1]{{\bf #1}} \renewcommand{\L}{{\bar L}} \newcommand{\cc}{{\phi_c}} \newcommand{\M}{{\cal M}} \newcommand{\ca}[1]{{\cal #1}} \newcommand{\hs}{\qquad\qquad} \newcommand{\nn}{\nonumber} \newcommand{\beq}{\begin{eqnarray}} \newcommand{\eeq}{\end{eqnarray}} \newcommand{\beqn}{\begin{eqnarray}} \newcommand{\eeqn}{\end{eqnarray}} \newcommand{\ap}{\left.} \newcommand{\at}{\left(} \newcommand{\aq}{\left[} \newcommand{\ag}{\left\{} \newcommand{\cp}{\right.} \newcommand{\ct}{\right)} \newcommand{\cq}{\right]} \newcommand{\cg}{\right\}} \newtheorem{Theorem}{Theorem} \newtheorem{Lemma}{Lemma} \newcommand{\R}{\mbox{$I\!\!R$}} \newcommand{\N}{\mbox{$I\!\!N$}} \newcommand{\Z}{\mbox{$Z\!\!\!Z$}} \newcommand{\C}{\mbox{$I\!\!\!\!C$}} \newcommand{\ii}{\infty} \newcommand{\X}{\times\,} \newcommand{\fr}[2]{\mbox{$\frac{#1}{#2}$}} \newcommand{\tr}{\,\mbox{tr}\,} \newcommand{\Tr}{\,\mbox{Tr}\,} \newcommand{\PP}{\,\mbox{PP}\,} 
\newcommand{\Res}{\,\mbox{Res}\,} \newcommand{\ach}{\,\mbox{cosh$^{-1}$}\,} \newcommand{\ash}{\,\mbox{sinh$^{-1}$}\,} \newcommand{\ath}{\,\mbox{tanh$^{-1}$}\,} \newcommand{\acth}{\,\mbox{coth$^{-1}$}\,} \renewcommand{\Re}{\,\mbox{Re}\,} \renewcommand{\Im}{\,\mbox{Im}\,} \newcommand{\lap}{\Delta} \newcommand{\alpha}{\alpha} \newcommand{\beta}{\beta} \newcommand{\gamma}{\gamma} \newcommand{\delta}{\delta} \newcommand{\varepsilon}{\varepsilon} \newcommand{\zeta}{\zeta} \newcommand{\iota}{\iota} \newcommand{\kappa}{\kappa} \newcommand{\lambda}{\lambda} \newcommand{\varrho}{\varrho} \newcommand{\sigma}{\sigma} \newcommand{\omega}{\omega} \newcommand{\varphi}{\varphi} \renewcommand{\th}{\theta} \newcommand{\vartheta}{\vartheta} \newcommand{\upsilon}{\upsilon} \newcommand{\Gamma}{\Gamma} \newcommand{\Delta}{\Delta} \newcommand{\Lambda}{\Lambda} \newcommand{\Sigma}{\Sigma} \newcommand{\Omega}{\Omega} \newcommand{\Theta}{\Theta} \newcommand{\Upsilon}{\Upsilon} \begin{document} \preprint{UTF 361 \\ gr-qc/9509036} \begin{titolo} A Bisognano-Wichmann-like Theorem in a Certain Case of a {\em Non} Bifurcate Event Horizon related to an Extreme Reissner-Nordstr\"om Black Hole \end{titolo} \author{Valter Moretti \femail{moretti}} \dip \hostauthor{Stefan Steidl \hostfemail{steidl}} \hostdip \date{July-August-September 1995} \hrule\par\begin{description}\item{Abstract: }\it \\ Thermal Wightman functions of a massless scalar field are studied within the framework of a ``near horizon'' static background model of an extremal R-N black hole. This model is built up by using global Carter-like coordinates over an infinite set of Bertotti-Robinson submanifolds glued together. The analytical extendibility beyond the horizon is imposed as a constraint on (thermal) Wightman functions defined on a Bertotti-Robinson submanifold. It turns out that only the Bertotti-Robinson vacuum state, i.e. $T=0$, satisfies the above requirement. 
Furthermore, the extension of this state onto the whole manifold is proved to coincide exactly with the vacuum state in the global Carter-like coordinates. Hence a theorem similar to the Bisognano-Wichmann theorem for the Minkowski space-time holds in terms of Wightman functions, with vanishing ``Unruh-Rindler temperature''. Furthermore, the Carter-like vacuum restricted to a Bertotti-Robinson region, which is a pure state there, has vanishing entropy despite the presence of event horizons. Some comments on the real extremal R-N black hole are given. \par\end{description}\hrule\par\medskip\rm \pacs{04.62.+v, 04.70.Dy, 11.10.Wx} \newpage \section*{Introduction} In a space-time whose event horizons have a {\em non-empty} intersection, i.e. a Rindler- or Schwarzschild-like space-time, several methods exist for determining the possible equilibrium states of a scalar field propagating therein. They select one special temperature only, the Rindler-Unruh-Hawking temperature. These theorems use the KMS condition \cite{kms} and the Haag-Narnhofer-Stein principle (i.e., the "Haag scaling prescription") \cite{HNS,FH} or demand a stationary and Hadamard behaviour of the Wightman functions \cite{kaywald}.\\ These theorems cannot be employed in the case of an extremal Reissner-Nordstr\"{o}m black hole due to the vanishing of the surface gravity as well as the presence of {\em non-bifurcate event horizons}. In fact, the future event horizon and the past event horizon do not intersect there. However, Anderson, Hiscock and Loranz \cite{AHL} proved that only the Reissner-Nordstr\"{o}m vacuum state has a regular stress-tensor on the horizon and thus only this state is a possible equilibrium state in the framework of semiclassical quantum gravity. 
Finally, in recent works \cite{moretti, moretti2}, Moretti shows that the Haag-Narnhofer-Stein principle for the behaviour of Wightman functions on the horizon of a black hole turns out to be unable to determine the truly admissible thermal quantum states in the case of an extremal R-N black hole, whereas a further development of the previous principle, the Hessling principle \footnote{Roughly speaking, these two principles correspond respectively to a weaker and a stronger version of a Quantum Einstein Equivalence Principle. They require a weaker and a stronger ``Minkowskian behaviour'' of two-point Wightman functions in the limit of vanishing geodesic distance between the arguments.}\cite{hessling,moretti,moretti2}, determines only the Reissner-Nordstr\"om quantum vacuum as physically admissible (i.e. $T=0$). A similar result, obtained through a very different analysis, appeared in \cite{vanzo}. These facts seem to improve upon the topological result obtained by the method of eliminating conical singularities from the Euclidean, time-extended manifold, which accepts any value of the temperature for an R-N black hole\cite{HHR,GM}.\\ Almost all the previously mentioned papers deal with quantum field states defined at least in a certain space-time region (boundary included) bounded by event horizons, e.g. the external region of a black hole. \\ On the other hand, it is obvious that the classical field is not blocked by the horizons and thus it seems necessary to demand the existence of global extensions of physically sensible quantum field states. This requirement turns out to be satisfied by the Minkowski vacuum in the Rindler wedge theory and by the Hartle-Hawking state in the Schwarzschild black hole theory. 
\\ As is well known, the extremal Reissner-Nordstr\"om manifold can be maximally extended into Carter's manifold and thus it seems interesting to study quantum field states defined on the whole Carter manifold (if they exist).\\ In this paper, we shall study a ``near horizon'' model of Carter's and Reissner-Nordstr\"om manifolds. Following an algebraic approach to quantum field theory and starting from KMS quantum states initially defined in a Reissner-Nordstr\"om-like submanifold only, we shall study the existence of {\em analytical} extensions of their Wightman functions beyond the horizons, and thus to the whole Carter-like manifold.\\ In particular, as our first result, we shall prove the possibility of an algebraic quantum field formulation on our manifold despite the fact that it is not globally hyperbolic.\\ Moreover, as our second result, we shall prove that only the approximated vacuum state corresponding to the Reissner-Nordstr\"{o}m vacuum state (with zero temperature) can be extended beyond the horizons.\\ Furthermore we shall see that there exists a relation between our model-manifold endowed with Bertotti-Robinson sub-manifolds and Minkowski space-time endowed with the well-known pair of Rindler wedges. These two structures act as ``toy models'' of two different kinds of black holes: the extremal R-N black hole and the eternal Schwarzschild black hole respectively. Implementing this analogy, as our third result, we shall recover the equivalent of the {\em Bisognano-Wichmann theorem} for the Minkowski space-time\cite{sewell}. In this context, the analog of the Minkowski vacuum is the vacuum defined with respect to the global Carter-like coordinates of our manifold. The $\beta=2\pi$-Rindler-KMS state corresponds to the vacuum of the R-N-like coordinates.\\ Thus, an important difference arises. 
The analog of the Rindler-Unruh temperature is now {\em zero} and thus {\em no} KMS prescription appears, but the {\em stationarity} of the state remains, i.e., the functional dependence on the difference of the temporal arguments only.\\ In {\bf Section 1} we shall introduce the well-known Carter representation for a maximally analytically extendible manifold for an extremal R-N black hole. Furthermore, we shall perform the necessary approximations in order to deal with a neighbourhood of the horizon.\\ In {\bf Section 2}, using approximated Carter coordinates and the Bertotti-Robinson metric, we shall construct a ``near horizon'' toy model of Carter's manifold which turns out not to be globally hyperbolic. We shall prove that it is possible to define a quantum field theory. Finally, we shall study the analytical extension of Wightman's functions beyond the horizons, proving also a Bisognano-Wichmann-like theorem in terms of Wightman functions.\\ In {\bf Section 3}, we shall point out our conclusions and we shall look at the real extremal R-N black hole. \section{Carter's Manifold and Approximations near its Horizons} \input epsf The general form of the Reissner-Nordstr\"om black-hole metric is given by \cite{hawking libro} \beq ds^{2}= -\left( 1-\frac{2M}{\bar{r}}+ \frac{Q^2}{\bar{r}^2} \right) dt^{2} + \frac{1}{ 1-\frac{2M}{\bar{r}}+ \frac{Q^2}{\bar{r}^2}} d\bar{r}^{2} + \bar{r}^{2}\left( d\theta^{2}+\sin^{2}\theta d\varphi^{2}\right) \nn \:, \eeq where $M$ is the mass and $Q$ the charge of the black hole. We are interested in the {\em extremal} case $Q=M$. Thus we have \beq ds^{2}= -\left( 1-\frac{M}{\bar{r}}\right)^{2} dt^{2} + \frac{1}{\left( 1-\frac{M}{\bar{r}}\right) ^{2}} d\bar{r}^{2} + \bar{r}^{2}\left( d\theta^{2}+\sin^{2}\theta d\varphi^{2}\right) \nn \:. \eeq For the sake of simplicity, we shall choose an extremal black hole of unit mass by a suitable choice of the units of measure. 
In addition, we shall use the abbreviation $d\Omega_{2} := \left( d\theta^{2}+\sin^{2}\theta d\varphi^{2}\right)$. Thus, our metric reads \beq ds^{2}= -\left( 1-\frac{1}{\bar{r}}\right)^{2} dt^{2} + \frac{1}{\left( 1-\frac{1}{\bar{r}}\right) ^{2}} d\bar{r}^{2} + \bar{r}^{2}d\Omega_{2}\label{RN} \:. \eeq As is well known, the above chart does not cover the whole manifold as $ds^{2}$ is singular at $\bar{r} = 1$. This inconvenience can be avoided, following the Schwarzschild case, by introducing Kruskal-like coordinates\footnote{It must be stressed that, in contrast to Schwarzschild coordinates in the interior of the event horizon, where, roughly speaking, the temporal and radial coordinates change roles (in the usual interpretation), the coordinates $\{\bar{r}, t\}$ in the R-N chart keep their meaning also for $\bar{r} < 1$.}, i.e., {\em Carter's coordinates} \cite{carter}. These define a {\em maximally analytically extended} manifold obtained from the R-N manifold $(\bar{r}>1)$. To begin with, we introduce two functions $u(t,\bar{r})$ and $w(t,\bar{r})$ by \begin{eqnarray} u &=& r^{\ast} + t \:,\label{X}\\ w &=& r^{\ast} - t \:, \label{Y} \end{eqnarray} where $r^{\ast}$ is given by the invertible function of $\bar{r}>1$ \beq r^{\ast}(\bar{r}) = \int \frac{d\bar{r}}{\left( 1-\frac{1}{\bar{r}}\right) ^{2}} = \bar{r}\frac{\bar{r}-2}{\bar{r}-1}+2\ln \left| \bar{r}-1\right|\label{ara} \:. \eeq Let us introduce Carter's coordinates $\{T, R,\theta ,\varphi\}$ in the Reissner-Nordstr\"om manifold \cite{carter}\footnote{ In this paper, in order to simplify calculations, we define Carter's coordinates by changing Carter's original definition through a trivial linear transformation, $T=T_C/2- 3\pi/4$, $R=R_C/2+\pi/4$, where $R_C$ and $T_C$ are Carter's coordinates as they appear in \cite{carter}. 
} through the equations \begin{eqnarray} u & = & - \cot \left( T + R \right)\label{Z}\:,\\ w & = & + \cot \left( T - R \right) \label{T}\:, \end{eqnarray} and thus \begin{eqnarray} 2T & = & \cot^{-1}(w) - \cot^{-1}(u) \:,\\ 2R & = & -\cot^{-1}(w) -\cot^{-1}(u)\:, \end{eqnarray} where $T\in ] -\pi/2, +\pi/2[$ and $R\in ] 0, \pi[$. The metric in Eq.(\ref{RN}) now reads \beq ds^{2} = Q\left( -dT^{2}+dR^{2}\right) + \bar{r}^{2} d\Omega_{2} \label{RNC}\:, \eeq where $Q$ is given by \beq Q = \left( 1-\frac{1}{\bar{r}}\right)^{2}\csc^{2}\left( T+ R \right) \csc^{2}\left( T - R \right) \label{q}\:. \eeq This form of the metric can be extended to a larger manifold where $T$ ranges from $-\infty$ to $+\infty$ and $R$ ranges from $0$ to $2\pi$ (the angular variables having their customary range).\\ A part of the complete manifold is represented in {\bf figure~1}. The initial form of the metric (\ref{RN}) holds in each of the R-N regions; conversely, the new form (\ref{RNC}) holds in the whole manifold.\\ Note that the right edges as well as the intersection points of the horizons of the infinite number of R-N zones \beq R=0 \:\:\:\: T=0,\: \pm \pi, \:\pm2 \pi,\: \pm3 \pi,\: ... \eeq are not in the manifold (the diagram is really a {\em Penrose diagram}). In fact, these points are infinitely far away from internal points of the manifold if the distance along time-like or space-like geodesics is taken or the affine parameter distance along light-like geodesics is considered. From this property it also follows that it is not possible to (analytically) extend the manifold any further.\\ The {\em time-like} irremovable singularity is represented by all the points which have $R=0$ and $T$ taking values different from $k\pi$, $k\in \Z$. 
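The closed form of the tortoise-like coordinate in Eq.~(\ref{ara}), and its divergence on the horizon $\bar{r}\to 1$, can be checked symbolically. The following sketch (ours, using sympy; not part of the original derivation) verifies that the quoted primitive differentiates back to the integrand:

```python
import sympy as sp

rb = sp.symbols('rbar', positive=True)

# closed form of the tortoise-like coordinate r*(rbar) quoted in the text
r_star = rb * (rb - 2) / (rb - 1) + 2 * sp.log(rb - 1)

# its derivative must reproduce the integrand 1/(1 - 1/rbar)^2
integrand = 1 / (1 - 1 / rb) ** 2
assert sp.simplify(sp.diff(r_star, rb) - integrand) == 0

# near the horizon rbar -> 1+ the coordinate diverges as -1/(rbar - 1),
# consistent with the horizon being infinitely far away in this chart
lead = sp.limit(r_star * (rb - 1), rb, 1)
assert lead == -1
```

The $-1/(\bar{r}-1)$ leading behaviour is what makes the horizon sit at $r^\ast\to-\infty$, in contrast to the logarithmic divergence of the Schwarzschild tortoise coordinate.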
The open R-N regions, i.e., the R-N regions without boundary and event horizons, are globally hyperbolic, the ``lines'' $T= k \pi$ being Cauchy surfaces.\\ Returning to Eq.(\ref{q}), we observe that, in order to define $Q$ on the whole manifold, one has to analytically extend and then invert (in the variable $\bar{r}$) \beq -\cot\left( T + R \right) + \cot\left( T - R \right) = 2 r^{\ast}(\bar{r}) = 2\bar{r}\frac{\bar{r}-2}{\bar{r}-1}+4\ln \left| \bar{r}-1\right| \label{r estesa} \:. \eeq Then, by means of Eq.~(\ref{r estesa}), it is possible to restore the same form of the metric of Eq.~(\ref{RN}) also {\em outside} of the Reissner-Nordstr\"{o}m region, excluding the horizons. In fact, once one uses Carter's coordinates, one may redefine the $r^{\ast}$ variable outside of the R-N region (towards the future horizon for example) by Eq.~(\ref{r estesa}) and the $\bar{r}$ variable by means of Eq.~(\ref{ara}) for $\bar{r}<1$. Finally the $t$ variable is restored by a trivial use of Eqs.~(\ref{X}), (\ref{Y}), (\ref{Z}) and (\ref{T}). Note that, in this way, one can also construct a {\em time-like Killing vector} on the whole manifold (with the exception of the horizons) simply by considering the tangent vector to the $t$ coordinate. \begin{center} \leavevmode \epsfbox{fig1.eps} \end{center} \bigskip To conclude, we observe that Carter's manifold is not the only manifold which one can build up starting from the R-N manifold. For example it is possible to identify two local charts of Carter's chain and thus obtain a new manifold containing a finite number of R-N zones. However, this kind of extension trivially violates the {\em weaker causality condition}\cite{wald}, hence it is not clear whether a quantum field theory can be defined there\footnote{However, Kay et al. recently investigated the possibility of a QFT in similar backgrounds \cite{kay,higuchi}. }.\\ Now we shall consider an approximated metric near the horizons. 
In the shaded region near the horizons represented in {\bf figure 2}, the metric (\ref{RNC}) can be approximated by a {\em static} metric \beq ds^2 \sim ds^2_0 := \frac{4}{\sin^2 2R} (-dT^2+dR^2) + d\Omega_2 \label{dso} \:. \eeq The vector $\partial_T$ defines an approximated {\em time-like Killing vector} near the horizons.\\ In the same region, but considering R-N coordinates, $ds^2$ can be approximated by the {\em Bertotti-Robinson} metric \cite{BR}, as is well known \cite{referee}. Thus we have \beq ds^2 \sim ds^2_{BR}:= \frac{-dt^2+dr^2}{r^2}+d\Omega_2 \label{aBR}\:, \eeq where \begin{eqnarray} r&:=& -r^\ast \:\:\:\: \mbox{if} \:\:\:\: R>T \:\:\:\: \mbox{or} \\ r&:=& +r^\ast \:\:\:\: \mbox{if} \:\:\:\: R<T \:. \end{eqnarray} \begin{center} \leavevmode \epsfbox{fig2.eps} \end{center} \bigskip Finally, in the considered region, the transformation law between $r,t$ and $R,T$ is \begin{eqnarray} 2 r &\sim& | \cot (R-T) +\cot(R+T) | = \frac{2\sin 2R}{|\cos 2T\: -\cos2R |} \:, \label{ar}\\ 2 t &\sim& \:\:\cot (R-T) -\cot(R+T) \:\: =\frac{2\sin 2T}{\cos 2T\: -\cos2R } \:. \label{at} \end{eqnarray} All the previous approximations are carefully examined in {\bf Appendix A} (see also \cite{referee}). \section{Our Toy Model and its Thermal Wightman Functions} \input epsf In this section we consider the Wightman functions of a massless scalar field obtained by quantizing it on a new manifold, built up by using the Bertotti-Robinson metric only. Obviously, we suppose that these Wightman functions approximate the ``true'' Wightman functions of the Reissner-Nordstr\"{o}m metric near the horizon (inside the R-N zone), where the two metrics are indistinguishable. A similar hypothesis was also used to calculate the renormalized stress tensor. Then, an independent check proved that this approximation was correct, at least for that purpose \cite{AHL}. 
Furthermore, a similar assumption was used to obtain the Hawking temperature in the case of a Schwarzschild black hole \cite{HNS} and the result was proved to be correct.\\ In {\bf Appendix B} we return to these assumptions with some general mathematical comments.\\ In order to have a mathematically well-defined background for our field theory, we shall build up a complete manifold by gluing together an infinite number of Bertotti-Robinson charts as pointed out in {\bf figure~3}. $T$ and $R$ are {\em global coordinates} of this manifold. These are connected to the Bertotti-Robinson variables $t$ and $r$, in every B-R region, by the following equations (see Eqs.~(\ref{ar}) and (\ref{at})) \begin{eqnarray} 2 r &=& | \cot (R-T) +\cot(R+T) | = \frac{2\sin 2R}{|\cos 2T\: -\cos2R |} \:, \label{r}\\ 2 t &=& \:\:\cot (R-T) -\cot(R+T) \:\: =\frac{2\sin 2T}{\cos 2T\: -\cos2R }\:, \label{t} \end{eqnarray} where $R\in ] 0, \pi/2 [$ and $T\in \R$. It can easily be shown that, considering the form of the metric and the relations between $r,t$ and $R,T$ near every horizon, one finds the same equations as in the previous section. In this sense our manifold is a toy model of Carter's manifold. The global form of the metric (which is regular on the horizons) is the static metric of Eq.~(\ref{dso}) \beq ds^{2} = \frac{4}{\sin^{2}2 R}(-dT^{2} + dR^{2}) + d\Omega_{2}\:.\label{quasi einstein} \eeq The above metric is conformal to the metric of the {\em Einstein static universe}\footnote{Usually, the metric of Einstein's static universe is written in terms of $R':=2R$ and $T':=2T$ so that the global factor $4$ disappears.} by the factor $1/\sin^{2}2R$.\\ Let us look at our manifold as it is represented in {\bf figure~3}.\\ The edges look like singularities of the metric, but this is not the case. 
In fact, the intersection points of the horizons ($r=+\infty$) are not in the manifold because each geodesic reaching them from an inner point spans an infinite Riemannian length (or an infinite affine parameter gap if it is a null geodesic). Similarly, by calculating also the geodesics which reach $r=0$, it is possible to prove the same property for all the remaining points on the manifold's edges. Hence the edges are not part of the manifold and thus it is not possible to extend the manifold any further.\\ \begin{center} \leavevmode \epsfbox{fig3.eps} \end{center} \bigskip Finally we stress that, passing to the Euclidean time $t_{E}= it$ and using $r,t_{E}$ coordinates, {\em no conical singularity} arises for any choice of the Euclidean time period $\beta$\footnote{The point $r=0$ (which is a conical singularity of the Euclidean manifold in the Rindler or Schwarzschild cases) does not belong to the manifold now.}. Hence, following \cite{HHR} and \cite{GM}, we should accept all the values of the temperature of the KMS states defined inside a B-R zone.\\ It is possible to compare Carter's manifold to the Kruskal manifold in the following sense.\\ Let us consider the Kruskal manifold. There, the metric looks like that of a Rindler space if Schwarzschild's coordinates are used, or that of a Minkowski space if Kruskal's coordinates are adopted. Also, the transformation laws between these two coordinate systems locally resemble the corresponding transformation laws in the Minkowski manifold. Furthermore, near the horizon, the Kruskal time defines an approximated time-like Killing vector which becomes a global time-like Killing vector in the Minkowski space-time.\\ Considering Carter's manifold, the same features arise. We have to consider Carter's coordinates as Kruskal's coordinates, our global Carter-like model as a Minkowski manifold and the Bertotti-Robinson manifold as a Rindler wedge. 
Then, near the horizons, Carter's metric looks like that of Bertotti-Robinson if we use Reissner-Nordstr\"{o}m coordinates, or the metric conformal to the Einstein static metric if we use Carter's coordinates, and so on. In particular, the approximate time-like Killing vector near the horizon in Carter's manifold becomes an exact time-like Killing vector in our Carter-like global manifold.\\ Roughly speaking, the quantum field theory in a Rindler wedge on the background of a Minkowski space-time appears as a simplified quantum field theory in a Schwarzschild space-time on the background of a Kruskal manifold. The coincidence of the Unruh-Rindler state with the Minkowski vacuum (Bisognano-Wichmann theorem) appears as a simplified version of the coincidence of the $\beta= 4\pi$-Schwarzschild-KMS state \footnote{Using opportune units of measure.} with the Hartle-Hawking state. It is reasonable to expect a similar situation for a quantum field theory on the Carter manifold. \\ In the following we want to implement part of this idea, proving, as our third result, the equivalent of the Bisognano-Wichmann theorem in our Carter-like manifold. \\ Like Carter's manifold, our Carter-like manifold is not globally hyperbolic \cite{wald, hawking libro} because near the edge $r=0$ it is possible to find a pair of points $p$, $q$ with $J^{+}(p) \cap J^{-}(q)$ not closed and thus not compact; furthermore, differently from Carter's manifold, all the ``patch manifolds'' (B-R zones) are non-globally hyperbolic. We shall prove that, inside any of these regions as well as in the whole Carter-like manifold, a ``quasi-standard'' quasifree scalar field theory can be defined. \subsection{Possibility of a QFT} In order to point out the possibility of a QFT on our manifolds, we shall follow the {\em algebraic approach} used in \cite{kaywald} based on the {\em Weyl algebra}.\\ First we consider the B-R submanifolds.
From now on, we shall understand $x^{0},x^{1},x^{2},x^{3}$ (posing also $t:=x^{0}$, $\vec{x}\equiv (x^{1},x^{2},x^{3})$ and $r:=\mid \vec{x}\mid$) as Minkowski coordinates in Minkowski space as well as Bertotti-Robinson coordinates in the Bertotti-Robinson space.\\ Let us start by noting that a very simple connection exists between the solutions of the massless Klein-Gordon equation in Minkowski space and in the Bertotti-Robinson space. If $\phi(\vec{x},t)_{M}$ indicates a generic $C^{\infty}$ solution with compact spatial support of the massless Minkowskian K-G equation, then \beq \phi(\vec{x},t)_{BR} = r \phi(\vec{x},t)_{M}\:, \label{identita'} \eeq where $r > 0 $, $t \in \R$ and $\phi(\vec{x},t)_{BR}$ is a solution of the massless Bertotti-Robinson K-G equation of the same order of smoothness but, in general, without compact spatial support. In order to build up the Weyl algebra \cite{haag libro,fulling,kaywald}, we have to consider the following bilinear symplectic form or indefinite scalar product \beq \sigma(\phi_{1},\phi_{2}) :=\int_{t=\mbox{const.}}\phi_{1} {\stackrel{\leftrightarrow}{\nabla}}_{\mu}\phi_{2}\: n^{\mu} \sqrt{h}\:\: dx_{1} dx_{2} dx_{3}\:, \label{simplettica} \eeq where $n$ is the normal (and normalized) vector to the surface $t=$constant and $h$ is the determinant of the induced metric on this surface.\\ In the case of Minkowski space the above surfaces are Cauchy surfaces, but this is not true in the case of Bertotti-Robinson space and thus we cannot deal with the standard theory. However, if we decide to restrict the vector space $S$ of solutions of the K-G equation by considering only the scalar fields on the left hand side of Eq.~(\ref{identita'})\footnote{Roughly speaking, this is similar to a boundary condition requirement on the field solutions of the K-G equation in the B-R manifold.}, we trivially find the following identity \beq \sigma(\phi_{BR1},\phi_{BR2})_{BR}=\sigma(\phi_{M1},\phi_{M2})_{M}\:.
\eeq Following the notation of \cite{kaywald}, we can formally define the ``field operator'' $\hat{\phi}_{BR} $ on the B-R manifold by setting \beq \sigma(\hat{\phi}_{BR},\phi_{BR})_{BR}=\sigma(\hat{\phi}_{M}, \phi_{M})_{M}\:, \eeq where $\phi_{BR} \in S$. Starting from the set $S$ and using the previous identities, one can construct the usual theory of quantum fields for quasifree states in the algebraic approach based on the Weyl algebra (\cite{kaywald}) on the Bertotti-Robinson background, too. In particular, (quasi-free) states can be built using the (quasi-free) states of the Minkowski theory as follows \beq \lambda(\phi_{BR1}...\phi_{BRn})_{BR} := \lambda(\phi_{M1}...\phi_{Mn})_{M} \:, \label{F} \eeq where $\lambda_{BR}$ ($\lambda_{M}$) denotes a generic $n$-point function evaluated on the K-G solutions $\phi_{BRk}$ ($\phi_{Mk}$) and these fields are related to each other by Eq.~(\ref{identita'}).\\ In terms of integral kernels the above identity reads \beq \lambda(x_1,x_2,...x_n)_{BR} := r_1 r_2...r_n\:\:\lambda(x_1,x_2,...x_n)_{M} \:. \label{F'} \eeq We note that this formulation of quantum field theory in B-R space-time agrees with Kay's general formulation for generally {\em non} globally hyperbolic manifolds based on {\em F-locality} \cite{kay,higuchi}. This follows using test functions with compact support {\em inside the B-R manifold}, re-formulating the theory in terms of a ``four-smeared'' field theory and using the ``advanced minus retarded'' Green function induced by antisymmetrizing Eq.(\ref{F'}) for $n=2$.
\footnote{Subsequently, one could use the Fewster-Higuchi theorem \cite{higuchi}, for example.}\\ The thermal Wightman functions relative to the vacuum state arising from canonical quantization in Bertotti-Robinson coordinates obviously satisfy \beq W_{\beta}^{\pm}(x,x{'})_{BR} = rr^{'}\:W_{\beta}^{\pm}(x,x{'})_{M}\:, \eeq which follows directly from Eq.~(\ref{F'}).\\ Summing over the normal modes of the Minkowskian K-G equation and then thermalizing by employing the well-known {\em sum over images method} \cite{fr,mahan,kapusta,mova} we easily obtain the thermal Wightman functions (really distributions) for a massless scalar field in the B-R background. They read, dropping the index 'BR', \beq W^{\pm}_{\beta}= \frac{\mid \vec{x}\mid \mid \vec{x^{'}}\mid}{4\pi^2} \frac{\pi\left\{\coth\frac{\pi}{\beta} (|\vec{x}-\vec{x}^{'}|+t-t^{'}\mp i\varepsilon)+\coth \frac{\pi}{\beta}(|\vec{x}-\vec{x}^{'}|-t+t^{'}\pm i\varepsilon)\right\} }{2\beta |\vec{x}-\vec{x}^{'}|} \:, \label{wightman distribuzione} \eeq or, for $T= 1/\beta =0$, \beq W^{\pm}(x,x^{'})= \frac{\mid \vec{x}\mid \mid \vec{x^{'}}\mid}{4\pi^2} \frac{1}{ \mid \vec{x} - \vec{x^{'}}\mid^{2} - \left(t - t^{'}\mp i\varepsilon\right)^{2} } \label{wightman zero distribuzione} \:. \eeq We observe that, in the sense of the usual limit of functions but also using a weak limit interpretation, one obtains the second Wightman function from the first one as $\beta \rightarrow +\infty$.\\ Note that the forms (\ref{wightman distribuzione}) and (\ref{wightman zero distribuzione}) of the Wightman functions hold in the interior of every B-R zone of the complete manifold only.\\ In order to prove the possibility of a QFT on the whole Carter-like manifold we shall use the {\em Dowker-Schofield scaling property} \cite{dowkerschofield}, generalizing the previous proof. Recall that Einstein's static universe is globally hyperbolic and thus a standard algebraic QFT can be defined there.
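The statement that Eq.~(\ref{wightman zero distribuzione}) is the $\beta\rightarrow+\infty$ limit of Eq.~(\ref{wightman distribuzione}) can be spot-checked numerically. The Python sketch below is our own illustration (with arbitrary sample values, $\varepsilon=0$ and spacelike-separated arguments), not part of the original derivation:

```python
import math

def W_beta(r1, r2, dx, dt, beta):
    # Thermal Wightman function of Eq. (wightman distribuzione), epsilon = 0;
    # dx = |x - x'|, dt = t - t', with dx > |dt| (spacelike separation)
    pref = r1 * r2 / (4 * math.pi**2)
    c = math.pi / beta
    num = math.pi * (1 / math.tanh(c * (dx + dt)) + 1 / math.tanh(c * (dx - dt)))
    return pref * num / (2 * beta * dx)

def W_vac(r1, r2, dx, dt):
    # Zero-temperature form, Eq. (wightman zero distribuzione)
    return r1 * r2 / (4 * math.pi**2) / (dx**2 - dt**2)

# For large beta the thermal function approaches the vacuum one
r1, r2, dx, dt = 1.3, 0.8, 2.0, 0.5
assert abs(W_beta(r1, r2, dx, dt, 1e7) - W_vac(r1, r2, dx, dt)) < 1e-9
```

The check uses $\coth(\pi u/\beta)\rightarrow\beta/(\pi u)$ as $\beta\rightarrow+\infty$, which reduces the curly bracket to $(\beta/\pi)\,2|\vec{x}-\vec{x}^{'}|/(|\vec{x}-\vec{x}^{'}|^{2}-(t-t^{'})^{2})$.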
Following Dowker and Schofield \cite{dowkerschofield}, let us suppose we have two {\em static} metrics which are conformally related \beq ds^{2}=g_{00}(\vec{x})(dx^{0})^{2}+g_{ij}(\vec{x})dx^{i}dx^{j} \label{cuno} \eeq and \beq ds^{'2}=g^{'}_{00}(\vec{x})(dx^{0})^{2}+g^{'}_{ij}(\vec{x})dx^{i}dx^{j}\:, \label{cdue} \eeq where \beq g^{'}_{\mu\nu}=\lambda^{2}(\vec{x}) g_{\mu\nu} \eeq and let us consider the solutions of the respective Klein-Gordon-like equations \beq \left(\Box +\xi R + m^{2}\right) \phi(x) =0 \label{KG1} \eeq and \beq \left(\Box^{'} +\xi R^{'} +\left(\xi-\frac{1}{6}\right)\Box^{'} \left(\lambda^{-2}\right)+ m^{2}\lambda^{-2}\right) \phi^{'}(x)=0\:, \label{KG2} \eeq where $R$ is the scalar curvature. Then, the solutions of the Klein-Gordon equations above are connected to each other by the Dowker-Schofield scaling property \cite{dowkerschofield} \beq \phi(x)= \lambda(\vec{x}) \phi^{'}(x) \:. \label{scaling} \eeq In our case, we consider $ds^{'2}$ as the metric of Einstein's static universe and $ds^{2}$ as the metric of Eq.~(\ref{quasi einstein}). Thus $\lambda^{2}=\sin^{2}2R$.\\ The fields propagating in the whole Carter-like manifold satisfy the Klein-Gordon equation (\ref{KG1}) with $m=0$ and $R=0$; in fact the metric of Eq.~(\ref{quasi einstein}) has vanishing scalar curvature. We are free to choose $\xi=1/6$ ({\em conformal coupling}). Thus, if $\phi_{ESU}$ is a solution of the massless, {\em conformally coupled} Klein-Gordon equation in Einstein's Static Universe ($\equiv$'$ESU$'), the field \beq \phi(\vec{R},T)_{CEU} := \sin{2R}\:\:\phi(\vec{R},T)_{ESU}\:, \label{esu} \eeq will satisfy the massless, {\em minimally coupled} Klein-Gordon equation with the metric (\ref{quasi einstein}) ('$CEU$'$\equiv$ 'Conformal to Einstein's Universe'). Due to Eq.(\ref{esu}), one also finds \beq \sigma(\phi_{CEU1},\phi_{CEU2})_{CEU} =\sigma(\phi_{ESU1},\phi_{ESU2})_{ESU}\:.
\eeq As in the previous case, it is possible to find an algebraic (quasi-)standard field theory for the metric (\ref{quasi einstein}) by starting from the vector space $S'$ of K-G solutions defined by Eq.~(\ref{esu}), while the right hand side covers the vector space of (conformally coupled) K-G solutions of class $C^{\infty}$ in Einstein's static universe\footnote{It is not necessary to demand compact spatial support because the spatial section of Einstein's static universe is compact, being homeomorphic to $S^{3}$. Moreover, the observation above on Kay's F-locality remains valid also in this case.}.\\ Finally, the Wightman functions satisfy ($T=1/\beta \geq 0$) \beq W_{\beta}^{\pm}(X,X^{'})_{CEU} = \sin 2R \: \sin 2R^{'} \:\: W_{\beta}^{\pm}(X,X^{'})_{ESU}\:, \label{relazione conforme} \eeq with one factor for each argument, as follows from Eq.~(\ref{esu}). Similar identities hold for any kind of Green function.\\ \subsection{Extendibility beyond the Horizons} In order to obtain final Wightman functions which are defined also when the two arguments are on {\em opposite sides} of a horizon and, furthermore, Wightman functions which are defined on the whole manifold, we shall study whether it is possible to {\em analytically} extend the previous Bertotti-Robinson Wightman functions. We shall find that this is possible only in the case $T=1/\beta=0$.\\ Taking into account the existence of a global time-like Killing vector $\partial_{T}$, we expect to find a function which is translationally invariant also with respect to the global time $T$, as happens for the Minkowski vacuum in the Rindler wedge theory.\\ Our main idea is to consider the case $\varepsilon=0$, avoiding light-like correlated arguments and understanding the Wightman functions as proper functions; furthermore, to keep one argument fixed in the interior of a certain fixed B-R region and to place the second near the (future or past) event horizon.
Finally we want to translate the Wightman function from R-N variables into the variables $R,T$ (which are regular on the horizon) and to check whether the obtained function of the second argument is {\em analytically} extendible beyond the horizon into another B-R region.\\ Obviously we have to perform an analogous procedure which starts in the latter region and extends the function into the former region. It is reasonable to demand that the obtained extended functions are the same in both cases.\\ First we consider the simple case $T=1/\beta = 0$. Starting from Eq.~(\ref{wightman zero distribuzione}) and passing to the variables $R$ and $T$ by means of Eq.s~(\ref{r}) and (\ref{t}), in the case $\varepsilon=0$ one finds \beq W^{\pm}(X,X^{'})_{BR} = \frac{(4 \pi^{2})^{-1}\:\:\sin 2R\:\:\sin 2R^{'}} {(\cos 2R \:-\cos 2R^{'})^{2}+ |\sin 2\vec{R}\: - \sin 2\vec{R^{'}}|^{2} - 4 \sin^{2}(T-T^{'})}\:, \label{primo caso} \eeq where $\sin 2\vec{R}$ denotes the $3$-vector parallel to $\vec{x}$ with length $|\sin 2R|$.\\ This formula holds when both arguments belong to the interior of the {\em same} B-R region. It is now evident that, keeping one point fixed inside a B-R zone (but not on the horizon), the above function is analytic in the second variable, also {\em on the horizon}. Hence Eq.~(\ref{primo caso}) can be analytically extended to arguments inside two {\em different} B-R zones, too. It is important to note that the resulting function is {\em invariant} under $T$-translations.
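The algebra behind Eq.~(\ref{primo caso}) — substituting Eqs.~(\ref{r}) and (\ref{t}) into the flat interval $|\vec{x}-\vec{x}^{'}|^{2}-(t-t^{'})^{2}$ — can be verified numerically. The Python sketch below is our own check (sample points are arbitrary but chosen inside the B-R region containing the $R$-axis, where $\cos 2T-\cos 2R>0$):

```python
import math

def flat_interval(R1, T1, R2, T2, theta):
    # |x-x'|^2 - (t-t')^2 computed through Eqs. (r) and (t);
    # theta is the angle between the unit vectors n and n'
    def rt(R, T):
        d = math.cos(2 * T) - math.cos(2 * R)
        return math.sin(2 * R) / d, math.sin(2 * T) / d
    r1, t1 = rt(R1, T1)
    r2, t2 = rt(R2, T2)
    return r1 * r1 + r2 * r2 - 2 * r1 * r2 * math.cos(theta) - (t1 - t2) ** 2

def global_interval(R1, T1, R2, T2, theta):
    # Denominator of Eq. (primo caso), divided by the product of the two
    # factors (cos 2T - cos 2R)(cos 2T' - cos 2R'); should equal the above
    num = ((math.cos(2 * R1) - math.cos(2 * R2)) ** 2
           + math.sin(2 * R1) ** 2 + math.sin(2 * R2) ** 2
           - 2 * math.sin(2 * R1) * math.sin(2 * R2) * math.cos(theta)
           - 4 * math.sin(T1 - T2) ** 2)
    d1 = math.cos(2 * T1) - math.cos(2 * R1)
    d2 = math.cos(2 * T2) - math.cos(2 * R2)
    return num / (d1 * d2)

args = (0.6, 0.1, 0.9, -0.2, 0.5)
assert abs(flat_interval(*args) - global_interval(*args)) < 1e-12
```

Dividing both sides by $rr^{'}=\sin 2R\sin 2R^{'}/(d_1 d_2)$ then reproduces Eq.~(\ref{primo caso}) exactly.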
We also observe that the validity of the Haag, Narnhofer and Stein scaling prescription \cite{HNS,FH,haag libro,moretti} as well as of the Hessling prescription \cite{hessling,moretti} is quite straightforward to prove employing the form in Eq.(\ref{primo caso}) of the Wightman functions; the same result was proved in \cite{moretti}, but in a different coordinate frame.\\ We shall return to the above function later in order to discuss its interpretation as a distribution after restoring the $\varepsilon$-prescription.\\ Let us now consider the case of finite $\beta$ and look for possible analytic continuations on the whole manifold. We have to translate the right hand side of Eq.~(\ref{wightman distribuzione}) into $R$ and $T$ in the case $\varepsilon=0$ (we shall drop the index $BR$ everywhere).\\ We shall analyze separately the different terms which appear therein. \\ First we translate the external factor \beq F(x,x^{'}):=\frac{\mid \vec{x}\mid \mid \vec{x^{'}}\mid}{|\vec{x}-\vec{x}^{'}|}\:, \eeq when both arguments remain in the same B-R region (for example, in the B-R region containing the $R$-axis). In terms of the global coordinates this reads \beq F(X,X^{'})=\left| \frac{\cos 2T -\: \cos 2R}{\sin 2R} \vec{n} - \frac{\cos 2T^{'} -\: \cos 2R^{'}}{\sin 2R^{'}}\vec{n^{'}} \right|^{-1} \:,\label{es} \eeq where we used the notation \beq \vec{n} := \frac{\vec{x}}{r} \:\:\:\mbox{and} \:\:\: \vec{n}^{'} := \frac{\vec{x}^{'}}{r^{'}}\:. \eeq Keeping one argument $X^{'}$ fixed away from the horizon ($T^{'}\neq \pm R^{'}$) and considering the function of the remaining argument $X$, one can demonstrate that there exists a region which crosses a part of the horizon where the absolute value in the expression (\ref{es}) does not vanish. Eq.~(\ref{es}) defines an analytic function in this region. Furthermore one finds the same function starting from the opposite side of the horizon. However, it is important to point out that time-translation invariance is lost.
It is also obvious that the {\em coth} in the remaining part of the Wightman function does not cancel these ``bad'' terms. Hence, for $T=1/\beta>0$, it is {\em not} possible to extend the thermal B-R states to {\em stationary} states (thermal or not) with respect to the global time $T$.\\ Still choosing both arguments in the same B-R region (for example in the B-R region containing the $R$-axis), we analyse the two different arguments of the {\em coth} appearing in $W_{\beta}^{+}$ in Eq.(\ref{wightman distribuzione}) \beq A^{\pm}(x,x^{'}):=[|\vec{x}-\vec{x}^{'}|\pm(t-t^{'})] \:. \eeq We shall prove that they produce a {\em discontinuity} in the Wightman functions if we suppose that Eq.(\ref{wightman distribuzione}), translated into Carter-like coordinates, holds also when the arguments stay on opposite sides of a horizon.\\ Passing to global null coordinates and rearranging them in a more useful form we find \beq A^{\pm}(X,X^{'})=\frac{1}{2} (\cot U \:+ \cot W )\times \nn \eeq \beq \times \sqrt{1+\left(\frac{\cot U^{'}\:+\cot W^{'}}{\cot U \:+\cot W} \right)^{2} -2 \:\left(\frac{\cot U^{'}\:+\cot W^{'}}{\cot U\:+\cot W}\right)\cos \theta } \:\nn \eeq \beq \pm \frac{1}{2} [ \cot U \:-\cot W - (\cot U^{'}\:- \cot W^{'})]\:, \label{mostro} \eeq where $\theta$ is the angle between $\vec{n}$ and $\vec{n}^{'}$ defined above; furthermore, $U^{'}$, $W^{'}$ and the associated angular coordinates are fixed while $U$, $W$ and the corresponding angular coordinates vary. In particular we want to reach the future horizon, $W \rightarrow 0^{+}$.
In this way we find \beq \coth A^{-} (W)\rightarrow 1 \:,\nn \eeq and \beq \coth A^{+} (W)\rightarrow \coth \left[\cot U\: - \frac{1+\cos \theta}{2}\:\cot U^{'} +\frac{1-\cos \theta}{2}\:\cot W^{'}\right] \:.\nn \eeq Supposing Eq.(\ref{mostro}) makes sense also when its arguments are on opposite sides of the future horizon, we calculate the limit as the argument $X$ approaches the future horizon from the region $T>R$ while the argument $X^{'}$ is fixed in the region $R>T$. In this way we obtain \beq \coth A^{-} (W)\rightarrow -1 \:,\nn \eeq and \beq \coth A^{+} (W)\rightarrow \coth \left[\cot U\: - \frac{1+\cos \theta}{2}\:\cot U^{'} +\frac{1-\cos \theta}{2}\:\cot W^{'}\right] \:.\nn \eeq Thus a {\em discontinuity} appears which propagates directly into the final form of the function $W^{+}_\beta(X,X^{'})$ because all the other functions used to build up $W^{+}_\beta$ are continuous on the horizon, and in particular $F(X,X^{'})$ does not vanish there. Hence, we cannot assume the general validity of Eq.~(\ref{mostro}) on the whole manifold {\em sic et simpliciter}. Another possibility is then to compute a Taylor series (in several variables) of the running argument on the horizon, using the limits of the derivatives towards the horizon, when both arguments stay inside the same region. If the convergence radius is not zero, this determines an extension of $W^{+}(X,X^{'})$ beyond the horizon.\\ If we examine the $W$-derivatives we obtain, for $W\rightarrow 0^{+}$, \beq \frac{\partial^{n}\:\:\:\:}{\partial W^{n}}\coth A^{-} (W)\rightarrow 0 \:,\nn \eeq and \beq \frac{\partial^{n}\:\:\:\:}{\partial W^{n}}\coth A^{+} (W)\rightarrow \mbox{finite expression} \nn \eeq It follows from the former limit that the Taylor series of the function $\coth A^{-}(W)$ ($X^{'}$ fixed) degenerates to a constant on the horizon and thus it is not possible to reconstruct the function on {\em both sides} of the horizon with the help of this Taylor series alone.
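The jump of $\coth A^{-}$ across the future horizon can be spot-checked numerically. The Python sketch below (our own illustration, with arbitrary sample values for the fixed primed point) evaluates Eq.~(\ref{mostro}) with the lower sign at small $|W|$ on either side of $W=0$:

```python
import math

def coth(x):
    return 1.0 / math.tanh(x)

def A_minus(U, W, Up, Wp, theta):
    # Eq. (mostro) with the lower sign; U, W are global null coordinates,
    # primed quantities are held fixed, theta is the angle between n and n'
    cot = lambda x: 1.0 / math.tan(x)
    S, K = cot(U) + cot(W), cot(Up) + cot(Wp)
    root = math.sqrt(1 + (K / S) ** 2 - 2 * (K / S) * math.cos(theta))
    return 0.5 * S * root - 0.5 * (cot(U) - cot(W) - (cot(Up) - cot(Wp)))

U, Up, Wp, theta = 0.8, 0.7, 0.6, 0.4   # fixed primed point away from the horizon
# Approaching the future horizon W -> 0 from opposite sides:
assert abs(coth(A_minus(U, 1e-6, Up, Wp, theta)) - 1.0) < 1e-9
assert abs(coth(A_minus(U, -1e-6, Up, Wp, theta)) + 1.0) < 1e-9
```

The two one-sided limits $\pm 1$ are the discontinuity discussed above; all $W$-derivatives of $\coth A^{-}$ die off as $A^{-}\rightarrow\pm\infty$, consistently with the Taylor-series obstruction.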
The function does not admit any analytic extension beyond the horizon. It is simple to conclude that the function $W^{+}_\beta(X,X^{'})$ ($\beta<+\infty$) also cannot be analytically extended beyond the horizon.\\ Here, it is important to recall that the B-R KMS states with $\beta>0$ (as well as the B-R vacuum state at $T=1/\beta=0$) satisfy the HNS prescription also on the horizon \cite{moretti}, but (differently from the vacuum state) they carry an infinite renormalized stress tensor on the horizon \cite{AHL} and they do {\em not} satisfy Hessling's prescription \cite{moretti,moretti2}.\\ \subsection{A Bisognano-Wichmann-like Theorem} Now we return to the case $T=1/\beta=0$. We shall prove a {\em Bisognano-Wichmann-like theorem} as our third result.\\ We interpret the Wightman functions defined in Eq.~(\ref{wightman zero distribuzione}) as distributions acting on four-smeared functions \cite{fulling,haag libro,kaywald,kay,higuchi} with support enclosed in the B-R region considered. It is possible to prove that these Wightman functions coincide with the Wightman functions of the vacuum state defined by quantizing with respect to the global coordinates $R$ and $T$ when we restrict the latter to the interior of a B-R sub-manifold.\\ By the {\em GNS theorem} or similar theorems \cite{haag libro,kaywald} we are able to extend this property from the Wightman functions to the respective quantum states. This fact corresponds to the Bisognano-Wichmann theorem in Minkowski space-time \cite{sewell}. In this sense the analog of the Unruh-Rindler temperature in the ``B-R wedges'' is exactly $T=1/\beta=0$ and thus the KMS condition does not appear, but the dependence on $t-t^{'}$ remains in the Wightman functions of the analog of the $\beta=2\pi$-Rindler-KMS state. The $\beta=2\pi$-Rindler-KMS state corresponds to the B-R vacuum now and the Minkowski vacuum is represented by the Carter-like global vacuum.
\\ We shall prove our theorem in the following way. First, we shall express the Wightman functions in terms of {\em Feynman propagators}; then we shall prove that the coincidence of the Feynman propagators inside the B-R submanifolds implies the coincidence of the Wightman functions there. Finally, we shall prove the coincidence of the Feynman propagators.\\ We can extract the Wightman functions from the Feynman propagator using well-known properties valid in {\em static, globally hyperbolic} space-times \cite{fulling,haag libro}. In the case of the B-R space-time, and also in the case of our complete Carter-like manifold, the following identities arise directly from the analogous identities which hold in the respective conformally related ultrastatic manifold, using Eq.s~(\ref{identita'}) and (\ref{esu}).\\ Let us start with the first part of the proof. In general coordinates \beq iG_F =\theta(\tau-\tau^{'})\:W^+ +\theta(\tau^{'}-\tau)\:W^- =\theta(\tau-\tau^{'})\:W^+ + \theta(\tau^{'}-\tau)\:W^{+\ast}\:, \nn \eeq where $G_F$ denotes the Feynman propagator. From the above identity one obtains \beq i\theta(\tau-\tau^{'})G_{F}=\theta(\tau-\tau^{'})W^{+}\:\:\:\: \mbox{and} \:\:\:\:i\theta(-\tau+\tau^{'})G_{F}= \theta(-\tau+\tau^{'})W^{+\ast}\:,\label{soprat} \eeq and thus: \beq W^{\pm}=i\theta(\pm(\tau-\tau^{'}))G_{F}-i \theta(\pm(\tau^{'}-\tau))G_{F}^{\ast}\:. \label{confronto} \eeq Suppose now that the coincidence of the Carter-like propagator and the Bertotti-Robinson propagator has been proved inside a B-R submanifold; then the coincidence of the Wightman functions follows as well. In fact, whenever the arguments of the Wightman functions are {\em space-like} related, the field operators commute and thus $W^+\equiv W^-\equiv iG_F$ from the previous formulas. Then, the coincidence of the Wightman functions follows from the coincidence of the Feynman propagators.
On the other hand, if the arguments of the Wightman functions are {\em time-like} or {\em light-like} related, the functions $\theta(T-T^{'})$ and $\theta(t-t^{'})$ which appear in Eq.(\ref{confronto}) as well as in the Feynman propagators trivially coincide and thus the Wightman functions coincide, too.\\ We have to prove the coincidence of the Feynman propagators in the remainder of this section.\\ The Feynman propagator of a massless field in the Minkowski space-time is well known (see for example \cite{itzykson}). Taking into account Eq.~(\ref{identita'}) we get \beq G_{F}(x,x^{'})_{BR}= \frac{-i}{4\pi^{2}}\frac{rr^{'}}{|\vec{x}-\vec{x}^{'}|^{2} -(t-t^{'})^{2}} - \frac{ rr^{'}}{4\pi} \delta(|\vec{x}-\vec{x}^{'}|^{2}-(t-t^{'})^{2}) \:. \label{feynmanBR} \eeq We now introduce the Feynman propagator in the Carter-like manifold. This can be calculated from the Feynman propagator in Einstein's static universe with spatial radius $\rho=1$ (which is our case) for a conformally coupled scalar field. We report this in {\bf Appendix C}. \beq G_{F}(T-T^{'}, \vec{R}, \vec{R}^{'})_{CEU}= \nn \eeq \beq \frac{-i\:\sin 2R \: \sin2R^{'}}{4\pi^{2}} \frac{1}{2-2\cos\sigma -4 \sin^{2}(T-T^{'})} \nn \eeq \beq - \frac{\sin 2R \sin 2R^{'} }{4\pi} \sum_{n\in \Z} \frac{\sigma + 2\pi n}{\sin \sigma} \delta( (2T-2T^{'})^{2}-(\sigma+2n \pi)^{2} ) \:,\label{feynmanCEU} \eeq where $\sigma$ is the {\em minimal} geodesic distance between the points determined by $\vec R$ and $\vec{R}'$ on the $3-$sphere $S^3$. Using our coordinates $\vec{X}\equiv (R, \theta, \varphi)$ on the above $3$-sphere, it is possible to prove that $\sigma$ satisfies \beq 2-2\cos\sigma(\vec{X},\vec{X}^{'})=(\cos 2R\: -\cos 2R^{'})^{2} + |\sin 2\vec{R} \: -\sin 2\vec{R}^{'}|^{2}\:.\label{sigma} \eeq Now we prove that, {\em in the interior of a B-R submanifold}, the Feynman propagator evaluated above coincides with the Feynman propagator in Eq.~(\ref{feynmanBR}).
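Before doing so, note that Eq.~(\ref{sigma}) is just the chordal-distance formula for the unit $S^{3}$ embedded in $\R^{4}$ as $(\cos 2R,\, \sin 2R\:\vec{n})$, with $\cos\sigma$ given by the $\R^{4}$ inner product. The Python sketch below (our own check, with arbitrary sample points) verifies it numerically:

```python
import math

def chord_sq(R1, n1, R2, n2):
    # Squared chordal distance in R^4 between (cos 2R, sin 2R * n) points:
    # the right-hand side of Eq. (sigma)
    d0 = math.cos(2 * R1) - math.cos(2 * R2)
    dv = [math.sin(2 * R1) * a - math.sin(2 * R2) * b for a, b in zip(n1, n2)]
    return d0 * d0 + sum(x * x for x in dv)

def two_minus_2cos_sigma(R1, n1, R2, n2):
    # sigma = geodesic distance on the unit S^3; cos(sigma) is the R^4
    # inner product of the two embedded points
    cos_sigma = (math.cos(2 * R1) * math.cos(2 * R2)
                 + math.sin(2 * R1) * math.sin(2 * R2)
                 * sum(a * b for a, b in zip(n1, n2)))
    return 2 - 2 * cos_sigma

n1 = (0.0, 0.0, 1.0)
th = 0.7
n2 = (math.sin(th), 0.0, math.cos(th))
assert abs(chord_sq(0.6, n1, 1.0, n2)
           - two_minus_2cos_sigma(0.6, n1, 1.0, n2)) < 1e-12
```

This is the law of cosines on the embedded sphere, so the agreement is exact up to rounding.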
In order to prove this coincidence in a B-R zone, it is sufficient to demonstrate the following identity \beq \sin2R \:\sin 2R^{'} \sum_{n\in \Z}\frac{1}{4\pi}\frac{\sigma+2\pi n}{\sin\sigma} \delta((2T-2T^{'})^{2}-(\sigma +2n\pi)^{2} ) = \nn \eeq \beq = \frac{ rr^{'}}{4\pi} \delta(|\vec{x}- \vec{x}^{'}|^{2}-(t-t^{'})^{2}) \:.\label{ultimissima} \eeq In fact, the first term on the right hand side of Eq.~(\ref{feynmanCEU}) trivially coincides with the first term on the right hand side of Eq.~(\ref{feynmanBR}) if one uses Eq.~(\ref{sigma}); this is nothing but Eq.(\ref{primo caso}).\\ Let us prove the identity (\ref{ultimissima}), recalling that both arguments belong to the interior of a B-R zone and noting that the minimal geodesic distance $\sigma$ is contained in the interval $[0, \pi]$ and thus $\sin \sigma = |\sin \sigma |$. \beq \sin2R \:\sin 2R^{'} \sum_{n\in \Z}\frac{1}{4\pi}\frac{\sigma+2\pi n}{\sin\sigma} \delta((2T-2T^{'})^{2}-(\sigma +2n\pi)^{2} ) = \nn \eeq \beq \sum_{n\in \Z} \frac{ \sin 2R \:\sin 2R^{'} (\sigma+ 2\pi n)}{4\pi \sin (\sigma+2\pi n)} \left[ \frac{\delta(\sigma+2\pi n-(2T-2T^{'}))}{2(\sigma+2\pi n)}+ \frac{\delta(\sigma+2\pi n +(2T-2T^{'}))}{2(\sigma+2\pi n)} \right]=\nn \eeq \beq =\frac{\sin 2R\: \sin 2R^{'}}{4\pi} \delta(-2 \cos\sigma + 2\cos (2T-2T^{'}) ) =\nn \eeq \beq =\frac{ \sin 2R\: \sin 2R^{'}}{4\pi} \delta \left( \frac{-2 \cos\sigma + 2\cos (2T-2T^{'})}{\sin 2R \: \sin 2R^{'}}\sin 2R \: \sin 2R^{'} \right)= \nn \eeq \beq =\frac{ \sin 2R\: \sin 2R^{'}}{4\pi} \delta\left( \frac{|\vec{x}-\vec{x}^{'}|^{2}-(t-t^{'})^{2}}{r r^{'}} \sin 2R \: \sin 2R^{'} \right)\:.
\nn \eeq We used Eq.(\ref{primo caso}) (holding inside any B-R region) once again in the argument of the delta function.\\ Considering $t$ as the integration variable, keeping $r$, $t^{'}$, $r^{'}$ fixed and using standard manipulations of the delta function, we find that the above term can also be written as \beq \frac{1}{4\pi} rr^{'} \delta(|\vec{x}-\vec{x}^{'}|^{2}-(t-t^{'})^{2})\:.\nn \eeq We have just obtained the second term on the right hand side of Eq.~(\ref{feynmanBR}), i.e., we have proved the coincidence of $G_{F\:BR}$ and $G_{F\:CEU}$ in the interior of a B-R zone. \\ Two technical notes to conclude.\\ First, we write the Wightman functions of the Carter-like manifold in a more concise form. Using the identity \beq \frac{1}{x\pm i\varepsilon}= Pv\: \frac{1}{x} \mp i \pi \delta(x) \nn \eeq where $Pv$ denotes the {\em principal value}, and taking into account that the first term on the right hand side of Eq.~(\ref{feynmanCEU}) is to be understood in the sense of the principal value \cite{fulling}, one obtains from Eq.~(\ref{confronto}) \beq W^{\pm}(T-T^{'}, \vec{R}, \vec{R}^{'})_{CEU}= \nn \eeq \beq \frac{\sin 2R \: \sin2R^{'}}{4\pi^{2}} \frac{1}{2-2\cos\sigma -4 \sin^{2}(T-T^{'}\mp i \varepsilon)} \:. \label{nota} \eeq Finally, we can also observe that the coincidence of the Wightman functions (in a B-R zone) in the case $\varepsilon=0$ is equivalent to the coincidence of the {\em Hadamard functions} therein. We can calculate the Hadamard functions as \beq G^{(1)} := W^{+}+W^{-}\:.\nn \eeq In the case of the B-R metric, the Hadamard function reads (to be understood in the sense of the principal value) \beq G^{(1)}(x,x^{'})_{BR}= \frac{1}{2\pi^{2}} \frac{rr^{'}}{|\vec{x}-\vec{x}^{'}|^{2}-(t-t^{'})^{2}} \:, \label{hadamardBR} \eeq while, in the case of the global metric, the Hadamard function reads \beq G^{(1)}(X,X^{'}) = \nn \eeq \beq =\frac{\sin 2R \: \sin2R^{'}}{2\pi^{2}} \frac{1}{2-2\cos\sigma -4 \sin^{2}(T-T^{'})} \:.
\label{hadamardCEU} \eeq These functions coincide, as proved above (Eq.(\ref{primo caso})). \section{Conclusions and Outlooks on Exact Extremal R-N Black Holes} The most important result of this paper is the proof of the coincidence of the global Carter-like vacuum and the Bertotti-Robinson vacuum. Notice that the global vacuum, which is represented by a {\em pure} state also inside a B-R submanifold, has {\em vanishing entropy} there, despite the presence of horizons. This is probably due to the fact that the horizons do not separate different spatial regions, differently from the Minkowski-Rindler and Kruskal-Schwarzschild cases. In these latter cases the whole spatial Cauchy surface at $t=0$ (where $t$ is the Minkowski or Kruskal time) is separated into two Cauchy surfaces within two (Rindler or Schwarzschild) wedges. Let us consider the Minkowskian case. Formally employing a von Neumann approach, this separation of the Minkowski Cauchy surface involves a ``separation'' of the field Hilbert space, which turns out to be a tensor product of two Hilbert spaces related to the two Rindler wedges. Then, a pure state (with vanishing entropy) of the whole Hilbert space appears as a mixed state (with non-vanishing entropy) in each factor Hilbert space. However, in our case the situation is more complicated due to the fact that the $T=0$ surface is not a Cauchy surface. \\ Another point is the following. Supposing that physically sensible KMS quantum states (including the case $T=1/\beta=0$) are analytically extendible to the whole manifold, we have to accept {\em only} $T=0$ as a possible temperature, without the use of further physical requirements. This fact arises regardless of all the topological considerations on {\em conical singularities} in the Euclidean formulation.
In fact, our manifold does not produce conical singularities for any choice of the Euclidean time period $\beta$ and thus, in the framework of the Euclidean formalism, one should accept every value of the temperature as possible. \\ We expect that it should be possible to develop further the analogy between Minkowski space-time and our model in order to prove the above coincidence of vacuum states also in the case of the extremal Reissner-Nordstr\"{o}m space-time and the Carter space-time. In the case of the extremal R-N black hole, the hardest problem is to deal with the {\em time-like} singularity in the region beyond the horizon. It is not possible to develop a standard quantum field theory there. However it seems possible to employ a more general theory based on Kay's F-locality \cite{kay,higuchi} (or something similar) inside the manifold resulting from Carter's manifold by excluding all the points belonging to the time-like singularity. In this way, it should be possible to define a global {\em advanced-minus-retarded} fundamental solution which agrees with the one defined inside each B-R manifold.\footnote{Recall that this ``Green function'' does not depend on the considered quantum state.} Using Carter's coordinates, the idea is to analytically extend beyond the horizons the (thermal) Hadamard function built up inside a B-R region, defining a global Wightman function and thus a global quantum state.\\ We expect that {\em only} the B-R vacuum defines such a global extension. Furthermore, if this is proved to be correct, following the results in \cite{cvz}, no quantum one-loop corrections (generally singular) arising from the (massless scalar) fields propagating outside the extremal R-N black hole need to be added to the gravitational entropy.
\ack{We would like to thank Luciano Vanzo for his bright lectures on several topics of this paper as well as Marco Toller, Sergio Zerbini and Giuseppe Nardelli for many helpful hints.\\ Stefan Steidl would like to thank the Dipartimento di Fisica dell'Universit{\`a} di Trento for its kind hospitality during his stay in Trento and especially the persons mentioned above, including also Guido Cognola, for the cordial atmosphere they provided. } \section*{Appendix A. Approximations near the Horizons in Carter's Map} Let us consider $ds^{2}$ in Eq.(\ref{RNC}) as the quadratic form \beq ds_{x}^{2}(\vec{X}):= g_{\mu\nu}(x)\:dX^{\mu}\:dX^{\nu}\:, \nn \eeq where $dX^{\mu}\equiv (dT,dR,d\theta,d\varphi)$ are the components of the $4$-vector $\vec{X}$ at $x\equiv (T,R,\theta,\varphi)$.\\ In order to deal with the approximated metric near different points on the horizon, we shall consider the following expansion as $\bar{r} \rightarrow 1$ (i.e. near the horizons), which arises from the definition of $r^{\ast}$ (Eq.(\ref{ar})) \beq \left( 1-\frac{1}{\bar{r}}\right)^{2} = \frac{1}{r^{\ast 2}} (1+ O ((\bar{r}-1) \ln |\bar{r}-1|)) \:, \label{sviluppo base} \eeq and also, trivially, \beq \bar{r}= 1+ (\bar{r}-1) = 1+ O(\bar{r}-1)\:. \label{banale} \eeq Let us define the approximated form of the above metric \beq ds_{0x}^{2}(\vec{X}):= \frac{1}{r^{\ast 2}} \csc^{2}(R+T)\csc^{2}(R-T)\: (-dT^{2}+ dR^{2}) + d\Omega_{2}(\vec{X})\:, \label{dszero0} \eeq where we also set $d\Omega_{2}(\vec{X}) := d\theta^{2}+\sin^{2}\theta\: d\varphi^{2}$.\\ Thus, it holds by definition \beq ds_{x}^{2}(\vec{X}) = ds_{0x}^{2}(\vec{X}) + (ds_{0x}^{2}(\vec{X}) - d\Omega_{2}(\vec{X})) O_{\vec{X}} ((\bar{r}-1) \ln |\bar{r}-1|) + O_{\vec{X}}(\bar{r}-1) d\Omega_{2}(\vec{X}) \:. \nn \eeq Taking the leading order only as $\bar{r} \rightarrow 1$ we have \beq ds_{x}^{2}(\vec{X}) \sim ds_{0x}^{2}(\vec{X})\:.
\nn \eeq The metric $ds^2_0$ can be written in a more useful form by employing the formula \beq \frac{1}{r^{\ast 2}} \csc^{2}(R+T)\csc^{2}(R-T) = \frac{4}{\sin^{2} 2R} \label{2R}\:; \eeq in this way we find the static metric of Eq.(\ref{dso}) \beq ds_{0x}^{2}(\vec{X}):= \frac{4}{\sin^{2} 2R}\: (-dT^{2}+ dR^{2}) + d\Omega_{2} \label{dszero}\:. \eeq The vector field $\partial_{T}$ defines an {\em approximated} Killing vector inside the regions where $ds^{2}_{0}$ approximates $ds^{2}$.\\ {From} now on, we shall drop the index $x$ and the explicit dependence on $\vec{X}$ for the sake of simplicity.\\ Let us specify the regions, in Carter's picture, where we may employ the previous approximated form of the metric, using Carter's coordinates as well as R-N coordinates.\\ We start by considering the part of the future horizon between the origin $O$ and its opposite point $O^{'}$ (see {\bf figure 2}). We define the coordinates $W$ and $U$ in the two regions \begin{eqnarray} R-T &=:& W \sim 0 \:\:\:\:\: R>T\:\:\: \mbox{or} \:\:\: R<T \label{prima}\\ R+T &=:& U \:\:\: \mbox{finite}\:. \label{seconda} \end{eqnarray} By using Eq.~(\ref{r estesa}) one finds that, fixing $\varepsilon>0$, $\bar{r}\rightarrow 1^{\pm}$ {\em uniformly} in $U \in [ \varepsilon, \pi/\sqrt{2}\:-\varepsilon]$ as $W \rightarrow 0^{\pm} $. We conclude that it is possible to use the form of the metric of Eqs.~(\ref{dszero0}) and (\ref{dszero}).\\ Let us consider the form of the metric in this region employing R-N coordinates. We shall find the {\em Bertotti-Robinson} metric. We start by noting that, in terms of Carter's null coordinates $U$ and $W$, $ds^{2}$ reads \beq ds^{2} \sim \frac{1}{r^{\ast 2} \sin^{2}W \:\sin^{2} U} dU dW + d\Omega_{2} \label{settima}\:.
\eeq Employing the following identities \begin{eqnarray} du &=& \frac{1}{\sin^{2} U} dU \nn \:,\\ dw &=& \frac{1}{\sin^{2} W} dW \nn \end{eqnarray} and \beq du \: dw = \frac{1}{\sin^{2} W \: \sin^{2} U } dU dW \:, \nn \eeq and translating into R-N null coordinates, we finally find \beq ds^{2} \sim \frac{1}{r^{\ast 2}} du\: dw + d\Omega_{2} \label{settimaprimo}\:. \eeq Thus, in the coordinates $r^{\ast}$, $t$, the metric of Eq.(\ref{settima}) reads \cite{referee,AHL,moretti} \beq ds^{2} \sim \frac{-dt^{2} + dr^{\ast 2}}{r^{\ast 2}} + d\Omega_{2} = \frac{-dt^{2} + dr^{2} + r^{2} d\Omega_{2}}{r^{2}} \label{ottava} \:, \eeq where we set $r:= -r^{\ast}$ if $R>T$, or $r:=r^{\ast}$ if $R<T$.\\ This metric is the well-known {\em Bertotti-Robinson metric} \cite{BR}.\\ Now, let us consider the regions near the extremal points of the horizon. We start by considering the ``origin'' $O \equiv (R=0, T=0)$\footnote{Actually, $O$ is a {\em 2-sphere}.} \begin{eqnarray} T&\sim& 0 \:\:, \:\: T > 0 \:\:,\nonumber\\ R&\sim& 0 \:\:, \:\: R > 0 \:\:. \label{regione} \end{eqnarray} We observe that $r^{\ast}$ and $\bar{r}$ are defined in terms of $R$ and $T$; $\bar{r}$ tends {\em uniformly} to $1$ (in the sense of $\R^{2}$) when $(R,T) \rightarrow (0,0) $ in any wedge of the form ($\varepsilon>0$) \beq R > (1-\varepsilon)\mid T \mid\:, \:\:\:\: T>0\:, \label{wedge} \eeq or, inside the region beyond the future horizon, \beq \frac{R}{\varepsilon}> T > (1+\varepsilon) R \:, \:\:\:\: R>0 \:\:\:\: T>0 \:. \label{wedge2} \eeq Thus we can use Eq.~(\ref{dszero}) for the approximated metric.\\ We can notice another interesting fact. By means of Eq.~(\ref{2R}), we also obtain in these regions \beq Q = \frac{1}{R^{2}}+ O(R,T) \nonumber \:, \eeq where $O(R,T)$ is an infinitesimal function as $(R,T) \rightarrow (0,0)$.
Thus, the metric inside the considered wedges, at leading order as $(R,T) \rightarrow (0,0)$, reads \beq ds^{2} \sim \frac{1}{R^{2}} \left( -d T^{2}+d R^{2}\right) + d\Omega_{2} \nn \:, \eeq or \beq ds^{2} \sim \frac{1}{R^{2}} \left( -d T^{2}+d R^{2} + R^{2}\:d\Omega_{2} \right) \label{brc} \:. \eeq We have thus found the Bertotti-Robinson metric also in Carter's coordinates.\\ It can easily be checked by hand that the above approximation also holds in the R-N region, dropping the constraint $T \neq 0$. Furthermore, due to the symmetry of the manifold, similar calculations can be performed for $T<0$. Thus the B-R metric, in the limit of small $R$ and $T$, holds in all the wedges of the form \beq R > (1-\varepsilon)\mid T \mid\:, \:\:\:\: R>0 \label{wedge general} \eeq and \beq \frac{R}{\varepsilon}> \mid T \mid > (1+\varepsilon) R \:, \:\:\:\: R>0 \:. \label{wedge general2} \eeq Let us consider the form of the metric in R-N coordinates in the region defined by Eq.~(\ref{regione}).\\ Keeping the divergent leading order in $R$ and $T$, Eq.(\ref{r estesa}) reads \beq r^{\ast} = -\frac{1}{2(T+R)}+\frac{1}{2(T-R)} + O(R,T) \sim \frac{R}{T^{2} -R^{2}}\:\:\:\:\:\: ( R>0 )\label{r approx}\:; \eeq for the coordinate $t$ we obtain similarly \beq t = -\frac{1}{2(T+R)}-\frac{1}{2(T-R)} + O(R,T) \sim -\frac{T}{T^{2} -R^{2}} \:\:\:\:\:\: ( T >0 ) \label{t approx}\:. \eeq It follows from these that \beq R^{2}-T^{2} \sim (r^{\ast 2}-t^{2})^{-1}\:. \eeq Using the latter three equations, we can write $R$ and $T$ in Eq.(\ref{brc}) in terms of $t$ and $r:= -r^{\ast}$ in the R-N region, or $r:= r^{\ast}$ beyond the horizon. Thus, we recover the approximated form of the metric also in R-N coordinates in the respective regions. In fact, we get the (leading-order) inverse relations of Eqs.~(\ref{r approx}) and (\ref{t approx}) \begin{eqnarray} R&\sim&\frac{r^{\ast}}{t^{2}-r^{\ast 2}}\\ T&\sim&-\frac{t}{t^{2}-r^{\ast 2}}\:.
\end{eqnarray} Substituting these results in Eq.~(\ref{brc}) we find the Bertotti-Robinson metric once again \beq ds^{2} \sim \frac{-dt^{2} + dr^{2} + r^{2} d\Omega_{2}}{r^{2}}\:. \eeq Let us examine the metric near the ``point'' at infinity (actually a {\em 2-sphere}):\\ $O^{'}\equiv (T=\frac{\pi}{2}, R=\frac{\pi}{2})$.\\ We shall just sketch the approximation, since it is very similar to the previous one.\\ Starting from the ansatz \begin{eqnarray} T &=&\frac{\pi}{2}-\hat{T}\:, \\ R &=&\frac{\pi}{2}\pm \hat{R}\:, \end{eqnarray} which implies $dT^{2}=d\hat{T}^{2}$ and $dR^{2}=d\hat{R}^{2}$, and considering the limit $ (\hat{R},\hat{T})\rightarrow (0,0)$ as in the previous case, we find \beq Q\sim \frac{1}{\hat{R}^{2}}\:, \eeq whatever the sign in front of $\hat{R}$ may be.\\ For $r^{\ast}$ we obtain the formula \beq r^{\ast} \sim \frac{ \pm \hat{R}}{\left( \hat{T}^{2} - \hat{R}^{2}\right)}\:. \eeq We see that, in order to recover the Bertotti-Robinson metric, the only possible choice for the sign in front of $\hat{R}$ is $-$. In fact, this guarantees that $r^{\ast}$ tends to $-\infty$ (i.e. $\bar{r}\rightarrow 1^{+}$) as $ (\hat{R},\hat{T})\rightarrow (0,0)$ and $\hat{T}>\hat{R}$ (coming from the interior of the R-N region). On the other hand, this also guarantees that $r^{\ast}\rightarrow +\infty$ (i.e.
$\bar{r}\rightarrow 1^{-}$) when $\hat{T}<\hat{R}$, i.e. when the horizon is approached from outside of the R-N region.\\ Therefore, changing coordinates $T\rightarrow \hat{T}+\frac{\pi}{2}$ and $R\rightarrow \hat{R}+\frac{\pi}{2}$, we find the Bertotti-Robinson metric in terms of $\hat{T}\ll 1$ and $\hat{R}\ll 1$ within wedges of $\hat{T}$ and $\hat{R}$ similar to those previously found.\\ It can easily be proved that, translating the obtained metric into R-N coordinates ${t,r,\theta,\varphi}$ and using the usual approximations, the metric turns out to be the Bertotti-Robinson metric, as in the previous case.\\ In the first case we used the null coordinates $U$, $W$ and $u$, $w$, respectively, instead of the space-like and time-like ones used near $O$ and $O^{'}$. However, we can point out that the first case formally includes the remaining ones if we take the limit $U \rightarrow 0^{+}$ or $ U \rightarrow \pi^{-}$ in Eq.~(\ref{settima}) and translate the result into the variables $R$ and $T$.\\ Furthermore, we studied the manifold near a particular future horizon but, due to the evident symmetries of Carter's manifold, we may repeat all the previous calculations for all the event horizons (past or future) therein.\\ Finally, let us consider the form in Eq.~(1) of the metric, i.e., the metric directly expressed in R-N coordinates in the R-N region and in the region containing the irremovable singularity.\\ It is easy to prove that, keeping $t\in \R$ fixed, the metric reduces to the B-R metric as $\bar{r} \rightarrow 1^{\pm}$ (or $r \rightarrow +\infty$).\\ Looking at {\bf figure~2}, one sees that this way of approaching the horizon corresponds to falling into the limit point $O$ for $\bar{r}>1$, or into the limit point $O^{'}$ for $\bar{r}<1$. Still looking at {\bf figure~2}, we see that, in order to reach the remaining points of the horizon, the variable $t$ must also be increased or decreased, respectively, towards $\pm\infty$.
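The leading-order coordinate relations used in this appendix can be verified symbolically. The sketch below is our own consistency check: the closed forms $u=-\cot U$ and $w=-\cot W$ (up to additive constants) are our reading of the differentials $du=dU/\sin^{2}U$, $dw=dW/\sin^{2}W$, not a formula stated in the text.

```python
import sympy as sp

# Null-coordinate differentials: with u = -cot U, w = -cot W (up to constants),
# du = dU/sin^2 U and dw = dW/sin^2 W, as used around Eq. (settimaprimo).
U, W = sp.symbols('U W', positive=True)
assert sp.simplify(sp.diff(-sp.cot(U), U) - 1/sp.sin(U)**2) == 0
assert sp.simplify(sp.diff(-sp.cot(W), W) - 1/sp.sin(W)**2) == 0

# Leading-order relations near the origin O, Eqs. (r approx) and (t approx):
R, T = sp.symbols('R T', positive=True)
rstar = R / (T**2 - R**2)
t = -T / (T**2 - R**2)

# Check R^2 - T^2 = (rstar^2 - t^2)^(-1)
assert sp.simplify((R**2 - T**2) - 1/(rstar**2 - t**2)) == 0

# Check the stated inverse relations R ~ rstar/(t^2 - rstar^2), T ~ -t/(t^2 - rstar^2)
assert sp.simplify(rstar/(t**2 - rstar**2) - R) == 0
assert sp.simplify(-t/(t**2 - rstar**2) - T) == 0
```

All assertions reduce to exact rational or trigonometric identities, confirming that the inverse relations are the exact inverses of the leading-order expressions.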
Using the R-N picture, these paths approach the vertical line $\bar{r}=1$ but ``cross'' it only at infinity (in time). \section*{Appendix B. Approximated Wightman Functions } In {\bf section 2} we assumed that the Bertotti-Robinson Wightman functions approximate the Reissner-Nordstr\"om Wightman functions near the horizons, because the Bertotti-Robinson metric approximates the Reissner-Nordstr\"om metric there. However, the normalization of the normal modes usually employed in defining the Wightman functions depends on an integration over the whole spatial manifold, and not only on the region near the horizon. Thus, our hypothesis requires further explanation. \\ We shall prove that it is possible to overcome this problem, at least formally, when dealing with {\em static} metrics and KMS states. In fact, in this case one obtains from the KMS condition \cite{HNS} ($x \equiv (\tau, \vec{x})$) \beq < \phi(x_{1}) \phi(x_{2}) >_{\beta}\: = \frac{i}{2\pi} \int_{-\infty}^{+\infty} G(\tau_{1}+\tau, \vec{x}_{1} \mid \tau_{2}, \vec{x}_{2}) \frac{e^{\beta \omega}}{e^{\beta \omega}-1} e^{i\omega \tau}\: d\tau d\omega \label{unohaag} \:, \eeq where the distribution $G$ is the {\em commutator} of the fields.
This distribution is {\em uniquely} determined \cite{HNS} by the fact that it is a solution of the Klein-Gordon equation in both arguments, vanishes for equal times $\tau_{1} = \tau_{2}$ and is normalized by the ``local'' condition \beq g^{\tau\tau}\sqrt{-g}\frac{\partial}{\partial \tau_{1}} G(x_{1}, x_{2}) \mid_{\tau_{1}=\tau_{2}} = \delta^{3}(\vec{x}_{1}, \vec{x}_{2})\:. \label{duehaag} \eeq The above $3$-delta function is understood as \begin{eqnarray} \delta^{3}(\vec{x}_{1}, \vec{x}_{2})&=&0 \:\:\:\: \mbox{for } \:\:\:\: \vec{x}_{1} \neq \vec{x}_{2}\:, \nn\\ \int \delta^{3}(\vec{x}_{1}, \vec{x}_{2})\:\: d\vec{x}_{2} &=& 1\:. \nn \end{eqnarray} By the previous, spatially ``local'' formulas, we expect that the function $G$ calculated with the ``true'' static metric approaches the function $G$ calculated using the approximated static form of the metric inside a certain static region $\delta\Sigma \times \R$ (where $ \tau \in \R$) as $\delta\Sigma$ shrinks around a $3$-point. Considering $(\delta\Sigma,\tau_{0})$ as a {\em Cauchy surface}, the above result should hold at least inside the ``diamond-shaped'' four-region causally determined by $(\delta\Sigma,\tau_{0})$. But, studying the form of the light cones near event horizons of the form $|\vec{x}|=r_{0}$ and $\tau \in \R$, it can easily be shown that this four-region tends to contain the whole $\tau$-axis as $\delta\Sigma$ approaches the event horizons.\\ In the same way, using Eq.~(\ref{unohaag}), we may expect such a property for thermal Wightman functions, too. The case of zero temperature, regarded as the limit $\beta \rightarrow +\infty$, is included.\\ In the case of an extremal R-N black hole, the Bertotti-Robinson metric approximates the R-N metric along any horizon, for $\bar{r}>1$ and $\bar{r}<1$, at any time $t\in \R$. This fact simply follows from the discussion in {\bf Section 1}. \section*{Appendix C.
Feynman Propagator in the Carter-like Manifold} Let us start from the Feynman propagator of a scalar field propagating in Einstein's static universe. We shall prove Eq.(\ref{feynmanCEU}) for the Feynman propagator on the conformally related Carter-like manifold using Eq.(\ref{esu}).\\ Camporesi \cite{camporesi}, employing heat kernel methods\footnote{Recall that our coordinates are not those usually employed to describe $S^{3}$ and Einstein's static universe, as they contain a factor of $2$.}, obtained the following Feynman propagator for a {\em conformally coupled} scalar field \beq G_{F}(T-T',\sigma)_{m}= \nn \eeq \beq \frac{im}{8\pi}\sum_{n\in \Z}\frac{\sigma + 2\pi n}{\sin \sigma} \frac{H^{(2)}_{1}(m[(2T-2T^{'})^{2}-(\sigma + 2n \pi)^{2}-i\varepsilon]^{1/2})} {[i\varepsilon-(2T-2T^{'})^{2}+(\sigma + 2n \pi)^{2}]^{1/2}}\:, \label{feyman} \eeq where $m$ is the mass of the field, $\sigma$ is the {\em minimal} geodesic distance between two points on $S^{3}$ and $H^{(2)}_{1}$ is a Hankel function of the second kind of order $1$. Furthermore, using our coordinates $\vec{X}\equiv (R, \theta, \varphi)$ on the above $3$-sphere, it is possible to prove that $\sigma$ satisfies \beq 2-2\cos\sigma(\vec{X},\vec{X}^{'})=(\cos 2R\: -\cos 2R^{'})^{2} + |\sin 2R\: \vec{n} \: -\sin 2R^{'}\: \vec{n}^{'}|^{2}\:,\nn \eeq where $\vec{n}$ and $\vec{n}^{'}$ are the unit vectors on $S^{2}$ determined by $(\theta,\varphi)$ and $(\theta^{'},\varphi^{'})$ respectively. Let us consider the massless case as the limit $m \rightarrow 0^{+}$ in Eq.~(\ref{feyman}).
We find \beq G_{F}(T-T',\sigma)= \nn \eeq \beq -\frac{1}{4\pi} \sum_{n\in \Z} \frac{\sigma + 2\pi n}{\sin \sigma} \left\{ \frac{i}{ \pi [ (2T-2T^{'})^{2}-(\sigma + 2n\pi)^{2} ] } - \delta( (2T-2T^{'})^{2}-(\sigma+2n \pi)^{2} ) \right\}\:.\nn \eeq We can explicitly carry out the summation over the terms which do not contain delta functions obtaining \beq -\frac{1}{4\pi} \sum_{n\in \Z} \frac{\sigma + 2\pi n}{\sin \sigma} \frac{i}{\pi [ (2T-2T^{'})^{2}-(\sigma + 2n\pi)^{2} ]} =\nn \eeq \beq =\frac{-1}{4\pi^{2}\sin\sigma}\:\:\frac{i}{4} \left\{ \cot\left[ \frac{2T-2T^{'}-\sigma }{2} \right] - \cot\left[ \frac{2T-2T^{'}+\sigma }{2} \right] \right\} = \nn \eeq \beq = \frac{-i}{16 \pi^{2}} \csc\frac{2T-2T^{'}-\sigma}{2}\: \csc\frac{2T-2T^{'}+\sigma}{2} = \frac{i}{4\pi^{2}} \frac{1}{2\cos (2T-2T^{'})-2 \cos \sigma} = \nn \eeq \beq = \frac{i}{4\pi^{2}} \frac{1}{2-2\cos\sigma -4 \sin^{2}(T-T^{'})}\:.\nn \eeq Finally, using Eq.(\ref{esu}), we may prove Eq.(\ref{feynmanCEU}) \beq G_{F}(T-T^{'}, \vec{R}, \vec{R}^{'})_{CEU}= \nn \eeq \beq \frac{i\:\sin 2R \: \sin2R^{'}}{4\pi^{2}} \frac{1}{2-2\cos\sigma -4 \sin^{2}(T-T^{'})} + \nn \eeq \beq + \frac{\sin 2R \sin 2R^{'} }{4\pi} \sum_{n\in \Z} \frac{\sigma + 2\pi n}{\sin \sigma} \delta( (2T-2T^{'})^{2}-(\sigma+2n \pi)^{2} ) \:. \nn \eeq \newpage
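The resummation carried out in this appendix can be checked numerically. The sketch below is our own verification: the sample values of $\sigma$ and $T-T^{'}$ are arbitrary illustrative choices, and the symmetric partial sum is compared with the cotangent closed form and with its subsequent cosecant-to-cosine rewriting.

```python
import math

sigma, dT = 0.7, 0.3        # arbitrary sample values; a = 2(T - T')
a = 2.0 * dT

# Symmetric partial sum of  sum_n (sigma + 2 pi n) / (a^2 - (sigma + 2 pi n)^2)
N = 200000
s = sum((sigma + 2*math.pi*n) / (a*a - (sigma + 2*math.pi*n)**2)
        for n in range(-N, N + 1))

# Closed form read off from the text: (1/4)[cot((a-sigma)/2) - cot((a+sigma)/2)]
closed = 0.25 * (1/math.tan((a - sigma)/2) - 1/math.tan((a + sigma)/2))
assert abs(s - closed) < 1e-4

# ...which equals sin(sigma)/(2 cos(sigma) - 2 cos(a)), the cosecant-product step
assert abs(closed - math.sin(sigma) / (2*math.cos(sigma) - 2*math.cos(a))) < 1e-12
```

The symmetric partial sums converge like $1/N$, so the first tolerance is comfortably met; the second identity is exact and holds to machine precision.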
\section{Introduction} The phenomena of the atmosphere and ocean are extremely rich in their organization and complexity, and many of them cannot be reproduced in experiments. These phenomena involve a broad range of temporal and spatial scales. As we know, both atmospheric and oceanic flows are flows under the rotation of the earth. In fact, fast rotation and small aspect ratio are two main characteristics of the large scale atmospheric and oceanic flows. The small aspect ratio characteristic leads to the primitive equations, and the fast rotation leads to the quasi-geostrophic equations. These are fundamental equations in the study of atmospheric and oceanic flows; see Ghil and Childress \cite{GC}, Lions, Temam and Wang \cite{LTWa, LTWb}, and Pedlosky \cite{Pb}. Furthermore, convection occurs in many regimes of the atmospheric and oceanic flows. A key problem in the study of climate dynamics and in geophysical fluid dynamics is to understand and predict the periodic, quasi-periodic, aperiodic, and fully turbulent characteristics of large-scale atmospheric and oceanic flows. Stability/bifurcation theory enables one to determine how different flow regimes appear and disappear as control parameters, such as the Reynolds number, vary. It, therefore, provides one with a powerful tool to explore the theoretical capability in the predictability problem. Most studies so far have only considered systems of ordinary differential equations (ODEs) that are obtained by projecting the PDEs onto a finite-dimensional solution space, either by finite differencing or by truncating a Fourier expansion (see Ghil and Childress \cite{GC} and further references there). These were pioneered by Lorenz \cite{La, Lb}, Stommel \cite{S}, and Veronis \cite{Va, Vb} among others, who explored the bifurcation structure of low-order models of atmospheric and oceanic flows.
More recently, pseudo-arclength continuation methods have been applied to atmospheric (Legras and Ghil \cite{LG}) and oceanic (Speich et al. \cite{SDG} and Dijkstra \cite{D}) models with increasing horizontal resolution. These numerical bifurcation studies have so far produced fairly reliable results for two classes of geophysical flows: (i) atmospheric flows in a periodic mid-latitude channel, in the presence of bottom topography and a forcing jet; and (ii) oceanic flows in a rectangular mid-latitude basin, subject to wind stress on its upper surface; see among others Charney and DeVore \cite{CD}, Pedlosky \cite{Pa}, Legras and Ghil \cite{LG} and Jin and Ghil \cite{JG} for saddle-node and Hopf bifurcations in the atmospheric channel, and \cite{BM, CI, IS, JJG, MB, SDG} for saddle-node, pitchfork or Hopf bifurcations in the oceanic basin. The main objective of this article is to conduct bifurcation and stability analysis for the original partial differential equations (PDEs) that govern geophysical flows. This approach should allow us to overcome some of the inherent limitations of the numerical bifurcation results that dominate the climate dynamics literature up to this point, and to capture the essential dynamics of the governing PDE systems. The present article addresses the stability and transitions of basic flows for the stratified rotating Boussinesq equations. These equations are fundamental equations in geophysical fluid dynamics; see among others Pedlosky \cite{Pb}. We obtain two main results in this article. The first is to conduct a rigorous and complete bifurcation and stability analysis near the first eigenvalue of the linearized problem. The second is the onset of the Hopf bifurcation, leading to the existence of periodic solutions of the model. The detailed analysis is carried out in two steps. The first is a detailed study of the eigenvalue problem for the linearized problem around the basic state.
In comparison to the classical B\'enard convection problem, the linearized problem here is non-selfadjoint, leading to a much more complicated spectrum, and more complicated dynamics. We derive in particular two critical Rayleigh numbers $R_{c_1}$ and $R_{c_2}$. Here $R_{c_1}$ is the first critical Rayleigh number for the case where the Prandtl number is greater than one, and $R_{c_2}$ is the first critical Rayleigh number for the case where the Prandtl number is less than one. Moreover, $R_{c_1}$ leads to the onset of the steady state bifurcation while $R_{c_2}$ leads to the onset of the Hopf bifurcation. Both parameters are explicitly given in terms of the physical parameters. The crucial issues here include 1) a complete understanding of the spectrum, 2) identification of the critical Rayleigh numbers, and most importantly 3) the verification of the Principle of Exchange of Stabilities near these critical Rayleigh numbers. The second step is to conduct a rigorous nonlinear analysis to derive the bifurcations at both critical Rayleigh numbers, based on the classical Hopf bifurcation theory and a newly developed dynamic bifurcation theory by two of the authors. This new dynamic bifurcation theory is centered at a new notion of bifurcation, called attractor bifurcation, for dynamical systems, both finite dimensional and infinite dimensional, together with new strategies for the Lyapunov-Schmidt reduction and the center manifold reduction procedures. The bifurcation theory has been applied to various problems from science and engineering, including, in particular, the Kuramoto-Sivashinsky equation, the Cahn-Hilliard equation, the Ginzburg-Landau equation, reaction-diffusion equations in biology and chemistry, the B\'enard convection problem, and the Taylor problem; see \cite{b-book, mw-db1} and the references therein. We remark that the non-selfadjointness of the linearized problem gives rise to the onset of the Hopf bifurcation.
We prove that the Hopf bifurcation appears at the Rayleigh number $R_{c_2}$. As mentioned earlier, the understanding and prediction of the periodic, quasi-periodic, aperiodic, and fully turbulent characteristics of large-scale atmospheric and oceanic flows are key issues in the study of climate dynamics and in geophysical fluid dynamics. It is hoped that the study carried out in this article will provide some insights into these important issues. Also, we would like to mention that a rigorous proof of the existence of periodic solutions for a fluid system is normally a very difficult task from the mathematical point of view. For instance, with a highly involved analysis, Chen et al. \cite{cgsw} proved the existence of a Hopf bifurcation in an idealized Fourier space. The paper is organized as follows. Section 2 gives the basic setting of the problem. Section 3 states the main results. The proofs of the main results occupy the remainder of the paper: Section 4 recapitulates the essentials of the attractor bifurcation theory, Section 5 is on the eigenanalysis, and Section 6 is on the center manifold reduction and the completion of the proofs. \section{Stratified Rotating Boussinesq Equations in Geophysical Fluid Dynamics} The stratified rotating Boussinesq equations are basic equations in geophysical fluid dynamics, and their non-dimensional form is given by \begin{equation} \label{b1} \left\{ \begin{aligned} & \frac{\partial U}{\partial t}=\sigma( \Delta U-\nabla p) +\sigma R T e - \frac{1}{Ro} e \times U-(U \cdot \nabla)U,\\ & \frac{\partial T}{\partial t}=\Delta T +w-(U \cdot \nabla)T,\\ & \text{\rm div} U = 0, \end{aligned} \right.
\end{equation} for $(x,y,z)$ in the non-dimensional domain $\Omega=\mathbb R^{2}\times (0,1)$, where $U=(u,v,w)$ is the velocity field, $e=(0,0,1)$ is the unit vector in the $z$-direction, $\sigma$ is the Prandtl number, $R$ is the thermal Rayleigh number, $Ro$ is the Rossby number, $T$ is the temperature function and $p$ is the pressure function. We refer the interested readers to Pedlosky \cite{Pb}, Lions, Temam and Wang \cite{LTWb} for the derivation of this model and the related parameters. In particular, the term $\frac{1}{Ro} e \times U$ represents the Coriolis force, the $w$ term in the temperature equation is derived using the stratification, and the Rayleigh number $R$ is defined as follows: \begin{equation} \label{eq1.1} R = \frac{g\alpha\beta}{\kappa\nu}\, h^4. \end{equation} We consider periodic boundary conditions in the $x$ and $y$ directions \begin{align} \label{b2} (U,T)(x,y,z,t)& =(U,T)(x+2j\pi/\alpha_1,y,z,t) \\ & =(U,T)(x,y+2k\pi/\alpha_2,z,t), \nonumber \end{align} for any $j,k\in \mathbb Z$. At the top and bottom boundaries, we impose the free-free boundary conditions: \begin{equation} \label{b3} (T, w)=0, \quad \frac{\partial u}{\partial z}=0, \quad \frac{\partial v}{\partial z}=0, \quad \text{at} \quad z=0,1. \end{equation} It is natural to put the constraint \begin{equation} \label{b4} \int_{\Omega}udxdydz= \int_{\Omega}vdxdydz =0. \end{equation} The initial value conditions are given by \begin{equation} \label{b5} (U,T)=(\widetilde{U},\widetilde{T}) \quad \text{at}\quad t=0. \end{equation} Let \begin{align*} H=& \{(U,T)\in L^{2}(\Omega)^{4}\mid \text{\rm div} \, U=0, w\mid_{z=0,1}=0, (u,v) \text{ satisfies}\,\, (\ref{b2})\,\, \text{and}\,\, (\ref{b4})\},\\ H_1=& \{(U,T)\in H^{2}(\Omega)^{4}\cap H \,|\,(U,T)\,\text{ satisfies } \, (\ref{b2})-(\ref{b4}) \},\\ \widetilde{H}=& \{(U,T)\in H \mid (u,v,w,T)(-x,-y,z)=(-u,-v,w,T)(x,y,z)\}, \\ \widetilde{H}_1=& H_1 \cap \widetilde{H}.
\end{align*} Let $L_{R}=-A-B_{R}: H_{1} \to H$ (resp., $\widetilde{H}_1$ $\to$ $\widetilde{H}$) and $G: H_{1}$ $\to$ $H$ (resp., $\widetilde{H}_1$ $\to$ $\widetilde{H}$) be defined by \begin{align*} & A\psi = ( -P [\sigma \Delta U-\frac{1}{Ro} e \times U], -\Delta T), \\ & B_{R} \psi = (-P [\sigma R T e],-w ),\\ & G (\psi) = G(\psi, \psi), \end{align*} for any $\psi=(U, T) \in H_1$ (resp., $\widetilde{H}_1$), where \begin{align*} & G(\psi_1 , \psi_2)= (-P[(U_1 \cdot \nabla)U_2], -(U_1 \cdot \nabla)T_2), \end{align*} for any $\psi_1=(U_1,T_1)$, $\psi_2 = (U_2, T_2) \in H_1$. Here $P$ is the Leray projection onto divergence-free $L^2$ fields; for a detailed account of the function spaces, see among many others \cite{temam}. \begin{remark} \label{rmb1} {\rm Note that $\widetilde{H}_1$ and $\widetilde{H}$ are invariant under the bilinear operator $G$ in the sense that $$ G(\psi_1, \psi_2) \in \widetilde{H}, \qquad \text{for} \qquad \psi_1, \psi_2 \in \widetilde{H}_1. $$ Hence, $\widetilde{H}_1$ and $\widetilde{H}$ are invariant under the operator $L_R + G$. } \end{remark} Then the Boussinesq equations (\ref{b1})-(\ref{b4}) can be written in the following operator form \begin{equation}\label{b6} \frac{d\psi}{dt} = L_{R} \psi + G(\psi), \qquad \psi=(U,T). \end{equation} \section{Main Results} \subsection{Definition of attractor bifurcation} To state the main theorems of this article, we proceed with the definition of attractor bifurcation, first introduced by T. Ma and S. Wang in \cite{b-book,mw-db1}. Let $H$ and $H_1$ be two Hilbert spaces, and $H_1 \hookrightarrow H$ be a dense and compact inclusion. We consider the following nonlinear evolution equations \begin{equation} \label{c1} \left\{ \begin{aligned} & \frac{du}{dt} = L_\lambda u +G(u,\lambda), \\ & u(0) = u_0, \end{aligned} \right.
\end{equation} where $u: [0, \infty) \to H$ is the unknown function, $\lambda \in \mathbb R$ is the system parameter, and $L_\lambda:H_1\to H$ are parameterized linear completely continuous fields depending continuously on $\lambda\in \mathbb R^1$, which satisfy \begin{equation} \label{c2} \left\{\begin{aligned} & -L_\lambda = A + B_\lambda && \text{a sectorial operator}, \\ & A:H_1 \to H && \text{a linear homeomorphism}, \\ & B_\lambda :H_1\to H && \text{parameterized linear compact operators.} \end{aligned}\right. \end{equation} It is easy to see \cite{henry} that $L_\lambda$ generates an analytic semi-group $\{e^{tL_\lambda}\}_{t\ge 0}$. Then we can define fractional power operators $(-L_\lambda)^{\mu}$ for any $0\le \mu \le 1$ with domain $H_\mu = D((-L_\lambda)^{\mu})$ such that $H_{\mu_1} \subset H_{\mu_2}$ if $\mu_1 > \mu_2$, and $H_0=H$. Furthermore, we assume that the nonlinear terms $G(\cdot, \lambda):H_\mu \to H$ for some $1> \mu \ge 0$ are a family of parameterized $C^r$ bounded operators ($r\ge 1$) continuously depending on the parameter $\lambda\in \mathbb R^1$, such that \begin{equation} \label{c3} G(u,\lambda) = o(\|u\|_{H_\mu}), \quad \forall\,\, \lambda\in \mathbb R^1. \end{equation} In this paper, we are interested in the sectorial operator $-L_\lambda = A +B_\lambda$ such that there exist an eigenvalue sequence $\{\rho_k\} \subset \mathbb C^1$ and an eigenvector sequence $\{e_k, h_k\}\subset H_1$ of $A$: \begin{equation} \label{c4} \left\{\begin{aligned} & Az_k = \rho_kz_k, \qquad z_k=e_k + i h_k, \\ & \text{Re} \rho_k\to \infty \,\,(k\to\infty), \\ & |\text{Im} \rho_k / (a + \text{Re} \rho_k) | \le c, \end{aligned}\right. \end{equation} for some $a, c > 0$, such that $\{e_k, h_k\}$ is a basis of $H$. 
Also we assume that there is a constant $0<\theta<1$ such that \begin{equation} \label{c5} B_\lambda :H_\theta \longrightarrow H \,\,\text{bounded, $\forall$ $\lambda\in \mathbb R^1$.} \end{equation} Under conditions (\ref{c4}) and (\ref{c5}), the operator $-L_\lambda=A + B_\lambda$ is a sectorial operator. Let $\{S_\lambda(t)\}_{t\ge 0}$ be an operator semi-group generated by the equation (\ref{c1}). Then the solution of (\ref{c1}) can be expressed as $\psi(t, \psi_0) = S_\lambda(t)\psi_0,$ for any $ t\ge 0.$ \begin{definition} \label{dfc1} A set $\Sigma \subset H$ is called an invariant set of (\ref{c1}) if $S(t) \Sigma = \Sigma$ for any $t\ge 0$. An invariant set $\Sigma \subset H$ of (\ref{c1}) is called an attractor if $\Sigma$ is compact, and there exists a neighborhood $W \subset H$ of $\Sigma$ such that for any $\psi_0\in W$ we have $$ \lim_{t\to \infty}\text{\rm dist}_H(\psi(t,\psi_0),\Sigma)= 0. $$ \end{definition} \begin{definition} \label{dfc2} \begin{enumerate} \item We say that the solution of (\ref{c1}) bifurcates from $(\psi,\lambda) = (0,\lambda_0)$ to an invariant set $\Omega_\lambda$, if there exists a sequence of invariant sets $\{\Omega_{\lambda_n}\}$ of (\ref{c1}) such that $0 \notin \Omega_{\lambda_n}$, $\lim_{n\to \infty} \lambda_n = \lambda_0$, and $$ \lim_{n\to \infty} \max_{x\in \Omega_{\lambda_n}} |x| =0.$$ \item If the invariant sets $\Omega_\lambda$ are attractors of (\ref{c1}), then the bifurcation is called attractor bifurcation. \end{enumerate} \end{definition} \subsection{Main theorems} In this article, we consider two cases: \begin{align} \label{c6} & \sigma >1 \qquad \text{and} && R_{c_1}\,\, \text{is obtained only at} \,\, (j,k,l)=(j_1, 0, 1), \,\, \\ \label{c7} &\sigma <1 \qquad \text{and} &&R_{c_2} \,\, \text{is obtained only at} \,\, (j,k,l)=(j_2, 0, 1), \end{align} for some $j_1$, $j_2 \in \mathbb N$, where $R_{c_1}$ and $R_{c_2}$ are defined in (\ref{e18}) and (\ref{e22}) respectively. 
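The explicit formulas below can be probed numerically. The following sketch is our own illustration (the value of $\sigma$ and the grid are illustrative assumptions, and $a^{2}=j_1^{2}\alpha_1^{2}$ is treated as a continuous variable rather than the discrete values it takes in the analysis); it checks that, in the non-rotating limit $Ro\to\infty$, the minimum of the steady-onset expression recovers the classical free-free B\'enard value $27\pi^{4}/4\approx 657.5$.

```python
import math

def Rc1(a2, sigma, Ro):
    # R_{c1} = (a^2 + pi^2)^3 / a^2 + pi^2 / (sigma^2 Ro^2 a^2),  a^2 = j1^2 alpha1^2
    return (a2 + math.pi**2) ** 3 / a2 + math.pi**2 / (sigma**2 * Ro**2 * a2)

# Illustrative parameter (assumed): sigma > 1, as in condition (c6).
sigma = 10.0

# Rotation raises the critical Rayleigh number at fixed a^2:
assert Rc1(5.0, sigma, 0.1) > Rc1(5.0, sigma, 1e9)

# Non-rotating limit: the minimum over a^2 tends to the classical value 27 pi^4 / 4.
grid = [0.01 * i for i in range(1, 3001)]          # a^2 in (0, 30]
best = min(Rc1(a2, sigma, 1e12) for a2 in grid)
assert abs(best - 27 * math.pi**4 / 4) < 1e-2
```

The minimum is attained near $a^{2}=\pi^{2}/2$, the classical critical wave number for free-free B\'enard convection; the rotation term shifts both the critical value and the minimizing wave number upward.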
In the above cases, $R_{c_1}$ and $R_{c_2}$ are given by the following formulas: \begin{align*} & R_{c_1}=\frac{(j_1^2\alpha_1^2+\pi^2)^3}{j_1^2 \alpha_1^2} +\frac{\pi^2}{\sigma^2 Ro^2 j_1^2 \alpha_1^2},\\ & R_{c_2}=\frac{2(\sigma+1)(j_2^2\alpha_1^2+\pi^2)^3}{j_2^2\alpha_1^2} +\frac{2 \pi^2}{(\sigma+1)Ro^2 j_2^2 \alpha_1^2 }. \end{align*} \begin{remark} {\rm \begin{enumerate} \item Condition (\ref{c6}) guarantees that for $R\approx R_{c_1}$, the first eigenvalue of $L_{R}\mid_{H_1}$ (resp., $L_{R}\mid_{\widetilde{H}_1}$) is real and of multiplicity two (resp., one); see Remark~\ref{rme3}. \item Condition (\ref{c7}) guarantees that, for $R\approx R_{c_2}$, there exists only one simple pair of conjugate complex eigenvalues of $L_{R}\mid_{\widetilde{H}_1}$ crossing the imaginary axis; see Lemma~\ref{le6}. \item Condition (\ref{c6}) or (\ref{c7}) can be satisfied easily; see Lemmas~\ref{le4} and ~\ref{le5}. \end{enumerate} } \end{remark} \begin{theorem} \label{thc4} Assume (\ref{c6}). Then the following assertions for Problem (\ref{b1})-(\ref{b4}) defined in $H$ hold true. \begin{enumerate} \item If $R \le R_{c_1}$, the steady state $(U,T)=0$ is locally asymptotically stable. \item For $R > R_{c_1}$, the problem bifurcates from $((U,T),R)=(0,R_{c_1})$ to an attractor $\Sigma_{R}=S^1$, consisting of only steady state solutions. \end{enumerate} \end{theorem} \begin{figure} \centering \includegraphics[height=.4\hsize]{R1.eps} \caption{Bifurcation from $(0,R_{c_1})$ to an attractor $\Sigma_{R}$ for $R > R_{c_1}$. } \label{fig1} \end{figure} \begin{theorem} \label{thc5} Assume (\ref{c7}) and $$Ro^2 < \frac{(1-\sigma)\pi^2}{\sigma^2 (1+\sigma)(j_2^2\alpha_1^2 + \pi^2)^3}.$$ The following statements are true. \begin{enumerate} \item For Problem (\ref{b1})-(\ref{b4}) defined in $H$, the steady state $(U,T)=0$ is locally asymptotically stable if $R<R_{c_2}$. 
\item For Problem (\ref{b1})-(\ref{b4}) defined in $\widetilde{H}$, a Hopf bifurcation occurs generically when $R$ crosses $R_{c_2}$. \end{enumerate} \end{theorem} \section{Preliminaries} \subsection{Attractor bifurcation theory} Consider (\ref{c1}) satisfying (\ref{c2}) and (\ref{c3}). We start with the Principle of Exchange of Stabilities (PES). Let the eigenvalues (counting the multiplicity) of $L_\lambda$ be given by $\beta_1(\lambda)$, $\beta_2(\lambda)$, $\cdots$. Suppose that \begin{equation} \label{d1} Re\beta_i(\lambda)\begin{cases} <0 \,\,\,\, \text{if}\,\,\,\, \lambda<\lambda_0,\\ =0 \,\,\,\, \text{if}\,\,\,\, \lambda=\lambda_0,\\ >0 \,\,\,\, \text{if}\,\,\,\, \lambda>\lambda_0, \end{cases} \qquad \text{if}\qquad 1\le i\le m, \end{equation} \begin{equation} \label{d2} Re\beta_j(\lambda_0)<0,\qquad \text{if} \qquad m+1\le j. \end{equation} Let the eigenspace of $L_\lambda$ at $\lambda_0$ be \[ E_0=\displaystyle{\bigcup_{1\le j \le m}\bigcup_{k=1}^{\infty}}\{u,v\in H_1\mid (L_{\lambda_0}-\beta_j(\lambda_0))^k w=0, w=u+iv \}. \] It is known that $\dim E_0=m$. \begin{theorem}[T. Ma and S. Wang \cite{b-book, mw-db1}] \label{thd1} Assume that the conditions (\ref{c2})-(\ref{c5}) and (\ref{d1})-(\ref{d2}) hold true, and $u=0$ is locally asymptotically stable for (\ref{c1}) at $\lambda=\lambda_0$. Then the following assertions hold true. \begin{enumerate} \item For $\lambda>\lambda_0$, (\ref{c1}) bifurcates from $(u,\lambda)=(0,\lambda_0)$ to attractors $\Sigma_\lambda$, having the same homology as $S^{m-1}$, with $ m-1\le dim\Sigma_\lambda\le m$, which is connected if $ m>1$; \item For any $u_\lambda\in\Sigma_\lambda$, $u_\lambda$ can be expressed as \[ u_\lambda=v_\lambda+o(\|v_\lambda\|_{H_1}), \,\, v_\lambda\in E_0; \] \item There is an open set $U\subset H$ with $0\in U$ such that the attractor $\Sigma_\lambda$ bifurcated from $(0,\lambda_0)$ attracts $U\backslash\Gamma$ in $ H$, where $\Gamma$ is the stable manifold of $u=0$ with co-dimension m. 
\end{enumerate} \end{theorem} \subsection{Center manifold theory} A crucial ingredient for the proof of the main theorems using the above attractor bifurcation theorem is an approximation formula for center manifold functions; see \cite{b-book}. Let $H_1$ and $H$ be decomposed into \begin{equation} \label{d3} H_1 = E^\lambda_1 \oplus E^\lambda_2, \qquad H = \widetilde E^\lambda_1 \oplus \widetilde E^\lambda_2, \end{equation} for $\lambda$ near $\lambda_0 \in \mathbb R^1$, where $E^\lambda_1$, $E^\lambda_2$ are invariant subspaces of $L_\lambda$, such that $\dim E^\lambda_1<\infty$, $\widetilde E^\lambda_1 = E^\lambda_1$, $\widetilde E^\lambda_2 =$ closure of $E^\lambda_2$ in $H$. In addition, $L_\lambda$ can be decomposed into $L_\lambda = \mathcal L^\lambda_1 \oplus \mathcal L^\lambda_2$ such that for any $\lambda$ near $\lambda_0$, \begin{equation} \label{d4} \begin{cases} \mathcal L^\lambda_1 = L_\lambda |_{E^\lambda_1} : E^\lambda_1 \longrightarrow \widetilde E^\lambda_1, & \\ \mathcal L^\lambda_2 = L_\lambda|_{E^\lambda_2}:E^\lambda_2 \longrightarrow \widetilde E^\lambda_2, & \end{cases} \end{equation} where all eigenvalues of $\mathcal L^\lambda_2$ possess negative real parts, and the eigenvalues of $\mathcal L^\lambda_1$ possess nonnegative real parts at $\lambda=\lambda_0$. Furthermore, with $\mu < 1$ given by (\ref{c3}), let $$E^\lambda_2(\mu)=\text{ closure of $E^\lambda_2$ in } H_\mu. $$ By the classical center manifold theorem (see among others \cite{henry,temam}), there exists a neighborhood of $\lambda_0$ given by $|\lambda-\lambda_0|<\delta$ for some $\delta>0$, a neighborhood $B_\lambda \subset E^\lambda_1$ of $x=0$, and a $C^1$ center manifold function $\Phi(\cdot,\lambda):B_\lambda \to E^\lambda_2(\mu)$, depending continuously on $\lambda$.
Then to investigate the dynamic bifurcation of (\ref{c1}) it suffices to consider the following finite-dimensional system \begin{equation} \label{d5} \frac{dx}{dt} = \mathcal L^\lambda_1 x + g_1(x,\Phi_\lambda(x),\lambda), \qquad x\in B_\lambda \subset E^\lambda_1. \end{equation} Hence, an approximation formula for the center manifold function $\Phi_\lambda$ is crucial for the bifurcation and stability study. Assume that the nonlinear operator $G$ has the form \begin{equation} \label{d6} G(u,\lambda) = G_n(u,\lambda) + o(\|u\|^n), \end{equation} for some integer $n \ge 2$. Here $G_n:H_1 \times \cdots \times H_1 \longrightarrow H$ is an $n$-multilinear operator, and $G_n(u,\lambda) = G_n(u,\cdots,u,\lambda).$ \begin{theorem}\cite{b-book} \label{thd2} Under the conditions (\ref{d3}), (\ref{d4}) and (\ref{d6}), the center manifold function $\Phi(x,\lambda)$ can be expressed as \begin{equation} \label{d7} \Phi(x,\lambda) = (-\mathcal L^\lambda_2)^{-1} P_2G_n(x,\lambda) + o(\|x\|^n) + O(|\text{\rm Re}\beta|\, \|x\|^n), \end{equation} where $\mathcal L^\lambda_2$ is as in (\ref{d4}), $P_2:H\to \widetilde E^\lambda_2$ the canonical projection, $x\in E^\lambda_1$, and $\beta= (\beta_1(\lambda),\cdots,\beta_m(\lambda))$ the eigenvalues of $\mathcal L^\lambda_1$. \end{theorem} \section{Eigenvalue Problem} The eigenvalue problem of the linearized equations of (\ref{b1})-(\ref{b3}) is given by \begin{equation} \label{e1} \left\{ \begin{aligned} & \sigma( \Delta U-\nabla p) +\sigma R T e - \frac{1}{Ro}e \times U = \beta U,\\ & \Delta T +w = \beta T,\\ & \text{\rm div} U = 0, \end{aligned} \right. \end{equation} supplemented with (\ref{b2}) and (\ref{b3}). For $\psi=(U,T)$ satisfying (\ref{b2}) and (\ref{b3}), we expand the field $\psi$ in a Fourier series \begin{equation} \label{e2} \psi(x,y,z)=\sum_{j,k=-\infty}^{\infty}\psi_{jk}(z)e^{i(j\alpha_1 x +k \alpha_2 y)}.
\end{equation} Plugging (\ref{e2}) into (\ref{e1}), we obtain the following system of ordinary differential equations \begin{equation} \label{e3} \left\{ \begin{aligned} & \sigma(D_{jk} u_{jk} - ij\alpha_1 p_{jk})+\frac{1}{Ro}v_{jk}=\beta u_{jk},\\ & \sigma(D_{jk} v_{jk} - ik\alpha_2 p_{jk})-\frac{1}{Ro}u_{jk}=\beta v_{jk},\\ & D_{jk} w_{jk}-p_{jk}'+R T_{jk} = \sigma^{-1}\beta w_{jk},\\ & D_{jk} T_{jk} + w_{jk} = \beta T_{jk},\\ & ij\alpha_1 u_{jk}+ik\alpha_2 v_{jk} + w_{jk}'=0,\\ & u_{jk}' \mid_{z=0,1} = v_{jk}' \mid_{z=0,1}= w_{jk}\mid_{z=0,1}= T_{jk}\mid_{z=0,1}=0, \end{aligned} \right. \end{equation} for $j, k \in \mathbb Z$, where $'=d/dz$, $D_{jk}=d^2/dz^2-\alpha_{jk}^2$ and $\alpha_{jk}^2=j^2\alpha_1^2+k^2\alpha_2^2$. If $w_{jk} \ne 0$, (\ref{e3}) can be reduced to a single equation for $w_{jk}(z)$: \begin{align} \label{e4} & \{ (D_{jk}-\beta)(\sigma D_{jk}-\beta)^2 D_{jk} \\ & \qquad + \frac{1}{Ro^2}(D_{jk}-\beta)(D_{jk}+\alpha_{jk}^2) + \sigma R \alpha_{jk}^2(\sigma D_{jk}-\beta) \} w_{jk} =0, \nonumber \\ \label{e5} & w_{jk}=w_{jk}^{''}=w_{jk}^{(4)}=w_{jk}^{(6)}=0 \qquad \text{at} \qquad z=0,1, \end{align} for $j,k \in \mathbb Z$. Thanks to (\ref{e5}), $w_{jk}$ can be expanded in a Fourier sine series \begin{equation} \label{e6} w_{jk}(z)=\sum_{l=1}^{\infty}w_{jkl}\sin l\pi z, \end{equation} for $(j,k)\in \mathbb Z \times \mathbb Z$. Substituting (\ref{e6}) into (\ref{e4}), we see that the eigenvalues $\beta$ of the problem (\ref{e1}) satisfy the cubic equations \begin{align} \label{e7}& \beta^3 + (2\sigma +1) \gamma_{jkl}^2 \beta^2 + [(\sigma^2+2\sigma)\gamma_{jkl}^4 + \frac{l^2 \pi^2}{Ro^2 \gamma_{jkl}^2} -\sigma R \frac{\alpha_{jk}^2}{\gamma_{jkl}^2}]\beta \\ &\,\,\, + \sigma^2 \gamma_{jkl}^6 -\sigma^2 R \alpha_{jk}^2 +\frac{l^2 \pi^2}{Ro^2}=0, \nonumber \end{align} for $j,k\in \mathbb Z$ and $l\in \mathbb N$, where $\gamma_{jkl}^2=\alpha_{jk}^2 + l^2\pi^2$. 
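As a consistency check (a sketch added for the reader, not part of the original derivation), the passage from (\ref{e4}) to (\ref{e7}) can be verified directly: acting on $\sin l\pi z$, the operator $D_{jk}$ reduces to multiplication by $-\gamma_{jkl}^2$, and $D_{jk}+\alpha_{jk}^2$ to multiplication by $-l^2\pi^2$, so (\ref{e4}) becomes

```latex
\begin{equation*}
\gamma_{jkl}^2(\beta+\gamma_{jkl}^2)(\beta+\sigma\gamma_{jkl}^2)^2
+ \frac{l^2\pi^2}{Ro^2}\,(\beta+\gamma_{jkl}^2)
- \sigma R \alpha_{jk}^2 (\beta+\sigma\gamma_{jkl}^2) = 0 .
\end{equation*}
```

Dividing by $\gamma_{jkl}^2$ and expanding in powers of $\beta$ reproduces (\ref{e7}) term by term; this factored form is also the identity $g_{jkl}(\beta)=h_{jkl}(\beta)$ used below.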
In the following discussions, we let \begin{equation} \label{e8} \begin{aligned} & g_{jkl}(\beta)=(\beta+\gamma_{jkl}^2)[(\beta + \sigma \gamma_{jkl}^2)^2 + l^2 \pi^2 Ro^{-2} \gamma_{jkl}^{-2}],\\ & h_{jkl}(\beta)= \sigma R \alpha_{jk}^2 \gamma_{jkl}^{-2} (\beta + \sigma \gamma_{jkl}^2), \\ & f_{jkl}(\beta)=g_{jkl}(\beta) - h_{jkl}(\beta), \\ \end{aligned} \end{equation} and let $\beta_{jkl1}(R)$, $\beta_{jkl2}(R)$ and $\beta_{jkl3}(R)$ denote the zeros of $f_{jkl}$, ordered so that $$ Re(\beta_{jkl1}) \ge Re(\beta_{jkl2}) \ge Re(\beta_{jkl3}). $$ \subsection{Eigenvectors} In the following discussions, we use the index sets: \begin{align*} & \Lambda_1= \{(j,k,l) \in \mathbb Z^2 \times \mathbb N \mid j \ge 0, (j,k) \ne (0,0)\},\\ & \Lambda_2= \{(j,k,l) \in \mathbb Z^2 \times \{0\} \mid j \ge 0, (j,k) \ne (0,0)\},\\ & \Lambda_3= \{ (j,k,l) \in \{(0,0)\} \times \mathbb N\},\\ & \Lambda = \Lambda_1 \cup \Lambda_2 \cup \Lambda_3. \end{align*} \medskip 1. For $(j,k,0) \in \Lambda_2$, we define \begin{align*} & \psi^{\beta_{jk0}}_1= ( k \alpha_2 \sin (j \alpha_1 x +k\alpha_2 y), -j\alpha_1 \sin(j \alpha_1 x + k \alpha_2 y) ,0,0)^t,\\ & \psi^{\beta_{jk0}}_2= (-k \alpha_2 \cos (j\alpha_1 x + k \alpha_2 y ), j \alpha_1 \cos (j \alpha_1 x + k\alpha_2 y),0 ,0 )^t,\\ & E_{jk0}=span \{\psi^{\beta_{jk0}}_1, \psi^{\beta_{jk0}}_2\},\\ & \beta_{\Lambda_2}= \cup_{(j,k,0)\in \Lambda_2} \{\beta_{jk0}\}, \end{align*} where $\beta_{jk0}=- \sigma \gamma_{jk0}^2 =-\sigma \alpha_{jk}^2= -\sigma(j^2 \alpha_1^2 + k^2 \alpha_2^2).$ It is not hard to see that $L_R (\psi^{\beta_{jk0}}_1 )=\beta_{jk0}\psi^{\beta_{jk0}}_1$ and $L_R (\psi^{\beta_{jk0}}_2 )=\beta_{jk0}\psi^{\beta_{jk0}}_2$. \medskip 2.
For $(0,0,l) \in \Lambda_3$, we define \begin{align*} & \psi^{\beta_{00l1}}=(0,0,0,\sin l\pi z)^t, && \psi^{\beta_{00l2}}=(\cos l\pi z,0,0,0)^t, \\ & \psi^{\beta_{00l3}}=(0,\cos l\pi z,0,0)^t, && E_{00l}= span \{ \psi^{\beta_{00l1}}, \psi^{\beta_{00l2}}, \psi^{\beta_{00l3}}\}, \\ & \beta_{\Lambda_3}=\cup_{l=1}^{\infty} \cup_{q=1}^{3} \{\beta_{00lq}\}, && \beta_{\widetilde{\Lambda}_3}=\cup_{l=1}^{\infty} \{\beta_{00l1} \}, \end{align*} where $\beta_{00l1}= - \gamma_{00l}^2 = - l^2 \pi^2$, $\beta_{00l2}= - \sigma \gamma_{00l}^2 - \frac{1}{Ro} i$ and $\beta_{00l3}= - \sigma \gamma_{00l}^2 + \frac{1}{Ro} i$. It is easy to check that \begin{align*} & L_R (\psi^{\beta_{00l1}})=\beta_{00l1}\psi^{\beta_{00l1}},\\ & L_R (\psi^{\beta_{00l2}})= - \sigma \gamma_{00l}^2 \psi^{\beta_{00l2}} -\frac{1}{Ro} \psi^{\beta_{00l3}},\\ & L_R (\psi^{\beta_{00l3}})= \frac{1}{Ro} \psi^{\beta_{00l2}} - \sigma \gamma_{00l}^2 \psi^{\beta_{00l3}}. \end{align*} \medskip 3. For $(j,k,l) \in \Lambda_1$, we define \begin{align*} & \phi_{jkl}^1=( -\frac{j\alpha_1 l\pi}{\alpha_{jk}^2} \sin(j\alpha_1 x+ k\alpha_2 y )\cos l\pi z, -\frac{k\alpha_2 l\pi}{\alpha_{jk}^2} \sin(j\alpha_1 x+ k\alpha_2 y )\cos l\pi z,\\ & \qquad \qquad \qquad \qquad \cos(j\alpha_1 x+k \alpha_2 y )\sin l\pi z, 0)^t , \\ & \phi_{jkl}^2=(\frac{k\alpha_2 l\pi}{\alpha_{jk}^2} \sin(j\alpha_1 x+ k\alpha_2 y )\cos l\pi z, -\frac{j\alpha_1 l\pi}{\alpha_{jk}^2} \sin(j\alpha_1 x+ k\alpha_2 y )\cos l\pi z,0,0)^t,\\ & \phi_{jkl}^3=(0,0,0,\cos (j\alpha_1 x + k \alpha_2 y) \sin l\pi z)^t,\\ & \phi_{jkl}^4=( \frac{j\alpha_1 l\pi}{\alpha_{jk}^2} \cos (j\alpha_1 x+ k\alpha_2 y) \cos l\pi z, \frac{k\alpha_2 l\pi}{\alpha_{jk}^2} \cos (j\alpha_1 x+ k\alpha_2 y) \cos l\pi z,\\ & \qquad \qquad \qquad \qquad \sin (j\alpha_1 x+k\alpha_2 y) \sin l\pi z, 0 )^t ,\\ & \phi_{jkl}^5= (-\frac{k\alpha_2 l\pi}{\alpha_{jk}^2} \cos (j\alpha_1 x+ k\alpha_2 y) \cos l\pi z, \frac{j\alpha_1 l\pi}{\alpha_{jk}^2} \cos (j\alpha_1 x+ k\alpha_2 y) \cos l\pi z, 0,0)^t, \\ &
\phi_{jkl}^6 = (0,0,0, \sin(j \alpha_1 x + k \alpha_2 y) \sin l \pi z)^t,\\ & E_{jkl}^1 = span \{\phi_{jkl}^1, \phi_{jkl}^2, \phi_{jkl}^3 \}, \qquad E_{jkl}^2 = span \{\phi_{jkl}^4, \phi_{jkl}^5, \phi_{jkl}^6 \},\\ & E_{jkl}=E_{jkl}^1 \oplus E_{jkl}^2, \qquad \beta_{\Lambda_1}= \cup_{(j,k,l) \in \Lambda_1} \cup_{q=1}^3 \{ \beta_{jklq}\}. \end{align*} It is easy to check that $E_{jkl}^1$ and $E_{jkl}^2$ are invariant subspaces of the linear operator $L_R$ respectively, i.e., $ L_R (E_{jkl}^1) \subset E_{jkl}^1$ and $L_R (E_{jkl}^2) \subset E_{jkl}^2$. The characteristic polynomial of $L_R \mid_{E_{jkl}^1}$ (resp., $L_R \mid_{E_{jkl}^2}$) is given by $f_{jkl}$ as defined in (\ref{e8}). Since $E_{jkl}^1$ (resp.,$E_{jkl}^2$) is of dimension three, the (generalized) eigenvectors of $L_R\mid_{E_{jkl}^1}$, $\cup_{q=1}^{3}\{\psi^{\beta_{jklq}}_1\}$ ($\cup_{q=1}^{3}\{\psi^{\beta_{jklq}}_2\}$), form a basis of $E_{jkl}^1$ (resp., $E_{jkl}^2$), i.e., span$\{\cup_{q=1}^{3}\{\psi^{\beta_{jklq}}_1\}\}=E_{jkl}^1$ (resp., span$\{\cup_{q=1}^{3}\{\psi^{\beta_{jklq}}_2\}\}=E_{jkl}^2$). If $\beta_{jklq}$ is a real zero of $f_{jkl}$, the eigenvector corresponding to $\beta_{jklq}$ in $E_{jkl}^1$ \ (resp., $E_{jkl}^2$) is given by \begin{align} \label{e9} & \psi^{\beta_{jklq}}_1=\phi^1_{jkl}+A_1(\beta_{jklq})\phi^2_{jkl} +A_2(\beta_{jklq})\phi^3_{jkl}, \\ & ( \psi^{\beta_{jklq}}_2=\phi^4_{jkl}+A_1(\beta_{jklq})\phi^5_{jkl} +A_2(\beta_{jklq})\phi^6_{jkl} ), \nonumber \end{align} where \begin{equation} \label{e10} A_1(\beta)=\frac{-1}{Ro(\beta + \sigma \gamma_{jkl}^2)}, \qquad A_2(\beta)=\frac{1}{\beta+\gamma_{jkl}^2}. 
\end{equation} If $\beta_{jklq_1}=\bar{\beta}_{jklq_2}$ are a pair of complex-conjugate (non-real) zeros of $f_{jkl}$, the (generalized) eigenvectors corresponding to $\beta_{jklq_1}$ and $\beta_{jklq_2}$ in $E_{jkl}^1$ (resp., $E_{jkl}^2$) are given by \begin{equation} \label{e11} \begin{aligned} & \psi^{\beta_{jklq_1}}_1=\phi_{jkl}^1 +R_1(\beta_{jklq_1})\phi_{jkl}^2 +R_2(\beta_{jklq_1})\phi_{jkl}^3, \\ & \psi^{\beta_{jklq_2}}_1=I_1(\beta_{jklq_1})\phi_{jkl}^2 +I_2(\beta_{jklq_1})\phi_{jkl}^3, \end{aligned} \end{equation} \begin{eqnarray*} \left( \begin{aligned} & \psi^{\beta_{jklq_1}}_2=\phi_{jkl}^4 +R_1(\beta_{jklq_1})\phi_{jkl}^5 +R_2(\beta_{jklq_1})\phi_{jkl}^6, \\ & \psi^{\beta_{jklq_2}}_2=I_1(\beta_{jklq_1})\phi_{jkl}^5 +I_2(\beta_{jklq_1})\phi_{jkl}^6, \end{aligned} \right), \end{eqnarray*} where \begin{equation} \label{e12} \begin{aligned} & R_1 (\beta) = Re(A_1(\beta)), \qquad R_2 (\beta)= Re(A_2(\beta)),\\ & I_1 (\beta) = Im(A_1(\beta)), \qquad I_2 (\beta)= Im(A_2(\beta)). \end{aligned} \end{equation} The dual vector corresponding to $\psi^{\beta_{jklq}}_1$ (resp., $\psi^{\beta_{jklq}}_2$) is given by \begin{align} \label{e13} & \Psi^{\beta_{jklq}}_1=\phi^1_{jkl}+C_1(\beta_{jklq})\phi^2_{jkl} +C_2(\beta_{jklq})\phi^3_{jkl}, \\ & (\Psi^{\beta_{jklq}}_2=\phi^4_{jkl}+C_1(\beta_{jklq})\phi^5_{jkl} +C_2(\beta_{jklq})\phi^6_{jkl}), \nonumber \end{align} where \begin{equation} \label{e14} C_1(\beta)=\frac{1}{Ro(\beta + \sigma \gamma_{jkl}^2)}, \qquad C_2(\beta)=\frac{\sigma R}{\beta+\gamma_{jkl}^2}. \end{equation} The dual vector $\Psi^{\beta_{jklq}}_1$ (resp., $\Psi^{\beta_{jklq}}_2$) satisfies \begin{align} \label{e15} & <\psi^{\beta_{jklq^{*}}}_1 ,\Psi^{\beta_{jklq}}_1 >_H =0 \qquad ( <\psi^{\beta_{jklq^{*}}}_2 ,\Psi^{\beta_{jklq}}_2 >_H = 0), \end{align} for $q^* \ne q$. \medskip We note that $E_{j_1 k_1 l_1}$ is orthogonal to $E_{j_2 k_2 l_2}$ for $(j_1, k_1, l_1) \ne (j_2, k_2, l_2)$ and $E_{jkl}^1$ is orthogonal to $E_{jkl}^2$ for $(j,k,l) \in \Lambda_1$.
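A quick consistency check for (\ref{e10}) (our sketch, not part of the original argument): the coefficient $A_2$ can be read off from the temperature component of (\ref{e1}). For a mode whose vertical-velocity amplitude is $1$ and whose temperature amplitude is $A_2$,

```latex
\begin{equation*}
\Delta T + w = \beta T
\quad\Longrightarrow\quad
-\gamma_{jkl}^2 A_2 + 1 = \beta A_2
\quad\Longrightarrow\quad
A_2(\beta) = \frac{1}{\beta+\gamma_{jkl}^2},
\end{equation*}
```

in agreement with (\ref{e10}); the expression for $A_1$ follows in the same way from the Coriolis coupling between $\phi^1_{jkl}$ and $\phi^2_{jkl}$.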
Hence the dual vector $\Psi^{\beta_{jklq}}_1$ (resp., $\Psi^{\beta_{jklq}}_2$) satisfies \begin{align} \label{e16} & <\psi, \Psi^{\beta_{jklq}}_1>_H = 0 \qquad \text{for} \qquad \psi \in (\cup_{(j^*, k^*, l^*)\ne(j,k,l)} E_{j^* k^* l^*})\cup E_{jkl}^2\\ & ( <\psi, \Psi^{\beta_{jklq}}_2>_H = 0 \qquad \text{for} \qquad \psi \in (\cup_{(j^*, k^*, l^*)\ne(j,k,l)} E_{j^* k^* l^*})\cup E_{jkl}^1). \nonumber \end{align} In view of the Fourier expansion, we see that $\cup_{(j,k,l)\in \Lambda} E_{jkl}$ is a basis of $H_1$ and $(\cup_{(j,k,l) \in \Lambda_1} E_{jkl}^1)\cup (\cup_{(j,k,0)\in \Lambda_2} \{\psi^{\beta_{jk0}}_1 \}) \cup (\cup_{(0,0,l)\in \Lambda_3} \{\psi^{\beta_{00l1}} \}) $ is a basis of $\widetilde{H}_1$. Hence, by the discussion above, we have the following conclusions. \medskip a) The set $\beta_{H_1}=\beta_{\Lambda_1} \cup \beta_{\Lambda_2} \cup \beta_{\Lambda_3}$ consists of all eigenvalues of $L_R \mid_{H_1}$, and the (generalized) eigenvectors of $L_R \mid_{H_1}$ form a basis of $H_1$. \medskip b) The set $\beta_{\widetilde{H}_1}=\beta_{\Lambda_1} \cup \beta_{\Lambda_2} \cup \beta_{\widetilde{\Lambda}_3} $ consists of all eigenvalues of $L_R \mid_{\widetilde{H}_1}$, and the (generalized) eigenvectors of $L_R \mid_{\widetilde{H}_1}$ form a basis of $\widetilde{H}_1$. \medskip c) $Re(\beta) < 0$ for each $\beta \in \beta_{\Lambda_2} \cup \beta_{\Lambda_3}$. \begin{lemma} \label{le1} If $R$ is small, then $ Re(\beta_{jklq}(R)) < 0 $ for each $\beta_{jklq} \in \beta_{\Lambda_1}$. \end{lemma} \begin{proof} Plugging $\beta = \gamma_{jkl}^2 \beta^*$ into $f_{jkl}$, we get $ f_{jkl}(\beta) = \gamma_{jkl}^6 \widetilde{f}_{jkl}(\beta^*)$, where $$ \widetilde{f}_{jkl}(\beta^*)= (\beta^*+1)(\beta^*+\sigma)^2 + \frac{l^2 \pi^2}{ \gamma_{jkl}^6 Ro^2}(\beta^*+1)- \sigma R \frac{\alpha_{jk}^2}{\gamma_{jkl}^6}(\beta^*+\sigma). $$ Hence, we only need to show that the real part of each zero of $\widetilde{f}_{jkl}$ is strictly negative when $R$ is small. 
We observe that $\widetilde{f}_{jkl}(\beta^*)>0$ for all $\beta^* \ge 0$ provided $ R < 1 + \sigma^{-1}$. Therefore, if all zeros of $\widetilde{f}_{jkl}$ are real numbers, we are done. For the case where only one of the zeros of $\widetilde{f}_{jkl}$ is real, this real zero, $\beta^*_1$, is a perturbation of $-1$. There exists an $\epsilon$ (depending only on $\sigma$) such that $ -(1+2 \sigma) < \beta^*_1 < 0 $ provided $ R < \epsilon$. Since the three zeros of $\widetilde{f}_{jkl}$ sum to $-(1+2\sigma)$ and the remaining two zeros are complex conjugates sharing a common real part, that real part equals $\frac{1}{2}(-(1+2\sigma)-\beta^*_1)<0$, and the proof is complete. \end{proof} \subsection{Characterization of Critical Rayleigh Numbers} Based on the above discussion, we know that only the eigenvalues in $\beta_{\Lambda_1}$ depend on the Rayleigh number $R$. Hence, to study the Principle of Exchange of Stabilities for problem (\ref{e1}), it suffices to focus on the set $\beta_{\Lambda_1}$. We proceed with the following two cases. {\sc Case 1.} $\beta=0$ is a zero of $f_{jkl}$ if and only if the constant term of the polynomial $f_{jkl}$ is $0$. In this case, we have \begin{equation} \label{e17} R = \frac{\gamma_{jkl}^6}{\alpha_{jk}^2} + \frac{l^2 \pi^2}{\sigma^2 Ro^2 \alpha_{jk}^2} \ge \frac{(\alpha_{jk}^2+ \pi^2)^3}{\alpha_{jk}^2}+ \frac{ \pi^2}{\sigma^2 Ro^2 \alpha_{jk}^2}. \end{equation} Hence the critical Rayleigh number $R_{c_1}$ is given by \begin{equation} \label{e18} R_{c_1}= \min_{(j,k,l)\in \Lambda_1} \{ \frac{\gamma_{jkl}^6}{\alpha_{jk}^2} + \frac{l^2 \pi^2}{\sigma^2 Ro^2 \alpha_{jk}^2} \} = \frac{\gamma_{j_1 k_1 1}^6}{\alpha_{j_1 k_1}^2} + \frac{ \pi^2}{\sigma^2 Ro^2 \alpha_{j_1 k_1}^2}, \end{equation} for some $(j_1, k_1, 1)\in \Lambda_1$.
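The identity in {\sc Case 1} is worth recording explicitly (a short sketch of our own): $\beta=0$ is a zero of $f_{jkl}$ precisely when the constant term of the cubic (\ref{e7}) vanishes, i.e.,

```latex
\begin{equation*}
\sigma^2 \gamma_{jkl}^6 - \sigma^2 R \alpha_{jk}^2 + \frac{l^2\pi^2}{Ro^2} = 0
\quad\Longleftrightarrow\quad
R = \frac{\gamma_{jkl}^6}{\alpha_{jk}^2} + \frac{l^2\pi^2}{\sigma^2 Ro^2 \alpha_{jk}^2}.
\end{equation*}
```

Since $\gamma_{jkl}^2=\alpha_{jk}^2+l^2\pi^2$, both terms on the right are increasing in $l$, which explains why the minimum in (\ref{e18}) is attained at $l=1$.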
{\sc Case 2.} A careful analysis of (\ref{e7}) shows that $\beta=ai$ ($a \ne 0$), a purely imaginary number, is a zero of $f_{jkl}$ if and only if the following two conditions hold true: \begin{align*} & (\sigma^2 + 2\sigma)\gamma_{jkl}^4 + \frac{l^2 \pi^2}{Ro^2 \gamma_{jkl}^2} -\sigma R \frac{\alpha_{jk}^2}{\gamma_{jkl}^2} >0, \\ & (2\sigma +1 )\gamma_{jkl}^2 [ (\sigma^2 + 2\sigma)\gamma_{jkl}^4 + \frac{l^2 \pi^2}{ Ro^2 \gamma_{jkl}^2} -\sigma R \frac{\alpha_{jk}^2}{\gamma_{jkl}^2}] \\ & \quad = \sigma^2 \gamma_{jkl}^6 -\sigma^2 R \alpha_{jk}^2 +\frac{l^2 \pi^2}{Ro^2}. \end{align*} In this case, we have \begin{align} & \label{e19} R = \frac{2(\sigma+1)\gamma_{jkl}^6}{\alpha_{jk}^2}+ \frac{2 l^2 \pi^2}{(\sigma +1) Ro^2 \alpha_{jk}^2}, \\ & \label{e20} R < \frac{(\sigma+2)\gamma_{jkl}^6}{\alpha_{jk}^2} + \frac{l^2 \pi^2}{\sigma Ro^2 \alpha_{jk}^2}. \end{align} Combining (\ref{e19}) with (\ref{e20}), we derive an upper bound for $Ro^2$, \begin{equation} \label{e21} Ro^2 < \frac{(1-\sigma)l^2 \pi^2}{ \sigma^2(1+\sigma) \gamma_{jkl}^6 }, \end{equation} which can hold true only when $\sigma < 1$. As in Case 1, the minimum of the right hand side of (\ref{e19}) is always attained at $l=1$. Hence the critical Rayleigh number $R_{c_2}$ is given by \begin{align} \label{e22} R_{c_2}=& \min_{(j,k,l)\in \Lambda_1} \{\frac{2(\sigma+1)\gamma_{jkl}^6}{\alpha_{jk}^2}+ \frac{2 l^2 \pi^2}{(\sigma +1) Ro^2 \alpha_{jk}^2} \}\\ =& \frac{2(\sigma+1)\gamma_{j_2 k_2 1}^6}{\alpha_{j_2 k_2}^2}+ \frac{2 \pi^2}{(\sigma +1) Ro^2 \alpha_{j_2 k_2}^2}, \nonumber \end{align} for some $(j_2, k_2, 1)\in \Lambda_1$. In the case of $\sigma < 1$, (\ref{e21}) with $l=1$ implies that $R_{c_2}$ is smaller than $R_{c_1}$. Hence, for Problem (\ref{b1})-(\ref{b4}), $R_{c_1}$ is the first critical Rayleigh number if $\sigma >1$ and $R_{c_2}$ is the first critical Rayleigh number if $\sigma < 1$. Therefore, the Principle of Exchange of Stabilities is given by Lemma~\ref{le2} and Lemma~\ref{le6}.
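For completeness, here is a sketch (our addition) of how (\ref{e19}) and (\ref{e20}) arise from (\ref{e7}). Writing (\ref{e7}) as $\beta^3+a_2\beta^2+a_1\beta+a_0=0$ and substituting $\beta=ai$ with $a\ne 0$ real, the real and imaginary parts give

```latex
\begin{equation*}
a_0 - a_2 a^2 = 0, \qquad a\,(a_1 - a^2) = 0
\quad\Longrightarrow\quad
a^2 = a_1 > 0 \quad\text{and}\quad a_0 = a_1 a_2 .
\end{equation*}
```

The requirement $a_1>0$ is the first displayed inequality of {\sc Case 2} and is equivalent to (\ref{e20}); solving the second displayed equation, $a_0=a_1a_2$, for $R$ yields (\ref{e19}).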
\begin{lemma} \label{le2} For fixed $\sigma > 1$ and $Ro > 0$, suppose that $(\alpha_{jk}^2,l)=(\alpha_{j_1 k_1}^2,1)$ minimizes the right hand side of (\ref{e17}). Then \begin{equation} \label{e23} \beta_{j_1 k_1 11}(R) \begin{cases} <0 \,\,\,\, \text{if}\,\,\,\, R < R_{c_1}\\ =0 \,\,\,\, \text{if}\,\,\,\, R = R_{c_1}\\ >0 \,\,\,\, \text{if}\,\,\,\, R > R_{c_1} \end{cases}, \end{equation} \begin{equation} \label{e24} Re \beta_{jklq}(R) < 0 \qquad \text{for} \qquad (\alpha_{jk}^2, l) \ne (\alpha_{j_1k_1}^2, 1), \,q=1,2,3, \,\, R \,\, \text{near} \,\, R_{c_1}. \end{equation} \end{lemma} \begin{proof} By the above discussion, we only need to show that the first eigenvalue crosses the imaginary axis. We note that $f_{j_1 k_1 1}(\beta)=0$ is equivalent to $g_{j_1 k_1 1}(\beta)=h_{j_1 k_1 1}(\beta)$, i.e., \begin{equation} \label{e25} (\beta+\gamma_{j_1 k_1 1}^2)[(\beta + \sigma \gamma_{j_1 k_1 1}^2)^2 + \pi^2 Ro^{-2} \gamma_{j_1 k_1 1}^{-2}]= \sigma R \alpha_{j_1 k_1}^2 \gamma_{j_1 k_1 1}^{-2} (\beta + \sigma \gamma_{j_1 k_1 1}^2). \end{equation} We see that both $g_{j_1 k_1 1}$ and $h_{j_1 k_1 1}$ are strictly increasing for $\beta > -\gamma_{j_1 k_1 1}^2$ (since $\sigma > 1$). Let $\Gamma_1$ be the graph of $\eta=g_{j_1 k_1 1}(\beta)$ and $\Gamma_2$ be the graph of $\eta=h_{j_1 k_1 1}(\beta)$ as shown in Figure~\ref{fig2}. When $R=R_{c_1}$, the point $S_0$, the intersection point of $\Gamma_1$ and $\Gamma_2$ corresponding to $\beta_{j_1 k_1 11}(R)$ (i.e., the $\beta$ coordinate of $S_0$ is $\beta_{j_1 k_1 11}(R)$), is on the $\eta$ axis. When $R$ increases (resp., decreases), $S_0$ becomes $S_1$ (resp., $S_2$). This proves (\ref{e23}) and the proof is complete.
\end{proof} \begin{figure} \centering \includegraphics[height=.5\hsize]{Rayleigh.eps} \caption{The graphs $\Gamma_1$ of $\eta=g_{j_1 k_1 1}(\beta)$ and $\Gamma_2$ of $\eta=h_{j_1 k_1 1}(\beta)$; the intersection point $S_0$ moves to $S_1$ as $R$ increases and to $S_2$ as $R$ decreases.} \label{fig2} \end{figure} \begin{remark} \label{rme3} { \rm \begin{enumerate} \item In the proof of Lemma~\ref{le2}, as shown by (\ref{e25}) and Figure~\ref{fig2}, we see that, for $R \approx R_{c_1}$, the first eigenvalue $\beta_{j_1 k_1 1 1}$ is a simple zero of $f_{j_1 k_1 1}(\beta)$. We have seen in Section 5.1 that there are eigenvectors $\psi^{\beta_{j_1 k_111}}_1 \in E_{j_1 k_1 1}^1$ and $\psi^{\beta_{j_1 k_111}}_2 \in E_{j_1 k_1 1}^2$ corresponding to $\beta_{j_1 k_111}$. Therefore, the multiplicity of the first eigenvalue of $L_R \mid_{H_1}$ (resp., $L_R \mid_{\widetilde{H}_1}$) is $m_{H_1}=2m$ (resp., $m_{\widetilde{H}_1}=m$), where $m$ is the number of $(j,k,1)$'s ($ \in \Lambda_1$) satisfying $\alpha_{jk}^2=\alpha_{j_1 k_1}^2$. Hence, Condition (\ref{c6}) guarantees that, for $R\approx R_{c_1}$, the first eigenvalue of $L_{R}\mid_{H_1}$ (resp., $L_{R}\mid_{\widetilde{H}_1}$) is real and of multiplicity two (resp., one). \item For the classical B\'enard problem without rotation, the second term on the right hand side of (\ref{e17}), hence the second term on the right hand side of (\ref{e18}), is not present. Therefore, the first critical Rayleigh number of the classical B\'enard problem depends only on the aspect ratio, while the first critical Rayleigh number of the rotating problem depends on the aspect ratio, the Prandtl number and the Rossby number. It is clear that the first critical Rayleigh number of fast rotating flows is remarkably larger than the first critical Rayleigh number of the classical B\'enard problem. This indicates that the rotating flows are much more stable than the non-rotating flows. \item $R_{c_1}$ is the first critical Rayleigh number if the Prandtl number is greater than one.
For the case where the Prandtl number is smaller than one, $R_{c_2}$ is the first critical Rayleigh number and, in general, there are a few critical values between $R_{c_2}$ and $R_{c_1}$. \end{enumerate} } \end{remark} For $x>0$, $b \ge 0$, we define \begin{equation} \label{e26} f_{b}(x)=\frac{(x+\pi^2)^3+b}{x}. \end{equation} Let $x=\alpha_{jk}^2$; then the right hand side of (\ref{e18}) can be expressed as $f_{b_1}(x)$, where $b_1=\frac{\pi^2}{\sigma^2 Ro^2}$; and the second line of (\ref{e22}) can be expressed as $2(\sigma+1)f_{b_2}(x)$, where $b_2=\frac{\pi^2}{(\sigma+1)^2 Ro^2}$. Consider \begin{equation} \label{e27} f^{'}_{b}(x)= \frac{(2x-\pi^2)(x+\pi^2)^2-b}{x^2}. \end{equation} As shown in Figure~\ref{fig3}, it is easy to see that \begin{enumerate} \item[a)] for $x \in (0,\infty)$, $f_{b}(x)$ has only one critical number $x_{b}$, \item[b)] $f^{'}_{b}(x)<0$ if $x < x_{b}$, \item[c)] $f^{'}_{b}(x)>0$ if $x > x_{b}$, \item[d)] $f_{b}(x_{b})$ is the global minimum of $f_{b}(x)$, and \item[e)] $x_{b}$ is strictly increasing in $b$; hence $x_{b_1} > x_{b_2} > \frac{\pi^2}{2}$. \end{enumerate} \begin{figure} \centering \includegraphics[height=.5\hsize]{Critical.eps} \caption{The graph of $f_b(x)$ for $x>0$, attaining its global minimum at the unique critical point $x_b$.} \label{fig3} \end{figure} In Lemmas~\ref{le4} and~\ref{le5}, we consider the following conditions \begin{align} \label{e28} & x_{b_1} \le \alpha_1^2 < \alpha_2^2,\\ \label{e29} & \alpha_1^2 \le \frac{1}{5} x_{b_1} < 2 x_{b_1}< \alpha_{2}^2,\\ \label{e30} & x_{b_2} \le \alpha_1^2 < \alpha_2^2,\\ \label{e31} & \alpha_1^2 \le \frac{1}{5} x_{b_2} < 2 x_{b_2}< \alpha_{2}^2. \end{align} \begin{lemma} \label{le4} \begin{enumerate} \item Condition (\ref{c6}) holds true under the assumption (\ref{e28}). \item Generically, Condition (\ref{c6}) holds true under the assumption (\ref{e29}). \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item Under the assumption (\ref{e28}), by c), we conclude that $R_{c_1}$ is attained only at $(j,k,l)=(1,0,1)$, i.e., $j_1=1$.
\item Under the assumption (\ref{e29}), there exists $j^* \ge 2$ such that $ {j^*}^2\alpha_1^2 \le x_{b_1} < (j^*+1)^2 \alpha_1^2$. We note that \begin{eqnarray*} (j^*+1)^2 \alpha_1^2 \begin{cases} < 2 {j^*}^2\alpha_1^2 < 2 x_{b_1}<\alpha_2^2 \qquad \qquad \text{if} \qquad j^* \ge 3,\\ =9 \alpha_1^2 < \frac{9}{5} x_{b_1} < 2x_{b_1} <\alpha_2^2 \qquad \text{if} \qquad j^*=2. \end{cases} \end{eqnarray*} \end{enumerate} Hence, by b) and c), we conclude that $$ R_{c_1}=\min \{f_{b_1}({j^*}^2 \alpha_1^2),f_{b_1}((j^*+1)^2\alpha_1^2)\}, $$ i.e., $j_1=j^*$ or $j_1=j^*+1$. Note that, by b) and c), generically $f_{b_1}({j^*}^2 \alpha_1^2)\ne f_{b_1}((j^*+1)^2\alpha_1^2)$. The proof is complete. \end{proof} \begin{lemma} \label{le5} \begin{enumerate} \item Condition (\ref{c7}) holds true under the assumption (\ref{e30}). \item Generically, Condition (\ref{c7}) holds true under the assumption (\ref{e31}). \end{enumerate} \end{lemma} \begin{proof} Consider $$ R_{c_2}= \min_{(j,k,1) \in \Lambda_1}\{2 (\sigma +1)f_{b_2}(\alpha_{jk}^2)\}. $$ The rest of the proof is the same as that of Lemma~\ref{le4}. \end{proof} \begin{lemma} \label{le6} Assume (\ref{c7}), $R \approx R_{c_2}$, and that $Ro^2$ satisfies (\ref{e21}) for $(j,k,l)=(j_2, 0, 1)$, i.e., $Ro^2<\frac{(1-\sigma)\pi^2}{\sigma^2(1+\sigma) \gamma_{j_2 0 1}^6}$. Then $\{\beta_{j_2 0 11}(R),\beta_{j_2 0 12}(R)\}$ (with $\beta_{j_2 0 11}(R)=\bar{\beta}_{j_2 0 12}(R)$) is the only simple pair of complex eigenvalues of the problem (\ref{e1}) in the space $\widetilde{H}_1$, and it satisfies \begin{equation} \label{e32} Re(\beta_{j_2 0 11}(R)) \begin{cases} <0 \,\,\,\, \text{if}\,\,\,\, R < R_{c_2},\\ =0 \,\,\,\, \text{if}\,\,\,\, R = R_{c_2},\\ >0 \,\,\,\, \text{if}\,\,\,\, R > R_{c_2}, \end{cases} \end{equation} \begin{equation} Re \beta_{jklq}(R) < 0 \,\, \text{for} \,\, (\alpha_{jk}^2, l) \ne (\alpha_{j_2 0}^2, 1), \,q=1,2,3, \, R \,\, \text{near} \,\, R_{c_2}. \end{equation} \end{lemma} \begin{proof} We only need to prove (\ref{e32}).
Under the assumptions of the lemma together with (\ref{e19})-(\ref{e21}), by the discussion in Case (2) at the beginning of this subsection, we know that $\{ \beta_{j_2 0 11}(R) , \beta_{j_2 0 12}(R) \}$ is the only simple pair of complex eigenvalues of $L_R \mid _{\widetilde{H}_1}$ with $Re( \beta_{j_2 0 11}(R_{c_2}))=Re( \beta_{j_2 0 12}(R_{c_2}))=0$. Since $\beta_{j_2 0 13}(R)$ (real), $\beta_{j_2 0 11}(R)$ and $\beta_{j_2 0 12}(R)$ are the zeros of $f_{j_2 0 1}$, whose sum equals $-(2\sigma+1)\gamma_{j_2 0 1}^2$, we know that $$ \beta_{j_2 0 13}(R)=-(Re( \beta_{j_2 0 11}(R))+ Re( \beta_{j_2 0 12}(R)))-(2\sigma+1)\gamma_{j_2 0 1}^2. $$ Hence (\ref{e32}) is equivalent to \begin{equation} \label{e33} \beta_{j_2 0 13}(R)\begin{cases} >-(2\sigma+1)\gamma_{j_2 0 1}^2 \,\,\,\, \text{if}\,\,\,\, R < R_{c_2},\\ =-(2\sigma+1)\gamma_{j_2 0 1}^2 \,\,\,\, \text{if}\,\,\,\, R = R_{c_2},\\ <-(2\sigma+1)\gamma_{j_2 0 1}^2 \,\,\,\, \text{if}\,\,\,\, R > R_{c_2}, \end{cases} \end{equation} which is true as shown in Figure~\ref{fig4}. This completes the proof. \begin{figure} \centering \includegraphics[height=.5\hsize]{Rayleigh2.eps} \caption{Location of the real zero $\beta_{j_2 0 13}(R)$ relative to $-(2\sigma+1)\gamma_{j_2 0 1}^2$ as $R$ crosses $R_{c_2}$.} \label{fig4} \end{figure} \end{proof} \begin{lemma} \label{le8} For fixed $\alpha_1$, $\alpha_2 >0$ and $\sigma > 1$, $R_{c_1} \rightarrow \infty$ as $Ro \rightarrow 0$. More precisely, $R_{c_1}= O(Ro^{-\frac{4}{3}})$. \end{lemma} \begin{proof} Since $b_1=\frac{\pi^2}{\sigma^2 Ro^2}$ and, by (\ref{e27}), $x_{b_1}$ solves $(2x-\pi^2)(x+\pi^2)^2=b_1$, we have $x_{b_1}=O(b_1^{\frac{1}{3}})$ as $Ro \rightarrow 0$. Hence, \begin{eqnarray*} R_{c_1}=O(f_{b_1}(x_{b_1}))=O(b_{1}^{\frac{2}{3}})=O(Ro^{-\frac{4}{3}}). \end{eqnarray*} \end{proof} \section{Proof of Main Theorems} \subsection{Center manifold reduction} We are now in a position to reduce the equations (\ref{b1})-(\ref{b4}) to the center manifold.
For any $\psi = (U,T) \in H_1$, we have \begin{align*} \psi =& \sum_{(j,k,l)\in \Lambda_1} \sum_{q=1}^{3} (x_{jklq}\psi^{\beta_{jklq}}_1 + y_{jklq}\psi^{\beta_{jklq}}_2)\\ & + \sum_{(j,k,0)\in \Lambda_2}(x_{jk0}\psi^{\beta_{jk0}}_1 +y_{jk0}\psi^{\beta_{jk0}}_2) +\sum_{l=1}^{\infty}\sum_{q=1}^{3}x_{00lq}\psi^{\beta_{00lq}} . \end{align*} Under the assumption (\ref{c6}), the first critical Rayleigh number is given by \begin{equation} \label{f1} R_{c_1} = \frac{\gamma_{j_1 0 1}^6}{\alpha_{j_1 0}^2} + \frac{\pi^2}{\sigma^2 Ro^2 \alpha_{j_1 0}^2}. \end{equation} In this case, the multiplicity of the first eigenvalue is two and the reduced equations of (\ref{b1})-(\ref{b4}) are given by \begin{equation} \label{f2} \left\{ \begin{aligned} & \frac{d x_{j_1011}}{dt}=\beta_{j_1011}(R)x_{j_1011} +\frac{1} {<\psi_{1}^{\beta_{j_1011}},\Psi_{1}^{\beta_{j_1011}}>_H} <G( \psi, \psi),\Psi_{1}^{\beta_{j_1011}}>_H,\\ & \frac{dy_{j_1011}}{dt}=\beta_{j_1011}(R)y_{j_1011}+\frac{1} {<\psi_{2}^{\beta_{j_1011}},\Psi_{2}^{\beta_{j_1011}}>_H} <G( \psi, \psi),\Psi_{2}^{\beta_{j_1011}}>_H.\\ \end{aligned} \right. \end{equation} Here for $\psi_1=(U_1,T_1)$, $\psi_2=(U_2,T_2)$ and $\psi_3=(U_3,T_3)$, \begin{eqnarray*} G(\psi_1,\psi_2)=-( P(U_1\cdot\nabla)U_2, (U_1\cdot\nabla)T_2 )^t \end{eqnarray*} and \begin{align*} <G(\psi_1,\psi_2),\psi_3>_H=-\int_{0}^{1} \int_{0}^{2\pi/\alpha_2} \int_{0}^{2\pi/\alpha_1} & [<(U_1\cdot\nabla)U_2, U_3>_{\mathbb R^3}\\ & +(U_1\cdot\nabla)T_2 T_3 ]dxdydz, \end{align*} where $P$ is the Leray projection onto the divergence-free $L^2$ vector fields. Let the center manifold function be denoted by \begin{equation} \label{f3} \Phi=\sum_{\beta \ne \beta_{j_1011}}( \Phi_{1}^{\beta}(x_{j_1011}, y_{j_1011})\psi_{1}^{\beta}+ \Phi_{2}^{\beta}(x_{j_1011},y_{j_1011})\psi_{2}^{\beta}).
\end{equation} The direct calculation shows that \begin{equation} \label{f4} \begin{aligned} & G(\psi^{\beta_{j_1011}}_1,\psi^{\beta_{j_1011}}_1) = -(0, \frac{A_1 \pi^2}{2 j_1 \alpha_1} \sin 2 j_1 \alpha_1 x, 0, \frac{A_2 \pi}{2} \sin 2 \pi z )^t,\\ & G(\psi^{\beta_{j_1011}}_1,\psi^{\beta_{j_1011}}_2) =-( \frac{\pi^2}{2 j_1 \alpha_1}\cos 2 \pi z, \frac{A_1 \pi^2}{2 j_1 \alpha_1} (\cos 2 \pi z -\cos 2j_1 \alpha_1 x),0,0)^t,\\ & G(\psi^{\beta_{j_1011}}_2,\psi^{\beta_{j_1011}}_1) = -( \frac{-\pi^2}{2 j_1 \alpha_1} \cos 2 \pi z, \frac{-A_1 \pi^2}{ 2j_1 \alpha_1}(\cos 2 j_1 \alpha_1 x + \cos 2 \pi z ), 0, 0)^t,\\ & G(\psi^{\beta_{j_1011}}_2,\psi^{\beta_{j_1011}}_2) =-(0, \frac{- A_1 \pi^2 }{2 j_1 \alpha_1} \sin 2 j_1 \alpha_1 x, 0, \frac{A_2 \pi}{2} \sin 2 \pi z)^t. \end{aligned} \end{equation} \begin{equation} \label{f5} \begin{aligned} & G(\psi^{\beta_{j_1011}}_1,\Psi^{\beta_{j_1011}}_1) = -(0, \frac{C_1 \pi^2}{2 j_1 \alpha_1} \sin 2 j_1 \alpha_1 x, 0, \frac{C_2 \pi}{2} \sin 2 \pi z )^t,\\ & G(\psi^{\beta_{j_1011}}_1,\Psi^{\beta_{j_1011}}_2) =-( \frac{\pi^2}{2 j_1 \alpha_1}\cos 2 \pi z, \frac{C_1 \pi^2}{2 j_1 \alpha_1} (\cos 2 \pi z -\cos 2j_1 \alpha_1 x),0,0)^t,\\ & G(\psi^{\beta_{j_1011}}_2,\Psi^{\beta_{j_1011}}_1) = -( \frac{-\pi^2}{2 j_1 \alpha_1} \cos 2 \pi z, \frac{-C_1 \pi^2}{ 2j_1 \alpha_1}(\cos 2 j_1 \alpha_1 x + \cos 2 \pi z ), 0, 0)^t,\\ & G(\psi^{\beta_{j_1011}}_2,\Psi^{\beta_{j_1011}}_2) =-(0, \frac{- C_1 \pi^2 }{2 j_1 \alpha_1} \sin 2 j_1 \alpha_1 x, 0, \frac{C_2 \pi}{2} \sin 2 \pi z)^t, \end{aligned} \end{equation} where $A_1=A_1(\beta_{j_1011})$, $A_2=A_2(\beta_{j_1011})$ $C_1=C_1(\beta_{j_1011})$ and $C_2=C_2(\beta_{j_1011})$. 
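As an illustration (a sketch with our own normalization) of how Theorem~\ref{thd2} converts (\ref{f4}) into center manifold coefficients, consider the $\sin 2\pi z$ mode: by (\ref{f4}), the fourth components of $G(\psi^{\beta_{j_1011}}_1,\psi^{\beta_{j_1011}}_1)$ and of $G(\psi^{\beta_{j_1011}}_2,\psi^{\beta_{j_1011}}_2)$ each contribute $-\frac{A_2\pi}{2}\sin 2\pi z$, while the cross terms contribute nothing, and $(0,0,0,\sin 2\pi z)^t$ is an eigenvector of $L_R$ with eigenvalue $\beta_{0021}=-4\pi^2$. Hence (\ref{d7}) gives

```latex
\begin{equation*}
\Phi_1^{\beta_{0021}}
= \frac{1}{-\beta_{0021}}\left(-\frac{A_2\pi}{2}\right)\left(x_{j_1011}^2+y_{j_1011}^2\right)+o(2)
= \frac{-A_2}{8\pi}\left(x_{j_1011}^2+y_{j_1011}^2\right)+o(2),
\end{equation*}
```

which is the coefficient appearing in the expansion of the center manifold function below.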
Hereafter, we make the following convention: \begin{align*} & o(2)= o(x_{j_1011}^2+y_{j_1011}^2)+ O (\mid\beta_{j_1011}(R)\mid \cdot (x_{j_1011}^2+y_{j_1011}^2)),\\ & o(3)= o((x_{j_1011}^2+y_{j_1011}^2)^{3/2})+O (\mid\beta_{j_1011}(R)\mid \cdot (x_{j_1011}^2+y_{j_1011}^2)^{3/2}),\\ & o(4)= o((x_{j_1011}^2+y_{j_1011}^2)^{2})+O (\mid\beta_{j_1011}(R)\mid \cdot (x_{j_1011}^2+y_{j_1011}^2)^{2}). \end{align*} By Theorem~\ref{thd2} and (\ref{f4})-(\ref{f5}), we obtain \begin{equation} \Phi=\Phi_{1}^{\beta_{(2j_1)00}}\psi_{1}^{\beta_{(2j_1)00}}+ \Phi_{2}^{\beta_{(2j_1)00}}\psi_{2}^{\beta_{(2j_1)00}}+ \Phi_{1}^{\beta_{0021}}\psi_{1}^{\beta_{0021}}+o(2), \end{equation} where \begin{align*} & \Phi_{1}^{\beta_{(2j_1)00}}=\frac{A_1 \pi^2} {\sigma \alpha_{(2j_1)0}^4} (x_{j_1011}^2 - y_{j_1011}^2)+o(2), &&\psi_{1}^{\beta_{(2j_1)00}}=(0, -2j_1 \alpha_1 \sin 2 j_1 \alpha_1 x, 0, 0)^t,\\ & \Phi_{2}^{\beta_{(2j_1)00}}=\frac{A_1 \pi^2} { \sigma \alpha_{(2j_1)0}^4}( 2 x_{j_1011} y_{j_1011})+o(2), && \psi_{2}^{\beta_{(2j_1)00}}=(0, 2 j_1 \alpha_1 \cos 2 j_1 \alpha_1 x, 0, 0)^t,\\ & \Phi_1^{\beta_{0021}}=\frac{-A_2}{8 \pi} (x_{j_1011}^2+y_{j_1011}^2)+o(2), && \psi_{1}^{\beta_{0021}}=(0,0,0, \sin 2 \pi z)^t. \end{align*} Note that for any $\psi_i \in H_1$ ($i=1,2,3$), \begin{align} \label{f7} & <G(\psi_1,\psi_2),\psi_2>_H=0,\\ \label{f8} & <G(\psi_1,\psi_2),\psi_3>_H=-<G(\psi_1,\psi_3),\psi_2>_H; \end{align} and for any $\psi_{i}\in E_{jkl}$ $(i=1,2,3)$, \begin{equation} \label{f9} <G(\psi_1,\psi_2),\psi_3>_H=0. \end{equation} A direct calculation shows that \begin{equation} \label{f10} G(\widetilde{\psi}, \psi^{\beta_{j_1 0 1 1}}_i)=0 \qquad \text{for} \qquad \widetilde{\psi} \in \{\psi_{1}^{\beta_{(2j_1)00}}, \psi_{2}^{\beta_{(2j_1)00}}, \psi_{1}^{\beta_{0021}}\},\, i=1,2.
\end{equation} Then by $\psi = x_{j_1011} \psi^{\beta_{j_1011}}_1 + y_{j_1011} \psi^{\beta_{j_1011}}_2 + \Phi(x_{j_1011},y_{j_1011})$ and (\ref{f4})-(\ref{f10}), we derive that \begin{align} <G(\psi, \psi),& \Psi^{\beta_{j_1011}}_1>_H \nonumber \\ = & <G(\psi^{\beta_{j_1011}}_1, \Phi), \Psi^{\beta_{j_1011}}_1>_H x_{j_1011} +<G(\psi^{\beta_{j_1011}}_2, \Phi), \Psi^{\beta_{j_1011}}_1>_H y_{j_1011}+o(3) \nonumber\\ =&- <G(\psi^{\beta_{j_1011}}_1, \Psi^{\beta_{j_1011}}_1), \Phi>_H x_{j_1011} - <G(\psi^{\beta_{j_1011}}_2, \Psi^{\beta_{j_1011}}_1),\Phi>_H y_{j_1011}+o(3) \nonumber\\ = & -\frac{2 A_1 C_1 \pi^6} { \sigma \alpha_1 \alpha_2 \alpha_{(2j_1)0}^4} \left[ (x_{j_1011}^2-y_{j_1011}^2) x_{j_1011} + 2 x_{j_1011} y_{j_1011}^2 \right] -\frac{A_2 C_2 \pi^2}{ 8 \alpha_1 \alpha_2} (x_{j_1011}^2+y_{j_1011}^2) x_{j_1011} +o(3) \nonumber \\ =& -\left( \frac{2 A_1 C_1 \pi^6} { \sigma \alpha_1 \alpha_2 \alpha_{(2j_1)0}^4} +\frac{A_2 C_2 \pi^2}{ 8 \alpha_1 \alpha_2} \right) (x_{j_1011}^2+y_{j_1011}^2) x_{j_1011} +o(3). \nonumber \end{align} Similarly, we obtain \begin{align*} <G(\psi, \psi), \Psi^{\beta_{j_1011}}_2>_H = -\left( \frac{2 A_1 C_1 \pi^6} { \sigma \alpha_1 \alpha_2 \alpha_{(2j_1)0}^4} +\frac{A_2 C_2 \pi^2}{ 8 \alpha_1 \alpha_2} \right) (x_{j_1011}^2+y_{j_1011}^2) y_{j_1011} +o(3). \end{align*} Hence, the reduction equations are given by \begin{equation} \label{f11} \left\{ \begin{aligned} & \frac{dx_{j_1011}}{dt}=\beta_{j_1011}(R) x_{j_1011} + \delta (x_{j_1011}^2+y_{j_1011}^2) x_{j_1011}+o(3),\\ & \frac{dy_{j_1011}}{dt}=\beta_{j_1011}(R) y_{j_1011} + \delta (x_{j_1011}^2+y_{j_1011}^2) y_{j_1011}+o(3), \end{aligned} \right. \end{equation} where \begin{equation} \label{f12} \delta= - \left( \frac{2 A_1 C_1 \pi^4}{ \sigma \alpha_{(2j_1)0}^4} + \frac{A_2 C_2}{8} \right) \Big/ \left( \frac{ \pi^2}{ j_1^2 \alpha_1^2} (1+A_1 C_1) +1 + A_2 C_2 \right) < 0.
\end{equation} A standard energy estimate on (\ref{f11}) together with the center manifold theory shows that, for $R \le R_{c_1}$, $(U,T)=0$ is locally asymptotically stable for the problem (\ref{b1})-(\ref{b4}). Hence by Theorem~\ref{thd1}, the solutions to (\ref{b1})-(\ref{b4}) bifurcate from $(U,T,R)=(0,0,R_{c_1})$ to an attractor $\Sigma_{R}$. Moreover, by (\ref{f11})-(\ref{f12}) together with Theorem 5.10 in \cite{amsbook}, we conclude that $\Sigma_{R}$ is homeomorphic to $S^{1}$ in $H$. \subsection{Completion of the proof of Theorem 3.4} In this subsection, we prove that $\Sigma_{R}$ consists of steady state solutions. It is clear that the first eigenvalue of $L_{R}|_{\widetilde{H}_1}$ is simple for $R\approx R_{c_1}$. By the Krasnosel'skii bifurcation theorem (see among others Chow and Hale \cite{ch} and Nirenberg \cite{nirenberg}), when $R$ crosses $R_{c_1}$, the equations bifurcate from the basic solution to a steady state solution in $\widetilde{H}$. Therefore the attractor $\Sigma_R$ contains at least one steady state solution. Secondly, it is easy to check that the equations (\ref{b1})-(\ref{b4}) defined in $H$ are translation invariant in the $x$-direction. Hence if $\psi_0(x,y,z)=(U(x,y,z),T(x,y,z))$ is a steady state solution, then $\psi_0(x+\rho,y,z)$ is a steady state solution as well for every $\rho\in\mathbb R$. By the periodic condition in the $x$-direction, the set \begin{align*} S_{\psi_0}=\{\psi_0(x+\rho,y,z) | \rho \in \mathbb R \} \end{align*} is a cycle homeomorphic to $S^1$ in $H$. Therefore each steady state of (\ref{b1})-(\ref{b4}) generates a cycle of steady state solutions. Hence the bifurcated attractor $\Sigma_R$ consists of steady state solutions. The proof of Theorem~\ref{thc4} is complete. \subsection{Proof of Theorem 3.5} The proof follows directly from the classical Hopf bifurcation theorem and Lemma~\ref{le6}.
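The reduction equations (\ref{f11}) can also be checked numerically: dropping the $o(3)$ remainder, every small nonzero initial datum is driven onto the circle of radius $\sqrt{\beta_{j_1011}(R)/|\delta|}$, consistent with the bifurcated attractor being homeomorphic to $S^1$. The following sketch uses illustrative values of $\beta>0$ and $\delta<0$ that are not taken from the model:

```python
import math

# Toy forward-Euler integration of the truncated reduction equations
#   dx/dt = beta*x + delta*(x^2+y^2)*x,  dy/dt = beta*y + delta*(x^2+y^2)*y.
# beta, delta and the initial condition are illustrative placeholders.
beta, delta = 0.5, -1.0
x, y = 0.3, 0.1
dt = 1e-3
for _ in range(200_000):          # integrate up to t = 200
    r2 = x * x + y * y
    x += dt * (beta * x + delta * r2 * x)
    y += dt * (beta * y + delta * r2 * y)

r = math.sqrt(x * x + y * y)
# Every trajectory approaches the circle r = sqrt(beta/|delta|);
# the angle is a conserved quantity, so the limit set is a full circle
# of steady states.
print(r, math.sqrt(beta / abs(delta)))
```

Since the cubic term only damps the radius, the Euler map has the same fixed-point circle as the flow, which is why the check converges to the exact radius.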
\section{Introduction} The two-dimensional spin-1/2 quantum kagome antiferromagnet (QKA) was recognized early on as an ideal candidate for stabilizing a spin-liquid state.\cite{Balents, LMM} The possibility of generating fractionalized excitations such as spinons and the very nature of its ground state (GS) have been hotly debated over the last 20 years, with proposals of many competing states -- gapped\cite{Yan,Depenbrock} and gapless\cite{Ran,Iqbal} spin liquids, as well as valence-bond solids.\cite{Singh,Evenbly} While the most recent calculations clearly point to a gapped spin-liquid GS,\cite{Yan,Depenbrock} likely a resonating valence bond (RVB) state, the few experimental realizations have been found gapless,\cite{Mendels,Clark} thus apparently contradicting this scenario. It is commonly argued that this discrepancy between experimental findings and theoretical predictions results from weak perturbing interactions acting on the GS manifold of the isotropic Heisenberg exchange. The most deeply studied case is that of the out-of-plane Dzyaloshinsky-Moriya (DM) magnetic anisotropy\cite{DM} $D_z$, which is present in any real QKA as the bonds lack inversion symmetry and is theoretically predicted to create a quantum critical point (QCP) at $D_z^c/J\simeq0.1$.\cite{Cepas} This separates a moment-free phase ($D_z<D_z^c$) from a N\'eel-ordered phase ($D_z>D_z^c$).\cite{Cepas,Messio,Huh} The mineral herbertsmithite, $\gamma$-ZnCu$_3$(OH)$_6$Cl$_2$, believed so far to be the best realization of the QKA,\cite{Shores} appears to be a spin liquid \cite{Mendels,deVries2} sustaining spinon excitations,\cite{HahnNature} in line with its DM anisotropy $D_z/J=0.06(2)$.\cite{Zorko,Rousochatzakis,Samir} Its location close to the QCP is likely responsible for the observed field- \cite{Jeong} and pressure-induced \cite{Kozlenko} freezing.
This theoretical scenario awaits further validation, potentially by finding new compounds lying in the N\'eel-ordered region of the phase diagram. \begin{figure}[b] \includegraphics[trim = 17mm 13mm 58mm 23mm, clip, width=1\linewidth]{Fig1.eps} \caption{The two inequivalent copper sites Cu(1) and Cu(2) on the kagome lattice in vesignieite~($ab$ crystallographic plane). (a) The double-headed arrows connect apical O(1) sites in each CuO$_6$ octahedron and define the principal axis of the $g$-tensor on each Cu$^{2+}$ site. (b) The Dzyaloshinsky-Moriya (DM) pattern of out-of-plane $D_z$ (uniform) and in-plane $D_p$ components. (c) Two principal directions, $\Delta$ and $E$, of the local symmetric anisotropic-exchange (AE) tensor; $\Delta$ is canted by $\theta_0=45^\circ$ out of the kagome plane, while $E$ lies in the plane.} \label{Fig1} \end{figure} In this context, the mineral vesignieite, BaCu$_3$V$_2$O$_8$(OH)$_2$, which has been recently highlighted as a new realization of the QKA,\cite{Okamoto} is an appealing case. It crystallizes in the monoclinic space group\cite{Lafontaine} $C2/m$ and the minute 0.07\% bond-length difference due to two inequivalent Cu$^{2+}$ sites (Fig.~\ref{Fig1}) makes the triangles very close to being equilateral.\cite{Colman} Indeed, there have been suggestions that the actual structure has equilateral symmetry,\cite{Yoshida} though this may not yet be conclusive.\cite{Boldrin} The magnetism of vesignieite is dominated by the nearest-neighbor antiferromagnetic interaction\cite{Okamoto} $J=53$~K that leads to a maximum in local susceptibility $\chi_i$ at the temperature $T\simeq 0.5J$,\cite{Quilliam} detected by nuclear magnetic resonance (NMR). In marked contrast to herbertsmithite, vesignieite shows a magnetic transition\cite{Colman,Yoshida,Boldrin,Quilliam,MYoshida} to a $q=0$ N\'eel state at $T_N=9$~K. During this spin freezing transition, an additional out-of-plane spin component creates a ZFC/FC bifurcation. 
Based on the width of electron-spin-resonance (ESR) spectra,\cite{Zhang} vesignieite has been suggested to possess a large DM anisotropy $D_z$ and thus to lie in the ordered region of the phase diagram.\cite{Colman,Quilliam,Yoshida, MYoshida} Since vesignieite appears to be the first clear case of a long-range ordered QKA and no proper attempt to identify and quantify its magnetic anisotropy has been reported, a detailed study is essential. In this paper, we clarify the driving force of magnetic ordering in vesignieite by determining its dominant magnetic anisotropy. Employing the local-probe ESR technique, we show that the in-plane component of the dominant DM anisotropy, $D_p$, exceeds $D_z$, in contrast to herbertsmithite. We propose that such a DM vector crucially suppresses quantum fluctuations and thus critically affects the GS of this material. Additionally, we assess the importance of a symmetric anisotropic exchange (AE) that has recently been suggested as an important spin-Hamiltonian component of herbertsmithite.\cite{Han, Ofer} \section{Experimental details} Our ESR experiments were conducted at 328.8~GHz on a custom-made spectrometer working in transmission mode at the NHMFL, Tallahassee, USA, allowing single field-sweep detection of spectra with negligible background. The sample was hydrothermally annealed powder similar to that used in previous studies.\cite{Colman,Quilliam} \section{Theoretical background} ESR has proven extremely efficient in determining magnetic anisotropy, either by detecting collective excitations,\cite{ZorkoFe,Zvyagin} or through the modeling of shifts\cite{Povarov,Furuya} and line widths\cite{Zorko,ZorkoSCBO} of a paramagnetic resonance. Both the shifts and the widths are non-zero only when the anisotropy is finite.\cite{AB} They allow one to distinguish between different forms of the anisotropy and to quantify it directly.
In an ESR experiment a magnetic system is exposed to the applied static magnetic field $B_0$ and an electromagnetic wave with the polarization of its magnetic field perpendicular to $B_0$ (conventional Faraday configuration). Within linear response theory the absorption spectrum $I(\omega)$ is proportional to the imaginary part of the dynamical susceptibility,\cite{KT} \begin{equation} I(\omega)\propto\chi''({\rm \textbf{q}}\rightarrow0,\omega) \propto \int_{-\infty}^{\infty}{\rm d}t \left\langle S^+(t)S^-(0)\right\rangle {\rm e}^{i \omega t}/T, \label{eqs1} \end{equation} and thus effectively measures spin correlations in the direction perpendicular to the applied field, $\left\langle S^+(t)S^-(0)\right\rangle$, where $\langle\;\rangle$ denotes canonical averaging and $S^\alpha=\sum_iS_i^\alpha$ is the $\alpha$-component of the total spin operator. Calculating the time-dependent spin operator in the Heisenberg representation $S^+(t)={\rm e}^{\frac{i}{\hbar}\mathcal{H}t}S^+{\rm e}^{-\frac{i}{\hbar}\mathcal{H}t}$ for a general spin Hamiltonian $\mathcal{H}$ is a nontrivial problem. Therefore, a few approximate solutions have been developed. The well-established Kubo-Tomita (KT) approach\cite{KT} relies on dividing the spin Hamiltonian into two parts, $\mathcal{H}=\mathcal{H}_0+\mathcal{H}'$, where the first, dominant part $\mathcal{H}_0$ contains only the Zeeman term and the Heisenberg isotropic exchange ($J$), while the magnetic anisotropy term $\mathcal{H}'$ is treated as a perturbation. The latter then determines the shape of the ESR spectrum (its position and width), because $\mathcal{H}_0$, possessing SU(2) symmetry, conserves the total magnetization and therefore leads to a $\delta$-function resonance at the field $B_0$.
For finite $\mathcal{H}'$, at high temperatures ($T\gg J$) the KT theory predicts a Lorentzian exchange-narrowed\cite{Anderson} ESR absorption line with the full-width-half-maximum line width\cite{Castner} \begin{equation} \Delta B = C \frac{k_B}{g \mu _B} \sqrt{\frac{M_2^3}{M_4}}, \label{eqs2} \end{equation} where \begin{align} M_{2}&= \notag \frac{\left\langle \left[ \mathcal{H}^{\prime },S^{+}\right] [ S^{-},\mathcal{H}^{\prime }] \right\rangle} {\left\langle S^{+}S^{-}\right\rangle},\\ M_{4}&=\frac{\left\langle \left[ \mathcal{H}-\mathcal{H}_{Z},\left[ \mathcal{H}^{\prime },S^{+}\right] \right] [ \mathcal{H}-\mathcal{H}_{Z},\left[ \mathcal{H}^{\prime },S^{-}\right] ] \right\rangle}{ \left\langle S^{+}S^{-}\right\rangle}, \end{align} are the second and the fourth moment of the absorption line, respectively, with $\left[ \; \right]$ denoting a commutator; $C$ is a constant of the order of unity (see below), $k_B$ stands for the Boltzmann constant and $\mu_B$ for the Bohr magneton. The expression~(\ref{eqs2}) is valid if the magnetic anisotropy is small compared to $\mathcal{H}_0$ and if spin diffusion is negligible, which is generally the case in spin systems with dimensionality exceeding one.\cite{Richards} Strictly speaking, the ESR absorption line is never truly Lorentzian, as all of its moments, given by the spin Hamiltonian, are always finite, while they diverge for the Lorentzian line shape. In systems with strong isotropic exchange compared to the magnetic anisotropy, deviations from the Lorentzian shape occur only in the far wings of the resonance and an approximate line shape that is a product of the Lorentzian and a broad Gaussian $\propto{\rm e}^{-(B-B_0)^2/2B_e^2}$,\cite{Castner} with $B_e=k_B/g\mu_B\sqrt{M_4/M_2}$ being the exchange field, is applicable. This then yields $C=\sqrt{2\pi}$. \section{Results} In Fig.~\ref{Fig2}(a) we show derivative ESR spectra typical of those recorded in the $T$-range between 3 and 300~K.
The spectra have a width similar to that in herbertsmithite,\cite{Zorko} suggesting that substantial magnetic anisotropy is also present in vesignieite. In addition to this broad component, we observe a narrow component with principal $g$-factor values of 2.05 and 2.25, typical of Cu$^{2+}$ ions.\cite{AB} We attribute this narrow component to a minor impurity phase, since its intensity at 300~K amounts to only 0.3\% of the broad-component intensity and exhibits a Curie-like $T$-dependence. The impurity signal is thus much too small to explain the substantial low-$T$ increase of the bulk susceptibility $\chi_b$ [Fig.~\ref{Fig2}(b)]. The ESR intensity of the broad component, $\chi_{\rm ESR}$, convincingly follows $\chi_b$ [Fig.~\ref{Fig2}(b)], and not the nonmonotonic intrinsic $\chi_i$. This has an important implication for the hitherto unknown origin of the low-$T$ increase of $\chi_b$.\cite{Okamoto,Colman} Indeed, the observation of a single broad-component ESR line, rather than two distinct lines, reveals that the spins contributing to $\chi_b$ are necessarily exchange coupled with the intrinsic Cu$^{2+}$ spins. Bond disorder due to oxygen vacancies or some other non-stoichiometry effect then provides a credible explanation for the mismatch between $\chi_b$ and $\chi_i$. Such disorder also explains the inhomogeneous broadening of the $^{51}$V NMR lines far above $T_N$.\cite{Quilliam} \begin{figure}[t] \includegraphics[trim = 0mm 22mm 0mm 15mm, clip, width=1\linewidth]{Fig2.eps} \caption{(a) The ESR spectra (symbols) measured at 328.8~GHz and the fits (lines). Arrows point to impurity lines with corresponding $g$-factors.
(b) Comparison of the ESR intensity $\chi_{\rm ESR}$ (symbols) with the bulk magnetic susceptibility $\chi_b$ (solid line) measured in 5~T and the intrinsic susceptibility $\chi_i$ (dashed line) obtained from $^{51}$V NMR.\cite{Quilliam} The arrow indicates the intensity of the impurity ESR signal at 3~K.} \label{Fig2} \end{figure} For the magnetic field applied perpendicular ($z$) and within the kagome plane ($p$) we find $g_z>g_p$. As the local anisotropy axis, set by the direction of the shortened Cu-O(1) bond [Fig.~1(a)], makes an angle of $26.7^\circ<45^\circ$ with the kagome plane, the principal $g$-factor value $g_\|$ along the apical direction is smaller than the value $g_\bot$ in the perpendicular direction. This confirms that the Cu$^{2+}$ orbital state involves occupation of the $d_{3z^2-r^2}$ state\cite{Okamoto} rather than the more common $d_{x^2-y^2}$ orbital that would lead to $g_\|>g_\bot$.\cite{AB} \subsection{ESR line-width analysis} In order to determine the $T$-evolution of the ESR line width and its origin, we first fitted the experimental spectra [Fig.~\ref{Fig2}(a)] to a powder-averaged line shape based on a field distribution originating from the $g$-factor anisotropy $g(\theta)=(g_z^2{\rm cos}^2\theta+g_p^2{\rm sin}^2\theta)^{1/2}$ that is convoluted with a Lorentzian function with the phenomenological line width $\Delta B(\theta)=(\Delta B_z^2{\rm cos}^2\theta+\Delta B_p^2{\rm sin}^2\theta)^{1/2}$. Here $\theta$ denotes the polar angle between the applied magnetic field and the normal to the plane. The presumed independence of both $g$ and $\Delta B$ of the azimuthal angle stems from the near threefold rotational symmetry of the lattice.
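The angular dependences entering the fit can be sketched in a few lines. The $g$-factor and line-width values below are representative Cu$^{2+}$ numbers chosen purely for illustration, not the fitted vesignieite parameters:

```python
import math

# Phenomenological angular dependences used in the powder fit:
#   g(theta)  = (g_z^2 cos^2 theta + g_p^2 sin^2 theta)^(1/2)
#   dB(theta) = (dB_z^2 cos^2 theta + dB_p^2 sin^2 theta)^(1/2)
# All numbers are illustrative placeholders.
g_z, g_p = 2.25, 2.05
dB_z, dB_p = 0.15, 0.25   # widths in tesla, hypothetical

def g(theta):
    return math.sqrt(g_z**2 * math.cos(theta)**2 + g_p**2 * math.sin(theta)**2)

def dB(theta):
    return math.sqrt(dB_z**2 * math.cos(theta)**2 + dB_p**2 * math.sin(theta)**2)

# Powder average over the polar angle, weighting each theta by sin(theta).
n = 2000
thetas = [(i + 0.5) * math.pi / (2 * n) for i in range(n)]
w = [math.sin(t) for t in thetas]
g_avg = sum(wi * g(t) for wi, t in zip(w, thetas)) / sum(w)
print(g(0.0), g(math.pi / 2), g_avg)  # g_z, g_p, and a powder value in between
```

The limiting cases $\theta=0$ and $\theta=\pi/2$ return $g_z$ and $g_p$, and the powder average necessarily falls between them, which is the structure the fit exploits.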
Our approach elaborates on the previous ESR report, which employed a simpler analysis yielding only an average $\Delta B$ and $g$-factor.\cite{Zhang} \begin{figure}[t] \includegraphics[trim = 0mm 22mm 0mm 15mm, clip, width=1\linewidth]{Fig3.eps} \caption{(a) $T$-dependence of the ESR line width (open symbols) for the perpendicular ($z$) and in-plane ($p$) directions of the applied magnetic field. The linear contribution (lines) is subtracted to obtain the intrinsic kagome-lattice line width (full symbols). (b) The anisotropic-exchange model leads to a sizable temperature variation of the $g$-factor (lines) which is not detected in the experimental data (symbols).} \label{Fig3} \end{figure} The ESR line width in vesignieite~exhibits a minimum at $T_{\rm min}=40$~K and increases linearly with $T$ at least up to 300~K, i.e., $T/J\sim 6$ [Fig.~\ref{Fig3}(a)]. This is in sharp contrast to herbertsmithite, where it was found constant for $T\gtrsim J$.\cite{Zorko} In general, the dying out of spin correlations above the characteristic exchange temperature $J$ makes their contribution to the ESR line width vanish.
A similar linearly increasing ESR line width at surprisingly high temperatures ($T\gg J$) has been observed in localized-spin systems in several instances.\cite{ZorkoSCBO, Castner, Seehra, HuberJPCS, SeehraJPCM, Huber, Heinrich, Deisenhofer} Such behavior can arise either from the phonon modulation of the anisotropic exchange\cite{Seehra} or of the crystalline field, the latter for $S>1/2$.\cite{HuberJPCS} We therefore attribute the observed behavior in vesignieite~to an additional line-broadening mechanism arising from spin-phonon coupling: a direct phonon process that modulates the magnetic anisotropy yields a linearly increasing ESR relaxation.\cite{Seehra} We propose that the linear increase might be related to a structural instability of vesignieite~associated with the energetic proximity of the monoclinic crystal structure to the higher-symmetry rhombohedral ($R\bar{3}m$) structure, the latter with a perfectly undistorted kagome lattice.\cite{Yoshida} In cases of phonon-induced ESR broadening, the line width is regularly written as a sum of a temperature-independent and a linearly increasing contribution. Such a division is justified for relaxation mechanisms that contribute independently to the relaxation of the spin correlation function $\left\langle S^+(t)S^-(0)\right\rangle$, whose decay is close to exponential (for close-to-Lorentzian line shapes), as is regularly encountered in concentrated magnetic insulators. The usual subtraction of the linearly increasing line-width contribution in vesignieite~then gives the width intrinsic to the kagome spin system.
Its increase with decreasing $T$ below $T_{\rm min}$ is similar to that observed in herbertsmithite~and can be attributed to the building-up of spin correlations, which are also responsible for the maximum in $\chi_i$ at 25~K.\cite{Quilliam} In order to determine the magnetic anisotropy, we therefore make use of the 40~K spectrum, which corresponds well to the paramagnetic limit, as both the spin-phonon and the spin-correlation induced broadenings are small. Moreover, since $\chi_{\rm ESR}$ is not much different from $\chi_i$ at 40~K, we do not expect any notable effect of the bond disorder on the 40~K ESR spectrum.\cite{Huber2} In this paramagnetic limit, we model the ESR line-width anisotropy $\Delta B(\theta)$ by employing the Kubo-Tomita moment approach [Eq.~(\ref{eqs2})]. Both the DM magnetic anisotropy [Fig.\ref{Fig1}(b)] \begin{equation} \mathcal{H'_{\rm DM}}=\sum_{(ij)}{\bf D}_{ij}\cdot {{\bf S}_{i} \times {\bf S}_{j}} \label{DM} \end{equation} and the traceless symmetric anisotropic exchange [Fig.\ref{Fig1}(c)], written in a local basis as \begin{eqnarray} \mathcal{H'_{\rm AE}}&=\sum_{(ij)} \Big[ \frac{2\Delta}{3} S_i^\xi S_j^\xi +\left(-\frac{\Delta}{3}+\frac{E}{2} \right)S_i^\eta S_j^\eta \nonumber \\ &+\left(-\frac{\Delta}{3}-\frac{E}{2} \right)S_i^\nu S_j^\nu \Big], \label{AE} \end{eqnarray} yield a line width of the general form \begin{equation} \Delta B(\theta) = \sqrt{2 \pi} \frac{k_B}{2 g(\theta) \mu_B J} \sqrt{ \frac{(a+b\;{\rm cos}^2\theta)^3}{c+d\;{\rm cos}^2\theta}}, \label{eq1} \end{equation} where $g(\theta)$ denotes the $g$-factor averaged over the basic hexagon of the kagome lattice, and the constants $a$, $b$, $c$, $d$ are related to the anisotropy constants of each model [see Eqs.~(\ref{DMwidth}) and~(\ref{AEwidth}) in Appendix~\ref{appA}].
Although this angular dependence is more complicated than the phenomenological one employed above, their differences are minimal (see Appendix~\ref{appB}), assuring that the $T$-dependences of all ESR parameters in Fig.~\ref{Fig3} are meaningful. \begin{figure}[t] \includegraphics[trim = 00mm 9mm 0mm 6mm, clip, width=1\linewidth]{Fig4a.eps} \caption{Reduced $\chi^2$ of fitting the 40~K ESR spectrum with (a) the DM and (b) the AE model. The former yields the optimal parameters $|D_p|/J =0.19(2)$, $|D_z|/J =0.07(3)$ and the latter two inequivalent solutions, $\Delta/J =\pm 0.15(2)$, $E/J=\mp 0.13(2)$ and $\Delta/J=\pm 0.04(2)$, $E/J=\mp 0.21(1)$. Center of each panel: Comparison of the best fit (line) and experimental data.} \label{Fig4} \end{figure} Fitting the experimental spectrum to the powder-averaged line shape with the line width given by Eq.~(\ref{eq1}) provides fits of equal quality for both models. These are displayed in Fig.~\ref{Fig4} together with $\chi^2$ maps spanning the parameter space. For the DM model, we find the solution $|D_p|/J =0.19(2)$, $|D_z|/J =0.07(3)$, while the AE model yields two inequivalent solutions, $\Delta/J =\pm 0.15(2)$, $E/J=\mp 0.13(2)$ and $\Delta/J=\pm 0.04(2)$, $E/J=\mp 0.21(1)$. The relative sizes of the DM and AE anisotropies with respect to the dominant exchange $J$ are thus very similar. Since the DM interaction results from a first-order correction of $J$ in the spin-orbit coupling, which for Cu$^{2+}$ ions is a $\sim$10\% perturbation on $J$,\cite{AB} while the AE interaction is a second-order correction, the DM interaction is generally considered dominant.
However, caution is necessary, as $D_p$, the largest anisotropy in the DM model of vesignieite, is reducible on the kagome lattice.\cite{Cepas} This is because it possesses a hidden symmetry\cite{Shekhtman} and can be transformed into an effective term of the order $D_p^2/J$ by applying a nonuniform spin rotation.\cite{Choukroun} Therefore, we provide a second criterion, based on the ESR line shift, which allows us to distinguish between the two anisotropy models in vesignieite. \subsection{ESR line-shift analysis} When the anisotropy is smaller than the isotropic exchange, as is the case here, Nagata's theory\cite{Nagata} of the ESR line shift can be applied. Accordingly, the shift of the $g$-factor from its infinite-$T$ value $g^\infty$ is given by the first moment,\cite{Nagata,Maeda} $g-g^\infty = {\left\langle \left[ S^-,\left[S^+,\mathcal{H}' \right] \right] \right\rangle}/{2\mu_B B_0\left\langle S^z\right\rangle}$. It is important to stress that in this first-order calculation (in $\mathcal{H}'$) the DM interaction leads to zero shift,\cite{Maeda} while the shift due to the AE interaction scales with the susceptibility $\chi$ in the paramagnetic regime,\cite{Nagata,Nagata2} \begin{equation} g_Z-g_Z^\infty = \frac{\chi}{2N_A g \mu_0 \mu_B^2}\sum_{j\neq i}{\left(2 \Delta_{ij}^{ZZ}-\Delta_{ij}^{XX}-\Delta_{ij}^{YY}\right)}. \label{shift2} \end{equation} Here $\Delta^{ZZ}$ denotes the component of the AE tensor along the applied field and $\Delta^{XX},\,\Delta^{YY}$ those in two perpendicular directions, $N_A$ is Avogadro's number and $\mu_0$ the vacuum permeability. Quantifying this expression in vesignieite~with the above-determined AE parameters, we arrive at the scalings $(g_z-g_z^\infty)/\chi = \pm 22.3$~(emu/mol~Cu)$^{-1}$ and $(g_p-g_p^\infty)/\chi = \mp 11.7$~(emu/mol~Cu)$^{-1}$ for the two relevant directions.
These values significantly overestimate the measured ESR shifts, which are found to be constant for $T\gtrsim 50$~K within much smaller error bars [Fig.~\ref{Fig3}(b)]. The $g_z$ data limit the AE anisotropy to values at least four times smaller. Therefore, the ESR line width, which scales with the square of the anisotropy, is entirely determined by the dominant DM interaction. The experimental ESR shift becomes sizable below 25~K [Fig.~\ref{Fig3}(b)], which was previously ascribed to short-range ordering effects.\cite{Zhang} However, due to the traceless nature of the AE interaction, it is generally expected that even in the short-range correlated regime the ESR shift will exhibit both positive and negative values for different directions of the applied field,\cite{Nagata,Nagata2} just as occurs at higher $T$. This is not the case; therefore, we attribute the solely positive low-$T$ $g$-shifts to local inhomogeneity present in vesignieite. Namely, in inhomogeneous systems (e.g., impure systems and systems with several inequivalent sites) spatially varying local fields exclusively lead to positive $g$-shifts\cite{Malozemoff,Nagata3} that stem from the $q\neq0$ Fourier components of the inhomogeneous field.\cite{Nagata3} This corroborates our proposition, based on the scaling of $\chi_{\rm ESR}$ with $\chi_b$, that there is significant bond disorder in vesignieite. \section{Discussion} Although in both herbertsmithite~and vesignieite~the ESR spectra can be accounted for by the DM magnetic anisotropy, we point out an essential difference. While the out-of-plane DM component $D_z$ is dominant in herbertsmithite,\cite{Zorko} it is the in-plane component $D_p$ that dominates in vesignieite. We additionally note that the Kubo-Tomita approach generally underestimates the strength of the reducible $D_p$ component with respect to the irreducible $D_z$ component. It is therefore important to inspect the effect of both DM components on the GS of the QKA.
Although in the classical limit both components immediately lead to magnetic ordering, their effects are rather different.\cite{Elhajal} The GS due solely to the $D_z$ component is an in-plane 120$^\circ$ spin structure invariant under a global rotation around the $z$ axis, while a finite $D_p$ component prefers a state with a finite uniform out-of-plane spin component, thus removing this state from the GS manifold of the Heisenberg Hamiltonian by eliminating the rotation symmetry. For $D_z>0$, the tilt $\phi$ of each spin from the kagome plane caused by $D_p$ is given by $\tan(2\phi)=2D_p/(\sqrt{3}J+D_z)$,\cite{Elhajal} which yields $\phi=6^\circ$ in vesignieite. Interestingly, the weak ferromagnetic spin component estimated by NMR\cite{MYoshida} amounts to 0.05--0.12~$\mu_B$, which together with the dominant in-plane component being larger than 0.6~$\mu_B$ limits the tilting angle to $3^\circ<\phi<9^\circ$. In the quantum picture\cite{Cepas} N\'eel ordering is induced by $D_z$ only for $D_z>D_z^c\simeq 0.1$. Including a finite $D_p$ in this case leads to a weak ferromagnetic moment in the $z$ direction that is still linear in $D_p/J$ as in the classical case,\cite{Cepas} while the position of the QCP would be affected by the $D_p^2/J$ term as well as by linear terms in the AE anisotropy. In vesignieite, the condition $D_p>D_z$ could profoundly affect the QCP, because $D_p$ disfavors spin structures from the GS manifold of the isotropic $J$ and should therefore be much more efficient in suppressing quantum fluctuations than $D_z$. This could explain why magnetic ordering in vesignieite~occurs at the surprisingly high temperature $T_N/J=0.17$, despite its possessing a $D_z/J$ very similar to that of herbertsmithite. Comprehensive theoretical investigations of the general DM-perturbed phase diagram of the QKA are thus highly desired.
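The classical canting formula can be checked directly with the DM values fitted above, $|D_p|/J=0.19$ and $|D_z|/J=0.07$ (the exchange $J$ scales out of the ratio):

```python
import math

# Arithmetic check of the classical tilt formula
#   tan(2*phi) = 2*D_p / (sqrt(3)*J + D_z),
# evaluated with the DM parameters fitted in the text (in units of J).
Dp_over_J, Dz_over_J = 0.19, 0.07
phi = 0.5 * math.atan2(2 * Dp_over_J, math.sqrt(3) + Dz_over_J)
phi_deg = math.degrees(phi)
print(phi_deg)  # ~6 degrees, as quoted, and inside the NMR window 3-9 degrees
```

The result reproduces the $\phi=6^\circ$ quoted in the text and lies comfortably inside the $3^\circ<\phi<9^\circ$ window inferred from the NMR moments.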
\section{Conclusions} Employing the ESR line-width and line-shift analyses, we have shown that the in-plane DM interaction prevails in the novel QKA vesignieite~and is most likely responsible for its magnetic ordering below $T_N=9$~K. We have detected intrinsic inhomogeneity of the kagome planes, which we attribute to bond disorder, as well as a sizable spin-phonon contribution that might be related to a lattice instability. Last, we note that a preliminary analysis of the ESR line\cite{Zorko} of herbertsmithite~with the AE model yields the anisotropy constants $|\Delta|/J =0.072$ and $|E|/J= 0.074$, which give an ``effective'' AE anisotropy of $|\Delta_{\rm av}|/J= 0.06$ if averaged over the triangle. This value is notably smaller than the recent estimate $\Delta_{\rm av}/J\simeq -0.1$,\cite{Han} which would lead to much broader ESR lines, since their width scales with the square of the anisotropy. Increasing the sensitivity by performing single-crystal ESR and applying the above-presented analysis is likely the most reliable approach for resolving the open issue of the dominant anisotropy in herbertsmithite, which could turn out to be the crucial milestone in understanding its spin-liquid properties. \acknowledgments We thank O.~C\'epas and S.~El~Shawish for valuable discussions. AZ acknowledges the financial support of the Slovenian Research Agency (projects J1-2118, BI-US/09-12-040 and Bi-FR/11-12-PROTEUS-008). The NHMFL is supported by NSF Cooperative Agreement No.~DMR-1157490, and by the State of Florida.
\section{Introduction} Proton decay is a generic feature of any unification scheme, since the unification of quarks and leptons in a common multiplet introduces extra interactions that violate baryon number. Proton decay rates and modes are a prediction of GUT models that plays a crucial role in their phenomenological viability. In fact, proton decay has turned out to be the nemesis of many GUT and Superstring models. It is a welcome prediction that can be used to test GUTs. In supersymmetric GUTs with conserved $R$--parity the dominant baryon number violating operators are dimension $D=5$, while $D=6$ operators are in general suppressed due to the increase of the unification scale in comparison to its non--supersymmetric values. $D=5$ operators are proportional to the Yukawa couplings and to the inverse of the heavy mass \cite{DIMF}. In minimal models the Yukawa couplings involved are associated with the fermion masses. The values of these couplings play an important role in the final value of the proton decay rate and the resulting hierarchy of existing modes. Nevertheless, Superstring embeddable models \cite{SEM}, or models of phenomenologically oriented GUTs that treat the fermion mass problem \cite{FMM}, come with an extended Higgs sector. In this talk we summarize the results of a recent work \cite{yo}, where we propose a mechanism for eliminating or suppressing such operators, based on the use of textures of the hypercharge-1/3 mass matrix accompanied by certain constraints on the extra triplet couplings to matter. \section{Proton Decay in minimal SU(5) models} Let us consider unified models with the minimal Higgs content that allows the breaking of the $SU(5)$ symmetry to $SU(3)_c\times SU(2)_L\times U(1)_Y$ at the GUT scale $M_{GUT}$. Non-supersymmetric $SU(5)$ models predict proton decay as a consequence of gauge interactions between the heavy gauge bosons, which originate when the $SU(5)$ symmetry is broken, and the quarks and leptons.
The baryon-number-violating operators are $D=6$ and they are suppressed by the square of the mass of the heavy particles ($\approx M_{GUT}$). The dominant proton decay mode in these models is $p \rightarrow e^{+} \pi^0$. The calculated lifetime \cite{PN} is \begin{equation} \tau(p\rightarrow e^+ \pi^0)\approx \left(\frac{M_{GUT}} {3.5\cdot 10^{14}\, {\rm GeV}}\right)^4\times 10^{31\pm 1} \,{\rm yr}, \end{equation} while the experimental bound for this process is \cite{pdb} \[ \tau(p \rightarrow e^{+} \pi^0) > 5.5 \times 10^{32} \,{\rm yr}. \] Since typical values for the quasi-unification of the non-supersymmetric $SU(3)_c\times SU(2)_L\times U(1)_Y$ couplings are $M_{GUT}\approx 10^{13}-10^{14}$ GeV, the predicted proton lifetime falls below the experimental bound. In SUSY SU(5) models, the scale of unification is increased to $M_{GUT}\approx 10^{16}$ GeV; this is enough to bring the prediction for $D=6$ mediated proton decay to a safe limit: \[ \tau(p \rightarrow e^{+} \pi^0) \approx 10^{35\pm 1} \,{\rm yr}. \] However, proton decay is still predicted, at smaller rates, due to Yukawa interactions. In this case the supersymmetric partners of the colored triplet Higgs bosons interact with lepton (slepton) and quark (squark) fields. $D=5$ operators arise, suppressed by only one power of $M_{GUT}$. The color triplets are contained in the Higgs pentaplets ${h},{\overline{h}}$. The quarks and leptons are assigned to the $\phi({\bf\overline{5}})+\psi({\bf 10})$ representations of $SU(5)$. The part of the superpotential relevant for dimension--5 decay is \begin{equation} Y^u_{ij} \,\psi_i \psi_j {h}_{1} + Y^d_{ij}\,\psi_i\phi_j {\overline{h}}_{1}+{\mu}{h}{\overline{h}} +{\lambda}{h}{\Sigma}{\overline{h}} \ , \label{abo} \end{equation} where the symbol $\Sigma$ stands for the adjoint Higgs superfield in the {\bf{24}} representation. The $SU(5)$ symmetry is broken down to the MSSM when $\Sigma$ gets a VEV, $V$, along the 24--direction.
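The quartic dependence of the $D=6$ lifetime on the unification scale can be checked with one-line arithmetic. The reference scale of $3.5\times 10^{14}$~GeV used below is our reading of the normalization in the lifetime formula and should be treated as an assumption:

```python
# D=6 proton lifetime scales as the fourth power of the unification scale:
#   tau ~ (M_GUT / M_ref)^4 * 1e31 yr.
# M_ref = 3.5e14 GeV is an assumed normalization; the bound is the one
# quoted in the text for p -> e+ pi0.
M_ref = 3.5e14            # GeV (assumption)
bound = 5.5e32            # yr

def tau(M_gut):
    return (M_gut / M_ref) ** 4 * 1e31

tau_nonsusy = tau(1e14)   # non-SUSY quasi-unification scale
tau_susy = tau(1e16)      # SUSY unification scale
print(tau_nonsusy, tau_susy)
# Raising M_GUT by two orders of magnitude gains eight orders in lifetime,
# moving the prediction from below to far above the experimental bound.
```

This makes explicit why the increased SUSY unification scale renders $D=6$ mediated decay safe while the non-supersymmetric prediction is excluded.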
The isodoublet and colour--triplet masses are \begin{equation} {M}_2=\mu-3{\lambda}V \ , \end{equation} \begin{equation} {M}_3=\mu+2{\lambda}V \ . \end{equation} The triplets are heavy, ${M}_3\sim M_{GUT}$, while the doublet pair must remain light, ${M}_2\sim m_w$. Hence a fine--tuning condition must be imposed on the parameters of the superpotential (\ref{abo}). The effective $SU(3)_c\times SU(2)_L\times U(1)_Y$ superpotential describing the couplings of quarks and leptons to the extra coloured triplets of $D$--quark type is \begin{equation} Y^u_{ij}Q_i Q_j D+Y^d_{ij} Q_i L_j{\overline{D}} + Y^u_{ij}E_i^{c}U_j^{c}D \ . \end{equation} $D=5$ operators can be converted into four--fermion operators by the appropriate gaugino dressing. Assuming roughly an overall universal supersymmetry breaking scale $m_S$, the corresponding four--fermion operator will involve \begin{equation} \label{eq:op4} \lambda \cdot\left[{\frac{({M}_3)^{-1}}{m_S}} -4 m_S({{M}_3})^{-3} \log{\frac{{{M}_{3}}}{m_S}}\right] \end{equation} where $\lambda$ contains a combination of Yukawa and gauge couplings. The theoretical predictions \cite{PN} for the mode $p\rightarrow \overline{\nu} K$ are comparable to the experimental bound for this mode \cite{pdb}, \[ \tau(p\rightarrow \overline{\nu} K) > 5.5\times 10^{32}\ {\rm yr}. \] Hence, the parameter space of the minimal SUSY--$SU(5)$ model is very restricted. In $SU(5)$ models with a non-minimal Higgs content, ${M}_3^{-1}$ in eq.~(\ref{eq:op4}) is replaced by a matrix, and its null elements will play an important role in the suppression of $D=5$--operator mediated proton decay. 
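As a small symbolic check of the doublet--triplet splitting just described: sending $M_2\to 0$ fixes $\mu=3\lambda V$ and leaves the triplet at the GUT scale, $M_3=5\lambda V$. A sketch using sympy:

```python
import sympy as sp

mu, lam, V = sp.symbols('mu lambda V', positive=True)
M2 = mu - 3 * lam * V   # isodoublet mass
M3 = mu + 2 * lam * V   # colour-triplet mass

# The fine-tuning M2 = 0 fixes mu = 3*lambda*V ...
M3_tuned = M3.subs(mu, 3 * lam * V)
# ... and leaves the triplet heavy: M3 = 5*lambda*V ~ M_GUT.
```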
In the minimal flipped $SU(5)\times U(1)$ model \cite{BN} the matter fields come in the representations \begin{equation} {F}_i({\bf{10}}, 1/2)\,\,\,,\,\,\,\,\,{f}_i^c({\bf{\overline{5}}},-3/2)\,\,, \,\,\,\,\,{l}_i^c({\bf{1}},5/2) \end{equation} while the Higgses come in \begin{equation} {h}({\bf{5}},-1)\,\,\,,\,\,\,{\overline{h}} ({\overline{\bf{5}}}, 1)\ , \end{equation} and in \begin{equation} {F}_{h}({\bf{10}}, 1/2)\,\,\,,\,\,\,\,\,{\overline{F}}_h ({\overline{\bf{10}}},-1/2)\ . \end{equation} The part of the superpotential relevant for the breaking of the unifying symmetry and the Yukawa terms is \begin{eqnarray} &&f_{ij}F_i\,F_j\,{h}+y_{ij}F_i\,f_j^{c} \overline{h}+r_{ij}l_i^{c}\,f_j^{c}\,{\overline{h}}+ \nonumber\\ &&\mu\, {h}{\overline{h}}+ {\lambda}{F}_{h}{F}_{h}{h} +{\overline{\lambda}}{\overline{F}}_{h}{\overline{F}}_{h}{\overline{h}} \label{mfl} \end{eqnarray} The VEVs of $F_h$ and $\overline{F}_h$ along the neutrino-like component break the $SU(5)\times U(1)$ symmetry down to the MSSM. A great advantage of the ``flipped'' $SU(5)$ model over the ordinary one is the realization of the triplet--doublet splitting mechanism without fine--tuning the parameters of the superpotential (\ref{mfl}). In this case the doublet mass is given by the parameter $\mu$, which can be $\sim m_w$, while the mass matrix for the triplets is \begin{equation} {\cal{M}}_3=\left(\begin{array}{c c} 0 & \lambda V \\ \overline{\lambda} \overline{V} & 0 \end{array}\right) \end{equation} where the $22$ entry is null, since the pair $F_h\, {\overline{F}_h}$ has to remain massless in order to realize the $SU(5)\times U(1)$ breaking down to the standard model. Since in this model there is no $D \overline{D}$ mass term, $D=5$ operators are naturally suppressed. 
\section{How to suppress dimension--5 operators in effective models with extra triplets} Let us consider a general supersymmetric model containing some extra hypercharge--$1/3$ colour--triplets \footnote{This superpotential arises in the case of the standard $SU(5)$ with extra Higgs 5--plets or from the flipped $SU(5)\times U(1)$ with both extra Higgs 5--plets and 10--plets.}. The effective $SU(3)_c\times SU(2)_L\times U(1)_Y$ superpotential describing the couplings of quarks and leptons to the extra coloured triplets of $D$--quark type is \begin{equation} f_{ij}^{\alpha}Q_i Q_j D_{\alpha}+y_{ij}^{\alpha} Q_i L_j{\overline{D}}_{\alpha} + r_{ij}^{\alpha}E_i^{c}U_j^{c}{\overline{D}}_{\alpha} \ , \label{ww} \end{equation} where $i,j=1,2,3$ are the usual generation indices, $\alpha=1,\dots,N$ is an extra index describing the multiplicity of the triplets, and repeated indices are summed. In addition, the effective triplet mass term will be of the form \begin{equation} {({\cal M}_3)}_{\alpha \beta}D_{\alpha}\overline{D}_{\beta} \end{equation} where ${\cal M}_3$ is in general non--diagonal. We can always go to a basis in which the triplet mass matrix is diagonal, \begin{equation} D_{\alpha}=S_{\alpha \beta}D_{\beta}^\prime\,\,\,\,,\,\,\, \overline{D}_{\alpha} = U_{\alpha \beta}\overline{D}_{\beta}^\prime \end{equation} \begin{equation} {{\cal M}_3}_D\equiv {\rm diag} (m_1,m_2,\cdots,m_N)=S^{T}{{\cal M}_3}\,U \end{equation} where the matrices $S$ and $U$ are unitary. In this basis we can easily evaluate the $D=5$ operators resulting from Higgs-triplet fermion exchange, and then recast the result in the original basis. 
Assuming that all the triplets are massive ($m_i\ne0\,,\ i=1,\dots, N$), operators with the structure $Q_i\,Q_j\,Q_k\,L_n$ will be proportional to \footnote{The corresponding four--fermion operator, assuming roughly an overall universal supersymmetry breaking scale $m_S$, will involve ${\frac{({\cal{M}}_3)^{-1}}{m_S}} -4 m_S({{\cal{M}}_3})^{-3} \log{\frac{{{\cal{M}}_{3}}}{m_S}}$.} \begin{eqnarray} {\cal O}^{\mbox{\it\tiny QQQL}}_{ijkn}&=& \sum_{\alpha,\beta,\gamma=1}^{N}f_{ij}^{\alpha} S_{\alpha\gamma}({{\cal M}_3}_{D}^{-1})_{\gamma}U_{\beta\gamma}y_{kn}^{\beta}\nonumber\\ &=&\sum_{\alpha ,\beta =1}^{N}f_{ij}^{\alpha}({{\cal M}_3}^{-1})_{\alpha \beta}^{T} y_{kn}^{\beta}\nonumber\\ &=&\frac{1}{\det({{\cal M}_3})}\,\sum_{\alpha,\beta=1}^{N} f_{ij}^{\alpha}\,{\rm cof}({{\cal M}_3})_{\alpha\beta}\,y_{kn}^{\beta} \label{ma} \end{eqnarray} Analogous formulas hold for $D=5$ operators of the type $Q_iQ_jU^c_kE^c_n$. Suppose now that we want to eliminate all dimension--five operators. Assuming that the Yukawa couplings $f_{ij}^\alpha$ and $y_{ij}^\beta$ are in general unrelated and that $\det {\cal M}_3\ne0\,$, equation (\ref{ma}) implies that {\em the necessary and sufficient condition for the vanishing of the ${\cal O}^{\mbox{\it\tiny QQQL}}_{ijkn}$ operator is that for every pair of triplets ($D^\alpha$,${\overline{D}}^\beta$\ , $\alpha,\beta=1,\dots,N$) that couple to quarks and leptons respectively ($f^\alpha_{ij}\ne0$ and $y^\beta_{kn}\ne0$) the cofactor of the corresponding triplet mass matrix element $({\cal M}_3)_{\alpha\beta}$ vanishes}\footnote{We consider here the triplet $D^\alpha$ as coupled to quarks and leptons if at least one $f^\alpha_{ij}\ne0$, and similarly for anti--triplets.} \begin{eqnarray} &&{\cal O}^{\mbox{\it\tiny QQQL}}_{ijkn}=0 \Longleftrightarrow {\rm cof}({\cal M}_3)_{\alpha\beta}=0\nonumber\\&& \forall\ (\alpha,\beta)\in \Xi=\{(\alpha,\beta) : f^\alpha_{ij}\ne0\nonumber\\ && {\rm and}\ y^\beta_{kn}\ne0\}\ . 
\label{con} \end{eqnarray} It is obvious that in the case where all the triplets ($D$'s and $\overline{D}$'s) couple to matter the suppression of dimension--five operators (\ref{ma}) is not possible, since (\ref{con}) would lead to $\det({\cal M}_3)=0$. Nevertheless, if for some reason (discrete symmetry, $R$--parity, anomalous $U(1)$, accidental symmetry) some of the $f^\alpha_{ij}$ and/or $y^\beta_{kn}$ are zero, and the triplet mass matrix is such that the cofactors of the appropriate matrix elements vanish, then the associated dimension--5 operator vanishes. The previous discussion leads to the possibility that {\em in a model with extra $D$--quark triplets dimension--5 operators can be eliminated by using textures of the triplet mass matrix and of the triplet--matter couplings.} To be concrete, let us give a simple example of such an effective theory. Consider an effective theory with two extra triplets, of which only the first couples to matter, through the superpotential terms \begin{equation} f_{ij}^{1}Q_iQ_jD_{1}+y_{ij}^{1}Q_iL_j{\overline{D}}_{1} +r_{ij}^{1}E_i^{c}U_j^{c}{\overline{D}}_{1} \label{tsp} \end{equation} while their mass matrix has the form \begin{equation} {\cal M}_3=\left(\begin{array}{cc}\mu_{11}&\mu_{12}\\ \mu_{21}&0\end{array}\right)\ . \label{tmm} \end{equation} Since $f_{ij}^{2}=y_{ij}^{2}=0\,$, evaluation of (\ref{ma}) leads to \begin{equation} {\cal O}^{\mbox{\it\tiny QQQL}}_{ijkn}= f_{ij}^{1}\,{\rm cof}({{\cal M}_3})_{11}\,y_{kn}^{1} \sim {\rm cof}({{\cal M}_3})_{11} =0 \ . \end{equation} It is remarkable that if we remove the second pair of triplets of the model (which do not couple directly to matter) the usual dimension--5 operators reappear. We shall show below that this nice property can be incorporated in $SU(5)$ models. \section{$SU(5)$ models without dimension--5 operators} Let us consider now an $SU(5)$ model with two pairs of Higgs pentaplets ${h}_\alpha,{\overline{h}}_\alpha\,, \alpha=1,2$, of which only the first couples to matter. 
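The cancellation in the two-triplet example above can be verified symbolically. A sketch with sympy (0-based indices, so the $(1,1)$ cofactor is \texttt{cofactor(0, 0)}), using the texture (\ref{tmm}):

```python
import sympy as sp

mu11, mu12, mu21 = sp.symbols('mu11 mu12 mu21', nonzero=True)
M3 = sp.Matrix([[mu11, mu12],
                [mu21, 0]])

cof11 = M3.cofactor(0, 0)   # the only cofactor entering O^QQQL here: vanishes
det = M3.det()              # = -mu12*mu21, nonzero: both triplets are massive
```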
The quarks and leptons are assigned to $\phi({\bf\overline{5}})+\psi({\bf{10}})$ representations of $SU(5)$. The part of the superpotential relevant for dimension--5 decay is \begin{eqnarray} &&f_{ij} \,\psi_i \psi_j {h}_{1} + y_{ij}\,\psi_i\phi_j {\overline{h}}_{1}+\nonumber\\ &&\sum_{\alpha,\beta=1}^2({\mu}_{\alpha \beta}{h}_{\alpha} {\overline{h}}_{\beta} +{\lambda}_{\alpha \beta}{h}_{\alpha}{\Sigma}{\overline{h}}_{\beta}) \ , \label{su5s} \end{eqnarray} where the symbol $\Sigma$ stands for the adjoint Higgs superfield in the ${\bf{24}}$ representation. The isodoublet and colour--triplet mass matrices are correspondingly of the form \begin{equation} {\cal{M}}_2=\mu-3{\lambda}V \ , \end{equation} \begin{equation}{\cal{M}}_3=\mu+2{\lambda}V \ . \end{equation} The well known fine--tuning that guarantees a massless pair of isodoublets amounts to \begin{equation} \det({\cal{M}}_2)=0 \ . \label{ft} \end{equation} The proton decay rate through $D=5$ operators is, according to equation (\ref{ma}), determined by the cofactor of the $(1,1)$ element of the triplet mass matrix, \begin{equation} {\rm cof}({\cal M}_3)_{11}={(\mu_{22}+2{\lambda}_{22}V)} \ . \end{equation} Hence, choosing $\mu_{22}=-2{\lambda}_{22}V$, the dimension--5 operators vanish. This condition is perfectly compatible with the fine--tuning condition (\ref{ft}). It is very interesting that proton decay through $D=5$ operators can be set to zero through a condition on the couplings \footnote{Of course, proton decay still proceeds at the (suppressed) rate of $D=6$ operators.}. In the framework of our standard $SU(5)$ example the required zero in the inverse triplet mass matrix does not correspond to any symmetry and is in a sense a second fine--tuning. Nevertheless, the general conclusion is that zeros of the triplet mass matrix, perhaps attributable to symmetries, can stabilize the proton. The superpotential considered above in (\ref{su5s}) is not the most general one. 
In fact, the case where all 5--plets couple to matter cannot be reduced to (\ref{su5s}), since it would require a different Higgs 5--plet rotation for each generation of matter. However, we should emphasize that in $SU(5)$ models with non-minimal Higgs content the constraints imposed by proton decay on the parameter space and on the triplet masses can be relaxed. \section{\label{secf}Dimension--5 operators in extensions of the flipped $SU(5)$} In spite of the nice features of the minimal flipped $SU(5)$ model, all attempts to obtain such a model from strings have yielded, up to now, non--minimal models. Such models include\\ (a) extra pairs of low energy Higgses ($h$, $\bar h$) and/or\\ (b) extra pairs of $SU(5)\times U(1)$ breaking Higgses ($F_h$, ${\overline{F}_h}$). We are thus motivated to study the presence of dimension--5 operators in such models. As we shall see, contrary to the minimal case, such extensions of the flipped $SU(5)$ model are not automatically free of dimension--5 operators. The relevant part of the superpotential, assuming $N_5$ pairs of Higgs 5--plets ($h_\alpha, {\overline{h}_\alpha}\ , \alpha=1,\dots,N_5$) that couple to matter and $N_{10}$ pairs of Higgs 10--plets ($F_{hA}, {\overline{F}_{hA}}\ , A=1,\dots,N_{10}$) that do not couple to matter, will have the form \begin{eqnarray} && f_{ij}^{\alpha}F_i\,F_j\,{h}_{\alpha}+y_{ij}^{\alpha}F_i\,f_j^{c} {\overline{h}}_{\alpha}+\nonumber\\ &&r_{ij}^{\alpha}l_i^{c}\,f_j^{c}\,{\overline{h}}_{\alpha}+ {\mu_{\alpha\beta}}{h}_{\alpha} {\overline{h}}_{\beta}+ m_{AB}{F}_{hA}{\overline{F}}_{hB}\nonumber\\ && \mbox{}+{\lambda}_{AB\gamma} {F}_{hA}{F}_{hB}{h}_{\gamma} +{\overline{\lambda}}_{AB\gamma} {\overline{F}}_{hA}{\overline{F}}_{hB}{\overline{h}}_{\gamma} \end{eqnarray} where $A, B=1,\cdots,N_{10}$ and $\alpha, \beta, \gamma=1,\cdots,N_5\,$. 
Assuming GUT symmetry breaking along an arbitrary direction in the Higgs $10$--plet space \newline ($(V_1,V_2,\dots,V_{N_{10}})$, and similarly for the barred fields) \footnote{D--flatness requires $\sum_A V_A^2=\sum_A{\overline{V}_A}^2$.}, we obtain the triplet mass matrix \footnote{In a $(D_1,\cdots,D_{N_5}, (d_H^c)_1,\cdots,(d_H^c)_{N_{10}})$ versus $({\overline{D}}_1,\cdots, {\overline{D}}_{N_5}, ({\overline{d}}_H^c)_1,\cdots, ({\overline{d}}_H^c)_{N_{10}})$ basis, where $D$ denotes the triplets which lie inside the Higgs 5--plets and $d_H^c$ the triplets which lie inside the Higgs 10--plets.} \begin{equation} {\cal{M}}_3=\left(\begin{array}{cc} {\mu}_{\alpha\beta}&v_{\alpha A}\\ {\overline{v}}_{A\beta}&m_{AB} \end{array}\right) \end{equation} where $\mu_{\alpha\beta}$ is the doublet mass matrix, $v_{\alpha A}=2{\lambda_{AB\alpha}} V_B\,$ and ${\overline{v}}_{A\beta}=2{\overline{\lambda}}_{AB\beta} {\overline{V}}_B\,$. F--flatness demands ${\det(m)=0}$ in order to have at least one pair of massless Higgs decuplets, which will realize the GUT symmetry breaking. One can actually choose $m$ to have only one zero eigenvalue, so that all remnants of the Higgs decuplets become heavy. Let us now start our study with a simple example. Consider the flipped model with two pairs of Higgs 5--plets and one pair of Higgs 10--plets. 
Assuming for simplicity that the 5--plet mass matrix is diagonal, the explicit form of the triplet matrix is \footnote{We have renamed $\lambda_1=\lambda_{111}\,, \lambda_2=\lambda_{112}$.} \begin{equation} {\cal{M}}_3=\left(\begin{array}{ccc} 0&0&{\lambda}_1V\\ 0&\mu&{\lambda}_2V\\ {\overline{\lambda}}_1{\overline{V}}&{\overline{\lambda}}_2{\overline{V}}&0 \end{array}\right) \end{equation} with $\det({\cal M}_3)=-\lambda_1{\overline{\lambda}}_1\,\mu\,V{\overline{V}}$. The transpose of the inverse triplet matrix entering formula (\ref{ma}) is \begin{equation} \left({{\cal{M}}_3^{-1}}\right)^T= \left( \begin{array}{ccc} {\frac{\lambda_2{\overline{\lambda}}_2}{\lambda_1{\overline{\lambda}}_1\mu}}& -\frac{{{\lambda}}_2} {{{\lambda}}_1\mu}& \cdot\\ -\frac{{\overline{\lambda}}_2}{{\overline{\lambda}}_1\mu} &\frac{1}{\mu}&\cdot\\ \cdot&\cdot&\cdot \end{array}\right) \label{myma} \end{equation} where the dots stand for elements which are irrelevant here. It is now obvious that in this model dimension--five operators cannot be eliminated, since even in the case $\overline{\lambda}_2=\lambda_2=0$ the $22$ element does not vanish. If we want to eliminate them we have two options:\\ (i) assume that the extra pair of 5--plets does not couple to matter. In this case only the $11$ element of the matrix in (\ref{myma}) is relevant, and it vanishes for $\lambda_2=0$ (or ${\overline{\lambda}_2=0}$);\\ (ii) make the milder assumption that one of the 5--plets (e.g.\ $h_2$) does not couple to the up quarks (or, similarly, that $\overline{h}_2$ does not couple to the down quarks). In this case the second column (or row) of the matrix in (\ref{myma}) becomes irrelevant, and the remaining column (or row) vanishes for $\lambda_2=0$ (or $\overline{\lambda}_2=0$). Another case that could arise is the existence of extra decuplets. The simplest such case is $N_5=1$ and $N_{10}=n\geq 2$. 
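Before turning to that case, the determinant and the displayed entries of $({\cal M}_3^{-1})^T$ for the three-triplet matrix above can be reproduced symbolically. A sketch with sympy:

```python
import sympy as sp

l1, l2, lb1, lb2, mu, V, Vb = sp.symbols(
    'lambda1 lambda2 lambdabar1 lambdabar2 mu V Vbar', nonzero=True)

M3 = sp.Matrix([[0,        0,        l1 * V],
                [0,        mu,       l2 * V],
                [lb1 * Vb, lb2 * Vb, 0]])

det = sp.simplify(M3.det())         # = -lambda1*lambdabar1*mu*V*Vbar
MinvT = M3.inv().T                  # the matrix entering eq. (ma)
entry11 = sp.simplify(MinvT[0, 0])  # = lambda2*lambdabar2/(lambda1*lambdabar1*mu)
entry22 = sp.simplify(MinvT[1, 1])  # = 1/mu: nonzero even for lambda2 = 0
```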
In this case the quantity entering eq.~(\ref{ma}) is \begin{equation} ({{\cal M}_3^{-1}})_{11}=\frac{{\rm cof}({\cal M}_3)_{11}}{\det{\cal M}_3}=\frac{\det m}{\det{\cal M}_3} \end{equation} which means that proton decay is absent only in the case where the restricted mass matrix of the triplets not coupled to matter satisfies \begin{equation} \det m =0 \ . \label{cc} \end{equation} This constraint arises naturally in the context of the flipped $SU(5)\times U(1)$ model as a consequence of F--flatness, as we mentioned above. In the more general case where $N_5$ and $N_{10}$ are arbitrary, dimension--5 operators can be suppressed only if $N_5\le N_{10}$. Furthermore, one has to require that the Higgs decuplet mass matrix has $N_5$ zero eigenvalues. This is compatible with the symmetry breaking and with the requirement of making all triplets heavy, but it leaves $N_5-1$ pairs of $Q({\bf3},{\bf2},1/6)+ {\overline Q}({\bf\bar3},{\bf2},-1/6)$ massless. This feature does not necessarily mean that this possibility is ruled out. On the contrary, one can consider cases where the extra $Q$'s have intermediate masses which are small enough to sufficiently suppress dimension--5 operators but are still compatible with renormalization group requirements. The appearance of extra vector--like pairs of $Q$-- and $D$--type multiplets with intermediate masses is a welcome feature in the context of flipped $SU(5)\times U(1)\,$ models that raise the unification scale to the string scale \cite{LN}. \section{Conclusions} Our main result is that textured zeros of the colour--triplet mass matrix, as well as Yukawa selection rules, can eliminate certain dimension--5 operators. To be specific, we focused on $SU(5)$ models. In particular, we showed that introducing an extra pair of Higgs pentaplets in the standard supersymmetric $SU(5)\,$, with specific couplings, can eliminate these operators. We also considered the flipped--$SU(5)$ model with extra pentaplets and decuplets and analyzed the conditions for vanishing proton decay through dimension--5 operators. 
Flipped--$SU(5)$ with extra decuplets was shown to be free of $D=5$ operators, as happens in the minimal model. However, flipped--$SU(5)$ with extra Higgs pentaplets is not automatically free of dimension--5 operators. We have proposed a solution to this problem which involves a texture of the pentaplet mass matrix together with certain constraints on the pentaplet couplings to matter. {\bf{Acknowledgments}} This work is supported by the E.U. under the TMR contract ``Beyond the Standard Model'', ERBFMRX-CT96-0090.
\section{Introduction} Geometric range searching asks to preprocess a set of objects and to build a data structure so that all objects intersecting a given query range can be reported quickly. There are classical variants of this problem, such as computing the number of objects intersecting a query range, checking whether an object intersects a query range, and finding the closest pair of objects intersecting a query range. It is a widely used technique in computer science, with numerous applications in geographic information systems, computer vision, machine learning, and data mining. These range searching problems have been studied extensively in computational geometry over the last decades. For more information on geometric range searching, refer to the surveys by Matousek~\cite{Matousek-1994} and by Agarwal and Erickson~\cite{Agarwal-1999}, and to the computational geometry book~\cite{CGbook}. In many real-world applications, however, a large number of objects intersect a query range, and thus it takes a long time to report all of them. One might therefore want to obtain a property of the set of such objects instead of obtaining all such objects. Queries of this kind are called \emph{range-analysis queries}. More formally, the goal of this problem is to preprocess a set $P$ of objects with respect to a fixed range-analysis function $f$ and to build a data structure so that $f(P\cap Q)$ can be computed efficiently for any query range $Q$. These query problems have been studied extensively under various range-analysis functions, such as the diameter or width of a point set~\cite{Brass-2013} and the length of the minimum spanning tree of a point set~\cite{Arya-2015}. Note that the classical variants mentioned above are also range-analysis query problems. A clustering cost can also be considered as a range-analysis function. 
Clustering is a fundamental research topic in computer science and arises in various applications~\cite{Jain-1999}, including pattern recognition and classification, data mining, image analysis, and machine learning. In clustering, the objective is to group a set of data points into clusters so that points from the same cluster are similar to each other and points from different clusters are dissimilar. Usually, the input points lie in a high-dimensional space and the similarity is defined using a distance measure. There are a number of variants of the clustering problem in the geometric setting, depending on the similarity measure, such as the $k$-median, $k$-means, and $k$-center clustering problems. In this paper, we study approximate range-analysis query problems for three variants of clustering with a set $P$ of $n$ points in $d$-dimensional Euclidean space with $d\geq 2$ and axis-parallel rectangular query ranges: the \emph{$k$-median}, \emph{$k$-means}, and \emph{$k$-center range-clustering query problems}. These problems are defined as follows: preprocess $P$ so that, given a query range $Q$, an integer $k$ with $1\leq k\leq n$ and a value $\varepsilon>0$ as a query, a $(1+\varepsilon)$-approximation to the $k$-median, $k$-means, or $k$-center clustering of $P\cap Q$ can be computed efficiently. Our desired query time is polynomial in $\log n$, $k$ and $1/\varepsilon$. \subsection{Previous Work} The $k$-median and $k$-means clustering problems have been studied extensively, and there are algorithms achieving good approximation factors in polynomial running time. Har-Peled and Mazumdar~\cite{Har-Peled-2004} presented a $(1+\varepsilon)$-approximation algorithm for the $k$-means and $k$-median clusterings using coresets, for points in $d$-dimensional Euclidean space. 
Their algorithm constructs a $(k,\varepsilon)$-coreset with the property that, for any set $C$ of $k$ centers, the clustering cost on the coreset with respect to $C$ is within $(1\pm\varepsilon)$ times the clustering cost on the original input points with respect to $C$. It then computes the clustering of the coreset using a known weighted clustering algorithm. Later, a number of algorithms for computing smaller coresets for the $k$-median and $k$-means clusterings were presented~\cite{Chen-2009,Feldman-2011,Har-Peled2007-smaller}. The smallest $(k,\varepsilon)$-coresets known so far have size $O(k/\varepsilon^2)$ for both $k$-median and $k$-means~\cite{Feldman-2011}. The $k$-center clustering problem has also been studied extensively. It is NP-hard to approximate the 2-dimensional $k$-center problem within a factor of less than 2, even under the $L_{\infty}$-metric~\cite{Feder}. A $2$-approximation to the $k$-center can be computed in $O(kn)$ time in any metric space~\cite{Feder}, and in $O(n\log k)$ time in any $L_p$-metric space~\cite{Gon1985}. The exact $k$-center clustering can be computed in $n^{O(k^{1-1/d})}$ time in $d$-dimensional space under any $L_p$-metric~\cite{agarwal2002}. This algorithm, combined with a grid technique, yields a $(1+\varepsilon)$-approximation algorithm for the $k$-center problem that takes $O(n\log k+ (k/\varepsilon)^{O(k^{1-1/d})})$ time for any $L_p$-metric~\cite{agarwal2002}. Notice that all these algorithms are \emph{single-shot} algorithms, that is, they compute a clustering of the given points (without queries) just once. There have also been some results on range-analysis query problems related to clustering. Brass et al.~\cite{Brass-2013} presented data structures for computing extent measures: the width, area, or perimeter of the convex hull of $P\cap Q$, and the smallest enclosing disk of $P\cap Q$. 
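The $O(kn)$-time $2$-approximation to $k$-center mentioned above is the classical farthest-point greedy heuristic. A minimal sketch in the Euclidean setting (a generic illustration, not the data structure developed in this paper):

```python
import math

def greedy_k_center(points, k):
    """Farthest-point greedy: a 2-approximation to k-center in any
    metric space, here with Euclidean distance; O(kn) time."""
    centers = [points[0]]  # arbitrary first center
    dist = [math.dist(p, centers[0]) for p in points]
    for _ in range(1, k):
        far = max(range(len(points)), key=dist.__getitem__)
        centers.append(points[far])  # farthest point becomes a new center
        dist = [min(d, math.dist(p, centers[-1]))
                for p, d in zip(points, dist)]
    return centers, max(dist)  # chosen centers and their covering radius
```

Each new center is the point farthest from the centers chosen so far; the final covering radius is at most twice the optimal $k$-center cost.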
Arya et al.~\cite{Arya-2015} studied data structures that support queries for the length of the minimum spanning tree of $P\cap Q$. Various types of range-aggregate nearest neighbor queries have also been studied~\cite{Papadias-2005,Shan2003}. Nekrich and Smid~\cite{Nekrich-2010} considered approximate range-aggregate queries, such as the diameter or the radius of the smallest enclosing ball, for points in $d$-dimensional space. Basically, their algorithm constructs a $d$-dimensional range tree as a data structure, in which each node stores a $\delta$-coreset of the points in its subtree (though not explicitly), and applies range-searching query algorithms on the tree, where $\delta$ is a positive value. Their algorithm works for any aggregate function that can be approximated using a decomposable coreset, including coresets for the $k$-median, $k$-means and $k$-center clusterings. In this case, the size of the data structure is $O(kn\log^d n/\delta^2)$, and the query algorithm computes a $(k,\delta)$-coreset of size $O(k\log^{d-1} n/\delta^2)$. However, their algorithm uses $k$ and $\delta$ in constructing the data structure for the clusterings, and therefore $k$ and $\delta$ are fixed over all range-clustering queries. Very recently, Abrahamsen et al.~\cite{Abrahamsen-2017} considered $k$-center range-clustering queries for $n$ points in $d$-dimensional space. They presented a method that, for a query consisting of a range $Q$, an integer $k$ with $1\leq k\leq n$ and a value $\varepsilon>0$, computes a $(k,\varepsilon)$-coreset $S$ of $P\cap Q$ of size $O(k/\varepsilon^d)$ in $O(k(\log n/\varepsilon)^{d-1}+ k/\varepsilon^d)$ time such that the $k$-center of $S$ is a $(1+\varepsilon)$-approximation to the $k$-center of $P\cap Q$. After computing the coreset, they compute a $(1+\varepsilon)$-approximation to the $k$-center of the coreset using a known single-shot algorithm. 
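The $(k,\varepsilon)$-coreset guarantee used throughout can be phrased as a simple check; a sketch for the $k$-means cost, where \texttt{core\_pts}/\texttt{core\_wts} stand for any hypothetical weighted coreset:

```python
def kmeans_cost(points, centers, weights=None):
    """Weighted k-means cost: sum_i w_i * min_{c in C} ||p_i - c||^2."""
    if weights is None:
        weights = [1.0] * len(points)
    return sum(w * min(sum((a - b) ** 2 for a, b in zip(p, c))
                       for c in centers)
               for p, w in zip(points, weights))

def satisfies_coreset_bound(points, core_pts, core_wts, centers, eps):
    """The (k, eps)-coreset inequality for one candidate center set C:
    (1 - eps) * cost(P, C) <= cost(S, C) <= (1 + eps) * cost(P, C)."""
    full = kmeans_cost(points, centers)
    approx = kmeans_cost(core_pts, centers, core_wts)
    return (1 - eps) * full <= approx <= (1 + eps) * full
```

The coreset definition quantifies over all center sets $C$; the function above checks the inequality for one given $C$.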
Their data structure has size $O(n\log^{d-1} n)$, and its query algorithm computes a $(1+\varepsilon)$-approximation to a $k$-center range-clustering query in $O(k(\log n/\varepsilon)^{d-1}+ T_\textnormal{ss}(k/\varepsilon^d))$ time, where $T_\textnormal{ss}(N)$ denotes the running time of a $(1+\varepsilon)$-approximation single-shot algorithm for the $k$-center problem on $N$ points. The problem of computing the diameter of the input points contained in a query range can be considered a special case of the range-clustering problem. Gupta et al.~\cite{Gupta} considered this problem in the plane and presented two data structures. One has size $O(n\log^2 n)$ and supports queries with arbitrary approximation factors $1+\varepsilon$ in $O(\log n/\sqrt{\varepsilon}+\log^3 n)$ query time; the other has smaller size $O(n\log n/\sqrt{\delta})$ but supports only queries with the \textit{fixed} approximation factor $1+\delta$ with $0<\delta<1$ that is used for constructing the data structure. The query time of the second data structure is $O(\log^3 n/\sqrt{\delta})$. Nekrich and Smid~\cite{Nekrich-2010} presented a data structure for this problem in higher-dimensional space that has size $O(n \log^d n)$ and supports diameter queries with the fixed approximation factor $1+\delta$ in $O(\log^{d-1} n/\delta^{d-1})$ query time. Here, $\delta$ is an approximation factor given at the construction of their data structure, and therefore it is fixed for all queries to the data structure. \subsection{Our Results} We present algorithms for $k$-median, $k$-means, and $k$-center range-clustering queries with query times polynomial in $\log n$, $k$ and $1/\varepsilon$. These algorithms have a similar structure: they compute a $(k,\varepsilon)$-coreset of the input points contained in the query range, and then compute a clustering of the coreset using a known clustering algorithm. 
We call an algorithm that computes a clustering of given points (without queries) a \emph{single-shot algorithm} for the clustering. We use $T_\textnormal{ss}(N)$ to denote the running time of any $(1+\varepsilon)$-approximation single-shot algorithm for each problem on $N$ points. For a set $P$ of $n$ points in $d$-dimensional Euclidean space with $d\geq 2$, we present the following results. \begin{itemize} \item There is a data structure of size $O(n\log^d n)$ such that a $(1+\varepsilon)$-approximation to the $k$-median or $k$-means clustering of $P\cap Q$ can be computed in time \[O(k^5\log^9 n+ k\log^d n/\varepsilon + T_\textnormal{ss}(k\log n/\varepsilon^d))\] for any orthogonal range $Q$, any integer $k$ with $1\leq k\leq n$, and any value $\varepsilon>0$ given as a query. To the best of our knowledge, this is the first result on the $k$-median and $k$-means clusterings for orthogonal range queries with arbitrary $k$ and $\varepsilon$. \item There is a data structure of size $O(n\log^{d-1} n)$ such that a $(1+\varepsilon)$-approximation to the $k$-center clustering of $P\cap Q$ can be computed in time \[O(k\log^{d-1} n+k\log n/\varepsilon^{d-1}+T_\textnormal{ss}(k/\varepsilon^d))\] for any orthogonal range $Q$, any integer $k$ with $1\leq k\leq n$, and any value $\varepsilon>0$ given as a query. This improves the result by Abrahamsen et al.~\cite{Abrahamsen-2017}. \item There is a data structure of size $O(n\log^{d-1} n)$ such that a $(1+\varepsilon)$-approximation to the diameter (or radius) of $P\cap Q$ can be computed in time \[O(\log^{d-1} n+\log n/\varepsilon^{d-1})\] for any orthogonal range $Q$ and any value $\varepsilon>0$ given as a query. This improves the results by Nekrich and Smid~\cite{Nekrich-2010}. \end{itemize} Our results are obtained by combining range searching with coresets. The $k$-median and $k$-means range-clusterings have not been studied before, except in the work by Nekrich and Smid. 
They presented a general method to answer approximate range-aggregate queries. Their approach can be used to compute a $(1+\delta)$-approximation to the $k$-median or $k$-means range-clustering for a positive value $\delta$ which is given at the construction of their data structure. However, it is not clear how to use or adapt their data structure to support approximate range-clustering queries with the various approximation factors we consider in this paper, unless those values are known in advance. Indeed, the full version of the paper by Abrahamsen et al.~\cite{Abrahamsen-2017} poses as an open problem a data structure supporting $(1+\varepsilon)$-approximate $k$-median or $k$-means range-clustering queries with various values of $\varepsilon$. Our first result answers this question and provides a data structure for the $k$-median and $k$-means range-clustering problems for any value of $\varepsilon$. Our second result, the data structure and query algorithm for the $k$-center problem, improves the best previously known result, by Abrahamsen et al.~\cite{Abrahamsen-2017}. Recall that the query algorithm by Abrahamsen et al. takes $O(k\log^{d-1} n/\varepsilon^{d-1} + T_\textnormal{ss}(k/\varepsilon^d))$ time. We improve the first term of their running time by a factor of $\min\{1/\varepsilon^{d-1},\log^{d-2} n\}$. Our third result, the data structure and query algorithm for computing an approximate diameter and radius of the points in a query range, improves the best previously known result, by Nekrich and Smid~\cite{Nekrich-2010}. Our third result not only allows queries with arbitrary approximation factors $1+\varepsilon$, but also improves the size and the query time of their data structure. The size is improved by a factor of $\log n$. Even when $\varepsilon$ is fixed to $\delta$, the query time is improved by a factor of $\min\{1/\delta^{d-1}, \log^{d-2} n\}$ compared to the one by Nekrich and Smid~\cite{Nekrich-2010}. 
\medskip A main tool for achieving the three results is a new data structure for range-emptiness and range-counting queries. Consider a grid $\Gamma$ with side length $\gamma$ covering an axis-parallel hypercube with side length $\ell$ that is aligned with the standard quadtree. For a given query range $Q$ and every cell $\scalebox{0.9}{\ensuremath{\square}}$ of $\Gamma$, we want to check efficiently whether there is a point of $P$ contained in $\scalebox{0.9}{\ensuremath{\square}}\cap Q$. (Or, we want to compute the number of points of $P$ contained in $\scalebox{0.9}{\ensuremath{\square}}\cap Q$.) For this purpose, one can use a data structure for orthogonal range-emptiness queries supporting $O(\log^{d-1} n)$ query time~\cite{CGbook}. Then the task takes $O((\ell/\gamma)^d \log^{d-1} n)$ time for all cells of $\Gamma$ in total. Notice that $(\ell/\gamma)^d$ is the number of grid cells of $\Gamma$. To improve the running time of this task, we present a new data structure that supports a range-emptiness query in $O(\log^{d-t-1} n+\log n)$ time for a cell of $\Gamma$ intersecting no face of $Q$ of dimension smaller than $t$, for any fixed $t$. Using our data structure, the running time of the task is improved to $O(\log^{d-1} n+ (\ell/\gamma)^d \log n)$. To obtain this data structure, we observe that a range-emptiness query for $\scalebox{0.9}{\ensuremath{\square}}\cap Q$ can be reduced to a $(d-t-1)$-dimensional orthogonal range-emptiness query on the points contained in $\scalebox{0.9}{\ensuremath{\square}}$ if $\scalebox{0.9}{\ensuremath{\square}}$ intersects no face of $Q$ of dimension smaller than $t$. We maintain a data structure for $(d-t-1)$-dimensional orthogonal range-emptiness queries for each cell of the compressed quadtree and for every $t$. However, this would require $\Omega(n^2)$ space in total if we maintained these data structures explicitly. 
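For intuition, the per-cell emptiness task can be phrased as follows (a naive hash-grid baseline that scans the points of each cell, not the $O(\log)$-time structure described above):

```python
from collections import defaultdict

def build_cell_index(points, gamma):
    """Bucket the points of P by the grid cell (side length gamma)
    that contains them."""
    index = defaultdict(list)
    for p in points:
        index[tuple(int(c // gamma) for c in p)].append(p)
    return index

def cell_range_empty(index, cell, q_lo, q_hi):
    """Is the intersection of the cell and Q empty of points of P?
    Q is the box [q_lo[i], q_hi[i]] along each axis."""
    return not any(all(lo <= c <= hi for c, lo, hi in zip(p, q_lo, q_hi))
                   for p in index.get(cell, []))
```

The scan takes time proportional to the number of points in the cell; the point of the data structure in this paper is to replace it by a logarithmic-time query.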
We can reduce the space complexity using a method for making a data structure partially persistent~\cite{Driscoll-1989}. Another tool for achieving an efficient query time is a \emph{unified grid} based on quadtrees. For the $k$-median and $k$-means clusterings, we mainly follow an approach given by Har-Peled and Mazumdar~\cite{Har-Peled-2004}. They partition the input points with respect to the approximate centers explicitly, and construct a grid for each subset of the partition. Then they snap each input point $p$ to a cell of the grid constructed for the subset containing $p$. However, their algorithm is a single-shot algorithm and requires $\Omega(|P\cap Q|)$ time due to the computation of a coreset from approximate centers of the points contained in a given query box. In our algorithm, we do not partition the input points explicitly; instead we use a single grid, which we call the unified grid, so that a coreset can be constructed more efficiently. The tools we propose in this paper to implement the algorithm by Har-Peled and Mazumdar can be used for implementing other algorithms based on grids. For example, if Monte Carlo algorithms are allowed, the approach given by Chen~\cite{Chen-2009} for approximate range queries can be implemented using the tools we propose in this paper. \section{Preliminaries} Let $P$ be a set of $n$ points in $d$-dimensional Euclidean space. For any two points $x$ and $y$ in $d$-dimensional space, we use $\dist{x}{y}$ to denote the Euclidean distance between $x$ and $y$. For a point $x$ and a set $Y$ in $d$-dimensional space, we use $\setdist{x}{Y}$ to denote the smallest Euclidean distance between $x$ and any point in $Y$. Throughout the paper, we use the term \emph{square} in a generic way to refer to a $d$-dimensional axis-parallel hypercube. Similarly, we use the term \emph{rectangle} to refer to a $d$-dimensional axis-parallel box.
\subsection{Clusterings} For any integer $k$ with $1\leq k\leq n$, let $\ensuremath{\mathcal{C}}_k$ be the family of the sets of at most $k$ points in $d$-dimensional Euclidean space. Let $\Phi : \ensuremath{\mathcal{C}}_n\times \ensuremath{\mathcal{C}}_k \rightarrow \mathbb{R}_{\geq 0}$ be a cost function, to be defined below. For a set $P$ of $n$ points in $d$-dimensional Euclidean space, we define $\ensuremath{\textsc{Opt}_k}(P)$ as the minimum value of $\Phi(P,C)$ over all sets $C\in\ensuremath{\mathcal{C}}_k$. We call a set $C\in\ensuremath{\mathcal{C}}_k$ realizing $\ensuremath{\textsc{Opt}_k}(P)$ a \emph{$k$-clustering of $P$ under the cost function $\Phi$}. In this paper, we consider three cost functions $\ensuremath{\Phi_\textsf{M}}, \ensuremath{\Phi_\textsf{m}}$ and $\ensuremath{\Phi_\textsf{c}}$ that define the $k$-median, $k$-means and $k$-center clusterings, respectively. Let $\phi(p,C)=\min_{c\in C} \dist{p}{c}$ for any point $p$ in $P$. The cost functions are defined as follows: for any set $C$ of $\ensuremath{\mathcal{C}}_k$, \[\ensuremath{\Phi_\textsf{M}}(P,C)=\sum_{p\in P} \phi(p,C),\quad \ensuremath{\Phi_\textsf{m}}(P,C)=\sum_{p\in P} (\phi(p,C))^2,\quad \ensuremath{\Phi_\textsf{c}}(P,C)=\max_{p\in P} \phi(p,C).\] We consider the query variants of these problems. We preprocess $P$ so that given a query rectangle $Q$, an integer $k$ with $1\leq k\leq n$, and a value $\varepsilon>0$, a $(1+\varepsilon)$-approximate $k$-clustering of the points contained in $Q$ can be reported efficiently. Specifically, we want to report a set $C\in\ensuremath{\mathcal{C}}_k$ with $\Phi(P_Q,C) \leq (1+\varepsilon) \ensuremath{\textsc{Opt}_k}(P_Q)$ in sublinear time, where $P_Q=P\cap Q$. We call a query of this type a \emph{range-clustering query}. \subsection{Coreset for Clustering} Consider a cost function $\Phi$.
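For concreteness, the three cost functions translate directly into code; the following is a minimal Python sketch (the function names are ours, not from the paper):

```python
import math

def phi(p, C):
    """Distance from point p to its nearest center in C."""
    return min(math.dist(p, c) for c in C)

def cost_median(P, C):   # Phi_M: sum of distances to nearest centers
    return sum(phi(p, C) for p in P)

def cost_means(P, C):    # Phi_m: sum of squared distances
    return sum(phi(p, C) ** 2 for p in P)

def cost_center(P, C):   # Phi_c: maximum distance
    return max(phi(p, C) for p in P)
```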
We call a set $S\subseteq\mathbb{R}^d$ a \emph{$(k,\varepsilon)$-coreset} of $P$ for the $k$-clustering under the cost function $\Phi$ if the following holds: for any set $C$ in $\ensuremath{\mathcal{C}}_k$, \[(1-\varepsilon)\Phi(P,C) \leq \Phi(S,C) \leq (1+\varepsilon)\Phi(P,C).\] Here, the points in a coreset might be weighted. In this case, the distance between a point $p$ in $d$-dimensional space and a weighted point $s$ in a coreset is defined as $w(s)\cdot d(p,s)$, where $w(s)$ is the weight of $s$ and $d(p,s)$ is the Euclidean distance between $p$ and $s$. By definition, a $(1+\varepsilon)$-approximation to the $k$-clustering of $S$ is also a $(1+\varepsilon)$-approximation to the $k$-clustering of $P$. Thus, $(k,\varepsilon)$-coresets can be used to obtain fast approximation algorithms for the $k$-median, $k$-means, and $k$-center clusterings. A $(k,\varepsilon)$-coreset of smaller size gives a faster approximation algorithm for the clusterings. The following are the sizes of the smallest $(k,\varepsilon)$-coresets known so far: $O(k/\varepsilon^2)$ for the $d$-dimensional Euclidean $k$-median and $k$-means clusterings~\cite{Feldman-2011}, and $O(k/\varepsilon^{d})$ for the $d$-dimensional Euclidean $k$-center clustering~\cite{agarwal-2005}. It is also known that $(k,\varepsilon)$-coresets for the $k$-median, $k$-means, and $k$-center clusterings are \emph{decomposable}: if $S_1$ and $S_2$ are $(k,\varepsilon)$-coresets for disjoint sets $P_1$ and $P_2$, respectively, then $S_1\cup S_2$ is a $(k,\varepsilon)$-coreset for $P_1\cup P_2$~\cite[Observation 7.1]{Har-Peled-2004}. Using this property, one can obtain data structures on $P$ that support a $(1+\delta)$-approximation to the $k$-median, $k$-means, and $k$-center range-clustering queries for a constant $\delta>0$ and an integer $k$ with $1\leq k\leq n$ that are given in the construction phase, as follows.
Consider the $d$-dimensional range tree on $P$, a multi-level binary search tree~\cite{CGbook}. There are $O(n\log^{d-1} n)$ nodes in the level-$d$ trees of the range tree in total. Each such node $v$ corresponds to a $d$-dimensional axis-parallel box $B(v)$. For each node $v$, assume that a $(k,\delta)$-coreset of the points of $P$ contained in $B(v)$ is stored. For any rectangle $Q$, there are $O(\log^d n)$ nodes $v$ such that $P\cap Q$ is the set of the input points contained in the union of the boxes $B(v)$. Such nodes are called \emph{canonical nodes} for $Q$. To answer a clustering query with a query rectangle $Q$, it suffices to return the union of the $(k,\delta)$-coresets stored in the canonical nodes for $Q$, which is a $(k,\delta)$-coreset of $P\cap Q$. Then the query time and the size of the coreset are $O(f(k,\delta)\log^d n)$, where $f(k,\delta)$ is the size of a $(k,\delta)$-coreset obtained from a single-shot algorithm for a constant $\delta>0$ and an integer $k$ with $1\leq k\leq n$. For the size of the data structure, observe that the size of the coreset stored in each node $v$ is at most the number of points contained in $B(v)$. The total number of points contained in $B(v)$ over all nodes $v$ of the range tree is $O(n\log^d n)$, and thus the data structure has size $O(n\log^d n)$. One drawback of this data structure is that $k$ and $\delta$ are determined in the construction phase of the structure, and therefore they are fixed over all range-clustering queries. To resolve this, we construct data structures for the values $k=1,2,2^2, \ldots,2^{\lceil \log n\rceil}$. Given a value $k$ as a query, let $\bar{k}$ be the smallest such power of two with $k\leq \bar{k}< 2k$; we simply return a $(\bar{k},\delta)$-coreset, which is also a $(k,\delta)$-coreset since $k\leq\bar{k}$. This does not increase the size of the data structure or the query time asymptotically, and it allows $k$ to be part of the query.
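The canonical-node decomposition is easiest to see in one dimension, where the level-$d$ tree degenerates to a balanced tree over the sorted points. The following Python sketch (ours; it uses an implicit balanced tree over a sorted array rather than an explicit range tree) returns the $O(\log n)$ maximal index ranges whose union is exactly the set of points inside a query interval; in $d$ dimensions the same decomposition is applied at each level of the range tree.

```python
def canonical_nodes(sorted_pts, lo, hi):
    """Decompose the query interval [lo, hi] into canonical nodes of an
    implicit balanced tree over sorted_pts: the maximal subarrays, as
    (i, j) half-open index ranges, lying entirely inside [lo, hi]."""
    out = []
    def rec(i, j):  # node covering sorted_pts[i:j]
        if i >= j or sorted_pts[j - 1] < lo or sorted_pts[i] > hi:
            return  # empty node, or node disjoint from the query
        if lo <= sorted_pts[i] and sorted_pts[j - 1] <= hi:
            out.append((i, j))  # node fully inside: a canonical node
            return
        m = (i + j) // 2
        rec(i, m)
        rec(m, j)
    rec(0, len(sorted_pts))
    return out
```

With a $(k,\delta)$-coreset precomputed per node, a query would simply concatenate the coresets of the returned ranges.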
\begin{lemma}\label{lem:constant-coreset} Given a set $P$ of $n$ points in $d$-dimensional space and a value $\delta>0$ given in the construction phase, we can construct a data structure of size $O(n\log^d n)$ so that a $(k,\delta)$-coreset of $P\cap Q$ for the $k$-median and $k$-means clusterings of size $O(k\log^{d} n)$ can be computed in $O(k\log^{d} n)$ time for any orthogonal range $Q$ and any integer $k$ with $1\leq k\leq n$ given as a query. \end{lemma} Note that the approximation factor of the coreset is still fixed for all queries. In Section~\ref{sec:median}, we will describe a data structure and its corresponding query algorithm answering $k$-median and $k$-means range-clustering queries that allow queries to have arbitrary approximation factor values $1+\varepsilon$. The query algorithm in Section~\ref{sec:median} uses the algorithm in Lemma~\ref{lem:constant-coreset} as a subprocedure. \subsection{Single-Shot Algorithms for the \texorpdfstring{$k$}{k}-Median and \texorpdfstring{$k$}{k}-Means Clusterings} \label{sec:single-shot} The single-shot version of this problem was studied by Har-Peled and Mazumdar~\cite{Har-Peled-2004}. They gave algorithms to compute $(k,\varepsilon)$-coresets of size $O(k\log n/\varepsilon^d)$ for the $k$-median and $k$-means clusterings. Since we extensively use their results, we give an overview of their algorithm for the $k$-median clustering. The algorithm for the $k$-means clustering works similarly. In this subsection, we use $\Phi$ to denote $\ensuremath{\Phi_\textsf{M}}$ for ease of description. Their algorithm starts with computing a constant-factor approximation $A\subset\mathbb{R}^d$ to the $k$-median clustering of $P$, that is, a set $A$ satisfying $\Phi(P,A)\leq c_1 \cdot\ensuremath{\textsc{Opt}_k}(P)$ for some constant $c_1>1$. The approximation set consists of $O(k\log^3 n)$ centers.
Then, given the constant-factor approximation set of $P$, it computes a $(k,\varepsilon)$-coreset $S$ of size $O(k\log^4 n/\varepsilon^d)$ for $P$. From $S$, the algorithm finally obtains a smaller $(k,\varepsilon)$-coreset of size $O(k\log n/\varepsilon^d)$ for $P$. \subsubsection{Coreset from Constant-Factor Approximate Centers} Given a constant-factor approximation $A=\{a_1,\ldots,a_m\}$ to the $k$-median clustering of $P$ such that $m$ is possibly larger than $k$, the algorithm by Har-Peled and Mazumdar computes a $(k,\varepsilon)$-coreset of size $O(|A|\log n/\varepsilon^d)$ for $P$ as follows. The procedure partitions $P$ with respect to $A$ into pairwise disjoint sets $P_i$ for $i=1,\ldots,m$ such that $P_i$ consists of points $p$ in $P$ with $\dist{p}{a_i}\leq c_2\cdot\dist{p}{a_j}$ for every index $j\neq i$, for a constant $c_2>1$. Note that $P_i$ is not necessarily unique. Then it constructs a grid for each set $P_i$ with respect to $a_i$ and snaps the points in $P_i$ to the grid as follows. Let $R=\Phi(P,A)/(c_1n)$, where $c_1>1$ is the approximation factor of $A$. Let $Q_{ij}$ be the square with side length $R2^j$ centered at $a_i$ for $j=0,\ldots, M$, where $M=\lceil2\log (c_1n) \rceil$. Let $V_{i0}=Q_{i0}$ and $V_{ij}=Q_{ij}\setminus Q_{i(j-1)}$. To compute a grid for $P_i$, the procedure partitions each $V_{ij}$ into squares with side length $r_j=\varepsilon R2^j/(10c_1d)$. Figure~\ref{fig:grid}(a) illustrates a grid constructed with respect to an approximate center in the middle. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{grid.pdf} \caption{\small (a) Exponential grid for the single-shot algorithm, constructed with respect to the point in the middle. (b) Exponential grid for an algorithm for the query version, constructed with respect to the point, say $a_1$, in the middle. The point is not the center of the grid.
The gray region is the grid cluster for $Q_{10}$, the dashed region is the grid cluster for $Q_{11}$, and the largest box is the grid cluster for $Q_{12}$. (c) Only the first-level grid is depicted.\label{fig:grid}} \end{center} \end{figure} For every grid cell $\Box$ containing a point of $P_i$, the procedure picks an arbitrary point $q$ of $P_i$ contained in it and assigns the number of points of $P_i$ contained in $\Box$ to $q$ as its weight. Let $S_i$ be the set of all such weighted points of $P_i$ for $i=1,\ldots, m$. They showed that the union of all $S_i$ is a $(k,\varepsilon)$-coreset for $P$ of size $O(|A|\log n/\varepsilon^d)$. \begin{lemma}[\cite{Har-Peled-2004}]\label{lem:approx-to-coreset} Given a constant-factor approximation $A$ to the $k$-median clustering of $P$ consisting of possibly more than $k$ centers, a $(k,\varepsilon)$-coreset of $P$ for the $k$-median clustering of size $O(|A|\log n/\varepsilon^d)$ can be computed in $O(n \log |A|)$ time. \end{lemma} \subsubsection{Smaller Coreset} By Lemma~\ref{lem:approx-to-coreset}, the algorithm constructs a $(k,\varepsilon)$-coreset $S$ of size $O(k\log^4 n/\varepsilon^d)$ using a constant-factor approximation of size $O(k\log^3 n)$. Using the coreset $S$, the algorithm obtains a smaller coreset of size $O(k\log n/\varepsilon^d)$ as follows. The algorithm computes a constant-factor approximation $\ensuremath{\mathcal{C}}_0$ to the $k$-center clustering of $S$ using the algorithm in~\cite{Gon1985}. This clustering is an $O(n)$-approximation to the $k$-median clustering. Then it applies the local search algorithm by Arya et al.~\cite{Arya-2004} to $\ensuremath{\mathcal{C}}_0$ and $S$ to obtain a constant-factor approximation of $P$ of size at most $k$. It uses this set to compute a $(k,\varepsilon)$-coreset of size $O(k\log n/\varepsilon^d)$ by applying Lemma~\ref{lem:approx-to-coreset} again.
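The grid-snapping step of Lemma~\ref{lem:approx-to-coreset} can be sketched in a few lines for a single approximate center. This Python sketch is ours and is simplified: instead of constructing the rings $V_{ij}$ explicitly, it keys each point by its ring index $j$ (from its $L_\infty$-distance to the center) and its integer cell coordinates at resolution $r_j=\varepsilon R2^j/(10c_1d)$, then keeps one weighted representative per non-empty cell.

```python
import math

def snap_to_exponential_grid(points, center, R, eps, c1=2.0):
    """Snap `points`, all assigned to one approximate `center`, onto an
    exponential grid: a point at L_inf-distance in (R*2^(j-1), R*2^j]
    lies in ring j, subdivided into cells of side eps*R*2^j/(10*c1*d).
    Returns one arbitrary representative per non-empty cell, weighted
    by the cell's point count."""
    d = len(center)
    buckets = {}
    for p in points:
        dist = max(abs(p[i] - center[i]) for i in range(d))  # L_inf
        j = 0 if dist <= R else math.ceil(math.log2(dist / R))
        r = eps * R * 2.0 ** j / (10 * c1 * d)  # cell side in ring j
        key = (j,) + tuple(math.floor((p[i] - center[i]) / r)
                           for i in range(d))
        buckets.setdefault(key, []).append(p)
    return [(cell_pts[0], len(cell_pts)) for cell_pts in buckets.values()]
```

The weights of the returned points always sum to the number of input points, so no point is lost or counted twice for this center.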
\section{Data Structures for Range-Clustering Queries}\label{sec:DS} We maintain two data structures constructed on $P$. One is a compressed quadtree~\cite{Aluru-2005}, and the other is a variant of a range tree, which we introduce in this paper. \subsection{Compressed Quadtree}\label{sec:quad} We use the term \emph{quadtrees} in a generic way to refer to the hierarchical spatial tree data structures for $d$-dimensional data that are based on the principle of recursive decomposition of space, also known as quadtrees, octrees, and hyperoctrees for spatial data in $d=2, 3,$ and higher dimensions, respectively. A \emph{standard quadtree} on $P$ is a tree each of whose nodes $v$ corresponds to a square cell. The root of the quadtree corresponds to the axis-parallel square containing all points of $P$. If its cell $\scalebox{0.9}{\ensuremath{\square}}$ contains at least two points of $P$, a node $v$ of the quadtree has $2^d$ child nodes that correspond to the $2^d$ equal-sized squares formed by splitting $\scalebox{0.9}{\ensuremath{\square}}$ by $d$ axis-parallel cuts through the center of $\scalebox{0.9}{\ensuremath{\square}}$. Otherwise, $v$ is a leaf node of the quadtree. Without loss of generality, we assume that the side length of the square corresponding to the root is $1$. Then every cell of the standard quadtree has side length $2^{-i}$ for an integer $i$ with $0\leq i\leq t$ for some constant $t$. We call a value of the form $2^{-i}$ for an integer $i$ with $0\leq i\leq t$ a \emph{standard length}. Also, we call a grid a \emph{standard grid} if every cell of the grid is also a cell of the standard quadtree; in other words, a standard grid is a grid aligned with the standard quadtree. A \emph{compressed quadtree} on $P$ is a tree obtained by contracting the edges incident to each node having only one child node in the standard quadtree on $P$.
It has size $O(n\log n)$ and can be constructed in $O(n\log n)$ time for any fixed dimension~\cite{har2011geometric}. By definition, for every node $v$ in the compressed quadtree, there is a node $w$ in the standard quadtree whose cell coincides with the cell of $v$. We use $\ensuremath{\mathcal{T}_s}$ and $\ensuremath{\mathcal{T}_c}$ to denote the standard and compressed quadtrees constructed on $P$, respectively. For ease of description, we will first introduce our algorithm in terms of the standard quadtree. However, the algorithm will be implemented using the compressed quadtree to reduce the space complexity. To do this, we need the following lemma. In the following, we use a node and its corresponding cell of $\ensuremath{\mathcal{T}_s}$ (and $\ensuremath{\mathcal{T}_c}$) interchangeably. For a cell $\scalebox{0.9}{\ensuremath{\square}}$ of the standard quadtree on $P$, there is a unique cell $\overline{\cell}$ of the compressed quadtree on $P$ satisfying $\scalebox{0.9}{\ensuremath{\square}} \cap P=\overline{\cell}\cap P$. We call this cell the \emph{compressed cell} of $\scalebox{0.9}{\ensuremath{\square}}$. \begin{lemma}[\cite{har2011geometric}]\label{lem:cell-query} Given a cell $\scalebox{0.9}{\ensuremath{\square}}$ of $\ensuremath{\mathcal{T}_s}$, we can find the compressed cell of $\scalebox{0.9}{\ensuremath{\square}}$ in $O(\log n)$ time. \end{lemma} We store the points of $P$ in an array of length $n$ in a specific order, called the \emph{$\mathcal{Z}$-order}, defined as follows. Consider a DFS traversal of $\ensuremath{\mathcal{T}_c}$ that visits the child nodes of each node in the same relative order. The order in which the DFS visits the nodes of $\ensuremath{\mathcal{T}_c}$ is called the $\mathcal{Z}$-order~\cite{har2011geometric}. By definition, for any cell $\scalebox{0.9}{\ensuremath{\square}}$ of $\ensuremath{\mathcal{T}_c}$, the points of $P$ contained in $\scalebox{0.9}{\ensuremath{\square}}$ appear consecutively in the array.
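For intuition, in two dimensions the $\mathcal{Z}$-order of points at integer grid coordinates can be obtained by interleaving coordinate bits (a Morton code); sorting by this code places the points of every quadtree cell in one contiguous run, which is exactly what the array representation exploits. A minimal sketch (ours):

```python
def morton_code(x, y, bits=16):
    """Interleave the bits of the nonnegative integers x and y.
    Sorting 2-D grid points by this code yields a Z-order: the points
    inside any quadtree cell occupy a contiguous block of the array."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)      # bit i of x -> position 2i
        z |= ((y >> i) & 1) << (2 * i + 1)  # bit i of y -> position 2i+1
    return z
```

For example, sorting the $4\times 4$ integer grid by this code lists the four points of the lower-left quadrant first, then the other quadrants, each as a contiguous block.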
\subsection{Data Structure for Range-Emptiness Queries}\label{sec:emptiness-DS} In our query algorithm, we consider a standard grid $\Gamma$ of side length $\gamma$ covering an axis-parallel hypercube of side length $\ell$. For a given query range $Q$ and every cell $\scalebox{0.9}{\ensuremath{\square}}$ of $\Gamma$, we want to check whether there is a point of $P$ contained in $\scalebox{0.9}{\ensuremath{\square}}\cap Q$ efficiently. For this purpose, one can use a data structure for orthogonal range-emptiness queries supporting $O(\log^{d-1} n)$ query time~\cite{CGbook}. Thus, the task takes $O((\ell/\gamma)^d \log^{d-1} n)$ time for all cells of $\Gamma$ in total. Notice that $(\ell/\gamma)^d$ is the number of grid cells of $\Gamma$. However, we can accomplish this task more efficiently using the data structure which we will introduce in this section. Let $t$ be an integer with $0<t\leq d$. We use $\ensuremath{<_t}$-face to denote a face with dimension smaller than $t$ among faces of a $d$-dimensional rectangle. Note that a $\ensuremath{<_t}$-face of a $d$-dimensional rectangle is its vertex if $t=1$. Our data structure allows us to check whether a point of $P$ is contained in $Q\cap \overline{\cell}$ in $O(\log^{d-t-1}n +\log n)$ time for a cell $\overline{\cell}$ of $\ensuremath{\mathcal{T}_c}$ intersecting no $\ensuremath{<_t}$-face of $Q$ with $0< t< d$. Here, we first compute the compressed cell $\overline{\cell}$ of each cell $\scalebox{0.9}{\ensuremath{\square}}$ of $\Gamma$, and then apply the query algorithm to $\overline{\cell}$. Recall that $\scalebox{0.9}{\ensuremath{\square}}\cap P$ coincides with $\overline{\cell}\cap P$ for any cell $\scalebox{0.9}{\ensuremath{\square}}$ of $\Gamma$ and its compressed cell $\overline{\cell}$. 
In this way, we can complete the task in $O(\sum_{t=1}^{d-1} x_t\log^{d-t-1} n+ |\Gamma|\log n+\log^{d-1} n)$ time in total, where $x_t$ is the number of cells of $\Gamma$ intersecting no $\ensuremath{<_t}$-face of $Q$ but intersecting a $t$-dimensional face of $Q$. Notice that for any cell $\scalebox{0.9}{\ensuremath{\square}}$ intersecting $Q$, there is an integer $t$ such that $\scalebox{0.9}{\ensuremath{\square}}$ intersects no $\ensuremath{<_t}$-face of $Q$, but intersects a $t$-dimensional face of $Q$ unless $\scalebox{0.9}{\ensuremath{\square}}$ contains a corner of $Q$. Here, $x_t$ is $O((\ell/\gamma)^t)$. Therefore, we can accomplish the task for every cell of $\Gamma$ in $O(\log^{d-1} n +(\ell/\gamma)^d \log n)$ time in total. For a nonempty subset $I$ of $\{1,\ldots,d\}$, the \emph{$I$-projection range tree} on a point set $A\subseteq\mathbb{R}^d$ is the range tree supporting fractional cascading that is constructed on the projections of the points of $A$ onto a $(d-t)$-dimensional hyperplane orthogonal to the $i$th axes for all $i\in I$, where $t$ is the cardinality of $I$. \begin{lemma}\label{lem:counting-query} Given the $I$-projection range tree on $P\cap \scalebox{0.9}{\ensuremath{\square}}$ for every cell $\scalebox{0.9}{\ensuremath{\square}}$ of $\ensuremath{\mathcal{T}_c}$ and every nonempty subset $I$ of $\{1,\ldots,d\}$, we can check whether a point of $P$ is contained in $Q\cap\scalebox{0.9}{\ensuremath{\square}}$ for any query rectangle $Q$ and any cell $\scalebox{0.9}{\ensuremath{\square}}$ of $\ensuremath{\mathcal{T}_c}$ intersecting no $\ensuremath{<_t}$-face of $Q$ in $O(\log^{d-t-1} n+\log n)$ time. \end{lemma} \begin{proof} Consider a subset $I$ of $\{1,\ldots,d\}$. We call a facet of an axis-parallel box an \emph{$I$-facet} if it is orthogonal to the $i$th axis for an index $i\in I$. Note that there are exactly $2|I|$ $I$-facets of $Q$. 
For a cell $\scalebox{0.9}{\ensuremath{\square}}$ intersecting no $\ensuremath{<_t}$-face of $Q$, we claim that there is a subset $I$ of $\{1,\ldots,d\}$ of size $t$ such that no $I$-facet of $Q$ intersects $\scalebox{0.9}{\ensuremath{\square}}$. Otherwise, there is a set $I'$ of $d-t+1$ indices such that a facet orthogonal to the $i'$th axis intersects $\scalebox{0.9}{\ensuremath{\square}}$ for every $i'\in I'$. The common intersection of all such facets is a $(t-1)$-dimensional face of $Q$, and it intersects $\scalebox{0.9}{\ensuremath{\square}}$ since both $\scalebox{0.9}{\ensuremath{\square}}$ and $Q$ are $d$-dimensional rectangles. This contradicts the fact that $\scalebox{0.9}{\ensuremath{\square}}$ intersects no $\ensuremath{<_t}$-face of $Q$. Thus, we do not need to consider the $i$th coordinates of the points in $\scalebox{0.9}{\ensuremath{\square}}$ for all $i\in I$ in testing if a point of $P$ is contained in $Q\cap \scalebox{0.9}{\ensuremath{\square}}$. For a set $A$ of points in $d$-dimensional space, we use $\proj{A}$ to denote the projection of $A$ onto a $(d-t)$-dimensional hyperplane orthogonal to the $i$th axes for all $i\in I$. A point of $P\cap \scalebox{0.9}{\ensuremath{\square}}$ is contained in $Q$ if and only if a point of $\proj{P\cap \scalebox{0.9}{\ensuremath{\square}}}$ is contained in $\proj{Q}$. By definition, the $I$-projection range tree on $P\cap \scalebox{0.9}{\ensuremath{\square}}$ is the $(d-t)$-dimensional range tree on $\proj{P\cap\scalebox{0.9}{\ensuremath{\square}}}$. Therefore, we can check whether a point of $\proj{P\cap\scalebox{0.9}{\ensuremath{\square}}}$ is contained in $\proj{Q}$ in $O(\log^{d-t-1} n)$ time for $t<d-1$ and in $O(\log n)$ time for $t\geq d-1$. \end{proof} However, the $I$-projection range trees require $\Omega(n^2)$ space in total if we store them explicitly. To reduce the space complexity, we use a method of making a data structure \emph{partially persistent}~\cite{Driscoll-1989}. 
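Before turning to the persistent construction, the projection argument in the proof of Lemma~\ref{lem:counting-query} can be sketched concretely. In this Python sketch (ours), rectangles are lists of per-axis $(lo,hi)$ intervals, and a brute-force scan stands in for the $(d-t)$-dimensional range tree:

```python
def spanning_axes(cell, Q):
    """Axes along which the cell's extent is contained in Q's extent
    (the set I in the proof): along these axes no facet of Q cuts the
    cell, so the corresponding coordinates impose no constraint."""
    return [i for i, ((cl, ch), (ql, qh)) in enumerate(zip(cell, Q))
            if ql <= cl and ch <= qh]

def emptiness_by_projection(points, cell, Q):
    """Is P ∩ cell ∩ Q empty?  Coordinates in I are dropped, leaving a
    lower-dimensional orthogonal range query on the cell's points
    (brute force here; a (d-t)-dimensional range tree in the paper)."""
    I = set(spanning_axes(cell, Q))
    keep = [i for i in range(len(Q)) if i not in I]
    for p in points:
        in_cell = all(cl <= p[i] <= ch
                      for i, (cl, ch) in enumerate(cell))
        in_proj_Q = all(Q[i][0] <= p[i] <= Q[i][1] for i in keep)
        if in_cell and in_proj_Q:
            return False
    return True
```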
A partially persistent data structure allows us to access any element of an old version of the data structure by keeping track of the changes made to it. Driscoll et al.~\cite{Driscoll-1989} presented a general method of making a pointer-based data structure partially persistent. In their method, both the time and space overheads for an update are $O(1)$ amortized, and the access time for any version is $O(\log n)$. \subsubsection{Construction of the \texorpdfstring{$I$}{I}-Projection Range Trees} Consider a fixed subset $I$ of $\{1,\ldots, d\}$. We construct the $I$-projection range trees for the cells of $\ensuremath{\mathcal{T}_c}$ in a bottom-up fashion, from the leaf cells to the root cell. Note that each leaf cell of the compressed quadtree contains at most one point of $P$. We initially construct the $I$-projection range tree for each leaf cell of $\ensuremath{\mathcal{T}_c}$ in $O(n)$ total time. Assume that we already have the $I$-projection range tree for every child node of an internal node $v$ with cell $\scalebox{0.9}{\ensuremath{\square}}$ of $\ensuremath{\mathcal{T}_c}$. Note that an internal node of the compressed quadtree has at least two and at most $2^d$ child nodes. We are going to construct the $I$-projection range tree for $v$ from the $I$-projection range trees of the child nodes of $v$. One might consider merging the $I$-projection range trees for the child nodes of $v$ into one, but we do not know an efficient way of doing this. Instead, we construct the $I$-projection range tree for $v$ as follows. Let $u$ be a child node of $v$ on $\ensuremath{\mathcal{T}_c}$ whose corresponding cell contains the largest number of points of $P$ among all child nodes of $v$. We insert all points of $P$ contained in the cells of the child nodes of $v$ other than $u$ into the $I$-projection range tree of $u$ to form the $I$-projection range tree of $v$.
Here, we do not destroy the old version of the $I$-projection range tree of $u$; by using the method by Driscoll et al.~\cite{Driscoll-1989}, we can still access it. For the insertions, we use an algorithm that allows us to apply fractional cascading on the range tree under insertions of points~\cite{Mehlhorn-1990}. We do this for every subset $I$ of $\{1,\ldots, d\}$ and construct the $I$-projection range trees for the nodes of $\ensuremath{\mathcal{T}_c}$. In this way, we can access any $I$-projection range tree in $O(\log n)$ time, and therefore we can check if a point of $P$ is contained in $Q\cap \scalebox{0.9}{\ensuremath{\square}}$ in $O(\log^{d-t-1} n+\log n)$ time for any query rectangle $Q$ and any cell $\scalebox{0.9}{\ensuremath{\square}}$ of $\ensuremath{\mathcal{T}_c}$ intersecting no $\ensuremath{<_t}$-face of $Q$ for any integer $t$ with $0< t\leq d$ by Lemma~\ref{lem:counting-query}. \subsubsection{Analysis of the Construction} The construction of the dynamic range tree~\cite[Theorem 8]{Mehlhorn-1990} requires $O(\delta\log^{d-t-1} \delta)$ space for the insertion of $\delta$ points in $\mathbb{R}^{d-t}$. The method by Driscoll et al. incurs only $O(1)$ overhead in space per insertion. Thus, the space complexity of the $I$-projection range trees for a fixed subset $I$ consisting of $t$ indices over all cells of $\ensuremath{\mathcal{T}_c}$ is $O(n+ \delta\log^{d-t-1} \delta)$, where $\delta$ is the total number of insertions performed during the construction. The update procedure for the dynamic range tree~\cite[Theorem 8]{Mehlhorn-1990} takes $O(\log^{d-t-1} n)$ time if only insertions are allowed. The method by Driscoll et al. incurs only $O(1)$ overhead in update time per insertion. Thus, the construction time is $O(n+\delta \log^{d-t-1} n)$, where $\delta$ is again the total number of insertions performed during the construction.
The following lemma shows that $\delta$ is $O(n\log n)$, and thus our construction time is $O(n\log^{d-t} n)$ and the space complexity of the data structure is $O(n\log^{d-t} n)$ for each integer $t$ with $0<t\leq d$. Note that there are $2^d=O(1)$ subsets of $\{1,\ldots,d\}$. Therefore, the total space complexity and construction time are $O(n\log^{d-1} n)$. \begin{lemma} For a fixed subset $I$ of $\{1,\ldots,d\}$, the total number of insertions performed during the construction of all $I$-projection range trees for every node of $\ensuremath{\mathcal{T}_c}$ is $O(n\log n)$. \end{lemma} \begin{proof} We consider a fixed subset $I$ of $\{1,2,\ldots,d\}$, and bound the number of insertions performed during the construction of the $I$-projection range trees for all cells of $\ensuremath{\mathcal{T}_c}$. We use the notion of the \emph{rank} of a cell of $\ensuremath{\mathcal{T}_c}$ to analyze the number of insertions performed during the construction. Each leaf cell of $\ensuremath{\mathcal{T}_c}$ has rank $0$. For an internal node with cell $\scalebox{0.9}{\ensuremath{\square}}$ of $\ensuremath{\mathcal{T}_c}$, let $r$ be the largest rank among the children of $\scalebox{0.9}{\ensuremath{\square}}$ in $\ensuremath{\mathcal{T}_c}$. If there is exactly one child of $\scalebox{0.9}{\ensuremath{\square}}$ with rank $r$, we set the rank of $\scalebox{0.9}{\ensuremath{\square}}$ to $r$. Otherwise, we set the rank of $\scalebox{0.9}{\ensuremath{\square}}$ to $r+1$. In the original construction, we insert all points in $\scalebox{0.9}{\ensuremath{\square}}\setminus \scalebox{0.9}{\ensuremath{\square}}'$ into the $I$-projection range tree for $\scalebox{0.9}{\ensuremath{\square}}'$, where $\scalebox{0.9}{\ensuremath{\square}}'$ is a child of $\scalebox{0.9}{\ensuremath{\square}}$ containing the largest number of points of $P$.
Instead, imagine that we insert all points in $\scalebox{0.9}{\ensuremath{\square}}\setminus \scalebox{0.9}{\ensuremath{\square}}''$ into the $I$-projection range tree for $\scalebox{0.9}{\ensuremath{\square}}''$, where $\scalebox{0.9}{\ensuremath{\square}}''$ is a child of $\scalebox{0.9}{\ensuremath{\square}}$ with the largest rank. It is clear that the number of insertions performed for each internal node $\scalebox{0.9}{\ensuremath{\square}}$ by this new procedure is at least the number of insertions performed for $\scalebox{0.9}{\ensuremath{\square}}$ by the original procedure. We give an upper bound on the number of insertions performed by the new procedure, which proves the lemma. We claim that each point $p\in P$ is inserted into $I$-projection range trees for internal nodes at most $O(\log n)$ times during the construction. Note first that a cell of $\ensuremath{\mathcal{T}_c}$ has rank at most $\log n$, because any cell of rank $k$ has at least $2^k$ descendants. Assume that $p$ is inserted into an $I$-projection range tree. Let $\scalebox{0.9}{\ensuremath{\square}}_1$ and $\scalebox{0.9}{\ensuremath{\square}}_2$ be two child nodes (cells) of a cell $\scalebox{0.9}{\ensuremath{\square}}$ such that $p\in\scalebox{0.9}{\ensuremath{\square}}_1$ and $p$ is inserted into the $I$-projection range tree for $\scalebox{0.9}{\ensuremath{\square}}_2$ to form the $I$-projection range tree for their parent $\scalebox{0.9}{\ensuremath{\square}}$. There are two cases: the rank of $\scalebox{0.9}{\ensuremath{\square}}_1$ is smaller than the rank of $\scalebox{0.9}{\ensuremath{\square}}_2$, or the rank of $\scalebox{0.9}{\ensuremath{\square}}_1$ is equal to the rank of $\scalebox{0.9}{\ensuremath{\square}}_2$. In either case, the rank of $\scalebox{0.9}{\ensuremath{\square}}$ is larger than the rank of $\scalebox{0.9}{\ensuremath{\square}}_1$.
This means that as we move up a path toward the root node of $\ensuremath{\mathcal{T}_c}$, the rank values of the cells containing $p$ become larger whenever $p$ is inserted into the $I$-projection range trees of the cells. (The rank value remains the same or becomes larger if $p$ is not inserted.) Therefore, the insertion of $p$ occurs at most $O(\log n)$ times in total. Since there are $n$ points to be inserted, the total number of insertions is $O(n\log n)$. \end{proof} Therefore, we have the following lemma. \begin{lemma} We can construct a data structure of size $O(n\log^{d-1} n)$ in $O(n\log^{d-1} n)$ time so that the emptiness of $P\cap Q\cap \scalebox{0.9}{\ensuremath{\square}}$ can be checked in $O(\log^{d-t-1} n+\log n)$ time for any query rectangle $Q$ and any cell $\scalebox{0.9}{\ensuremath{\square}}$ of $\ensuremath{\mathcal{T}_c}$ intersecting no $\ensuremath{<_t}$-face of $Q$ for an integer $t$ with $0< t\leq d$. \end{lemma} For a cell $\scalebox{0.9}{\ensuremath{\square}}$ containing a corner of $Q$, there is no integer $t$ such that $\scalebox{0.9}{\ensuremath{\square}}$ intersects no $\ensuremath{<_t}$-face of $Q$. Thus we simply use the standard range tree on $P$ and check the emptiness of $P\cap Q\cap \scalebox{0.9}{\ensuremath{\square}}$ in $O(\log^{d-1}n)$ time. Notice that there are at most $2^d$ cells containing a vertex of $Q$ because the cells are pairwise disjoint. \subsection{Data Structure for Range-Counting Queries}\label{sec:DS-counting} The data structure for range-emptiness queries described in Section~\ref{sec:emptiness-DS} can be extended to a data structure for range-reporting queries. However, it does not seem to work for range-counting queries. This is because the dynamic range tree with fractional cascading by Mehlhorn and N{\"a}her~\cite{Mehlhorn-1990} does not seem to support counting queries. Instead, we use a dynamic range tree without fractional cascading, which increases the query time and the update time by a factor of $\log n$.
The other part is the same as the data structure for range-emptiness queries. Therefore, we have the following lemma. \begin{lemma} We can construct a data structure of size $O(n\log^{d-1} n)$ in $O(n\log^{d-1} n)$ time so that the number of points of $P$ contained in $Q\cap \scalebox{0.9}{\ensuremath{\square}}$ can be computed in $O(\log^{d-t} n+\log n)$ time for any query rectangle $Q$ and any cell $\scalebox{0.9}{\ensuremath{\square}}$ of $\ensuremath{\mathcal{T}_c}$ intersecting no $\ensuremath{<_t}$-face of $Q$ for an integer $t$ with $0< t\leq d$. \end{lemma} \section{\texorpdfstring{$k$}{k}-Median Range-Clustering Queries} \label{sec:median} In this section, we present a data structure and a query algorithm for $k$-median range-clustering queries. Given a set $P$ of $n$ points in $d$-dimensional Euclidean space for a constant $d\geq 2$, our goal is to preprocess $P$ such that $k$-median range-clustering queries can be answered efficiently. A $k$-median range-clustering query consists of a $d$-dimensional axis-parallel rectangle $Q$, an integer $k$ with $1\leq k\leq n$, and a value $\varepsilon>0$. We want to find a set $C\in\ensuremath{\mathcal{C}}_k$ with $\ensuremath{\Phi_\textsf{M}}(P_Q,C) \leq (1+\varepsilon) \ensuremath{\textsc{Opt}_k}(P_Q)$ efficiently, where $P_Q=P\cap Q$. Throughout this section, we use $\Phi$ to denote $\ensuremath{\Phi_\textsf{M}}$ unless otherwise specified. Our query algorithm is based on the single-shot algorithm by Har-Peled and Mazumdar~\cite{Har-Peled-2004}. A main difficulty in implementing their algorithm in our setting is that they construct a grid with respect to each point in an approximate center set, and then compute, for each grid cell, the number of points of $P_Q$ contained in it. Thus, to implement their approach in our setting directly, we need to apply a counting query to each grid cell. Moreover, we have to avoid overcounting, as a point might be contained in more than one grid cell of their grid structures.
To do this efficiently without overcounting, we use a \emph{unified grid} based on the standard quadtree. Although this grid is defined on the standard quadtree, we use the grid on the compressed quadtree in the implementation. To make the description easier, we use the standard quadtree instead of the compressed quadtree in defining the unified grid. \subsection{Coreset Construction from Approximate Centers} Assume that we are given a constant-factor approximation $A=\{a_1,\ldots,a_m\}$ to the $k$-median clustering of $P_Q$, where $m$ is possibly larger than $k$. In this subsection, we present a procedure that computes a $(k,\varepsilon)$-coreset of size $O(|A|\log n/\varepsilon^d)$. We follow the approach by Har-Peled and Mazumdar~\cite{Har-Peled-2004} and implement it in our setting. \subsubsection{General Strategy}\label{sec:sketch} We describe our general strategy first, and then show how to implement this algorithm. For notation not defined here, refer to Section~\ref{sec:single-shot}. We compute a $2\sqrt{d}$-approximation $R$ to the maximum of $d(p,A)/ (c_1|P_Q|)$ over all points $p\in P_Q$, that is, a value $R$ such that this maximum lies between $R/(2\sqrt{d})$ and $2\sqrt{d}R$, where $c_1>1$ is the approximation factor of $A$. Details can be found in Section~\ref{sec:LB}. Let $Q_{ij}$ be the cell of the standard quadtree containing $a_i$ with side length $\bar{R}_{j}$ satisfying $R2^j\leq \bar{R}_{j} < R2^{j+1}$ for $j=0,\ldots, M=\lceil 2\log (2\sqrt{d}c_1 |P_Q|)\rceil$. By construction, note that $Q_{ij_1}\subset Q_{ij_2}$ for any two indices $j_1$ and $j_2$ with $j_1 < j_2$. Note also that for any point $p$ in $P_Q$, there is at least one cell $Q_{ij}$ containing $p$ since there is a value $\bar{R}_j$ at least four times the maximum of $d(p,A)$.
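To make the cells $Q_{ij}$ concrete, here is a hypothetical sketch, assuming for illustration that the standard lengths are the powers of two; a quadtree cell is identified by its lower corner, obtained by rounding each coordinate down to a multiple of the side length:

```python
import math

def standard_length_in(lo, hi):
    # Smallest power of two in [lo, hi); assumes (for this sketch)
    # that standard lengths are exactly the powers of two.
    s = 2.0 ** math.ceil(math.log2(lo))
    assert lo <= s < hi
    return s

def quadtree_cell(p, side):
    # Lower corner of the standard-quadtree cell of side length `side`
    # that contains point p.
    return tuple(math.floor(c / side) * side for c in p)

R = 0.3
Rbar0 = standard_length_in(R, 2 * R)      # plays the role of \bar{R}_0
cell = quadtree_cell((1.7, 2.2), Rbar0)
assert all(a <= c < a + Rbar0 for a, c in zip(cell, (1.7, 2.2)))
# nesting Q_{ij_1} \subset Q_{ij_2}: the cell for a doubled side length
# has a lower corner that is coordinate-wise no larger
big = quadtree_cell((1.7, 2.2), 2 * Rbar0)
assert all(b <= a for a, b in zip(cell, big))
```

The nesting assertion reflects the observation above that the cells $Q_{ij}$ containing a fixed point form a chain under inclusion.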
We define the \emph{grid cluster} for $Q_{ij}$ as the union of at most $3^d$ grid cells of the standard quadtree with side length $\bar{R}_j$ that share a face with $Q_{ij}$, including $Q_{ij}$ itself. Note that the grid cluster for $Q_{ij}$ contains all points of $d$-dimensional space within distance $\bar{R}_j$ from $a_i$. Also, every point of $d$-dimensional space contained in the grid cluster for $Q_{ij}$ is within distance $2\sqrt{d}\bar{R}_j$ from $a_i$. See Figure~\ref{fig:grid}(b). Let $V_{i0}$ denote the grid cluster for $Q_{i0}$ and $V_{ij}$ be the grid cluster for $Q_{ij}$ excluding the grid cluster for $Q_{i(j-1)}$. Note that $V_{ij}$ is the union of at most $3^{d}(2^d-1)$ cells of the standard quadtree with side length $\bar{R}_j/2$, except for $j=0$. For $j=0$, the region $V_{i0}$ is the union of at most $3^d$ such cells. The \emph{first-level grid} for a fixed index $i$ consists of all cells of the standard quadtree with side length $\bar{R}_j/2$ contained in $V_{ij}$. For an illustration, see Figure~\ref{fig:grid}(c). We partition each cell of the first-level grid into the cells of the standard quadtree with side length $\bar{r}_j$ satisfying $\varepsilon \bar{R}_{j}/(40c_1d) \leq \bar{r}_j \leq 2\varepsilon \bar{R}_{j}/(40c_1d)$. The \emph{second-level grid} for $i$ consists of all such cells. Let $\mathcal{V}$ be the set of all grid cells which contain at least one point of $P_Q$. Note that the size of $\mathcal{V}$ is $O(|A|\log n/\varepsilon^d)$. We will see in Section~\ref{sec:Computing the Compressed Cells in the Grid} that this set can be obtained in $O(|A|\log^{d}n/\varepsilon + |A|\log n/\varepsilon^d)$ time. We consider the grid cells $\scalebox{0.9}{\ensuremath{\square}}$ of $\mathcal{V}$ one by one in increasing order of their side lengths, and do the following.
Let $P(\scalebox{0.9}{\ensuremath{\square}})$ be the set of points of $P_Q$ that are contained in $\scalebox{0.9}{\ensuremath{\square}}$, but are not contained in any other grid cell we have considered so far. We compute the number of points of $P(\scalebox{0.9}{\ensuremath{\square}})$, and assign this number to an arbitrary point of $P(\scalebox{0.9}{\ensuremath{\square}})$ as its weight. We call this weighted point the \emph{representative} of $\scalebox{0.9}{\ensuremath{\square}}$. Also, we say that a point of $P(\scalebox{0.9}{\ensuremath{\square}})$ is \emph{charged} to $\scalebox{0.9}{\ensuremath{\square}}$. Notice that every point of $P_Q$ is charged to exactly one cell of $\mathcal{V}$. We describe the details of this procedure in Section~\ref{sec:Computing the Compressed Cells in the Grid}. Let $S$ be the set of all such weighted points. Although the definition of the grid is different from the one by Har-Peled and Mazumdar~\cite{Har-Peled-2004}, we can still prove that $S$ is a $(k,\varepsilon)$-coreset for $P_Q$ of size $O(|A|\log n/\varepsilon^d)$ using an argument similar to theirs. \begin{lemma} The set $S$ is a $(k,\varepsilon)$-coreset for $P_Q$ of size $O(|A|\log n/\varepsilon^d)$. \end{lemma} \begin{proof} Let $Y$ be an arbitrary set of $k$ points in $d$-dimensional space. For a point $p\in P_Q$, let $\bar{p}$ be the representative of the cell which $p$ is charged to. Let $\mathcal{E}=|\Phi(P_Q,Y)-\Phi(S,Y)|$. Here, $\Phi(S,Y)$ is the weighted cost function between $S$ and $Y$, but we consider $\bar{p}$ as an unweighted point when we deal with $d(p,\bar{p})$ and $\dist{\bar{p}}{Y}$. By definition, $\mathcal{E} \leq \sum_{p\in P_Q}|\dist{p}{Y}-\dist{\bar{p}}{Y}|$. By the triangle inequality, it holds that $\dist{p}{Y}\leq \dist{p}{\bar{p}} + \dist{\bar{p}}{Y}$ and $\dist{\bar{p}}{Y}\leq \dist{p}{\bar{p}} + \dist{p}{Y}$ for every point $p$ in $P_Q$, which implies $|\dist{p}{Y}-\dist{\bar{p}}{Y}| \leq \dist{p}{\bar{p}}$.
Consider a point $p\in P_Q$ such that the cell $\scalebox{0.9}{\ensuremath{\square}}$ which $p$ is charged to comes from $V_{i0}$ for some index $i$. In this case, the side length of $\scalebox{0.9}{\ensuremath{\square}}$ is $\bar{r}_0$, which is at most $\frac{2\varepsilon} {40c_1 d} \bar{R}_0 \leq \frac{4\varepsilon }{40c_1 d}R$. Therefore we have $\dist{p}{\bar{p}}\leq \frac{4\varepsilon}{40c_1d}R$, and the sum of $d(p,\bar{p})$ over all points $p$ in $P_Q$ belonging to this case is at most $\frac{4\varepsilon}{40c_1d}R|P_Q|$, which is at most $\frac{4\varepsilon}{40c_1}\Phi(P_Q,A)$ since $c_1>1$, $d\geq 2$ and $d(p,A)\leq \Phi(P_Q,A)$ for any $p\in P_Q$. Now consider a point $p\in P_Q$ such that the cell $\scalebox{0.9}{\ensuremath{\square}}$ which $p$ is charged to comes from $V_{ij}$ for some indices $i$ and $j >0$. Since $j\neq 0$, the distance between $a_i$ and $p$ is at least $\bar{R}_j/4$. The side length of $\scalebox{0.9}{\ensuremath{\square}}$ is $\bar{r}_j$, which is at most $\frac{2\varepsilon }{40c_1 d}\bar{R}_j$. Therefore, we have $\dist{p}{\bar{p}}\leq \bar{r}_j\leq \frac{8\varepsilon}{40c_1 d}\dist{a_i}{p}$. Since we consider the grid cells in $\mathcal{V}$ in increasing order of their side lengths, $p$ is contained in no grid cell of $\mathcal{V}$ of side length at most $\bar{r}_j/2$. Therefore, $a_i$ is a constant-factor approximate nearest neighbor of $p$ among the points of $A$. More precisely, $\dist{a_i}{p} \leq 2d\cdot \dist{p}{A}$. Therefore, the sum of $d(p,\bar{p})$ over all points $p$ in $P_Q$ belonging to this case is at most $\frac{16\varepsilon}{40c_1} \sum_{p\in P_Q}d(p,A)$, which is $\frac{16\varepsilon}{40c_1}\Phi(P_Q,A)$. Therefore, we have \[\mathcal{E} \leq \sum_{p\in P_Q} d(p,\bar{p}) \leq \frac{4\varepsilon}{40c_1} \Phi(P_Q,A) + \frac{16\varepsilon}{40c_1}\Phi(P_Q,A) \leq \frac{\varepsilon}{c_1}\Phi(P_Q,A) \leq \varepsilon \ensuremath{\textsc{Opt}_k}(P_Q).
\] Then, by the definition of $(k,\varepsilon)$-coresets, the lemma holds. \end{proof} We implement the algorithm using the compressed quadtree, not the standard quadtree; the implementation is described in the following subsections. \subsubsection{Computing an Approximation to the Average Radius}\label{sec:LB} The first step is to compute a $2\sqrt{d}$-approximation $R$ to the maximum of $d(p,A)/(c_1|P_Q|)$ over all points $p\in P_Q$, where $c_1>1$ is the approximation factor of $A$. More precisely, we compute $R$ such that $R/(2\sqrt{d}) \leq \max_{p\in P_Q} d(p,A)/(c_1|P_Q|) \leq 2\sqrt{d}R$. We can compute it in $O(|A|\log^{d} n)$ time. Let $r^*$ be the maximum of $d(p,A)$ over all points $p\in P_Q$. We compute a $2\sqrt{d}$-approximation of $r^*$ and divide it by $c_1|P_Q|$ to compute $R$. Note that we can compute $|P_Q|$ in $O(\log^{d-1} n)$ time using the range tree constructed on $P$. Imagine that we have a standard grid with side length $\alpha>0$ covering $Q$. Consider the grid cells in this grid each of which contains a point of $A$. If the union of the grid clusters of all these grid cells contains $P_Q$, it holds that $d(p,A)\leq 2\alpha\sqrt{d}$ for any $p\in P_Q$. Otherwise, $d(p,A) >\alpha$ for some $p\in P_Q$. We use this observation to check whether $2\alpha\sqrt{d}\geq r^*$ or $\alpha \leq r^*$. Basically, we apply binary search on the standard lengths. However, there are arbitrarily many distinct standard lengths, so the binary search tests only $O(\log n)$ of them. For any value $x$, we use $\sfloor{x}$ and $\sceil{x}$ to denote the largest standard length which is smaller than or equal to $x$, and the smallest standard length which is larger than or equal to $x$, respectively. The following lemma is used for a subprocedure in the binary search.
\begin{lemma}\label{lem:subproc} Given a standard length $\alpha$, we can check whether $\alpha$ is at most $r^*$ or at least $r^*/(2\sqrt{d})$ in $O(|A|\log^{d-1} n)$ time. \end{lemma} \begin{proof} For each point $a$ of $A$, we find the cell of the standard quadtree with side length $\alpha$ that contains $a$, and take its grid cluster. The union $U$ of all these grid clusters consists of at most $3^d|A|$ cells of $\ensuremath{\mathcal{T}_s}$ with side length $\alpha$. We want to check whether every point of $P_Q$ is contained in $U$. If so, $r^*$ is at most $2\alpha\sqrt{d}$. Otherwise, $r^*$ is at least $\alpha$. To do this, for each cell $\scalebox{0.9}{\ensuremath{\square}}$ with side length $\alpha$ contained in $U$, we compute the number $N(\scalebox{0.9}{\ensuremath{\square}})$ of points of $P\cap Q$ that are contained in $\scalebox{0.9}{\ensuremath{\square}}$ in $O(\log^{d-1}n)$ time using the range tree on $P$. Since the cells are pairwise interior disjoint, the sum of $N(\scalebox{0.9}{\ensuremath{\square}})$ over all cells $\scalebox{0.9}{\ensuremath{\square}}$ is $|P_Q|$ if and only if all points of $P_Q$ are in the union of all such cells. Therefore, we can check whether all points of $P_Q$ are in $U$ in $O(|A|\log^{d-1} n)$ time. \end{proof} We apply binary search on a set $\mathcal{L}$ of standard lengths defined as follows. For every pair $(p,a)$ with $p\in P$ and $a\in A$, consider the difference $\ell$ between the $i$th coordinates of $p$ and $a$ for every $1\leq i\leq d$. Let $\mathcal{L}$ be the sorted list of $\sfloor{\ell}$ for every difference $\ell$. The size of $\mathcal{L}$ is $d |A|n$. Imagine that we have the sorted list $\mathcal{L}$. For every iteration, we choose the median $\alpha$ of the search space of $\mathcal{L}$ and check whether $\alpha\geq r^*/(2\sqrt{d})$ or $\alpha \leq r^*$. If $\alpha\geq r^*/(2\sqrt{d})$, we consider the lengths smaller than $\alpha$ in the current search space for the next iteration.
Otherwise, we consider the lengths larger than $\alpha$ for the next iteration. In this way, we obtain an interval $[\alpha_L,\alpha_U]$ satisfying that either $\alpha_L \leq r^*$ and $\alpha_U\geq r^*/(2\sqrt{d})$, or $r^*/(2\sqrt{d})\leq \alpha_U\leq r^*$, in $O(|A|\log^d n)$ time in total. We return $\alpha_U$ as the output. The following lemma shows that this binary search can be done in the same time without computing $\mathcal{L}$ explicitly. \begin{lemma} We can compute $\alpha_U$ in $O(|A|\log^d n)$ time after an $O(n\log n)$-time preprocessing on $P$. \end{lemma} \begin{proof} We apply binary search on $\mathcal{L}$ without computing it explicitly. As a preprocessing step, we compute a balanced binary search tree on the projection of $P$ onto each axis. We have $d$ binary search trees, and we can compute them in $O(n\log n)$ time. This time is subsumed by the total construction time. For the binary search, we locate every point of $A$ in the balanced binary search trees in $O(|A|\log n)$ time in total. Then we have two search spaces for each pair $(a,i)$ with $a\in A$ and $1\leq i\leq d$: the differences of the $i$th coordinates of $a$ and the points of $P$ lying on the $i$th axis in one direction from $a$, and the differences of the $i$th coordinates of $a$ and the points of $P$ lying on the $i$th axis in the other direction from $a$. (Precisely, we apply the $\sfloor{\cdot}$ operation to each element.) We can apply binary search on each search space using the balanced binary search trees. Note that we have $O(|A|)$ search spaces. To accomplish the task more efficiently, we apply binary search on all search spaces together as follows. We choose the median for each search space, and assign the size of the search space to the median as its weight in $O(|A|\log n)$ time in total. Then we choose the weighted median $\alpha$ of the weighted medians in $O(|A|)$ time.
Then we test whether $\alpha\geq r^*/(2\sqrt{d})$ or $\alpha \leq r^*$ in $O(|A|\log^{d-1}n)$ time by Lemma~\ref{lem:subproc}. Regardless of the result, the size of the total search space decreases by a constant factor. Therefore, in $O(\log n)$ iterations, we can obtain a desired interval in $O(|A|\log^d n)$ time in total. \end{proof} \begin{lemma} The standard length $\alpha_U$ is a $2\sqrt{d}$-approximation to $r^*$. \end{lemma} \begin{proof} We already showed that the interval $[\alpha_L, \alpha_U]$ satisfies one of the following conditions: either $\alpha_L \leq r^*$ and $\alpha_U\geq r^*/(2\sqrt{d})$, or $r^*/(2\sqrt{d})\leq \alpha_U\leq r^*$. In the latter case, the lemma holds immediately. If at least one standard length in $\mathcal{L}$ lies between $r^*/(2\sqrt{d})$ and $r^*$, the output interval belongs to the latter case by construction. Thus assume that there is no such standard length in $\mathcal{L}$, and $[\alpha_L,\alpha_U]$ belongs to the former case. Let $(p,a)$ be a pair with $p\in P_Q$ and $a\in A$ such that $d(p,a)=r^*$, the maximum of $d(p,A)$ over all points $p\in P_Q$. Let $i$ be an integer with $1\leq i\leq d$ that maximizes the length $\ell$ of the projection of the segment $\overline{pa}$ onto the $i$th axis. We have $r^*/\sqrt{d}\leq \ell\leq r^*$. By construction, $\alpha=\sfloor{\ell}$ is in $\mathcal{L}$, and $r^*/(2\sqrt{d}) \leq \alpha \leq r^*$. This contradicts the assumption that no standard length of $\mathcal{L}$ lies between $r^*/(2\sqrt{d})$ and $r^*$. Therefore, $\alpha_U$ is a $2\sqrt{d}$-approximation to $r^*$, and the lemma holds. \end{proof} \begin{lemma} We can compute a $2\sqrt{d}$-approximation to the maximum of $d(p,A)/(c_1|P_Q|)$ over all points $p$ in $P_Q$ in $O(|A|\log^d n)$ time.
\end{lemma} \subsubsection{Computing the Compressed Cells in the Grid}\label{sec:Computing the Compressed Cells in the Grid} As described in Section~\ref{sec:sketch}, we construct the second-level grid for each index $i$ for $i=1,\ldots,m$, and check whether each grid cell contains a point of $P_Q$. The set of the grid cells in the second-level grids containing a point of $P_Q$ is denoted by $\mathcal{V}$. Then we consider the grid cells $\scalebox{0.9}{\ensuremath{\square}}$ of $\mathcal{V}$ one by one in the increasing order of their side lengths, and compute the number of points of $P_Q$ contained in $\scalebox{0.9}{\ensuremath{\square}}$, but not contained in any other grid cells we have considered so far. Computing this number is quite tricky. To handle this problem, we observe that for any two cells in $\mathcal{V}$, either they are disjoint or one is contained in the other. This is because they are cells of the standard quadtree. For two cells $\scalebox{0.9}{\ensuremath{\square}}_1$ and $\scalebox{0.9}{\ensuremath{\square}}_2$ with $\scalebox{0.9}{\ensuremath{\square}}_1\subseteq \scalebox{0.9}{\ensuremath{\square}}_2$, let $i_1$ and $i_2$ be the indices such that $\scalebox{0.9}{\ensuremath{\square}}_1$ and $\scalebox{0.9}{\ensuremath{\square}}_2$ are grid cells of the second-level grids for $i_1$ and $i_2$, respectively. Since the grid cells in the same second-level grid are pairwise interior disjoint, we have $i_1\neq i_2$. In this case, for any point $p\in \scalebox{0.9}{\ensuremath{\square}}_2$, there is another grid cell $\scalebox{0.9}{\ensuremath{\square}}_1'$ containing $p$ in the second-level grid for $i_1$ with side length smaller than the side length of $\scalebox{0.9}{\ensuremath{\square}}_2$. Therefore, we do not consider any cell of $\mathcal{V}$ containing another cell of $\mathcal{V}$. Imagine that we remove all such cells from $\mathcal{V}$. Then the cells of $\mathcal{V}$ are pairwise interior disjoint. 
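Because the family of cells is laminar (any two cells are disjoint or nested), the nested cells can be detected by a sweep in increasing side-length order: when a cell is processed, any previously processed cell starting inside its range certifies a contained cell. A toy sketch, representing cells as half-open intervals (a one-dimensional stand-in, assuming the interval representation of cells used in the next subsection; a sorted list replaces the interval tree for illustration):

```python
import bisect

def cells_containing_another(intervals):
    # intervals: a laminar family given as (left, right, side), with
    # [left, right) half-open. Returns the indices of intervals that
    # contain a smaller interval of the family.
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][2])
    lefts = []           # left endpoints of intervals processed so far
    containing = set()
    for i in order:
        l, r, _ = intervals[i]
        j = bisect.bisect_left(lefts, l)
        if j < len(lefts) and lefts[j] < r:
            # a previously processed (smaller) interval starts in [l, r)
            containing.add(i)
        bisect.insort(lefts, l)
    return containing

# [0,8) contains [0,2); [8,16) contains no smaller interval
assert cells_containing_another([(0, 8, 8), (0, 2, 2), (8, 16, 8)]) == {0}
```

Laminarity is what makes the endpoint test sufficient: a shorter interval intersecting $[l,r)$ must be contained in it, so checking left endpoints alone is enough.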
Therefore, it suffices to compute the number of points of $P_Q$ contained in each cell of $\mathcal{V}$, which can be done efficiently using the data structure described in Section~\ref{sec:DS}. In the following, we show how to compute the set $\mathcal{V}$, after removing all cells containing another cell, efficiently. To do this, we first compute the cells in the first-level grids, and discard some of them. Then we subdivide the remaining cells into cells in the second-level grids. More specifically, let $\mathcal{V}_1$ be the set of the cells of the first-level grids. We first compute the cells in $\mathcal{V}_1$, and then remove all cells in $\mathcal{V}_1$ containing another cell in $\mathcal{V}_1$. Then the cells in $\mathcal{V}_1$ are pairwise interior disjoint. Then we compute the second-level grid cells in each cell of $\mathcal{V}_1$. The second-level grid cells we obtain are the cells of $\mathcal{V}$ containing no other cell in $\mathcal{V}$. Also, in the following, to apply Lemma~\ref{lem:counting-query}, we consider the compressed cells instead of the cells in the standard quadtree. \subparagraph{First-Level Grid.} We compute the cells of the first-level grid for every index $i$. There are $O(|A|\log n)$ cells of the first-level grids in total. We compute them in $O(|A|\log n)$ time and compute the compressed cell for each cell in $O(|A|\log^2 n)$ time in total by Lemma~\ref{lem:cell-query}. We remove all compressed cells containing another compressed cell in $O(|A|\log^2 n)$ time using the following lemma. \begin{lemma} We can find all compressed cells of the cells of the first-level grids containing another compressed cell in $O(|A|\log^2 n)$ time in total. \end{lemma} \begin{proof} Recall that a cell of the compressed quadtree can be represented as an interval using the $\mathcal{Z}$-order. The description of this order is given in Section~\ref{sec:quad} of the appendix.
Let $\langle\overline{\cell}_1,\ldots,\overline{\cell}_{k'}\rangle$ be the sequence of the compressed cells of the cells of the first-level grids in increasing order of their side lengths. For each index $t$ with $1\leq t\leq k'$, we check whether there is an index $t'<t$ with $\overline{\cell}_{t'}\subseteq \overline{\cell}_t$. To do this, we consider the cells from $\overline{\cell}_1$ to $\overline{\cell}_{k'}$ and maintain an interval tree $\mathcal{I}$. The interval tree contains all intervals corresponding to the cells we have considered so far. Since the sequence of the insertions to the interval tree is known in advance, each insertion can be done in $O(\log n)$ time. To check whether there is an index $t'$ with $\overline{\cell}_{t'}\subseteq \overline{\cell}_t$ for some index $t$, we check whether the interval corresponding to $\overline{\cell}_t$ contains another interval in $\mathcal{I}$. This can be done in $O(\log n)$ time. Since there are $O(|A|\log n)$ cells of the first-level grids, we can find all compressed cells of the cells of the first-level grids containing another compressed cell in $O(|A|\log^2 n)$ time in total. \end{proof} The resulting grid cells are pairwise disjoint and contain $P_Q$ in their union. However, it is possible that a grid cell does not contain a point of $P_Q$. \subparagraph{Second-Level Grids.} For each compressed cell $\overline{\cell}$ of the cells of the first-level grids, we compute the second-level grids constructed from it. To do this, we traverse the subtree of $\overline{\cell}$ of $\ensuremath{\mathcal{T}_c}$ towards its leaf nodes. More precisely, let $\mathcal{W}$ be the singleton set containing $\overline{\cell}$. We pick the largest cell of $\mathcal{W}$, remove it, and insert its children into $\mathcal{W}$. We do this until the largest cell of $\mathcal{W}$ has side length at most $\bar{r}_j$, assuming that $\overline{\cell}$ comes from a grid cluster $V_{ij}$. Notice that some of these cells may not intersect $Q$.
This takes time linear in the number of grid cells in the second-level grids. \subparagraph{Range-Counting for Each Compressed Cell.} The next step is to compute the number of points of $P_Q$ contained in each cell $\overline{\cell}$ in the second-level grids. If $\overline{\cell}$ is contained in $Q$, we already have the number of points of $P_Q$ contained in $\overline{\cell}$, which is computed in the preprocessing phase. If $\overline{\cell}$ contains a corner of $Q$, we use the range tree constructed on $P$. Since there are $O(1)$ such cells, we can handle them in $O(\log^{d-1}n)$ time in total. For the remaining cells, we use the data structure in Section~\ref{sec:DS-counting}. Then we can handle them in $O(\sum_{t=1}^{d-1} m_t \log^{d-t} n)$ time, where $m_t$ is the number of the cells of $\mathcal{V}$ intersecting no $\ensuremath{<_t}$-face of $Q$ but intersecting a $t$-dimensional face of $Q$ for an integer $t$ with $0<t< d$. We have $m_t=O(|A|\log n/\varepsilon^{t})$. Therefore, the total running time for the range-counting queries is $O(|A|\log^2 n/\varepsilon^{d-1} + |A|\log^d n/\varepsilon + \log^{d-1} n + |A|\log n/\varepsilon^d )$, which is $O(|A|\log^d n/\varepsilon + |A|\log n/\varepsilon^d )$. Therefore, we have the following lemma. \begin{lemma}\label{lem:approx-center-to-coreset} Given a constant-factor approximation $A$ to the $k$-median clustering of a set $P$ of $n$ points in $d$-dimensional space such that $|A|$ is possibly larger than $k$, we can compute a $(k,\varepsilon)$-coreset of $P_Q$ of size $O(|A|\log n/\varepsilon^d)$ in $O(|A|\log^d n/\varepsilon + |A|\log n/\varepsilon^d )$ time for any rectangle $Q$, any integer $k$ with $1\leq k\leq n$ and any value $\varepsilon>0$. \end{lemma} \subsection{Smaller Coreset} By Lemma~\ref{lem:constant-coreset}, we can obtain a $(k,2)$-coreset $S$ of $P_Q$ of size $O(k\log^{d} n)$ in $O(k\log^{d} n)$ time for any query rectangle $Q$ using a data structure of size $O(n\log^d n)$.
A $(k,c)$-coreset of $S$ is also a $(k,2c)$-coreset of $P_Q$ for any constant $c>1$ by the definition of the coreset. We compute a $(k,2)$-coreset $S'$ of $S$, which is a $(k,4)$-coreset of $P_Q$, of size $O(k\log n)$ in $O(k\log^{d} n+k^5 \log^9 n)$ time by~\cite[Lemma 5.1]{Har-Peled-2004} by setting $\varepsilon=2$. Using this $(k,4)$-coreset of $P_Q$ of size $O(k\log n)$, we can obtain constant-factor approximate centers of size $k$ as Har-Peled and Mazumdar~\cite{Har-Peled-2004} do. We compute a constant-factor $k$-center clustering $\ensuremath{\mathcal{C}}_0$ of the coreset using~\cite{Gon1985}. Then we apply the local search algorithm due to Arya et al.~\cite{Arya-2004} to $\ensuremath{\mathcal{C}}_0$ and $S'$ to obtain a constant-factor approximation to $\ensuremath{\textsc{Opt}_k}(S)$. This takes $O(|S'|^2 k^3 \log n)=O(k^5\log^3 n)$ time, and finally $\ensuremath{\mathcal{C}}_0$ becomes a constant-factor approximation to $\ensuremath{\textsc{Opt}_k}(S)$ of size $k$~\cite{Har-Peled-2004}. Therefore, we can compute a $(k,\varepsilon)$-coreset of size $O(k\log n/\varepsilon^d)$ by Lemma~\ref{lem:approx-center-to-coreset}, using the constant-factor approximation $\ensuremath{\mathcal{C}}_0$ to $\ensuremath{\textsc{Opt}_k}(S)$ of size $k$. \begin{lemma} Given a query range $Q\subseteq\mathbb{R}^d$, an integer $k$ with $1\leq k\leq n$, and a value $\varepsilon>0$ as a query, we can compute a $(k,\varepsilon)$-coreset of $P_Q$ for the $k$-median range-clustering of size $O(k\log n/\varepsilon^d)$ in $O(k^5\log^9 n+ k\log^d n/\varepsilon + k\log n/\varepsilon^d )$ time. \end{lemma} \begin{theorem} Let $P$ be a set of $n$ points in $d$-dimensional space.
There is a data structure of size $O(n\log^d n)$ such that given a query range $Q\subseteq\mathbb{R}^d$, an integer $k$ with $1\leq k\leq n$, and a value $\varepsilon>0$ as a query, a $(1+\varepsilon)$-approximation to the $k$-median range-clustering of $P\cap Q$ can be computed in $O(k^5\log^9 n+k\log^d n/\varepsilon +T_\textnormal{ss}(k\log n/\varepsilon^d))$ time, where $T_{\textnormal{ss}}(N)$ denotes the running time of a $(1+\varepsilon)$-approximation single-shot algorithm for the $k$-median clustering of $N$ weighted input points. \end{theorem} If we use the algorithm in~\cite{Har-Peled-2004} for computing a $(1+\varepsilon)$-approximation to the $k$-median clustering, $T_\textnormal{ss}(N)=O(N\log^2 W+k^5\log^9 W+ \varrho k^2\log^5 W)$, where $W$ is the total weight of the input points and $\varrho =\exp[O((1+\log (1/\varepsilon))/\varepsilon)^{d-1}]$. Therefore, we have the following corollary. In the running time of the corollary, the term $k\log n/\varepsilon^d$ is subsumed by the term $\varrho k^2\log^5 n$. \begin{corollary} Let $P$ be a set of $n$ points in $d$-dimensional space. There is a data structure of size $O(n\log^d n)$ such that given a query range $Q\subseteq\mathbb{R}^d$, an integer $k$ with $1\leq k\leq n$, and a value $\varepsilon>0$ as a query, a $(1+\varepsilon)$-approximation to the $k$-median range-clustering of $P\cap Q$ can be computed in $O(\varrho k^2\log^5 n+ k^5\log^9 n+ k\log^d n/\varepsilon)$ time, where $\varrho =\exp[O((1+\log (1/\varepsilon))/\varepsilon)^{d-1}]$. \end{corollary} \subparagraph{Remark.} The construction of the coreset for the $k$-means clustering is similar to the construction of the coreset for the $k$-median clustering in~\cite{Har-Peled-2004}.
The only difference is that for the $k$-means clustering, $\ensuremath{\Phi_\textsf{m}}$ is used instead of $\ensuremath{\Phi_\textsf{M}}$ and $R=\sqrt{\ensuremath{\Phi_\textsf{m}} (P,A)/(c_1n)}$ is used instead of $R=\ensuremath{\Phi_\textsf{M}}(P,A)/(c_1n)$. Therefore, we can compute a $(k,\varepsilon)$-coreset for the $k$-means clustering of size $O(k\log n/\varepsilon^d)$ in $O(k^5\log^9 n+ k\log^{d} n/\varepsilon + k\log n/\varepsilon^d)$ time. \begin{theorem} Let $P$ be a set of $n$ points in $d$-dimensional space. There is a data structure of size $O(n\log^d n)$ such that given a query range $Q\subseteq\mathbb{R}^d$, an integer $k$ with $1\leq k\leq n$, and a value $\varepsilon>0$ as a query, a $(1+\varepsilon)$-approximation to the $k$-means range-clustering of $P\cap Q$ can be computed in $O(k^5\log^9 n+k\log^d n/\varepsilon +T_\textnormal{ss}(k\log n/\varepsilon^d))$ time, where $T_{\textnormal{ss}}(N)$ denotes the running time of a $(1+\varepsilon)$-approximation single-shot algorithm for the $k$-means clustering of $N$ weighted input points. \end{theorem} Since a $(1+\varepsilon)$-approximate $k$-means clustering of $N$ weighted points of total weight $W$ can be computed in $O(N\log^2 W+k^5 N\log^5 W + k^{k+2} \varepsilon^{(-2d+1)k} \log^{k+1} W\log^k (1/\varepsilon))$ time~\cite{Har-Peled-2004}, we have the following corollary. \begin{corollary} Let $P$ be a set of $n$ points in $d$-dimensional space. There is a data structure of size $O(n\log^d n)$ such that given a query range $Q\subseteq\mathbb{R}^d$, an integer $k$ with $1\leq k\leq n$, and a value $\varepsilon>0$ as a query, a $(1+\varepsilon)$-approximation to the $k$-means range-clustering of $P\cap Q$ can be computed in $O(k^6\log^6 n/\varepsilon^d + k^{k+2} \varepsilon^{(-2d+1)k} \log^{k+1} n\log^k (1/\varepsilon) + k^5\log^9 n+ k\log^{d} n/\varepsilon)$ time.
\end{corollary} \section{\texorpdfstring{$k$}{k}-Center Range-Clustering Queries} In this section, we are given a set $P$ of $n$ points in $d$-dimensional Euclidean space for a constant $d\geq 2$. Our goal is to preprocess $P$ so that $k$-center range-clustering queries can be answered efficiently. A range-clustering query consists of a rectangle $Q\subseteq\mathbb{R}^d$, an integer $k$ with $1\leq k\leq n$, and a value $\varepsilon>0$. We want to find a set $C\in\ensuremath{\mathcal{C}}_k$ with $\ensuremath{\Phi_\textsf{c}}(P_Q,C) \leq (1+\varepsilon) \ensuremath{\textsc{Opt}_k}(P_Q)$ efficiently, where $P_Q=P\cap Q$. In this section, we use $\Phi$ to denote $\ensuremath{\Phi_\textsf{c}}$. \subparagraph{Sketch of the Algorithm by Abrahamsen et al.} Abrahamsen et al.~\cite{Abrahamsen-2017} present a data structure and a query algorithm for this problem. They construct a compressed quadtree on $P$ as a data structure. Their query algorithm consists of two phases. In the first phase, they compute a lower bound $\textsc{lb}$ of $\ensuremath{\textsc{Opt}_k}(P_Q)$, and then obtain a set of $O(k)$ pairwise interior disjoint cells of $\ensuremath{\mathcal{T}_c}$ with side length at most $\textsc{lb}$ whose union contains all points of $P_Q$. In the second phase, they subdivide the cells they obtained so that the side length of each cell becomes at most $\varepsilon\textsc{lb}$. Then for each cell that contains a point of $P\cap Q$, they choose an arbitrary point in the cell. They use the set of all chosen points as a $(k,\varepsilon)$-coreset. By applying a single-shot algorithm for the $k$-center clustering to the coreset, they can obtain a $(1+\varepsilon)$-approximate $k$-center range-clustering. The first phase takes $O(k\log^{d-1}n )$ time, and the second phase takes $O(k(\log n/\varepsilon)^{d-1})$ time. In this section, we show that the second phase can be done in $O(k\log^{d-1} n + k/\varepsilon^d)$ time using the data structure described in Section~\ref{sec:DS}.
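The second-phase subdivision just sketched can be illustrated in isolation; a toy standard-quadtree version in Python (the actual algorithm traverses the compressed quadtree $\ensuremath{\mathcal{T}_c}$, as described below, but the $O(k/\varepsilon^d)$ cell count is the same):

```python
from itertools import product

def refine(anchor, side, threshold):
    # Split an axis-aligned quadtree cell (given by its lower corner and
    # side length) into its 2^d children until every produced cell has
    # side length at most `threshold`; return the resulting cells.
    if side <= threshold:
        return [(anchor, side)]
    half = side / 2
    out = []
    for offs in product((0, 1), repeat=len(anchor)):
        child = tuple(a + o * half for a, o in zip(anchor, offs))
        out.extend(refine(child, half, threshold))
    return out

# one cell of side lb refined to side eps*lb yields O(1/eps^d) cells:
# here d = 2 and eps = 1/4 give 16 cells
leaves = refine((0.0, 0.0), 1.0, 0.25)
assert len(leaves) == 16
assert all(s <= 0.25 for _, s in leaves)
```

Applying this to each of the $O(k)$ first-phase cells with threshold $\varepsilon\textsc{lb}$ produces the $O(k/\varepsilon^d)$ cells of the second phase.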
\subparagraph{Data Structure.} We construct a compressed quadtree $\ensuremath{\mathcal{T}_c}$ on $P$. For each cell $\scalebox{0.9}{\ensuremath{\square}}$ of $\ensuremath{\mathcal{T}_c}$, we store the point of $P\cap \scalebox{0.9}{\ensuremath{\square}}$ closest to each facet of $\scalebox{0.9}{\ensuremath{\square}}$. Also, we mark whether or not $\scalebox{0.9}{\ensuremath{\square}}$ contains a point of $P$. Using this information, given the node of $\ensuremath{\mathcal{T}_c}$ corresponding to a cell $\scalebox{0.9}{\ensuremath{\square}}$, we can check whether $P_Q\cap\scalebox{0.9}{\ensuremath{\square}}$ is empty or not in constant time if $\scalebox{0.9}{\ensuremath{\square}}$ crosses only one facet of $Q$ or is contained in $Q$. Also, we construct $I$-projection range trees on $P$ described in Section~\ref{sec:DS}. The total space complexity is $O(n\log^{d-1} n)$. \subparagraph{Query Algorithm.} We are given a query rectangle $Q$, an integer $k$ and a value $\varepsilon$. Also, assume that we have the cells obtained from the first phase of the algorithm by Abrahamsen et al.~\cite{Abrahamsen-2017}. For each cell $\scalebox{0.9}{\ensuremath{\square}}$ obtained from the first phase, we traverse the subtree of $\ensuremath{\mathcal{T}_c}$ rooted at $\scalebox{0.9}{\ensuremath{\square}}$ towards its leaf nodes until we reach the cells with side length at most $\varepsilon\textsc{lb}$. More precisely, let $\mathcal{G}(\scalebox{0.9}{\ensuremath{\square}})$ be a set of descendants of $\scalebox{0.9}{\ensuremath{\square}}$ in $\ensuremath{\mathcal{T}_c}$, which is initially set to the singleton set containing $\scalebox{0.9}{\ensuremath{\square}}$. We remove the largest cell of $\mathcal{G}(\scalebox{0.9}{\ensuremath{\square}})$ from $\mathcal{G}(\scalebox{0.9}{\ensuremath{\square}})$, and insert its children into $\mathcal{G}(\scalebox{0.9}{\ensuremath{\square}})$.
We do this until the largest cell of $\mathcal{G}(\scalebox{0.9}{\ensuremath{\square}})$ has side length at most $\varepsilon\textsc{lb}$. Then we remove the cells of $\mathcal{G}(\scalebox{0.9}{\ensuremath{\square}})$ not intersecting $Q$ from $\mathcal{G}(\scalebox{0.9}{\ensuremath{\square}})$. The union $\mathcal{G}$ of all $\mathcal{G}(\scalebox{0.9}{\ensuremath{\square}})$'s is the set of all cells obtained in the second phase. This takes time linear in the number of cells in $\mathcal{G}$, which is $O(k/\varepsilon^d)$. Each cell $\scalebox{0.9}{\ensuremath{\square}}$ of $\mathcal{G}$ belongs to one of three types: $\scalebox{0.9}{\ensuremath{\square}}$ is contained in $Q$, $\scalebox{0.9}{\ensuremath{\square}}$ contains a corner of $Q$, or otherwise. We want to check whether or not each cell $\scalebox{0.9}{\ensuremath{\square}}$ of $\mathcal{G}$ contains a point of $P\cap Q$. For a cell of the first type, we can check this in $O(1)$ time using the information stored in $\scalebox{0.9}{\ensuremath{\square}}$. There are $O(k/\varepsilon^d)$ cells of the first type. For a cell $\scalebox{0.9}{\ensuremath{\square}}$ of the second type, we use the range tree on $P$ and check the emptiness in $O(\log^{d-1} n)$ time. There are $O(1)$ cells of the second type. For a cell $\scalebox{0.9}{\ensuremath{\square}}$ of the third type, there is an integer $t$ with $0<t<d$ such that $\scalebox{0.9}{\ensuremath{\square}}$ intersects no $\ensuremath{<_t}$-face of $Q$ but intersects a $t$-dimensional face of $Q$. There are $O(k/\varepsilon^t)$ cells of $\mathcal{G}$ intersecting no $\ensuremath{<_t}$-face of $Q$ but intersecting a $t$-dimensional face of $Q$. Therefore, the cells of the third type can be handled in $O(k\sum_{t=1}^{d-1} (\log^{d-t-1}n+\log n)/\varepsilon^{t})=O(k\log^{d-2} n/ \varepsilon + k\log n/\varepsilon^{d-1})$ time in total.
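The refinement loop over $\mathcal{G}(\scalebox{0.9}{\ensuremath{\square}})$ described above can be sketched with a max-heap keyed by side length; the \texttt{Cell} class below is a toy stand-in for the compressed-quadtree nodes, not the paper's actual data structure.

```python
import heapq

class Cell:
    """Toy stand-in for a compressed-quadtree node."""
    def __init__(self, side, children=()):
        self.side = side
        self.children = list(children)

def refine(cell, max_side):
    """Repeatedly replace the largest cell by its children until every
    remaining cell has side length at most max_side (or is a leaf)."""
    heap = [(-cell.side, id(cell), cell)]   # max-heap via negated key
    result = []
    while heap:
        neg_side, _, c = heapq.heappop(heap)
        if -neg_side <= max_side or not c.children:
            result.append(c)                # small enough, or a leaf
        else:
            for child in c.children:
                heapq.heappush(heap, (-child.side, id(child), child))
    return result
```

Each pop either outputs a cell or replaces it by its children, so the number of heap operations is linear in the number of cells produced, matching the $O(k/\varepsilon^d)$ bound over all phase-one cells.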
The overall running time is $O(k\log^{d-1} n+k/\varepsilon^d + k\log^{d-2} n/ \varepsilon + k\log n/\varepsilon^{d-1})$, which is $O(k\log^{d-1} n+k/\varepsilon^d+k\log n/\varepsilon^{d-1})$. Therefore, we have the following lemma. The paper~\cite{Abrahamsen-2017} deals with a more general cost function which they call a \emph{$(c,f(k))$-regular function}. For the definition, see Definition~1 of~\cite{Abrahamsen-2017}. The method in this section can be directly applied to the $(c,f(k))$-regular function. \begin{lemma} Given any query range $Q$, an integer $k$ with $1\leq k\leq n$, and a value $\varepsilon>0$ as a query, we can compute a $(k,\varepsilon)$-coreset of $P_Q$ for the $k$-center range-clustering of size $O(k/\varepsilon^d)$ in $O(k\log^{d-1} n+ k/\varepsilon^d+k\log n/\varepsilon^{d-1})$ time using a data structure of size $O(n\log^{d-1} n)$. \end{lemma} \begin{theorem} Let $P$ be a set of $n$ points in $d$-dimensional Euclidean space. There is a data structure of size $O(n\log^{d-1} n)$ such that given a query range $Q\subseteq\mathbb{R}^d$, an integer $k$ with $1\leq k\leq n$, and a value $\varepsilon>0$ as a query, a $(1+\varepsilon)$-approximation to the $k$-center range-clustering of $P\cap Q$ can be computed in $O(k\log^{d-1}n +k\log n/\varepsilon^{d-1}+T_\textnormal{ss} (k/\varepsilon^d))$ time, where $T_{\textnormal{ss}}(N)$ denotes the running time of a $(1+\varepsilon)$-approximate single-shot algorithm for the $k$-center clustering of $N$ input points. \end{theorem} The algorithm by Agarwal and Procopiuc computes the exact $k$-center clustering of $N$ points in $d$-dimensional space under any $L_p$-metric in $N^{O(k^{1-1/d})}$ time~\cite{agarwal2002}. \begin{corollary} Let $P$ be a set of $n$ points in $d$-dimensional Euclidean space.
There is a data structure of size $O(n\log^{d-1} n)$ such that given a query range $Q\subseteq\mathbb{R}^d$, an integer $k$ with $1\leq k\leq n$, and a value $\varepsilon>0$ as a query, a $(1+\varepsilon)$-approximation to the $k$-center range-clustering of $P\cap Q$ can be computed in $O(k\log^{d-1}n+k\log n/\varepsilon^{d-1} + (k/\varepsilon^d)^{O(k^{1-1/d})})$ time. \end{corollary} \section{Approximate Diameter and Radius of a Point Set} In this section, we are given a set $P$ of $n$ points in $d$-dimensional Euclidean space. Our goal in this section is to preprocess $P$ so that given any orthogonal range $Q$ and a value $\varepsilon>0$, an approximate diameter (or radius) of $P \cap Q$ can be computed efficiently. This problem can be considered a special case of the clustering problem in which the number of clusters is one. This problem was studied by Gupta et al.~\cite{Gupta} and Nekrich and Smid~\cite{Nekrich-2010}. Gupta et al.~\cite{Gupta} considered this problem in the plane and presented two data structures. One has size $O(n\log^2 n)$ and supports queries with arbitrary approximation factors $1+\varepsilon$ in $O(\log n/\sqrt{\varepsilon}+\log^3 n)$ query time, while the other has smaller size $O(n\log n/\sqrt{\delta})$ but supports only queries with the \textit{fixed} approximation factor $1+\delta$ with $0<\delta<1$ that is used for constructing the data structure. Later, Nekrich and Smid presented a data structure for this problem in higher-dimensional space that has size $O(n \log^d n)$ and supports diameter (or radius) queries with the fixed approximation factor $1+\delta$ in $O(\log^{d-1} n/\delta^{d-1})$ query time. Here, $\delta$ is the approximation factor given for the construction of their data structure, and therefore it is fixed for queries to the data structure. That is, the data structure does not support any queries with approximation factors other than $(1+\delta)$.
We present data structures and a query algorithm for this problem. In the plane, our data structure has size $O(n\log n)$ and supports diameter (or radius) queries with arbitrary approximation factors $1+\varepsilon$ in $O(\log n/\varepsilon)$ query time. In higher dimensions, our data structures not only allow queries with arbitrary approximation factors $1+\varepsilon$, but also improve the size and the query time of the data structure. The size is improved by a factor of $\log n$. Even when $\varepsilon$ is fixed to $\delta$, the query time is improved by a factor of $\min\{1/\delta^{d-1}, \log^{d-2} n\}$. \subparagraph{$\varepsilon$-Coresets.} Our query algorithm starts by selecting a set $S$ of points from $P\cap Q$, which we call an $\varepsilon$-coreset of $P\cap Q$, such that the diameter of $S$ is a $(1+\varepsilon)$-approximation of the diameter of $P\cap Q$. Let $\textsc{apx}$ be a value such that $D \leq \textsc{apx} \leq c\cdot D$ for a constant $c>1$, where $D$ is the diameter of $P\cap Q$. Consider a standard grid of side length $\varepsilon\textsc{apx}$ covering $Q$. Assume that we pick an arbitrary point in each grid cell containing a point of $P\cap Q$. Then the set of all picked points is an $\varepsilon$-coreset of $P\cap Q$ of size $O(1/\varepsilon^d)$. Let $\mathcal{D}$ be the set of all grid cells containing a point of $P\cap Q$. We can obtain a smaller $\varepsilon$-coreset as follows. We first obtain a subset $\mathcal{D}'\subseteq \mathcal{D}$ and choose an arbitrary point in each grid cell of $\mathcal{D}'$ as an $\varepsilon$-coreset. If a grid cell of $\mathcal{D}$ intersects the boundary of $Q$, we move it from $\mathcal{D}$ to $\mathcal{D}'$. The remaining cells are contained in $Q$. For the remaining cells of $\mathcal{D}$, consider the grid cells of $\mathcal{D}$ whose centers have the same coordinates, except for only one coordinate, say the $i$th coordinate.
We add the grid cells with the largest $i$th coordinate and smallest $i$th coordinate to $\mathcal{D}'$. See Figure~\ref{fig:coreset-diam}. Then $\mathcal{D}'$ consists of $O(1/\varepsilon^{d-1})$ grid cells. We choose an arbitrary point of $P$ contained in each grid cell of $\mathcal{D}'$. The set $S$ of all chosen points is an $\varepsilon$-coreset of $P\cap Q$ of size $O(1/\varepsilon^{d-1})$. \begin{figure} \begin{center} \includegraphics[width=0.3\textwidth]{coreset-diam.pdf} \caption{\small The gray cells are chosen as an $\varepsilon$-coreset of size $O(1/\varepsilon^{d-1})$. \label{fig:coreset-diam}} \end{center} \end{figure} \begin{lemma} The set $S$ is an $\varepsilon$-coreset of $P\cap Q$. \end{lemma} \begin{proof} Let $p$ and $q$ be two points of $P\cap Q$ such that $d(p,q)$ is the diameter of $P\cap Q$. Consider the hyperplane $h$ orthogonal to $\overline{pq}$ and passing through $p$. One (closed) half-space bounded by $h$ contains all points of $P\cap Q$, and the other (open) half-space $\mathcal{H}$ bounded by $h$ contains no point of $P\cap Q$. See Figure~\ref{fig:coreset-diam}. There is a ray starting from $p$ and parallel to an axis that does not intersect $\mathcal{H}$. This means that a grid cell in the grid cluster of the grid cell containing $p$ is chosen by the construction, and the same holds for $q$. Let $p'$ and $q'$ be the corresponding chosen points, so that $d(p,p')$ and $d(q,q')$ are $O(\varepsilon\textsc{apx})$. The diameter of $S$ is at least $d(p',q')$, and we have $d(p',q') \geq d(p,q) - O(\varepsilon\textsc{apx}) \geq D - c'\varepsilon D = (1-c'\varepsilon) D$ for a constant $c'$, using $\textsc{apx}\leq c\cdot D$. Since $S\subseteq P\cap Q$, the diameter of $S$ is also at most $D$. Therefore, after rescaling $\varepsilon$ by a constant factor, $S$ is an $\varepsilon$-coreset of $P\cap Q$. \end{proof} \subparagraph{Data Structure.} We construct the standard range tree on $P$, the compressed quadtree on $P$, and the data structure on $P$ described in Section~\ref{sec:DS}. We also maintain another data structure similar to the one described in Section~\ref{sec:DS}.
For each index $i$, we project all points of $P$ onto a hyperplane orthogonal to the $i$th axis. Let $P_i$ be the set of such projected points. If more than one point of $P$ is projected to the same point, we consider them as distinct points in $P_i$. We construct the compressed quadtree $\ensuremath{\mathcal{T}_c}(P_i)$ on $P_i$ that is aligned to $\ensuremath{\mathcal{T}_c}$. Given a cell $\scalebox{0.9}{\ensuremath{\square}}$ of $\ensuremath{\mathcal{T}_c}(P_i)$ and a value $x$, we want to find the point with the largest $i$th coordinate smaller than $x$ among the points of $P_i\cap \scalebox{0.9}{\ensuremath{\square}}$. To do this, we do the following. For each point $p_i$ in a cell $\scalebox{0.9}{\ensuremath{\square}}_i$ of $\ensuremath{\mathcal{T}_c}(P_i)$, there is a unique point $p$ in $P$ whose projection onto the hyperplane orthogonal to the $i$th axis is $p_i$. We assign the $i$th coordinate of $p$ to $p_i$ as its weight. Then we compute the 1-dimensional range tree (balanced binary search tree) on $P_i\cap \scalebox{0.9}{\ensuremath{\square}}_i$ with respect to their weights for each cell $\scalebox{0.9}{\ensuremath{\square}}_i$ of $\ensuremath{\mathcal{T}_c}(P_i)$. As we did in Section~\ref{sec:DS}, we use a persistent data structure instead of computing the balanced binary search tree explicitly. Since we consider the balanced binary search tree here, we do not need to use fractional cascading. Each insertion takes $O(\log n)$ time. Therefore, this data structure has size $O(n\log^{d-1} n)$ and can be constructed in $O(n\log^{d-1} n)$ time in total. For every cell $\scalebox{0.9}{\ensuremath{\square}}$ of $\ensuremath{\mathcal{T}_c}$ and an index $i$, there is a cell $\scalebox{0.9}{\ensuremath{\square}}_i$ of $\ensuremath{\mathcal{T}_c}(P_i)$ such that the projection of $\scalebox{0.9}{\ensuremath{\square}}$ onto the hyperplane orthogonal to the $i$th axis is $\scalebox{0.9}{\ensuremath{\square}}_i$.
We make $\scalebox{0.9}{\ensuremath{\square}}$ point to $\scalebox{0.9}{\ensuremath{\square}}_i$. \subparagraph{Query Algorithm.} We are given an orthogonal range $Q$ and a value $\varepsilon>0$ as a query. We first compute a constant-factor approximation $\textsc{apx}$ to the diameter of $P\cap Q$ in $O(\log^{d-1}n)$ time using the standard range tree. To do this, for each facet of $Q$, we find the point of $P\cap Q$ closest to the facet in $O(\log^{d-1} n)$ time. That is, we compute the smallest enclosing box $\ensuremath{\textsc{meb}}$ of $P\cap Q$. The diameter $\textsc{apx}$ of $\ensuremath{\textsc{meb}}$ is a constant-factor approximation to the diameter of $P\cap Q$. Assume that $\varepsilon\textsc{apx}$ is a standard length. Otherwise, we consider the largest standard length smaller than $\varepsilon\textsc{apx}$ instead of $\varepsilon\textsc{apx}$. Then we compute an $\varepsilon$-coreset of $P\cap Q$ of size $O(1/\varepsilon^{d-1})$ as follows. Consider the standard grid with side length $\varepsilon\textsc{apx}$ covering $\ensuremath{\textsc{meb}}$. Here, we do not compute this grid explicitly because there are $O(1/\varepsilon^d)$ cells in this grid. Instead, we compute the grid cells intersecting the boundary of $\ensuremath{\textsc{meb}}$. There are $O(1/\varepsilon^{d-1})$ such cells. For each such cell $\scalebox{0.9}{\ensuremath{\square}}$, we check whether or not $\scalebox{0.9}{\ensuremath{\square}}$ contains a point of $P\cap Q$ using the data structure in Section~\ref{sec:DS}. There are $O(1/\varepsilon^t)$ cells intersecting no $\ensuremath{<_t}$-face of $\ensuremath{\textsc{meb}}$ but intersecting a $t$-dimensional face of $\ensuremath{\textsc{meb}}$ for an integer $t$ with $0< t< d$. For the cells containing a corner of $\ensuremath{\textsc{meb}}$, we use the standard range tree on $P$ in $O(\log^{d-1} n)$ time.
In this way, we can check the emptiness for all cells intersecting the boundary of $\ensuremath{\textsc{meb}}$ in $O(\log^{d-1}n + \log n/\varepsilon^{d-1})$ time in total. Now we consider the grid cells fully contained in $\ensuremath{\textsc{meb}}$. Let $Q'$ be the union (a $d$-dimensional box) of all such grid cells, which can be computed in constant time by a simple calculation from the coordinates of $\ensuremath{\textsc{meb}}$. For each index $i$, consider the standard grid of side length $\varepsilon\textsc{apx}$ such that the union of the cells coincides with the projection of $Q'$ onto a hyperplane orthogonal to the $i$th axis. Let $\mathcal{G}_i$ be the set of all such grid cells. For each cell $\scalebox{0.9}{\ensuremath{\square}}_i$ of $\mathcal{G}_i$, we want to find the point $p\in P\cap Q$ with largest (and smallest) $i$th coordinate among the points whose projections are in $\scalebox{0.9}{\ensuremath{\square}}_i$. We add such a point $p$ to the $\varepsilon$-coreset as the representative of the grid cell of side length $\varepsilon\textsc{apx}$ containing it. To do this, observe that a point whose projection lies in $\scalebox{0.9}{\ensuremath{\square}}_i$ belongs to $Q'$ (and hence to $Q$) if and only if its $i$th coordinate is in $[q_i,q_i']$, where $[q_i,q_i']$ is the projection of $Q'$ onto the $i$th axis. Using the data structure introduced in this section, this can be computed in $O(\log n)$ time. Since there are $O(1/\varepsilon^{d-1})$ cells of $\mathcal{G}_i$ and $d$ is a constant, this can be done in $O(\log n/\varepsilon^{d-1})$ time in total. Therefore, we can compute an $\varepsilon$-coreset of $P\cap Q$ in $O(\log^{d-1} n + \log n/\varepsilon^{d-1})$ time in total. A $(1+\varepsilon)$-approximate diameter of $N$ points can be computed in $O(N+1/\varepsilon^{d-1.5})$ time~\cite{Chan-diameter-2002}. Since the size of the coreset is $O(1/\varepsilon^{d-1})$ in our case, the overall running time is $O(\log^{d-1} n + \log n/\varepsilon^{d-1})$. \subparagraph{Remark.} An approximate radius can be computed in a similar way.
The radius of a point set $P$ is defined as $\min_{c\in \mathbb{R}^d} \max_{p\in P} d(p,c)$. A constant-factor approximation to the diameter of $P$ is also a constant-factor approximation to the radius of $P$. The coreset we considered for the diameter is also a coreset for the radius. Therefore, we can compute an $\varepsilon$-coreset of $P\cap Q$ for the radius in $O(\log^{d-1} n + \log n/\varepsilon^{d-1})$ time. Since the radius of a point set can be computed in linear time for any fixed dimension~\cite{Megiddo-83}, we can compute a $(1+\varepsilon)$-approximate radius of $P\cap Q$ in $O(\log^{d-1} n + \log n/\varepsilon^{d-1})$ time in total. \begin{theorem} Given a set $P$ of $n$ points in $\mathbb{R}^d$, we can compute a $(1+\varepsilon)$-approximate diameter (or radius) of $P\cap Q$ in $O(\log^{d-1} n + \log n/\varepsilon^{d-1})$ time for a query consisting of an orthogonal range $Q$ and a value $\varepsilon>0$ using a data structure of size $O(n\log^{d-1} n)$. \end{theorem} \newpage
\section{Introduction} \label{introduction} A prerequisite for an analysis of stationary black holes is the understanding of properties of Killing vector fields in asymptotically flat\footnote{Recall that there exist various papers analyzing properties of Killing vector fields in asymptotically flat space--times \cite{AAM,AshtekarSchmidt,AX,BSchmidt}. These papers do not, however, seem to give answers to the questions asked here. Moreover, the asymptotic conditions here are considerably weaker than considered in those references.} space--times. Consider, for instance, an asymptotically flat partial Cauchy surface $\Sigma$ in a space--time $(M,g_{\mu\nu})$ with a Killing vector field $X^\mu$. In the case of a stationary black hole one is interested in situations where $X^\mu$ is timelike in the asymptotic regions. [Here we say that an asymptotically flat space--time $(M,g_{\mu\nu})$ with a Killing vector field $X^\mu$ is stationary if $X^\mu$ is timelike in the asymptotic regions of $M$.] A natural question to ask is, how does then $X^\mu$ behave in the asymptotic regions? Now it is easily seen from the equations \begin{equation} \label{I.0} \nabla_\mu\nabla_\nu X_\alpha = {R^\lambda}_{\mu\nu\alpha}X_\lambda \end{equation} (which are a well known consequence of the Killing equations) and from the asymptotic flatness conditions ({\em cf.\/} Propositions \ref{2PN.1} or \ref{PD.1}, Section \ref{KVSH}, for a precise description of the asymptotic conditions needed here) that there exist constants $A^\mu$ such that every Killing vector field $X^\mu$ which is timelike for $r\ge R$ for some $R$ satisfies \begin{eqnarray} & X^\mu - A^\mu \rightarrow_{r\to\infty} 0\, ,\label{I.1} & \\ & \eta_{\alpha\beta}A^\alpha A^\beta \le 0 \,. & \nonumber \end{eqnarray} Here $\eta_{\alpha\beta}$ is the Minkowski metric, and we use the signature $(-,+,+,+)$. 
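For the reader's convenience, here is a sketch of the standard derivation of \eq{I.0}; the overall sign in the Ricci identity below depends upon the curvature conventions and should be matched to those used in the rest of the text.

```latex
% Ricci identity applied to the covector $X_\alpha$ (conventions may
% introduce an overall sign):
\begin{equation*}
  \nabla_\mu\nabla_\nu X_\alpha - \nabla_\nu\nabla_\mu X_\alpha
  = {R_{\mu\nu\alpha}}^\lambda X_\lambda \,.
\end{equation*}
% Write the three cyclic permutations of $(\mu,\nu,\alpha)$; adding two
% of them, subtracting the third, and using the antisymmetry
% $\nabla_\mu X_\nu = -\nabla_\nu X_\mu$ implied by the Killing
% equations, all second derivative terms cancel in pairs except one:
\begin{equation*}
  2\nabla_\mu\nabla_\nu X_\alpha
  = \left({R_{\alpha\mu\nu}}^\lambda + {R_{\mu\nu\alpha}}^\lambda
    - {R_{\nu\alpha\mu}}^\lambda\right) X_\lambda
  = -2\,{R_{\nu\alpha\mu}}^\lambda X_\lambda \,,
\end{equation*}
% by the first Bianchi identity ${R_{[\mu\nu\alpha]}}^\lambda = 0$.  The
% pair symmetry of the Riemann tensor then converts the right-hand side
% into $2{R^\lambda}_{\mu\nu\alpha}X_\lambda$, which is \eq{I.0}.
```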
It should be emphasized that the requirement of timelikeness of $X^\mu$ for large $r$ does {\em not\/} exclude the possibility that $\eta_{\alpha\beta}A^\alpha A ^\beta $ vanishes. Indeed, an explicit example of a metric (not satisfying any reasonable field equations) with an everywhere timelike Killing vector which is asymptotically null can be found in \cite{ChWald} ({\em cf.\/} the Remark preceding Theorem A.1, Appendix A of \cite{ChWald}). (Let us point out that by a null vector we mean a non--zero vector of zero Lorentzian length.) Now in the uniqueness theory of black holes it is customary to assume that $A^\mu = \delta^\mu_0$ in an asymptotically flat coordinate system in which $\Sigma$ is given by an equation $x^0=0$. If the orbits of the Killing vector field $X^\mu$ are complete (at least in the asymptotic region) and if $A^\mu$ is timelike, then $\Sigma$ can be deformed (``boosted") to a new partial Cauchy surface for which $A^\mu = \delta^\mu_0$ (in an appropriately redefined asymptotically flat coordinate system). If, however, $X^\mu$ is asymptotically null (by which we mean that the vector $A^\mu$ appearing in (\ref{I.1}) is null), then no such deformation is possible and we are faced with the intriguing possibility of existence of stationary space--times in which the Killing vector cannot be reduced to a standard form where the metric is diagonal and the vector $A^\mu $ of (\ref{I.1}) equals $\delta^\mu_0$. As has been argued in \cite{Chnohair}, the existence of such Killing vector fields does not seem to be compatible with the rigidity part of the positive energy theorems. Here we make the arguments of \cite{Chnohair} precise and show the following (the reader is referred to Theorem \ref{TV.1} for a more precise formulation): \begin{Theorem} \label{T1new} Let $(M,g_{\mu\nu})$ be a space--time with a Killing vector field which is asymptotically null along an (appropriately regular) asymptotically flat spacelike hypersurface $\Sigma$. 
Then the ADM energy--momentum vector of $\Sigma$ vanishes. \end{Theorem} To say more about space--times considered in Theorem \ref{T1new} one can use the positive energy theorem. In Section \ref{pets} below we prove the following\footnote{Various variants of Theorem \ref{Tpet3} are of course well--known, {\em cf.\/} Section \ref{pets} for a detailed discussion.}: \begin{Theorem}[``Timelike future--pointing four--momentum theorem''] \label{Tpet3} Under the conditions of Theorems \ref{Tpet} and \ref{Tpet2} below, suppose that the initial data $(\Sigma,g_{ij},K_{ij})$ are {\em not} initial data for Minkowski space--time. Then the ADM energy--momentum vector $p^\mu$ of $\Sigma$ satisfies $$ p^0 > \sqrt{\sum_{i=1}^3(p^i)^2}. $$ \end{Theorem} Theorem \ref{T1new} can be used together with Theorem \ref{Tpet3} to obtain the following: \begin{Theorem} \label{T1} Let $(M,g_{\mu\nu})$ be a maximal globally hyperbolic space--time with a Cauchy surface satisfying the requirements of Theorems \ref{Tpet} and \ref{Tpet2}. Let $X^\mu$ be a Killing vector field on $M$ which is asymptotically null along an asymptotically flat Cauchy surface. Then $X^\mu$ is everywhere null and $(M,g_{\mu\nu})$ is the Minkowski space--time. \end{Theorem} Theorem \ref{T1} and the results of \cite{Chorbits} ({\em cf.\/} also Theorem 1.7 of \cite{Chnohair}) show that there is no loss of generality in assuming that $A^\mu = \delta^\mu_0$ in, say, electrovacuum, maximal globally hyperbolic space--times with an appropriately regular asymptotically flat Cauchy surface. Let us mention that the results here settle in the positive Conjecture 1.8 of \cite{Chnohair}. This paper is organized as follows. In Section \ref{KVSH} we discuss some general properties of Killing vector fields in asymptotically flat space--times. In order to minimize the number of assumptions we adopt a $3+1$ dimensional point of view; the various advantages of doing so are explained at the beginning of Section \ref{KVSH}.
The main results there are Propositions \ref{2PN.1} and \ref{PD.1}, which establish the asymptotic behaviour of Killing vectors along asymptotically flat spacelike surfaces. In that section we also introduce the notion of {\em Killing development}, which turns out to be very useful in our analysis. In Section \ref{translational} we study the relationship between the ADM four--momentum and the asymptotic behaviour of the Killing vector. The results there can be summarized as follows: If $X^\mu\to_{r\to\infty} A^\mu$ along an asymptotically flat spacelike surface $\Sigma$, then the ADM four--momentum is proportional to $A^\mu$. The proportionality constant is zero when $A^\mu$ is not timelike. Let us point out that similar results can be found in \cite{AAM}. However, in that reference the possibility of asymptotically null Killing vector fields is not taken into consideration. Also, in \cite{AAM} rather strong asymptotic conditions are imposed. In a sense most of the work here consists in showing that the asymptotic conditions needed to be able to obtain the desired conclusions can actually be derived from the decay conditions on the matter sources and from the hypothesis of existence of Killing vector fields. In Section \ref{pets} we prove a positive energy theorem with hypotheses and asymptotic conditions appropriate for our purposes. Theorems \ref{Tpet} and \ref{Tpet2} there are improvements of known results, {\em cf.\/} the beginning of Section \ref{pets} for a more detailed discussion. \section{Killing vectors and spacelike hypersurfaces} \label{KVSH} Consider a space--time $(M,g_{\mu\nu})$ with a Killing vector field $X^\mu$, \begin{equation} \label{K.1} \nabla_\mu X_\nu + \nabla_\nu X_\mu =0\ , \end{equation} where $\nabla_\mu$ is the covariant derivative operator of the metric $g_{\mu\nu}$.
Let $\Sigma$ be a spacelike hypersurface in $M$ and suppose that on $\Sigma$ the field of unit normals $n^\mu$ can be defined; this will be the case {\em e.g.\/} if $(M,g_{\mu\nu})$ is time--orientable. On $\Sigma$ define a scalar field $N$ and a vector field $Y^i$ by the equations \begin{eqnarray} \label{K.2} & N = - n_{\mu} X^\mu\ , & \\ \label{K.3} & g_{ij} Y^i dx^j = i^* (g_{\mu\nu}X^\mu dx^\nu)\ , & \end{eqnarray} where $i$ denotes the embedding of $\Sigma$ into $M$. We use the symbol $g_{ij}$ to denote the pull--back metric $i^* g_{\mu\nu}$. Eq.\ \eq{K.1} with $\mu=i$ and $\nu=j$ reads \begin{equation} \label{K.4} 2NK_{ij} + {\cal L}_Y g_{ij} = 0 \ , \end{equation} where $\cal L$ denotes the Lie derivative, and $K_{ij}$ is the extrinsic curvature tensor of $i(\Sigma)$ in $(M,g_{\mu\nu})$, defined as\footnote{$K_{ij}$ as defined here is $-K_{ij}$ as in \cite{YorkinSmarr}; similarly $J^i$ as defined here is $-J^i$ as defined there.\label{extrinsiccurvature}} the pull--back to $\Sigma$ of $\nabla_\mu n_\nu$. Set $$ \Sigma_{N>0} = \{p\in\Sigma : N\ne 0\} \ . $$ In a neighbourhood of $\Sigma_{N>0}$ we can introduce a coordinate system $(u,x^i)$ in which $X^\mu\partial_\mu = \partial _u$ and in which $\Sigma_{N>0}$ is given by the equation $u=0$. The metric on this neighbourhood takes the form \begin{equation} \label{K.0} g_{\mu\nu}dx^\mu dx^\nu = -N^2 du^2 + g_{ij}(dx^i+Y^idu)(dx^j+Y^jdu)\ , \end{equation} with some functions which do not depend upon $u$. Let $G_{\mu\nu}$ be the Einstein tensor of $g_{\mu\nu}$, that is, $G_{\mu\nu}= R_{\mu\nu}-\frac{g^{\alpha\beta}R_{\alpha\beta}}{2} g_{\mu\nu}$, where $R_{\mu\nu}$ is the Ricci tensor of $ g_{\mu\nu}$. 
We have the $3+1$ decomposition formulae ({\em cf.\ e.g.\/} \cite{YorkinSmarr}) \begin{eqnarray} & 2G_{\mu\nu} n^\mu n^\nu={}^3R+K^2-K^{ij}K_{ij}\ , \label{K.5} & \\ & G_{i\mu}n^\mu = D_j(K^{ij}-g^{kl}K_{kl}g^{ij}) \ , \label{K.6} & \\ & G_{ij}-\frac{1}{2}g^{k\ell}G_{k\ell}g_{ij} = {}^3R_{ij} + K K_{ij} - 2 K_{ik} K^k{}_j \qquad\qquad\qquad\qquad\qquad \nonumber & \\ & \qquad \qquad \qquad\qquad\qquad - N^{-1}({\cal L}_Y K_{ij} + D_i D_j N ) -\frac{1}{2}G_{\mu\nu} n^\mu n^\nu\,g_{ij} \ . \label{K.7} \end{eqnarray} Here $g^{ij}$ is the tensor inverse to $g_{ij}$, $K =g^{kl}K_{kl}$, ${}^3R_{ij}$ is the Ricci tensor of the metric $g_{ij}$, and ${}^3R=g^{ij}{}^3R_{ij}$. All the above is of course well--known, we have written it down in detail to fix the notation and to spell--out the conditions needed for the definition of the fields $N$ and $Y^i$. In particular we wish to emphasize that we did not need to assume completeness of the orbits of $X^\mu$, we did not need to assume that the orbits of $X^\mu$ intersect $\Sigma$ only once, etc. It is however the case that those last properties are needed for several arguments, {\em e.g.\/} in the uniqueness theory of black holes ({\em cf.\ e.g.\/} \cite{Chnohair}). By way of example, consider a maximal globally hyperbolic space--time $(M,g_{\mu\nu})$ with an asymptotically flat Cauchy surface with compact interior, with a metric satisfying the Einstein--Yang-Mills--Higgs equations, and with a Killing vector field $X^\mu$. While one expects the orbits of $X^\mu$ to be complete ({\em cf.\ e.g.\/} \cite{Chorbits} for an analysis in the vacuum case), no proof of such a result has been established so far. It is therefore of interest to establish various properties of space--times $(M,g_{\mu\nu})$ with Killing vectors with a minimal amount of global assumptions on $M$. 
As one is often interested in globally hyperbolic space--times it does not seem to be overly restrictive to assume the existence in $M$ of a spacelike hypersurface $\Sigma$ satisfying the hypotheses spelled out at the beginning of this section. The construction above yields then a scalar field $N$ and a vector field $Y^i$ defined on $\Sigma$, such that eqs.\ \eq{K.4}--\eq{K.7} hold. Consider then a set $(\Sigma,g_{ij},K_{ij},N,Y^i)$. We shall call the {\em Killing development of $(\Sigma,g_{ij},K_{ij},N,Y^i)$} the space--time $(\hat M, \hat g_{\mu\nu})$, where $$\hat M = {{\bf R}}\times \Sigma_{N>0}\ ,$$ and where $\hat g_{\mu\nu}$ is given by the equation \begin{eqnarray} \label{K.10} & \hat g_{\mu\nu}dx^\mu dx^\nu = -\hat N^2 du^2 + \hat g_{ij}(dx^i+\hat Y^idu)(dx^j+\hat Y^jdu)\ , & \\ \nonumber & \hat N(u,x^i) = N(x^i),\quad \hat g_{ij}(u,x^i) = g_{ij}(x^i),\quad \hat Y^i(u,x^i) = Y^i(x^i)\ . & \end{eqnarray} Here the $u$ coordinate runs over the ${\bf R}$ factor in $ {{\bf R}}\times \Sigma_{N>0}$. Clearly the vector field $X^\mu\partial_\mu=\partial_u$ is a Killing vector, so that \begin{equation} \label{K.10.0} \hat\nabla_\mu X_\nu + \hat\nabla_\nu X_\mu =0\ , \end{equation} where $\hat\nabla_\mu$ is the covariant derivative operator of the metric $\hat g_{\mu\nu}$. Note that \begin{equation} \label{K.11} X_i\Big|_{u=0} = Y_i,\qquad \hat N\Big|_{u=0} = N\ . \end{equation} Consider the extrinsic curvature tensor $\hat K_{ij}$ of the slices $u=\mbox{const}$. In general $\hat K_{ij}$ will have nothing to do with the tensor field $K_{ij}$. Suppose, however, that \eq{K.4} holds. Eq.\ \eq{K.10.0} with $i=\mu$ and $\nu=j$, eq.\ \eq{K.11} and \eq{K.4} give then, at $u=0$, \begin{equation} \label{K.12} \hat K_{ij}=K_{ij}\ . \end{equation} Since $\hat K_{ij}$ is $u$--independent it follows that this last relation holds throughout $\hat M$. One also notices that \eq{K.12} will hold if and only if \eq{K.4} holds. 
Consider, next, the Einstein tensor $\hat G_{\mu\nu}$ of the metric $\hat g_{\mu\nu}$. It is given by the hatted equivalent of eqs.\ \eq{K.5}--\eq{K.7}. Given the set $(\Sigma,g_{ij},K_{ij},N,Y^i)$ one can define on $\Sigma_{N>0}$ a scalar field $\rho$, a vector field $J^i$, and a tensor field $\tau_{ij} $ via the equations \begin{eqnarray} \label{K.13} & 2 \rho = {}^3R+K^2-K^{ij}K_{ij}\ , & \\ & J^i=D_j(K^{ij}-Kg^{ij}) \ , & \label{K.14} \\ & \tau_{ij}-\frac{1}{2}g^{k\ell}\tau_{k\ell}g_{ij} = {}^3R_{ij} + K K_{ij} - 2 K_{ik} K^k{}_j \qquad\qquad\qquad\qquad \nonumber & \\ & \qquad \qquad \qquad\qquad\qquad - N^{-1}({\cal L}_Y K_{ij} + D_i D_j N ) -\frac{\rho}{2}\,g_{ij} \ . & \label{K.15} \end{eqnarray} If eq.\ \eq{K.4} holds it follows from \eq{K.11}--\eq{K.12} that we will have \begin{equation} \label{K.16} \hat G_{\mu\nu} \hat n^\mu \hat n^\nu (u,x^\ell)= \rho (x^\ell) , \quad \hat G_{i\nu} \hat n^\nu (u,x^\ell) = J_i (x^\ell), \quad \hat G_{ij} (u,x^\ell) = \tau_{ij} (x^\ell)\ , \end{equation} where $\hat n_\mu$ is the unit normal to the slices $u =$ const. It is of interest to consider the case of covariantly constant Killing vector fields. In that case on a hypersurface $\Sigma$ as at the beginning of this section we will have \begin{eqnarray} \label{K.01} & N K_{ij}+ D_iY_j = 0 \ , & \\ \label{K.02} & K_{ij}Y^j+ D_i N = 0 \ . & \end{eqnarray} Let us show that if \eq{K.01}--\eq{K.02} hold, then the vector field $X^\mu\partial_\mu=\partial_u$ on the Killing development $(\hat M,\hat g_{\mu\nu})$ of $(\Sigma,g_{ij},K_{ij},N,Y^i)$ will be covariantly constant. To see that note that eqs.\ \eq{K.01}, \eq{K.11} and \eq{K.12} imply $$ \hat \nabla_i X_j = 0 $$ at $u$=0, hence throughout $\hat M$. Eq.\ \eq{K.02} similarly gives $$ \hat \nabla_i X_0 = 0\ . $$ As $X^\mu$ satisfies \eq{K.10.0} the equations $ \hat \nabla_\mu X_\nu = 0 $ readily follow. 
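The component equations \eq{K.01}--\eq{K.02} can be read off from a standard decomposition of $\nabla_\mu X_\nu$; the sketch below (in which $h^\mu{}_i$ denotes the tangential projection onto $\Sigma$, a notation used only here) follows the conventions for $K_{ij}$ fixed above.

```latex
% Decompose $X^\mu = N n^\mu + Y^\mu$ along $\Sigma$, with $Y^\mu$
% tangent to $\Sigma$.  Since $h^\nu{}_j n_\nu = 0$, the tangential
% projection of $\nabla_\mu X_\nu$ reads
\begin{equation*}
  h^\mu{}_i\, h^\nu{}_j\, \nabla_\mu X_\nu = N K_{ij} + D_i Y_j \,,
\end{equation*}
% while the mixed projection, using $n^\nu n_\nu = -1$ and
% $n^\nu \nabla_\mu Y_\nu = -Y^\nu \nabla_\mu n_\nu$, gives
\begin{equation*}
  h^\mu{}_i\, n^\nu\, \nabla_\mu X_\nu = -\left(D_i N + K_{ij} Y^j\right) \,.
\end{equation*}
% Setting $\nabla_\mu X_\nu = 0$ yields \eq{K.01}--\eq{K.02}; keeping
% only the symmetric tangential part reproduces \eq{K.4}.
```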
In our work, as well as in various other analyses, an essential role is played by the asymptotic behaviour of the Killing vector fields. Let us start with a result based on a four--dimensional formalism. For $R> 0$ let $M_R$ be defined by \begin{equation} M_R=\big\{(t,\vec x)\in {{\bf R}} \times \big({{\bf R}}^3\setminus B(R)\big)\big\}\,, \label{D.0} \end{equation} where $B(R)$ is a closed ball of radius $R$. Let $\alpha$ be a positive constant; the couple $(M_R,g_{\mu\nu})$ will be called an {\em $\alpha$--asymptotically flat four--end} if the Lorentzian metric $g$ defined on $M_R$ is twice differentiable\footnote{In this paper for several purposes we could assume weak differentiability of $g$ only, and replace the decay conditions \eq{D.1}--\eq{D.3} by some weighted Sobolev conditions. For the sake of simplicity we shall, however, not consider those weaker conditions.} and if there exists a constant $C$ such that the following inequalities hold in $M_R$: \begin{eqnarray} & |g_{\mu\nu}| + |g^{\mu\nu}| + r^\alpha |g_{\mu\nu}-\eta_{\mu\nu}| + r^{\alpha+1} |\partial_\sigma g_{\mu\nu}| + r^{\alpha+2} |\partial_\sigma\partial_\rho g_{\mu\nu}| \le C \,, & \label{D.1}\\ & g_{00}\le -C^{-1},\qquad g^{00}\le -C^{-1} \,,& \label{D.2}\\ & \forall X^i\in {{\bf R}}^3 \quad g_{ij}X^iX^j\ge C^{-1}\sum(X^i)^2 \,.& \label{D.3} \end{eqnarray} Here and throughout $\eta_{\mu\nu}$ is the Minkowski metric, while $r=\sqrt{ x^2+y^2+z^2}$. The proof of Proposition \ref{PD.1} that follows is based on the analysis of the equations \begin{equation} \label{I.0.new} \nabla_\mu\nabla_\nu X_\alpha = {R^\lambda}_{\mu\nu\alpha}X_\lambda \ , \end{equation} which are a well known consequence of the Killing equations. The arguments follow closely those of the proof of Proposition \ref{2PN.1} below, to be found in Appendix \ref{Aproof}, and will be omitted. 
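For completeness we recall the derivation of \eq{I.0.new}. In curvature conventions consistent with \eq{I.0.new} the Ricci identity takes the form $$ \nabla_\mu\nabla_\nu X_\alpha - \nabla_\nu\nabla_\mu X_\alpha = {R_{\mu\nu\alpha}}^\lambda X_\lambda\ ; $$ writing this identity for the index triples $(\mu,\nu,\alpha)$, $(\alpha,\mu,\nu)$ and $(\nu,\alpha,\mu)$, adding the first two and subtracting the third, the antisymmetry $\nabla_\mu X_\nu = -\nabla_\nu X_\mu$ reduces the left--hand side to $2\nabla_\mu\nabla_\nu X_\alpha$, while the first Bianchi identity together with the algebraic symmetries of the Riemann tensor reduces the right--hand side to $2{R^\lambda}_{\mu\nu\alpha}X_\lambda$.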
\begin{Proposition} \label{PD.1} Let $R>0$ and let $X^\mu$ be a Killing vector field defined on an $\alpha$--asymptotically flat end $M_R$, $0<\alpha<1$. Then there exist numbers $ \Lambda_{\mu\nu}= \Lambda_{[\mu\nu]} $ and a function $C(t)$ such that on every slice $t=\mbox{const}$ we have \begin{equation} |X^\mu-{\Lambda^\mu}_\nu x^\nu| + r |\partial_\sigma X^\mu - {\Lambda^\mu}_\sigma| + r^2 |\partial_\sigma \partial_\rho X^\mu | \le C(t) r^{1-\alpha}\,, \label{D.4} \end{equation} with ${\Lambda^\mu}_\nu\equiv \eta^{\mu\alpha}\Lambda_{\alpha\nu}$. If ${\Lambda_{\mu\nu}}=0$, then there exist numbers $A^\mu$ and a constant $C$ such that on $M_R$ we have \begin{equation} |X^\mu-A^\mu | + r |\partial_\sigma X^\mu | + r^2 |\partial_\sigma \partial_\rho X^\mu | \le C r^{-\alpha}\,. \label{D.5} \end{equation} If ${\Lambda_{\mu\nu}}=A^\mu=0$, then $X^\mu \equiv 0$. \end{Proposition} {\bf Remark:} Obvious analogs of the results of Proposition \ref{2PN.1} below with $k>2$ hold if higher asymptotic regularity of the metric is assumed in Proposition \ref{PD.1}. It also follows from Proposition \ref{2PN.1} below that if the constant $C$ in \eq{D.1}--\eq{D.3} is replaced by a function of $t$, then the conclusions of Proposition \ref{PD.1} will still hold with the constant $C$ in \eq{D.5} replaced by some function $C'(t)$. Our next result is the $3+1$ equivalent of Proposition \ref{PD.1}. The reader may wish to note the following: in the $4$--dimensional formulation the fall--off conditions on the metric ensure that the space--time Riemann tensor vanishes at an appropriate rate. In the $3+1$ formulation the fall--off conditions on $g_{ij}$ and $K_{ij}$ are not sufficient to guarantee that; they must be supplemented by a fall--off condition on $\rho$ and $\tau_{ij}$. Thus eq.\ \eq{K.100} below is a rather weak equivalent of the decay conditions $R_{\mu\nu\rho\sigma} = O(r^{-2-\alpha})$.
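The constants $\Lambda_{\mu\nu}$ and $A^\mu$ above have a familiar interpretation: in Minkowski space--time itself every Killing vector field is of the form $$ X^\mu = {\Lambda^\mu}_\nu x^\nu + A^\mu\ , \qquad \Lambda_{\mu\nu}= \Lambda_{[\mu\nu]}\ , $$ with $\Lambda_{\mu\nu}$ generating a boost--rotation and $A^\mu$ a translation, and eqs.\ \eq{D.4}--\eq{D.5} assert that a Killing vector field of an $\alpha$--asymptotically flat end approaches such a field at a rate controlled by the rate at which $g_{\mu\nu}$ approaches $\eta_{\mu\nu}$.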
The following is a straightforward consequence of eqs.\ \eq{K.13} and \eq{K.15} ({\em cf.\/} also \cite[Theorem 3.3 and Proposition 3.2]{ChrOM}). The notation $O_k$ is defined in Appendix \ref{definitions}. An outline of the proof is given in Appendix \ref{Aproof}. \begin{Proposition}\label{2PN.1} Let $R > 0$ and let $(g_{ij},K_{ij})$ be initial data on $\Sigma_R \equiv {\bf R}^3 \setminus B(R)$ satisfying \begin{equation} g_{ij} - \delta_{ij} = O_{k} (r^{-\alpha}), \qquad K_{ij} = O_{k-1} (r^{-1-\alpha}), \label{K.99} \end{equation} with some $ k>1$ and some $0<\alpha< 1$. Let $N$ be a $C^2$ scalar field and $Y^i$ a $C^2$ vector field on $\Sigma_R$ such that eqs.\ \eq{K.4}, \eq{K.13} and \eq{K.15} hold with some $\rho$ and $\tau_{ij}$ satisfying \begin{equation} |\rho| + |\tau_{ij}| \le C (1+r)^{-2-\alpha}\ . \label{K.100} \end{equation} Then there exist numbers $ \Lambda_{\mu\nu}= \Lambda_{[\mu\nu]} $ such that we have \begin{equation} Y^i-\Lambda_{ij} x^j = O_{k}(r^{1-\alpha}), \qquad N+\Lambda_{0i}x^i = O_{k}(r^{1-\alpha}) \ . \label{D.4.0} \end{equation} If ${\Lambda_{\mu\nu}}=0$, then there exist numbers $A^\mu$ such that we have \begin{equation} Y^i-A^i = O_{k}(r^{-\alpha}), \qquad N-A^0 = O_{k}(r^{-\alpha}) \ . \label{D.4.1} \end{equation} If ${\Lambda_{\mu\nu}}=A^\mu=0$, then $Y^i\equiv N \equiv 0$. \end{Proposition} Let us remark that if $\alpha=1$, then Proposition \ref{2PN.1} holds with the function $r^{1-\alpha}$ in the right--hand--side of eq.\ \eq{D.4.0} replaced by $1+|\log r|$; similarly in \eq{D.4.1} $r^{-\alpha}$ has to be replaced by $r^{-1}(1+|\log r|)$. A Killing vector field for which $\Lambda_{\mu\nu}=0$ will be called {\em asymptotically translational}. For further use let us mention the following: Consider $(g_{ij},K_{ij})$ such that \eq{K.99} holds, and suppose that $(N,Y^i)$ satisfy \eq{D.4.1} with some $A^0\ne 0$.
Suppose finally that \eq{K.4} is weakened to \begin{equation} \label{K.4.1} 2NK_{ij} + {\cal L}_Y g_{ij} = O_{k-1}(r^{-\beta}) \ , \end{equation} with some $\beta \ge 1$. In that case \eq{K.16} will be replaced by \begin{eqnarray} & \nonumber \hat G_{\mu\nu} \hat n^\mu \hat n^\nu - \rho = O_{k-1}(r^{-\min(1+\alpha,\beta)-\beta}), \quad \hat G_{i\nu} \hat n^\nu - J_i = O_{k-2}(r^{-\beta-1}), & \\ & \hat G_{ij} - \tau_{ij}= O_{k-2}(r^{-\beta-1})\ . & \label{K.16.1} \end{eqnarray} \section{ADM four--momentum in space--times with asymptotically translational Killing vectors} \label{translational} In this section we prove the following results: Consider an asymptotically flat space--time with an asymptotically translational Killing vector field $X^\mu$, that is, there exist constants $A^\mu$ such that $X^\mu\to_{r\to\infty}A^\mu$. Then: \begin{enumerate} \item If $A^\mu A_\mu \ge 0$, then the ADM four--momentum $p^\mu$ vanishes. \item If $A^\mu A_\mu < 0$, then $p^\mu$ is proportional to $A^\mu$. \end{enumerate} We shall establish those results in the $3$ dimensional framework discussed in Section \ref{KVSH}. Proposition \ref{2PN.1} in that section justifies our fall--off conditions on the fields $N$ and $Y^i$. The results here are actually slightly more general than stated above, in that we allow for fields which satisfy the relevant Killing equations up to terms which decay at an appropriate rate, {\em cf.\/} below for the precise conditions. \begin{Proposition}\label{PN.1} Let $R > 0$ and let $(g_{ij},K_{ij})$ be initial data on $\Sigma_R \equiv {\bf R}^3 \setminus B(R)$ satisfying \begin{eqnarray} \label{F0.1} & g_{ij} - \delta_{ij} = O_{2} (r^{-\alpha}), \qquad K_{ij} = O_{1} (r^{-1-\alpha}), \qquad \alpha > 1/2, & \\ & \label{F0.2} J^i = O (r^{-3-\epsilon}), \qquad \rho = O (r^{-3-\epsilon}), \qquad \epsilon > 0\ . 
& \end{eqnarray} Let $N$ be a $ C^1$ scalar field and $Y^i$ a $C^1$ vector field on $\Sigma_R$ such that \begin{equation} \label{F0.1.1} N-A^0 = O_{1} (r^{-\alpha}), \qquad Y^i\to _{r\to\infty} A^i \ , \end{equation} for some set of constants $(A^\mu) \not\equiv 0$. Suppose further that \begin{equation} 2 N K_{ij} + {\cal L}_Y g_{ij} = O_1 (r^{-2-\epsilon}). \label{(PN.1.0)} \end{equation} Let $p^\mu$ be the ADM four--momentum of\/ $\Sigma_R$. Then: \begin{enumerate} \item If $A^0 = 0$, then $p^0 = 0$. \item If $A^0 \ne 0$, then $p^\mu$ is proportional to $A^\mu$. \end{enumerate} \end{Proposition} \paragraph{{\bf Remark:}} The pointwise decay estimates assumed above can be weakened to weighted Sobolev spaces conditions. To avoid a tedious discussion of technicalities we shall, however, not consider such fields here. \paragraph{Proof:} Without loss of generality we may assume that both $\alpha$ and $\epsilon$ are strictly smaller than $1$. Eq.\ \eq{F0.1.1} and a simple analysis of eq.\ \eq{(PN.1.0)} ({\em cf.\ e.g.\/} the proof of Prop.\ \ref{2PN.1}, Appendix \ref{Aproof}) show that \begin{equation} \label{F0.1.11} Y^i-A^i = O_{2} (r^{-\alpha})\ . \end{equation} By our asymptotic conditions eq.\ \eq{(PN.1.0)} can be rewritten as \begin{equation} g_{ij,k} A^k + Y^i{}_{,j} + Y^j{}_{,i} = - 2 A^0 K_{ij} + O_1(r^{-2-\epsilon})\ , \label{(PN.1.1)} \end{equation} where we have redefined $\epsilon$ to be $\min (\epsilon,2\alpha -1) > 0$. The momentum--constraint equation reads \begin{equation} \partial_i K_{ij} = \partial_j K + O(r^{-3-\epsilon}), \label{(PN.1.2)} \end{equation} where $K=g^{ij}K_{ij}$. Taking the divergence of \eq{(PN.1.1)} and using \eq{(PN.1.2)} gives \begin{equation} g_{ij,kj} A^k + \Delta_\delta Y^i + \partial_i (Y^j{}_{,j}) = - 2A^0 K_{,i} + O(r^{-3-\epsilon}). \label{(PN.1.3)} \end{equation} Here $\Delta_\delta = \sum_i \partial_i \partial_i$.
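For the reader's convenience we note that the rewriting \eq{(PN.1.1)} of eq.\ \eq{(PN.1.0)} results from expanding the Lie derivative, $$ {\cal L}_Y g_{ij} = Y^k g_{ij,k} + g_{kj} Y^k{}_{,i} + g_{ik} Y^k{}_{,j} = A^k g_{ij,k} + Y^i{}_{,j} + Y^j{}_{,i} + O_1(r^{-1-2\alpha})\ , $$ together with $2NK_{ij} = 2A^0 K_{ij} + O_1(r^{-1-2\alpha})$, where \eq{F0.1}, \eq{F0.1.1} and \eq{F0.1.11} have been used; this is the origin of the redefinition $\epsilon\to\min(\epsilon,2\alpha-1)$.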
Contracting $i$ with $j$ in \eq{(PN.1.1)} allows us to eliminate $\partial_j Y^j$ in \eq{(PN.1.3)} in terms of $K_{,i}$ so that \eq{(PN.1.3)} leads to $$ \Delta_\delta Y^i = - A^0 K_{,i} - (g_{ij,j} - \frac{1}{2} g_{jj,i})_{,k} A^k + O(r^{-3-\epsilon}). $$ In what follows we shall freely make use of properties of harmonic functions on $\Sigma_R$ which were established in {\em e.g.\/} \cite{Meyers,Bartnikmass,ChAFT}. Increasing $R$ if necessary we may choose harmonic\footnote{There arises a slight difficulty here, related to the fact that the metric might not satisfy the conditions \eq{F0.1} in harmonic coordinates due to a loss of classical differentiability. In our proof we have ignored that issue, assuming {\em e.g.\/} that eq.\ \eq{(PN.1.1)} still holds in harmonic coordinates. The problem is easily cured by keeping track of weighted--Sobolev differentiability of various error terms which arise in our equations, making use of the estimates of \cite{Bartnikmass}. In doing that one can verify that the statement of our result is correct as stated. All the details of the proof as written here can be justified if a H\"older differentiability index $\lambda$ is added in eqs.\ \eq{F0.1}--\eq{F0.2}. In order to make the argument more transparent we have chosen to present our proof without the introduction of weighted Sobolev spaces.} coordinates on $\Sigma_R$, $$ \partial_i (g^{ij} \sqrt{\det g}) = 0, $$ with $$ g_{ij} - \delta_{ij} = O_1(r^{-\alpha}). $$ If $A^0 = 0$ define $\varphi$ to be identically zero, otherwise let $\varphi = O_1(r^{1-\alpha})$ be a solution of \begin{equation} \label{PN.1.3.0} \Delta_\delta \varphi = - A^0 K. \end{equation} Setting $Z^i = Y^i - A^i - \varphi^{,i}$ one is led to $$ \Delta_\delta Z^i = O(r^{-3-\epsilon}), $$ so that there exist numbers $\alpha^i \in {\bf R}$ such that $$ Z^i = \frac{\alpha^i}{r} + O_1(r^{-1-\epsilon}). 
$$ A contraction over $i$ and $j$ in \eq{(PN.1.1)} gives \begin{equation} Z^i{}_{,i} = - \frac{\alpha^i x^i}{r^3} + O(r^{-2-\epsilon}) = - \frac{1}{2} g_{ii,k} A^k + O(r^{-2-\epsilon}). \label{(PN.1.4)} \end{equation} The scalar constraint equation in harmonic coordinates gives \begin{equation} \Delta_\delta g_{ii} = O(r^{-3-\epsilon}) \Rightarrow g_{ii} = 3+ \frac{\beta}{r} + O_1(r^{-1-\epsilon}), \label{(PN.1.5)} \end{equation} for some constant $\beta$. Eq. \eq{(PN.1.5)} inserted in the formula for the ADM mass yields \begin{equation} \label{ADMmass} m = \displaystyle \frac{1}{16 \pi} \int_{S_\infty} (g_{ij,j} - g_{jj,i})\, dS_i = - \displaystyle \frac{1}{32\pi} \int g_{jj,i}\, dS_i = \displaystyle \frac{\beta}{8}. \end{equation} Inserting this in \eq{(PN.1.4)} one is led to $$ \alpha^i = - 4m A^i, $$ so that one finally obtains \begin{equation} Y^i = A^i\left(1 - \frac{4m}{r}\right) + \varphi_{,i} + O_1(r^{-1-\epsilon}). \label{(PN.1.6)} \end{equation} Suppose first that $A^0 = 0$. In this case we necessarily have $A^i \not\equiv 0$, and, rescaling $X^\mu \partial_\mu$ if necessary, we can choose coordinates so that $A^i = \delta^i_z$. Eq. \eq{(PN.1.1)} now reads \begin{equation} g_{AB,z} = O(r^{-2-\epsilon}), \label{(PN.1.7)} \end{equation} \begin{equation} (g_{zz} + 2Y^z)_{,z} = O(r^{-2-\epsilon}), \label{(PN.1.8)} \end{equation} \begin{equation} g_{zA,z} = \left(\frac{4m}{r}\right)_{,A} + O(r^{-2-\epsilon}). \label{(PN.1.9)} \end{equation} Let $\rho^2=x^2+y^2$. For $\rho\ge R$ eq.\ \eq{(PN.1.9)} gives \begin{eqnarray*} 0 & = & x^A\int_{-\infty}^\infty g_{zA,z}dz \\ & = & -4m \int_{-\infty}^\infty \frac{dz}{(1+z^2)^{3/2}} + \int_{-\infty}^\infty O(r^{-2-\epsilon})dz\ . \end{eqnarray*} To estimate the second integral it is convenient to consider separately the integrals $\int_{-\infty}^{-\rho}$, $\int_{-\rho}^\rho$ and $\int_{\rho}^\infty$. 
Elementary estimates then show that this integral is $O(\rho^{-\epsilon})$; passing to the limit $\rho\to\infty$ one subsequently obtains $m=0$, which establishes point 1. To establish point 2, suppose that $A^0 \neq 0$. After a rescaling of $X^\mu$ if necessary we can without loss of generality assume that $A^0 = 1$. Eq.\ \eq{(PN.1.1)} thus gives \begin{equation} \begin{array}{rcl} K_{ij} &=& - \displaystyle \frac{1}{2} \{ Y^i{}_{,j} + Y^j{}_{,i} + g_{ij,k} A^k\} + O_1(r^{-1-2\alpha}) \\[7pt] &=& - \displaystyle \frac{1}{2} \{ Z^i{}_{,j} + Z^j{}_{,i} + 2\varphi_{,ij} + g_{ij,k} A^k\} + O_1(r^{-1-2\alpha}). \end{array} \label{(PN.1.15)} \end{equation} Consider the ADM momentum\footnote{The unusual sign in eq.\ \eq{(PN.1.13)} is due to our convention on $K_{ij}$, {\em cf.\/} footnote \ref{extrinsiccurvature}.} $p_i$: \begin{equation} p_i = - \frac{1}{8\pi} \int_{S_\infty} (K_i{}^j - K \delta_i{}^j) dS_j. \label{(PN.1.13)} \end{equation} After insertion of \eq{(PN.1.15)} in \eq{(PN.1.13)} one finds \begin{equation} p_i = \frac{1}{16\pi} \int_{S_\infty} (Z^i{}_{,j} + Z^j{}_{,i} + A_j g_{ik,k}) dS_j. \label{(PN.1.16)} \end{equation} Here the $\varphi$ contribution drops out because of the following calculation: \begin{equation} \begin{array}{rcl} & \int_{S_\infty} (\Delta_\delta \varphi \delta_{ij} - \partial_i \partial_j \varphi)dS_j = &\\[7pt] & \int_{S_\infty} (\partial_k \varphi \delta_{ij} - \partial_j \varphi \delta_{ki})_{,k} dS_j =& 0. \end{array}\label{dropout} \end{equation} We have also used the identity $$ g_{ij,k} A^k = (g_{ij} A^k - g_{ik} A^j)_{,k} + g_{ik,k} A^j, $$ and integration by parts to rearrange the $g_{ij,k} A^k$ terms. Inserting \eq{(PN.1.6)} in \eq{(PN.1.16)} and using the harmonic coordinates condition one obtains $$ p_i = m \; A_i, $$ which is what had to be established. \hfill\ $\Box$ Point 1 of Proposition \ref{PN.1} strongly suggests that the ADM four--momentum must vanish when $A^\mu$ is spacelike.
We can show that this is indeed the case if some further asymptotic conditions are imposed on the fields under consideration. A similar result has been established previously in \cite{AAM} under rather stronger asymptotic and global conditions. \begin{Proposition}\label{PN.2} Under the hypotheses of Proposition \ref{PN.1}, suppose further that $N$ is $C^2$ and that \begin{equation} N \tau_{ij} = O (r^{-3-\epsilon}) \ . \label{F0.1.12} \end{equation} If \begin{equation} \label{F0.1.21} (A^0)^2 < \sum_i A^iA^i\ , \end{equation} then $p^\mu$ vanishes. \end{Proposition} \paragraph{Proof:} It follows from eqs.\ \eq{F0.1.1}, \eq{(PN.1.0)} and \eq{F0.1.12} that \begin{equation} \label{F0.1.111} Y^i- A^i = O_{2} (r^{-\alpha}), \quad N-A^0 = O_{2} (r^{-\alpha})\ . \end{equation} Consider first the case $A^0=0$; by Proposition \ref{PN.1} we have $p^0=0$. Let $\psi$ be any function on $\Sigma_R$ such that $\psi_{,z} = N $. Eq.\ \eq{F0.1.12} gives $$ (K_{ij} - \partial_i \partial_j \psi)_{,z} = O(r^{-3-\epsilon}), $$ so that by $z$--integration one obtains $$ K_{ij} - \partial_i \partial_j \psi = O(r^{-2-\epsilon}). $$ Inserting this in eq.\ \eq{(PN.1.13)} one obtains \begin{equation} \begin{array}{rcl} p_i &=& - \displaystyle \frac{1}{8\pi} \int_{S_\infty} (\Delta_\delta \psi \delta_{ij} - \partial_i \partial_j \psi)dS_j \\[7pt] &=& - \displaystyle \frac{1}{8\pi} \int_{S_\infty} (\partial_k \psi \delta_{ij} - \partial_j \psi \delta_{ki})_{,k} dS_j \\[7pt] &=& 0. \end{array}\label{dropout1} \end{equation} Consider, next, the case $A^0\ne 0$; let $(\hat M,\hat g_{\mu\nu})$ be the Killing development of $(\Sigma_R,g_{ij},K_{ij},N,Y^i)$ as constructed in Section \ref{KVSH}. As discussed in the paragraph preceding eq.\ \eq{K.16.1}, eqs.\ \eq{F0.2} and \eq{F0.1.12} imply that the Einstein tensor $\hat G_{\mu\nu}$ of $\hat g_{\mu\nu}$ will satisfy the fall--off condition \begin{equation} \label{hatTfalloff} \hat G_{\mu\nu}=O(r^{-3-\epsilon})\ .
\end{equation} Let $\Lambda^\mu{}_\nu$ be the matrix of a Lorentz transformation such that $\Lambda^0{}_\nu A^\nu = 0$. Let further $\Lambda\Sigma$ be the image under $\Lambda^\mu{}_\nu$ of $\Sigma_R\cap \hat M$ in $\hat M$. On $\Lambda \Sigma$ the Killing vector $X^\mu$ satisfies $X^0\to_{r\to\infty}0$. Eq.\ \eq{hatTfalloff} shows that we can apply the previous analysis to conclude that the ADM four--momentum of $\Lambda\Sigma$ vanishes. Moreover the decay condition \eq{hatTfalloff} ensures ({\em cf.\ e.g.\/} \cite{Chremark}) that $p^\mu$ transforms as a Lorentz vector under Lorentz transformations of hypersurfaces, so that the ADM four--momentum of $\Sigma_R$ vanishes as well. \hfill$ \Box$ It is of interest to consider Killing vector fields which are covariantly constant. As discussed in Section \ref{KVSH}, in such a case eqs.\ \eq{X.0.1}--\eq{X.0.2} below will hold (with $0$ on the right--hand--sides). We have the following result, which does not cover asymptotically null Killing vectors: \begin{Proposition}\label{PN.1.1} Under the hypotheses of Proposition \ref{PN.1}, assume moreover that $N$ is $C^2$, that eq.\ \eq{F0.1.12} holds and that \begin{eqnarray} \label{X.0.1} & N K_{ij}+ D_iY_j = O_1(r^{-2-\epsilon}) \ , & \\ \label{X.0.2} & K_{ij}Y^j+ D_i N = O_1(r^{-2-\epsilon}) \ , & \\ & A^\mu A_\mu \neq 0. &\nonumber \end{eqnarray} Then the ADM four--momentum $p^\mu$ vanishes. \end{Proposition} \paragraph{Proof:} Let $(\hat M,\hat g_{\mu\nu})$ be the Killing development of $(\Sigma_R,g_{ij},K_{ij},N,Y^i)$ as constructed in Section \ref{KVSH}. From what is said in that section ({\em cf.\/} the discussion following eqs.\ \eq{K.01}--\eq{K.02}) it follows that $X^\mu\partial_\mu = \partial_u$ will satisfy \begin{equation} \hat \nabla_\mu X_\nu = O_1(r^{-2-\epsilon})\ .
\label{X.0} \end{equation} As is well known \cite{Beig,AAM}, we have \begin{equation} p_\mu A^\mu = \lim_{r \rightarrow \infty} \frac{1}{8\pi} \int \hat\nabla^{[\mu} X^{\nu]} dS_{\mu\nu} \label{X.2} \end{equation} ({\em cf.\ e.g.\/} \cite{Chremark} for a proof under the present asymptotic conditions). By (\ref{X.0}) we have $p_\mu A^\mu = 0$. Now, by Prop.\ \ref{PN.1}, $p_\mu $ is proportional to $ A_\mu$, so if $A^\mu A_\mu \neq 0$ the result follows. \hfill\ $\Box$ The main result of this section addresses the case of asymptotically null Killing vectors. Unfortunately the proof below requires more asymptotic regularity than one would wish to have. It would be of some interest to find out whether or not the result below is sharp, in the sense that decay conditions on three derivatives of the metric and two derivatives of the extrinsic curvature are necessary. \begin{Theorem}\label{TV.1} Let $R>0$ and let $(g_{ij},K_{ij})$ be initial data on $\Sigma _R={\bf R}^3\backslash B(R)$ satisfying \begin{eqnarray} & g_{ij}-\delta_{ij}=O_{3+\lambda}(r^{-\alpha}), \qquad K_{ij}=O_{2+\lambda}(r^{-1-\alpha}), & \label{V.0} \\ & J^i = O _{1+\lambda}(r^{-3-\epsilon}), \qquad \rho = O_{1+\lambda} (r^{-3-\epsilon}), & \\ &\alpha> 1/2, \qquad \epsilon >0, \qquad 0<\lambda<1. & \nonumber \end{eqnarray} Let $N$ be a scalar field and $Y^i$ a vector field on $\Sigma_R$ such that $$ N\to_{r\to\infty}A^0,\quad Y^i\to_{r\to\infty}A^i, \qquad A^\mu A_\mu=0\ , $$ for some constants $A^\mu\not \equiv 0$.
Suppose further that \begin{eqnarray} & 2 N K_{ij} + {\cal L}_Y g_{ij} = O_{3+\lambda} (r^{-2-\epsilon})\ , & \label{K.1500} \\ & \tau_{ij} =O_{1+\lambda} (r^{-3-\epsilon})\ , & \label{K.1501} \end{eqnarray} where $\tau_{ij}$ is defined by the equation \begin{eqnarray} & N( \tau_{ij}-\frac{1}{2}g^{k\ell}\tau_{k\ell}g_{ij}) = N({}^3R_{ij} + K K_{ij} - 2 K_{ik} K^k{}_j) \qquad\qquad\qquad\qquad \nonumber & \\ & \qquad \qquad \qquad\qquad\qquad - {\cal L}_Y K_{ij} + D_i D_j N -\frac{\rho}{2}\,N\,g_{ij} \ . & \label{K.15010} \end{eqnarray} Then the ADM four--momentum of $\Sigma _R$ vanishes. \end{Theorem} {\bf Remark}: There is little doubt that the result is still true with $\lambda=0$. To prove that one would, however, need to extend the weighted Sobolev estimates of \cite{Bartnikmass} to the case $\mbox{\rm dim}M =2$, a task which lies beyond the scope of this paper. {\sc Proof:}\ Arguments similar to those in the proof of Proposition \ref{2PN.1}, Appendix \ref{Aproof}, show that $$ N-A^0 = O_{3+\lambda} (r^{-\alpha}), \quad Y^i-A^i = O_{3+\lambda} (r^{-\alpha})\ . $$ Rescaling $A^\mu$ if necessary we can choose the coordinate system so that $A^0=1, A^i=\delta _z^i$. Replacing $\epsilon$ by any number smaller than one if necessary we can assume that $\epsilon<1$ and $\epsilon\le 2\alpha-1$. Taking the trace of eq.\ (\ref{K.1501}) and using the scalar constraint equation we find $$ \Delta_\delta N+K_{,z}=O_{1+\lambda}(r^{-3-\epsilon}). $$ Here, as before, $\Delta_\delta =\partial_x^2+\partial_y^2+\partial_z^2$. Let $\varphi$ be as in eq.\ (\ref{PN.1.3.0}); we obtain $$ \Delta_\delta (N-\varphi _{,z})=O_{1+\lambda}(r^{-3-\epsilon}), $$ hence there exists a constant $D$ such that \begin{equation} N-\varphi _{,z}=1+\frac{D}r+O_{3+\lambda}(r^{-1-\epsilon}).
\label{Neq} \end{equation} In harmonic coordinates eqs.\ (\ref{(PN.1.0)}), (\ref{(PN.1.6)}), (\ref{K.1501}) and (\ref{Neq}) give \begin{eqnarray} & -\frac{1}2 \Delta_2 g_{ij}=\chi_{ij}+\Psi _{ij}, &\label{V.1} \\& \chi _{ij}=-2m \partial_z [\delta _z^j \partial_i\frac{1}r+\delta _z^i \partial_j\frac{1}r] +\partial_i\partial_j\frac{D}r, & \label{V.1.0} \\& \Psi _{ij}=O_{1+\lambda}(r^{-3-\epsilon}). & \label{V.2.0} \end{eqnarray} Here $\Delta_2=\partial_x^2+\partial_y^2$. In what follows the indices $A,B$, etc.\ take values in the set $\{1,2\}$. Consider eq.\ \eq{V.1} with $i=z$, $j=A$. We have \begin{equation} \Delta_2 g_{zA}=(8m-2D)\partial_A\partial_z\frac{1}r +O(r^{-3-\epsilon}).\label{V.2.1} \end{equation} It follows from \cite{ChAFT,Meyers} that for every fixed value of $z$ the functions $g_{zA}$ have the asymptotic expansion \begin{equation} g_{zA}=C_{AB}(z)\partial_B \ln\rho +O_{(1)}( \rho ^{-1-\epsilon}\ln \rho). \label{V.2.2} \end{equation} Here $\rho^2=x^2+y^2$, the coefficients $C_{AB}(z)$ depend on $z$ only, and we write \begin{equation} \label{bracketconvention} f=O_{(1)}( \rho ^{-\alpha } \ln^\beta \rho)\quad \mbox{\rm if } \quad |f|+\rho |\partial_A f|\leq C (1+\rho )^{-\alpha }(1+\ln (1+\rho))^\beta \end{equation} for some constant $C$ which may depend upon $z$. Let us define $S(\rho,a )$ to be a circle of radius $\rho $ centered at $x=y=0$ lying in the plane $z=a$. Eq.\ (\ref{V.2.2}) shows that for any fixed value of $z$ the limits $$ \lim_{\rho \to\infty}\int_{S(\rho ,z)} g_{zB}dx^C, \qquad \lim_{\rho \to\infty}\int_{S(\rho ,z)} x^D\partial_A g_{zB}dx^C $$ exist. It also follows from our asymptotic conditions on $g_{ij}$, eq.\ \eq{V.0}, that these limits are $z$--independent.
Set \begin{equation} \Omega=\lim_{\rho \to\infty}\int_{S(\rho ,z)} (x^A\partial_C g_{zA}-g_{zC})dx^C.\label{V.2.3} \end{equation} For $|z|>R$ by the Stokes theorem we have \begin{eqnarray*} & \Omega =\int_{{\bf R}^2}x^A \Delta_2 g_{Az}=(1)+(2) , & \\ & (1) = (8m-2D)\int_{{\bf R}^2} x^A \partial_z\partial_A\frac{1}r, & \\ & (2) = \int_{{\bf R}^2}x^A\Psi _{Az} \, , & \end{eqnarray*} with $\Psi _{Az}$ as in \eq{V.1}. The first integral is easily calculated and equals \begin{equation} 8\pi(4m-D)\, \mbox{sgn}\, z,\label{V.2.4} \end{equation} where $\mbox{sgn}\, z$ denotes the sign of $z$. To estimate the second integral it is convenient to split the region of integration into the sets $\rho \leq |z|$ and $\rho \geq |z|$. One then finds \begin{equation} |(2)|\leq C|z|^{-\epsilon} \mbox{ for }\ |z|> R\ , \label{V.2.5} \end{equation} with a constant $C$ which does {\em not} depend upon $z$. Equations \eq{V.2.4}--\eq{V.2.5} are consistent with $\partial\Omega/\partial z = 0$ if and only if \begin{equation} 4m=D\ . \label{V.2.6} \end{equation} Consider now eq.\ \eq{V.1} with $i=A,j=B$. Differentiating this equation with respect to $z$ one obtains \begin{equation} \Delta_2 \frac{\partial g_{AB}}{\partial z}= -2D \partial_A \partial_B \partial_z \frac{1}r + O(r^{-4-\epsilon}). \label{V.3} \end{equation} By hypothesis we have $\frac{\partial g_{ij}}{\partial z}=O(r^{-1-\epsilon})$, and the estimates of \cite{ChAFT} or \cite{Meyers} show that there exist functions $D_{ABCD}(z)$ such that for any fixed value of $z$ we have \begin{equation} \frac{\partial g_{AB}}{\partial z}= D_{ABCD} \partial_C\partial_D \ln \rho+ O_{(1)}( \rho ^{-2-\epsilon}\ln \rho).\label{V.4} \end{equation} Let us set $$ \Omega'=\lim_{\rho \to\infty} \int_{S(\rho ,z)} (2x^A x^B \partial_C\partial_z g_{AB}-x^A x^A\partial_C\partial_z g_{BB}+ 2 x_C \partial_z g_{AB}-4x^B\partial_z g_{CB}) dx^C. $$ Eq.\ \eq{V.4} shows that $\Omega'$ is well defined, while \eq{V.0} implies that $\Omega'$ is $z$--independent.
For $|z|>R$ we again use the Stokes theorem to obtain $$ \Omega'= \int_{{\bf R}^2} (2 x^A x^B \Delta_2 \partial_z g_{AB}-x^A x^A \Delta_2\partial_z g_{BB}). $$ A calculation as above leads to $$ \Omega'=16\pi D \ \mbox{sgn}\, z+O(|z|^{-\epsilon}),\quad |z|>R. $$ Hence $D=m=0$ ({\em cf.\/} eq.\ \eq{V.2.6}), which together with Proposition \ref{PN.1} establishes our claims. \hfill $\Box$ \section{A positive energy theorem} \label{pets} In this section we shall prove a ``future--pointing--timelike--or--vanishing--energy--momentum--theorem'', under conditions weaker than previously considered. The two main issues we wish to address are 1) the impossibility of a null ADM four--momentum and 2) a result which invokes hypotheses concerning only the fields $g_{ij}$ and $K_{ij}$. Let us start with an example of a metric with ``null ADM four--momentum''. Recall that in \cite{AS} Aichelburg and Sexl consider a sequence of Schwarzschild space--times with energy--momentum vector $(m,0,0,0)$. After applying a ``boost'' transformation to the Schwarzschild space--time one obtains an energy--momentum vector $(\gamma m,\gamma v m,0,0)$. Then one takes the limit $v\to 1$ keeping $\gamma m$ equal to a fixed constant $ p$. The resulting space--time has a distributional metric and it is not clear whether it is asymptotically flat. Nevertheless, it seems reasonable to assign to the Aichelburg--Sexl solutions a null energy--momentum vector $(p,p,0,0)$. So, in this sense, there exist space--times with a null energy--momentum vector. The Aichelburg--Sexl metrics are plane--fronted waves, and it is of interest to enquire whether any asymptotically flat plane--fronted wave metrics exist. Recall that the usual approach in defining asymptotic flatness is to introduce coordinate systems on $ ({{\bf R}}^3\setminus B(R))$. Consider thus a plane--fronted wave metric on ${{\bf R}}\times ({{\bf R}}^3\setminus B(R))$, \begin{equation} \label{pm} ds^2=-2du\,dz + \alpha\, dz^2+dx^2 + dy^2\ .
\end{equation} As is well known ({\em cf.\ e.g.\/} \cite{Brinkmann,Schimming}), the metric \eq{pm} is vacuum if and only if $\alpha=\alpha(x,y,z)$ with \begin{equation} \label{pm1} (\partial^2_x + \partial^2_y)\alpha = 0 \ . \end{equation} Let then $\alpha$ be any solution of \eq{pm1} such that $\alpha= 1$ for $|z|\ge R$, but $\alpha\not \equiv 1$. Such solutions are easily found, and for any finite $\ell$ we can choose $\alpha$ to satisfy $$ \forall\ 0\le k\le \ell \qquad |\partial_{A_1}\ldots \partial_{A_k} (\alpha-1)| \le C r^{-k-1}\ . $$ An example is given by the function \begin{equation} \label{alphafunction} \alpha = 1+ \phi(z)C^{A_1\ldots A_\ell}\partial_{A_1} \ldots \partial_{A_\ell} \ln \rho \ , \end{equation} where $\phi(z)$ is a smooth compactly supported function and $C^{A_1\ldots A_\ell}$ is a totally symmetric tensor with constant coefficients. We have the following: \begin{enumerate} \item If $\ell =1$ the metric \eq{pm} with $\alpha$ given by \eq{alphafunction} will not satisfy the fall--off requirements of the positive energy--theorem, {\em cf.\/} Theorem \ref{Tpet} below, because the $z$ derivatives of the metric do not vanish fast enough as $r$ tends to infinity. This fall--off of the metric is not known to be sufficient for a well--defined notion of ADM mass (compare \cite{Bartnikmass,Chremark,ChErice}). However one can calculate the ADM integral \eq{ADMmass} in the coordinate system $(x,y,z)$ as above and find that this integral vanishes. \item For all $\ell \ge 2$ the hypersurfaces $u=\mbox{\rm const}$ will have a well defined vanishing ADM mass. This does not, however, follow from our Theorem \ref{TV.1} unless\footnote{Strictly speaking we would need to have $\ell \ge 4$ to be able to apply Theorem \ref{TV.1} as is; {\em cf.\/}, however, the remark following that Theorem.
When we know {\em a priori} that the metric is a plane--fronted wave, we can use independent arguments to get rid of the H\"older differentiability index $\lambda$ in Theorem \ref{TV.1}; no details will be given.} $\ell \ge 3$. \end{enumerate} Nevertheless this example shows that non--trivial, vacuum, asymptotically flat plane--fronted waves exist (with $p^\mu = 0$), as long as no further global conditions are imposed. With those examples in mind, let us briefly recall what is known about the nonexistence of appropriately regular space--times with null energy--momentum. In \cite{Witten} an argument was given to support the expectation that the ADM momentum cannot be null for vacuum or electrovacuum space--times, the general case being left open. In \cite{AshtekarHorowitz} this case has been excluded under rather strong global hypotheses on the space--time and under stringent asymptotic conditions. In \cite{Yip} a proof was given assuming only hypotheses on the initial data. However, the proof there is rather more complicated than ours. Moreover the asymptotic conditions of \cite{Yip} are more restrictive than ours. We wish next to emphasize the following issue: The statement that the ADM mass $m$ is non-negative requires only the inequality $\rho\ge\sqrt{J_iJ^i}$, where $\rho$ and $J^i$ are quantities which can be defined purely in terms of the fields $g_{ij}$ and $K_{ij}$, {\em cf.\/} eqs.\ \eq{EP.1}--\eq{EP.2} below. Now the published Witten--type proofs that the vanishing of $m$ implies, loosely speaking, flatness of the resulting space--time, involve the full dominant energy condition ($T_{\mu\nu}X^\mu Y^\nu\ge 0$ for all timelike consistently time--oriented vectors $X^\mu$ and $Y^\nu$) ({\em cf.\ e.g.\/} \cite{ParkerTaubes}). Recall that the corresponding statement of Schoen and Yau \cite{SchoenYau} does not involve\footnote{Their proof, however, requires rather strong asymptotic conditions on the fields.
Moreover Schoen and Yau require the trace of the extrinsic curvature to fall off at least as $r^{-3}$. In general this can be justified by applying a ``logarithmic supertranslation'' in time to the initial data surface, and requires the supplementary hypothesis that the associated space--time is large enough. Finally to guarantee that all the required hypotheses hold on the deformed hypersurface one needs again the full dominant energy condition.} any supplementary field $T_{\mu\nu}$. Similarly both the proof in \cite{AshtekarHorowitz} and the proof in \cite{Yip} which exclude a null ADM energy--momentum assume the full dominant energy condition. A result involving only conditions on $g_{ij}$ and $K_{ij}$ seems to be much more satisfactory from a conceptual point of view, and it seems reasonable to expect that the desired conclusion could be obtained in the Witten--type setting without imposing conditions on fields other than $g_{ij}$ and $K_{ij}$. We show below that this is indeed the case. Before passing to the statement of our results, in addition to the papers already quoted let us mention the papers \cite{LV,Jezierski,KijJezier,KijJezier2,GHHP,ChoquetBruhatlesHouches,Bartnikmass,Reula,ReulaTod,MalecBizon,Penrosetwistorletter,PenroseSorkinWoolgar} where proofs or arguments relevant to the positive energy--theorem have been given. The review paper \cite{Horowitz} contains some further references. We have the following: \begin{Theorem}[(Rigid) positive energy theorem] \label{Tpet} Consider a data set $(\Sigma,g_{ij},K_{ij})$, with $\Sigma$ of the form $\Sigma =\Sigma _{{\rm int}} \bigcup^I_{i=1}\Sigma _i $, for some $I<\infty$. Here we assume that $\Sigma _{\rm int}$ is compact, and that each of the ends $\Sigma _i$ is diffeomorphic to ${{\bf R}}^3\setminus B(R_i)$ for some $R_i>0$, where $B(R_i)$ is a coordinate ball of radius $R_i$.
In each of the ends $\Sigma _i$ the fields $(g,K)$ are assumed to satisfy the following inequality \begin{equation} \label{falloff} | g _{ij}-\delta_{ij}|+|r\partial_k g _{ij}|+|rK_{ij}| \le C r^{-\alpha}\ , \end{equation} for some constants $C>0$ and $\alpha>1/2$, with $r=\sqrt{\sum (x^i)^2}$. Suppose moreover that the quantities $\rho$ and $J^k$, defined by \begin{eqnarray} & 2 \rho := {}^3R + K^2 - K^{ij}K_{ij}\,, & \label{EP.1}\\ & J^k := D_l (K^{kl} - K g^{kl}) \,, & \label{EP.2} \end{eqnarray} are well defined (perhaps in a distributional sense), and satisfy \begin{equation} \sqrt{ g _{ij}J^iJ^j}\le \rho \le C(1+r)^{-3-\epsilon}, \qquad \epsilon >0. \label{EP.3} \end{equation} Then the ADM four--momentum $(m,p^i)$ of any of the asymptotic ends of $\Sigma$ satisfies $m\ge \sqrt{p_ip^i}$. If $m=0$, then $\rho\equiv J^i \equiv 0$, and there exists an isometric embedding $i$ of $\Sigma$ into Minkowski space--time $({{\bf R}}^4,\eta_{\mu\nu})$ such that $K_{ij}$ represents the extrinsic curvature tensor of $i(\Sigma)$ in $({{\bf R}}^4,\eta_{\mu\nu})$. Moreover $i(\Sigma)$ is an asymptotically flat Cauchy surface in $({{\bf R}}^4,\eta_{\mu\nu})$. \end{Theorem} {\sc Proof:}\ Under the conditions here the ADM four--momentum of each of the asymptotic regions of $\Sigma$ is finite and well defined \cite{ChErice,Bartnikmass}. As discussed {\em e.g.\/} in \cite{Chremark}, under the boundary conditions here the Witten boundary integral correctly reproduces the ADM four--momentum. The arguments of any of the references \cite{Bartnikmass,Reula,Chremark} show that one can find solutions to the Witten equation which asymptote to a constant non--zero spinor in one of the asymptotic ends, and to zero in all the other ones. Witten's identity subsequently implies that the ADM momentum of each of the ends is non--spacelike. Suppose that in one of the ends $m$ vanishes. Then for each $\vec n \in {\bf R}^3$ there exists a spinor field $\lambda_M(\vec n)$ defined on $\Sigma$ satisfying eq.
(\ref{A.7}), such that the corresponding vector field $Y^j(\vec n)$ defined via eq. (\ref{A.8}), and the scalar field $N(\vec n)$ defined by eq. (\ref{A.9}), satisfy $$ Y^j(\vec n) \rightarrow_{r \rightarrow \infty} \vec n^j, \qquad N(\vec n) \rightarrow_{r \rightarrow \infty} |\vec n|_\delta. $$ Here $|\vec n|_\delta$ is the norm of $\vec n$ in the flat metric on ${\bf R}^3$. As shown in Appendix \ref{A}, the fields $N(\vec n)$ and $Y^i(\vec n)$ satisfy the linear system of equations ({\em cf.\/} eqs.\ (\ref{A.11}) and (\ref{A.11.0})) \begin{eqnarray} D_i Y_j + N K_{ij} &=& 0, \label{mo.1} \\ D_i N + K_{ij} Y^j &=& 0. \label{mo.2} \end{eqnarray} Consider the fields \begin{eqnarray} Y_j &=& Y_j((1/2,1/2,0)) - Y_j((-1/2,1/2,0)) - Y_j((1,0,0)), \label{mo.3} \\ N &=& N((1/2,1/2,0)) - N((-1/2,1/2,0)) - N((1,0,0)). \label{mo.4} \end{eqnarray} The fields $Y_j$ and $N$ satisfy eqs. (\ref{mo.1})--(\ref{mo.2}) by linearity of those equations. Moreover we have \begin{equation} Y^j \rightarrow_{r \rightarrow \infty} 0, \qquad N \rightarrow_{r \rightarrow \infty} 1. \label{mo.5} \end{equation} Let $(\widehat M,\widehat g_{\mu\nu})$ be the Killing development of $(\Sigma,g_{ij},K_{ij},N,Y_i)$. As discussed in Section \ref{KVSH}, it follows from eqs.\ (\ref{mo.1})--(\ref{mo.2}) that the vector field $X^\mu \partial_\mu = \partial_u$ is covariantly constant on $\widehat M$; eq.\ (\ref{mo.5}) then implies \begin{equation} \widehat g_{\mu\nu} X^\mu X^\nu = -1 \quad \Longrightarrow \quad N^2 - g_{ij} Y^i Y^j = 1. \label{mo.6} \end{equation} By Proposition 3.1 of \cite{ChWald} $\Sigma$ is a Cauchy surface for $(\widehat M, \widehat g_{\mu\nu})$.
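As a side remark (not needed for the argument), the second equality in (\ref{mo.6}) can also be verified directly on $\Sigma$, using only eqs.\ (\ref{mo.1})--(\ref{mo.2}):

```latex
\partial_i\left(N^2 - g_{jk}Y^jY^k\right)
  = 2N\,D_iN - 2Y^j D_i Y_j
  = -2N K_{ij}Y^j + 2N K_{ij}Y^j
  = 0\,,
```

so $N^2-g_{ij}Y^iY^j$ is constant on the (connected) hypersurface $\Sigma$, and by (\ref{mo.5}) its asymptotic value, hence its value everywhere, equals 1.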
We wish to show that $(\widehat M,\widehat g_{\mu\nu})$ is geodesically complete. Consider, then, an affinely parametrized geodesic $x^\mu(s)$, and let $p$ denote the constant of motion associated with the Killing vector $X^\mu$: \begin{equation} p = \widehat g_{\mu\nu} \dot x^\mu X^\nu = - \dot u + Y_i \dot x^i. \label{mo.7} \end{equation} Here eqs. (\ref{K.10}) and (\ref{mo.6}) have been taken into account; a dot over a quantity means differentiation with respect to $s$. Since $s$ is an affine parameter we have, with $\varepsilon = 0,\pm 1$, \begin{equation} - \dot u^2 + 2 Y_i \dot x^i \dot u + g_{ij} \dot x^i \dot x^j = \varepsilon. \label{mo.8} \end{equation} Eqs. (\ref{mo.7}) and (\ref{mo.8}) give \begin{equation} (g_{ij} + Y_i Y_j) \dot x^i \dot x^j = \varepsilon + p^2. \label{mo.9} \end{equation} Eqs.\ (\ref{mo.9}) and (\ref{mo.8}) imply that there exists a function $C(p)$ such that \begin{equation} |\dot x|_g + |\dot u| \leq C(p). \label{mo.10} \end{equation} Choose $p \in {\bf R}$ and consider the set $\Omega_p$ of maximally extended affinely parametrized geodesics with that value of $p$, with $x^\mu(0) \in \Sigma$. We can without loss of generality assume that $\alpha < 1$; an analysis of eqs.\ \eq{mo.1}--\eq{mo.2} along the lines of Appendix \ref{Aproof} shows that $\widehat g_{\mu\nu} - \eta _{\mu\nu}= O_1(r^{-\alpha})$. By asymptotic flatness of $\widehat g_{\mu\nu}$ (cf. Proposition \ref{2PN.1}) and the interior compactness condition on $\Sigma$ there exists $\delta > 0$ such that all geodesics in $\Omega_p$ are defined for $s \in (-\delta,\delta)$. Eq. (\ref{mo.10}) shows that in that affine time the value of $|u|$ can change at most by $C(p)\delta$; similarly for the value of $r(s) \equiv (x^2(s) + y^2(s) + z^2(s))^{1/2}$ in the asymptotic regions. One can now invoke the fact that the $u$-translations are isometries to conclude that all geodesics in $\Omega_p$ are complete, and the result follows. Let us show now that $(\widehat M,\widehat g)$ is flat.
Let $Y^i_{(k)} = Y^i(\vec e_k)$, where $Y^i(\vec n)$ is as at the beginning of this proof and where the $\vec e_k$'s, $k = 1,2,3$, form an orthonormal basis of ${\bf R}^3$. Let $N_{(k)} = N(\vec e_k)$ be the corresponding lapse functions. On $\widehat M$ define the fields $X^\mu_{(k)}$ by the equation \begin{equation} X^\mu_{(k)} \partial_\mu = \widehat N_{(k)} n^\mu \partial_\mu + \widehat Y^i_{(k)} \partial_i, \label{PET.1} \end{equation} $$ \widehat Y^i_{(k)}(u,x^i) = Y^i_{(k)} (x^i), \qquad \widehat N_{(k)}(u,x^i) = N_{(k)}(x^i). $$ Here $n^\mu$ is the field of unit normals to the slices $\{ u = \mbox{const}\}$. By eqs. (\ref{A.11}) and (\ref{A.11.0}) we have \begin{equation} \widehat \nabla_j X^\mu_{(k)} = 0 . \label{PET.2} \end{equation} By construction of $(\widehat M,\widehat g_{\mu\nu})$ it also holds that \begin{equation} \widehat \nabla_\mu X^\nu = \widehat \Gamma^\nu_{\mu\lambda} X^\lambda = \widehat \Gamma^\nu_{\mu u} = 0. \label{PET.3} \end{equation} As the components of $X^\mu_{(k)}$ are $u$-independent by (\ref{PET.1}), eq. (\ref{PET.3}) gives \begin{equation} \widehat \nabla_u X^\mu_{(k)} = \partial_u X^\mu_{(k)} + \widehat \Gamma^\mu_{\lambda u} X^\lambda_{(k)} = 0. \end{equation} Consequently \begin{equation} \widehat \nabla_\mu X^\nu_{(k)} = 0 . \label{PET.4} \end{equation} Differentiating (\ref{PET.4}) and using the Ricci identity one obtains \begin{equation} \widehat R_{\mu\nu\rho\sigma} X^\sigma_{(k)} = 0. \label{PET.5} \end{equation} As the vector fields $X^\sigma_{(k)}$ are everywhere null and linearly independent, standard algebra gives \begin{equation} \widehat R_{\mu\nu\rho\sigma} \equiv 0. \end{equation} Consider, next, the universal covering space $\widetilde \Sigma$ of $\Sigma$ with fields $(\widetilde g_{ij},\widetilde K_{ij},\widetilde Y_i,\widetilde N)$ obtained by pull-back. Let $(\bar M,\bar g_{\mu\nu})$ be the Killing development of $(\widetilde \Sigma, \widetilde g_{ij}, \widetilde K_{ij}, \widetilde Y_j,\widetilde N)$.
Clearly $\bar M$ is the universal covering space of $\widehat M$ with $\bar g_{\mu\nu}$ being the pull-back of $\widehat g_{\mu\nu}$. It is easily seen that $(\bar M,\bar g)$ inherits from $(\widehat M,\widehat g)$ the following properties: \begin{enumerate} \item $(\bar M,\bar g_{\mu\nu})$ is globally hyperbolic with Cauchy surface $\widetilde \Sigma$. \item $(\bar M,\bar g_{\mu\nu})$ is geodesically complete. \item $(\bar M,\bar g_{\mu\nu})$ is flat. \end{enumerate} As $\bar M$ is simply connected, it follows {\em e.g.\/} from \cite[Theorem 2.4.9]{Wolf} that $(\bar M,\bar g_{\mu\nu})$ is the Minkowski space-time $({\bf R}^4,\eta_{\mu\nu})$. As $\widetilde \Sigma$ is a Cauchy surface for $\bar M$, it is necessarily a graph over a spacelike plane $t = 0$ in $({{\bf R}}^4,\eta_{\mu\nu})$. In particular $\widetilde \Sigma$ has only one asymptotically flat end (compare also \cite[Lemma 2]{Chmass}). If $\Sigma$ were not simply connected, then $\widetilde \Sigma$ would have more than one asymptotic end. It follows that $\Sigma = \widetilde \Sigma$, $\widehat M = {{\bf R}}^4$ and our claims follow. \hfill\ $\Box$ To exclude the case of a null ADM four--momentum we need to assume some further asymptotic regularity conditions: \begin{Theorem} \label{Tpet2} Under the hypotheses of Theorem \ref{Tpet}, suppose moreover that in some of the asymptotic ends it holds that \begin{eqnarray} & g_{ij}-\delta_{ij}=O_{3+\lambda}(r^{-\alpha}), \qquad K_{ij}=O_{2+\lambda}(r^{-1-\alpha}), & \label{V.00} \\ & \rho=O_{1+\lambda}(r^{-3-\epsilon}), & \label{V.01} \end{eqnarray} with some $0<\lambda<1$. Then the ADM four--momentum of that end cannot be null. \end{Theorem} {\bf Remark:} It can be shown by rather different techniques that the result is still true with $\lambda=0$; we shall, however, not discuss that here. {\sc Proof:}\ Consider an asymptotic end $\Sigma_1$ in which eqs. \eq{V.00}--\eq{V.01} hold and which has a null ADM four--momentum $p^\mu$.
As discussed in the proof of Theorem \ref{Tpet} and in Appendix \ref{A}, the hypotheses of Proposition \ref{PA.1} and Corollary \ref{CA.1} are satisfied. We can thus apply Theorem \ref{TV.1} to conclude that the ADM four--momentum of the end under consideration vanishes, and the result follows from Theorem \ref{Tpet}. \hfill $\Box$ Let us close this section by proving Theorem \ref{T1}: by the arguments given above, $\rho$ and $J^i$ vanish on $\Sigma$. It follows from a result of Hawking and Ellis \cite[Chapter 4, Section 4.3]{HE} that $(M,g_{\mu\nu})$ must be flat. By uniqueness of maximal globally hyperbolic vacuum developments it follows that the Killing development constructed in the proof of Theorem \ref{Tpet2} ({\em cf.\/} Appendix \ref{A}) coincides with the maximal globally hyperbolic development of $(\Sigma, g_{ij}, K_{ij})$, and Theorem \ref{T1} follows. {\bf Acknowledgements} Part of the work on this paper was done when both authors were visiting the Max Planck Institut f\"ur Astrophysik in Garching; they are grateful to J\"urgen Ehlers and to the members of the Garching relativity group for hospitality. P.T.C.\ acknowledges the hospitality of the E. Schr\"odinger Institute and of the Vienna relativity group during part of the work on this paper.
\section{Approach Overview} \label{sec:approach} In typical management and orchestration frameworks~\cite{etsi-mano}, service providers need to submit exact descriptors of their network service structure, resource demands, and expected traffic from sources to a service management and orchestration system (Fig.~\ref{fig:orch-normal}). Based on the descriptors, placement, scaling, and routing decisions are made for each network service, independently from one another. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{orch2} \caption{Conventional network service life-cycle, from descriptors to running services} \label{fig:orch-normal} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{orch-jasper2} \caption{Network service management and orchestration using JASPER} \label{fig:orch-jasper} \end{figure} JASPER makes two major changes to this approach, one with respect to the description of the network services (Section~\ref{sec:templ-inst-over}) and another with respect to handling the scaling, placement, and routing decision processes (Section~\ref{sec:joint-single-step}). \subsection{Templates instead of over-specified descriptors} \label{sec:templ-inst-over} Because of the limited precision and flexibility of typical descriptors, we base our approach on so-called \emph{service templates}. Using service templates, service providers are required to specify neither the \emph{exact} resource demands (e.g., memory or CPU) of service components nor the required number of instances of each component. The service template describes the components of the network service and their required interconnections on an abstract level, without deployment details. Moreover, it gives the resource demands of the network service as a function of the load: \begin{itemize} \item The required computational capacity (e.g., CPU and memory) is described for each service component as a function of the input data rate. 
This can be used to calculate the network node capacity required to host the service component. \item The amount of traffic leaving each service component towards other components is specified as a function of the data rate that enters the component. This can be used to calculate the link capacity required to host the traffic flowing between any two interconnected instances. \end{itemize} In addition to the service templates, service providers may include the expected traffic originating from the sources of the network service in the request to embed a service template. As the traffic is constantly changing, the current traffic needs to be monitored and fed back to the template embedding process, to keep the network service in an optimal state. In this way, depending on the location and data rate of the sources of the network service, resource requirements are calculated dynamically, based on the given functions, eliminating the risk of over- or under-estimating the resource demands. Based on the functions describing the dependency of resource requirements and outgoing data rates on incoming data rates, it is also possible to reason about possible changes to the deployment and their impact, which is a prerequisite for effective optimization. The specific functions depend highly on the type and implementation of the service component and can be derived, for example, based on historical usage data or by automatic service profiling methods~\cite{profiling2016}. \subsection{Joint, single-step scaling, placement, and routing} \label{sec:joint-single-step} As shown in Fig.~\ref{fig:orch-normal}, in typical management and orchestration frameworks~\cite{etsi-mano}, based on the description of the network service and the state of the network's resources, the required number of instances of each service component is computed, and the instances are then placed and deployed, with the requested amounts of resources, in appropriate locations.
After path selection and instantiation of the network service, the running instances are monitored and re-scaled and re-placed based on pre-defined scaling rules if required. The number of required instances of each service component, the amount of resources allocated to each component, and the optimal paths selected for routing the network service flows are, however, highly interdependent decisions, which cannot be made optimally using such independent management and orchestration steps. Our approach, illustrated in Fig.~\ref{fig:orch-jasper}, changes the way the network service life-cycle is handled, by combining scaling, placement, and routing steps into a joint decision process. Depending on the location and data rate of the sources, \begin{itemize} \item each service template is scaled out into an overlay with the necessary number of instances required for each service component; \item each component instance is mapped to a network node and is allocated the required amount of resources on that node; \item the connections among component instances are mapped to flows along network links, carrying the data rate. \end{itemize} JASPER is an integrated approach in multiple dimensions: (i) scaling, placement, and routing decisions are made in a single optimization step; (ii) all services that are to be placed in the same substrate network are considered together; (iii) newly requested and already deployed services are optimized jointly. This way, a global optimum can be achieved. Modern management and orchestration systems~\cite{osmwebsite,sonatawebsite,unifywebsite} have a flexible design to incorporate innovative life-cycle management approaches. For example, SONATA's service platform~\cite{sonata-paper} has a customizable service life-cycle management plugin. The platform operator can easily modify the order of life-cycle management operations and customize different operations.
Using service-specific management programs it is also possible to specify when and how scaling, placement, and routing operations are performed for each network service, making the practical implementation of JASPER possible. \section{Complexity} \label{sec:compl} \begin{theorem} \label{thm:npc} For an instance of the Template Embedding problem as defined in Section~\ref{sec:problem}, deciding whether a solution with no violations exists is NP-complete in the strong sense\footnote{NP-complete in the strong sense means that the problem remains NP-complete even if the numbers appearing in it are constrained between polynomial bounds. Under the P$\ne$NP assumption, this precludes even the existence of a pseudo-polynomial algorithm -- i.e., an algorithm the runtime of which is polynomial if restricted to problem instances with polynomially bounded numbers.}. \end{theorem} \begin{proof} It is clear that the problem is in NP: a possible witness for the positive answer is a solution -- i.e., a set of overlays and their embedding into the substrate network -- with 0 violations. The witness has polynomial size and can be verified in polynomial time with respect to the input size. To establish NP-hardness, we show a reduction from the Set Covering problem (which is known to be NP-complete in the strong sense \cite{karp1972reducibility}) to the Template Embedding problem. An input of the Set Covering problem consists of a finite set $U$, a finite family $\cal W$ of subsets of $U$ such that their union is $U$, and a number $k\in\mathbb{N}$. The aim is to decide whether there is a subset ${\cal Z}\subseteq {\cal W}$ with cardinality at most $k$ such that the union of the sets in $\cal Z$ is still $U$. From this instance of Set Covering, an instance of the Template Embedding problem is created as follows.
The substrate network consists of nodes $V=\{s_1,\ldots,s_{|U|}\}\cup\{a_1,\ldots,a_{|{\cal W}|}\}\cup\{b\}$, where each $s_i$ represents an element of $U$ and each $a_j$ represents an element of $\cal W$. There is a link from $s_i$ to $a_j$ if and only if the element of $U$ represented by $s_i$ is a member of the set represented by $a_j$. Furthermore, there is a link from each $a_j$ to $b$. The capacities of the nodes are as follows: $\text{cap}_{\text{cpu}}(s_i)=\text{cap}_{\text{mem}}(s_i)=0$ for each $i\in[1,|U|]$, $\text{cap}_{\text{cpu}}(a_j)=0$ and $\text{cap}_{\text{mem}}(a_j)=1$ for each $j\in[1,|{\cal W}|]$, and $\text{cap}_{\text{cpu}}(b)=1$ and $\text{cap}_{\text{mem}}(b)=0$. For each link, its maximum data rate is 1 and its delay is 0. There is a single template consisting of a source component $S$ and two further components $A$ and $B$, and two arcs $(S,A)$ and $(A,B)$. Component $A$ has one input and one output, its resource consumption as a function of the input data rate $\lambda$ is given by $p_A(\lambda)=0$ and $m_A(\lambda)=1$; its output data rate is given by $r_A(\lambda)=1$. Component $B$ has one input and no output, its resource consumption as a function of the input data rate $\lambda$ is given by \begin{equation*} p_B(\lambda)= \begin{cases} 1, & \text{if }\lambda\le k, \\ 2, & \text{otherwise,} \end{cases} \end{equation*} and $m_B(\lambda)=0$. In each $s_i$, there is a source corresponding to an instance of $S$ with data rate $\lambda=1$.
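For concreteness, the construction just described can also be summarized programmatically. The following sketch is ours, not part of the paper or its C++ implementation; it merely builds the substrate nodes, links, and demand functions from a Set Covering instance $(U,{\cal W},k)$:

```python
def reduction_from_set_cover(U, W, k):
    """Build the Template Embedding instance used in the NP-hardness proof.

    U: list of elements; W: list of sets (each a subset of U); k: int.
    Returns the substrate nodes with their (cpu, mem) capacities, the
    directed links, and the demand functions of components A and B.
    """
    nodes = {}
    links = []
    # One node s_i per element of U: no CPU and no memory capacity.
    for i, _u in enumerate(U):
        nodes[f"s{i}"] = (0, 0)
    # One node a_j per set in W: CPU 0, memory 1.
    for j, subset in enumerate(W):
        nodes[f"a{j}"] = (0, 1)
        for i, u in enumerate(U):
            if u in subset:                # link s_i -> a_j iff u is in the set
                links.append((f"s{i}", f"a{j}"))
        links.append((f"a{j}", "b"))       # every a_j is linked to b
    nodes["b"] = (1, 0)                    # CPU 1, memory 0

    # Component A: memory 1 per instance, unit output rate regardless of input.
    p_A = lambda lam: 0
    m_A = lambda lam: 1
    r_A = lambda lam: 1
    # Component B: CPU demand jumps from 1 to 2 once the input exceeds k,
    # so b can host B without violation only if at most k instances of A
    # feed into it.
    p_B = lambda lam: 1 if lam <= k else 2
    m_B = lambda lam: 0
    return nodes, links, (p_A, m_A, r_A), (p_B, m_B)
```

The capacity choices mirror the text: instances of $A$ fit only on the $a_j$ nodes, the single instance of $B$ fits only on $b$, and the piecewise $p_B$ enforces the cardinality bound $k$.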
\begin{figure}[tb] \centering \subfigure[An instance of Set Covering ($k=2$) and a solution (thick lines)]{\hspace*{0.05\columnwidth}\includegraphics[width=0.35\columnwidth]{proof_setcover}\hspace*{0.05\columnwidth}} \hspace*{0.05\columnwidth} \subfigure[The generated instance of Template Embedding and the corresponding solution]{\includegraphics[width=0.45\columnwidth]{proof_te}} \caption{An example for the proof of Theorem \ref{thm:npc}} \label{fig:proof_npc} \end{figure} Suppose first that the original instance of Set Covering is solvable, i.e., there is a subset ${\cal Z}\subseteq {\cal W}$ with cardinality at most $k$ such that the union of the sets in $\cal Z$ is $U$. In this case, the generated instance of the Template Embedding problem can also be solved without any violations, as follows (see Fig.\ \ref{fig:proof_npc} for an example). Each $s_i$ must of course host an instance of $S$. In each $a_j$ corresponding to an element of $\cal Z$, an instance of $A$ is created. Since the union of the sets in $\cal Z$ is $U$, each $s_i$ has an outgoing link to at least one $a_j$ hosting an instance of $A$, which can be selected as the target of the traffic leaving the source in $s_i$ through the link $(s_i,a_j)$. Further, a single instance of $B$ is created in node $b$ and each instance of $A$ is connected to $B$ through the $(a_j,b)$ link. Since the number of instances of $A$ is at most $k$, each emitting traffic with data rate 1, the CPU requirement of the instance of $B$ is 1, so that it fits on $b$, and hence we obtain a solution to the Template Embedding problem with no violation. Now assume that the generated instance of the Template Embedding problem is solvable without violations. Then, we can construct a solution of the original instance of Set Covering, as we show next. In a solution of the generated instance of the Template Embedding problem, each $s_i$ must host an instance of $S$ and there is no other instance of $S$.
Instances of $A$ can only be hosted by $a_j$ nodes because of the memory requirement, and an instance of $B$ can only be hosted in $b$ because of the CPU requirement. We define $\cal Z$ to contain those elements of $\cal W$ for which the corresponding node $a_j$ hosts an instance of $A$. Since each source generates traffic that must be consumed by an instance of $A$ and there is a path (actually, a link) from $s_i$ to $a_j$ only if the set corresponding to $a_j$ contains the element corresponding to $s_i$, it follows that the sets in $\cal Z$ cover all elements of $U$. Moreover, since the instance of $B$ must fit on $b$ and each instance of $A$ generates traffic with data rate 1, it follows that the number of instances of $A$ is at most $k$ and hence $|{\cal Z}|\le k$; thus $\cal Z$ is a solution of the original Set Covering problem. Since all numbers in the generated instance of the Template Embedding problem are constants, this reduction shows that the Template Embedding problem is indeed NP-hard in the strong sense. \end{proof} As a consequence, we can expect neither a polynomial (or even pseudo-polynomial) algorithm for solving the problem exactly nor a fully polynomial-time approximation scheme, under standard assumptions of complexity theory. \section{Conclusions} \label{sec:concl} We have presented JASPER, a fully automatic approach to scale, place, and route multiple virtual network services on a common substrate network. JASPER can be used for both the initial allocation of newly requested services and the adaptation of existing services to changes in the demand. Besides formally defining the problem and proving its NP-hardness, we developed two algorithms for it, an MILP-based one and a custom constructive heuristic. Empirical tests have shown how our approach finds a balance between conflicting requirements and ensures that the allocated capacity quickly follows changes in the demand.
The MILP-based algorithm gives optimal or near-optimal results for relatively small substrate network graphs, making it suitable for, e.g., calculations on top of a geographically distributed network where each node represents a data center. The heuristic remains very fast for even the largest networks that were tested. Overall, the tests gave evidence of the feasibility of our approach, which makes it possible (i) for service developers to specify services at a high level of abstraction and (ii) for providers to quickly re-optimize the system state after changes. Promising future research directions include, besides further algorithmic enhancements to the presented algorithms and the development of new algorithms, the consideration of queuing incoming requests in the service components and the investigation of the effects of cyclic service templates. \section{Evaluation} \label{sec:eval} We implemented the presented algorithms in the form of a C++ program. For solving the MILP, Gurobi Optimizer 7.0.1\footnote{\url{http://www.gurobi.com/}} was used. For substrate networks, we used benchmarks for the Virtual Network Mapping Problem\footnote{\url{https://www.ac.tuwien.ac.at/files/resources/instances/vnmp}} from Inf\"uhr and Raidl \cite{infuhr2013solving}. As service templates, we used examples from IETF's Service Function Chaining Use Cases \cite{draft-liu-sfc-use-cases-08}.
\subsection{An example} \begin{figure}[tb] \centering \includegraphics[height=55mm]{graph} \caption{Example substrate network} \label{fig:graph} \end{figure} \begin{figure*}[tb] \centering \subfigure[\label{fig:state2}Initial embedding]{\includegraphics[height=54mm]{state_milp_2}} \hfil \subfigure[\label{fig:state3}Result of increased source data rate]{\includegraphics[height=54mm]{state_milp_3}} \hfil \subfigure[\label{fig:state4}Result of the emergence of a second source]{\includegraphics[height=54mm]{state_milp_4}} \caption{Illustrative example (memory values not shown for better readability)} \end{figure*} First, we illustrate our approach on a small substrate network of 10 nodes and 20 arcs (see Fig.~\ref{fig:graph}) in which the CPU and memory capacities of each node are both 100. In this network, a service consisting of a source (S), a firewall (FW), a deep packet inspection (DPI) component, an anti-virus (AV) component, and a parental control (PC) component is deployed. Initially, there is a single source in node 1 with a moderate data rate. As a result, our algorithm deploys all components of the service in node~1 (see Fig.~\ref{fig:state2}). Subsequently, the data rate of the source increases. Consequently, the resource demand of the processing components of the service increases so that they do not fit onto node~1 anymore. Our algorithm automatically re-scales the service by duplicating the DPI, AV, and PC components and automatically places the newly created instances on a nearby node, namely node~3 (see Fig.~\ref{fig:state3}). Later on, a second source emerges for the same service on node~9. The algorithm automatically decides to create new processing component instances on node~9 to process as much as possible of the traffic of the new source locally.
The excess traffic from the new FW instance that cannot be processed locally due to capacity constraints is routed to the existing DPI, AV, and PC instances on node~3 because node~3 still has sufficient free capacity (see Fig.~\ref{fig:state4}). This small example already shows the difficult trade-offs that template embedding involves. Next, we show that our approach is capable of handling much more complex scenarios as well. \subsection{Comparison of the algorithms} \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{scenario_streaming_08_with_heur} \caption{Temporal development of the demand and the allocated capacity in a complex scenario} \label{fig:scenario} \end{figure} \begin{figure}[tb] \includegraphics[width=0.9\columnwidth]{latency} \caption{Total latency over all created paths for the embedded template} \label{fig:latency} \end{figure} We consider a substrate network with 20\,nodes and 44\,arcs, in which multiple services are deployed. Each service is a virtual content delivery network for video streaming, consisting of a streaming server, a DPI, a video optimizer, and a cache. The number of concurrently active services varies from 0 to 4, and the number of sources varies from 0 to 20. Fig.~\ref{fig:scenario} shows how the total data rate of the sources (as a metric of the demand) and the total CPU size of the created instances (as a metric of the allocated processing capacity) change through re-optimization after each event. An event is the emergence or disappearance of a service, the emergence or disappearance of a source, or the change of the data rate of a source. As can be seen, the allocated capacity using both the heuristic and the MILP algorithms follows the demand very closely, meaning that our algorithms are successful in scaling the service in both directions to quickly react to changes in the demand. Regarding the total data rate and the total latency of the overlay edges, the MILP algorithm performs better than the heuristic algorithm.
For example, Fig.~\ref{fig:latency} shows the total latency over all paths created for the template in this scenario\footnote{In Fig.~\ref{fig:latency}, in the high-load area between events 20 and 50, some problem instances are too complex to be solved within the 60-second time limit we have set for the optimizer. This results in solutions with zero latency, as no paths are created.}. The reason for this difference is that in the MILP algorithm, the optimal locations for all required instances can be determined at the same time. This results in shorter distances between the source and the instances. The heuristic algorithm, however, needs to create instances one by one, resulting in larger data rates traveling over larger distances in the substrate network. In this scenario, to handle the peak demand, a total of 127 instances are created using the MILP algorithm, while the heuristic algorithm creates 261 instances. \subsection{Scalability} \begin{figure}[tb] \centering \subfigure[\label{fig:exec_time_milp}Execution time of the MILP algorithm]{\includegraphics[width=0.9\columnwidth]{exec_time_milp}} \subfigure[\label{fig:gap_milp}Optimality gap of the MILP algorithm]{\includegraphics[width=0.9\columnwidth]{gap_milp}} \subfigure[\label{fig:exec_time_heur}Execution time of the heuristic algorithm]{\includegraphics[width=0.9\columnwidth]{exec_time_heur}} \caption{Scalability of the presented algorithms} \end{figure} Since the template embedding problem is NP-hard, it is foreseeable that the scalability of the MILP solver will be limited. In order to test this, we gradually increase the source data rate of the service from our first experiment, leading to an increasing number of instances; moreover, we also consider substrate networks of increasing size.
In each case, the MILP solver is run with a time limit of 60 seconds, meaning that the solution process stops at (roughly) 60 seconds with the best solution and the best lower bound that the solver has found by that time. The measurements were performed on a machine with Intel Core i5-4210U CPU @ 1.70GHz and 8GB RAM. Fig.~\ref{fig:exec_time_milp} shows the execution time of the MILP algorithm for different data rates and substrate network sizes, while Fig.~\ref{fig:gap_milp} shows the corresponding gap between the found solution and the lower bound. As can be seen, for a small network with 10 nodes and 20 arcs, the algorithm computes optimal results for the lower half of source data rate values, and even for larger source data rates, the optimality gap is quite low (around 20\,\%), meaning that the results are almost optimal. However, for a bigger substrate network with 20 nodes and 44 arcs, the solver reaches the time limit for much smaller source data rates, and the optimality gap is also much larger. For even bigger substrate networks, the performance of the algorithm further deteriorates, up to the point where it cannot be run anymore because of memory problems. The large sensitivity to the size of the substrate network is not surprising, given that the number of variables of the MILP is cubic in the size of the substrate network. In contrast, as shown in Fig.~\ref{fig:exec_time_heur}, the execution time of the heuristic algorithm remains very low even for the largest substrate networks: for 1000 nodes and 2530 arcs, the execution time is still below 20 milliseconds, rendering the heuristic practical for real-world problem sizes as well. \section{Heuristic approach} \label{sec:heur} Now we present a heuristic algorithm that is not guaranteed to find an optimal solution but is much faster than the mixed integer programming approach. Moreover, it has the advantage that it does not require the functions $p_j$, $m_j$, and $r_j$ to be linear.
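To make this concrete, the per-component functions $p_j$, $m_j$, and $r_j$ can be represented as arbitrary callables, including piecewise ones like the $p_B$ used in the NP-hardness proof. The following sketch is purely illustrative; the names and coefficients are assumptions of ours, not taken from the paper's C++ implementation:

```python
# Illustrative demand functions of a hypothetical component j.
# p_j / m_j map the input data rate to CPU / memory demand;
# r_j maps it to the outgoing data rate. None of them has to be linear.

def p_j(lam, k=10.0):
    """CPU demand: piecewise constant, jumping once the load exceeds k."""
    return 1.0 if lam <= k else 2.0

def m_j(lam):
    """Memory demand: constant per instance."""
    return 1.0

def r_j(lam):
    """Outgoing data rate: here, 5 % of the incoming traffic is filtered out."""
    return 0.95 * lam

def node_demand(lam):
    """Resources a substrate node must provide to host one instance."""
    return p_j(lam), m_j(lam)

def link_demand(lam):
    """Capacity needed on the outgoing links of the instance."""
    return r_j(lam)
```

Since the heuristic only ever evaluates these functions at concrete data rates, any such callable works, whereas the MILP formulation needs them to be linear.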
The heuristic constructs the new solution from the existing one by means of a series of small local changes.\footnote{The placement of a new service is likewise performed as a series of small local changes, creating component instances one by one.} While doing so, it has to be ensured that (i) the instantiation of source components is in line with the given data sources, (ii) the data flows produced by each instance are routed to appropriate instances, and (iii) capacity constraints are satisfied as much as possible. This can be achieved by iterating through the instances of each overlay once in a topological order, possibly creating new instances on the fly if necessary. Note that this may indeed be necessary, for example, if a new data source has appeared or the output data rate of a data source has increased. In each step, the algorithm aims at economical use of resources, e.g., by only creating new instances if necessary, deleting unneeded instances, or preferring short paths. The heuristic is shown in Algorithm~\ref{alg:heur_main}. It starts by checking that each service has a corresponding overlay and each overlay corresponds to a service (lines 1--5). If a new service has been started or an existing service has been stopped since the last invocation of the algorithm, the corresponding overlay is created or removed at this point. Next, the mapping of the sources and source components is checked and updated if necessary (lines 6--11): if a new source has emerged, an instance of the corresponding source component is created; if the data rate of a source has changed, then the output data rate of the corresponding source component instance is updated; if a source has disappeared, then the corresponding source component instance is removed.
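The topological-order iteration described above can be computed with a standard algorithm; the following sketch uses Kahn's algorithm on an overlay given as an edge list (the data structures here are illustrative and hypothetical, not those of our implementation).

```python
from collections import defaultdict, deque

def topological_order(instances, edges):
    """Return the overlay instances ordered so that every instance comes
    after all instances that send flows to it (Kahn's algorithm)."""
    indegree = {i: 0 for i in instances}
    successors = defaultdict(list)
    for u, v in edges:
        successors[u].append(v)
        indegree[v] += 1
    queue = deque(i for i in instances if indegree[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in successors[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    return order

# Example overlay: source instance S1 feeds A1, which feeds B1.
order = topological_order(["B1", "A1", "S1"], [("S1", "A1"), ("A1", "B1")])
```

Processing instances in this order guarantees that, when an instance is handled, all of its incoming flows have already been updated.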
Finally, to propagate the changes of the sources to the processing instances, we need to iterate over all instances and ensure that the new output data rates, which are determined by the new input data rates, are discharged correctly by outgoing flows (lines 12--24). For this purpose, it is important to consider the instances in a topological order (according to the overlay) so that when an instance is dealt with, its incoming flows have already been updated. If a change in the outgoing flows is necessary, then the \textsc{increase} or \textsc{decrease} procedures are called. \begin{algorithm}[htb] \caption{Main procedure of the heuristic algorithm} \label{alg:heur_main} \begin{algorithmic}[1] \small \If{$\exists G_{\text{OL}}(T)$ with $T\not\in {\cal T}$} \State remove $G_{\text{OL}}(T)$ \EndIf \ForAll{$T\in {\cal T}$} \If{$\nexists G_{\text{OL}}(T)$} \State create empty overlay $G_{\text{OL}}(T)$ \EndIf \ForAll{$(v,j,\lambda)\in S(T)$} \If{$\nexists i\in I_{\text{OL}}$ with $c(i)=j$ and $P_T^{(I)}(i)=v$} \State create $i\in I_{\text{OL}}$ with $c(i)=j$ and $P_T^{(I)}(i)=v$ \EndIf \State set output data rate of $i$ to $\lambda$ \EndFor \If{$\exists i\in I_{\text{OL}}$, where $c(i)$ is a source component but $\nexists (P_T^{(I)}(i),c(i),\lambda)\in S(T)$ for any $\lambda$} \State remove $i$ \EndIf \ForAll{$i\in I_{\text{OL}}$ in topological order} \If{all input data rates of $i$ are 0} \State remove $i$ and go to next iteration \EndIf \State compute output data rates of $i$ \ForAll{output $k$ of $i$} \State $\Phi$: set of flows currently leaving output $k$ \State $\lambda$: sum of the data rates of the flows in $\Phi$ \State $\lambda'$: new data rate on output $k$ \If{$\lambda'<\lambda$} \State $\cal E$: set of edges leaving output $k$ \State \Call{decrease}{$\cal E$,$\lambda-\lambda'$} \ElsIf{$\lambda'>\lambda$} \State \Call{increase}{$i$,$k$,$\Phi$,$\lambda'-\lambda$} \EndIf \EndFor \EndFor \EndFor \end{algorithmic} \end{algorithm} The auxiliary subroutines are 
detailed in Algorithm~\ref{alg:aux}. \textsc{decrease} first removes as many edges as possible (lines 3--6); when a further decrease is necessary but no more edges can be removed, it reduces the flow on the next edge by an appropriate factor to achieve exactly the required reduction (lines 7--9). \textsc{increase} first checks if new instances need to be created to be consistent with the template (lines 12--16), then tries to increase the existing flows (lines 17--19). If this is not sufficient to achieve the necessary increase, it creates further instances and flows (lines 20--23). \begin{algorithm}[htb] \caption{Auxiliary methods of the heuristic} \label{alg:aux} \begin{algorithmic}[1] \small \State\Comment{Decrease the flows on the edges in $\cal E$ by $\Delta\lambda$ in total} \Procedure{decrease}{$\cal E$,$\Delta\lambda$} \State sort $\cal E$ in non-decreasing order of flow data rate \ForAll{$e\in {\cal E}$ while flow data rate $\lambda(e)\le\Delta\lambda$} \State $\Delta\lambda:=\Delta\lambda-\lambda(e)$ \State remove $e$ \EndFor \If{$\Delta\lambda>0$} \State let $e$ be the next edge \State reduce flow of $e$ by a factor of $(\lambda(e)-\Delta\lambda)/\lambda(e)$ \EndIf \EndProcedure \State\Comment{Increase the flows in $\Phi$ leaving output $k$ of instance $i$ by $\Delta\lambda$ in total} \Procedure{increase}{$i$,$k$,$\Phi$,$\Delta\lambda$} \ForAll{arc $(c(i),j)$ leaving output $k$ of $c(i)$} \If{$\nexists i'\in I_{\text{OL}}$ with $c(i')=j$ and $ii'\in E_{\text{OL}}$} \State $\varphi:=\;$\Call{createInstanceAndFlow}{$j$, $i$, $\Delta\lambda$} \State $\Delta\lambda:=\Delta\lambda-(\text{data rate of } \varphi)$ \State $\Phi:=\Phi\cup\{\varphi\}$ \EndIf \EndFor \ForAll{$\varphi\in\Phi$} \State $d:=\;$\Call{incrFlow}{$\varphi$,$\Delta\lambda$} \State $\Delta\lambda:= \Delta\lambda-d$ \EndFor \While{$\Delta\lambda>0$} \State $(c(i),j)$: random arc leaving output $k$ of $c(i)$ \State $\varphi:=\;$\Call{createInstanceAndFlow}{$j$, $i$, $\Delta\lambda$} \State
$\Delta\lambda:=\Delta\lambda-(\text{data rate of }\varphi)$ \EndWhile \EndProcedure \State\Comment{Create an instance of component $j$ with flow from instance $i$ of high data rate (capped at cutoff)} \Procedure{createInstanceAndFlow}{$j$,$i$,cutoff} \ForAll{$v\in V$} \State create temporary instance $i'$ of $j$ on $v$ \State $\varphi$: flow of data rate 0 from $i$ to $i'$ \State \Call{incrFlow}{$\varphi$,cutoff} \State remove $i'$ and $\varphi$ \EndFor \State create instance of $j$ on node resulting in best flow \EndProcedure \State\Comment{Increase flow data rate by at most $d$} \Procedure{incrFlow}{$\varphi$,$d$} \State $v:=\;$start node of $\varphi$ \State $v':=\;$end node of $\varphi$ \State $\beta_1:=\;$maximum flow based on $\text{cap}_{\text{cpu}}(v')$ \State $\beta_2:=\;$maximum flow based on $\text{cap}_{\text{mem}}(v')$ \State $d:=\min(d,\beta_1,\beta_2)$ \State $P$: $v\leadsto v'$ path of high bandwidth ($b$) and low latency \State increase $\varphi$ by $\min(b,d)$ along $P$ \EndProcedure \end{algorithmic} \end{algorithm} In the \textsc{createInstanceAndFlow} procedure (called by \textsc{increase} to create a new instance of a component together with a flow from an existing instance), all nodes of the substrate network are temporarily tried for hosting the new instance. The candidate that leads to the best flow is selected (lines 26--31). Finally, the \textsc{incrFlow} procedure (called by both \textsc{increase} and \textsc{createInstanceAndFlow}) increases the data rate of a flow along a new path (lines 34--40). As can be seen, we avoid computing maximum flows. This is because the running time of the best known algorithms for this purpose is worse than quadratic with respect to the size of the graph \cite{hochbaum2008pseudoflow}. Since these subroutines are run many times, the high time complexity would be problematic for large substrate networks. Instead, each run of \textsc{incrFlow} increases a flow only along one new path.
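The selection of the ``best flow'' in line 31 relies on comparing candidate flows by data rate (capped at a cutoff) and breaking ties by latency. A minimal sketch of this comparison relation (the field names are hypothetical):

```python
def flow_key(data_rate, latency, cutoff):
    """Sort key: higher (cutoff-capped) data rate first, lower latency second.
    Rates above the cutoff add no value, so they compare as equal."""
    return (-min(data_rate, cutoff), latency)

candidates = [
    {"rate": 8.0, "lat": 3.0},   # below the cutoff
    {"rate": 12.0, "lat": 5.0},  # capped to 10, higher latency
    {"rate": 15.0, "lat": 2.0},  # capped to 10, lowest latency: wins
]
best = min(candidates, key=lambda f: flow_key(f["rate"], f["lat"], cutoff=10.0))
```

The same ordering can be applied to candidate paths, with bandwidth in place of data rate.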
For finding the path, a modified best-first-search \cite{korf1993linear} is used, which runs in linear time. It should be noted that split flows can still be created if \textsc{incrFlow} is run multiple times for a flow. When improving a flow and when selecting from multiple possible flows, the \textsc{incrFlow} and \textsc{createInstanceAndFlow} routines must strike a balance between flow data rate and the increase in overall delay of the solution. Our strategy for comparing two possible flows is to first compare their data rates and compare their latencies only if there is a tie. This strategy is used in line 31 to select the best flow. The rationale is that selecting flows with high data rate leads to a small number of instances to be created. However, we also employ a cutoff mechanism: flow data rates above the cutoff (the increase in data rate that we want to achieve) do not add more value and are hence regarded as equal to the cutoff value. This increases the likelihood of a tie, so that the tie-breaking method of preferring lower latencies is also important. An analogous strategy is used in line 39 to compare paths: the primary criterion is to prefer paths with higher bandwidth -- up to the given cutoff $d$ -- and, in case of a tie, to prefer paths with lower latency. The best-first-search stores the nodes to be visited in a priority queue, where priority is defined in accordance with the comparison relation described above. \section{Mixed integer programming approach} \label{sec:integer} In this section, we provide a mixed integer programming (MIP) formulation of the problem. On the one hand, this serves as a further formalization of the problem; on the other hand, under suitable assumptions (to be detailed in Section~\ref{sec:solve_mip}), an appropriate solver can be used to solve the mixed integer program, yielding an algorithm for the problem.
Based on the assumption that two instances of the same component cannot be mapped to the same node, instances can be identified by the corresponding component and the hosting node. This is the basis for our choice of variables, which are explained in more detail in Table~\ref{tab:vars}. \begin{table}[tb] \caption{Variables} \label{tab:vars} \begin{tabular}{p{8mm}p{8mm}p{60mm}} \toprule Name & Domain & Definition \\ \midrule $x_{j,v}$ & $\{0,1\}$ & 1 iff an instance of component $j{\in} {\cal C}$ is mapped to node $v{\in} V$ \\ $y_{a,v,v'}$ & $\mathbb{R}_{\ge 0}$ & If $a{\in} A_T$ is an arc from an output of $j{\in} C_T$ to an input of $j'{\in} C_T$, an instance of $j$ is mapped on $v{\in} V$, and an instance of $j'$ is mapped on $v'{\in} V$, then $y_{a,v,v'}$ is the data rate of the corresponding flow from $v$ to $v'$; otherwise it is 0 \\ $z_{a,v,v',l}$ & $\mathbb{R}_{\ge 0}$ & If $a{\in} A_T$ is an arc from an output of $j{\in} C_T$ to an input of $j'{\in} C_T$, an instance of $j$ is mapped on $v{\in} V$, and an instance of $j'$ is mapped on $v'{\in} V$, then $z_{a,v,v',l}$ is the data rate of the corresponding flow from $v$ to $v'$ that goes through link $l{\in} L$; otherwise it is 0 \\ $\Lambda_{j,v}$ & $\mathbb{R}_{\ge 0}^{|\text{In}(j)|}$ & Vector of data rates on the inputs of the instance of component $j{\in} C_T$ on node $v{\in} V$, or an all-zero vector if no such instance is mapped on $v$ \\ $\Lambda'_{j,v}$ & $\mathbb{R}_{\ge 0}^{|\text{Out}(j)|}$ & Vector of data rates on the outputs of the instance of component $j{\in} C_T$ on node $v{\in} V$, or an all-zero vector if no such instance is mapped on $v$ \\ $\varrho_{j,v}$ & $\mathbb{R}_{\ge 0}$ & CPU requirement of the instance of component $j{\in} C_T$ on node $v{\in} V$, or zero if no such instance is mapped on $v$ \\ $\mu_{j,v}$ & $\mathbb{R}_{\ge 0}$ & Memory requirement of the instance of component $j{\in} C_T$ on node $v{\in} V$, or zero if no such instance is mapped on $v$ \\ $\omega_{v,\text{cpu}}$ &
$\{0,1\}$ & 1 iff the CPU capacity of node $v{\in} V$ is exceeded \\ $\omega_{v,\text{mem}}$ & $\{0,1\}$ & 1 iff the memory capacity of node $v{\in} V$ is exceeded \\ $\omega_l$ & $\{0,1\}$ & 1 iff the maximum data rate of link $l{\in} L$ is exceeded \\ $ \psi_\text{cpu} $ & $\mathbb{R}_{\ge 0}$ & Maximum CPU over-subscription over all nodes \\ $ \psi_\text{mem} $ & $\mathbb{R}_{\ge 0}$ & Maximum memory over-subscription over all nodes \\ $ \psi_\text{dr} $ & $\mathbb{R}_{\ge 0}$ & Maximum capacity over-subscription over all links\\ $\zeta_{a,v,v',l}$ & $\{0,1\}$ & 1 iff $z_{a,v,v',l}>0$ \\ $\delta_{j,v}$ & $\{0,1\}$ & 1 iff $x_{j,v}\ne x^*_{j,v}$ \\ \bottomrule \end{tabular} \end{table} We use the following notations for formalizing the constraints and objectives. ${\cal C}{=}\bigcup_{T\in {\cal T}} C_T$ denotes the set of all components, ${\cal A}{=}\bigcup_{T\in {\cal T}} A_T$ the set of all arcs, and ${\cal S}{=}\bigcup_{T\in {\cal T}} S(T)$ the set of all sources across all network services that we want to map to the network. $M$, $M_1$, and $M_2$ denote sufficiently large constants. $(\Lambda_{j,v})_k$ denotes the $k$th component of the vector $\Lambda_{j,v}$. $\underbar{0}$ denotes a zero vector of appropriate length. Information about existing instances should also be taken into account during the decision process. For this, we define $x^*_{j,v} (\forall j \in {\cal C}, v \in V)$ as a constant given as part of the problem input. If there is a previously mapped instance of component $j$ on node $v$ in the network, $x^*_{j,v}$ is 1, otherwise it is 0. \subsection{Constraints} Here we define the sets of constraints that enforce the required properties of the template embedding process. 
\subsubsection{Mapping consistency rules} {\footnotesize \begin{align} \forall (v,j,\lambda)\in {\cal S}: && x_{j,v} & = 1 \\ \forall (v,j,\lambda)\in {\cal S}: && \Lambda'_{j,v} & = \lambda \\ \forall j\in {\cal C}, \forall v\in V, k \in [1,|\text{In}(j)|]: && (\Lambda_{j,v})_k & \le M\cdot x_{j,v} \\ \forall j\in {\cal C}, \forall v\in V, k \in [1,|\text{Out}(j)|]: && (\Lambda'_{j,v})_k & \le M\cdot x_{j,v} \\ \forall j\in {\cal C}, \forall v\in V: && x_{j,v}-x^*_{j,v} & \le \delta_{j,v} \\ \forall j\in {\cal C}, \forall v\in V: && x^*_{j,v}-x_{j,v} & \le \delta_{j,v} \end{align} } Constraints (1) and (2) enforce that the placement and the output data rate, respectively, of source component instances are in line with the tuples specified in $\cal S$. Constraint (3) guarantees the consistency between the variables $\Lambda_{j,v}$ and $x_{j,v}$: if $\Lambda_{j,v}$ has a positive component, then $x_{j,v}$ must be 1, i.e., only an existing component instance can process the incoming flow. Constraint (4) is analogous for the outgoing flows, represented by the $\Lambda'_{j,v}$ variables. Constraints (5) and (6) together ensure that $\delta_{j,v}=1$ if and only if $x_{j,v}\ne x^*_{j,v}$.
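Constraints (5) and (6) are the standard linearization of $\delta_{j,v}=|x_{j,v}-x^*_{j,v}|$ for binary variables; an exhaustive check of the four cases (purely illustrative -- in the MILP, minimality of $\delta_{j,v}$ is driven by its positive weight in the objective):

```python
def min_feasible_delta(x, x_star):
    """Smallest value satisfying delta >= x - x* and delta >= x* - x.
    Since delta has a positive weight in the objective, the solver
    drives it down to this value, i.e. delta = |x - x*|."""
    return max(x - x_star, x_star - x)

# All four binary combinations of (x, x*):
cases = {(x, xs): min_feasible_delta(x, xs) for x in (0, 1) for xs in (0, 1)}
```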
\subsubsection{Flow and data rate rules} {\footnotesize \begin{multline} \forall j\in {\cal C}, \text{$j$ not a source component}, \forall v\in V:\\ \Lambda'_{j,v} = r_j(\Lambda_{j,v})-(1-x_{j,v})\cdot r_j(\underbar{0}) \end{multline} \begin{multline} \forall j\in {\cal C}, \forall v\in V, k \in [1,|\text{In}(j)|]:\\ (\Lambda_{j,v})_k=\sum_{a\text{ ends in input }k\text{ of }j, v'\in V} y_{a,v',v} \end{multline} \begin{multline} \forall j\in {\cal C}, \forall v\in V, k \in [1,|\text{Out}(j)|]:\\ (\Lambda'_{j,v})_k=\sum_{a\text{ starts in output }k\text{ of }j, v'\in V} y_{a,v,v'} \end{multline} \begin{multline} \forall a\in{\cal A},\forall v,v_1,v_2\in V:\\ \sum_{vv'\in L}z_{a,v_1,v_2,vv'}-\sum_{v'v\in L}z_{a,v_1,v_2,v'v} =\\ =\begin{cases} 0 & \text{if }v\ne v_1\text{ and }v\ne v_2 \\ y_{a,v_1,v_2} & \text{if }v=v_1\text{ and }v_1\ne v_2 \\ -y_{a,v_1,v_2} & \text{if }v=v_2\text{ and }v_1\ne v_2 \\ 0 & \text{if }v=v_1=v_2 \end{cases} \end{multline} \begin{align} \forall a\in{\cal A}, \forall v,v'\in V, \forall l\in L: && z_{a,v,v',l} & \le M\cdot \zeta_{a,v,v',l} \end{align} } Constraint (7) computes the data rate on the outputs of a processing component instance based on the data rates on its inputs and the $r_j$ function of the underlying component. The constraint is formulated in such a way that for $x_{j,v}=1$, $\Lambda'_{j,v} = r_j(\Lambda_{j,v})$, whereas for $x_{j,v}=0$ (in which case $\Lambda_{j,v}=0$ as well, because of Constraint (3)), $\Lambda'_{j,v}=0$ too, so that there is no contradiction with Constraint (4). Constraint (8) computes the data rate on the inputs of a component instance as the sum of the data rates on the links ending in that input. Similarly, Constraint (9) ensures that the data rate on the outputs of a component instance is distributed on the links starting in that output. Constraint (10) is the flow conservation rule, also ensuring the right data rate of each flow, thus connecting the $z_{a,v,v',l}$ variables (flow values on individual links) and the $y_{a,v,v'}$ variables (flow data rate).
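For an affine rate function, the correction term $(1-x_{j,v})\cdot r_j(\underbar{0})$ in Constraint (7) cancels the constant offset when no instance is mapped. A small numeric check for a single-input, single-output component with a hypothetical rate function $r(\lambda)=2\lambda+1$:

```python
def output_rate(x, lam, a, b):
    """Constraint (7) specialized to r(lam) = a*lam + b:
    Lambda' = r(Lambda) - (1 - x) * r(0)."""
    r = lambda l: a * l + b
    return r(lam) - (1 - x) * r(0.0)

mapped = output_rate(x=1, lam=4.0, a=2.0, b=1.0)    # instance mapped: r(4) = 9
unmapped = output_rate(x=0, lam=0.0, a=2.0, b=1.0)  # no instance: r(0) - r(0) = 0
```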
Constraint (11) sets the $\zeta_{a,v,v',l}$ variables (on the basis of the $z_{a,v,v',l}$ variables), so that they can be used later on in the objective function (Section~\ref{subsec:objective}). \subsubsection{Calculation of resource consumption} {\footnotesize \begin{align} \forall j\in {\cal C}, \forall v\in V: && \varrho_{j,v} & = p_j(\Lambda_{j,v})-(1-x_{j,v})\cdot p_j(\underbar{0}) \\ \forall j\in {\cal C}, \forall v\in V: && \mu_{j,v} & = m_j(\Lambda_{j,v})-(1-x_{j,v})\cdot m_j(\underbar{0}) \end{align} } Constraints (12) and (13) calculate the CPU and memory consumption, respectively, of each component instance based on the $p_j$ and $m_j$ functions of the underlying component\footnote{Adding more resource types would be reflected by adding corresponding constraints here.}. The logic here is analogous to that of Constraint (7). \subsubsection{Capacity constraints} {\footnotesize \begin{align} \forall v\in V: & \quad \sum_{j\in {\cal C}}\varrho_{j,v} \le \text{cap}_{\text{cpu}}(v)+M\cdot\omega_{v,\text{cpu}} \\ \forall v\in V: & \quad \sum_{j\in {\cal C}}\varrho_{j,v} - \text{cap}_{\text{cpu}}(v) \leq \psi_\text{cpu} \\ \forall v\in V: & \quad \sum_{j\in {\cal C}}\mu_{j,v} \le \text{cap}_{\text{mem}}(v)+M\cdot\omega_{v,\text{mem}} \\ \forall v\in V: & \quad \sum_{j\in {\cal C}}\mu_{j,v} - \text{cap}_{\text{mem}}(v) \leq \psi_\text{mem} \\ \forall l\in L: & \quad \sum_{a\in {\cal A};v,v'\in V}z_{a,v,v',l} \le b(l)+M\cdot\omega_l \\ \forall l\in L: & \quad \sum_{a\in {\cal A};v,v'\in V}z_{a,v,v',l} - b(l) \leq \psi_\text{dr} \end{align} } The aim of these constraints is to set the $\omega$ and $\psi$ variables (based on the already defined $\varrho$, $\mu$ and $z$ variables), which will be used in the objective function (Section~\ref{subsec:objective}).
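The intended values of the $\omega$ and $\psi$ variables can be stated directly: a binary overload flag per node and the maximum over-subscription across all nodes. The sketch below illustrates this for the CPU case with made-up loads; in the MILP, these values emerge from the constraints together with the minimization of the objective.

```python
def overload_indicators(loads, capacities):
    """omega[v] = 1 iff node v's capacity is exceeded;
    psi = maximum over-subscription over all nodes (0 if none)."""
    omega = {v: int(loads[v] > capacities[v]) for v in loads}
    psi = max(0.0, max(loads[v] - capacities[v] for v in loads))
    return omega, psi

# Node v1 is overloaded by 2.0 CPU units; v2 is within its capacity.
omega, psi = overload_indicators(
    loads={"v1": 12.0, "v2": 7.0}, capacities={"v1": 10.0, "v2": 8.0})
```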
Constraint (14) ensures that $\omega_{v,\text{cpu}}$ will be 1 if the CPU capacity of node $v$ is overloaded, while Constraint (15) ensures that $\psi_\text{cpu}$ will be at least as high as the amount of CPU overload of any node (the appearance of $\psi_\text{cpu}$ in the objective function will guarantee that it will be exactly the maximum amount of CPU overload and not higher than that). Constraints (16) and (17) do the same for memory overloads, and Constraints (18) and (19) do the same for the overload of link capacities. \subsubsection{Interplay of the constraints} To illustrate the interplay of the constraints, we assume that we need to optimize the embedding shown in Fig.~\ref{fig:mapping}. Constraints (1) and (2) ensure that instances of the source component, i.e., S1 and S2, are embedded and their output data rates are set correctly. Constraint (9) ensures that these data rates are then handed out as flows that can only end up in instances of A. These flows are mapped to network links, and instances of A are assigned input data rates using Constraints (10) and (8), respectively. With these values set, Constraint (3) marks the instances A1 and A2 as embedded, and Constraint (7) sets their output data rates using the respective $r_j$ function. In a similar way, the rest of the components are instantiated and embedded in the network. Constraints (5) and (6) ensure that the $\delta_{j,v}$ variables are set correctly. Constraints (12) and (13) compute the resource consumption of each instance based on the input data rates and the corresponding $p_j$ and $m_j$ functions. Constraints (14)--(19) make sure that over-subscription of node and link capacities is captured correctly, and collect the maximum value of over-subscription for each resource type. This maximum value is used in the objective function described in Section~\ref{subsec:objective}, which drives the decisions based on the constraints.
\subsection{Optimization objective} \label{subsec:objective} We formalize the optimization objective based on the goals defined in Section~\ref{subsec:problem} as follows: {\footnotesize \begin{multline} \text{minimize} \quad M_1\cdot\Big( \sum_{v\in V}(\omega_{v,\text{cpu}} + \omega_{v,\text{mem}}) + \sum_{l\in L}\omega_{l}\Big)+ \\ + M_2\cdot\Big(\sum_{\substack{a\in {\cal A} \\ v,v'\in V \\ l\in L}}(d(l)\cdot\zeta_{a,v,v',l})+ \sum_{\substack{j\in {\cal C} \\ v\in V}}\delta_{j,v}\Big)+ \\ + \psi_\text{cpu} + \psi_\text{mem} + \psi_\text{dr} + \sum_{\substack{j\in {\cal C} \\ v\in V}}(\varrho_{j,v} + \mu_{j,v}) + \sum_{\substack{a\in {\cal A} \\ v,v'\in V \\ l\in L}}z_{a,v,v',l} \end{multline} } By assigning sufficiently large values to $M_1$ and $M_2$, we can achieve the following goals with the given priorities (1 being the highest priority): \begin{enumerate} \item The number of capacity constraint violations over all nodes and links is minimized. \item Template arcs are mapped to network paths in such a way that their total latency is minimized. Moreover, the number of instances that need to be started/stopped is minimized. \item The maximum value of capacity constraint violations over all nodes and links is minimized. Also, overlay instances and the edges among them are created in a way that minimizes their resource consumption. \end{enumerate} The objective function is in line with the objectives defined in Section~\ref{subsec:problem}. The primary objective is to minimize the number of constraint violations; a sufficiently large $M_1$ ensures that a decrease in the first term of the objective function has a larger impact than any change in the other terms.
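The priority ordering rests on choosing $M_1$ and $M_2$ large enough that each weighted group dominates every feasible value of the lower-priority groups. A toy numeric illustration (the weights and term values below are hypothetical):

```python
def objective(violations, latency_and_changes, resource_terms,
              M1=10**6, M2=10**3):
    """Weighted sum emulating a lexicographic order:
    capacity violations first, then latency/instance changes,
    then plain resource consumption."""
    return M1 * violations + M2 * latency_and_changes + resource_terms

# Removing one violation outweighs large increases in all other terms:
with_violation = objective(violations=1, latency_and_changes=0,
                           resource_terms=0)
without_violation = objective(violations=0, latency_and_changes=500,
                              resource_terms=900)
```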
Moreover, the resulting solution $\sigma'$ will be Pareto-optimal with respect to the other, secondary metrics: otherwise, there would be another solution $\sigma''$ that is as good as $\sigma'$ according to each secondary metric and strictly better than $\sigma'$ in at least one secondary metric; but then, $\sigma''$ would lead to a lower overall value of the objective function. This mixed integer program can be used for the initial embedding of service templates as well as for optimizing existing embeddings. However, for the initial embedding of newly requested network services, the term $\sum_{j\in {\cal C}, v\in V}\delta_{j,v}$ should be removed from the objective function because it would introduce an unwanted bias towards embeddings with fewer instances, although it is possible that having more instances can decrease the overall cost of the solution. \subsection{Solving the mixed integer program} \label{sec:solve_mip} All our constraints are linear equations and linear inequalities, and the objective function is linear as well. Hence, if the functions $p_j$, $m_j$, and $r_j$ are linear for all $j\in{\cal C}$, then we obtain a mixed-integer linear program (MILP), which can be solved by appropriate solvers. For non-linear functions, a piecewise linear approximation may make it possible to use MILP solvers to obtain good (although not necessarily optimal) solutions. \section{Introduction} Network services, like video streaming and online gaming, consist of different service components, including (virtual) network functions, application servers, databases, etc. Typically, several of these network services are hosted on top of wide-area networks, serving the continuously changing demands of their users. The need for efficient and automatic deployment, scaling, and path selection methods for the network services has led to paradigms like network softwarization, including software-defined networking (SDN) and network function virtualization (NFV).
SDN and NFV provide the required control and orchestration mechanisms to drive the network services through their life-cycle. Today, network services are placed and deployed in the network based on fixed, pre-defined descriptors~\cite{etsi-mano} that contain the number of required instances for each service component and the exact resource demands. More flexibility can be achieved by specifying auto-scaling thresholds for metrics of interest. Once such a threshold is reached, the affected network services should be modified, e.g., scaled. To react to the addition and removal of network services, to fluctuations in the request load of a network service, or to new user groups in a new location, (i) the network services can be scaled out/in by adding/removing instances of service components, (ii) the placement of service components and the amount of resources allocated to them can be modified, and (iii) the network flows between the service components can be re-routed through different, more suitable paths. Given this large number of degrees of freedom for finding the best adaptation, deciding scaling, placement, and routing independently can result in sub-optimal decisions for the network and the running services. Consider a service platform provider hosting a dynamically changing set of network services, where each network service serves dynamically changing user groups that produce dynamically changing data rates. Trade-offs among the conflicting goals of network services and platform operators can be highly non-trivial, for example: \begin{itemize} \item Placing a compute-intensive service component on a node with limited resources near the \emph{source} of requests (e.g., the location of users, content servers, etc.) minimizes latency but placing it on a more powerful node further away in the network minimizes processing time.
\item Letting a single instance of a data-processing component serve multiple sources minimizes compute resource consumption but using dedicated instances near the sources minimizes network load. \item Changing the current configuration to a better one will hopefully pay off in the long run but keeping the current configuration avoids reconfiguration costs. \item Fulfilling the resource requirements of one service may conflict with fulfilling those of another service. \end{itemize} To deal with these challenges, we propose JASPER, a comprehensive approach for the \textbf{J}oint optimiz\textbf{A}tion of \textbf{S}caling, \textbf{P}lac\textbf{E}ment, and \textbf{R}outing of virtual network services. In JASPER, each network service is described by a \emph{service template}, containing information about the components of the network service, the interconnections between the components, and the resource requirements of the components. Both the resource requirements and the outgoing data rates of a component are specified as \emph{functions of the incoming data rates}. The input to the problem we are tackling comprises the service templates, the location and data rate of the \emph{sources} of each network service, and the topology and available resources of the underlying network. Our optimization approach takes care of the rest: based on the location and current data rate of the sources, in a single step, the templates are scaled by replicating service components as necessary, the placement of components on network nodes is determined, and data flows are routed along network paths. Node and link capacity constraints of the network are automatically taken into account. We optimize the solution along multiple objectives, including minimizing resource usage, minimizing latency, and minimizing deployment adaptation costs.
Our main contributions are as follows: \begin{itemize} \item For the case where resource demands of service components are determined as a function of the incoming data rate to each instance, we formalize \emph{template embedding} as a joint optimization problem for scaling, placing, and routing service templates in the network. \item We prove the NP-hardness of the problem. \item We present two algorithms for solving the problem, one based on mixed integer programming, the other a custom heuristic. \item We evaluate both algorithms in detail to determine their strengths and weaknesses. \end{itemize} With the proposed approach, service providers obtain a flexible way to define network services on a high level of abstraction while service platform providers obtain powerful methods to optimize the scaling and placement of multiple services in a single step, fully automatically. The rest of the paper is organized as follows. In Section~\ref{sec:previous}, we give an overview of related work. Section~\ref{sec:approach} presents a high-level overview of our approach and Section~\ref{sec:problem} describes the details of our model and assumptions. We discuss the complexity of template embedding in Section~\ref{sec:compl} and formulate the problem as a mixed integer programming model in Section~\ref{sec:integer}. We present a heuristic solution in Section~\ref{sec:heur} and the evaluation results of our solutions in Section~\ref{sec:eval}, before concluding the paper in Section~\ref{sec:concl}. \section*{Acknowledgment} This work has been performed in the context of the SONATA project, funded by the European Commission under Grant number 671517 through the Horizon 2020 and 5G-PPP programs. This work is partially supported by the German Research Foundation (DFG) within the Collaborative Research Center ``On-The-Fly Computing'' (SFB 901). The work of Z.\ \'A.\ Mann was partially supported by the Hungarian Scientific Research Fund (Grant Nr. 
OTKA 108947) and the European Union's Horizon 2020 research and innovation programme under grant 731678 (RestAssured). \bibliographystyle{IEEEtran} \section{Related work} \label{sec:previous} The template embedding problem is a joint, single-step optimization of scaling, placement, and routing for network services. In general, our solution can be applied in different contexts, e.g., (distributed) cloud computing and Network Function Virtualization (NFV). In this section, after an analysis of related approaches from a theoretical point of view, we give an overview of related work in the cloud computing and NFV contexts. The major difference between existing works in these two fields is usually the abstraction level considered for the substrate network and the resulting modeling assumptions. In particular, in the cloud computing context, embedding is typically done on top of physical machines in data centers, while in the NFV context, embedding is done on top of geographically distributed points of presence. \subsection{Virtual network embedding problem} The combination of the placement and path selection sub-problems of template embedding is similar to the Virtual Network Embedding (VNE) problem. Both deal with mapping virtual nodes and virtual links of a graph into another graph and do not include the scaling step. Fischer et al.~\cite{Fischer2013} have published a survey of different approaches to VNE, including static and dynamic VNE algorithms. In contrast to static VNE solutions that consider the initial mapping process only, in this paper we also deal with optimizing and modifying already embedded templates. Some VNE solutions, for example, Houidi et al.~\cite{houidi2015exact}, can modify the mapping in reaction to node or link failures. The modifications in their work, however, are limited to recalculating the location for the embedded virtual network, i.e., migrating some of the nodes and changing the corresponding paths among them.
In addition to these modifications, our approach can also modify the \emph{structure} of the graph to be embedded by adding or removing nodes and links if necessary. \subsection{Cloud computing context} The related problem in cloud environments is typically formulated as resource allocation for individual components. Scaling and placing instances of virtual machines on top of physical machines while adhering to capacity constraints are the usual problems tackled in this context~\cite{lorido2014review,mann2016interplay}. The communication among different virtual machines, however, is usually left out or considered only in a limited sense~\cite{mann2015allocation}. Even the approaches that do consider the communication among virtual machines~\cite{divakaran2015towards,ahvar2015nacer,alicherry2013optimizing,ahvar2016cacev} do not include routing decisions whereas JASPER also includes routing. Relevant to the placement sub-problem of template embedding, Bellavista et al.~\cite{Bellavista2015} focus on the technical issues of deploying flexible cloud infrastructure, including network-aware placement of multiple virtual machines in virtual data centers. Wang et al.~\cite{Wang2017} study the dynamic scaling and placement problem for network services in cloud data centers, aiming at reducing costs. These papers also do not address routing. Moreover, our approach of specifying resource consumption as a function of input data rates allows a much more realistic modeling of the resource needs of service components than the constant resource needs assumed by the existing approaches in this context. Keller et al.~\cite{Keller2014b} consider an approach similar to our template embedding problem in the context of distributed cloud computing. Our terminology is partly based on their work but there are important differences in the assumptions and the models that make our approach stronger and more flexible than their solutions. 
In contrast to their model, where the number of users determines the number of required instances, the deciding factor in our work is the \emph{data rate} originating from different source components. Data rate can be expressed, for example, in requests or bits per second; it is a more tangible metric in practical applications and gives finer-grained control over the embedding process. Moreover, we do not enforce strict scaling restrictions for components as done in their work. (For example, their method needs as input the exact number of instances of a back-end server that is required behind a front-end server.) Finally, the optimization objective in their model is limited to minimizing the total number of instances for embedded templates. We use a more sophisticated multi-objective optimization approach that considers different metrics such as the CPU and memory load of network nodes, the data rate on network links, and the latency of embedded templates.
Khebbache et al.~\cite{Khebbache2017} aim at solving this problem in an efficient way that can scale with the size of the underlying infrastructure and the embedded network services. They measure the efficiency of their algorithms with respect to run time, acceptance rate, and costs. Another attempt to solve this problem in an efficient and scalable way has been made by Luizelli et al.~\cite{Luizelli2017}, focusing on minimizing resource allocation. In comparison to all these approaches, we consider a more comprehensive optimization objective, trying to minimize the delay for network services, the number of added or removed instances, resource consumption, as well as overload of resources. In our work, the exact structure of the network service does not have to be fixed in the deployment request. In previous work~\cite{draexler2017ijnm}, we studied another type of flexibility in the network service structure, namely, the case where the network service components are specified with a partial order and can be re-ordered if desirable for the optimization objectives. Beck et al.~\cite{Beck2015} also consider placement of network services with flexibly ordered components. JASPER is based on the assumption that the \emph{order} of traversing the service components is fixed and given; however, the number of instances for each component and the amount of resources allocated to each component can be adapted dynamically, resulting in network services with malleable structures. Several other optimization approaches~\cite{mehraghdam-netsoft16,sahhaf2015,moens2014vnf,savi2015impact} and heuristic algorithms~\cite{mijumbidesign,beck2015coordinated} have been proposed for placement, scaling, and path selection problems for network services. Our template embedding approach has two important differences compared to these solutions. First, our approach can be used for initial placement of a newly requested service as well as scaling and adapting existing embeddings.
Second, in our approach, the structure of the service, the mapping of the service components to network nodes, and the optimal routing are determined in a single step, based on the requirements of the service and the current state of network resources, searching for a global optimum. A preliminary version of this work was presented at the CCGrid 2017 conference \cite{draexler2017joint}. Compared to the conference version, this paper contains the proof of NP-hardness, a more detailed explanation of the problem model and the devised algorithms, and a more detailed evaluation and discussion of the practical applicability of the proposed approach. \section{Problem model} \label{sec:problem} In this section, we formalize our model and define the problem we are tackling. Our model uses three different graphs for representing (i) the generic network service structure, (ii) a concrete and deployable instantiation of the network service, and (iii) the actual network. We use different names and notations to distinguish among these graphs (Table~\ref{tab:graphs}). Informally, the problem we address is as follows: given a substrate network, a set of -- newly requested or already existing -- network services with their templates, and the source(s) for the services in the network along with the traffic originating from them, we want to optimally embed the network services into the network.
\begin{table}[t] \centering \caption{Notations Used for Graphs in the Model} \label{tab:graphs} \begin{tabular}{llll} \toprule Graph & Symbol & Name & Annotations \\ \midrule \multirow{2}{*}{Template $G_\text{tmpl}$} & $j{\in} C_T$ & Component & $\text{In}(j)$, $\text{Out}(j)$, $p_j,m_j,r_j$ \\ & $a{\in} A_T$ & Arc \\ \midrule \multirow{2}{*}{Overlay $G_\text{OL}$} & $i{\in} I_\text{OL}$ & Instance & $c(i)$, $P_T^{(I)}(i)$ \\ & $e{\in} E_\text{OL}$ & Edge & $P_T^{(E)}(e)$ \\ \midrule \multirow{2}{*}{Network $G_\text{sub}$} & $v{\in} V$ & Node & $\text{cap}_{\text{cpu}}(v)$, $\text{cap}_{\text{mem}}(v)$ \\ & $l{\in} L$ & Link & $b(l)$, $d(l)$ \\ \bottomrule \end{tabular} \end{table} \subsection{Substrate network} We model the \emph{substrate network} as a directed graph $G_\text{sub}{=}(V,L)$. Each \emph{node} $v\in V$ is associated with a CPU capacity $\text{cap}_{\text{cpu}}(v)$ and a memory capacity $\text{cap}_{\text{mem}}(v)$ (this can be easily extended to other types of resources). Moreover, we assume that every node has routing capabilities and can forward traffic to its neighboring nodes.\footnote{Capacities can be 0, e.g., to represent conventional switches by 0 CPU capacity or an end device by 0 forwarding capacity.} Each \emph{link} $l\in L$ is associated with a maximum data rate $b(l)$ and a propagation delay $d(l)$. For each node $v$, we assume that the internal communications (e.g., communication inside a data center) can be done with unlimited data rate and negligible delay. \subsection{Templates} The substrate network has to host a set $\cal T$ of network services. We define the structure of each network service $T\in {\cal T}$ using a \emph{template}, which is a directed acyclic graph $G_\text{tmpl}(T){=}(C_T,A_T)$. We refer to the nodes and edges of the template graph as \emph{components} and \emph{arcs}, respectively. 
They define the type of components required in the network service and specify the way they should be connected to each other to deliver the desired functionality. Fig.~\ref{fig:template} shows an example template. A template component $j\in C_T$ has an ordered set of inputs, denoted as $\text{In}(j)$, and an ordered set of outputs, denoted as $\text{Out}(j)$. Its resource consumption depends on the data rates of the flows entering the component. We characterize this using a pair of functions $p_j,m_j:\mathbb{R}_{\ge 0}^{|\text{In}(j)|} \to \mathbb{R}_{\ge 0}$, where $p_j$ is the CPU load and $m_j$ is the required memory size of component $j$, depending on the data rate of the incoming flows. These functions typically account for resource consumption due to processing the input data flows as well as fixed, baseline consumption (even when idle). Similarly, data rates of the outputs of the component are determined as a function of the data rates on the inputs, specified as $r_j:\mathbb{R}_{\ge 0}^{|\text{In}(j)|} \to \mathbb{R}_{\ge 0}^{|\text{Out}(j)|}$. Fig.~\ref{fig:component} shows examples for functions $p_j,m_j,r_j$ that define the resource demands and output data rates of an example component. Each arc in $A_T$ connects an output of a component to an input of another component. \emph{Source components} are special components in the template: they have no inputs, a single output with unspecified data rate, and zero resource consumption. In the example of Fig.~\ref{fig:template}, S is a source component, whereas the others are normal processing components. \subsection{Overlays and sources} A template specifies the types of components and the connections among them as well as their resource demands depending on the load. 
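To make the load-dependent specification concrete, the following is a minimal sketch (not taken from the paper: the single-output component, its baseline consumptions, and all linear coefficients are hypothetical) of functions $p_j$, $m_j$, and $r_j$ for a component with two inputs:

```python
# Hypothetical component j with two inputs and one output. The baseline
# terms and linear coefficients are illustrative assumptions, not values
# from the paper; real profiles may be non-linear or measured.

def p_j(in_rates):
    # CPU load: idle baseline (0.5) plus a per-unit processing cost.
    return 0.5 + 0.1 * sum(in_rates)

def m_j(in_rates):
    # Memory: baseline (1.0) plus buffers proportional to the input rate.
    return 1.0 + 0.05 * sum(in_rates)

def r_j(in_rates):
    # Output rates: a single output carrying 80% of the total input.
    return [0.8 * sum(in_rates)]

cpu, mem, out = p_j([10.0, 5.0]), m_j([10.0, 5.0]), r_j([10.0, 5.0])
```

Any functions of this shape fit the model; the linear form above merely illustrates the combination of a fixed baseline and a rate-dependent part.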
A specific, deployable instantiation of a network service can be derived by scaling its template, i.e., creating the necessary number of instances for each component and linking the instances with each other according to the requirements of the template. Depending on the data rates of the service flows and the locations in the network where the flows start, different numbers of instances for each component might be required. To model this, for each network service $T$, we define a set of \emph{sources} $S(T)$. The members of $S(T)$ are tuples of the form $(v,j,\lambda)$, where $v\in V$ is a node of the substrate network, $j\in C_T$ is a source component, and $\lambda\in\mathbb{R}_+$ is the corresponding data rate assigned to the output of this source component. Such a tuple means that an instance of source component $j$ generates a flow from node $v$ with rate $\lambda$. Sources may represent populations of users, sensors, or any other component that can generate flows to be processed by the corresponding network service. Fig.~\ref{fig:sources} shows two example sources for the template of Fig.~\ref{fig:template}, located on different nodes of the substrate network. \begin{figure}[tb] \centering \subfigure[\label{fig:template}A template]{\includegraphics[width=0.27\columnwidth]{template}} \hfil \subfigure[\label{fig:component}A component]{\includegraphics[width=0.4\columnwidth]{component}} \subfigure[\label{fig:sources}Sources on a network]{\includegraphics[width=0.57\columnwidth]{sources}} \subfigure[\label{fig:overlay}An overlay]{\includegraphics[width=0.3\columnwidth]{overlay}} \hfil \subfigure[\label{fig:mapping}Overlay embedded in the network]{\includegraphics[width=0.57\columnwidth]{mapping1}} \caption{Some examples: (a) a template, (b) a component, (c) sources on a network, (d) an overlay corresponding to the template, and (e) a mapping of the overlay into a substrate network.
The links of the substrate network are bi-directional.} \end{figure} An \emph{overlay} is the outcome of scaling the template based on the associated sources. An overlay $\text{OL}$ stemming from template $T$ is described by a directed acyclic graph $G_\text{OL}(T){=}(I_\text{OL},E_\text{OL})$. Each component \emph{instance} $i\in I_\text{OL}$ corresponds to a component $c(i)\in C_T$ of the underlying template. Each $i\in I_\text{OL}$ has the same characteristics (inputs, outputs, resource consumption characteristics) as $c(i)$. Moreover, if there is an edge from an output of an instance $i_1$ to an input of instance $i_2$ in the overlay, then there must be a corresponding arc from the corresponding output of $c(i_1)$ to the corresponding input of $c(i_2)$ in the template. This ensures that the edge structure of the overlay is in line with the structural requirements of the network service, represented by the arcs in the template. To be able to create the required number of instances for each component, we assume either that the components are stateless or that a state management system is in place to handle state redistribution upon adding or removing instances. In this way, requests can be freely routed to any instance of a component. Alternatively, additional details can be added to the model, for example, to make sure that the flows belonging to a certain session are routed to the right instance of stateful components that have stored the corresponding state information. Fig.~\ref{fig:overlay} shows an example overlay corresponding to the template in Fig.~\ref{fig:template}. The naming of the instances follows the convention that the first letter identifies the corresponding component in the template, e.g., A1 is an instance of component A. An overlay might include multiple instances of a specific template component, e.g., B1, B2, and B3 all are instances of component B. 
An output of an instance can be connected to the input of multiple instances of the same component, like the output of A1 is connected to the inputs of B1 and B2. In a case like that, B1 and B2 share the data rate calculated for the connection between components A and B. Similarly, outputs of multiple instances in the overlay can be connected to the input of the same instance, like the input of C1 is connected to the output of B1, B2, and B3, in which case the input data rate for C1 is the sum of the output data rates of B1, B2, and B3. \subsection{Mapping on the substrate network} Each overlay $G_\text{OL}(T)$ must be mapped to the substrate network by a feasible mapping $P_T$. We define the mapping as a pair of functions $P_T=\left(P_T^{(I)},P_T^{(E)}\right)$. $P_T^{(I)}:I_\text{OL}\to V$ maps each instance in the overlay to a node in the substrate network. We make the simplifying assumption that two instances of the same component cannot be mapped to the same node. The rationale behind this assumption is that in this case it would be more efficient to replace the two instances by a single instance and thus save the idle resource consumption of one instance.\footnote{This simplification is mostly a technicality to simplify the problem write-up and could be extended if necessary.} $P_T^{(E)}:E_\text{OL}\to {\cal F}$ maps each edge in the overlay to a flow in the substrate network; $\cal F$ is the set of possible flows in $G_\text{sub}$. We assume the flows are splittable, i.e., can be routed over multiple paths between the corresponding endpoints in the substrate network. The two functions must be compatible: if $e\in E_\text{OL}$ is an edge from an instance $i_1$ to an instance $i_2$, then $P_T^{(E)}(e)$ must be a flow with start node $P_T^{(I)}(i_1)$ and end node $P_T^{(I)}(i_2)$. 
Moreover, $P_T^{(I)}$ must map the instances of source components in accordance with the sources in $S(T)$, mapping an instance corresponding to source component $j$ to node $v$ if and only if $\exists (v,j,\lambda)\in S(T)$. The binding of instances of source components to sources determines the outgoing data rate of these instances. As the overlay graphs are acyclic, the data rate $\lambda(e)$ on each further overlay edge $e$ can be determined based on the input data rates and the $r_j$ functions of the underlying components, considering the instances in a topological order. The data rates, in turn, determine the resource needs of the instances. Fig.~\ref{fig:mapping} shows a possible mapping of the overlay of Fig.~\ref{fig:overlay} to an example substrate network, based on the pre-defined location of S1 and S2 in the network. Note that it is possible to map two communicating instances to the same node, like A2 and D2 in the example. In this case, the edge between them can be realized inside the node, without using any links. The flow between A2 and B3 is an example of a split flow that is routed over two different paths in the substrate network. Note that Fig.~\ref{fig:mapping} shows only a single overlay mapped to the substrate network for the sake of clarity. In general, JASPER can embed several overlays corresponding to different network services into a substrate network. \subsection{Objectives} \label{subsec:problem} The \emph{system configuration} consists of the overlays and their mapping on the substrate network. A new system configuration can be computed by an appropriate algorithm for the template embedding problem. A valid system configuration must respect all capacity constraints: for each node $v$, the total resource needs of the instances mapped to $v$ must be within its capacity concerning both CPU and memory, and for each link $l$, the sum of the flow values going through $l$ must be within its maximum data rate. 
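The topological-order computation of the edge data rates described above can be sketched as follows. This is a simplified illustration, not JASPER's actual algorithm: each instance is reduced to a single hypothetical \texttt{ratio} factor standing in for the full $r_j$ functions, and an outgoing flow is split evenly among successor instances, whereas in the actual problem the split is itself an optimization decision.

```python
from collections import defaultdict

def propagate_rates(edges, ratio, source_rates):
    """edges: list of (i1, i2) overlay edges; ratio[i]: output/input rate
    factor of instance i (stand-in for r_j); source_rates: {source: lambda}."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for a, b in edges:
        succ[a].append(b)
        indeg[b] += 1
        nodes.update((a, b))
    in_rate = defaultdict(float)
    for s, lam in source_rates.items():
        in_rate[s] = lam                      # source components emit lambda
    queue = [n for n in nodes if indeg[n] == 0]
    edge_rate = {}
    while queue:                              # Kahn-style topological order
        i = queue.pop()
        out = ratio.get(i, 1.0) * in_rate[i]
        for j in succ[i]:
            share = out / len(succ[i])        # even split: an assumption
            edge_rate[(i, j)] = share
            in_rate[j] += share               # inputs sum over incoming edges
            indeg[j] -= 1
            if indeg[j] == 0:
                queue.append(j)
    return edge_rate
```

Because the overlay is acyclic, every instance's input rate is fully known before its outputs are computed, mirroring the topological-order argument in the text.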
However, it is also possible that some of those constraints are violated in a given system configuration: for example, a valid system configuration (i.e., one without any violations) may become invalid because the data rate of a source has increased, because of a temporary peak in resource needs, or because of a failure in the substrate network. Therefore, given a current system configuration $\sigma$, our primary objective is to find a new system configuration $\sigma'$, in which the \emph{number of constraint violations is minimal} (ideally, zero). For this, we assume that violating node CPU, memory, and link capacity constraints is equally undesirable. There are a number of further, secondary objectives, which can be used as tie-breakers to choose among system configurations that have the same number of constraint violations: \begin{itemize} \item Total delay of all edges across all overlays \item Number of instance addition/removal operations required to transition from $\sigma$ to $\sigma'$ \item Maximum amounts of capacity constraint violations, for each resource type (CPU, memory, link capacity) \item Total resource consumption of all instances across all overlays, for each resource type (CPU, memory, link capacity) \end{itemize} Higher values for these metrics result in higher costs for the system or in lower customer satisfaction, so our objective is to minimize these values. Therefore, our aim is to select a new system configuration $\sigma'$ from the set of system configurations with the minimal number of constraint violations that is Pareto-optimal with respect to these secondary metrics. \subsection{Problem formulation summary} Our aim is to handle the scaling, placement, and routing for newly requested network services as well as already deployed network services.
Taking this into account, the Template Embedding problem can be summarized as follows: \begin{itemize} \item Inputs: \begin{itemize} \item Substrate network \item Template for each network service \item Location and data rate of the sources for each network service \item For the already deployed network services: overlay and its mapping onto the substrate network \end{itemize} \item Outputs: \begin{itemize} \item For the newly requested network services: overlay and its mapping onto the substrate network \item For the already deployed network services: modified overlay and its modified mapping onto the substrate network \end{itemize} \end{itemize} Scaling is performed while creating the overlay from the template, while placement and routing are performed when the instances and edges of the overlay are mapped onto the substrate network. A further important detail concerns the relationship between different network services. The creation of the overlay from the template and its mapping onto the substrate network are defined for each network service separately; however, they share the same substrate network. The objectives defined in Sec.~\ref{subsec:problem} relate to the whole network including all network services, aiming for a global optimum and potentially resulting in trade-offs among the network services. A further connection among different network services may arise if they share the same component type. In this case, it is also possible that the corresponding overlay instances are realized by the same instance.
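The configuration selection rule of Sec.~\ref{subsec:problem} -- the minimal number of constraint violations first, then Pareto-optimality with respect to the secondary metrics -- can be sketched as follows (the metric tuples below are hypothetical placeholders, all to be minimized):

```python
def dominates(a, b):
    """a Pareto-dominates b if a is no worse everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def select(configs):
    """configs: list of (violation_count, secondary_metric_tuple).
    Returns the Pareto-optimal metric tuples among the configurations
    with the minimal number of constraint violations."""
    best_v = min(v for v, _ in configs)
    cands = [m for v, m in configs if v == best_v]
    return [m for m in cands if not any(dominates(o, m) for o in cands if o != m)]
```

Any configuration in the returned set is an acceptable outcome of the primary-plus-secondary objective; an implementation may use further rules to pick a single one.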
\section*{Introduction} \label{intro} {\it Ab initio} description of strongly correlated materials has been a challenge in condensed matter physics and materials science. A promising and widely-used scheme is to combine the local density approximation (LDA) with a Hubbard-type model Hamiltonian approach within the density functional theory (DFT) framework \cite{LDA++}. One of the earliest attempts of this kind is DFT+$U$ \cite{Anisimov_91,Liechtenstein,LDAU_review}, which is now established as a standard approach. However, the results of this type of method depend strongly on the choice of double-counting energy functionals (which remove the conceptually equivalent contribution already present in LDA or GGA (generalized gradient approximation)) as well as interaction parameters (such as the on-site Coulomb repulsion $U$ and the Hund interaction $J$). This feature severely limits the predictive power of DFT$+U$ and its cousins such as DFT+DMFT (dynamical mean-field theory). There have been many attempts to establish a proper double-counting scheme \cite{Anisimov_93,Czyzyk,Solovyev_94,Petukhov,nominal_dc,Amadon,Karolak,Wang,U',Haule}. The difficulty lies in the nonlinear dependence of exchange-correlation (XC) functionals on the charge and/or spin density. It is therefore non-trivial to extract the precise portion of the LDA/GGA XC energy for the correlated subspace. Since the invention of the DFT+$U$ method, several phenomenological recipes have been suggested, among which the most widely used are the so-called FLL (fully localized limit) \cite{Anisimov_93,Czyzyk,Solovyev_94,Liechtenstein} and AMF (around mean-field) \cite{Anisimov_91,Czyzyk}. Even though these double-counting implementations have been extensively exploited, a comprehensive understanding of their working principles has not been reached. It is still unclear how, and by how much, these different formalisms yield different results and predictions.
In spite of previous analyses, including some recent case studies of transition-metal systems within FLL \cite{Czyzyk,Solovyev_94,Petukhov,Bultmark,JChen,Park,Chen}, many functionals often seem to be chosen at random, without a proper guiding principle. As a result, it remains difficult to compare the results or predictions obtained by different DFT$+U$ formalisms. In this paper, we perform a comparative study of representative DFT+$U$ functionals including FLL and AMF double countings. The effect of the XC functional choice is also examined. To understand the detailed working principles of each DFT+$U$ formalism, we first examine simplified model systems in terms of their energetics and potentials. Special attention is paid to the $J$ dependence, which has rarely been addressed before. Our analysis clearly shows the different behaviors of DFT+$U$ functionals and their origins. In particular, when a spin-polarized version of LDA or GGA is adopted, undesirable effects are likely to be produced. The characteristic features are further highlighted with real material examples covering strongly correlated insulating oxides (MnO and NiO) and metallic magnetic systems (SrRuO$_3$ and BaFe$_2$As$_2$). Our work sheds new light on understanding the DFT$+U$ formalism and related methodology, thereby providing a useful guideline for its applications. \section*{Formalism} In this section, for completeness and clarity of our presentation and notation, we briefly summarize DFT+$U$ formalisms within the non-collinear density functional scheme. Simplification to the collinear case is straightforward. Here, `CDFT+$U$' refers to LDA+$U$ or GGA+$U$, and `SDFT+$U$' to LSDA+$U$ (local spin density approximation + $U$) or SGGA+$U$ (spin-polarized GGA + $U$). Also, we use the terms ``cFLL''/``cAMF'' to denote CDFT+$U$ with FLL/AMF double counting and ``sFLL''/``sAMF'' for their SDFT+$U$ versions.
\subsection*{DFT+$U$ energy functionals} The DFT+$U$ total energy correction to CDFT or SDFT can be written as \cite{LDAU_review}: \begin{eqnarray} {E^{U}} = \sum_{s}{E^{U}_s} = \sum_{s}\left({E^{\textrm {int}}_s}-E^{\textrm {dc}}_s\right), \end{eqnarray} where $E^{\textrm{int}}_s$ and $E^\textrm{dc}_s$ refer to the interaction energy within the $d$- or $f$-shell and the double-counting term, respectively, for a particular atom $s$. From now on, we omit the atom index $s$ for simplicity. In the present study, $E^U$ refers to either $E^U_{\textrm{FLL}}$ (FLL) or $E^U_{\textrm{AMF}}$ (AMF) depending on the choice of double-counting term. The FLL form of $E^\textrm{int}$ reads \cite{Liechtenstein,Fulde}: \begin{align} \label{int} E^{\textrm{int}}_{\textrm{FLL}} = \frac{1}{2}\sum_{\{m_i\},\sigma,\sigma'} \{n^{\sigma\sigma}_{m_1m_2}\langle m_1,m_3|V_{ee}|m_2,m_4 \rangle n^{\sigma'\sigma'}_{m_3m_4} - n^{\sigma\sigma'}_{m_1m_2} \langle m_1,m_3|V_{ee}|m_4,m_2 \rangle n^{\sigma'\sigma}_{m_3m_4} \}, \end{align} where $n^{\sigma\sigma'}_{m_1m_2}$ are the elements of the on-site density matrix (DM) $\mathbf{n}$ for orbitals $\{m_i\}$ and spins $\sigma,\sigma'$ ($\sigma,\sigma' = \uparrow$ or $\downarrow$) \cite{MacDonald,Kubler}. The matrix elements of the on-site Coulomb interaction can be expressed as \cite{Liechtenstein, Vaugier}: \begin{align} \label{Coulomb} \langle m_1,m_3|V_{ee}|m_2,m_4 \rangle = \sum_{\{m_i'\}}\Big[S_{m_1m_1'}S_{m_3m_3'} \Big\{\sum_{k=0}\alpha_k(m_1',m_3',m_2',m_4')F^k\Big\} S^{-1}_{m_2'm_2}S^{-1}_{m_4'm_4} \Big] \end{align} where $\alpha_k$ and $F^k$ refer to the Racah-Wigner numbers and Slater integrals, respectively \cite{Liechtenstein,Vaugier}, and $S$ is a transformation matrix from spherical harmonics to the predefined local basis sets. We follow the conventional expression of $U=F^0$, $J=(F^2+F^4)/14$, and $F^4/F^2=0.625$ for $d$-orbitals. The effect of using a different ratio between $F^4$ and $F^2$ is found to be negligible (see Supplementary Information).
Expressing $E^\textrm{dc}$ has long been an important issue and still remains an open problem \cite{Karolak,Wang}. Note that $E^\textrm{dc}$ itself should depend on the given XC energy functional. The FLL double counting based on CDFT+$U$ (or cFLL) can be written as \cite{Anisimov_93,Solovyev_94}: \begin{align} \label{nFLL} E_{\textrm {cFLL}}^{\textrm{dc}} = \frac{1}{2}UN(N-1) - \frac{1}{2}JN\bigg(\frac{N}{2}-1\bigg), \end{align} where $N=\textrm{Tr}[\mathbf{n}]$ within the correlated subspace. For SDFT+$U$ (or sFLL), the effect of the spin-polarized XC energy should also be taken into account \cite{Czyzyk,Liechtenstein,Bultmark}: \begin{align} \label{FLL} E_{\textrm {sFLL}}^{\textrm{dc}} = \frac{1}{2}UN(N-1) - \frac{1}{2}JN\bigg(\frac{N}{2}-1\bigg) -\frac{1}{4}J{\vec{\mathrm{M}} \cdot \vec{\mathrm{M}}}, \end{align} where the magnetization $\vec{\mathrm{M}} = \textrm{Tr}[\vec{\sigma} \mathbf{n}]$ and $\vec{\sigma}$ denotes the Pauli matrices \cite{Bultmark}. Note that the difference is the third term of Eq.~(\ref{FLL}). This formulation of Eq.~(\ref{FLL}) has been widely used. In the AMF formalism \cite{Anisimov_91,Czyzyk,Bultmark}, the energy correction is given by the fluctuation with respect to the average occupation of the correlated orbitals \cite{Anisimov_91}: \begin{align} \label{int2} {E^U_\textrm{AMF}} = E^{\textrm{int}}_{\textrm{AMF}} - E_\textrm{AMF}^\textrm{dc} = \frac{1}{2}\sum_{\{m_i\},\sigma,\sigma'} \{ \widetilde{n}^{\sigma\sigma}_{m_1m_2}\langle m_1,m_3|V_{ee}|m_2,m_4 \rangle \widetilde{n}^{\sigma'\sigma'}_{m_3m_4} - \widetilde{n}^{\sigma\sigma'}_{m_1m_2} \langle m_1,m_3|V_{ee}|m_4,m_2 \rangle \widetilde{n}^{\sigma'\sigma}_{m_3m_4} \}, \end{align} where $\widetilde{n}^{\sigma\sigma'}_{m_1m_2}$ are the elements of the redefined DM $\mathbf{\widetilde{n}}$.
In CDFT+$U$ (or cAMF) \cite{Anisimov_91}, \begin{align} \label{nAMF} \mathbf{\widetilde{n}} = \mathbf{n} - \frac{1}{2(2l+1)}(N\mathbf{I}), \end{align} where $l$ denotes the angular momentum quantum number for the correlated subspace (e.g., $l = 2$ for $d$-shells) and $\mathbf{I}$ is the identity matrix. In SDFT+$U$ (or sAMF) \cite{Czyzyk,Bultmark}, \begin{align} \label{AMF} \mathbf{\widetilde{n}} = \mathbf{n} - \frac{1}{2(2l+1)}(N\mathbf{I}+\vec{\sigma} \cdot \vec{\mathrm{M}}). \end{align} \subsection*{DFT+$U$ potentials} The matrix elements of the orbital-dependent potentials are given by $V_{m_1m_2}^{U,\sigma \sigma'} = \partial({E^{\textrm{int}}}-E^{\textrm{dc}})/{\partial n^{\sigma \sigma'}_{m_1m_2}} = V_{m_1m_2}^{\textrm{int},\sigma \sigma'} - V_{m_1m_2}^{\textrm{dc},\sigma \sigma'}$. For FLL, the interaction potentials for the spin-diagonal and off-diagonal parts are given respectively by \cite{Liechtenstein,Fulde}, \begin{align} \label{pot} &V_{\textrm{FLL},m_1m_2}^{\textrm{int},\sigma \sigma} = \sum_{m_3,m_4,\sigma'} \{\langle m_1,m_3|V_{ee}|m_2,m_4 \rangle - \langle m_1,m_3|V_{ee}|m_4,m_2 \rangle \delta_{\sigma \sigma'} \}n^{\sigma' \sigma'}_{m_3m_4} \end{align} and \begin{align} \label{Vint_off} V_{\textrm{FLL},m_1m_2}^{\textrm{int},\sigma \overline{\sigma}} &= - \sum_{m_3,m_4}\langle m_1,m_3|V_{ee}|m_4,m_2 \rangle n^{\overline{\sigma} \sigma}_{m_3m_4}. \end{align} Here, $\overline{\sigma}$ denotes the opposite spin to $\sigma$. Within CDFT, the double-counting potential is \cite{Anisimov_93,Solovyev_94}: \begin{align} V_{\textrm{cFLL},m_1m_2}^{\textrm{dc},\sigma \sigma} &= \Big\{U\bigg(N-\frac{1}{2}\bigg) - J\bigg(\frac{N}{2}-\frac{1}{2}\bigg)\Big\}\delta_{m_1 m_2} \end{align} and \begin{align} \label{VcFLL_off} V_{\textrm{cFLL},m_1m_2}^{\textrm{dc},\sigma \overline{\sigma}} = 0. \end{align} Note that the off-diagonal components vanish and the potential is spin independent. This is in sharp contrast to the case of SDFT.
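In the collinear case, Eq.~(\ref{nAMF}) and Eq.~(\ref{AMF}) amount to subtracting a constant from the diagonal of each spin block of $\mathbf{n}$. A minimal sketch in terms of the spin-resolved traces only (an illustration of the formulas, not production code):

```python
def amf_shifts(n_up, n_dn, spin_polarized=True, l=2):
    """Constants subtracted from the diagonal of the up/down blocks of the
    density matrix; n_up, n_dn are the spin-resolved traces N^up, N^dn."""
    dim = 2 * (2 * l + 1)      # number of spin-orbitals in the shell
    N = n_up + n_dn            # total occupation
    Mz = n_up - n_dn           # collinear moment
    if spin_polarized:         # sAMF, Eq. (AMF): average occupation per spin
        return (N + Mz) / dim, (N - Mz) / dim
    return N / dim, N / dim    # cAMF, Eq. (nAMF): spin-averaged occupation
```

For a half-filled, fully polarized $d$-shell ($N^{\uparrow}=5$, $N^{\downarrow}=0$) the sAMF shifts are $(1,0)$, so the fluctuation matrix vanishes, while cAMF subtracts $0.5$ from every spin-orbital occupation.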
In SDFT+$U$ \cite{Bultmark,Liechtenstein,Czyzyk}: \begin{align} &V_{\textrm{sFLL},m_1m_2}^{\textrm{dc},\sigma \sigma} = \Big\{U\bigg(N-\frac{1}{2}\bigg) - J\bigg(N^{\sigma \sigma}-\frac{1}{2}\bigg)\Big\}\delta_{m_1 m_2}, \\ \label{VsFLL_off} &V_{\textrm{sFLL},m_1m_2}^{\textrm{dc},\sigma \overline{\sigma}} = -JN^{\overline{\sigma} \sigma}\delta_{m_1 m_2}, \end{align} where $N^{\sigma \sigma'}=\textrm{Tr}_m[\mathbf{n}^{\sigma \sigma'}]$ (taking the trace over orbitals $m_i$). In AMF, the potential is given by taking the derivative of Eq.~(\ref{int2}) with respect to the density fluctuation $\mathbf{\widetilde n}$ \cite{Anisimov_91,Czyzyk,Bultmark}: \begin{align} V_{\textrm{AMF},m_1m_2}^{U,\sigma \sigma} &= \sum_{m_3,m_4,\sigma'} \{\langle m_1,m_3|V_{ee}|m_2,m_4 \rangle - \langle m_1,m_3|V_{ee}|m_4,m_2 \rangle \delta_{\sigma \sigma'} \}\widetilde{n}^{\sigma' \sigma'}_{m_3m_4}, \\ V_{\textrm{AMF},m_1m_2}^{U,\sigma \overline{\sigma}} &= - \sum_{m_3,m_4}\langle m_1,m_3|V_{ee}|m_4,m_2 \rangle \widetilde{n}^{\overline{\sigma} \sigma}_{m_3m_4}, \end{align} where $\mathbf{\widetilde n}$ refers to Eq.~(\ref{nAMF}) and Eq.~(\ref{AMF}) for CDFT+$U$ (or cAMF) and SDFT+$U$ (or sAMF), respectively. \section*{Analysis of model systems} \label{analysis} To get a systematic understanding of how each DFT+$U$ functional works, we analyze model systems in this section. We investigate the behaviors of the energy functionals and potentials as a function of key parameters, which provides useful insight into their differences. \subsection*{Energetics} \label{energetics} In general, the DFT+$U$ DM is not necessarily diagonal \cite{Liechtenstein}. Since it can always be diagonalized, however, we assume a diagonal DM below without loss of generality.
The total energy corrections by DFT+$U$ in the case of collinear spins now reduce to \cite{sum_rule,Czyzyk,Ylvisaker}: \begin{align} \label{cFLL} E^U_\textrm{cFLL} &= E^\textrm{int} - \frac{1}{2}UN(N-1) + \frac{1}{2}JN\bigg(\frac{N}{2}-1\bigg), \\ \label{sFLL} E^U_\textrm{sFLL} &= E^\textrm{int} - \frac{1}{2}UN(N-1) + \frac{1}{2}JN\bigg(\frac{N}{2}-1\bigg) + \frac{1}{4}JM^2, \\ \label{cAMF} E^U_\textrm{cAMF} &= E^\textrm{int} - \frac{1}{2}UN^2 + \frac{1}{4}\frac{U+2lJ}{2l+1}N^2, \\ \label{sAMF} E^U_\textrm{sAMF} &= E^\textrm{int} - \frac{1}{2}UN^2 + \frac{1}{4}\frac{U+2lJ}{2l+1}N^2 + \frac{1}{4}\frac{U+2lJ}{2l+1}M^2, \end{align} where \begin{align} E^\textrm{int} = \frac{1}{2}\sum_{\{m_i\},\sigma,\sigma'} n^{\sigma}_{m_1}\{\langle m_1,m_2|V_{ee}|m_1,m_2 \rangle - \langle m_1,m_2|V_{ee}|m_2,m_1 \rangle \delta_{\sigma\sigma'} \} n^{\sigma'}_{m_2}, \end{align} which follows from Eq.~(\ref{int}) with $n^{\sigma\sigma'}_{m_1m_2}=n^{\sigma}_{m_1}\delta_{m_1m_2}\delta_{\sigma \sigma'}$. The fourth terms in Eq.~(\ref{sFLL}) and Eq.~(\ref{sAMF}) compensate for the effective exchange interaction of SDFT (i.e., LSDA/SGGA, $U=0$). Representing the precise amount of this energy is a non-trivial task. Here we follow the conventional way of using the Stoner parameter $I$, with which the SDFT contribution to the energy gain via spin polarization is represented by $\Delta E^\textrm{SDFT} = -IM^2/4$ \cite{Andersen,Heine,Anisimov_91}. Note that in sFLL, this contribution is cancelled out when $J=I$ (see Eq.~(\ref{sFLL})). Now let us see how these functionals work under different conditions. Before taking real material examples in the next section, we consider some idealized model systems. With a fixed value of $U=5$ eV, the energy distributions of $d$-shell electronic configurations are presented in Fig.~\ref{energy} (see also Fig.~3 of Ref.~\cite{Ylvisaker}). We use both $J$ and $I$ as control parameters.
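The correction terms beyond the common $E^\textrm{int}$ in Eqs.~(\ref{cFLL})--(\ref{sAMF}) can be evaluated directly. The sketch below (with purely illustrative parameter values) makes explicit the $M^2$ terms that distinguish the SDFT-based variants:

```python
# E^U - E_int of Eqs. (cFLL)-(sAMF) for a d-shell (l = 2) with occupation N
# and collinear moment M. U, J in eV; example values are illustrative only.

def dc_cFLL(N, U, J):
    return -0.5 * U * N * (N - 1) + 0.5 * J * N * (N / 2 - 1)

def dc_sFLL(N, M, U, J):
    return dc_cFLL(N, U, J) + 0.25 * J * M**2

def dc_cAMF(N, U, J, l=2):
    return -0.5 * U * N**2 + 0.25 * (U + 2 * l * J) / (2 * l + 1) * N**2

def dc_sAMF(N, M, U, J, l=2):
    return dc_cAMF(N, U, J, l) + 0.25 * (U + 2 * l * J) / (2 * l + 1) * M**2

# For M = 0 the s- and c-variants coincide; the M^2 terms penalize moment
# formation unless compensated by the SDFT gain -I M^2 / 4.
```

These are exactly the double-counting-dependent parts discussed in the text; $E^\textrm{int}$, which requires the full Coulomb matrix elements of Eq.~(\ref{Coulomb}), is common to all four functionals and is omitted here.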
Here, all possible configurations of integer occupancy for a given electron number $N$ are considered (e.g., $_{10}C_4=210$ configurations for $N=4$). We present the energy from the DFT+$U$ and XC functional contributions, which is defined as $E^{U+\textrm{XC}} \equiv E^U_\textrm{sFLL(sAMF)} - IM^2/4$ for sFLL (sAMF) and $E^{U+\textrm{XC}} \equiv E^U_\textrm{cFLL(cAMF)}$ for cFLL (cAMF). Fig.~\ref{energy}(a) shows the result for $J=0$, which corresponds to the so-called `simplified rotationally invariant' formalism by Dudarev {\it et al.} \cite{Dudarev}. Note that the configurations with the same $N$ are degenerate within $E^\textrm{sFLL}$ and this degeneracy is lifted by the SDFT energy of $-IM^2/4$. Therefore, the largest possible $M$ configuration is always favored energetically. By comparing Fig.~\ref{energy}(a) with (d), one can clearly notice the role of $J$: lifting the degeneracy among configurations with the same $N$ and $M$ \cite{Ylvisaker}. If the energy contribution from SDFT is negligible (i.e., $I=0$ in $\Delta E^\textrm{SDFT}$; Fig.~\ref{energy}(b)), the smaller $M$ configurations are favored. Only when it becomes significant (Fig.~\ref{energy}(d)) are the larger $M$ states stabilized and Hund's first rule satisfied. While sFLL has been considered to be appropriate for high spin systems \cite{Ylvisaker}, this behavior is mainly attributed to the SDFT exchange rather than to the DFT+$U$ correction, $E^{U}_\textrm{sFLL}$, as clearly seen by comparing Fig.~\ref{energy}(b) and (d). In sFLL, the low spin or nonmagnetic solution is favored as long as $J$ is significantly larger than $I$; see Fig.~\ref{energy}(c). In cFLL, the spin state is controlled solely by the term $E^U_\textrm{cFLL}$. Note that Fig.~\ref{energy}(e) is quite similar to Fig.~\ref{energy}(d). If $I=J$ in sFLL, the fourth term in Eq.~(\ref{sFLL}) cancels the $\Delta E^\textrm{SDFT}$ contribution and sFLL becomes equivalent to cFLL. 
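The configuration counting quoted above is a plain binomial coefficient; a quick sanity check (ours, no DFT machinery involved):

```python
# Number of integer-occupancy configurations of a d shell (10 spin
# orbitals) holding N electrons: C(10, N), e.g. 210 for N = 4.
from math import comb

counts = {N: comb(10, N) for N in range(11)}
print(counts[4])  # 210, as quoted in the text
```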
If the exchange contribution implicit in SDFT is larger than $J$ (i.e., $I>J$), sFLL favors the larger $M$ state more than cFLL does (compare Fig.~\ref{energy}(d) and (e)). The estimation of the intrinsic exchange in SDFT is not trivial and is in general material dependent. Recent works reported that it is about $1.0$--$1.5$ eV for 3$d$ transition metal systems such as nickelates, SrMnO$_3$, SrVO$_3$, and bcc Fe, which can be regarded as large \cite{Park,Chen}. As shown in Fig.~\ref{energy}(b)--(d), the exchange contribution from SDFT plays a major role in determining the moment formation, and therefore sFLL can prefer unphysically large moment solutions. Further, SGGA in general has a stronger tendency toward magnetic solutions than LSDA \cite{Ryee}, which is another source of ambiguity. This is certainly a drawback of SDFT+$U$, especially for predicting material properties. In the case of AMF, the difference between CDFT and SDFT is more dramatic; see Fig.~\ref{energy}(f)--(j). As studied by Ylvisaker {\it et al.} \cite{Ylvisaker}, sAMF favors the low spin state and requires a quite large value of $I$ to recover Hund's first rule. As shown in Fig.~\ref{energy}(i), sAMF still favors the lowest moment solution even for $I=1$ eV, in sharp contrast to cAMF, which favors the moment formation as in cFLL (Fig.~\ref{energy}(e) and (j)). This is attributed to the fourth term of Eq.~(\ref{sAMF}), which penalizes the larger moment formation. For example, with $U=5$ eV, $\frac{1}{4}\frac{U+2lJ}{2l+1}M^2 = \frac{1}{4}(1+\frac{4}{5}J)M^2$. Thus $I$ should be greater than $1+4J/5$ (in eV) for SDFT to gain exchange energy via spin polarization. This feature can cause practical problems in using the AMF functionals. \subsection*{$J$-dependence of potentials} \label{splitting_analysis} To understand the effect of $J$ on the moment formation and spectral properties, here we further analyze the DFT+$U$ potentials. 
The $J$-only contribution to the DFT+$U$ potentials (separated from the $U$ contributions) for an orbital $m$ and spin $\sigma$ can be expressed as (assuming a diagonal DM): \begin{align} \label{VcFLL} \widetilde{V}^{U,\sigma}_{\textrm{cFLL},m} &= \widetilde{V}^{\textrm{int},\sigma}_{J,m} + J\bigg(\frac{N}{2}-\frac{1}{2}\bigg),\\ \label{VsFLL} \widetilde{V}^{U,\sigma}_{\textrm{sFLL},m} &= \widetilde{V}^{\textrm{int},\sigma}_{J,m} + J\bigg(N^\sigma - \frac{1}{2}\bigg), \\ \label{VcAMF} \widetilde{V}^{U,\sigma}_{\textrm{cAMF},m} &= \widetilde{V}^{\textrm{int},\sigma}_{J,m} + J\bigg(\frac{2l}{2l+1}\frac{N}{2}\bigg),\\ \label{VsAMF} \widetilde{V}^{U,\sigma}_{\textrm{sAMF},m} &= \widetilde{V}^{\textrm{int},\sigma}_{J,m} + J\bigg(\frac{2l}{2l+1}N^{\sigma}\bigg), \end{align} where $\widetilde{V}^{\textrm{int},\sigma}_{J,m}$ is obtained from Eq.~(\ref{pot}) by keeping the non-monopole terms of the Coulomb interaction matrix elements, \begin{align} \label{Vint2} \widetilde{V}_{J,m_1}^{\textrm{int},\sigma} = \sum_{m_2,\sigma'} \{\langle m_1,m_2|V_{J,ee}|m_1,m_2 \rangle - \langle m_1,m_2|V_{J,ee}|m_2,m_1 \rangle \delta_{\sigma \sigma'} \}n^{\sigma'}_{m_2}, \end{align} and $\langle m_1,m_2|V_{J,ee}|m_1,m_2 \rangle$ is defined as \begin{align} \langle m_1,m_2|V_{J,ee}|m_1,m_2 \rangle = \sum_{\{m_i'\}}\Big[S_{m_1 m_1'}S_{m_2 m_3'} \Big\{\sum_{k\ne 0}\alpha_k(m_1',m_3',m_2',m_4')F^k\Big\} S^{-1}_{m_2' m_1}S^{-1}_{m_4' m_2} \Big]. \end{align} In Eqs.~(\ref{VcFLL})--(\ref{VsAMF}), the second terms are the double counting contributions. One can clearly see that the sFLL and sAMF potentials have a spin-dependent double counting, which causes an additional up/down spin potential difference. Namely, the spin-splitting is affected by the double counting terms. For $\widetilde{V}^{U,\sigma}_{\textrm{cFLL},m}$ and $\widetilde{V}^{U,\sigma}_{\textrm{cAMF},m}$, on the other hand, the spin-splitting is controlled only by the interaction potential, $\widetilde{V}^{\textrm{int},\sigma}_{J,m}$. 
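The spin (anti)symmetry of the double-counting terms can be made explicit with a small sketch. The notation below is ours ($N^\uparrow$, $N^\downarrow$ are the spin-resolved occupations, $M=N^\uparrow-N^\downarrow$); it only tabulates the second terms of the equations above.

```python
# Sketch: double-counting (second-term) contribution to the spin
# splitting dV = V(down) - V(up) in Eqs. (VcFLL)-(VsAMF). For the
# charge-only flavors the up/down terms cancel exactly; the spin
# flavors contribute -J*M (sFLL) and -[2l/(2l+1)]*J*M (sAMF),
# with M = n_up - n_dn, so they counteract the moment formation.

def dv_dc(flavor, J, n_up, n_dn, l=2):
    if flavor in ("cFLL", "cAMF"):
        return 0.0  # double-counting term is spin-independent
    if flavor == "sFLL":
        return J * (n_dn - n_up)
    if flavor == "sAMF":
        return 2 * l / (2 * l + 1) * J * (n_dn - n_up)
    raise ValueError(f"unknown flavor: {flavor}")

# Half-filled d shell (n_up = 5, n_dn = 0) at J = 1 eV:
for flavor in ("cFLL", "sFLL", "cAMF", "sAMF"):
    print(flavor, dv_dc(flavor, 1.0, 5, 0))
```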
In Fig.~\ref{potential}, the calculated $J$-induced spin-splittings for the model systems are presented (see also Table~\ref{conf} for the list of configurations). The potential difference, $\Delta \widetilde{V}^U_{\alpha} \equiv \widetilde{V}^{U,\downarrow}_{\alpha} - \widetilde{V}^{U,\uparrow}_{\alpha}$ for a given orbital $\alpha$, can be estimated in units of $J$ through Eqs.~(\ref{VcFLL})--(\ref{VsAMF}) and Eq.~(\ref{Vint2}). Notably, cFLL and cAMF show the same behavior, in which $\Delta \widetilde{V}^U_{\alpha}$ is quite substantial and always positive, favoring the moment ($M$) formation. This feature is attributed to the spin potential in Eqs.~(\ref{VcFLL}) and (\ref{VcAMF}), where the spin-splitting is controlled only by $\widetilde{V}^{\textrm{int},\sigma}_{J,m}$ due to the exact cancellation of the up- and down-spin double counting potentials. Thus, it is not specific to a particular form of double counting scheme. Note that the effect of $J$ in CDFT+$U$ (cFLL and cAMF) is consistent with what is expected from the Hartree-Fock approximation. Very different features are found in sFLL, where the sign of $\Delta \widetilde{V}^U_{\alpha}$ depends on the configuration. In particular, for configurations with $M \ge 3$ (i.e., configurations 8--12), sFLL suppresses the spin-splittings, which is the case of SrMnO$_3$ reported by Chen {\it et al.} \cite{Chen} (see configuration 8). The trend of suppressing the spin-splitting is most pronounced at half-filling (configuration 12), e.g., MnO. Further, it is important to note that the negative spin-splitting is not a general feature of the sFLL double counting, contrary to what was speculated in Ref.~\cite{Chen}; see the positive $\Delta \widetilde{V}^U_{\alpha}$ configurations in Fig.~\ref{potential}. Our result clearly shows that both sFLL and sAMF can produce a positive spin-splitting potential. We note that SDFT+$U$ (sFLL and sAMF) behaves in a counter-intuitive way from the point of view of the Hartree-Fock picture. 
This is because the spin-dependent double counting terms do not in general cancel out the exchange interaction from SDFT. To recover the Hartree-Fock behavior, it is desirable to use CDFT+$U$. \section*{Application to real materials} \label{real} \subsection*{Calculation detail} \label{detail} All calculations were performed using our new implementation of DFT+$U$ in the OpenMX software package \cite{openmx}, which is based on the nonorthogonal LCPAO (linear combination of localized pseudoatomic orbitals) formalism \cite{LCPAO1,LCPAO2,LCPAO3}. We adopted Troullier-Martins type norm-conserving pseudopotentials \cite{TM} with partial core correction. We used 9 $\times$ 9 $\times$ 9, 12 $\times$ 12 $\times$ 12 (8 $\times$ 8 $\times$ 6), and 14 $\times$ 14 $\times$ 7 $\mathbf{k}$-points in the first Brillouin zone for rocksalt MnO and NiO, cubic (orthorhombic {\it Pbnm}) SrRuO$_3$, and BaFe$_2$As$_2$, respectively, and an energy cutoff of 500 Ry for numerical integrations on the real space grid. The localized orbitals were generated with radial cutoffs of 6.0 (Mn, Ni, and Fe) and 7.0 (Ru) a.u. \cite{LCPAO1,LCPAO2}. Experimental lattice parameters were used for all materials. For the XC functional, L(S)DA \cite{CA} parameterized by Perdew and Zunger \cite{CA-PZ} was used. Unless otherwise specified, we adopted the `dual' projector \cite{MJH} for the on-site DM. For more discussion of local projectors in the LCPAO scheme, see Ref.~\cite{MJH}. \subsection*{MnO and NiO} \label{mno_and_nio} Now we consider real materials. The first examples are MnO and NiO, corresponding to configurations 12 and 7 in Fig.~\ref{potential}, respectively (see also Table~\ref{conf}). Although these two prototypical correlated insulators have been extensively studied by using DFT+$U$, the systematic $J$-dependence of their electronic and magnetic properties has rarely been addressed. 
In Fig.~\ref{split}, the calculated spin-splittings and magnetic moments from the four different DFT+$U$ formalisms (namely, cFLL, sFLL, cAMF, and sAMF) are compared as a function of $J$. First of all, we note that the calculated $\Delta \widetilde{V}^U_{\alpha}$ is consistent with our analyses presented in Fig.~\ref{potential}. In MnO, the splitting increases rapidly in cFLL and cAMF as $J$ increases, which is consistent with the positive value of $\Delta \widetilde{V}^U_\alpha$ in Fig.~\ref{potential}. On the other hand, it is gradually reduced in sFLL as a function of $J$, consistent with the small and negative $\Delta \widetilde{V}^U_\alpha$ in Fig.~\ref{potential}. The results for NiO also compare very well with configuration 7 in Fig.~\ref{potential}. It is noted that sAMF predicts an entirely wrong magnetic ground state, $M \simeq 1$ $\mu_B$/Mn (see the green lines in Fig.~\ref{split}(a) and (b)). This low spin configuration is no longer represented by configuration 12 in Fig.~\ref{potential}. This is a striking example showing that sAMF can unphysically favor the low spin state due to the overestimated $I$. In such cases, the use of sAMF is highly undesirable. The high spin ground state of MnO is well reproduced by sFLL, cFLL, and cAMF in a reasonable range of $J$ (Fig.~\ref{split}(a) and (b)). In sFLL, this ground state configuration is obtained even at $J=0$ eV because the intrinsic exchange within SDFT ($U=0$) is large enough to stabilize the high spin state. The calculated density of states (DOS) in Fig.~\ref{mno_nio}(a) and (b) clearly shows the different $J$ dependence of the cFLL and sFLL functionals. While the up/down spin splitting is mainly controlled by $J$ in cFLL, it is already quite significant at small $J$ in the case of sFLL. 
To further elucidate the difference between CDFT+$U$ and SDFT+$U$, Fig.~\ref{mno_nio}(c) shows the total energy difference between the antiferro- and ferro-magnetic phases ($\Delta E = E_\textrm{AF}-E_\textrm{FM}$) calculated by cFLL and sFLL. The $J$ dependence of $\Delta E$ exhibits opposite trends: as $J$ increases, cFLL favors the AF order less while sFLL favors it more. From the superexchange magnetic coupling of $J_\textrm{ex} \sim -t^2/(U+4J)$ ($t$: Mn-site effective hopping integral), the behavior predicted by cFLL is more reasonable than that of sFLL. In NiO (Fig.~\ref{split}(c) and (d)), $M$ is insensitive to $J$ ($M \simeq 1.6$ $\mu_B$/Ni), while a slight increase is observed in cFLL and cAMF following the trend of the $d_{x^2-y^2}$ spin-splitting (see also Fig.~\ref{mno_nio}(d) and (e)). Here we note that in this $d^8$ case the distinction between low and high spin configurations is irrelevant for the ground state properties. The calculated change in $\Delta E$ is also quite small in sFLL (Fig.~\ref{mno_nio}(f)). In cFLL, $\Delta E = -0.320$ and $-0.224$ eV/f.u. at $J=0$ and $1$ eV, respectively, consistent with the superexchange estimate. \subsection*{SrRuO$_3$} SrRuO$_3$ is a ferromagnetic metal with a transition temperature of $T_c \sim 160$ K \cite{Koster}. DFT+$U$ has often been used to study SrRuO$_3$ \cite{Jeng,Mahadevan,Granas,Verissimo} in spite of its metallic nature \cite{Georges_Hund}. Therefore it is informative to investigate the DFT+$U$ functional dependence in this material. Configuration 6' in Fig.~\ref{potential} and Table~\ref{conf} corresponds to this case. Fig.~\ref{split}(e) and (f) show the calculated spin-splitting and magnetic moment, respectively. They are consistent with the results of Fig.~\ref{potential}; namely, the slightly decreasing (increasing) trend of the splitting and moment in sFLL (sAMF) and the large increase in cFLL and cAMF as a function of $J$. 
It is noted that sFLL gives the fully polarized spin moment of $M \simeq 2$ $\mu_B$/f.u. for both the cubic and distorted orthorhombic (not shown) structures. This half-metallic phase has been reported before using the sFLL version of SDFT+$U$ \cite{Jeng,Mahadevan,Granas}; however, it is not well supported by experiments. The result of sAMF shows a smaller spin splitting and moment than those of sFLL, as also reported in Ref.~\cite{Granas}. This behavior of sAMF and sFLL is consistent with what is observed in MnO and NiO discussed above; namely, it is attributed to the spin-dependent double counting, which depends on $U$ as well as $J$ in sAMF (Eq.~(\ref{sAMF})). Due to its metallic nature, the magnetism of SrRuO$_3$ can be more sensitive to the choice of double counting. CDFT+$U$ (i.e., cFLL and cAMF) shows notably different behavior. The calculated magnetic moment and splitting increase gradually as a function of $J$ (Fig.~\ref{split}(e) and (f)), and the half-metallic phase is observed only for large $J$ ($J \gtrsim 0.9$ eV for the cubic and $0.8$ eV for the orthorhombic structure). In a reasonable range of $J \simeq 0.4$--$0.6$ eV \cite{comment,Si,Dang}, the calculated moment is $M \simeq 1.4$ and $1.6$ $\mu_B$/f.u. for the cubic and orthorhombic structures, respectively, in good agreement with experiments \cite{Koster}. As mentioned in the previous section, the exchange contribution of SGGA is expected to be greater than that of LSDA \cite{Ryee}. This tendency is clearly shown in Fig.~\ref{sro_dos}(b): in SGGA+$U$, the moment is further enhanced ($M = 1.96$ $\mu_B$/f.u.) compared to LSDA+$U$ ($M = 1.67$ $\mu_B$/f.u.). On the other hand, in the case of CDFT+$U$ (Fig.~\ref{sro_dos}(a)), GGA+$U$ gives basically the same result as LDA+$U$ ($M = 1.41$ $\mu_B$/f.u.). \subsection*{BaFe$_2$As$_2$} \label{Ba122} The superconducting Fe pnictides have been a subject of intensive research activity. 
From the viewpoint of first-principles calculations, the unusually large magnetic moment from SDFT compared to experiments is a long-standing issue \cite{Mazin,Yin,pnictides,Johannes}. Interestingly, to reproduce experimental moments, {\it negative} $U$ values within SDFT+$U$ \cite{Nakamura,Yi} have been adopted. As pointed out in Ref.~\cite{Yi}, however, this is hard to justify physically. Here we note that the intrinsic exchange contribution of $\sim IM$ in SDFT can be too large, as discussed above, and SDFT may not be the right starting point for taking the correlation effects into account. We found that CDFT+$U$ can provide a much more sensible picture of the magnetism in this material. Table~\ref{ba122} shows the calculated magnetic moment for BaFe$_2$As$_2$ with the cRPA (constrained random phase approximation) value of $U=2.3$ eV \cite{Biermann_IBS}. The result for $M^\textrm{cFLL}$ is in fairly good agreement with experiment ($M \simeq 0.9$ $\mu_B$/Fe \cite{Huang}) for $J = 0.3$--$0.5$ eV, whereas $M^\textrm{sFLL}$ always overestimates the moment. Note that a reasonable size of $M$ is reproduced with realistic values of $U$ and $J$ only within CDFT+$U$. As shown in Table~\ref{ba122}, the moment is also sensitive to the way of defining the local DM projector, since the `full' projector tends to give a smaller on-site electron occupation compared to the `dual' one \cite{MJH}. The best comparison with experiment is achieved with $J=0.3$ eV for the `dual' and $J \simeq 0.6$ eV for the `full' projector. Also noticeable is the different $J$ dependence of the moment for the two functionals; $M^\textrm{cFLL}$ ($M^\textrm{sFLL}$) increases (decreases) as $J$ increases. This feature is again consistent with the behavior discussed in the previous section. The consistent result of cFLL with experiment is impressive even though dynamic correlations beyond DFT+$U$ certainly play a role in this system \cite{Georges_Hund}. 
\section*{Summary and Conclusion} We performed a comparative analysis of DFT+$U$ functionals employing two widely-used double counting forms and examined their relation to standard XC functionals. The detailed investigation of each formulation, as well as the real material examples, provided a clear understanding of the different behaviors of the DFT+$U$ functionals. The calculated energetics and spin potentials for representative model systems clearly show the role of the double counting and the XC functional in determining the ground state magnetic properties. The competition between the effect of $J$ and the spin density XC energy is the key to understanding the SDFT+$U$ results. Applications to real materials, including MnO, NiO, SrRuO$_3$, and BaFe$_2$As$_2$, further clarify the different tendencies of the formalisms, supporting the analyses of the model systems. As a rule of thumb, CDFT+$U$ is suggested as the desirable choice for most purposes. \section*{Acknowledgements} This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2018R1A2B2005204). The computing resource was partly supported by National Institute of Supercomputing and Networking / Korea Institute of Science and Technology Information with supercomputing resources including technical support (KSC-2015-C2-011). \section*{Author contributions} S.R. performed the calculations and analysis under the supervision of M.J.H. Both authors wrote the manuscript. \section*{Competing interests} The authors declare no competing financial interests.
\section{Introduction} In statistics, data sets that reside in high dimensional spaces are quite common. A widely used set of techniques to simplify and analyze such sets is \emph{principal component analysis} (PCA). It was introduced by Pearson in 1901 and independently by Hotelling in 1933. A comprehensive introduction can be found in Jolliffe (2002). The main aim of PCA is to provide a smaller subspace such that the maximum amount of information is retained when the original data points are projected onto it. This smaller subspace is expressed through components. In many contexts, one-dimensional subspaces are called lines, so we will follow this terminology. The line that carries the most variation present in the data set is called the \emph{first principal component} (PC1). The \emph{second principal component} (PC2) is the line such that, when combined with PC1, the most variation that can be retained in a two-dimensional subspace is kept. One may repeat this procedure to find as many principal components as necessary to properly summarize the data set in a manageable sized subspace formed by the principal components. Another way to characterize the principal components is to consider the distances of the data points to a given subspace. The line which minimizes the sum of squared distances of the data points to it can be considered as PC1. Similarly, PC2 is the line such that, when combined with PC1, the sum of squared distances of the data points to this combination is minimum. An important topic within PCA is called \emph{dimension reduction} (see Mardia et al (1973) for dimension reduction and Jolliffe (2002), pp. 144, for the {\it backward elimination method}). The aim of the dimension reduction method is to find the components such that, when they are eliminated, the remaining subspace retains the maximum amount of variation; or, alternatively, such that the remaining subspace has the minimum sum of squared distances to the data points. 
These are the components with the least influence. We would like to note that, in the general sense, any PCA method can be regarded as a dimension reduction process. However, Mardia et al (1973) reserve the term dimension reduction specifically for this method, which some other resources also refer to as backward elimination, or backward PCA. In this paper we will follow the convention of Mardia et al (1973), together with the ``backward PCA'' terminology. The original approach will be called \emph{forward PCA}. In general, the choice of which technique to use depends on the needs of the end user: if only a few principal components with the most variation in them are needed, then the forward approach is more suitable; if the aim is to eliminate only a few least useful components, then the backward approach would be the appropriate choice. Historically, the most common space used in statistics is the Euclidean space ($\mathbb{R}^n$), and the PCA ideas were first developed in this context. In $\mathbb{R}^n$, the two definitions of PCs (maximum variation and minimum distance) are equivalent, and the components are all orthogonal to each other. In Euclidean space, applying forward or backward PCA $n$ times for a data set in $\mathbb{R}^n$ would provide an orthogonal basis for the whole space. Moreover, in this context, the set of components obtained with the backward approach is the same as the one obtained by the classical forward approach, only the order of the components is reversed. This is a direct result of the orthogonality properties of Euclidean space. This phenomenon can be referred to as \emph{path independence}, and it is very rare in non-Euclidean spaces. In fact, this paper may present the first known example of path independence in non-Euclidean spaces. With the advancement of technology, more and more data sets that do not fit into the Euclidean framework have become available to researchers. 
A major source of these has been the biological sciences, which collect detailed images of their objects of interest using advanced imaging technologies. The need to statistically analyze such non-traditional data sets gave rise to many innovations in statistics. The type of non-traditional setting we will focus on in this paper is sets of trees as data. Such sets arise in many contexts, such as blood vessel trees (Aylward and Bullitt (2002)), lung airway trees (Tschirren et al. (2002)), and phylogenetic trees (Billera et al. (2001)). A starting point in PCA for trees is Wang and Marron (2007), who attacked the problem of analyzing the brain artery structures obtained through a set of Magnetic Resonance Angiography (MRA) images. They modeled the brain artery system of each subject as a binary tree and developed an analog of forward PCA in binary tree space. They provided appropriate definitions of concepts such as distance, projection and line in binary tree space, and gave formulations of the first, second, etc. principal components for binary tree data sets based on these definitions. This work was the first study to adapt classical PCA ideas from Euclidean space to the new binary tree space. The PCA formulations of Wang and Marron (2007) gave rise to interesting combinatorial optimization problems. Ayd{\i}n et al. (2009) provided an algorithm to find the optimal principal components in binary tree space in linear time. This development enabled a numerical analysis of a full-size data set of brain arteries, revealing a correlation between their structure and age. In the context of PCA in non-Euclidean spaces, Jung et al. (2010) gave a backward PCA interpretation in image analysis. They focus on \emph{mildly non-Euclidean}, or manifold, data, and propose the use of Principal Nested Spheres as a backward step-wise approach. Marron et al. 
(2010) provided a concise overview of backward and forward PCA ideas and their applications in various non-classical contexts. They also mention the possibility of backwards PCA for trees: ``... The notion of backwards PCA can also generate new approaches to tree line PCA. In particular, following the backwards PCA principal in full suggests first optimizing over a number of lines together, and then iteratively reducing the number of lines." This quote essentially summarizes one of our goals in this paper. In this work, our first goal is to extend the definitions and results of Wang and Marron (2007) and Ayd{\i}n et al. (2009) on forward PCA from binary tree space to the more general rooted labeled tree space. We will provide generalized versions of some basic definitions such as distance, projection, PC, etc., and proceed to show that the optimal algorithms provided for the limited binary tree space can be extended to the general rooted labeled tree space. A rooted labeled tree is a tree such that there is a single node designated as the root, and each node is labeled in such a way that a \emph{correspondence} structure can be established between data trees. For example, in the binary tree context, this means that the left and right child nodes of any node are distinct from each other. In general, the labeling of the nodes greatly affects the statistical results obtained from any data set. For the rest of the paper, we will refer to the rooted labeled tree space as \emph{tree space}. Next, we attack the problem of finding an analog of dimension reduction. We first provide a definition for the principal components with the least influence (we call these \emph{backward principal components}) in tree space, and define the optimization problem to be solved to reach them. We then provide a linear-time algorithm to solve this problem to optimality. 
Furthermore, we prove that the set of backward principal components in tree space is the same as the forward set, with the order reversed, just like their counterparts in the classical Euclidean space. This equivalence is significant since the same phenomenon in Euclidean space is a result of orthogonality, and the concept of orthogonality does not carry over to tree space. This result enables the analyst to switch between the two approaches as necessary while the results remain comparable, i.e., the components and their influence do not depend on which approach is used to find them. Therefore the path independence property is valid in tree space PCA as well. Our numerical results come from two main data sets. The first is an updated version of the brain artery data set previously used by Ayd{\i}n et al. (2009). Using our backward PCA tool, we investigate the effect of aging on brain artery structure in male and female subjects. We define two different kinds of age effect on the artery structure: overall branchyness and location-specific effects. We report that while both of these effects are strongly observed in males, they could not be observed in females. Secondly, we present a statistical analysis of the organization structure of a large US company. We present evidence on the structural differences across departments, focusing on finance, marketing, sales and research. The organization of the paper is as follows: In Section \ref{preliminaries}, we provide the definitions of concepts such as distance, projection, etc. in general tree space, together with a description of the forward approach and the algorithm to solve it. These are generalizations of the concepts introduced in Wang and Marron (2007) and Ayd{\i}n et al (2009). In Section \ref{backward} we describe the problem of finding the backward principal components in tree space and give an algorithm to find the optimal solution. 
In Section $4$ we prove the equivalence of the forward and backward approaches in tree space. Section \ref{numerical} contains our numerical analysis results. \section{Forward PCA in Tree Space}\label{preliminaries} In this section, we provide definitions of some key concepts, such as distance, projection, etc., in tree space, together with illustrative examples. The binary tree space versions of these definitions were previously given in Wang and Marron (2007) and Ayd{\i}n et al. (2009). We also provide the tree space versions of their PCA results, and prove their optimality in the more general tree space. In this paper the term \emph{tree} is reserved for rooted tree graphs in which the nodes are distinguished from each other through labels. The labeling method can differ depending on the properties of any tree data set. For labeling binary trees, Wang and Marron (2007) use a level-order indexing method. In this scheme the root node has index 1. For the remaining nodes, if a node has index $i$, then the index of its left child is $2i$ and that of its right child is $2i+1$ (see Figure $1$). Labeling general trees may get significantly more complicated. \begin{center} \begin{figure}[h]\label{Figure1} \[ \begin{array}{cc} \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, circle, inner sep=1pt] \draw (0,0) node (r) {\tiny 1}; \draw (-.5,-.5) node (v1) {\tiny 2}; \draw (-.7,-1.1) node (v2) {\tiny 4}; \draw (-.3,-1.1) node (v3) {\tiny 5}; \draw (.5,-.5) node (v4) {\tiny 3}; \draw (v4) -- (r) -- (v1) -- (v2) (v3) -- (v1); \end{tikzpicture} & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, circle, inner sep=1pt] \draw (0,0) node (r) {\tiny 1}; \draw (-.5,-.5) node (v1) {\tiny 2}; \draw (.5,-.5) node (v4) {\tiny 3}; \draw (.7,-1.1) node (v5) {\tiny 7}; \draw (.3,-1.1) node (v6) {\tiny 6}; \draw (v5) -- (v4) -- (r) -- (v1) (v4) -- (v6); \end{tikzpicture} \end{array} \] \caption{Two trees whose nodes are labeled using the level-order indexing method. 
The children of any node are distinct from each other. The nodes 1, 2 and 3 in the left data tree correspond to the nodes 1, 2 and 3 in the right data tree.} \end{figure} \end{center} A data set, $\mathcal{T}$, is an indexed finite set of $n$ trees. The distance between two trees is measured by the size of the symmetric difference of their node sets. Given two trees, $t_1$ and $t_2$, the {\bf distance} between $t_1$ and $t_2$, denoted by $d(t_1,t_2)$, is \[ |t_1\setminus t_2|+|t_2\setminus t_1|, \] where $|\cdot|$ is the number of nodes and $\setminus$ is the node set difference. In Figure $1$, the nodes 1, 2 and 3 are common to both trees, so they do not contribute to the distance between them. The nodes 4, 5, 6 and 7 exist in one data tree but not in the other; therefore, the distance between the left and right trees in the figure is $|\{4,5,6,7\}|=4$. The {\bf support tree} and the {\bf intersection tree} of a data set $\mathcal{T}=\{t_1, \dots, t_n\}$ are defined as: \[ Supp(\mathcal{T})= \cup_{i=1}^{n}t_i \text{ and } Int(\mathcal{T})= \cap_{i=1}^{n}t_i, \] respectively. As before, the line concept is a close counterpart to lines in Euclidean space. In the most general sense, a line refers to a set of points that are next to each other, lying in a given direction, which makes the line ``one-dimensional". Due to the discrete nature of tree space, points (trees) that are next to each other are defined as points at distance $1$, the smallest possible distance between two non-identical trees. To mimic the one-dimensional direction property, we require that every next point on the line in tree space is obtained by adding a child of the most recently added node. The resulting construct is a set of trees that start from a starting tree and expand following a path away from the root, which is akin to the sense of direction in Euclidean space. 
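As a small computational sketch (ours, not from the original papers), the distance above can be evaluated directly on sets of node labels:

```python
# Sketch: tree-space distance d(t1, t2) = |t1 \ t2| + |t2 \ t1|, with
# each tree encoded as the set of its node labels (level-order indices).
def tree_distance(t1, t2):
    return len(t1 - t2) + len(t2 - t1)

# The two trees of Figure 1: node sets {1,2,3,4,5} and {1,2,3,6,7};
# nodes 4, 5, 6, 7 each appear in only one tree.
left, right = {1, 2, 3, 4, 5}, {1, 2, 3, 6, 7}
print(tree_distance(left, right))  # 4
```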
A formal definition of a line in tree space is given as follows: \begin{Definition} Given a data set $\mathcal{T}$, a {\bf tree-line}, ${L=\{ l_0, \dots, l_k\}}$, is a sequence of trees where $l_0$ is called the starting tree, and $l_{i}$ is defined from $l_{i-1}$ by the addition of a single node $v_i\in Supp(\mathcal{T})$. In addition, each $v_{i}$ is a child of $v_{i-1}$. \end{Definition} See Example \ref{exa:exa1} for an example tree-line. The next concept to construct is the projection in this space. In general, the projection of a point onto an object can be defined as the closest point on the object to the projected point. This can be formalized in tree space as: \begin{Definition} The {\bf projection} of a tree $t$ onto the tree-line $L$ is \[ P_L(t)= \arg \min_{\tiny l\in L} \{d(t,l) \} \] \end{Definition} The projection of a data tree onto a tree-line can be regarded as the point in the tree-line most similar to the data tree. Example \ref{exa:exa1} contains a small data set and a tree-line, and illustrates how the projection of each data point onto the given tree-line can be found. \begin{Example}\label{exa:exa1} Let us consider the following data set consisting of $3$ data points. For simplicity, we use a set consisting of binary trees only. 
\[ \mathcal{T} =\left\{ \begin{array}{ccccc} t_1= \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node (r) {}; \draw (-.6,-.5) node (v1) {}; \draw (-.8,-1) node (v11) {}; \draw (-.4,-1) node (v12) {}; \draw (0,-.5) node (v2) {}; \draw (.2,-1) node (v22) {}; \draw (v1) -- (r) -- (v2) (v11) -- (v1) -- (v12) (v2) --(v22); \end{tikzpicture} & , & t_2= \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node (r) {}; \draw (-.6,-.5) node (v1) {}; \draw (-.4,-1) node (v12) {}; \draw (0,-.5) node (v2) {}; \draw (-.2,-1) node (v21) {}; \draw (.6,-.5) node (v3) {}; \draw (v1) -- (r) -- (v2) (v1) -- (v12) (v21) -- (v2) (r) -- (v3); \end{tikzpicture} & , & t_3= \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node (r) {}; \draw (0,-.5) node (v2) {}; \draw (-.2,-1) node (v21) {}; \draw (.2,-1) node (v22) {}; \draw (.6,-.5) node (v3) {}; \draw (.4,-1) node (v31) {}; \draw (.8,-1) node (v32) {}; \draw (r) -- (v2) (v21) -- (v2) --(v22) (r) -- (v3) (v31) -- (v3) -- (v32); \end{tikzpicture} \end{array} \right\}, \] \[ Supp(\mathcal{T}) = \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node (r) {}; \draw (-.6,-.5) node (v1) {}; \draw (-.8,-1) node (v11) {}; \draw (-.4,-1) node (v12) {}; \draw (0,-.5) node (v2) {}; \draw (-.2,-1) node (v21) {}; \draw (.2,-1) node (v22) {}; \draw (.6,-.5) node (v3) {}; \draw (.4,-1) node (v31) {}; \draw (.8,-1) node (v32) {}; \draw (v1) -- (r) -- (v2) (v11) -- (v1) -- (v12) (v21) -- (v2) --(v22) (r) -- (v3) (v31) -- (v3) -- (v32); \end{tikzpicture} \] and a tree-line \[ L =\left\{ \begin{array}{ccccccc} l_0= \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node (r) {}; \draw (-.6,-.5) node (v1) {}; \draw (-.8,-1) node (v11) {}; \draw (v1) -- (r) (v11) -- (v1); \end{tikzpicture} & , & l_1= 
\begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node (r) {}; \draw (-.6,-.5) node (v1) {}; \draw (-.8,-1) node (v11) {}; \draw (0,-.5) node (v2) {}; \draw (v1) -- (r) -- (v2) (v11) -- (v1); \end{tikzpicture} & , & l_2= \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node (r) {}; \draw (-.6,-.5) node (v1) {}; \draw (-.8,-1) node (v11) {}; \draw (0,-.5) node (v2) {}; \draw (.2,-1) node (v22) {}; \draw (v1) -- (r) -- (v2) (v11) -- (v1) (v2) --(v22); \end{tikzpicture} \end{array} \right\}. \] The following table gives the distance between each tree of $\mathcal{T}$ and each tree of $L$: \begin{center} \begin{tabular}{|c|ccc|} \hline & $l_0$ & $l_1$ & $l_2$ \\ \hline $t_1$ & 3 & 2 & 1 \\ \hline $t_2$ & 5 & 4 & 5 \\ \hline $t_3$ & 8 & 7 & 6 \\ \hline \end{tabular} \end{center} So, we observe that $P_L(t_1)=l_2$, $P_L(t_2)=l_1$ and $P_L(t_3)=l_2$. \end{Example} Finally, we define the concept of ``path", which will be useful later on. \begin{Definition} Given a tree-line $L=\{l_0, \cdots, l_k \}$, the {\bf path} of $L$ is the unique path from the root to $v_k$, the last node added in $L$, and it is denoted by $p_L$. \end{Definition} Note that our path definition is different from the one given in Ayd{\i}n et al. (2009), which included only the nodes added to the starting tree instead of forming a set starting from the root node. The next lemma provides an easy-to-use formula for the projection of a data point. Its proof can be found in the Appendix. \begin{Lemma}\label{lemma1} Let $t$ be a binary tree and $L=\{l_0, \cdots, l_k \}$ be a tree-line. Then \[ P_L(t)=l_0\cup( t \cap p_L). \] \end{Lemma} It follows that the projection of a tree onto a tree-line is unique. Wang and Marron (2007) gave a definition of the first principal component tree-line in binary tree space.
It was defined as the tree-line that minimizes the sum of distances of the data points to their projections on the line. This can be viewed as the one-dimensional line that best fits the data. We provide their definition below, adapted to the general tree space. We also note that this is the ``forward PCA" approach, in which one seeks the subspace that carries the greatest amount of variation. We will develop the ``backward PCA" approach in the upcoming section. \begin{Definition} For a data set $\mathcal{T}$ and the set of all tree-lines $\mathcal{L}$ in $Supp(\mathcal{T})$ with the same starting point $l_0$, the {\bf first (forward) principal component tree-line}, PC1, is \[ L_1^f=\arg \min_{L\in \mathcal{L}} \sum_{t\in \mathcal{T}} d(t,P_L(t)). \] \end{Definition} As we will see in Example \ref{exa:forward}, the definition of the principal components allows multiple solutions. A tie-breaking rule depending on the nature of the data should be established to reach consistent results in the presence of ties. To obtain such a rule for the definition of the PCs, we assume that the set of all tree-lines is totally ordered. This total order induces an order on the set of paths. Thus, we denote by $p_L>p_{L'}$ that the path $p_L$ is preferred to $p_{L'}$. For an analogous notion of the additional components in tree space, we need to define the concept of the union of tree-lines, and projection onto a union. Given tree-lines $L_1=\{l_{1,0}, l_{1,1}, \dots, l_{1,m_1}\}$, \dots, $L_q=\{l_{q,0}, l_{q,1}, \dots, l_{q,m_q}\}$, their {\bf union} is the set of all possible unions of members of $L_1$ through $L_q$: \begin{eqnarray*} L_1\cup\cdots \cup L_q & = & \{l_{1,i_1}\cup\cdots\cup l_{q,i_q} \mid i_1\in \{0,\cdots, m_1\}, \cdots, i_q\in \{0, \cdots, m_q\} \}.
\end{eqnarray*} In light of this, the projection of a tree $t$ onto $L_1 \cup \cdots \cup L_q$ is: \[ P_{L_1 \cup \cdots \cup L_q}(t)=\arg \min_{\tiny l\in L_1 \cup \cdots \cup L_q} \{d(t,l) \}. \] Next, we provide the definition of the general $k$-th PC: \begin{Definition} For a data set $\mathcal{T}$ and the set of all tree-lines $\mathcal{L}$ in $Supp(\mathcal{T})$ with the same starting point $l_0$, the {\bf $k$-th (forward) principal component tree-line}, PCk, is defined recursively as \[ L_k^f=\arg \min_{L\in \mathcal{L}} \sum_{t\in \mathcal{T}} d(t,P_{L_1^f \cup \cdots \cup L_{k-1}^f\cup L}(t)). \] The path of the $k$-th principal component tree-line will be denoted by $p_k^f$. \end{Definition} The following lemma describes a key property that will be used to interpret the projection of a tree onto a subspace defined by a set of tree-lines. The reader may refer to the Appendix for the proof. \begin{Lemma}\label{lemma2} Let $L_1, L_2, \dots, L_q$ be tree-lines with a common starting point, and let $t$ be a tree. Then \[ P_{L_1 \cup \cdots \cup L_q}(t)=P_{L_1}(t) \cup \cdots \cup P_{L_q}(t). \] \end{Lemma} Ayd{\i}n et al. (2009) provided a linear-time algorithm to find the forward principal components in binary tree space. We give a generalization of that algorithm in tree space, and prove that the extended version also yields the optimal PCs. The algorithm uses the weight function $w_k(v)$, defined as follows: \begin{Definition} Let $\mathcal{T}$ be a data set and $\mathcal{L}$ be the set of all tree-lines with the same starting point $l_0$. Let $\delta$ be an indicator function, defined as $\delta(v,t)=1$ if $v\in t$, and $0$ otherwise. Given $L_1^f, \dots, L_{k-1}^f$, the first $k-1$ PC tree-lines, the $k$-th weight of a node $v\in Supp(\mathcal{T})$ is \[ w_k(v)= \begin{cases} 0, & \text{ if } v\in l_0\cup p_1^f \cup \cdots \cup p_{k-1}^f,\\ \sum_{t\in \mathcal{T}}\delta(v,t), & \text{otherwise}.
\end{cases} \] \end{Definition} The following algorithm computes the $k$-th PC tree-line: \begin{Algorithm}{Forward algorithm.} Let $\mathcal{T}$ be a data set and $\mathcal{L}$ be the set of all tree-lines with the same starting point $l_0$.\\ {\bf Input:} $L_1^f, \dots, L_{k-1}^f$, the first $k-1$ PC tree-lines.\\ {\bf Output:} A tree-line.\\ Return the tree-line whose path maximizes the sum of $w_k$ weights in the support tree. Break ties according to an appropriate tie-breaking rule. \end{Algorithm} To better explain how the algorithm works, we apply the forward algorithm to the toy data set given in Example \ref{exa:exa1}. \begin{Example}\label{exa:forward} In this example, we select as tie-breaking rule the tree-line with the leftmost path. We take the intersection tree as the starting point (illustrated in red below). The table below summarizes the iterations of the algorithm, where each row corresponds to one iteration. At each iteration, the name of the principal component obtained at that iteration is given in the left column. The support tree with updated weights ($w_i(\cdot)$) is given in the middle column.
The paths of the selected PC tree-lines according to these weights are given in the right column. \begin{center} \begin{tabular}{ccc} PC 1 & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (-.6,-.5) node[label=left:{\tiny $2$}] (v1) {}; \draw (-.8,-1) node[label=below:{\tiny $1$}] (v11) {}; \draw (-.4,-1) node[label=below:{\tiny $2$}] (v12) {}; \draw (0,-.5) node[label=left:{\tiny $0$}, color=red] (v2) {}; \draw (-.2,-1) node[label=below:{\tiny $2$}] (v21) {}; \draw (.2,-1) node[label=below:{\tiny $2$}] (v22) {}; \draw (.6,-.5) node[label=right:{\tiny $2$}] (v3) {}; \draw (.4,-1) node[label=below:{\tiny $1$}] (v31) {}; \draw (.8,-1) node[label=below:{\tiny $1$}] (v32) {}; \draw (v1) -- (r) (v11) -- (v1) -- (v12) (v21) -- (v2) -- (v22); \draw[thick, color=red] (r) -- (v2); \draw (r) -- (v3) (v31) -- (v3) -- (v32); \end{tikzpicture} & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (-.6,-.5) node[label=left:{\tiny $2$}] (v1) {}; \draw (-.4,-1) node[label=below:{\tiny $2$}] (v12) {}; \draw (v1) -- (r) (v12) -- (v1); \end{tikzpicture} \\ PC 2 & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (-.6,-.5) node[label=left:{\tiny $0$}] (v1) {}; \draw (-.8,-1) node[label=below:{\tiny $1$}] (v11) {}; \draw (-.4,-1) node[label=below:{\tiny $0$}] (v12) {}; \draw (0,-.5) node[label=left:{\tiny $0$}, color=red] (v2) {}; \draw (-.2,-1) node[label=below:{\tiny $2$}] (v21) {}; \draw (.2,-1) node[label=below:{\tiny $2$}] (v22) {}; \draw (.6,-.5) node[label=right:{\tiny $2$}] (v3) {}; \draw (.4,-1) node[label=below:{\tiny $1$}] (v31) {}; \draw (.8,-1) node[label=below:{\tiny $1$}] (v32) {}; \draw
(v1) -- (r) (v11) -- (v1) -- (v12) (v21) -- (v2) -- (v22); \draw[thick, color=red] (r) -- (v2); \draw (r) -- (v3) (v31) -- (v3) -- (v32); \end{tikzpicture} & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (.6,-.5) node[label=right:{\tiny $2$}] (v3) {}; \draw (.4,-1) node[label=below:{\tiny $1$}] (v31) {}; \draw (r) -- (v3) -- (v31); \end{tikzpicture} \\ PC 3 & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (-.6,-.5) node[label=left:{\tiny $0$}] (v1) {}; \draw (-.8,-1) node[label=below:{\tiny $1$}] (v11) {}; \draw (-.4,-1) node[label=below:{\tiny $0$}] (v12) {}; \draw (0,-.5) node[label=left:{\tiny $0$}, color=red] (v2) {}; \draw (-.2,-1) node[label=below:{\tiny $2$}] (v21) {}; \draw (.2,-1) node[label=below:{\tiny $2$}] (v22) {}; \draw (.6,-.5) node[label=right:{\tiny $0$}] (v3) {}; \draw (.4,-1) node[label=below:{\tiny $0$}] (v31) {}; \draw (.8,-1) node[label=below:{\tiny $1$}] (v32) {}; \draw (v1) -- (r) (v11) -- (v1) -- (v12) (v21) -- (v2) -- (v22); \draw[thick, color=red] (r) -- (v2); \draw (r) -- (v3) (v31) -- (v3) -- (v32); \end{tikzpicture} & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (0,-.5) node[label=left:{\tiny $0$}, color=red] (v2) {}; \draw (-.2,-1) node[label=below:{\tiny $2$}] (v21) {}; \draw (v21) -- (v2); \draw[thick, color=red] (r) -- (v2); \end{tikzpicture} \\ PC 4 & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (-.6,-.5) node[label=left:{\tiny $0$}] (v1) {}; \draw (-.8,-1) node[label=below:{\tiny $1$}] (v11) {}; \draw (-.4,-1) node[label=below:{\tiny $0$}] (v12) {}; \draw (0,-.5) node[label=left:{\tiny $0$}, 
color=red] (v2) {}; \draw (-.2,-1) node[label=below:{\tiny $0$}] (v21) {}; \draw (.2,-1) node[label=below:{\tiny $2$}] (v22) {}; \draw (.6,-.5) node[label=right:{\tiny $0$}] (v3) {}; \draw (.4,-1) node[label=below:{\tiny $0$}] (v31) {}; \draw (.8,-1) node[label=below:{\tiny $1$}] (v32) {}; \draw (v1) -- (r) (v11) -- (v1) -- (v12) (v21) -- (v2) -- (v22); \draw[thick, color=red] (r) -- (v2); \draw (r) -- (v3) (v31) -- (v3) -- (v32); \end{tikzpicture} & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (0,-.5) node[label=left:{\tiny $0$}, color=red] (v2) {}; \draw (.2,-1) node[label=below:{\tiny $2$}] (v22) {}; \draw (v2) -- (v22); \draw[thick, color=red] (r) -- (v2); \end{tikzpicture} \\ PC 5 & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (-.6,-.5) node[label=left:{\tiny $0$}] (v1) {}; \draw (-.8,-1) node[label=below:{\tiny $1$}] (v11) {}; \draw (-.4,-1) node[label=below:{\tiny $0$}] (v12) {}; \draw (0,-.5) node[label=left:{\tiny $0$}, color=red] (v2) {}; \draw (-.2,-1) node[label=below:{\tiny $0$}] (v21) {}; \draw (.2,-1) node[label=below:{\tiny $0$}] (v22) {}; \draw (.6,-.5) node[label=right:{\tiny $0$}] (v3) {}; \draw (.4,-1) node[label=below:{\tiny $0$}] (v31) {}; \draw (.8,-1) node[label=below:{\tiny $1$}] (v32) {}; \draw (v1) -- (r) (v11) -- (v1) -- (v12) (v21) -- (v2) -- (v22); \draw[thick, color=red] (r) -- (v2); \draw (r) -- (v3) (v31) -- (v3) -- (v32); \end{tikzpicture} & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (-.6,-.5) node[label=left:{\tiny $0$}] (v1) {}; \draw (-.8,-1) node[label=below:{\tiny $1$}] (v11) {}; \draw (v1) -- (r) (v11) -- (v1); \end{tikzpicture} \\ PC 6 & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, 
fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (-.6,-.5) node[label=left:{\tiny $0$}] (v1) {}; \draw (-.8,-1) node[label=below:{\tiny $0$}] (v11) {}; \draw (-.4,-1) node[label=below:{\tiny $0$}] (v12) {}; \draw (0,-.5) node[label=left:{\tiny $0$}, color=red] (v2) {}; \draw (-.2,-1) node[label=below:{\tiny $0$}] (v21) {}; \draw (.2,-1) node[label=below:{\tiny $0$}] (v22) {}; \draw (.6,-.5) node[label=right:{\tiny $0$}] (v3) {}; \draw (.4,-1) node[label=below:{\tiny $0$}] (v31) {}; \draw (.8,-1) node[label=below:{\tiny $1$}] (v32) {}; \draw (v1) -- (r) (v11) -- (v1) -- (v12) (v21) -- (v2) -- (v22); \draw[thick, color=red] (r) -- (v2); \draw (r) -- (v3) (v31) -- (v3) -- (v32); \end{tikzpicture} & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (.6,-.5) node[label=right:{\tiny $0$}] (v3) {}; \draw (.8,-1) node[label=below:{\tiny $1$}] (v32) {}; \draw (r) -- (v3) -- (v32); \end{tikzpicture} \end{tabular} \end{center} \end{Example} The next theorem states that the tree-line returned by the forward algorithm is precisely the $k$-th PC tree-line. The proof is in the Appendix. \begin{Theorem}\label{theorem1} Let $\mathcal{T}$ be a data set and $\mathcal{L}$ be the set of all tree-lines with the same starting point $l_0$. Let $L_1^f, \dots, L_{k-1}^f$ be the first $k-1$ PC tree-lines. Then, the forward algorithm returns the $k$-th PC tree-line, $L_{k}^f$. \end{Theorem} In theory, an arbitrary line would extend to infinity. In this paper, we limit our scope to line pieces that reside within the support tree of a given data set, since extending lines beyond the support tree would introduce unnecessary complications.
Within this restriction, the possible principal component tree-lines for a given data set are those whose paths are maximal (there is no other path in $Supp(\mathcal{T})$ containing $p_L$). We also consider only tree-lines that are not trivial (the tree-line consists of $l_0$ and at least one more point). In light of this, we let $\mathcal{L_P}$ denote the set of all maximal non-trivial tree-lines with starting point $l_0$ contained in $Supp(\mathcal{T})$, and we let $\mathcal{P}$ be the set of all paths in $Supp(\mathcal{T})$ from the root to the leaves that are not in $l_0$. It is easy to see that $\mathcal{P}$ is the set of paths of tree-lines in $\mathcal{L_P}$. Also note that $|\mathcal{L_P}|=|\mathcal{P}|=n$ and $\displaystyle Supp(\mathcal{T})=l_0 \cup\bigcup_{p_L\in \mathcal{P}} p_L$. \section{Dimension Reduction for Rooted Trees}\label{backward} In this section, we define \emph{backward principal component tree-lines}. This structure is the tree space equivalent of the backward principal component in the classical dimension reduction setting. Backward components represent the directions that carry the least information about the data set and thus can be taken out. Our definition describes them as directions such that, when eliminated, the remaining subspace retains the maximum amount of variation; equivalently, the remaining subspace has the minimum sum of distances to the data points. These are considered to be the components with the least influence. We also present an algorithm that finds these components, and we provide a theoretical result proving its optimality. In the backward approach, we must use the tie-breaking rule opposite to the one used in the forward approach. That is, $p_L>p_{L'}$ now means that the path $p_{L'}$ is preferred to $p_{L}$.
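Before turning to the backward construction, the forward machinery can be exercised end-to-end on the toy data of Example \ref{exa:exa1}. The Python sketch below checks the projection formula of Lemma \ref{lemma1} against the brute-force definition and recovers the PC1 path found in Example \ref{exa:forward}; the node labels r, a, a1, \dots, naming the figure positions from left to right, are our own assumption.

```python
# Data of Example 1; labels r, a, a1, a2, b, b1, b2, c, c1, c2 are
# hypothetical names for the node positions in the figures (left to right).
trees = [{"r", "a", "a1", "a2", "b", "b2"},        # t1
         {"r", "a", "a2", "b", "b1", "c"},         # t2
         {"r", "b", "b1", "b2", "c", "c1", "c2"}]  # t3

def dist(s, t):
    return len(s ^ t)  # symmetric-difference distance

# The tree-line L of Example 1: start {r, a, a1}, then add b, then b2.
line = [{"r", "a", "a1"},
        {"r", "a", "a1", "b"},
        {"r", "a", "a1", "b", "b2"}]
p_L = {"r", "b", "b2"}  # path from the root to the last added node, b2

for t in trees:
    brute = min(line, key=lambda l: dist(t, l))  # projection by definition
    assert brute == line[0] | (t & p_L)          # closed form of Lemma 1

# PC1 search of the forward algorithm: starting tree l0 = intersection tree.
l0 = {"r", "b"}
paths = [("r", "a", "a1"), ("r", "a", "a2"), ("r", "b", "b1"),
         ("r", "b", "b2"), ("r", "c", "c1"), ("r", "c", "c2")]

def w1(v):  # first weight: 0 on l0, else number of data trees containing v
    return 0 if v in l0 else sum(v in t for t in trees)

pc1 = max(paths, key=lambda p: sum(w1(v) for v in p))
assert pc1 == ("r", "a", "a2")  # the path through v1 and v12 in the figure
```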
\begin{Definition} For a data set $\mathcal{T}$ and the set of tree-lines $\mathcal{L_P}$ with the same starting point $l_0$, the {\bf $\bf n^{th}$ backward principal component tree-line}, \emph{BPCn}, is \[ L_n^{b}=\arg \min_{L\in \mathcal{L_P}} \sum_{t\in \mathcal{T}} d\Big(t,P_{\bigcup_{L'\in\mathcal{L_P}\setminus \{L\}}L'}(t)\Big). \] The {\bf $\bf (n-k)^{th}$ backward principal component tree-line} is defined recursively as \begin{eqnarray}\label{eqn:kBCP} L_{n-k}^{b}= & \arg \min_{L\in \mathcal{L_P}\setminus \{L_n^{b}, \cdots, L_{n-k+1}^{b}\} } \sum_{t\in \mathcal{T}} d\Big(t,P_{\bigcup_{L'\in\mathcal{L_P}\setminus \{L_n^{b}, \cdots, L_{n-k+1}^{b}, L\}}L'}(t)\Big). \end{eqnarray} \end{Definition} The path associated to the $(n-k)$-th backward principal component tree-line will be denoted by $p_{n-k}^b$. The following node weight definition will be key to the upcoming algorithm for finding backward components: \begin{Definition} Let $\mathcal{T}$ be a data set and $\mathcal{L}$ be the set of all tree-lines with the same starting point $l_0$. Let $L_n^{b}, \dots, L_{n-k+1}^{b}$ be the last $k$ BPC tree-lines and $\textbf{B}=\mathcal P\setminus \{p_{n}^b,\dots , p_{n-k+1}^{b}\}$. For $v\in Supp({\bf B})$, the $(n-k)$-th backward weight of the node $v$ is \[ w_{n-k}'(v)= \begin{cases} 0, & \text{if } v\in l_0 \text{ or } v \text{ belongs to at least two different paths of }{\bf B},\\ \sum_{t\in \mathcal{T}}\delta(v,t), & \text{otherwise.}\\ \end{cases} \] \end{Definition} The following algorithm computes the backward principal components.
\begin{Algorithm}{Backward Algorithm.} Let $\mathcal{T}$ be a data set and $\mathcal{L}$ be the set of all tree-lines on $Supp(\mathcal{T})$ with the same starting point $l_0$.\\ {\bf Input:} $L_n^{b}, \dots, L_{n-k+1}^{b}$, the last $k$ BPC tree-lines.\\ {\bf Output:} $L_{n-k}^b$, the $(n-k)^{th}$ BPC tree-line.\\ Let $\textbf{B}=\mathcal P\setminus \{p_{n}^b,\dots , p_{n-k+1}^{b}\}$.\\ Return the tree-line $L_{n-k}^b$ whose path minimizes the sum of $w_{n-k}'$ weights in the support tree $Supp({\bf B})$. If there is more than one candidate, select the tree-line according to an appropriate tie-breaking rule (the opposite of the rule used in the forward algorithm). \end{Algorithm} Like the forward algorithm explained in the previous section, the backward algorithm finds the optimal solution in linear time. Next, we provide an example illustrating the steps of the backward algorithm, applying it to the toy data set given in Example \ref{exa:exa1}. We use the same starting point as in Example \ref{exa:forward}, and the tie-breaking rule opposite to the forward one, which here means selecting the rightmost tree-line. \begin{Example} The table below summarizes the iterations of the algorithm, where each row corresponds to one iteration. At each iteration, the name of the backward principal component obtained at that iteration is given in the left column. The pruned support tree with updated weights ($w_i'(\cdot)$) is given in the middle column. The paths of the selected BPC tree-lines according to these weights are given in the right column.
\begin{center} \begin{tabular}{ccc} BPC 6 & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (-.6,-.5) node[label=left:{\tiny $0$}] (v1) {}; \draw (-.8,-1) node[label=below:{\tiny $1$}] (v11) {}; \draw (-.4,-1) node[label=below:{\tiny $2$}] (v12) {}; \draw (0,-.5) node[label=left:{\tiny $0$}, color=red] (v2) {}; \draw (-.2,-1) node[label=below:{\tiny $2$}] (v21) {}; \draw (.2,-1) node[label=below:{\tiny $2$}] (v22) {}; \draw (.6,-.5) node[label=right:{\tiny $0$}] (v3) {}; \draw (.4,-1) node[label=below:{\tiny $1$}] (v31) {}; \draw (.8,-1) node[label=below:{\tiny $1$}] (v32) {}; \draw (v1) -- (r) (v11) -- (v1) -- (v12) (v21) -- (v2) -- (v22); \draw[thick, color=red] (r) -- (v2); \draw (r) -- (v3) (v31) -- (v3) -- (v32); \end{tikzpicture} & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (.6,-.5) node[label=right:{\tiny $0$}] (v3) {}; \draw (.8,-1) node[label=below:{\tiny $1$}] (v32) {}; \draw (r) -- (v3) -- (v32); \end{tikzpicture} \\ BPC 5 & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (-.6,-.5) node[label=left:{\tiny $0$}] (v1) {}; \draw (-.8,-1) node[label=below:{\tiny $1$}] (v11) {}; \draw (-.4,-1) node[label=below:{\tiny $2$}] (v12) {}; \draw (0,-.5) node[label=left:{\tiny $0$}, color=red] (v2) {}; \draw (-.2,-1) node[label=below:{\tiny $2$}] (v21) {}; \draw (.2,-1) node[label=below:{\tiny $2$}] (v22) {}; \draw (.6,-.5) node[label=right:{\tiny $2$}] (v3) {}; \draw (.4,-1) node[label=below:{\tiny $1$}] (v31) {}; \draw (v1) -- (r) (v11) -- (v1) -- (v12) (v21) -- (v2) -- (v22); \draw[thick, color=red] (r) -- (v2); \draw (r) -- (v3) -- (v31); \end{tikzpicture} & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] 
\draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (-.6,-.5) node[label=left:{\tiny $0$}] (v1) {}; \draw (-.8,-1) node[label=below:{\tiny $1$}] (v11) {}; \draw (v11) -- (v1) -- (r); \end{tikzpicture} \\ BPC 4 & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (-.6,-.5) node[label=left:{\tiny $2$}] (v1) {}; \draw (-.4,-1) node[label=below:{\tiny $2$}] (v12) {}; \draw (0,-.5) node[label=left:{\tiny $0$}, color=red] (v2) {}; \draw (-.2,-1) node[label=below:{\tiny $2$}] (v21) {}; \draw (.2,-1) node[label=below:{\tiny $2$}] (v22) {}; \draw (.6,-.5) node[label=right:{\tiny $2$}] (v3) {}; \draw (.4,-1) node[label=below:{\tiny $1$}] (v31) {}; \draw (r) -- (v1) -- (v12) (v21) -- (v2) -- (v22); \draw[thick, color=red] (r) -- (v2); \draw (r) -- (v3) -- (v31); \end{tikzpicture} & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (0,-.5) node[label=left:{\tiny $0$}, color=red] (v2) {}; \draw (.2,-1) node[label=below:{\tiny $2$}] (v22) {}; \draw (v2) -- (v22); \draw[thick, color=red] (r) -- (v2); \end{tikzpicture} \\ BPC 3 & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (-.6,-.5) node[label=left:{\tiny $2$}] (v1) {}; \draw (-.4,-1) node[label=below:{\tiny $2$}] (v12) {}; \draw (0,-.5) node[label=left:{\tiny $0$}, color=red] (v2) {}; \draw (-.2,-1) node[label=below:{\tiny $2$}] (v21) {}; \draw (.6,-.5) node[label=right:{\tiny $2$}] (v3) {}; \draw (.4,-1) node[label=below:{\tiny $1$}] (v31) {}; \draw (r) -- (v1) -- (v12) (v21) -- (v2); \draw[thick, color=red] (r) -- (v2); \draw (r) -- (v3) -- (v31); \end{tikzpicture} & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, 
color=red] (r) {}; \draw (0,-.5) node[label=left:{\tiny $0$}, color=red] (v2) {}; \draw (-.2,-1) node[label=below:{\tiny $2$}] (v21) {}; \draw (v21) -- (v2); \draw[thick, color=red] (r) -- (v2); \end{tikzpicture} \\ BPC 2 & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (-.6,-.5) node[label=left:{\tiny $2$}] (v1) {}; \draw (-.4,-1) node[label=below:{\tiny $2$}] (v12) {}; \draw (.6,-.5) node[label=right:{\tiny $2$}] (v3) {}; \draw (.4,-1) node[label=below:{\tiny $1$}] (v31) {}; \draw (r) -- (v1) -- (v12); \draw (r) -- (v3) -- (v31); \end{tikzpicture} & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (.6,-.5) node[label=right:{\tiny $2$}] (v3) {}; \draw (.4,-1) node[label=below:{\tiny $1$}] (v31) {}; \draw (r) -- (v3) -- (v31); \end{tikzpicture} \\ BPC 1 & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (-.6,-.5) node[label=left:{\tiny $2$}] (v1) {}; \draw (-.4,-1) node[label=below:{\tiny $2$}] (v12) {}; \draw (r) -- (v1) -- (v12); \end{tikzpicture} & \begin{tikzpicture}[scale=1] \tikzstyle{every node}=[draw, fill, circle, inner sep=1pt] \draw (0,0) node[label=above:{\tiny $0$}, color=red] (r) {}; \draw (-.6,-.5) node[label=left:{\tiny $2$}] (v1) {}; \draw (-.4,-1) node[label=below:{\tiny $2$}] (v12) {}; \draw (r) -- (v1) -- (v12); \end{tikzpicture} \end{tabular} \end{center} \end{Example} The key theoretical result of the section, the optimality of the backward algorithm, is summarized as follows: \begin{Theorem}\label{backthm} Let $\mathcal{T}$ be a data set and $\mathcal{L_P}$ be the set of all tree-lines with the same starting point $l_0$ for this data set. Let $L_n^{b}, \dots, L_{n-k+1}^{b}$ be the last $k$ BPC tree-lines. 
Then, the backward algorithm returns the optimal $(n-k)^{th}$ BPC tree-line, $L_{n-k}^{b}$. \end{Theorem} The proof of this theorem is in the Appendix. \section{Equivalence of PCA and BPCA in Tree Space}\label{equivalence} A very important aspect of tree space is that the notion of orthogonality does not exist in it. In the Euclidean space version of backward PCA, the orthogonality property ensures that the components do not depend on the method used to find them; i.e., the most informative principal component is the same whether the forward or the backward approach is used. This powerful path-independence property brings various advantages to the analyst. In this section, we prove that the forward and backward approaches are equivalent in tree space as well when tree-lines are used. This is a surprising result given the lack of any notion of orthogonality. In practice, this result ensures that the components of the backward and forward approaches in tree space are comparable. We show this equivalence by proving that, for each $1\leq k \leq n$, the $k$-th PC tree-line and the $k$-th BPC tree-line are equal. An equivalent statement is that their paths are equal: $p_k^f=p_k^b$. Without loss of generality, we assume that a consistent tie-breaking method is established for both approaches, used whenever candidate tree-lines have the same sum of weights. All the proofs can be found in the Appendix. \begin{Proposition}\label{pro:1} Given an integer $1\leq k\leq n$, let $p_1^f,..., p_k^f$ be the paths of the first $k$ principal components yielded by the forward algorithm and $p_n^b,..., p_{k+1}^b$ be the paths of the last $n-k$ principal components yielded by the backward algorithm. Then there exist no $i$ and $j$ such that $1\leq i\leq k<j\leq n$ and $p_i^f=p_j^b$.
\end{Proposition} This proposition motivates the following theorem: \begin{Theorem}\label{thm:equivalence} For each $1\leq k \leq n$, the $k$-th PC tree-line obtained by the forward algorithm is equal to the $k$-th BPC tree-line obtained by the backward algorithm. \end{Theorem} This result guarantees the comparability of principal components obtained by either method, enabling the analyst to use them interchangeably depending on which type of analysis is appropriate at the time. \section{Numerical Analysis}\label{numerical} In this section we analyze two different data sets with tree structure. The first data set consists of the branching structures of brain arteries of $98$ healthy subjects. An earlier version of this data set was used in Ayd{\i}n et al. (2009) to illustrate the forward tree-line PCA ideas. That study showed that a significant correlation exists between the branching structure of brain arteries and the age of the subjects. Later, $30$ more subjects were added to that data set, and the set went through the data cleaning process described in Ayd{\i}n et al. (2011). In our study we use this updated data set. The second data set describes the organizational structure of a large company. The details of this data set are proprietary information, and identifying details are therefore withheld. We investigate the organizational structural differences between business units, and the differences between types of departments. As stated before, we focus on data trees whose nodes are distinctly labeled. When constructing a tree data set, the labeling of the nodes is crucial, since these labels help determine which nodes in one data tree correspond to which nodes in another, thus shaping the outcome of the whole analysis. The word \emph{correspondence} is used to refer to this choice. We will handle the correspondence issue separately for each data set we introduce.
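Before turning to the data, the forward/backward equivalence established above can also be checked computationally on the toy data of Example \ref{exa:exa1}. The brute-force Python sketch below (with hypothetical node labels r, a, a1, \dots{} standing in for the figure positions) scores a set of tree-line paths by the closed-form residual implied by Lemmas \ref{lemma1} and \ref{lemma2}, runs both greedy orderings with opposite tie-breaking, and confirms that they coincide:

```python
l0 = {"r", "b"}  # starting tree: the intersection tree of the data
trees = [{"r", "a", "a1", "a2", "b", "b2"},
         {"r", "a", "a2", "b", "b1", "c"},
         {"r", "b", "b1", "b2", "c", "c1", "c2"}]
paths = [("r", "a", "a1"), ("r", "a", "a2"), ("r", "b", "b1"),
         ("r", "b", "b2"), ("r", "c", "c1"), ("r", "c", "c2")]

def residual(path_set):
    # Sum over the data of d(t, projection onto the union of the tree-lines
    # along path_set); by Lemmas 1 and 2 that projection is l0 together with
    # the part of t lying on the union of the paths.
    u = set(l0).union(*map(set, path_set))
    return sum(len(t - u) + len(l0 - t) for t in trees)

# Forward: repeatedly add the path that minimizes the residual
# (ties broken toward the leftmost, i.e. lexicographically smallest, path).
remaining, forward = sorted(paths), []
while remaining:
    best = min(remaining, key=lambda p: (residual(forward + [p]), p))
    forward.append(best)
    remaining.remove(best)

# Backward: repeatedly delete the path whose removal minimizes the residual
# (ties broken toward the rightmost path); deletions run BPC n down to BPC 1.
active, eliminated = sorted(paths), []
while active:
    scores = {p: residual([q for q in active if q != p]) for p in active}
    best = max(p for p in active if scores[p] == min(scores.values()))
    eliminated.append(best)
    active.remove(best)

assert forward == eliminated[::-1]   # PC k coincides with BPC k for every k
assert forward[0] == ("r", "a", "a2")
```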
\subsection{Brain Artery Data Set}\label{artery} \subsubsection{Data Description} The properties of the data set were previously explained in Ayd{\i}n et al. (2009). For the sake of completeness, we will provide a brief summary. The data is extracted from Magnetic Resonance Angiography (MRA) images of $98$ healthy subjects of both sexes, ranging in age from 18 to 72. This data can be found at Handle (2008). Aylward and Bullitt (2002) applied a tube tracking algorithm to construct $3D$ images of brain arteries from MRA images. See also Bullitt et al. (2010) for further results on this set. The artery system of the brain consists of $4$ main systems, each feeding a different region of the brain. In Figure \ref{3D} they are indicated by different colors: gold for the back, cyan for the left, blue for the right and red for the front regions. The system feeding each region is represented as a binary tree, reduced from the $3D$ visuals seen in Figure \ref{3D} in order to focus on the branching structure only. Each node in a binary tree represents a vessel tube between two split points in the $3D$ representation. The two tubes formed by this split become the child nodes of the previous tube. The initial main artery that enters the brain, and feeds the region through its splits, constitutes the root node in the binary tree. The binary tree provided in Figure \ref{3D} (right panel) is an example binary tree extracted from a $3D$ image through this process. \begin{figure*} [ptb] \begin{center} \includegraphics[ natheight=1.4in,natwidth=2.1in,height=1.4in,width=2.1in ]% {VesselsNEW.jpg}% \includegraphics[ natheight=1.4in,natwidth=2.1in,height=1.4in,width=2.1in ]% {binarytreeback.jpg}% \caption{Left panel: Reconstructed set of trees of brain arteries. The colors indicate regions of the brain: Back (gold), Right (blue), Front (red), Left (cyan). Right panel: An example binary tree obtained from one of the regions.
Only branching information is retained.} \label{3D}% \end{center} \end{figure*} The correspondence issue for this data set is solved as follows. At each split, the child with the greater number of descendant nodes is determined to be the left child, and the other node becomes the right child. This scheme is called descendant correspondence. The study of brain artery structure is important in understanding how various factors affect this structure, and how they are related to certain diseases. The correlation between aging and branching structure was shown in previous studies (Ayd{\i}n et al. (2009), Bullitt et al. (2010)). The brain vessel structure is known to be affected by hypertension, atherosclerosis, retinal disease of prematurity, and by a variety of hereditary diseases. Furthermore, results of studying this structure may lead to establishing ways to help predict the risk of vessel thrombosis and stroke. Another very important implication regards malignant brain tumors. These tumors are known to change and distort the artery structure around them, even at stages where they are too small to be detected by popular imaging techniques. Statistical methods that might differentiate these changes from normal structure may enable earlier diagnosis. See Bullitt et al. (2003) and the references therein for detailed medical studies focusing on these subjects. \subsubsection{Analysis of Artery Data} The forward tree-line PCA ideas were previously applied to an earlier version of this data set. The first theoretical contribution of this paper, the extension of tree-line PCA to general trees, does not affect this particular data set since all trees in it are binary. Therefore we first focus on the dimension reduction approach we introduce. In Ayd{\i}n et al. (2009), only the first $10$ principal components were computed, and the age effect was presented through the first $4$ components.
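Throughout the numerical analysis, distances and projection sizes are node counts; with trees encoded as sets of labeled nodes, the metric of the earlier sections is the size of the symmetric difference of the node sets (this is the distance expanded in the Appendix proofs). A minimal sketch, with invented node labels:

```python
# Tree distance as symmetric-difference size, with trees represented as
# sets of labeled nodes (labels below are invented for illustration).

def tree_distance(t1, t2):
    """d(t1, t2) = |t1 \\ t2| + |t2 \\ t1|."""
    return len(t1 - t2) + len(t2 - t1)

t = {"root", "L", "R", "LL"}
s = {"root", "L", "LR"}
assert tree_distance(t, s) == 3               # t has {R, LL} extra, s has {LR}
assert tree_distance(t, s) == len(t ^ s)      # equals |symmetric difference|
assert tree_distance(t, t) == 0               # a tree is at distance 0 from itself
```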
In general, the main philosophy of our dimension reduction, or backward, technique is to determine how many dimensions need to be removed to clear enough noise from the data set for the statistical correlations to become visible or significant. We ask this question for the effect of aging on the updated brain artery data set. Also, Ayd{\i}n et al. (2009) used the intersection trees as the starting point in calculating the principal components. In this numerical study, we will use the root node as the starting point of the tree-lines. An observation on this data set, or any data set consisting of large trees, is the abundance of leaves. Many of the leaves of the trees exist in only one or a few of the data trees. This leads to support trees that are much larger than any of the original data trees. The underlying structures are expected to be seen in the upper levels, and most of the leaves can in fact be considered as noise. In our setting, the leaves that exist in only one or a few data trees make up the first backward components. A question to ask is: what percentage of the variation is created by the low-weight leaves, and what percentage is due to the high-weight nodes, or underlying shape? Figure \ref{Figure2} provides two plots that illustrate an answer. \begin{figure*} \begin{center} \includegraphics[scale=0.5]{CUvsCum_all.pdf}% \includegraphics[scale=0.5]{NCUvsNCum_all.pdf}% \caption{Left panel: The $X$ axis represents the total number of backward principal components removed from the data. The $Y$ axis represents the number of nodes (variation) explained by the remaining subspace after removal. Four subpopulations are shown: Back (blue), Left (red), Right (magenta), Front (green). Right panel: The same information as in the left panel is used.
For each subpopulation, the total variation and the total number of backward principal components are scaled so that the maximum is $100$.} \label{Figure2}% \end{center} \end{figure*} In Figure \ref{Figure2} (left panel), the number of backward components removed from the data is plotted against the total variation explained by the remaining subspace. The $Y$ values at the $X=0$ point correspond to the total variation before any components are removed. This value is different for each subpopulation, as the sizes of their support trees are different. As backward components are removed from each of the sub-spaces, the variation covered decreases. We can observe that the initial backward components carry very little variation, and therefore result in a very small drop in the total number of nodes explained by the remaining sub-space. This is caused by the very large number of leaves that are not part of any underlying structure. The $Y=0$ points for each of the curves mark the total number of principal components that cover the whole data. This number is in fact equal to the number of leaves on the support trees of each of the subpopulations. On the right panel, we see the same information, only the $X$ and $Y$ axes for each of the curves are scaled so that the maximum corresponds to $100$. The first observation from this graph is that the curves almost lie on top of each other: even though the sizes of their support trees are quite different, the same percentage of variation is explained by the same percentage of principal components in each of these data sets. We can conclude from this that the variation is structured similarly for each of these subpopulations. The second observation is that the majority of the principal components explain very little variation.
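The curves of Figure \ref{Figure2} can be reproduced schematically: with the BPC paths listed in removal order (weakest first), the variation retained after removing the first $k$ of them is the total size of the data-tree projections onto the union of the remaining paths. A toy sketch with invented paths and data trees (the root component $l_0$ is absorbed into the paths for simplicity):

```python
# Toy sketch of the variation-versus-components curve: variation retained
# after removing k backward components is the total projection size of the
# data trees onto the remaining paths. Paths and trees are invented.

PATHS = [{"r", "a", "d"},   # removed first (lowest weight)
         {"r", "b", "e"},
         {"r", "a", "c"}]   # removed last (strongest component)
DATA = [{"r", "a", "c"}, {"r", "a", "c", "d"}, {"r", "b", "e"}]

def explained(k):
    """Nodes of the data explained once the first k components are removed."""
    subspace = set().union(*PATHS[k:]) if k < len(PATHS) else set()
    return sum(len(t & subspace) for t in DATA)

curve = [explained(k) for k in range(len(PATHS) + 1)]
assert curve == [10, 9, 7, 0]                        # non-increasing in k
assert curve[0] == sum(len(t) for t in DATA)         # all variation at k = 0
```

Removing the weakest component here costs only one node out of ten, mirroring the flat initial portion of the curves in Figure \ref{Figure2}.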
In the right panel of Figure \ref{Figure2}, we see that for all the subpopulations, the first $70\%$ of the principal components only cover $10\%$ of the nodes, and the last $10\%$ of these components explain about $70\%$. This data set is known to be very high-dimensional (about $270$ dimensions for the back subpopulation). However, Figure \ref{Figure2} shows that only a very small fraction of them is actually necessary to preserve the underlying structures. Our next focus is to see, during the backward elimination process, at which points the age-structure correlation is visible. \begin{figure*} \begin{center} \includegraphics[scale=0.5]{p_value_all.pdf} \caption{The $X$ axis represents the scaled number of backward principal components removed from the subspace of each of the subpopulations. At each $X$ value, the data points are projected onto the remaining subspace. The sizes of these projections, plotted against age, show a downward trend (not shown here). The statistical significance of this downward trend is tested by calculating the standard linear regression p-value ($Y$ axis) for the null hypothesis of $0$ slope. The $Y$ axis is scaled using the natural logarithm, while the $Y$ axis ticks are given in original values. The grey horizontal lines indicate the $0.05$ and $0.01$ p-value levels. The subpopulations are colored as: Back (blue), Left (red), Right (magenta), Front (green). A statistically significant age effect is observed for the subpopulations Back, Left and Right.} \label{Figure3} \end{center} \end{figure*} It was established previously that the branching of brain arteries is reduced with age. Bullitt et al. (2002) noted an observed trend in this phenomenon, while Ayd{\i}n et al. (2009) showed this effect on the left subpopulation using principal components. In this paper, for each subpopulation, we start from the whole subspace and reduce it gradually by removing backward principal components. At each step the data trees are projected onto the remaining subspace.
The relationship between the age of each data point and the size of the data tree projection is explored by fitting a linear regression line to these two series. These plots are not shown here, but similar ones can be found in Ayd{\i}n et al. (2009). This line tends to show a downward slope, suggesting that the projection sizes are reduced with age. To measure the statistical significance of this observation, the p-values are found for the null hypothesis of $0$ slope. Figure \ref{Figure3} shows the plots of p-values at each step of removing BPC's, for each subpopulation. The p-values are scaled using the natural logarithm while the $Y$ axis ticks are left at their original values. The rule-of-thumb for the p-value is that $0.05$ or less is considered significant. For tight tests, $0.01$ can also be used. Figure \ref{Figure3} provides grey lines at both of these levels for reference. In Figure \ref{Figure3} we see that the front subpopulation does not reach p-value levels that are considered significant in any sub-space. The front region of the brain, unlike the other regions, does not get fed by a direct artery entering the brain from below, but is fed by vessels extending from other regions (see Figure \ref{3D}). Therefore it is not surprising that the front vessel subpopulation does not carry a structural property presented by the other three subpopulations. For the other subpopulations, we identify two different kinds of age-structure dependence. First, for the left and back subpopulations, the age versus projection size relationship is very sharp until the last $5\%$ of the components are left. Most of the early BPC's correspond to the small artery splits that are abundant in the younger population, which people tend to lose as age increases (Bullitt et al. (2002)). Therefore the overall branchiness of the artery trees is reduced. Figure \ref{Figure3} is consistent with this previous observation.
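The significance test behind Figure \ref{Figure3} is the standard one for a regression slope. As a sketch (the ages and projection sizes below are invented, not data from the study), the slope and its $t$ statistic can be computed as follows; the two-sided p-value then comes from the Student-$t$ distribution with $n-2$ degrees of freedom, e.g. via \texttt{scipy.stats.linregress}:

```python
# Sketch of the test behind Figure 3: regress projection size on age and
# test the null hypothesis of zero slope. Ages and sizes are invented; in
# practice scipy.stats.linregress(age, size) returns the p-value directly.
import math

def ols_slope_tstat(x, y):
    """OLS slope and its t statistic for the null hypothesis slope = 0."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    rss = sum((b - intercept - slope * a) ** 2 for a, b in zip(x, y))
    se = math.sqrt(rss / (n - 2) / sxx)   # standard error of the slope
    return slope, slope / se

ages = [20, 30, 40, 50, 60, 70]
sizes = [95, 90, 84, 80, 71, 66]          # projection sizes shrinking with age
slope, tstat = ols_slope_tstat(ages, sizes)
assert slope < 0                          # downward trend, as in the artery data
# The two-sided p-value is P(|T| > |tstat|) for a Student-t with n-2 dof.
```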
The p-value significance gets volatile in the last $5\%$ of the components, where the BPC's corresponding to the small artery splits are removed, and only the largest components remain in the subspace. These largest components correspond to the main arteries that branch the most. The location-specific relationship between structure and age, noted in Ayd{\i}n et al. (2009), can be observed for the left and back subpopulations towards the end of the $X$ axis. This is the second kind of dependence we observe in the data sets. For the right subpopulation, we only observe the first kind, and it does not seem to be as strong as in the left and back subpopulations. Our second focus is to repeat the question of the age-structure relationship for the male and female subpopulations. Our data set consists of $49$ male, $47$ female and $2$ trans-gender subjects. We run our analysis for the largest two groups to see how aging affects males and females separately. \begin{figure*} \begin{center} \includegraphics[scale=0.4]{p_value_women.pdf} \includegraphics[scale=0.4]{p_value_men.pdf} \caption{The left and right panels are the p-value versus subspace plots for the female and male populations. The axes are as explained in Figure \ref{Figure3}. The subpopulations are colored as: Back (blue), Left (red), Right (magenta), Front (green). For males, a statistically significant age effect is observed for the subpopulations Back, Left and Right. No such effect is observed for females.} \label{Figure4} \end{center} \end{figure*} In Figure \ref{Figure4}, the p-value versus subspace graphs are given for the male and female subpopulations. As before, the front subpopulation does not show any statistical significance at any subspace level. For the other subpopulations, a clear difference between the male and female groups emerges. For the female group, the first kind of structural effect of age (overall branchiness) cannot be observed for any subpopulation.
For the location-specific relationship (branchiness of the main arteries), the lowest p-value that could be achieved comes from the right subpopulation at $0.5015$, higher than the rule-of-thumb significance level of $0.05$. For the male group, the age versus overall branchiness relationship can be observed for the left, right and back subpopulations at very significant levels (p-values below $0.01$). The location-specific relationship can again be observed for these three subpopulations at significant levels. The study on the full data set implies that two kinds of age-structure relationships can be observed in the whole population using this method. Subsequent analysis of the male and female groups shows that the same effects are observed, more strongly, in the male group. Meanwhile, no statistically significant age effect could be observed in the female group using these methods. These results suggest that the brain vessel anatomy of males and females may respond differently to aging: the overall branchiness and the branchiness of the longest arteries are reduced with age in males, while these effects are not apparent for the female group. Therefore the effects observed in the whole population may in fact be driven by the male sub-group. \subsection{Company Organization Data Set}\label{company} \subsubsection{Data Description} In this analysis, we use a company organization data set from a large US company. This data set is a snapshot of the employee list taken sometime during the last ten years. It also includes information on the hierarchical structure and the organizations that employees belong to. The set includes more than two hundred thousand employees active at the time when the snapshot was taken. In this section we will explain the general aspects of the data set that are relevant to our analysis, but we will withhold any specifics for privacy reasons. The original company structure can be considered as one giant tree. Each employee is represented as a node.
The CEO of the company is the root node. The child-parent relationships are established through the reporting structure: the children of a node are the employees that directly report to that person in the company. Since every employee directly reports to exactly one person (except the CEO, the root node), this system naturally lends itself to a tree representation. A very important structural property of organization trees is that each higher-level employee usually has many employees reporting to him/her. Therefore this organization tree is not binary, but a general rooted tree. It has a maximum depth of $13$ levels. The company operations span various business activities, each main category being pursued by a different business unit of the company. The heads of each of these business units report directly to the CEO. Every person working in the company is assigned to one business unit, and these units form the first level of organization codes. These business units are further divided into sub-organizations, primarily with respect to their geographical locations around the world. A third level of hierarchy again divides these units based on territory and job focus. The last organization level, which we will be using to construct our data sets, is the fourth level of the hierarchy, and is used to define departments that are dedicated to a particular type of job for a particular product or service. For example, the Marketing department responsible for promoting a product group in a given region of one of the business units is an organization at the fourth level of the hierarchy. Just like the business unit, every person in the company is assigned to an organization code of the second, third and fourth levels. A person working in a particular department shares the first, second, third and fourth levels of organization codes with her colleagues working in the same department.
In this study we will focus on populations of different departments across the company that are assigned to a similar type of job. When the whole organization tree is considered, the directors of these departments are at the fifth level of that tree. To form our data set, we gathered the list of all the directors in the company who are at the fifth level. Then, based on the organization codes, we determined the main job focus of the departments that the directors lead. We selected four main groups of jobs to compare for our study: finance, marketing, research and development, and sales. The departments that focus on one of these four categories are assigned to those categories. Other departments that focus on different jobs, like legal affairs or IT support, are left out. For each category, each department assigned to that category forms one data point. The director of that department is taken as the root node of the data tree representing the department, and the people who work at that department are the nodes of this tree. The structure of the tree is determined by the reporting structure within the department. The correspondence issue within the data sets requires some attention. A job-based correspondence scheme between two data trees would involve determining which individuals in one department perform a similar function to which individuals at the same reporting level in another department, so that the nodes of those people can be considered ``corresponding''. With the exception of the directors (who form the root nodes and naturally correspond to each other), this kind of matching is virtually impossible for this data set, since job definitions within one department greatly depend on the particulars of that department's job, and may not match jobs within another department. Since this job-based correspondence is not possible, we employ the descendant correspondence for the data points.
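Descendant correspondence is straightforward to implement. A minimal sketch (the dict-of-children encoding and the department names are our own invention, not the study's data):

```python
# Sketch of descendant correspondence for general rooted trees: at every
# parent, children are ordered left to right by decreasing descendant
# count. The tree encoding and all node names are invented.

def n_descendants(tree, node):
    """Count all descendants of a node (children, grandchildren, ...)."""
    return sum(1 + n_descendants(tree, c) for c in tree.get(node, []))

def descendant_correspondence(tree):
    """Sort every child list in place, largest subtree leftmost."""
    for children in tree.values():
        children.sort(key=lambda c: n_descendants(tree, c), reverse=True)
    return tree

# Invented department: director D, manager m1 (one report), m2 (three reports).
org = {"D": ["m1", "m2"], "m1": ["w1"], "m2": ["w2", "w3", "w4"]}
descendant_correspondence(org)
assert org["D"] == ["m2", "m1"]   # m2 heads the larger subtree, so goes left
```

Since Python's sort is stable, ties between equal-sized subtrees keep their original order, which serves as a consistent tie-breaking rule.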
Descendant correspondence was elaborated before for the binary tree setting. In the general tree setting, it works in a similar fashion: for the nodes that are children of the same parent node, the order from left to right is determined by the total number of descendants of each of them. That is, the node with the greatest number of descendants is assigned as the left-most child, and so on. The data set of finance departments constructed in this fashion consists of $37$ data trees, with a maximum depth of $6$ levels. The marketing set has $60$ trees, maximum depth of $5$, sales has $41$ trees, maximum depth $5$, and the research data set has $20$ trees, maximum depth $6$. The support trees of these sets can be seen in Figure \ref{Figure5}. Visualizing the organization trees requires a somewhat different approach than binary trees. The depth of these trees is not very large: $6$ levels for the deepest data point. However, the node population at each level is very dense. Therefore a radial drawing approach is used to display them. (See Di Battista et al. (1999) for details on this method and many others for graph visualization.) In a radial drawing of rooted trees, the root node is at the origin. The root is surrounded by concentric circles centered at the origin. We plot the nodes on these circles; each circle is reserved for the nodes in one level of the tree. The coordinate of each node on a circle is determined by its descendant count. For example, for the nodes on the second level, the $360$ degrees available on the circle are distributed to the nodes with respect to the number of descendants they have. Nodes with more descendants get more space. Each node is placed at the middle of the arc on the circle corresponding to the degrees allocated to that node. The children of that node in the next circle share these degrees according to their own numbers of descendants.
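The angular allocation just described can be sketched as follows (the dict encoding, node names, and the choice of weighting each subtree by one plus its descendant count are our own assumptions for illustration):

```python
# Sketch of the radial drawing rule: each node receives an arc on its
# level's circle, subdivided among its children in proportion to subtree
# size; a node is drawn at the middle of its arc. Names are invented.

def subtree_size(tree, node):
    """Number of descendants of a node (children, grandchildren, ...)."""
    return sum(1 + subtree_size(tree, c) for c in tree.get(node, []))

def arc_spans(tree, node, lo=0.0, hi=360.0, out=None):
    """Assign each node the arc [lo, hi); children split it by subtree size."""
    out = {} if out is None else out
    out[node] = (lo, hi)
    kids = tree.get(node, [])
    sizes = [1 + subtree_size(tree, k) for k in kids]
    start = lo
    for kid, size in zip(kids, sizes):
        width = (hi - lo) * size / sum(sizes)
        arc_spans(tree, kid, start, start + width, out)
        start += width
    return out

org = {"root": ["a", "b"], "a": ["a1", "a2", "a3"], "b": ["b1"]}
arcs = arc_spans(org, "root")
# "a" carries 4 of the 6 weighted nodes below the root, so it gets 240 degrees.
assert arcs["a"] == (0.0, 240.0) and arcs["b"] == (240.0, 360.0)
```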
This scheme allows the allocation of most space on the graph to the largest sub-trees and the distribution of nodes on the graph space as evenly as possible. \begin{figure*} \begin{center} \includegraphics[scale=0.4]{DSfinance-ST.png} \includegraphics[scale=0.4]{DSmarketing-ST.png} \includegraphics[scale=0.4]{DSresearch-ST.png} \includegraphics[scale=0.4]{DSsales-ST.png} \caption{Radial drawings of the support trees of four organization subsets: Finance, Marketing, Research and Sales. The root nodes are at the center. The principal components are represented through colors: earlier BPC's start from the blue end of the color scale while the later BPC's go towards the red end. Nodes that are in multiple components are colored with respect to the highest total-weight component they are in. The color bar on the right of each panel shows the coloring scheme according to the total weight of each BPC.} \label{Figure5} \end{center} \end{figure*} \subsubsection{Analysis of Company Organization Data} The comparative structural analysis of these four organization data sets is conducted via the principal component tree-lines. We have run the dimension reduction method for general rooted trees as described in Section \ref{backward}, although the forward method of Section \ref{preliminaries} would have given the same set of components, as shown in Section $4$. The principal components obtained with this analysis are shown in Figure \ref{Figure5}. They are expressed through the coloring scheme. A color scale starting from dark red, going through shades of yellow, green, cyan and blue, and ending at dark blue is used. The components that have a higher sum of weights ($\sum{w'(k)}$) are colored with the shades on the red side, and those with a lower sum of weights get the cooler shades.
Since the backward principal components are ordered from a low sum of weights $\sum{w'(k)}$ to a higher one, this means the earlier BPC's (lower impact components) are shown in blue, while the stronger components are in the yellow to red part of the scale. The color bar on the right of each support tree shows which $\sum{w'(k)}$ corresponds to which shade for that support tree. The first conclusions on the differences across types of departments come from the comparison of their support tree structures. It can be clearly seen that the sales departments are larger than the others in population. Another clear distinction is in the flatness of each organization type. Typically, a flat organization does not have many levels of hierarchy, and most of the workers do not have subordinates. This is common in organizations with a technical focus. In Figure \ref{Figure5}, we can see that the research departments are visibly flatter than the other three types: most of the nodes are at the leaves and not at the interim levels. This is due to the fact that most of the employees in these departments do engineering-research type of work, for which a strongly hierarchical organizational model is less efficient. The other three data sets, finance, marketing and sales, have most of their employees on interim levels, pointing to a strong hierarchy. This seems especially strong in the sales departments. In the next figure (Figure \ref{Figure6}), the effect of gradually removing principal components on the number of nodes explained is shown. This figure is constructed in the same way as Figure \ref{Figure2}, right panel. \begin{figure*} \begin{center} \includegraphics[scale=0.6]{DS-c_vs_en.pdf} \caption{The $X$ axis is the number of backward principal components subtracted from the subspace. The $Y$ axis is the number of nodes that can be explained by the remaining subspace at each $X$ level.
Both axes are scaled within themselves so that the highest $X$ and $Y$ coordinates for all of the organization curves are $100$. The blue curve is for research, green is for marketing, black is for sales and red is for finance.} \label{Figure6} \end{center} \end{figure*} Figure \ref{Figure6} shows that none of the organization data sets has a variation-versus-components curve as concave as that of the brain artery set. Therefore in the organizational structure setting, the earlier BPC's have more potential to carry information compared to the artery setting. Between the organization data sets, we see that the curves belonging to research and sales are very close to each other (the less concave pair), while the curves of finance and marketing are shape-wise close (the more concave pair). The concavity of these curves depends on what percentage of the variation is explained by the early BPC's, and what percentage by the later, stronger components. A very concave curve means that most of the nodes of the data set can in fact be expressed through a small number of principal components. This means that the structures within the data points are not very diverse: the data trees of the set structurally look like each other, allowing a smaller number of PC's to explain more of the nodes. Conversely, a less concave curve points to a data set where a small portion of the principal components is not enough to explain many nodes, due to the diversity in the structures of the data points. Figure \ref{Figure6} shows that finance and marketing departments are more uniformly structured than research and sales departments. That is, two random finance data trees are more likely to have a shorter distance to each other than two random research data trees.
A variation-versus-components curve is helpful in establishing the trend in the distribution of variation within the data set: the earlier BPC's express nodes that are not common across the data points, and the later BPC's cover the nodes that are common to most data points. The next, more in-depth, question is how these more common and less common nodes are distributed among the data points themselves. To answer this question, we divide the set of all BPC's into two subsets. The first $90\%$ of the BPC's on the $X$ axis of Figure \ref{Figure6} form one set (SET $2$). These BPC's collectively represent the subspace containing the less common nodes. The remaining $10\%$ of the BPC's form the other set (SET $1$). These BPC's express the subspace containing the more common structures. For any data tree $t$, its projection onto SET $1$ ($P_{SET 1}(t)$) represents the portion of the tree that is more common with the other data trees in the data set. The projection of $t$ onto SET $2$ ($P_{SET 2}(t)$) carries the nodes of it that are less common with others. Since these two sets are complementary, the two projections of $t$ give $t$ itself when combined: $P_{SET 1}(t) \cup P_{SET 2}(t) = t$. Figure \ref{Figure7} shows how the nodes in SET $1$ and SET $2$ are distributed among the data trees for each of the organization data sets. For each data point, the length of its projection onto SET $2$ is on the $Y$ axis, and the length of its projection onto SET $1$ is given on the $X$ axis. Each of these axes is scaled such that the highest coordinate for each data set is $1$ on each of the axes. Blue stars denote the research data points, green squares are marketing data points, black crosses are sales data points and red circles are finance data points.
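The SET $1$/SET $2$ split can be sketched concretely: projecting a tree onto a set of components amounts to intersecting its node set with the union of the corresponding paths, and the two complementary projections recover the tree. The paths and the data tree below are invented:

```python
# Sketch of the SET 1 / SET 2 split: BPC paths are divided into a "common
# structure" set and a "less common" set, and a data tree is projected
# onto each. All node labels and paths are invented.

SET1 = [{"r", "a", "c"}, {"r", "b"}]       # strongest (last-removed) BPC's
SET2 = [{"r", "a", "d"}, {"r", "b", "e"}]  # weaker, less common BPC's

def project(t, paths):
    """Projection of tree t onto the subspace spanned by the given paths."""
    return t & set().union(*paths)

t = {"r", "a", "c", "d"}
p1, p2 = project(t, SET1), project(t, SET2)
assert p1 == {"r", "a", "c"} and p2 == {"r", "a", "d"}
# Complementary sets recover the tree: P_SET1(t) | P_SET2(t) == t.
assert p1 | p2 == t
```

The lengths of these two projections give the $X$ and $Y$ coordinates of a data point in Figure \ref{Figure7}.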
\begin{figure*} \begin{center} \includegraphics[scale=0.6]{FMRSdata10.pdf} \caption{The data points of each of the data sets: Research (blue stars), marketing (green squares), sales (black crosses) and finance (red circles). For each of the data points, the length of its projection onto SET $2$ is on the $Y$ axis, and the length of its projection onto SET $1$ is given on the $X$ axis. Each of these axes is scaled such that the highest coordinate for each data set is $1$ on each of the axes.} \label{Figure7} \end{center} \end{figure*} In Figure \ref{Figure7}, it can be seen that none of the data points is above the $45$ degree line. This is an artifact of the descendant correspondence. A very interesting aspect of Figure \ref{Figure7} is that the data points of each data set visually separate from each other. This is especially true for the marketing departments, which follow a distinctly more convex pattern compared to the other kinds of departments. For the finance departments, we observe an almost linear trend, starting from around $X=0.3$. The bottom left data points are trees that are small in general: they contain little of the common nodes set and almost none of the non-common set. As we go towards the top right, the trees grow in the SET $1$ and SET $2$ spaces proportionally. A similar pattern exists for the sales departments, with the exception of a group of data points lying on the $X$ axis, pointing to a group of very small departments that only consist of the main structure nodes. The research departments follow a lower-angle pattern. However, this might be due to the one outlier department at the coordinate $(1,1)$, pushing all the others to the left/bottom of the graph. The most significant pattern on this graph belongs to the marketing group. Unlike the other departments, there is no linear alignment trend. The set seemingly consists of two kinds of departments: the first is a group with very little projection onto SET $2$, and varying sizes of projection onto SET $1$.
These are relatively small departments. The second is a group of departments that contain all the nodes represented by SET $1$ (therefore the ``common structure'' part of the trees is common to all of these trees), and varying, but large, amounts of nodes represented in SET $2$. These trees are much larger than the trees of the first group. These two different modes of structure within this group may be due to the particular kind of marketing activity, product family, etc., that they focus on. The details of the activities of each department are not part of our data set; therefore we are not able to offer a reason for this separation. Note that the two data sets that are shown to be structurally similar in Figure \ref{Figure6}, finance and marketing, are the furthest apart sets in Figure \ref{Figure7}. This is because Figure \ref{Figure6} focuses on the overall dispersion of variation, while Figure \ref{Figure7} focuses on the relative differences between the individual data trees. \section{Appendix}\label{appendix} \textbf{Proof of Lemma \ref{lemma1}:} Since $l_i = l_{i-1}\cup v_i$, we have that \[ d(t,l_i) = \begin{cases} d(t,l_{i-1})-1 & \text{if } v_i\in t,\\ d(t,l_{i-1})+1 & \text{otherwise.}\\ \end{cases} \] In other words, the distance of the tree to the line decreases as we keep adding nodes of $p_L$ that are in $t$, and when we step out of $t$, the distance begins to increase. \qed \textbf{Proof of Lemma \ref{lemma2}:} For simplicity, we only prove the statement for $q=2$. Assume that \[ L_1=\{l_{1,0}, l_{1,1}, \ldots, l_{1,k_1} \}, L_2=\{l_{2,0}, l_{2,1}, \ldots, l_{2,k_2} \} \] with $l_0=l_{1,0}=l_{2,0}$, and \begin{eqnarray*} l_{1,i}=l_{1,i-1}\cup v_{1,i} & \text{ for } 1\leq i \leq k_1,\\ l_{2,j}=l_{2,j-1}\cup v_{2,j} & \text{ for } 1\leq j \leq k_2. \end{eqnarray*} Also assume \begin{equation}\label{eqn:1} P_{L_1}(t)=l_{1,r_1} \end{equation} and \begin{equation} P_{L_2}(t)=l_{2,r_2}.
\end{equation} Let $f(i,j)$ be the distance between the tree $t$ and $l_{1,i}\cup l_{2,j}$, for $1\leq i \leq k_1$ and $1\leq j \leq k_2$. Using Lemma \ref{lemma1}, the two equations above mean \begin{eqnarray*} v_{1,i}\in t, & \text{ if } i\leq r_1 \text{, and }\\ v_{2,j}\in t, & \text{ if } j\leq r_2. \end{eqnarray*} Hence, \begin{eqnarray}\label{eqn:2} f(i,j) \leq f(i-1,j), & \text{ if } i\leq r_1,\\ f(i,j) \geq f(i-1,j), & \text{ if } i> r_1. \nonumber \end{eqnarray} By symmetry, we have \begin{eqnarray}\label{eqn:3} f(i,j) \leq f(i,j-1), & \text{ if } j\leq r_2,\\ f(i,j) \geq f(i,j-1), & \text{ if } j> r_2. \nonumber \end{eqnarray} Overall, equations (\ref{eqn:2}) and (\ref{eqn:3}) imply that the function $f$ attains its minimum at $i=r_1, j=r_2$, which is what we had to prove. \qed \textbf{Proof of Theorem \ref{theorem1}:} The definition of the $k^{th}$ PC tree-line in terms of paths is equivalent to the equation \begin{eqnarray*} p_{k}^f & = & \arg \min_{p_L \in \mathcal{P} } \sum_{t\in \mathcal{T}} d\left(t,l_0\cup \left(\left(\cup_{i=1\cdots k-1} p_i^f \cup p_L \right)\cap t\right)\right)\\ & = & \arg \min_{p_L \in \mathcal{P} } \sum_{t\in \mathcal{T}} \left| t \setminus \left( l_0\cup \left(\left( \cup_{i=1\cdots k-1} p_i^f \cup p_L \right)\cap t \right)\right)\right | + \left| \left( l_0\cup \left(\left( \cup_{i=1\cdots k-1} p_i^f \cup p_L \right)\cap t \right)\right) \setminus t\right |\\ & = & \arg \min_{p_L \in \mathcal{P} } \sum_{t\in \mathcal{T}} \left| t \setminus \left( l_0\cup p_1^f \cup\cdots \cup p_{k-1}^f \cup p_L \right)\right | + \left| \left( l_0\cup \left(\left(p_1^f \cup\cdots \cup p_{k-1}^f \cup p_L \right)\cap t \right)\right) \setminus t\right |\\ & = & \arg \min_{p_L \in \mathcal{P} } \sum_{t\in \mathcal{T}} \left| t \setminus \left( l_0\cup p_1^f \cup\cdots \cup p_{k-1}^f \cup p_L \right)\right | + \left| l_0 \setminus t\right |\\ & = & \arg \min_{p_L \in \mathcal{P} } \sum_{t\in \mathcal{T}} \left| t \setminus \left( l_0\cup p_1^f
\cup\cdots \cup p_{k-1}^f \cup p_L \right)\right |\\ & = & \arg \min_{p_L \in \mathcal{P} } \sum_{t\in \mathcal{T}} \left| t \setminus \left( l_0\cup p_1^f \cup\cdots \cup p_{k-1}^f \right)\right | - \left| (t\cap p_L) \setminus \left( l_0\cup p_1^f \cup\cdots \cup p_{k-1}^f \right)\right |\\ & = & \arg \min_{p_L \in \mathcal{P} } - \sum_{t\in \mathcal{T}} \left| (t\cap p_L) \setminus \left( l_0\cup p_1^f \cup\cdots \cup p_{k-1}^f \right)\right |\\ & = & \arg \max_{p_L \in \mathcal{P} } \sum_{t\in \mathcal{T}} \left| (t\cap p_L) \setminus \left( l_0\cup p_1^f \cup\cdots \cup p_{k-1}^f \right)\right |\\ & = & \arg \max_{p_L \in \mathcal{P} } \sum_{v\in p_L} w_k(v). \end{eqnarray*} The last equation corresponds to the path with the maximum sum of $w_k$ weights in the support tree. \qed \textbf{Proof of Theorem \ref{backthm}:} The definition of the $k^{th}$ BPC tree-line (see Equation \ref{eqn:kBCP}) in terms of paths is equivalent to the equation \begin{eqnarray*} p_{n-k}^b & = & \arg \min_{p_L \in \mathbf{B} } \sum_{t\in \mathcal{T}} d\left(t,l_0\cup \left(\left({\displaystyle\cup_{p\in{\bf B}\setminus \{ p_L\}}p}\right)\cap t\right)\right),\text{ where }\textbf{B}=\mathcal P\setminus \{p_n^b,\dots , p_{n-k+1}^b\} \\ & = & \arg \min_{p_L \in \mathbf{B} } \sum_{t\in \mathcal{T}} \left|t\setminus l_0\cup \left(\left({\displaystyle\cup_{p\in{\bf B}\setminus \{ p_L\}}p}\right)\cap t\right)\right|+ \left| l_0\cup \left(\left({\displaystyle\cup_{p\in{\bf B}\setminus \{ p_L\}}p}\right)\cap t\right)\setminus t \right| \\ & = & \arg \min_{p_L \in \mathbf{B} } \sum_{t\in \mathcal{T}} \left|t\setminus l_0\cup \left(\left({\displaystyle\cup_{p\in{\bf B}\setminus \{ p_L\}}p}\right)\cap t\right)\right|+ \left| \left(l_0\setminus t\right)\cup \left(\left(\left({\displaystyle\cup_{p\in{\bf B}\setminus \{ p_L\}}p}\right)\cap t\right)\setminus t\right) \right| \\ & = & \arg \min_{p_L \in \mathbf{B} } \sum_{t\in \mathcal{T}} \left|t\setminus l_0\cup \left(\left({\displaystyle\cup_{p\in{\bf
B}\setminus \{ p_L\}}p}\right)\cap t\right)\right|+ \left| l_0\setminus t \right| \\ & = & \arg \min_{p_L \in \mathbf{B} } \sum_{t\in \mathcal{T}} \left|t\setminus l_0\cup \left(\left({\displaystyle\cup_{p\in{\bf B}\setminus \{ p_L\}}p}\right)\cap t\right)\right|\\ & = & \arg \min_{p_L \in \mathbf{B} } \sum_{t\in \mathcal{T}} \left|t\setminus l_0\cup \left({\displaystyle\cup_{p\in{\bf B}\setminus \{ p_L\}}p}\right)\right|\\ & = & \arg \min_{p_L \in \mathbf{B} } \sum_{t\in \mathcal{T}} \left|\left(t\cap p_L\right)\setminus \left( l_0\cup \left({\displaystyle\cup_{p\in{\bf B}\setminus \{ p_L\}}p}\right)\right)\right| +\sum_{t\in \mathcal{T}} \left|\left(t\cap \left(\cup_{p\in \mathcal{P}\setminus {\mathbf B}} p\right)\right)\setminus \left( l_0\cup \left({\displaystyle\cup_{p\in{\bf B}}p}\right)\right)\right|\\ & = & \arg \min_{p_L \in \mathbf{B} } \sum_{t\in \mathcal{T}} \left|\left(t\cap p_L\right)\setminus \left( l_0\cup \left({\displaystyle\cup_{p\in{\bf B}\setminus \{ p_L\}}p}\right)\right)\right|\\ & = & \arg \min_{p_L \in \mathbf{B} } \sum_{t\in \mathcal{T}} \sum_{v\in \left(t\cap p_L\right)\setminus \left( l_0\cup \left({\displaystyle\cup_{p\in{\bf B}\setminus \{ p_L\}}p}\right)\right)} 1\\ & = & \arg \min_{p_L \in \mathbf{B} } \sum_{v\in p_L} w_k'(v). \end{eqnarray*} From the last equation the result follows. \qed \textbf{Proof of Proposition \ref{pro:1}:} Suppose there exist $i$ and $j$ with $1\leq i\leq k<j\leq n$ and $p_i^f=p_j^b$. Without loss of generality, suppose that $j$ is the largest index where the assumption holds. Let $p_L$ denote the path $p_i^f=p_j^b$, and let $B=\{p_n^b,..., p_{j+1}^b \}$. Since $1\leq i\leq k<j\leq n$, the set of paths $\mathcal{P}\setminus \{B\}$ contains at least two paths. Let $v\in p_L$ be the first node from the leaf to the root that has at least two children in $Supp(\mathcal{P}\setminus \{B\})$. 
There are two possibilities: \begin{enumerate}[$I.$] \item $v\notin l_0$, {\it i.e.} there is at least one path different from $p_L$ in $\mathcal{P}\setminus \{B\}$ that has $v$ as a node, or \item $v\in l_0$. \end{enumerate} In both cases, $w_j'(u)=0$ for all $u$ on the path $p_L$ from $v$ to the root. Consider case $I$. Let $p_{L'}\in\mathcal{P}\setminus \{B\}$ be a path different from $p_L$ that contains $v$. Let $p_v$ be the path from the root to $v$. Since $p_L=p_j^b$, \begin{equation}\label{eqn:ineq1} \sum_{u\in p_L\setminus p_v}w_j'(u)=\sum_{u\in p_L}w_j'(u)\leq\sum_{u\in p_{L'}}w_j'(u)=\sum_{u\in p_{L'}\setminus p_v}w_j'(u). \end{equation} On the other hand, since $p_L=p_i^f$, \begin{equation}\label{eqn:ineq2} \sum_{u\in p_L}w_i(u)\geq\sum_{u\in p_{L'}}w_i(u). \end{equation} Next, we need to show that the following holds: \begin{equation}\label{eqn:ineq5} \sum_{u\in p_{L'}\setminus p_v}w_j'(u)\leq\sum_{u\in p_{L'}\setminus p_v}w_i(u). \end{equation} Suppose instead that $\sum_{u\in p_{L'}\setminus p_v}w_j'(u)>\sum_{u\in p_{L'}\setminus p_v}w_i(u)$. This implies that there is at least one node $v'$ with $w_j'(v')>0$ and $w_i(v')=0$. Since $w_i(v')=0$, a path that contains $v'$ and is different from $p_{L'}$ was yielded by the forward algorithm before $p_{L'}$. However, this implies that at step $j$ of the backward algorithm there are at least two paths that have $v'$ as a node, so $w_j'(v')=0$, a contradiction. It is straightforward to see that \begin{equation}\label{eqn:ineq6} \sum_{u\in p_L\setminus p_v}w_i(u)\leq\sum_{u\in p_L\setminus p_v}w_j'(u). \end{equation} Let us suppose that the inequality in (\ref{eqn:ineq1}) is strict, {\it i.e.} \begin{equation}\label{eqn:ineq3} \sum_{u\in p_L\setminus p_v}w_j'(u)<\sum_{u\in p_{L'}\setminus p_v}w_j'(u).
\end{equation} We have \begin{eqnarray*} \sum_{u\in p_L}w_i(u) & = & \sum_{u\in p_v}w_i(u)+\sum_{u\in p_L\setminus p_v}w_i(u)\\ & \leq_{(\ref{eqn:ineq6})} & \sum_{u\in p_v}w_i(u)+\sum_{u\in p_L\setminus p_v}w_j'(u)\\ & <_{(\ref{eqn:ineq3})} & \sum_{u\in p_v}w_i(u)+\sum_{u\in p_{L'}\setminus p_v}w_j'(u)\\ & \leq_{(\ref{eqn:ineq5})} & \sum_{u\in p_v}w_i(u)+\sum_{u\in p_{L'}\setminus p_v}w_i(u) = \sum_{u\in p_{L'}}w_i(u)\\ \end{eqnarray*} which contradicts equation (\ref{eqn:ineq2}). Therefore, equation (\ref{eqn:ineq1}) must be an equality, {\it i.e.} \begin{equation}\label{eqn:ineq4} \sum_{u\in p_L\setminus p_v}w_j'(u)=\sum_{u\in p_{L'}\setminus p_v}w_j'(u). \end{equation} If one or both of the equations \[ \sum_{u\in p_{L'}\setminus p_v}w_j'(u)<\sum_{u\in p_{L'}\setminus p_v}w_i(u) \text{ and } \sum_{u\in p_L\setminus p_v}w_i(u)<\sum_{u\in p_L\setminus p_v}w_j'(u), \] hold, then the result follows in the same way as above. Finally, let us suppose \[ \sum_{u\in p_{L'}\setminus p_v}w_j'(u)=\sum_{u\in p_{L'}\setminus p_v}w_i(u) \text{ and } \sum_{u\in p_L\setminus p_v}w_i(u)=\sum_{u\in p_L\setminus p_v}w_j'(u), \] which implies that \[ \sum_{u\in p_{L'}}w_j'(u)=\sum_{u\in p_L}w_j'(u) \text{ and } \sum_{u\in p_{L'}}w_i(u)=\sum_{u\in p_L}w_i(u). \] Now, since $p_i^f=p_L$, we have $p_L>p_{L'}$; and since $p_j^b=p_L$, we have $p_L<p_{L'}$, which is a contradiction. In case $II$, where $v\in l_0$, let $v'$ be the last node from the root to the leaf in $p_{L}$ that belongs to $l_0$. Take $p_{L'}\in\mathcal{P}\setminus \{B\}$ to be a path different from $p_L$, and $v''$ the last node from the root to the leaf in $p_{L'}$ that belongs to $l_0$. Let $p_{v'}$ be the unique path from the root to the node $v'$ and $p_{v''}$ the unique path from the root to the node $v''$. Since $p_{v'}$ and $p_{v''}$ are contained in $l_0$, we have \[ \sum_{u\in p_{v'}}w_i(u)=\sum_{u\in p_{v''}}w_i(u)=\sum_{u\in p_{v'}}w_j'(u)=\sum_{u\in p_{v''}}w_j'(u)=0.
\] Since $p_L=p_j^b$, \begin{equation}\label{eqn:ineq1l0} \sum_{u\in p_L}w_j'(u)\leq\sum_{u\in p_{L'}}w_j'(u). \end{equation} On the other hand, since $p_L=p_i^f$, \begin{equation}\label{eqn:ineq2l0} \sum_{u\in p_L}w_i(u)\geq\sum_{u\in p_{L'}}w_i(u). \end{equation} As in case $I$, we can see that (\ref{eqn:ineq1l0}) must be an equality, which gives a contradiction. \qed \textbf{Proof of Theorem \ref{equivalence}:} By Proposition \ref{pro:1}, at step $n-1$ no tree-line yielded by the forward algorithm is equal to $L_n^b$; hence $L_n^b=L_n^f$. At step $n-2$, no tree-line yielded by the forward algorithm is equal to $L_n^b$ or $L_{n-1}^b$; since $L_n^b=L_n^f$, we have $L_{n-1}^b=L_{n-1}^f$. Continuing iteratively down to step 1, we obtain $L_{k}^b=L_{k}^f$ for all $1\leq k \leq n$. \qed
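The last step in the proofs of Theorems \ref{theorem1} and \ref{backthm} reduces each greedy iteration to finding a root-to-leaf path with extremal total node weight. A minimal sketch of that step is given below; the dictionary-based tree encoding and all names are our own illustration, not the paper's implementation:

```python
def best_path(children, root, w):
    """Return (total weight, path) of the root-to-leaf path
    maximizing the sum of node weights w, by a bottom-up DP."""
    best_below = {}  # node -> best path from that node down to a leaf

    def solve(v):
        kids = children.get(v, [])
        if not kids:                    # leaf: the path is the node itself
            best_below[v] = [v]
            return w[v]
        scores = [(solve(c), c) for c in kids]
        s, c = max(scores)              # pick the child with the best subtree
        best_below[v] = [v] + best_below[c]
        return w[v] + s

    total = solve(root)
    return total, best_below[root]

# Toy support tree: root "r" with branches r-a-c and r-b.
children = {"r": ["a", "b"], "a": ["c"]}
w = {"r": 0, "a": 2, "b": 1, "c": 3}
total, path = best_path(children, "r", w)   # path r-a-c, weight 0+2+3 = 5
```

Replacing `max` with `min` gives the minimizing variant used in the backward algorithm with the $w_k'$ weights.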
\section{Introduction}\label{section:1} The predictive distributions provided by Deep Neural Networks (DNNs) have been increasingly used in decision-support systems, for applications ranging from medical diagnosis assistance \citep{esteva2017dermatologist} to self-driving cars \citep{bojarski2016end}. In DNNs, the predictive distribution usually corresponds to the output of a softmax layer, which is typically interpreted as the confidence over the different classes. The i.i.d.\ assumption made during learning implies that the data distributions over the classes are the same at training and inference time. However, in real-world applications, the distribution of the data at inference time (i.e., the test data) may shift and actually differ from the original training distribution; this shift in the representation of the data is what we refer to as domain shift. For instance, in image classification, domain shift occurs when the test images differ from the training set in illumination, viewpoint, resolution, background, or intensity noise, while the classification task itself remains the same, with the same class occurrence rates. Arguably, building DNNs that are robust to domain shift is necessary for their safe deployment in decision-making systems. To deal with this, predictive uncertainty is key: a meaningful uncertainty estimate lets practitioners know when prediction accuracy is degrading and allows a system to abstain from making decisions due to low confidence. The predictive uncertainty of DNNs is usually not calibrated, with a tendency toward overconfidence. Many probabilistic and post-processing calibration methods have been proposed under the i.i.d.\ assumption to adjust the confidence of DNNs.
In recent studies \citep{ovadia2019can, maddox2019simple}, uncertainty under domain shift has received more attention, and common calibration methods have been assessed under domain shift, although they were not designed to be robust in such conditions. In this paper, for the first time, we specifically focus on calibration under domain shift in image classification. We show that post-processing calibration approaches that use Negative Log-Likelihood (NLL) as the calibration loss, such as Temperature Scaling (TS) \citep{guo2017calibration}, can become robust to domain shift if they calibrate the model using the test samples. However, they require sample labels to perform the calibration. Labeling test samples, even a small set, is not always easy: it requires expert human effort, which may come with labeling noise and a heavy time burden. Classification of neuron cells imaged by electron microscopy \citep{ostroff2015electron}, of pathology images \citep{khosravi2018deep}, and of skin diseases \citep{kolkur2018machine} are three examples of applications whose labeling procedures are expensive, carry a high risk of labeling noise, and require senior experts. In this work, we propose a new approach, called Unsupervised Temperature Scaling (UTS), which follows the TS framework but uses unlabeled test samples to calibrate the pre-trained model. This novel idea makes robust calibration under domain shift possible. The ability to calibrate on test samples makes UTS a suitable solution not only for domain shift but also for many practical calibration problems, such as calibrating off-the-shelf models. More specifically, UTS is proposed with the following contributions and foreseen impacts: \begin{itemize} \item \textbf{Unsupervised post-processing calibration:} UTS takes a new look at the NLL loss function, which is used as the calibration loss in several post-processing methods.
UTS approximates a weight function that estimates the per-class distribution of the data in order to compute NLL. This new way of computing NLL makes it independent of the labels. In addition, computing the weighted NLL has the same order of time and memory complexity as the classic NLL, without any additional hyper-parameter to fine-tune. \item \textbf{Robustness to domain shift}: UTS is a calibration solution that is robust to shifted domains. It adjusts the model's uncertainty based on the test domain rather than the training domain; when the distribution of the test domain changes, UTS can follow the shift easily. \item \textbf{Calibration of off-the-shelf models}: Pre-trained classification models are usually trained only to achieve a high accuracy rate, without attention to predictive uncertainty. In fact, many of them are released without their training data, which removes the possibility of retraining them to calibrate, as with the publicly available PyTorch pre-trained models. UTS makes it possible to use these models in decision-making applications by calibrating them on test data without the need to label it. \end{itemize} \section{Related Work}\label{section:2} Calibration of predictive uncertainty for DNNs has been widely investigated in the recent literature. Calibration methods can be categorized into two groups: probabilistic and post-processing approaches. \textbf{Probabilistic approaches} refer to methods that use Bayesian theory \citep{bernardo2009bayesian} to estimate the conditional distribution of the data.
As exact Bayesian inference is not practical, a variety of approximations have been proposed to make Bayesian deep networks tractable, such as the Laplace approximation \citep{mackay1992bayesian, ritter2018scalable,ritter2018online,kirkpatrick2017overcoming}, Variational Bayesian methods \citep{molchanov2017variational,louizos2017multiplicative,blundell2015weight,louizos2016structured,wen2018flipout} and Markov Chain Monte Carlo (MCMC) \citep{neal2012bayesian,balan2015bayesian,chen2014stochastic}. MC-dropout \citep{gal2016dropout} replaces complicated sampling with simple dropout in the training and test phases, and has been shown to approximate Variational Bayesian inference. Ensembles of DNNs \citep{lakshminarayanan2017simple} are another straightforward probabilistic approach that can achieve better-calibrated results than MC-dropout. This approach is appropriate for parallel computing, with multiple DNNs running at the same time; however, keeping the models in memory at test time incurs high memory complexity. \textbf{Post-processing approaches} are much less complex, albeit less accurate, than probabilistic calibration. In post-processing approaches, the main idea is to decrease the miscalibration of the network by minimizing a calibration loss \citep{gneiting2007strictly} such as NLL. During training, NLL is used to simultaneously increase accuracy and decrease miscalibration; however, it easily overfits to confidence and makes the network overconfident \citep{guo2017calibration}. Post-processing approaches such as TS, Platt Scaling \citep{ platt1999probabilistic}, Histogram Binning \citep{zadrozny2001obtaining}, Isotonic Regression \citep{zadrozny2002transforming}, and Bayesian Binning into Quantiles \citep{naeini2015obtaining} fine-tune the softmax layer while keeping the DNN's weights unchanged.
They do not need to retrain the deep network from scratch; they only need to find the best parameter of a softmax-softening function by minimizing a calibration loss (such as NLL) on a small validation set. Temperature Scaling is the state of the art among post-processing approaches; it uses NLL as the loss function. It uses only one parameter $T$ to rescale the logit layer and soften the softmax output. It can therefore calibrate the model with minimal time and memory complexity while leaving the accuracy unchanged. These features lead us to focus on TS and to propose a robust post-processing solution for domain shift based on the TS framework. \textbf{Robustness to domain shift}: Previously, results of calibrated models were also reported for different domains, such as Out-Of-Distribution (OOD) samples and adversaries \citep{lakshminarayanan2017simple,ritter2018scalable}, to show that a model is uncertain about what it has not learned before. Recently, the importance of the domain shift problem for calibration has been recognized, and calibration methods have been assessed under domain shift conditions \citep{ovadia2019can,maddox2019simple}. The concept of domain shift is different from adversaries and OOD. In the case of OOD, the training and test domains have completely different task distributions; in the case of adversaries, the distribution shift between training and test is crafted with the goal of fooling the classifier. In domain shift, the training and test domains are distributionally different but related. The relation between the two domains can be used as prior knowledge to help improve accuracy or achieve better calibration. In the calibration literature, to the best of our knowledge, there is no work specifically designed to calibrate a model under domain shift assumptions.
In this paper we focus on covariate shift, the best-known domain shift setting in image classification, and propose UTS as a robust calibration method for it. \section{Preliminaries}\label{section:3} In this section, we define the domain shift and calibration setup to clarify the UTS objectives. We then explain why NLL can be used as a calibration loss and when optimizing NLL leads to a calibrated model under domain shift settings. Finally, we analyze in depth the post-processing method TS, which uses NLL as a calibration loss. We show that TS can be a calibration solution robust to domain shift if it uses labeled samples from the test domain to perform the calibration. We discuss the sensitivity of TS to sample labels as a preliminary to proposing the UTS method in the next section. \subsection{Problem Setup}\label{section:3.1} Under the domain shift assumptions, the goal of calibration in this work is to improve the uncertainty estimation of a pre-trained model on different shifted domains. In this setting, $q_s({\mathbf{x}},y)$ is the ground-truth distribution of the \textbf{source domain} and $q_t({\mathbf{x}},y)$ is the ground-truth distribution of the \textbf{target domain}, where ${\mathbf{x}}\sim\mathcal{X}\in{\mathbb{R}}^d$ and $y\in\{1,2,\ldots,K\}$. \textbf{In the setting of domain shift, the source and target domains have different but related distributions}. The relation between the domains is given by the covariate shift assumption \citep{adel2015probabilistic}: the joint distributions differ, $q_s({\mathbf{x}},y)\not=q_t({\mathbf{x}},y)$, while the conditional distributions agree, $q_s(y|{\mathbf{x}})=q_t(y|{\mathbf{x}})$; correspondingly, the task distributions agree, $q_s(y)=q_t(y)$, and the marginal distributions differ, $q_s({\mathbf{x}})\not=q_t({\mathbf{x}})$. Let $d({\mathbf{x}})=\{S_y({\mathbf{x}}),\hat{y}\}$ denote the pre-trained model, in which $\hat{y}$ is the class prediction and $S_y({\mathbf{x}})$ is its associated confidence.
In the domain shift setting, the model $d(\cdot)$ is a DNN trained on the source domain and tested on the target domain. Here, $S_y({\mathbf{x}})$ is the output of the softmax layer, which is calibrated when $S_y({\mathbf{x}})= q_t(y|{\mathbf{x}})$. Miscalibration of DNNs can be modeled in different ways. Temperature Scaling models miscalibration as a rescaling of the logit layer by a factor $T^*$; its objective is to find the value $T^*$ that rescales the logit layer back and makes the model calibrated. More specifically, the calibrated output of TS is defined as $S_y({\mathbf{x}};{T^*})=\exp(\frac{{\textnormal{f}}_y({\mathbf{x}})}{T^*})/ \sum_{j=1}^K \exp(\frac{{\textnormal{f}}_j({\mathbf{x}})}{T^*})$, where ${\mathbf{f}}({\mathbf{x}})=[{\textnormal{f}}_1({\mathbf{x}}),{\textnormal{f}}_2({\mathbf{x}}),\ldots,{\textnormal{f}}_K({\mathbf{x}})]^\top$ is the logit layer of the model $d({\mathbf{x}})$. In this paper, adopting the same definition of miscalibration as TS, we propose UTS under the domain shift condition. \textbf{The UTS objective} is to find the scaling factor $T^*$ such that $S_y({\mathbf{x}};T^*)=q_t(y|{\mathbf{x}})$, given access to the source pre-trained model $d(\cdot)$, an unlabeled calibration set ${\mathbb{C}}=\{{\mathbf{x}}_i\}_{i=1}^L\sim q_t({\mathbf{x}})$, and the known task distribution $q_s(y)$. \subsection{Robustness to Domain Shift with the NLL Loss Function}\label{section:3.2} To calibrate a model, we first need to evaluate the quality of the model's predicted uncertainty. This is challenging, as the ground truth of the uncertainty estimate is usually not available. Accordingly, scoring rules are defined to measure the quality of predictive uncertainty \citep{gneiting2007strictly}. Scoring rules are numerical scores that rank the distribution prediction $p_\theta(y|{\mathbf{x}})$ by assigning lower scores to better predictions of the true distribution $q(y|{\mathbf{x}})$.
Let a scoring rule be a function $R(p_\theta,({\mathbf{x}},y))$ that evaluates the quality of the predictive distribution $p_\theta(y|{\mathbf{x}})$ based on samples $({\mathbf{x}},y)\sim q({\mathbf{x}},y)$, where $q({\mathbf{x}},y)$ is the true distribution of the data. The expected scoring rule is defined by $R(p_\theta,q)=\int q({\mathbf{x}},y)R(p_\theta,({\mathbf{x}},y))dy d{\mathbf{x}}$. A \textbf{proper scoring rule} is one for which $R(q,q)\leq R(p_\theta,q)$, with equality if and only if $p_\theta(y|\mathbf{x})=q(y|\mathbf{x})$ for all samples. Negative Log-Likelihood (NLL) is a proper scoring rule by Gibbs' inequality: since $\mathbb{E}_{q({\mathbf{x}},y)}\left[\log p_{\theta}(y|{\mathbf{x}})\right]\leq\mathbb{E}_{q({\mathbf{x}},y)}\left[\log q(y|{\mathbf{x}})\right]$, taking $R(p_\theta,({\mathbf{x}},y))=-\log p_\theta(y|{\mathbf{x}})$ gives $R(q,q)\leq R(p_{\theta},q)$. Therefore, minimizing NLL with respect to $\theta$ on samples generated from the distribution $q({\mathbf{x}},y)$ drives $p_{\theta}(y|{\mathbf{x}})\rightarrow{q(y|{\mathbf{x}})}$. Under the domain shift assumption, as the training and test domains have different distributions, the final goal of calibration is $p_{\theta}(y|{\mathbf{x}})=q_t(y|{\mathbf{x}})$. When NLL is used as the loss function, minimizing it on samples generated from the test domain yields $p_{\theta}(y|{\mathbf{x}})\rightarrow{q_t(y|{\mathbf{x}})}$, which makes the method robust to domain shift. One of the post-processing methods that uses NLL as the loss function is TS; therefore, TS has the ability to become robust to domain shift. \subsection{Temperature Scaling Analysis}\label{section:3.3} TS \citep{guo2017calibration} is the state-of-the-art post-processing approach, which rescales the logit layer of a deep model by a parameter $T$ called the temperature. TS softens the output of the softmax layer and makes it more calibrated.
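As an illustration of this rescaling, the temperature-scaled softmax can be sketched numerically as follows (the logits are arbitrary; this is only a sketch of the formula, not the authors' code):

```python
import numpy as np

def softmax_T(logits, T):
    """Temperature-scaled softmax: S_y(x; T) = exp(f_y / T) / sum_j exp(f_j / T)."""
    z = logits / T
    z = z - z.max()          # shift for numerical stability (does not change the result)
    e = np.exp(z)
    return e / e.sum()

f = np.array([2.0, 1.0, 0.1])    # arbitrary logits
p1 = softmax_T(f, 1.0)           # T = 1 recovers the ordinary softmax
p2 = softmax_T(f, 2.0)           # T > 1 softens the distribution
# softening lowers the top confidence: p2.max() < p1.max()
```

Note that the argmax, and hence the accuracy, is unchanged by any $T>0$; only the confidence is adjusted.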
The best value of $T$ is obtained by minimizing the NLL loss function (Sec.~\ref{section:3.2} explains why minimizing NLL leads to a more calibrated model) with respect to $T$, subject to $T>0$, on the calibration set, as defined in Eq.~(\ref{equation:1}): \begin{equation} \begin{split} T_{TS}^* =&\operatorname*{arg\,min}_{T} \overbrace{\left(-\sum_{i=1}^L\log\big(S_{y_i}({\mathbf{x}}_i;T)\big)\right)}^{\text{NLL}}\hspace {3mm} s.t: T>0, \hspace {3mm} \{\mathbf{x}_i,y_i\}^L_{i=1}\in {\mathbb{C}} \sim q({\mathbf{x}},y), \end{split} \label{equation:1} \end{equation} where $S_{y_i}({\mathbf{x}}_i;T)= \exp({\frac{{\textnormal{f}}_{y_i}({\mathbf{x}}_i)}{T}})/ \sum_{j=1}^K \exp(\frac{{\textnormal{f}}_j({\mathbf{x}}_i)}{T})$ is the softened softmax obtained by applying the parameter $T$ to the logit layer ${\textnormal{f}}_j({\mathbf{x}})$. TS has the lowest time and memory complexity, of order $\mathcal{O}(1)$, among calibration approaches, as it optimizes only one parameter $T$ on a small labeled calibration set. Having only one parameter makes TS efficient and practical, and also prevents overfitting to the NLL loss when it is optimized on a small calibration set. TS has previously been applied for calibration \citep{guo2017calibration}, knowledge distillation \citep{hinton2015distilling}, and enhancing the output of DNNs for better discrimination between in- and out-of-distribution samples \citep{liang2017enhancing}. TS models miscalibration as a rescaling factor in the logit layer. Setting the derivative of the NLL in Eq.~(\ref{equation:1}) with respect to $T$ to zero, we obtain: \begin{equation} \sum_{i=1}^L{\textnormal{f}}_{y_i}({\mathbf{x}}_i) = \sum_{i=1}^L\sum_{k=1}^K{\textnormal{f}}_{k}({\mathbf{x}}_i)S_{k}({\mathbf{x}}_i;T^*_{TS}).
\label{equation:2} \end{equation} This shows that, given the true labels of the samples, TS selects the value of $T$ that maximizes $S_{k}(\mathbf{x}_i;T)$ for $k=y_i$ and minimizes $S_{k}(\mathbf{x}_i;T)$ for all other $k\not=y_i$. Therefore, for correctly classified samples, where $y_i = \arg\max_yS_{y}(\mathbf{x}_i)$, $T$ approaches $0$ to push the confidence of that class toward $1$; for misclassified samples, $T$ goes toward $\infty$ to decrease the confidence of the predicted label. The balance between the correctly classified and misclassified samples yields the optimal value of $T$. Since TS uses NLL as the loss function, selecting the calibration set from the test domain instead of the training domain makes TS robust to the domain shift problem. We refer to this approach as \textbf{TS-Target}. TS, however, is highly dependent on the labels of the samples, and labeling the test samples is a challenging task. When the calibration set contains data with labeling noise or unlabeled samples, TS loses the balance needed to find the optimal $T^*_{TS}$ and cannot calibrate the network successfully. Later, in Sec.~\ref{section:5.3}, we will show that even a small portion of label noise strongly distorts the TS results. This motivates Unsupervised Temperature Scaling, which makes TS independent of labeled data and robust to domain shift. \section{Unsupervised Temperature Scaling}\label{section:4} Under the assumptions of Sec.~\ref{section:3.1}, the main objective of UTS is to find $T^*_{UTS}$ using the unlabeled calibration set. UTS, like TS, uses NLL as the proper scoring rule to minimize the calibration gap and find the optimal value of $T$. The first step is to make NLL independent of labeled data.
The NLL loss function can be rewritten with a focus on the per-class distributions, formalized as: \begin{equation} \begin{split} \label{equation:3} \text{NLL} = {-\sum_{k=1}^K\sum_{({\mathbf{x}}_i,y_i)\sim q({\mathbf{x}},y=k)}\log\big(S_{y_i}({\mathbf{x}}_i;T)\big)}. \end{split} \end{equation} In Eq.~(\ref{equation:3}), NLL is a sum over $K$ different sample sets, generated from the class distributions $q({\mathbf{x}},y=k)$ with $k\in\{1,2,\ldots,K\}$. When the labels of the samples are available, they guide the selection of the sample set for each class distribution. In the absence of labels, the main question is how to select the samples generated from $q_t({\mathbf{x}},y=k)$ and compute NLL. To address this challenge, instead of selecting samples by their labels, UTS applies weights to the samples. The weight function $\hat{W}_k({\mathbf{x}};w^*)$ represents the probability that sample ${\mathbf{x}}$ is drawn from $q({\mathbf{x}},y=k)$. Later, in Sec.~\ref{section:4.1}, we give the specific details of the weight function $\hat{W}_k({\mathbf{x}};w^*)$ and how to approximate it. With these weights applied to the samples, the UTS loss function is defined as the Weighted NLL (WNLL): \begin{equation} \begin{split} &T^*_{UTS} = \operatorname*{arg\,min}_{T}\overbrace{\left(-\sum_{k=1}^{K}\sum_{i=1}^{L}\hat{W}_k({\mathbf{x}}_i;w^*)\log \left(S_{k}(\mathbf{x}_i;T)\right)\right)}^{\text{WNLL}} \quad s.t: T>0,\quad \{\mathbf{x}_i\}^L_{i=1} \in {\mathbb{C}}\sim q({\mathbf{x}}) \end{split} \label{equation:4} \end{equation} \begin{figure*}[t!] \centering \includegraphics[width=12cm, height = 4cm]{Images/uts.png} \caption{A color view of the weight function in a three-class classification problem. For samples classified as class $\hat{y}=k$, and for samples with $\hat{y}\not=k$ located near the decision boundary, $\hat{W}_k({\mathbf{x}},w^*) = 1$, shown with a darker hue.
For samples with $\hat{y}\not =k$ that are far from the decision boundary, $\hat{W}_k({\mathbf{x}},w^*)\rightarrow{0}$, shown with a lighter hue. The decision boundary is illustrated by the black line.} \label{figure:uts} \end{figure*} \subsection{Weight Function \texorpdfstring{$\hat{W}_k(\cdot;\cdot)$}{Lg}}\label{section:4.1} We start the discussion of the weight function with a fact that follows from Bayes' rule \citep{jin2017introspective}: \begin{equation} q({\mathbf{x}},y=k) = \frac{q(y=k|{\mathbf{x}})}{q(y\not=k|{\mathbf{x}})}q({\mathbf{x}},y\not=k) \label{equation:5} \end{equation} When a sample has true label $y=k$, it is drawn from the distribution $q({\mathbf{x}},y=k)$. However, Eq.~(\ref{equation:5}) shows that even samples with true label $y\neq k$ can be treated as samples drawn from $q({\mathbf{x}},y=k)$ by applying the weight ${q(y=k|{\mathbf{x}})}/{q(y\not=k|{\mathbf{x}})}$. Therefore, we can simply use a weight of $1$ for samples with true label $y=k$ and a weight of ${q(y=k|{\mathbf{x}})}/{q(y\not=k|{\mathbf{x}})}$ for samples with $y \neq k$, turning all samples in the calibration set into samples drawn from $q({\mathbf{x}},y=k)$. We thus define the weight function as in Eq.~(\ref{equation:6}): \begin{equation} W_k({\mathbf{x}}_i)=\begin{cases} 1, & \text{if}\quad {\mathbf{x}}_i\sim q({\mathbf{x}},y=k)\\ \frac{q(y=k|{\mathbf{x}}_i)}{1-q(y=k|{\mathbf{x}}_i)}, & \text{otherwise}. \end{cases} \label{equation:6} \end{equation} To compute $W_k(\cdot)$ for the samples, we need the ground-truth distribution $q({\mathbf{x}},y=k)$, which is not available in the UTS setting. However, we can approximate it empirically using the UTS assumptions (Sec.~\ref{section:3.1}). \textbf{Proposition 1}: \textit{Let $d(\cdot)$ be a model whose miscalibration is a rescaling of the logit layer by a factor ${w^*}$.
Then, with known $q_t(y)$, the empirical approximation of $W_k(\cdot)$ equals $\hat{W}_k(\cdot;\cdot)$, defined as}: \begin{equation} \hat{W}_k({\mathbf{x}}_i;w^*)=\begin{cases} 1, & \text{if} \quad \hat{y}_i=k\\ {1}/{\exp(\frac{1}{w^*}\log(S_{y=k}({\mathbf{x}}_i;\frac{1}{w^*})^{-1}-1))}, & \text{otherwise}. \end{cases} \label{equation:7} \end{equation} where $w^*$ is: \begin{equation} w^* = \arg\min_w\left(\sum_{k=1}^{K}\sum_{i=1}^L \hat{W_k}({\mathbf{x}}_i;w)-q_t(y=k)\right)^2, \quad \{\mathbf{x}_i\}_{i=1}^L\in {\mathbb{C}}\sim q_t({\mathbf{x}}) \label{equation:8} \end{equation} \begin{algorithm}[t!] \SetAlgoLined \textbf{Require:}{ $q_s(y)$: task distribution}\\ \textbf{Require:}{ $d(\cdot)$: the pre-trained model}\\ \textbf{Require:}{ ${\mathbb{C}}\sim q_t({\mathbf{x}})$: unlabeled calibration set drawn from the test domain}\\ 1: Find the optimal $w^* = \arg\min_w(\sum_{k=1}^{K}\sum_{i=1}^L \hat{W_k}({\mathbf{x}}_i;w)-q(y=k))^2$\\ 2: Find the optimal $T^*_{UTS} =\operatorname*{arg\,min}_{T}{\left(-\sum_{k=1}^{K}\sum_{i=1}^{L}\hat{W}_k({\mathbf{x}}_i;w^*)\log \left(S_{k}(\mathbf{x}_i;T)\right)\right)}$\\ 3: Calibrate the softmax output of model $d(\cdot)$ by: $S_{y}({\mathbf{x}};T^*_{UTS})= \exp({\frac{{\textnormal{f}}_{y}({\mathbf{x}})}{T^*_{UTS}}})/ \sum_{j=1}^K \exp(\frac{{\textnormal{f}}_j({\mathbf{x}})}{T^*_{UTS}})$. \caption{Unsupervised Temperature Scaling} \label{algorithm:1} \end{algorithm} Proposition 1 is valid for the domain shift setting under the covariate shift assumption, and also for the case where there is no distribution shift between the training and test datasets. Proofs for both settings are provided in {Appendix~\ref{appendix:A}}. Fig.~\ref{figure:uts} illustrates a schematic view of the weight function $\hat{W}_k(\cdot;\cdot)$ in the feature space. The color hue is correlated with the weight values.
For samples classified as $\hat{y}=k$, and for samples with $\hat{y}\not=k$ located near the decision boundary, the weight equals $1$. The weight decreases as samples fall further from the decision boundary, indicating that they are less likely to have been drawn from the distribution $q({\mathbf{x}},y=k)$. \textbf{Time Complexity of UTS:} Computing $\hat{W}_k(\cdot;\cdot)$ is a one-parameter optimization with time complexity $\mathcal{O}(1)$. After approximating the weight function $\hat{W}_k(\cdot;\cdot)$, UTS minimizes the WNLL (Eq.~(\ref{equation:4})), another one-parameter optimization with the same complexity, to find the optimal $T^*_{UTS}$; the total time complexity of UTS is therefore $\mathcal{O}(1)$. Algorithm~\ref{algorithm:1} summarizes UTS. \textbf{Validity of UTS in Practice:} UTS is valid when there is no domain shift, or when there is covariate shift between the domains. Covariate shift refers to the case where the test and training datasets differ in representation but keep the same proportion of each class occurrence. In many applications, such as medical image classification, the probability of each class occurring stays the same between the training and test phases, i.e., $q_s(y) = q_t(y)$, while the illumination, capturing noise, resolution, image size and viewpoint can vary between the two domains, i.e., $q_s({\mathbf{x}}) \neq q_t({\mathbf{x}})$. Therefore, in classification problems under covariate shift, or without any shift, UTS only needs the empirical ratio of the number of occurrences of each class to the total number of samples in the training set, which it uses as $q_t(y)$ to calibrate the model. \section{Experiments}\label{section:5} We conduct experiments to analyze the behavior of UTS in comparison with other methods in two different calibration scenarios.
First, we compare UTS with several post-processing methods that use NLL as the loss function in an experiment with \textbf{the same training and test domain distributions}. This experiment is designed as a proof of concept to show that the weighted NLL of UTS can indeed calibrate the model without access to the labels. Second, to demonstrate the success of UTS in calibrating the model under domain shift, we compare UTS, TS and three probabilistic approaches on \textbf{training and test domains with different distributions}. We also study the results of \textbf{TS-Target}, a variant of TS that selects the calibration set from the target (test) domain. TS-Target has the most accurate uncertainty prediction among all baselines under shifted domain distributions. However, as we show in Sec.~\ref{section:5.3}, it suffers from labeling noise, which justifies our effort to make TS unsupervised. \subsection{Calibration with the Same Training and Test Domains}\label{section:5.1} Here we consider the case where the training and test domains have the same distribution. Our goal is to show that UTS can calibrate models without labels when there is no domain shift. \textbf{Experiment Setup} We compare UTS with several post-processing baselines, namely Temperature Scaling (TS) (\cite{guo2017calibration}) and Matrix and Vector Scaling (\cite{platt1999probabilistic}), on a range of state-of-the-art deep convolutional networks of varying depth: ResNet (\cite{he2016deep}), WideResNet (\cite{zagoruyko2016wide}), DenseNet (\cite{iandola2014densenet}), LeNet (\cite{lecun1998gradient}), and VGG (\cite{simonyan2014very}). We test the methods on several datasets: CIFAR-10 and CIFAR-100 (\cite{krizhevsky2009learning}), SVHN (\cite{netzer2011reading}), MNIST (\cite{lecun1998mnist}), and Caltech-UCSD Birds (\cite{wah2011caltech}).
We use the data pre-processing, training procedures and hyper-parameter tuning for each dataset as described in the respective references. To set up the calibration set, we randomly select $20\%$ of the test dataset and evaluate on the remainder as the test set. We repeat each experiment $20$ times independently and report the mean NLL as the calibration metric. Further details of the experimental setup, baselines, datasets and calibration metrics are provided in Appendix~\ref{appendix:B}. \textbf{Results} In Table~\ref{table:nll}, the calibration results based on NLL are compared between TS and UTS, which have only one parameter for fine-tuning the softmax output layer, and Matrix and Vector Scaling, which apply a linear transformation to the logit layer. In all cases, TS calibrates the network better than all the other methods. Although the results of UTS are not better than those of TS, UTS improves calibration for all dataset-model combinations. This means that the weighted NLL, as an approximation of the NLL computed from unlabeled samples, can properly calibrate the model, even though it is not as accurate as the NLL with access to labels. Although Matrix and Vector Scaling can define more complex functions to soften the softmax layer, they tend to overfit the confidence to the validation set. We also provide complete results with mean and standard deviation for accuracy as well as other standard calibration metrics (NLL, ECE and Brier score) in Appendix~\ref{appendix:C.1}. Explanations of ECE and the Brier score are given in Appendix~\ref{appendix:B.3}. \input{Table/table_nll.tex} \subsection{Calibration Under Domain Shift Setting}\label{section:5.2} In this section, we divide the experiments into two parts. First, we compare UTS to the uncalibrated model (UNC), TS and TS-Target on different domain shift scenarios.
The goal of this experiment is to measure the calibration gap between UTS and TS-Target, which can be considered the ground truth for UTS when labels are available. We then also evaluate the robustness of UTS, which was specifically designed for domain shift, against several probabilistic approaches, which only consider calibration in the same-distribution setting. The goal of this experiment is to show that UTS is indeed robust across different domain shift scenarios. \textbf{Experiment Setup} We follow the same experimental setup as in Sec.~\ref{section:5.1} but with different domain shift assumptions. We use the benchmark proposed specifically for the domain shift problem in (\cite{ovadia2019can}). It models distribution shift by applying operations such as rotation and translation (rolling), and by applying intensity corruptions of different severity levels proposed in (\cite{hendrycks2019benchmarking}). In the first part of this section, we compare UTS to the uncalibrated model, TS and TS-Target on the MNIST and CIFAR-10 datasets, applying rotation, pixel translation and Gaussian noise to the test domain. In the second part, we add probabilistic baselines, namely LL-Dropout (\cite{gal2016dropout}), SVI (\cite{blundell2015weight}) and Ensemble (\cite{lakshminarayanan2017simple}), with more variations of domain shift in terms of corruption intensities. Specific details of the baselines and experiments can be found in Appendix~\ref{B.1}. \textbf{Results} As we can see in Fig.~\ref{figure:postprocessing}, the accuracy of the model degrades under domain shift. TS-family approaches do not change the accuracy during calibration; therefore, all methods have the same accuracy as the uncalibrated model. TS-Target has the same setting as TS, with the difference that it uses a labeled calibration set from the target domain. Thus, in the ideal situation, the uncertainty prediction of UTS would reach the TS-Target performance.
We can see that UTS works better than the uncalibrated model (UNC) and TS, which uses source data for calibration, under domain shift. The gap between UTS and TS-Target is interestingly small in terms of the Brier score. More results for other domain shifts are provided in Appendix~\ref{appendix:C.2}. We also analyze the sensitivity of UTS to the number of calibration samples, compared to TS and TS-Target, in Appendix~\ref{appendix:C.4}; UTS obtains stable results when the number of samples is decreased from $20\%$ to $2.5\%$ of the test dataset size. \input{ImagesInput/postprocessing_images.tex} \input{ImagesInput/probabilistic_images.tex} In Fig.~\ref{figure:probabilistic} we compare the probabilistic approaches to UTS, the uncalibrated model (UNC), TS and TS-Target. Since all calibration metrics depend on the accuracy of the models, controlling for accuracy is important for a fair comparison between methods. Otherwise, better calibration may be the result of better accuracy rather than of the calibration itself. Accordingly, we apply the shifts to the datasets, check the accuracy of UTS against the other approaches, and select the domain shift settings in which the accuracy of UTS is close to that of the others. As we can see, for different combinations of models and datasets, UTS achieves better results than all of the probabilistic approaches and has only a small gap to TS-Target, which achieves the best results. This shows that using test samples to fine-tune the calibration toward the test distribution helps the model to be robust to domain shift. As mentioned before, labeling the test samples is not a trivial task; therefore, directly using TS-Target may not be possible in many cases, which underlines the importance of an unsupervised approach like UTS.
In the next section, we show that under weak (noisy) supervision of the test samples, TS-Target fails to calibrate; it needs exact labeling of the test domain samples, which may be impractical in many cases. \subsection{TS Sensitivity to Labeling Noise}\label{section:5.3} When labeled samples are available for calibration, TS shows the best results both with and without domain shift. In this section, we investigate the sensitivity of TS to labeling noise. We randomly alter the labels of the calibration set at different rates and evaluate the calibration success of TS, UTS and the uncalibrated model accordingly. As we can see in Fig.~\ref{figure:noise}, TS is extremely sensitive to labeling noise. Therefore, for TS to calibrate successfully in shifted domains, exact labeling of the test samples is essential, which may not be feasible in many applications. UTS, being an unsupervised calibration method, is robust to labeling noise and completely removes the challenge of labeling the test samples. More results for other dataset-model combinations are provided in Appendix~\ref{appendix:C.3}. \input{ImagesInput/noise_images.tex} \section{Discussion and Future Work}\label{section:6.0} In this paper, we propose UTS, a robust unsupervised post-processing method for the domain shift calibration challenge. UTS is a member of the TS family of approaches, which have low time and memory complexity and can calibrate with a small number of samples while keeping the accuracy intact. UTS utilizes a new calibration loss function, the weighted NLL, which is independent of the labels. The computational complexity of the weighted NLL is of the same order as that of the NLL, which makes UTS a fast and practical calibration solution. Since UTS uses test samples to adjust the uncertainty, it is robust to domain shift and can calibrate off-the-shelf models whose training samples are no longer available.
Recent studies \citep{maddox2019simple, kumar2018trainable} note that combining TS with probabilistic approaches can further improve the uncertainty prediction of already calibrated models. Therefore, we believe this work can be extended in the direction of combining UTS with such approaches to achieve more robust domain shift solutions. We also see another direction for this work in exploring UTS under a wider range of domain shift assumptions. In this paper, we study UTS only under the covariate shift assumption; in the future, it could be extended to other shift scenarios such as out-of-distribution and adversarial samples. \section{Proof of Proposition 1}\label{appendix:A} First, we establish the validity of Proposition 1 in the setting with no distribution shift.\\ {\textbf{Proposition 1}:} \textit{Let $W_k(\cdot)$ be a weight function defined as:} \begin{equation*} W_k({\mathbf{x}}_i)=\begin{cases} 1, & \text{if}\quad {\mathbf{x}}_i\sim q({\mathbf{x}},y=k)\\ \frac{q(y=k|{\mathbf{x}}_i)}{1-q(y=k|{\mathbf{x}}_i)}, & \text{otherwise}. \end{cases} \end{equation*} \textit{and let $d(\cdot)$ be a model that is miscalibrated by a rescaling of its logit layer by a factor ${w^*}$. Then, with known task distribution $q(y)$, the empirical approximation of $W_k(\cdot)$ equals $\hat{W}_k(\cdot;\cdot)$, defined as}: \begin{equation*} \hat{W}_k({\mathbf{x}}_i,w^*)=\begin{cases} 1, & \text{if}\quad \hat{y}_i=k\\ {1}/{\exp(\frac{1}{w^*}\log(S_{y=k}({\mathbf{x}}_i)^{-1}-1))}, & \text{otherwise}. \end{cases} \end{equation*} \textit{where $w^*$ is:} \begin{equation*} w^* = \arg\min_w\left(\sum_{k=1}^{K}\sum_{i=1}^L \hat{W_k}({\mathbf{x}}_i;w)-q(y=k)\right)^2 \quad \mathbf{x}_i\in {\mathbb{C}}\sim q({\mathbf{x}}) \end{equation*} \textbf{Proof}: \textit{For simplicity, we split the proof into two parts. First, we show that for a known value of $w^*$, $\hat{W}_k(\cdot;\cdot)$ is an approximation of $W_k(\cdot)$.
In other words, ${1}/{\exp(\frac{1}{w^*}\log(S_{y=k}({\mathbf{x}})^{-1}-1))}={q(y=k|{\mathbf{x}})}/{q(y\not=k|{\mathbf{x}})}$:} \textit{The softmax output of an uncalibrated model $d(\cdot)$ whose logit layer is rescaled by a factor ${w^*}$ can be formulated as:} \begin{equation*} \begin{split} S_{y=k}({\mathbf{x}}) &= \frac{\exp(w^*{\textnormal{f}}_k({\mathbf{x}}_i))}{\sum_{j=1}^K \exp(w^*{\textnormal{f}}_j({\mathbf{x}}_i))} \\ S_{y\not=k}({\mathbf{x}}) &= 1-S_{y=k}({\mathbf{x}}) \end{split} \end{equation*} \textit{Therefore, the calibrated output is defined as:} \begin{equation*} \begin{split} S_{y=k}({\mathbf{x}};w^*) &= \frac{\exp({\textnormal{f}}_k({\mathbf{x}}_i))}{\sum_{j=1}^K \exp({\textnormal{f}}_j({\mathbf{x}}_i))} = q(y=k|{\mathbf{x}}) \\ S_{y\not=k}({\mathbf{x}};w^*) &= 1-S_{y=k}({\mathbf{x}};w^*) = q(y\not=k|{\mathbf{x}}) \end{split} \end{equation*} \textit{Considering these definitions:} \begin{equation*} \begin{split} \frac{1}{ \exp \left (\frac{1}{w^*}\log(S_{y=k}({\mathbf{x}})^{-1}-1) \right) } =& \frac{1}{ \exp \left ( \frac{1}{w^*} \log \left (\frac{\sum_{j=1}^K \exp(w^*{\textnormal{f}}_j({\mathbf{x}}_i)) - \exp(w^* {\textnormal{f}}_k({\mathbf{x}}_i))} {\exp(w^*{\textnormal{f}}_k({\mathbf{x}}_i))} \right ) \right ) } \\ &= \frac{1}{ \exp \left ( \log \left (\frac{\sum_{j=1}^K \exp({\textnormal{f}}_j({\mathbf{x}}_i)) - \exp( {\textnormal{f}}_k({\mathbf{x}}_i))} {\exp({\textnormal{f}}_k({\mathbf{x}}_i))} \right ) \right ) }\\ &= \frac{1}{ \left (\frac{\sum_{j=1}^K \exp({\textnormal{f}}_j({\mathbf{x}}_i)) - \exp( {\textnormal{f}}_k({\mathbf{x}}_i))} {\exp({\textnormal{f}}_k({\mathbf{x}}_i))} \right ) }\\ &= \frac{\exp({\textnormal{f}}_k({\mathbf{x}}_i))}{ \left (\sum_{j=1}^K \exp({\textnormal{f}}_j({\mathbf{x}}_i)) - \exp( {\textnormal{f}}_k({\mathbf{x}}_i)) \right ) }\\ &= \frac{\left(\frac{\exp({\textnormal{f}}_k({\mathbf{x}}_i))}{\sum_{j=1}^K\exp({\textnormal{f}}_j({\mathbf{x}}_i))}\right)}{\left(1 - \frac{\exp({\textnormal{f}}_k({\mathbf{x}}_i))}{\sum_{j=1}^K
\exp({\textnormal{f}}_j({\mathbf{x}}_i))}\right)}\\ &= \frac{S_{y=k}({\mathbf{x}};w^*)}{1-S_{y=k}({\mathbf{x}};w^*)} = \frac{q(y=k|{\mathbf{x}})}{1-q(y=k|{\mathbf{x}})} = \frac{q(y=k|{\mathbf{x}})}{q(y\not=k|{\mathbf{x}})} \end{split} \end{equation*} \textit{We consider $\hat{y}_i=k$ as a rough estimate of the condition ${\mathbf{x}}_i\sim q({\mathbf{x}},y=k)$. The accuracy of the uncalibrated model is the same as that of the calibrated one, since rescaling the logit layer does not change the predicted class. Therefore, we can use the prediction output of the uncalibrated model to roughly identify the samples with ${\mathbf{x}}\sim q({\mathbf{x}},y=k)$}. \textit{Now, in the second part, we show that $w^* = \arg\min_w\left(\sum_{k=1}^{K}\sum_{i=1}^L \hat{W_k}({\mathbf{x}}_i;w)-q(y=k)\right)^2$ where $\mathbf{x}_i\in {\mathbb{C}}\sim q({\mathbf{x}})$.} \textit{Referring to the definition of $W_k(\cdot)$, we can show:} \begin{equation*} \int_xW_k({\mathbf{x}})q({\mathbf{x}},y)d{\mathbf{x}}=q(y=k) \end{equation*} \textit{This means $\mathbb{E}_{q({\mathbf{x}},y)}[W_k({\mathbf{x}})] = q(y=k)$, and since $\hat{W}_k(\cdot;\cdot)$ is equal to $W_k(\cdot)$, empirically we can show $\sum_{{\mathbf{x}}^c_i\in {\mathbb{C}}}\hat{W}_k({\mathbf{x}}_i;w^*) = q(y=k)$. In this problem setting, we assume $q(y)$ is known. Therefore, $w^*$ can be found easily by minimizing $\left(\sum_{k=1}^{K}\sum_{i=1}^L \hat{W_k}({\mathbf{x}}_i;w)-q(y=k)\right)^2$}. \textit{We now show the validity of Proposition 1 under the covariate shift assumption.} {\textbf{Corollary 1}:} \textit{Considering the same weight function $W_k(\cdot)$ defined in Proposition 1, let $d(\cdot)$ be a model that is miscalibrated toward the target domain $q_t({\mathbf{x}},y)$ by a rescaling of its logit layer by a factor ${w^*}$. Assume covariate shift, where $q_s(y|{\mathbf{x}})=q_t(y|{\mathbf{x}})$, $q_s(y) = q_t(y)$ and $q_s({\mathbf{x}}) \not= q_t({\mathbf{x}})$.
Then, with known $q_s(y)$, the empirical approximation of $W_k(\cdot)$ equals $\hat{W}_k(\cdot;\cdot)$, defined as}: \begin{equation*} \hat{W}_k({\mathbf{x}}_i,w^*)=\begin{cases} 1, & \text{if}\quad \hat{y}_i=k\\ {1}/{\exp(\frac{1}{w^*}\log(S_{y=k}({\mathbf{x}}_i)^{-1}-1))}, & \text{otherwise}. \end{cases} \end{equation*} \textit{where $w^*$ is:} \begin{equation*} w^* = \arg\min_w\left(\sum_{k=1}^{K}\sum_{i=1}^L \hat{W_k}({\mathbf{x}}_i;w)-q_s(y=k)\right)^2 \quad \mathbf{x}_i\in {\mathbb{C}}\sim q_t({\mathbf{x}}) \label{Eq(A11)} \end{equation*} \textbf{Proof}:\textit{ The first part of the proof is exactly the same as in Proposition 1, noting that under the new assumptions $w^*$ is the scaling factor that miscalibrates the model toward the target domain. We therefore conclude: } \begin{equation*}\label{Eq(A16)} \begin{split} &= \frac{S_{y=k}({\mathbf{x}};w^*)}{1-S_{y=k}({\mathbf{x}};w^*)} = \frac{q_t(y=k|{\mathbf{x}})}{1-q_t(y=k|{\mathbf{x}})} = \frac{q_t(y=k|{\mathbf{x}})}{q_t(y\not=k|{\mathbf{x}})} \end{split} \end{equation*} \textit{For the second part of the proof, we have:} \begin{equation*} \int_xW_k({\mathbf{x}})q_s({\mathbf{x}},y)d{\mathbf{x}}=q_s(y=k) \end{equation*} \textit{Referring to the covariate shift assumption, where $q_s(y|{\mathbf{x}})=q_t(y|{\mathbf{x}})$ and $q_s(y) = q_t(y)$, we can deduce:} \begin{equation*} \int_xW_k({\mathbf{x}})q_t({\mathbf{x}},y)d{\mathbf{x}}=q_t(y=k) \end{equation*} \textit{This means $\mathbb{E}_{q_t({\mathbf{x}},y)}[W_k({\mathbf{x}})] = q_t(y=k)$, and since $\hat{W}_k(\cdot;\cdot)$ is equal to $W_k(\cdot)$, empirically we can show $\sum_{{\mathbf{x}}^c_i\in {\mathbb{C}}}\hat{W}_k({\mathbf{x}}_i;w^*) = q_t(y=k)$. In this problem setting, we assume $q_s(y)$ is known, which is equal to $q_t(y)$. Therefore, $w^*$ can be found easily by minimizing $\left(\sum_{k=1}^{K}\sum_{i=1}^L \hat{W_k}({\mathbf{x}}_i;w)-q_s(y=k)\right)^2$}.
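The algebraic identity established in the first part of the proof can also be checked numerically. Below is a minimal Python sketch for the binary case ($K=2$), where the manipulation of the log term is exact; the logits and the miscalibration factor $w^*$ are arbitrary illustrative values, not taken from the experiments:

```python
import math

def softmax(z):
    # numerically stable softmax over a list of logits
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

f = [1.0, 0.3]   # calibrated logits f(x) (illustrative values)
w = 2.5          # miscalibration factor w* (illustrative value)

S = softmax([w * v for v in f])   # uncalibrated softmax output S_{y=k}(x)
q = softmax(f)                    # calibrated probabilities q(y=k|x)

k = 0
# left-hand side of the identity: 1 / exp((1/w*) log(S_{y=k}(x)^{-1} - 1))
lhs = 1.0 / math.exp((1.0 / w) * math.log(1.0 / S[k] - 1.0))
# right-hand side: q(y=k|x) / q(y != k|x)
rhs = q[k] / (1.0 - q[k])
assert abs(lhs - rhs) < 1e-12
```

In the binary case the assertion holds to machine precision for any choice of logits and any $w^* > 0$.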
\section{Experimental Setups}\label{appendix:B} \subsection{Baselines}\label{B.1} \begin{itemize} \item \textit{Temperature Scaling} (\cite{guo2017calibration}): It is explained in Sec.~\ref{section:3.3}. \item \textit{Matrix and Vector Scaling} (\cite{platt1999probabilistic}): Matrix Scaling applies a linear transformation to the logits to soften them: \begin{equation} \begin{split} &S_{y=\hat{y_i}}({{\mathbf{x}}_i};\mathbf{\theta},{\mathbf{b}}) = \underset{k}{\text{max}} \ \sigma(\mathbf{\theta}.{\mathbf{f}}({\mathbf{x}}_i)+\boldsymbol{b})^{(k)}\\ & \hat{y_i} = \operatorname*{arg\,max}_{k} \ \sigma(\mathbf{\theta}.{\mathbf{f}}({\mathbf{x}}_i)+\boldsymbol{b})^{(k)}\\ \end{split} \end{equation} where $\sigma$ is the softmax function, which takes the logit layer ${\mathbf{f}}({\mathbf{x}})$ as input. The parameters $\mathbf{\theta}_{K\times K}$ and ${\mathbf{b}}_{K}$ are optimized with respect to the NLL on the validation set. Vector Scaling is a restricted version of Matrix Scaling in which $\mathbf{\theta}_{K\times K}$ is a diagonal matrix. \item \textit{LL-Dropout}: Monte-Carlo Dropout (\cite{gal2016dropout}); a pre-trained model trained with dropout rate $p = 0.5$ applied only to the activations before the last layer, keeping dropout active at test time with the same rate. \item \textit{Ensembles}: Ensembles of $10$ networks trained independently on the entire dataset using random initialization (\cite{lakshminarayanan2017simple}). \item \textit{SVI}: Stochastic Variational Bayesian Inference for deep learning (\cite{blundell2015weight,wen2018flipout}), with the specific training settings mentioned in (\cite{ovadia2019can}). \end{itemize} \subsection{Datasets}\label{appendix:B.2} We apply the calibration methods to different image classification datasets. For each experiment, the size of the validation (calibration) set is $20\%$ of the test set, selected randomly. All model-dataset combinations are trained on the corresponding training set.
\begin{enumerate} \item \textit{CIFAR-10} (\cite{krizhevsky2009cifar}): It contains 60000 32$\times$32 color images of 10 different objects, with 6000 images per class. The sizes of the training and test sets are 50000 and 10000, respectively. \item \textit{CIFAR-100} (\cite{krizhevsky2009cifar}): It has the same setting as CIFAR-10, except that it has 100 classes of different objects containing 600 images each. \item \textit{SVHN} (\cite{netzer2011reading}): It contains 32$\times$32 color images of the digits 0 to 9, with 73257 digits for training and 26032 digits for testing. \item \textit{MNIST} (\cite{lecun1998mnist}): It contains 28$\times$28 gray-scale images of the digits 0 to 9. It has 60,000 images for training and 10,000 images for testing. \item \textit{Caltech-UCSD Birds} (\cite{wah2011caltech}): It contains 11,788 color images of 200 different bird species. We randomly divided it into 7073 training and 4715 test samples. \end{enumerate} \subsection{Calibration Metrics}\label{appendix:B.3} \textit{Expected Calibration Error (ECE)}\\ ECE (\cite{naeini2015obtaining}) measures the average gap between accuracy and predicted confidence. It is calculated by partitioning the confidence range $[0\,,1]$ into $B$ equally spaced bins, assigning each sample to a bin $B_b$, $b\in\{1,\ldots,B\}$, according to its confidence, and then computing the weighted absolute difference between accuracy and confidence within each bin $B_b$. More specifically: \begin{equation} \text{ECE} = \sum_{b=1}^B{\frac{|B_b|}{N}}\Big|\text{acc}(B_b)-\text{conf}(B_b)\Big|, \label{Eq(10)} \end{equation}\\ where $N$ is the total number of samples. In this paper, we use $B=15$ when reporting ECE. ECE is not a differentiable function.
It is therefore usually not used as the loss function of post-processing approaches that calibrate the model with gradient descent methods. \\ \textit{Brier Score}\\ The Brier score (\cite{brier1950verification}) is a scoring rule for measuring the accuracy of predicted probabilities. It is computed as the squared error between the predicted probability vector and the one-hot encoding of the correct label. That is:\\ \begin{equation} B({\mathbf{x}}_i,y_i) = \frac{1}{K}\sum_{y=1}^K\left(S_y({\mathbf{x}}_i)-\delta(y-y_i)\right)^2 \label{Eq(11)} \end{equation}\\ \section{More Experimental Results}\label{appendix:C} \subsection{Tables of Accuracy, NLL, ECE and Brier Score}\label{appendix:C.1} We report additional results for the experiment of Sec.~\ref{section:5.1}, which evaluates the behavior of UTS in calibration with the same training and test domains. We report the accuracy, NLL, ECE and Brier score in Tables~\ref{table:acc_std}, \ref{table:nll_std}, \ref{table:ece_std}, and \ref{table:brier_std}, respectively. Notice that TS-family approaches keep the accuracy unchanged, while Matrix and Vector Scaling can change it. In two cases Matrix and Vector Scaling improve the accuracy; in general, however, they overfit the validation set and lose both accuracy and calibration. We report results for several calibration scores to show that UTS calibrates the model with respect to different evaluation metrics. NLL and the Brier score share a related definition of calibration, as both are proper scoring rules, whereas ECE uses a different definition. Detailed explanations of each score are given in Appendix~\ref{appendix:B.3}. As Table~\ref{table:ece_std} shows, UTS can even calibrate the model better than TS for two dataset-model combinations under the ECE definition of calibration.
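The ECE and Brier score of Appendix~\ref{appendix:B.3} (Eqs.~(\ref{Eq(10)}) and (\ref{Eq(11)})) used in these tables can be sketched in a few lines of Python. This is an illustrative implementation, not the exact evaluation code used for the experiments:

```python
def ece(conf, pred, labels, n_bins=15):
    """Expected Calibration Error, Eq. (10): n_bins equally spaced confidence bins."""
    n, err = len(conf), 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(conf) if lo < c <= hi]
        if idx:
            acc = sum(pred[i] == labels[i] for i in idx) / len(idx)
            avg_conf = sum(conf[i] for i in idx) / len(idx)
            err += len(idx) / n * abs(acc - avg_conf)
    return err

def brier(probs, labels):
    """Brier score, Eq. (11): squared error to the one-hot label, averaged over samples."""
    total = 0.0
    for p, y in zip(probs, labels):
        K = len(p)
        total += sum((p[j] - (1.0 if j == y else 0.0)) ** 2 for j in range(K)) / K
    return total / len(labels)
```

Here conf holds the top-class confidences, pred the predicted labels, labels the true labels, and probs the full per-class probability vectors; a perfectly confident and perfectly accurate classifier attains zero on both scores.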
\input{TableAppendix/table_acc_std.tex} \input{TableAppendix/table_nll_std.tex} \input{TableAppendix/table_ece_std.tex} \input{TableAppendix/table_brier_std.tex} \subsection{Comparing Different TS Family Approaches to UTS for Domain Shift Scenarios}\label{appendix:C.2} In this section, we provide further results comparing the behavior of UTS with TS and TS-Target under different domain shifts applied to the CIFAR-10 dataset. Details of the shifting operations can be found in \citep{hendrycks2019benchmarking}. We can see in Fig.~\ref{figure_append:postprocessing} that TS-Target is the most robust to domain shift in calibration, followed by UTS with a small gap. \input{ImagesInput/appendix_postprocessing_images.tex} \subsection{More Experimental Results for Sensitivity of TS to Labeling Noise}\label{appendix:C.3} In this section, we provide further experiments for additional models and datasets. As we can see in Fig.~\ref{figure:probabilistic}, TS shows strong sensitivity to labeling noise in the calibration set. UTS is completely robust to this noise, as it is an unsupervised method. This shows that if TS is to calibrate a model without readily labeled data, the labeling phase must be handled precisely; otherwise the TS results are not reliable. \input{ImagesInput/appendix_noise_images.tex} \subsection{Analysis of UTS Sensitivity to Number of Samples}\label{appendix:C.4} In this experiment, we show the impact of the number of available calibration samples on TS, UTS and TS-Target. As we can see in Fig.~\ref{figure_append:utssensitivitytonumberofsamples}, TS and UTS vary significantly when the number of samples is very small (between $30\sim50$), while TS-Target is not severely affected (the variance of TS-Target is small compared to the other methods, so it is barely visible in the figure). However, as the number of samples increases to 500, UTS reaches the optimal $T^*_{UTS}$, and beyond that, increasing the number of samples does not yield better results.
\input{ImagesAppendix/UTSSampleSensitivity/brievssample_images.tex} \section{$\mathcal{L_{ATS}}$ is a calibration measure} \paragraph{Lemma}: Suppose $T^*=\underset{T}{\arg\min}(\mathcal{L_{ATS}})$ on a validation set $\mathcal{V}$. Then $S_{y=k}(x,T^*)$ approaches $Q(y=k|x)$ for $k=1,\ldots,K$, and consequently $S_{y}(x,T^*)$ approaches $Q(y|x)$, which means $\mathcal{L_{ATS}}$ is a calibration measure.\vspace{-0.3cm} \paragraph{Proof}: The samples in subset $M_k$ are treated as generated from the distribution $Q(x,y=k)$. By the Gibbs inequality (cf.\ Eq.~(1)), minimizing the negative log-likelihood on the $M_k$ samples drives the likelihood function toward $Q(y=k|x)$. $M_k$ contains two groups of samples: the samples originally generated from the distribution $Q(x,y=k)$, with true label $y_i = k$, and the samples borrowed from other distributions as surrogate samples for $Q(x,y=k)$, whose true labels are $y_i\not=k$. These two groups carry different probability weights. Therefore, to converge to $Q(y=k|x)$, the loss function must differ depending on the type of sample. $\mathcal{L_{ATS}}$ is defined as: $ \begin{aligned} \mathcal{L_{ATS}} &= \sum_{k=1}^{K} \sum_{(x_i,y_i) \in M_k}\\ & -\log \left ( \frac{S_{y=k}(x_i,T)(1-S_{y = y_i}(x_i,T))}{S_{y\not=k}(x_i,T)} \right ) \\ \text{where} & \quad T^* = \operatorname*{arg\,min}_{T}(\mathcal{L_{ATS}}) \quad \quad \text{s.t:}\quad T > 0 \end{aligned} $ which can be analyzed in two cases: \begin{itemize} \item \textbf{Case I}: In this case, the samples are $(x_i,y_i=k)$, i.e., they are generated directly from $Q(x,y=k)$.
The likelihood function of $\mathcal{L_{ATS}}$ in this case is:\\ $\begin{aligned} \mathcal{L_{ATS}} &= \sum_{k=1}^{K} \sum_{(x_i,y_i) \in \{M_k|y_i=k\}}\\ & -\log \left ( \frac{S_{y=k}(x_i,T)(1-S_{y = k}(x_i,T))}{S_{y\not=k}(x_i,T)} \right ) \end{aligned} $\\ which means: $\begin{aligned} \mathcal{L_{ATS}} &= \sum_{k=1}^{K} \sum_{(x_i,y_i=k)} -\log \left (S_{y=k}(x_i,T) \right ), \end{aligned} $\\ which is the NLL loss function. Minimizing the NLL with respect to $T$ on the samples generated from $Q(x,y=k)$ causes $S_{y=k}(x_i,T^*)$ to approach $Q(y=k|x)$ for each $k\in\{1,\ldots,K\}$. \item \textbf{Case II}: In this case $(x_i,y_i\neq k)$, i.e., the samples are selected from the distribution $Q(x,y\not=k)$. Using these samples instead of samples directly generated from $Q(x,y=k)$ applies a weight to the distribution. Referring to Eq.~(6), this weight equals $W = Q(y=k|x)/Q(y\neq k|x)$. Therefore, on these samples the negative log-likelihood drives the likelihood not toward $Q(y=k|x)$ but toward $Q(y=k|x)Q(y=k|x)/Q(y\not=k|x)$. In this case, $\mathcal{L_{ATS}}$ is: $\begin{aligned} \mathcal{L_{ATS}} &= \sum_{k=1}^{K} \sum_{(x_i,y_i) \in \{M_k|y_i\not=k\}}\\ & -\log \left ( \frac{S_{y=k}(x_i,T)(1-S_{y \not= k}(x_i,T))}{S_{y\not=k}(x_i,T)} \right ) \end{aligned} $\\ which means: $\begin{aligned} \mathcal{L_{ATS}} &= \sum_{k=1}^{K} \sum_{(x_i,y_i\not=k)} -\log \left ( \frac{S_{y=k}(x_i,T)^2}{S_{y\not=k}(x_i,T)} \right ), \end{aligned} $\\ Note that $S_{y\not=k}(x_i,T^*) = 1- S_{y=k}(x_i,T^*)$ and $Q(y\not=k|x)= 1- Q(y=k|x)$. Minimizing $\mathcal{L_{ATS}}$ with respect to $T$ makes $S_{y=k}(x_i,T^*)^2/(1- S_{y=k}(x_i,T^*))$ approach $Q(y=k|x)^2/(1- Q(y=k|x))$, which implies that $S_{y=k}(x_i,T^*)$ approaches $Q(y=k|x)$. \end{itemize} We have shown that $S_{y=k}(x,T^*)$ approaches $Q(y=k|x)$ on the sample set $M_k$ for $k=1,\ldots,K$. Therefore, we can deduce that $S_{y}(x,T^*)$ approaches $Q(y|x)$, which is the final goal of calibration.
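The last step of Case II relies on the map $t \mapsto t^2/(1-t)$ being strictly increasing on $(0,1)$, so that matching the transformed quantities forces $S_{y=k}(x,T^*)$ to match $Q(y=k|x)$. A quick numerical sanity check in Python; the probe value $Q=0.7$ is arbitrary:

```python
def g(t):
    # Case II effective target: t^2 / (1 - t)
    return t * t / (1.0 - t)

# g is strictly increasing on (0, 1) ...
ts = [i / 100.0 for i in range(1, 100)]  # grid 0.01, 0.02, ..., 0.99
assert all(g(a) < g(b) for a, b in zip(ts, ts[1:]))

# ... hence g(S) = g(Q) can only hold when S = Q: invert g on the grid
Q = 0.7                                  # arbitrary probe probability
S = min(ts, key=lambda t: abs(g(t) - g(Q)))
assert abs(S - Q) < 1e-9
```

Strict monotonicity (here checked on a grid; it also follows from $g'(t)=t(2-t)/(1-t)^2>0$) makes $g$ injective on $(0,1)$, which is exactly what the final implication uses.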
\begin{table*}[t] \tiny \centering \caption{ISIC dataset statistics and accuracy of ResNet200 per class} \resizebox{\textwidth}{!}{ \begin{tabular}{|l|c c c c c c c c|} \hline & \textbf{Melanoma} & \textbf{Melanocytic Nevus} & \textbf{BCC} & \textbf{Bowen} & \textbf{Benign Keratosis} & \textbf{Dermatofibroma} & \textbf{Vascular} & \textbf{Total} \\ \hline \textbf{\# of Training} &668 &4023 &309 &196 &659 &69 &85 &6009\\ \textbf{Acc Training} &97.16\% &99.68\% &99.68\% &94.90\% &98.18\% &97.10\% &100.00\% &99.05\%\\ \hline \textbf{\# of Validation} &89 &536 &41 &26 &88 &9 &12 &801\\ \textbf{Acc Validation} &55.06\% &96.65\% &92.68\% &65.389\% &78.41\% &77.78\% &91.67\% &88.53\%\\ \hline \textbf{\# of Test} &356 &2146 &164 &105 &352 &37 &45 &3205\\ \textbf{Acc Test} &60.67\% &96.83\% &82.93\% &50.00\% &73.86\% &66.67\% &91.67\% &89.14\%\\ \hline \end{tabular} } \label{Tabel_ISIC_Stat} \end{table*} \begin{figure*}[t] \centering \subfloat{\label{fig:gull}\includegraphics[height=4.4cm,width=4.4cm]{photo/supplementary/noise/a.png}} \subfloat{\label{fig:gull}\includegraphics[height=4.4cm,width=4.4cm]{photo/supplementary/noise/b.png}} \subfloat{\label{fig:gull}\includegraphics[height=4.4cm,width=4.4cm]{photo/supplementary/noise/c.png}}\\ \subfloat{\label{fig:gull}\includegraphics[height=4.4cm,width=4.4cm]{photo/supplementary/noise/d.png}} \subfloat{\label{fig:gull}\includegraphics[height=4.4cm,width=4.4cm]{photo/supplementary/noise/e.png}} \subfloat{\label{fig:gull}\includegraphics[height=4.4cm,width=4.4cm]{photo/supplementary/noise/f.png}} \caption{ Calibration of different model--dataset pairs with TS and ATS methods for $10\%\sim 50\%$ labeling noise.
} \label{noise} \end{figure*} \begin{figure*}[t] \centering \subfloat{\label{fig:gull}\includegraphics[height=4.4cm,width=4.4cm]{photo/supplementary/validation/a.png}} \subfloat{\label{fig:gull}\includegraphics[height=4.4cm,width=4.4cm]{photo/supplementary/validation/b.png}} \subfloat{\label{fig:gull}\includegraphics[height=4.4cm,width=4.4cm]{photo/supplementary/validation/c.png}}\\ \subfloat{\label{fig:gull}\includegraphics[height=4.4cm,width=4.4cm]{photo/supplementary/validation/d.png}} \subfloat{\label{fig:gull}\includegraphics[height=4.4cm,width=4.4cm]{photo/supplementary/validation/g.png}} \subfloat{\label{fig:gull}\includegraphics[height=4.4cm,width=4.4cm]{photo/supplementary/validation/f.png}} \caption{Calibration of different model--dataset pairs with TS and ATS methods for different validation sizes.} \label{validation} \end{figure*} \section{Datasets Details} We apply the calibration method to different image classification datasets (the results are reported in Sec.~6 of the main text). For each experiment, the validation set is a randomly selected $20\%$ of the test set. All model--dataset pairs used in Tables 1 and 3 were trained on the specified training sets, except for the ImageNet experiments, where we used the pre-trained ResNet152 model from PyTorch. \begin{enumerate} \item CIFAR-10 [20]: It contains 60000 32$\times$32 color images of 10 different objects, with 6000 images per class. The sizes of the training and test sets are 50000 and 10000, respectively. \item CIFAR-100 [20]: The same setting as CIFAR-10, except that it has 100 classes of different objects, with 600 images per class. \item SVHN [33]: It contains 32$\times$32 color images of the digits 0 to 9, with 73257 digits for training and 26032 digits for testing. \item MNIST [25]: It contains 28$\times$28 gray-scale images of the digits 0 to 9. It has 60,000 training images and 10,000 test images.
\item Caltech-UCSD Birds [41]: It contains 11,788 color images of 200 different bird species. We randomly divided it into 7,073 training and 4,715 test samples. \item ImageNet2012 [10]: Natural scene images from 1000 classes. It contains 1.3 million training and 25000 test images. \item ISIC dataset [8, 40] (data extracted from the ``ISIC 2018: Skin Lesion Analysis Towards Melanoma Detection'' grand challenge datasets): It contains 10015 color images of 7 possible skin anomalies. We randomly divide the dataset into 6009 training and 4006 test images. \end{enumerate} \section{Robustness to Noise and Validation Size} In this section, we provide results for more model--dataset pairs, comparing the behavior of ATS vs. TS in calibrating the model in the presence of labeling noise and with few validation samples. The results are shown in Figure \ref{noise} and Figure \ref{validation}, respectively. ATS is much more robust to labeling noise and more stable when the number of validation samples is small. \section{Implementation Specification of Skin Lesion Detection System} To test the impact of calibration in a real application, we design a medical assistant system. We select the ISIC dataset, which contains color images of 7 different skin lesions: Melanoma, Melanocytic nevus, Basal cell carcinoma (BCC), Bowen, Benign keratosis, Dermatofibroma, and Vascular. The selected model is a ResNet200 with weights pretrained on ImageNet. To fine-tune it, we use 60\% of the ISIC images, resizing them to $224 \times 224$ and normalizing with the mean and standard deviation of the ImageNet dataset. Notice that we use stratification to divide the dataset. We run the fine-tuning for 100 epochs with a batch size of 32, using the Adam optimizer with an initial learning rate of 1e-4, decayed by a factor of 0.95 every 10 epochs.
To increase the variety of the training samples, we perform data augmentation: with probability 0.5, each image is transformed by a random horizontal or vertical flip or by a random rotation of at most 12.5\degree ~either to the left or to the right. Detailed statistics of the dataset are provided in Table~\ref{Tabel_ISIC_Stat}. \section{More Results of Skin Lesion Detection System} In this section, we provide more results of the skin lesion detection system. The confidence of the system before and after calibration with the TS and ATS methods, for correctly classified and misclassified samples, is reported in Figure~\ref{Skin_Lesion_TS_ATS} for different skin lesion types. \begin{figure*}[!ht] \centering \subfloat[][\scriptsize Label = BCC\\Pred. = BCC\\Confidence = 0.99\\ATS Confidence = 0.92\\TS Confidence = 0.59 ]{\label{fig:gull}\includegraphics[height=3cm,width=3.2cm]{photo/supplementary/isic/corr/ISIC_0027120.jpg}}\quad \subfloat[][\scriptsize Label = Benign keratosis\\Pred. = Benign keratosis \\Confidence = 0.99\\ATS Confidence = 0.93\\TS Confidence = 0.58 ]{\label{fig:gull}\includegraphics[height=3cm,width=3.2cm]{photo/supplementary/isic/corr/ISIC_0027419.jpg}}\quad \subfloat[][\scriptsize Label = BCC\\Pred. = BCC\\Confidence = 0.99 \\ATS Confidence = 0.92\\TS Confidence = 0.59 ]{\label{fig:gull}\includegraphics[height=3cm,width=3.2cm]{photo/supplementary/isic/corr/ISIC_0027825.jpg}}\quad \subfloat[][\scriptsize Label = Dermatofibroma\\Pred. = Dermatofibroma \\Confidence = 0.99\\ATS Confidence = 0.94\\TS Confidence = 0.60 ]{\label{fig:gull}\includegraphics[height=3cm,width=3.2cm]{photo/supplementary/isic/corr/ISIC_0028346.jpg}}\\ \subfloat[][\scriptsize Label = Bowen\\Pred. = Bowen\\Confidence = 0.99\\ATS Confidence = 0.95\\TS Confidence = 0.61 ]{\label{fig:gull}\includegraphics[height=3cm,width=3.2cm]{photo/supplementary/isic/corr/ISIC_0028820.jpg}}\quad \subfloat[][\scriptsize Label = Melanoma\\Pred.
= Melanoma\\Confidence = 0.99\\ATS Confidence = 0.92\\TS Confidence = 0.58 ]{\label{fig:gull}\includegraphics[height=3cm,width=3.2cm]{photo/supplementary/isic/corr/ISIC_0031368.jpg}}\quad \subfloat[][\scriptsize Label = Vascular lesion\\Pred. = Vascular lesion\\Confidence = 0.99 \\ATS Confidence = 0.93\\TS Confidence = 0.62 ]{\label{fig:gull}\includegraphics[height=3cm,width=3.2cm]{photo/supplementary/isic/corr/ISIC_0031950.jpg}}\quad \subfloat[][\scriptsize Label = Melanoma\\Pred. = Melanoma\\Confidence = 0.99\\ATS Confidence = 0.93\\TS Confidence = 0.59 ]{\label{fig:gull}\includegraphics[height=3cm,width=3.2cm]{photo/supplementary/isic/corr/ISIC_0025589.jpg}}\\ \subfloat[][\scriptsize Label = BCC\\Pred. = Bowen\\Confidence = 0.91\\ATS Confidence = 0.68\\TS Confidence = 0.45 ]{\label{fig:gull}\includegraphics[height=3cm,width=3.2cm]{photo/supplementary/isic/miss/ISIC_0024332.jpg}}\quad \subfloat[][\scriptsize Label = Benign keratosis\\Pred. = Melanocytic nevus\\Confidence = 0.96\\ATS Confidence = 0.76\\TS Confidence = 0.53 ]{\label{fig:gull}\includegraphics[height=3cm,width=3.2cm]{photo/supplementary/isic/miss/ISIC_0025431.jpg}}\quad \subfloat[][\scriptsize Label = Melanoma\\Pred. = Bowen\\Confidence = 0.92\\ATS Confidence = 0.67\\TS Confidence = 0.44 ]{\label{fig:gull}\includegraphics[height=3cm,width=3.2cm]{photo/supplementary/isic/miss/ISIC_0025709.jpg}}\quad \subfloat[][\scriptsize Label = Benign keratosis\\Pred. = Melanoma\\Confidence = 0.95\\ATS Confidence = 0.69\\TS Confidence = 0.43 ]{\label{fig:gull}\includegraphics[height=3cm,width=3.2cm]{photo/supplementary/isic/miss/ISIC_0029014.jpg}}\\ \subfloat[][\scriptsize Label = Dermatofibroma\\Pred. = Melanoma\\Confidence = 0.96\\ATS Confidence = 0.76\\TS Confidence = 0.55]{\label{fig:gull}\includegraphics[height=3cm,width=3.2cm]{photo/supplementary/isic/miss/ISIC_0029578.jpg}}\quad \subfloat[][\scriptsize Label = Melanocytic nevus\\Pred. 
= Melanoma\\Confidence = 0.93\\ATS Confidence = 0.69\\TS Confidence = 0.44 ]{\label{fig:gull}\includegraphics[height=3cm,width=3.2cm]{photo/supplementary/isic/miss/ISIC_0029945.jpg}}\quad \subfloat[][\scriptsize Label = BCC\\Pred. = Bowen\\ Confidence = 0.98\\ATS Confidence = 0.75\\TS Confidence = 0.43 ]{\label{fig:gull}\includegraphics[height=3cm,width=3.2cm]{photo/supplementary/isic/miss/ISIC_0030249.jpg}}\quad \subfloat[][\scriptsize Label = Melanoma\\Pred. = Melanocytic nevus\\Confidence = 0.94 \\ATS Confidence = 0.69\\ TS Confidence = 0.44 ]{\label{fig:gull}\includegraphics[height=3cm,width=3.2cm]{photo/supplementary/isic/miss/ISIC_0030898.jpg}} \caption{Correctly classified and misclassified outputs of the skin lesion detection system before and after calibration with TS and ATS.} \label{Skin_Lesion_TS_ATS} \end{figure*} \end{document}
\section{I. Introduction} Thermodynamics is regarded as a discipline of formal simplicity, yet one covering a wide domain of applicability. One of the central problems in thermodynamics is the extent of heat-to-work conversion, with its focus on maximal work or power output and the consequent efficiency of the process. The seminal results of Carnot apply to the case of infinite reservoirs. However, in recent years, the study of the role of finite reservoirs has also caught attention \cite{Ondrechen1981, Ondrechen1983,Leff1987, Chen1997,Izumida2014, Wang2014, JohalRai2016}. This is motivated by practical considerations such as a limited supply of fuel (a finite heat source), or the working medium being in contact with a small environment (sink), which may be the case in small-scale devices, or even relevant for the design of modern cities. On the other hand, algebraic inequalities between the means hold a kind of poetic fascination. One of the most important \cite{Alsina} and best-known is the arithmetic mean--geometric mean (AM-GM) inequality, stated as follows. For two real positive numbers, $a$ and $b$, with arithmetic mean $A(a,b) = (a+b)/2$ and geometric mean $G(a,b) = \sqrt{a b}$, we have \begin{equation} \frac{a+b}{2} \geqslant \sqrt{a b}, \label{amgm} \end{equation} with equality only if $a=b$. Such inequalities are useful in proving elementary results in many disciplines \cite{Hardy52, Bullen88}. In particular, in the context of macroscopic thermodynamics, the second law of increase of entropy may be argued as follows \cite{Cashwell67}. Consider $n$ systems with a constant heat capacity $C$ and initial temperatures $\{ T_i |i=1,...,n \}$. Placed in mutual thermal contact, these systems come to equilibrium at a common final temperature, say $T_f$. From the energy conservation condition (the first law), we have $\sum_i C(T_i - T_f) = 0$, which implies $T_f = \sum_i T_i /n$.
Now the total entropy change is $\Delta S = \sum_i \int_{T_i}^{T_f} (C/T)dT = n C (\ln T_f - \ln (\Pi_i T_i)^{1/n})$, so by virtue of the AM-GM inequality \cite{genamgm}, we get $\Delta S \geqslant 0$ \cite{commenta, Tait1868, Sommerfeld64, Landsberg87}. Thus in the above argument, the manifestation of the AM-GM inequality is specifically tied to the assumption of a particular model system. By assuming systems other than perfect gases, one can invoke inequalities between other means. It is apparent that alternative thermodynamic processes, such as optimal work-extracting processes, would exhibit a similar connection between physical models and specific inequalities between the means. In this paper, our objective is to compare the work output capacity and efficiency of two complementary scenarios, involving a finite system and a reservoir. In this analysis, we will uncover a rather general role of the AM-GM inequality. In particular, we will address the following question. Assume a pair of values for temperature, say $T_+$ and $T_- (<T_+)$, and a system A with a finite heat capacity. Also, a heat reservoir is present such that if the system is at temperature $T_+$, the reservoir is a sink at $T_-$. Conversely, if the system is at $T_-$, then the reservoir is a hot source at $T_+$. Which of these two situations (see Fig. 1) would yield a larger amount of extractable work due to the temperature difference? We answer this question by assuming that the process of maximal work extraction is carried out by some working medium (whose details are not important) via infinitesimal reversible heat cycles between system A and the reservoir. In practical terms, we may consider a toy engine which can ideally work in a reversible manner, utilizing the temperature gradient between system A and the environment. Let $T_+$ and $T_-$ be the environment temperatures, say, in summer and in winter, respectively.
So in summer, we cool the system A to temperature $T_-$, while in winter, we have to heat up the system to temperature $T_+$, in order to run the engine. The engine works till it equilibrates at the respective temperature of the environment. When will the engine yield a larger amount of total work: in summer, or in winter? The paper is organized as follows. In Section II, we describe the framework using two scenarios for work extraction due to the temperature difference between a finite system and a heat reservoir. In Subsection II.A, the total extracted work and the corresponding efficiency are compared for the two scenarios. In Section III, physical examples are given based on thermodynamic systems where the temperature and the internal energy are related to the entropy by power laws. Section IV discusses the bounds on the efficiency at total work. Finally, Section V is devoted to a summary and concluding remarks. \section{II. Work from a finite system and a reservoir} To set up the thermodynamic framework, consider system A following a certain fundamental relation $U = U(S,V,N)$. It has equilibrium states described by energy $U_+$, entropy $S_+$ at temperature $T_+$, and alternatively, by $U_-$ and $S_-$ at $T_-$, with some fixed values of the volume $V$ and the number of moles $N$. For simplicity, we consider only systems with a positive heat capacity ($C_V >0$). This implies that $U_+ > U_-$ and $S_+ > S_-$. Now, we first assume that system A acts as a finite heat sink at temperature $T_-$, relative to a very large hot reservoir (source) at temperature $T_+$. We couple the two by running infinitesimal heat cycles, which successively increase the temperature of A, till A comes into equilibrium with the hot source, see Fig.1 (i). At an arbitrary intermediate stage, when the temperature of A is $T$, the small amount of heat $dQ_h$ removed from the source is converted into an amount of work $dW$ with maximal (Carnot) efficiency $\eta = 1- T/T_+$.
The heat discarded to the sink is $dQ_c = C_V dT$. Then, we can write $dW = \eta(1-\eta)^{-1} dQ_c$. The total extracted work is given by: \begin{eqnarray} W_+ &=& \int_{T_-}^{T_+} dW \\ & = & \int_{T_-}^{T_+} \frac{\eta}{1-\eta} C_V dT \\ &=& T_+ (S_+ - S_-) - (U_+ - U_-). \label{wp} \end{eqnarray} The heat absorbed from the hot source is $Q_+ = T_+ (S_+ - S_-)$. Then the efficiency at total work, $\eta_+ = W_+ /Q_+$, is calculated to be: \begin{equation} \eta_+ = 1- \frac{1}{T_+} \frac{U_+ - U_-}{S_+ - S_-}. \label{ep} \end{equation} Next, we consider the alternative situation in which A acts as a finite source at temperature $T_+$, relative to an infinite sink at $T_-$, see Fig.1 (ii). Again, we extract the maximal work by utilizing the temperature gradient between A and the reservoir, till A is at temperature $T_-$. Then, after a similar calculation \cite{Izumida2014} as above, the total work obtained is \begin{equation} W_- = (U_+ - U_-) - T_- (S_+ - S_-). \label{wm} \end{equation} This is termed exergy in the engineering literature \cite{Exergy}. The heat absorbed from the source is $Q_- = U_+ - U_-$, while the efficiency of the process $\eta_- = W_- /Q_-$ is given by \begin{equation} \eta_- = 1- T_-\frac{ S_+ - S_-}{U_+ - U_-}. \label{em} \end{equation} \begin{figure} \includegraphics[width=13cm]{fig1_engines.pdf} \caption{Schematic of the reversible heat engine between a finite system and a heat reservoir, for a given pair of initial temperatures $(T_+,T_-)$: (i) System A is a finite sink at $T_-$ and is coupled to an infinite source at $T_+$, via heat engine. Work extraction $W_+$, Eq. (\ref{wp}), is completed when the temperature of A becomes $T_+$. (ii) System A is a finite source at $T_+$ and is coupled to an infinite sink at $T_-$, via heat engine. Total extracted work is $W_-$, Eq.
(\ref{wm}), when the temperature of A becomes $T_-$.} \end{figure} Thus for the toy engine mentioned in the Introduction, $W_+$ and $\eta_+$ ($W_-$ and $\eta_-$) may refer to the total work and the corresponding efficiency in the summer (winter) season. \subsection{A. The Comparison} Now we compare the amounts of extracted work, and the efficiencies, in these alternative set-ups. For that purpose, we recall the classic result in calculus, known as the {\it mean value theorem}. Consider a continuous and differentiable function $U(S)$ in the domain $[S_-,S_+]$, with the derivative $T(S) = dU/dS$. Let us denote: $U(S_{\pm}) = U_{\pm}$. Following the theorem, there is a point $S_m$ strictly within this interval ($S_+ > S_m > S_-$), at which the derivative of the function $U$, i.e. $T(S_m) \equiv T_m$, is given by: \begin{equation} T_m = \frac{U_+ - U_-}{S_+ - S_-}. \label{tm} \end{equation} We also assume $T(S)$ to be a monotonically increasing function, or, in other words, that $U(S)$ is a convex function. In the context of thermodynamics, this assumption implies a positive heat capacity ($C_V$) of the system. Then it follows that $ T(S_+) > T(S_m) > T(S_-)$, or alternatively, $ T_+ > T_m > T_-$. Now, depending on the nature of the thermodynamic system, i.e. the form of the function $U(S)$, $T_m$ can take values relative to $A(T_+, T_-)$ and $G(T_+, T_-)$, such that we have the following situations: \begin{eqnarray} (a) \qquad\qquad T_+ &>& T_m \geqslant \frac{T_+ + T_-}{2} > \sqrt{T_+ T_-} > T_- \nonumber \\ (b) \qquad\qquad T_+ &>& \frac{T_+ + T_-}{2} > T_m > \sqrt{T_+ T_-} > T_- \nonumber \\ (c) \qquad\qquad T_+ &>& \frac{T_+ + T_-}{2} > \sqrt{T_+ T_-} \geqslant T_m > T_- \nonumber \\ \label{abc} \end{eqnarray} We choose the means $A$ and $G$ to split the interval $(T_-,T_+)$ into three regions, because for $T_m = (T_+ + T_-)/2$, we have $W_+ = W_-$, and for $T_m = \sqrt{T_+ T_-}$, we have $\eta_+ = \eta_-$. This naturally helps to compare the magnitudes of work and efficiency.
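As a numerical sanity check, the closed forms of Eqs. (\ref{wp}) and (\ref{wm}) can be compared against a direct integration of the infinitesimal cycles. The sketch below is our own illustration under the assumption of a toy convex fundamental relation $U(S) = S^2/2$, i.e. $T(S) = S$; for this system $T_m$ from Eq. (\ref{tm}) is exactly the arithmetic mean of $T_+$ and $T_-$, the borderline case where $W_+ = W_-$.

```python
# Toy convex fundamental relation (an assumption for illustration):
# U(S) = S^2/2, so that T(S) = dU/dS = S.
U = lambda S: S**2 / 2.0
T = lambda S: S

S_minus, S_plus = 2.0, 4.0
T_minus, T_plus = T(S_minus), T(S_plus)

# Closed forms of the total work, Eqs. (wp) and (wm)
W_plus = T_plus * (S_plus - S_minus) - (U(S_plus) - U(S_minus))
W_minus = (U(S_plus) - U(S_minus)) - T_minus * (S_plus - S_minus)

# Direct integration of the infinitesimal cycles: dW = eta/(1-eta) dQ_c with
# eta = 1 - T/T_+ and dQ_c = C_V dT = T dS gives W_+ = int (T_+ - T(S)) dS,
# and similarly W_- = int (T(S) - T_-) dS (midpoint rule below).
n = 100000
dS = (S_plus - S_minus) / n
Wp_num = sum((T_plus - T(S_minus + (i + 0.5) * dS)) * dS for i in range(n))
Wm_num = sum((T(S_minus + (i + 0.5) * dS) - T_minus) * dS for i in range(n))

# Mean-value temperature, Eq. (tm); here it equals (T_+ + T_-)/2.
T_m = (U(S_plus) - U(S_minus)) / (S_plus - S_minus)
```

Both integrations reproduce the closed forms, and the borderline equality $W_+ = W_-$ at $T_m = (T_+ + T_-)/2$ is recovered.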
Thus, if case $(a)$ holds, then applying $T_m \geqslant (T_+ + T_-)/2$, and using Eqs. (\ref{tm}), (\ref{wp}) and (\ref{wm}), we obtain $W_+ \leqslant W_-$. In this case, due to the AM-GM inequality, we also have $T_m > \sqrt{T_+ T_-}$, which implies $\eta_+ < \eta_-$, due to Eqs. (\ref{tm}), (\ref{ep}) and (\ref{em}). Similarly, if case $(b)$ applies, then we conclude that $W_+ > W_-$, but due to the AM-GM inequality, we have $\eta_+ < \eta_-$. If case $(c)$ is true, i.e. $\sqrt{T_+ T_-} \geqslant T_m$, it implies $\eta_+ \geqslant \eta_-$. Further, due to $(T_+ + T_-)/2 > T_m$, we also have $W_+ > W_-$. The above three scenarios are summarized in Table I. Thus we see that the comparison of $T_m$ with $A(T_+,T_-)$ decides the relative magnitudes of $W_+$ and $W_-$, whereas the comparison of $T_m$ with $G(T_+,T_-)$ serves to compare $\eta_+$ and $\eta_-$. In these comparisons, the AM-GM inequality provides a sort of background against which $T_m$ takes values depending on the nature of system A (see examples below). In terms of practical utility, the goal behind the modelling of heat engines is to characterize their optimal working regimes. In this regard, if we are given a finite system A and a constraint to run the engine in one of the two scenarios, denoted (i) and (ii) above, then a particular choice can be motivated as follows. If system A falls in category (a) of Table I, then choice (ii) provides a higher total work output and a higher efficiency. On the other hand, if system A belongs to category (c), then choice (i) would provide a higher work output and a higher efficiency. If the system belongs to regime (b), we have a situation with a trade-off: if we opt for a higher work output, then the efficiency obtained is lower, and vice versa. Heuristically, one may be able to make a choice in this situation as follows.
A focus on a higher efficiency may become important, if the substance (system A) is in short supply or if the economic/ecological costs of preparing the system, in the desired state, are rather high. On the other hand, if such costs are not a consideration, then one may focus on higher total work, with the corresponding efficiency being less of a concern. \begin{table} \begin{tabular}{|c|c|c|} \hline $(a)$ & $(b)$ & $(c)$ \\ \hline \quad $W_+ \leqslant W_-$ \quad & \quad $W_+ > W_-$ \quad & \quad $W_+ > W_-$ \quad \\ \quad $\eta_+ < \eta_-$ \quad & \quad $\eta_+ < \eta_-$ \quad & \quad $\eta_+ \geqslant \eta_-$ \quad\\ \hline \end{tabular} \caption{Comparison of total work, Eqs. (\ref{wp}) and (\ref{wm}), and efficiency at total work, Eqs. (\ref{ep}) and (\ref{em}), corresponding to regimes $(a), (b)$ and $(c)$ in Eq. (\ref{abc}).} \end{table} \section{III. Examples} In this section, we illustrate the various cases noted above, by taking examples from different types of physical systems. Consider a class of thermodynamic systems that obey: $U \propto S^{\omega}$ and $T \propto S^{\omega -1}$, where $\omega$ is a constant real number. For the heat capacity to be positive, we must have $\omega >1$. So, $T_m$ is evaluated to be: \begin{equation} T_m = \frac{1}{\omega} \frac{T_{+}^{\omega/(\omega -1)} - T_{-}^{\omega/(\omega -1)}} {T_{+}^{1/(\omega -1)}-T_{-}^{1/(\omega -1)}}. \label{ttw} \end{equation} It is convenient to introduce the generalized mean \cite{Stolarsky75,Alzer87} of two real, positive numbers $(a,b)$: \begin{equation} E_r(a,b) = \frac{r-1}{r} \frac{a^r-b^r}{a^{r-1}-b^{r-1}}. \end{equation} In our case, $T_m = E_r(T_+,T_-)$ with $r = \omega/(\omega -1)$. For $r=2 \; (\omega=2)$, $E_2( T_+,T_-) = (T_+ + T_-)/2$. For $r=1/2$ $(\omega=-1)$, $E_{1/2}(T_+,T_-) = \sqrt{T_+ T_-}$.
Since $E_r(a,b)$ is increasing in the parameter $r$ \cite{YangCao}, and $r = \omega/(\omega -1)$ decreases towards $1$ as $\omega$ increases above $1$, it follows that, for $2 \geqslant \omega > 1$ (i.e. $r\geqslant 2$), we have $T_m = E_r(T_+,T_-) \geqslant E_2(T_+,T_-)$, which implies $T_m \geqslant (T_+ + T_-)/2$, or case $(a)$. Conversely, for $\omega > 2$ (i.e. $1 < r < 2$), we have $\sqrt{T_+ T_-} = E_{1/2}(T_+,T_-) < T_m < E_2(T_+,T_-)$, so the system corresponds to case $(b)$. Some examples of physical systems in regime $(a)$, for appropriate values of $T_+$ and $T_-$, are: $\omega = 4/3$ (black-body radiation), $\omega =5/3$ (degenerate Bose gas) and $\omega = 2$ (ideal Fermi gas). The case of a perfect-gas system can be discussed as the limit $r\to 1$, which yields $E_1(T_+,T_-) = L(T_+,T_-)$, known as the logarithmic mean \cite{Carlson72,Bhatia08}: \begin{equation} L(T_+,T_-) =\frac{T_+ - T_-}{\ln T_+ - \ln T_-}. \end{equation} The logarithmic mean temperature difference is a useful measure of the effectiveness with which a heat exchanger can transfer heat energy \cite{Nedderman1985}. This mean satisfies: \begin{equation} \frac{T_+ + T_-}{2} > L(T_+,T_-) > \sqrt{T_+ T_-}. \label{lmin} \end{equation} So if $T_m = L(T_+,T_-)$, then due to the above inequality, we have an instance of case $(b)$. Thus with a perfect-gas system, the finite-sink/infinite-source setup produces more work than the finite-source/infinite-sink setup ($W_+ > W_-$), although the efficiency at total work follows the reverse order ($\eta_+ < \eta_-$). As our final model system, let A consist of $N$ non-interacting, localized spin-1/2 particles \cite{Pathria}. Each particle can be regarded as a two-level system, with energy levels ($0$, $\epsilon$). The mean energy for this system, in the limit of high temperatures such that $\epsilon \ll k T$, keeping terms only up to $(\epsilon/k T)^2$, can be approximated as: $U \approx N ({\epsilon}/{2}-{\epsilon^2}/{4kT})$, with entropy $S \approx N k (\ln 2 - {\epsilon^2}/{8k^2T^2})$. Then from Eq. (\ref{tm}), we have: $T_m = 2T_+ T_-/(T_+ + T_-)$, which is the well-known harmonic mean $H(T_+,T_-)$.
This mean is strictly less than $G(T_+,T_-)$, and thus our spin system lies in regime $(c)$. \section{IV. Bounds on efficiency} So far, we have noted the comparison between work characteristics for the two given scenarios. In the following, we point out that within a given scenario, the efficiency at total extracted work obeys definite bounds, which are specific to each of the regimes $(a), (b)$ and $(c)$. Thus if $T_m \geqslant (T_+ + T_-)/2$, then we get from Eq. (\ref{ep}), $\eta_+ \leqslant \eta_C /2$ where $\eta_C = 1- T_-/T_+$ is the Carnot limit. Also from Eq. (\ref{em}), we get $\eta_- \geqslant \eta_C /(2-\eta_C)$. Similarly, in regime (c), when $T_m \leqslant \sqrt{T_+ T_-}$, we get $\eta_{+} \geqslant \eta_{CA}$ and $\eta_{-} \leqslant \eta_{CA}$, where $\eta_{CA} = 1 - \sqrt{T_-/T_+}$ \cite{Chambadal, Novikov}, which is popularly known as the CA-efficiency, after F. L. Curzon and B. Ahlborn who rediscovered this formula \cite{CA1975}, see also \cite{Feidt2014}. These comparative bounds are summarized in Table II, and are depicted in Figs. 2 and 3. Note that the efficiencies $\eta_C /2$, $\eta_{CA}$ and $\eta_C /(2-\eta_C)$ are frequently discussed in the context of maximum power output in finite-time models \cite{Broeck2005, Chambadal, Novikov, CA1975, Esposito2010}. But we observe that, here, within a quasi-static framework, $\eta_{CA}$ serves to separate $\eta_+$ and $\eta_-$ in regimes $(b)$ and $(c)$.
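The regime-wise bounds stated above can be checked numerically. The sketch below (our own illustration, with an arbitrary temperature pair) uses $\eta_+ = 1 - T_m/T_+$ and $\eta_- = 1 - T_-/T_m$, which follow from Eqs. (\ref{ep}), (\ref{em}) and (\ref{tm}), together with the logarithmic and harmonic means appearing in the examples of Section III.

```python
import math

T_plus, T_minus = 400.0, 300.0  # an arbitrary pair with T_+ > T_-

eta_plus = lambda Tm: 1.0 - Tm / T_plus      # Eq. (ep) combined with Eq. (tm)
eta_minus = lambda Tm: 1.0 - T_minus / Tm    # Eq. (em) combined with Eq. (tm)

eta_C = 1.0 - T_minus / T_plus               # Carnot efficiency
eta_CA = 1.0 - math.sqrt(T_minus / T_plus)   # Curzon-Ahlborn efficiency

A = (T_plus + T_minus) / 2.0                         # arithmetic mean
G = math.sqrt(T_plus * T_minus)                      # geometric mean
L = (T_plus - T_minus) / math.log(T_plus / T_minus)  # logarithmic mean (perfect gas)
H = 2.0 * T_plus * T_minus / (T_plus + T_minus)      # harmonic mean (spin-1/2 system)
```

Evaluating $\eta_\pm$ at $T_m \in \{A, L, H\}$ reproduces the orderings of Table II for regimes $(a)$, $(b)$ and $(c)$, against the background ordering $A > L > G > H$.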
\begin{table} \begin{tabular}{|c|c|c|} \hline $(a)$ & $(b)$ & $(c)$ \\ \hline \quad $ 0< \eta_+ \leqslant \eta_C /2 $ \quad & \quad $\eta_C /2 < \eta_+ < \eta_{CA}$ \quad & \quad $ \eta_{CA} \leqslant \eta_+ < \eta_C $ \quad \\ \hline \quad $ \frac{\eta_C}{2-\eta_C} \leqslant \eta_- < \eta_C $ \quad & \quad $ \eta_{CA} < \eta_- < \frac{\eta_C}{2-\eta_C} $ \quad & \quad $ 0 < \eta_- \leqslant \eta_{CA} $ \quad \\ \hline \end{tabular} \caption{The bounds obeyed by the efficiencies at total extracted work, $\eta_+$ and $\eta_-$, in the respective regimes given in Eq. (\ref{abc}), where $\eta_C = 1 -T_-/T_+ $ and $\eta_{CA} = 1 -\sqrt{T_-/T_+}$.} \end{table} \begin{figure}[ht] \includegraphics[width=9cm]{epfig.pdf} \caption{Bounds on the efficiency $\eta_+$, in the regimes (from bottom to top) $(a), (b)$, and $(c)$, as given in Table II.} \end{figure} \begin{figure}[ht] \includegraphics[width=9cm]{emfig.pdf} \caption{Bounds on the efficiency $\eta_-$, in the regimes (from top to bottom) $(a), (b)$, and $(c)$, as in Table II.} \end{figure} The above bounds are universal in that they depend only on the ratio of the initial temperatures. Note that the actual expressions, (\ref{ep}) and (\ref{em}), do depend, in general, on the nature of system A. But close to equilibrium, even the general expressions for $\eta_+$ and $\eta_-$ exhibit a universality. Thus, assuming linear response, we can expand the energy up to second order in the entropy difference $\delta S = S_+ - S_-$ \cite{JohalRai2016}: \begin{equation} U(S_-) = U(S_+) -T_+ \delta S + \frac{1}{2} \left. \frac{d T}{d S} \right|_{S= S_+} \hspace{-5mm}(\delta S)^2. \label{usm2} \end{equation} Using the above expansion in Eq. (\ref{tm}), and upon simplifying, we get $T_m = (T_+ + T_-)/2$. This implies that $W_+ = (T_+ - T_-)\delta S /2 = W_-$. Thus, under linear response, the extracted work is the same in both cases. However, the efficiency at total work is approximated as $\eta_+ = \eta_C /2$ and $\eta_- = {\eta_C}/(2-\eta_C)$.
These expressions are consistent with the findings of Ref. \cite{JohalRai2016}, where the lower and the upper bounds for the efficiency with unequal-sized source and sink obey the same expressions. \section{V. Concluding remarks} We close this investigation by making a few remarks. Apart from an entropy-conserving process, we may analyze an energy-conserving process. The initial and final situations are the same as (i) and (ii) in Fig.1. Specifically, for situation (i), an amount of heat energy $U_+-U_-$ is removed quasi-statically from the reservoir and deposited in the same manner with the cold system. The change in entropy of system A is $(S_+ - S_-) > 0$. The change in entropy of the reservoir is: $-(U_+ - U_-)/T_+$. Thus the total change in the entropy of the universe is: \begin{equation} \Delta S_+ = (S_+ - S_-) - \frac{U_+ - U_-}{T_+}. \label{dsp} \end{equation} Similarly, if we consider situation (ii), we can conclude that the total entropy change of the universe, in an energy-conserving process, would be: \begin{equation} \Delta S_- = -(S_+ - S_-) + \frac{U_+ - U_-}{T_-}. \label{dsm} \end{equation} Now, if we wish to compare the entropy production in the above two cases, then we are led to consider the following situations: \begin{eqnarray} (a') \qquad\qquad T_+ &>& T_m \geqslant \frac{2 T_+ T_-}{T_+ + T_-} > T_- \nonumber \\ (b') \qquad\qquad T_+ &>& \frac{2 T_+ T_-}{T_+ + T_-}> T_m > T_-. \end{eqnarray} It is easy to see that if case $(a')$ holds, then $\Delta S_- \geqslant \Delta S_+$; the reverse inequality is valid if case $(b')$ holds. Thus for an energy-conserving process, we see that the inequality between the generalized mean $T_m$ and the harmonic mean $H(T_+,T_-)$ quantifies the relative magnitudes of $\Delta S_-$ and $\Delta S_+$. Finally, we consider an interesting meaning of $T_m$, given by Eq. (\ref{tm}), in the sense of an effective temperature. Take two heat reservoirs with temperatures $T_m$ and $T_- (<T_m) $.
Let $Q_m = U_+ - U_-$ be the heat extracted by the working medium from the hot reservoir in a reversible cycle. Here $U_{\pm}$ refer to the energies of the working medium. Then the magnitude of the entropy change of the hot reservoir is $Q_m/T_m = S_+ - S_-$. The total extractable work in a reversible cycle is then $(T_m - T_-)(S_+ -S_-)$, which is the same as $W_-$ in Eq. (\ref{wm}). The Carnot efficiency of this process is $\eta_m = 1-T_-/T_m$, which is Eq. (\ref{em}). A similar conclusion follows for the other scenario, when we consider two heat reservoirs at temperatures $T_+$ and $T_m (< T_+)$. Thus $T_m$ serves as the effective temperature of one of the two heat reservoirs in an equivalent reversible cycle, which extracts the same amount of work with the same (Carnot) efficiency. Concluding, the main focus of this paper was the comparison of the performance of a reversible heat engine operating between a finite system and an infinite reservoir, upon switching the roles of the source and the sink. We compared the total extracted work in the two cases, and the corresponding efficiency of the engine at those values of the work. Interestingly, we find that the conditions for comparison are determined by basic mathematical inequalities between the means, in particular the AM-GM inequality. The present instance of this inequality does not depend specifically on the nature of the system, as was the case in earlier studies. The efficiency at total work is naturally split into three regimes, based on this inequality. The bounds separating these regimes are variously given as $\eta_C /2$, $\eta_{CA}$ and $\eta_C /(2-\eta_C)$. This highlights a new significance of these expressions for efficiency, which are usually discussed in regard to power output optimization in finite-time models. The utility of our conclusions may also be discussed in the context of the toy engine mentioned in the Introduction.
Thus, for a given pair of temperatures $(T_+, T_-)$, we can characterize system A, or our device, based on the regime $(a)$, $(b)$ or $(c)$ to which it corresponds. This determines how $W_+$ and $W_-$ compare with each other, which further indicates whether $\eta_+$ will be greater or smaller than $\eta_-$. Moreover, in a particular regime, we know from Table II the bounds within which the efficiency at total work is located. Thus, given a choice of system A, the efficiency at total work is restricted within a certain range. Although derived for quasi-static processes, these bounds may serve as benchmarks for tuning the performance of real devices, and can be a useful element in their design. One limitation of our analysis is that we have considered idealized quasi-static processes. In practical cases, engines and other thermodynamic machines work in finite cycle times. Thus an extension of our analysis within an irreversible framework \cite{Izumida2014} may help to see how the above conclusions are retained or modified in finite-time models, at least within linear response or beyond \cite{JohalRai2016}. Another interesting line of enquiry is the connection of the bounds on efficiency with the principles of inductive inference \cite{JRM2015,George2015}. Finally, it is hard to ignore the aesthetic motivation of revealing other, possibly new, inequalities through these investigations; but this is left for future work. \section{Acknowledgements} The author wishes to thank Dr. Renuka Rai for discussions, and Jannat for sparing her blackboard.
\section{Introduction} The photospheric plasma is thought to be in a state of fully developed magnetohydrodynamic turbulence at high magnetic Reynolds numbers. The spectral slope of the turbulent velocity field is believed to be of the order of $-5/3$, as theoretically proposed by \cite{Ko41} and \cite{Obukhov41}, and as is also expected from the theory of nonhelical hydromagnetic turbulence when the magnetic field is moderately strong and therefore noticeably anisotropic \citep{GS95}. For decaying turbulence, on the other hand, \cite{Lee2010} found that the scaling depends on the field strength, and a shallower Iroshnikov--Kraichnan $k^{-3/2}$ spectrum \citep{Iro63,Kraichnan65} occurs for weaker fields and a steeper $k^{-2}$ weak-turbulence spectrum for stronger fields \citep{Gal00,BKT15}; see the reviews by \cite{BS05} and \cite{BN11} for a discussion of the respective phenomenologies in the three cases. The observations of magnetic and velocity fields in the solar atmosphere provide a window to analyze solar hydromagnetic turbulence through their power spectra and compare with earlier work \citep{Abr05, AY10, Sten12, ZhaoChou13}. The technique used to obtain the scale dependence of magnetic helicity through observations is reminiscent of that of \cite{MGS82}, who made the assumption of isotropy to express the Fourier transform of the two-point correlation tensor of the magnetic field in terms of the Fourier transforms of the magnetic field. Their approach made use of one-dimensional spectra obtained from time series of all three magnetic field components and was applied to in situ measurements in the solar wind. The Taylor hypothesis \citep{Taylor38} was used to relate the two-point correlation function in time to one in space. In the work of \cite{Zhang2014, Zhang2016}, again the assumption of isotropy was made, but a full two-dimensional array of magnetic field vectors was used, so the Taylor hypothesis was not invoked. 
They applied this technique to a number of active regions to determine magnetic energy and helicity spectra and their change with time. The current helicity spectrum was estimated from the magnetic helicity spectrum under the assumption of isotropy, and its modulus showed a $k^{-5/3}$ spectrum at intermediate wavenumbers. A similar power law is also obtained for the magnetic energy spectrum. These are largely consistent with expectations from the turbulence simulations discussed by \cite{BS05}. In this letter we compare power spectra of the magnetic field with those of the velocity field inferred from photospheric vector magnetograms and Dopplergrams in solar active regions NOAA~11158, 12266, and the quiet Sun. In addition to the kinetic energy spectrum, we also compute cross helicity spectra. Cross helicity has been determined previously using both theory \citep{Pip11,RKB11,BR13,Yok13} and observations \citep{KPZ07,RKS12,Zha14}, and it may play a direct role in the production of active regions \citep{BGJKR14}. However, cross helicity spectra have previously only been obtained from theory. \section{Basic formalism} We begin by reviewing briefly the method of \cite{Zhang2014}. They introduced the two-point correlation tensor of the magnetic field, $\langle B_i({\bm{x}},t) B_j({\bm{x}}+\bm{\xi},t)\rangle$, and wrote its Fourier transform with respect to $\bm{\xi}$ as \begin{equation} \left\langle\tilde{B}_i({\bm{k}},t)\tilde{B}_j^*\!({\bm{k}}',t)\right\rangle =\Gamma_{ij}({\bm{k}},t)\delta^2({\bm{k}}-{\bm{k}}'), \end{equation} where the tildes indicate Fourier transformation, i.e., $\tilde{B}_i({\bm{k}},t)=\int B_i({\bm{x}},t)\,e^{i{\bm{k}}\cdot{\bm{x}}}d^2x$, and the asterisk denotes complex conjugation.
Under the assumption of isotropy, the spectral correlation tensor $\Gamma_{ij}({\bm{k}},t)$ can be written as \begin{equation} \Gamma_{ij}({\bm{k}},t)=\frac{2E_{\rm M}(k,t)}{4\pi k}(\delta_{ij}-\hat{k}_i\hat{k}_j) +\frac{{\rm i}H_{\rm M}(k,t)}{4\pi k}\varepsilon_{ijk}k_k,\label{eq:helispec5} \end{equation} where $E_{\rm M}(k,t)$ and $H_{\rm M}(k,t)$ are the shell-integrated magnetic energy and helicity spectra, respectively, $\hat{{\bm{k}}}=\bm{k}/k$ is the unit vector of $\bm{k}$, and $k=(k_x^2+k_y^2)^{1/2}$ is the wavenumber. The spectra are normalized such that $\int E_{\rm M}\,{\rm d}k=\langle{\bm{B}}^2\rangle/2$ and $\int H_{\rm M}\,{\rm d}k=\langle{\bm{A}}\cdot{\bm{B}}\rangle$, where ${\bm{A}}$ is the magnetic vector potential with ${\bm{B}}=\bm\nabla\times{\bm{A}}$. The two spectra can also be computed as \citep[cf.][]{BN11} \begin{eqnarray} E_{\rm M}(k)&=&\;{\textstyle{1\over2}}\!\!\!\!\!\!\!\!\sum_{k_- < |{\bm{k}}|\leq k_+} \!\!\!\!\!\! |\tilde{\bm{B}}({\bm{k}})|^2,\label{Evec}\\ H_{\rm M}(k)&=&\;{\textstyle{1\over2}}\!\!\!\!\!\!\!\!\sum_{k_- < |{\bm{k}}|\leq k_+} \!\!\!\!\!\! (\tilde{\bm{A}}\cdot\tilde{\bm{B}}^\ast+\tilde{\bm{A}}^\ast\cdot\tilde{\bm{B}}), \label{EHvec} \end{eqnarray} where $k_\pm=k\pm\delta k/2$ and $\delta k=2\pi/L$ is the wavenumber increment and also the smallest wavenumber in the plane $L^2$ with $L$ being the size of the magnetograms. Following common convention, the magnetic energy density is measured in $\,{\rm G}^2$, so the units of the spectrum $E_{\rm M}(k)$ are $\,{\rm G}^2\,{\rm cm}$ \citep{AY10a}. To compute the magnetic helicity spectrum, \cite{Zhang2014} used the expression \begin{eqnarray} kH_M(k,t)&=&4\pi k\,\mbox{Im}\left\langle\cos\phi_k\Gamma_{yz} -\sin\phi_k\Gamma_{xz}\right\rangle_{\phi_k}, \label{kH_M} \end{eqnarray} where we have defined the polar angle in wavenumber space so that $k_x=k\cos\phi_k$ and $k_y=k\sin\phi_k$. The angle brackets with subscript $\phi_k$ denote averaging over annuli in wavenumber space. 
{Note that only the $xz$ and $yz$ components enter, so \Eq{kH_M} becomes \begin{eqnarray} kH_M(k,t)&=&4\pi k\,\mbox{Im}\left\langle \left(k_x\tilde{B}_y-k_y\tilde{B}_x\right) \tilde{B}_\|^\ast\right\rangle_{\phi_k}. \end{eqnarray} Thus, by introducing \begin{eqnarray} \tilde{A}_\|=(-ik_x\tilde{B}_y+ik_y\tilde{B}_x)/k^2 \equiv\tilde{J}_\|/k^2, \end{eqnarray} with $\tilde{J}_\|$ being the Fourier transform of $J_\|\equiv\partial_x B_y-\partial_y B_x$, we can relate $H_{\rm M}(k,t)$ to the vertical part (indicated by $\|$) of the current helicity spectrum, $H_{\rm C}(k,t)=k^2H_{\rm M}(k,t)$, which was already used in \cite{Zhang2014}. Therefore, instead of working with} Equation~(\ref{EHvec}), we compute from now on \begin{eqnarray} \label{} E_{\rm M}(k)&=&\;{\textstyle{1\over2}}\!\!\!\!\!\!\!\!\sum_{k_- < |{\bm{k}}|\leq k_+} \!\!\!\!\!\! |\tilde{B}_\|({\bm{k}})|^2,\\ H_{\rm M}(k)&=&\;{\textstyle{1\over2}}\!\!\!\!\!\!\!\!\sum_{k_- < |{\bm{k}}|\leq k_+} \!\!\!\!\!\! (\tilde{A}_\|\tilde{B}_\|^\ast+\tilde{A}_\|^\ast\tilde{B}_\|), \end{eqnarray} which is equivalent to \Eqs{Evec}{EHvec}. Note that $E_{\rm M}(k)$ and $H_{\rm M}(k)$ satisfy the realizability condition $2E_{\rm M}\ge k|H_{\rm M}|$, which is also the reason why we always plot the scaled combinations $2E_{\rm M}(k)$ and $kH_{\rm M}(k)$. This allows us to judge how close to fully helical the magnetic field is at each wavenumber. An analysis similar to that of the magnetic field can also be done for the velocity. Only the Doppler velocity field can readily be observed. Thus, we compute vertical kinetic energy and cross helicity spectra as \begin{eqnarray} E_{\rm K}(k)&=&\;{\textstyle{1\over2}}\!\!\!\!\!\!\!\!\sum_{k_- < |{\bm{k}}|\leq k_+} \!\!\!\!\!\! |\tilde{v}_\|({\bm{k}})|^2,\\ H_{\rm X}(k)&=&\;{\textstyle{1\over2}}\!\!\!\!\!\!\!\!\sum_{k_- < |{\bm{k}}|\leq k_+} \!\!\!\!\!\! (\tilde{v}_\|\tilde{B}_\|^\ast+\tilde{v}_\|^\ast\tilde{B}_\|). 
\label{EKEMHX} \end{eqnarray} Defining $q\equiv\sqrt{4\pi\rho_0}$, the realizability condition reads \begin{equation} qE_{\rm K}(k)+E_{\rm M}(k)/q \ge \, |H_{\rm X}(k)|, \label{Elimit} \end{equation} where we assume $\rho_0=10^{-6}\,{\rm g}\,{\rm cm}^{-3}$ for the density and ignore density fluctuations. As $v_\|$ is measured in ${\rm m}\,{\rm s}^{-1}$ and $B_\|$ in G, we have $q=100\,{\rm cm}\,{\rm m}^{-1}\sqrt{4\pi\rho_0}$. We determine $E_{\rm K}(k)$ and $H_{\rm X}(k)$ to study the spectral distribution of the line-of-sight velocity, and its relationship with that of the magnetic field. \begin{figure} \begin{center} \includegraphics[width=.23\textwidth]{Heli_velo_energy_adda_aq.ps} \includegraphics[width=.23\textwidth]{Heli_velo_energy_adda15_aq.ps} \vspace*{0.1mm} \includegraphics[width=.23\textwidth]{Heli_velo_energy_adda_avm.ps} \includegraphics[width=.23\textwidth]{Heli_velo_energy_adda15_avm.ps} \vspace*{-1mm} \includegraphics[width=.23\textwidth]{Heli_velo_energy_adda_a.ps} \includegraphics[width=.23\textwidth]{Heli_velo_energy_adda15_a.ps} \end{center} \caption{Doppler velocity field $v_\|$ (top), longitudinal magnetic field $B_\|$ (middle), and their product $v_\|B_\|$ (bottom), for active regions NOAA~11158 (left, field of view $256''\times256''$) on 2011 February 14 and NOAA~12266 (right, field of view $150''\times150''$) on 2015 January 19. Yellow (blue) shades show positive (negative) values, corresponding to the upward (downward) directions.
\label{fig:corrmagheliveloa} }\end{figure} \begin{figure*} \begin{center} \hspace*{-10mm} \includegraphics[width=80mm]{Heli_velo_energy_adda_b.ps} \hspace*{2mm} \includegraphics[width=80mm]{Heli_velo_energy_adda15_bzhq.ps} \end{center} \caption{{The upper panels show spectra} of magnetic energy $E_{\rm M}(k)$ (black solid lines), normalized magnetic helicity $kH_{\rm M}(k)$ (black dotted lines; {red and blue symbols denote positive and negative values, respectively}) and kinetic energy $E_{\rm Kaq}(k)$ (green dashed lines for kinetic energy of the quiet Sun) and $E_{\rm Ka}(k)$ (green {solid} lines for kinetic energy related to magnetic features only) in active region NOAA~11158 (left) and NOAA~12266 (right). {The lower panels show $qE_{\rm K}(k)+E_{\rm M}(k)/q$ (solid lines) and $H_{\rm X}(k)$ (dashed-dotted lines; red and blue symbols denote positive and negative values, respectively).} \label{fig:corrmaghelivelob} }\end{figure*} For the cross helicity, there is a particular problem when considering bipolar active regions. We expect the cross helicity to be proportional to the mean ambient magnetic field \citep{RKB11}. It will therefore have contributions of opposite signs from the two poles of a bipolar region, leading to cancelation; see Figure~\ref{fig:corrmagheliveloa} for visualizations of $v_\|$, $B_\|$, and $v_\|B_\|$ for the active regions NOAA~11158 and 12266. NOAA~12266 has two clearly separated poles. A suitable technique to obtain a spectrum encompassing the entire bipolar region is the two-scale approach of \cite{BPS17}, who applied it to measuring magnetic helicity for the entire solar disk, taking the systematic sign change across the equator into account; see also \cite{Singh18}. This technique allows us to incorporate the sign change as a sinusoidal modulation proportional to $\sin\bm{K}\cdot\bm{x}$ with wavevector $\bm{K}$ and, in principle, arbitrary phase shifts, which are not considered here. 
\EEq{EKEMHX} then becomes \begin{equation} H_{\rm X}(\bm{K},k)=\!\!\!\!\!\!\!\! \sum_{k_- < |{\bm{k}}|\leq k_+}\!\!\!\!\!\!\!\! \tilde{v}_\|(\bm{k}+\bm{K}/2)\tilde{B}_\|^\ast(\bm{k}-\bm{K}/2),\; \label{EKEMHXdx} \end{equation} which is complex, and its real part agrees with \Eq{EKEMHX} for $\bm{K}=\bm{0}$. For a bipolar region aligned in the $x$ direction with an approximate separation $d$, we have $\bm{K}=(\pi/d,0)$. Analogous to \cite{BPS17}, the relevant spectrum is then $-{\rm Im}\,H_{\rm X}(\bm{K},k)$. We return to this below when discussing concrete examples. \section{Comparison of the Spectra} Figure~\ref{fig:corrmagheliveloa} shows the Doppler velocity and the corresponding longitudinal component of the vector magnetic field in the active regions NOAA~11158 on 2011 February 14 and NOAA~12266 on 2015 January 19, as observed by the Helioseismic and Magnetic Imager (HMI) on board the {\em Solar Dynamics Observatory} ({\em SDO}). To obtain a representative nearly stationary pattern, we averaged over a continuous series of Dopplergrams observed during 20~minutes. The contribution from the five-minute oscillation is thus essentially removed. However, projection effects have not been compensated for. A prominent Evershed flow can be seen in the strong magnetic structures of NOAA~11158 (S20W17) due to its location south-west of disk center. A pattern of small-scale velocity can be seen near the active region. A similar situation can also be found in the active region NOAA~12266 (S06E06) located near disk center. These small-scale velocity patterns are indicated by arrows in Figure~\ref{fig:corrmagheliveloa}. Thus, the flow fields in Figure~\ref{fig:corrmagheliveloa} are expected to have contributions both from the active regions and the quiet Sun in its proximity.
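As an illustration of how \Eqs{Evec}{EHvec}, \Eq{EKEMHX} and \Eq{EKEMHXdx} can be evaluated in practice, the following sketch (ours; the actual analysis pipeline may differ) computes shell-integrated spectra and the two-scale cross helicity for synthetic periodic $N\times N$ arrays, and verifies them on the single-mode example $v_\|=\sin k_+x$, $B_\|=\cos k_-x$ discussed below:

```python
import numpy as np

def shell_spectra(B, v, L):
    """Shell-integrated E_M(k), E_K(k) and cross helicity H_X(k) from
    periodic 2D arrays B (longitudinal field) and v (Doppler velocity)."""
    N = B.shape[0]
    dk = 2 * np.pi / L                        # wavenumber increment
    Bk = np.fft.fft2(B) / N**2                # so that sum |Bk|^2 = <B^2>
    vk = np.fft.fft2(v) / N**2
    k1 = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    KX, KY = np.meshgrid(k1, k1, indexing="ij")
    shell = np.rint(np.sqrt(KX**2 + KY**2) / dk).astype(int).ravel()
    EM = np.bincount(shell, weights=(0.5 * np.abs(Bk)**2).ravel())
    EK = np.bincount(shell, weights=(0.5 * np.abs(vk)**2).ravel())
    HX = np.bincount(shell, weights=np.real(vk * np.conj(Bk)).ravel())
    return EM, EK, HX

def two_scale_HX(B, v, L, Kidx):
    """Two-scale cross helicity H_X(K,k); Kidx = K/dk as integers.
    Pairs v at m = k + K/2 with B at m - K via a periodic index shift."""
    N = B.shape[0]
    dk = 2 * np.pi / L
    Bk = np.fft.fft2(B) / N**2
    vk = np.fft.fft2(v) / N**2
    k1 = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    KX, KY = np.meshgrid(k1, k1, indexing="ij")
    prod = vk * np.conj(np.roll(Bk, shift=Kidx, axis=(0, 1)))
    kmid = np.sqrt((KX - Kidx[0] * dk / 2)**2 + (KY - Kidx[1] * dk / 2)**2)
    shell = np.rint(kmid / dk).astype(int).ravel()
    return (np.bincount(shell, weights=prod.real.ravel())
            + 1j * np.bincount(shell, weights=prod.imag.ravel()))

# demo: v = sin(k_+ x), B = cos(k_- x) with k = 5, K = 2 (L = 2 pi, dk = 1)
N, L = 64, 2 * np.pi
xg = np.arange(N) * L / N
X = np.meshgrid(xg, xg, indexing="ij")[0]
v, B = np.sin(6 * X), np.cos(4 * X)

EM, EK, HX = shell_spectra(B, v, L)
assert np.isclose(EM[4], 0.25) and np.isclose(EK[6], 0.25)
assert np.max(np.abs(HX)) < 1e-12            # no spectral overlap at K = 0
HX2 = two_scale_HX(B, v, L, (2, 0))
assert np.isclose(-HX2[5].imag, 0.25)        # signal appears at shell k = 5
```

The single-mode pair produces no ordinary cross helicity (the modes do not overlap), while the two-scale estimator picks up the expected signal at the intermediate wavenumber, mimicking the bipolar-region situation.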
\begin{figure*} \begin{center} \includegraphics[width=.49\textwidth]{Heli_velo_energy_adda_b_dx3.ps} \includegraphics[width=.49\textwidth]{Heli_velo_energy_adda15_b_dx1.ps} \end{center} \caption{{Cross helicity $H_{\rm X}(\bm{K},k)$ with two-scale} analysis for active regions NOAA~11158 (left) and NOAA~12266 (right) using $\bm{K}/\delta k=(3,0)$ and $(1,0)$, respectively; see \Eq{EKEMHXdx}. {Red and blue symbols denote positive and negative values, respectively.} Dotted lines with open symbols refer to strong fields only (the line-of-sight magnetic field exceeds $\pm50\,{\rm G}$), while solid lines with closed symbols apply to all points. $\bm{K}$ is given in units of $\delta k=2\pi/L$ defined below \Eq{EHvec}. \label{fig:corrmaghelivelob_dx} }\end{figure*} Figure~\ref{fig:corrmaghelivelob} shows magnetic energy and scaled helicity spectra, $E_{\rm M}(k)$ and $kH_{\rm M}(k)$, respectively, inferred from the photospheric vector magnetograms of NOAA 11158 \citep[cf.][]{Zhang2014} and 12266, along with the corresponding kinetic energy spectra inferred from the Dopplergrams of Figure~\ref{fig:corrmagheliveloa}. {To reduce fluctuations in the helicity spectra, we have averaged their values within logarithmically spaced wavenumber intervals.} In the range of wavenumbers $k=0.5$--$2.5\,{\rm Mm}^{-1}$, the mean slopes of $E_{\rm M}(k)$ and $k|H_{\rm M}(k)|$ are $-1.8$ and $-3.4$, respectively, for active region NOAA~11158, and $-1.5$ and $-2.1$, respectively, for active region NOAA~12266. The temporal variation of the slopes of $E_{\rm M}(k)$ and $k|H_{\rm M}(k)|$ of active regions was discussed by \cite{Zhang2016}. The green dashed lines $E_{\rm Kaq}(k)$ show kinetic energy spectra of the full fields of view, i.e., of the two active regions together with the surrounding quiet Sun. A similar result was shown by \cite{ZhaoChou13} with the continuous high spatial resolution Doppler observations of the Sun by {\em SDO}/HMI.
They determined the power from the convective flows in the $k\omega$ diagram and found that the location of the convective peak is shifted toward lower wavenumbers in the power spectrum obtained from the sunspot compared to that of the quiet Sun. The green { solid} lines $E_{\rm Ka}(k)$ in Figure~\ref{fig:corrmaghelivelob} show { kinetic energy} spectra of the active region restricted to the magnetic structures with $|B_\||> 50\,{\rm G}$ only. (The subscript ``a'' refers to active region.) We find that the uprise of kinetic energy near $k=2$--$5\,{\rm Mm}^{-1}$ is now removed for both active regions, and the slopes of the spectra of kinetic energy are consistent with a $k^{-5/3}$ spectrum. This removal is done by setting the velocity to zero at those points where $|B_\||<50\,{\rm G}$ just prior to taking the Fourier transform. In the range of wavenumbers $k=0.5$--$2.5\,{\rm Mm}^{-1}$, the mean slopes of $E_{\rm Ka}$ are $-1.4$ for NOAA~11158 and $-1.5$ for NOAA~12266. They are similar to those of magnetic energy $E_{\rm M}(k)$ and scaled magnetic helicity $kH_{\rm M}(k)$ in the photosphere in the range $k=1$--$2\,{\rm Mm}^{-1}$. While for NOAA~11158 the slope of $E_{\rm Kaq}$ is $0.05$, for NOAA~12266 it is $-0.2$ in the range $k=0.5$--$2.5\,{\rm Mm}^{-1}$. (The subscript ``aq'' refers to the combination of active region and quiet Sun.) The kinetic energy spectra $E_{\rm Kaq}(k)$ in the range $k=2$--$5\,{\rm Mm}^{-1}$ reflect the typical scale of the quiet Sun, which has contributions from the granulation. The $H_{\rm X}(k)$ spectra of \Eq{EKEMHX} have slopes similar to those of $qE_{\rm K}+E_{\rm M}/q$, with $|H_{\rm X}|$ being about 10 times smaller than the limit given by the total energy; see \Eq{Elimit}. However, $H_{\rm X}(k)$ has no definite sign. This is different when taking the cancellation from the bipolarity into account. The two-scale method correlates functions whose wavenumbers differ by a small amount.
Consider as an example $v_\|=\sin k_+x$ and $B_\|=\cos k_-x$ with $k_\pm=k\pm K/2$, where $K=\pi/d=2\pi/L$ is the lowest wavenumber of the domain; then the product \begin{equation} 2v_\|B_\|=\sin Kx + \sin2kx, \label{modulation} \end{equation} with $K$ being the $x$ component of $\bm{K}$, has a low-wavenumber modulation proportional to $\sin Kx$ with sign changes between the bipoles separated by $d$. Comparing with \Figp{fig:corrmagheliveloa}{f} for NOAA~12266, where $d\approx50\,{\rm Mm}$, the sign of $v_\|B_\|$ changes from positive values for $x<0$ to negative ones for $x>0$. This is the opposite of what is implied by the example given in \Eq{modulation}. Therefore, we expect $-{\rm Im}\,H_{\rm X}(\bm{K},k)$ itself to be negative. This is indeed the case; see \Fig{fig:corrmaghelivelob_dx}, which shows that $-{\rm Im}\,H_{\rm X}(\bm{K},k)$ is mostly negative for NOAA~12266. For NOAA~11158, there are two pairs of bipoles interlaced. Each pair has an approximate separation $d\approx L/3$, but the interlacing is not ideal and the pairs partially overlap. We tried $K_x/\delta k=1$--$3$, but none had as clean a spectrum as NOAA~12266. In \Fig{fig:corrmaghelivelob_dx} we show for NOAA~11158 the spectrum for $K_x/\delta k=3$, which had the least sign changes and is mostly negative --- in broad agreement with a $\sin3\delta kx$ modulation in \Figp{fig:corrmagheliveloa}{e}. For NOAA~12266, on the other hand, $K_x/\delta k=1$ was found to give the least sign changes --- in agreement with a $\sin\delta kx$ modulation in \Figp{fig:corrmagheliveloa}{f}. A negative correlation between a large-scale field proportional to $B_0\sin Kx$ and a correlation of the form $\bra{v_\|B_\|}\approx-(\eta_{\rm T}/H_\rho)\sin Kx$ is theoretically expected \citep{RKB11}, where $\eta_{\rm T}$ is the turbulent magnetic diffusivity and $H_\rho$ is the density scale height.
Using $\bra{v_\|B_\|}=\int H_{\rm X}(\bm{K},k)\,{\rm d}k\approx -11,000\,{\rm G}\,{\rm m}\,{\rm s}^{-1}$ and $B_0\approx B_{\rm rms}\approx300\,{\rm G}$, we have $|\bra{v_\|B_\|}|/B_0\approx40\,{\rm m}\,{\rm s}^{-1}$, and $H_\rho=1\,{\rm Mm}$ then yields $\eta_{\rm T}\approx4\times10^{11}\,{\rm cm}^2\,{\rm s}^{-1}$, which is about 10 times smaller than what was found by \cite{RKS12}. The slope for NOAA~12266 is between $-3$ and $-4$, which is much steeper than that for NOAA~11158, where the slope was $-2$. A steeper slope, especially at small $k$, is of interest when interpreting the $H_{\rm X}(\bm{K},k)$ spectrum as an indicator for inverse cascading being a possible mechanism for forming magnetic flux concentrations \citep{BGJKR14}. However, there are sign changes at $k\approx0.3$ and $1\,{\rm Mm}^{-1}$, which may not be compatible with this interpretation. On the other hand, a jump similar to that at $k\approx1\,{\rm Mm}^{-1}$ has also been seen in the simulations; see Figure~19 of \cite{BGJKR14}. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{hmi_vel_sav_Zh20150119c.ps} \end{center} \caption{ {The upper panel shows spectra} of magnetic energy $E_{\rm M}(k)$ (black solid lines), kinetic energy $E_{\rm Kmq}(k)$ (green dashed lines, for the whole velocity in the field of view of the quiet Sun) and $E_{\rm Km}(k)$ (green solid lines for the velocity related with magnetic features only). { The lower panel shows $qE_{\rm K}(k)+E_{\rm M}(k)/q$ (solid lines) and $H_{\rm X}(k)$ (dashed-dotted lines; red and blue symbols denote positive and negative values, respectively).} \label{fig:corrmagheliveloc} \vspace*{6mm} }\end{figure} For comparison with the velocity field of active regions, we show in Figure~\ref{fig:corrmagheliveloc} the kinetic energy spectra for the velocity field of the quiet Sun near the center of the solar disk.
$E_{\rm Kmq}$ shows the spectrum of the whole velocity field in the field of view of the quiet Sun, and it is almost consistent with the results of \cite{ZhaoChou13}, while $E_{\rm Km}$ shows that of the velocity field for magnetic fields with $|B_{\|}| >50\,{\rm G}$ only. (Here the subscript ``mq'' refers to the whole region in the field of view in the quiet sun.) Due to the averaging over a series of continuous Dopplergrams observed during 20~minutes, the contribution from the five-minute oscillation has effectively been removed in our analyzed velocity field. $E_{\rm M}(k)$ shows the spectrum of the magnetic energy inferred from the longitudinal component of the magnetic field. In the range of wavenumbers $k=0.5$--$2.5\,{\rm Mm}^{-1}$, the mean slope of magnetic energy $E_{\rm M}$ is $-1.0$ and that of $E_{\rm Km}$ is $0.2$. (Here the subscript ``m'' refers to the kinetic energy relative to areas with magnetic field only.) These are shallower than the $k^{-5/3}$ Kolmogorov spectrum. \section{Conclusions} {Our combined analysis of velocity and magnetic fields has shown that, within active regions, kinetic and magnetic energy spectra have similar slopes at intermediate scales. Here the field is also close to maximally helical. The magnetic helicity spectra of \cite{Zhang2014,Zhang2016} are found to be identical to those composed of just $A_\|$ and $B_\|$. This is analogous to the similarly constructed current helicity, $\bra{J_\| B_\|}$, which is frequently employed in solar physics. The helicity spectra are gauge-independent owing to the assumed horizontal periodicity and independence of $z$. This assumption affects only the smallest wavenumbers. Unlike $\bra{J_\|B_\|}$, which captures helicity effects only on small scales or high wavenumbers, here we have access to the helicity decomposition into different wavenumbers. } { Quite analogously, we have constructed cross helicity spectra. 
Their signs switch with the sign of the mean vertical magnetic field, which is why we have adopted the two-scale approach here. This approach is familiar from mean-field dynamo theory \citep{RS75} and has recently been applied to solar magnetic helicity spectra \citep{BPS17,Singh18}, but this is the first time it has been applied to cross helicity. It allows us to capture properties of global spectra, avoiding cancellation from the different polarities of bipolar regions. This approach worked particularly well for NOAA~12266, where the separation of the two polarities is about half the extent of the magnetogram. By contrast, in NOAA~11158, two pairs of polarities are interlaced, making the direct application of the two-scale approach less straightforward. For NOAA~12266, the spectral slope of the cross helicity is found to be $-4$.} A steep slope is suggestive of inverse cascading of cross helicity \citep{BGJKR14}, which is a possible mechanism for concentrating magnetic flux into spots. Further work incorporating a larger sample of active regions and a global analysis would be an important future extension of this work. \begin{acknowledgements} This study is supported by grants from the National Natural Science Foundation (NNSF) of China under the project grants 11673033, 11427803, 11427901 and Huairou Solar Observing Station of National Astronomical Observatories, Chinese Academy of Sciences. This work has been supported in part by the NSF Astronomy and Astrophysics Grants Program (grant 1615100), and the University of Colorado through its support of the George Ellery Hale visiting faculty appointment. \end{acknowledgements}
\section{Introduction} \subsection{The model} This paper is concerned with the Novikov equation \begin{align}\label{novikov_eq} u_t-u_{txx}+4u^2u_x=3uu_xu_{xx}+u^2u_{xxx}, \qquad t\in\mathbb{R},\,x\in\mathbb{R}, \end{align} where $u(t)$ is a real-valued function. This equation was derived by Novikov \cite{No} in a symmetry classification of nonlocal partial differential equations with cubic nonlinearity. By using the perturbative symmetry approach \cite{MiNo}, which yields necessary conditions for a PDE to admit infinitely many symmetries, Novikov was able to isolate equation \eqref{novikov_eq} and derive its first few symmetries. Later, he was able to find an associated scalar Lax-pair, proving the integrability of the equation. Moreover, Hone and Wang have recently found a matrix Lax-pair representation of the Novikov equation, more specifically, they have shown that equation \eqref{novikov_eq} arises as the zero curvature equation $F_t-G_x+[F,G]=0$ which is the compatibility condition for the linear system \cite{HoWa} \begin{align*} \Psi_x=F(y,\lambda)\Psi \quad \hbox{and} \quad \Psi_t=G(y,\lambda)\Psi, \end{align*} where $y=u-u_{xx}$ and the matrices $F$ and $G$ are defined by \begin{align*} F=\left(\begin{matrix} 0 & \lambda y & 1 \\ 0 & 0 & \lambda y \\ 1 & 0 & 0 \end{matrix}\right), \quad G= \left(\begin{matrix} \tfrac{1}{3\lambda^2}-uu_x & \tfrac{1}{\lambda}u_x-\lambda u^2y & u_x^2 \\ \tfrac{1}{\lambda}u & -\tfrac{2}{3\lambda^2} & -\tfrac{1}{\lambda}u_x-\lambda u^2y \\ -u^2 & \tfrac{1}{\lambda}u & \tfrac{1}{3\lambda^2}+uu_x \end{matrix}\right). \end{align*} Moreover, by using this matrix Lax-pair representation, Hone and Wang showed how the Novikov equation is related by a reciprocal transformation to a negative flow in the Sawada-Kotera hierarchy. 
\medskip The Novikov equation possesses infinitely many conservation laws, among which the most important ones are given by \begin{align}\label{cons_e} E(u):=\int_\mathbb{R}\left(u^2(t,x)+u_x^2(t,x)\right)dx \quad \hbox{and}\quad F(u):=\int \Big(u^4+2u^2u_x^2-\dfrac{1}{3}u_x^4\Big)dx. \end{align} Solutions of \eqref{novikov_eq} are known to satisfy several symmetry properties: shifts in space and time, i.e. if $u(t,x)$ is a solution to equation \eqref{novikov_eq} then so is $u(t+t_0,x+x_0)$, as well as space-time inversion, i.e. if $u(t,x)$ is a solution of \eqref{novikov_eq}, then $u(-t,-x)$ is another solution. \medskip One of the most important features of the Novikov equation is the existence of \emph{peakon} and \emph{antipeakon} solutions \cite{HoWa}, which are peaked traveling waves with a discontinuous derivative at the crest. For any $c>0$ they are explicitly given by \begin{align}\label{peakons_nov_def} \pm\varphi_{c}(x-ct)=\pm\sqrt{c}\varphi(x-ct):=\pm\sqrt{c}e^{-\vert x-ct\vert}. \end{align} Moreover, the Novikov equation also exhibits multi-peakon--antipeakon solutions. More precisely, for any given natural number $n\in\mathbb{N}$, let us denote by $\vec{q}=(q_1,...,q_n)$ and $\vec{p}=(p_1,...,p_n)$ the position and momenta vectors, respectively.
Then, the $n$-peakon solution on the line is given by $ u(t,x)=\sum_{i=1}^n p_i(t)\exp(-\vert x-q_i(t)\vert)$, where $p_i$ and $q_i$ satisfy the following system of $2n$ differential equations \begin{align}\label{multipeak} \begin{cases} \dfrac{dq_i}{dt}=u^2\big(t,q_i(t)\big)=\displaystyle\sum_{j,k=1}^np_jp_ke^{-\vert q_i-q_j\vert-\vert q_i-q_k\vert}, \\ \displaystyle\dfrac{dp_i}{dt}=-p_i(t)u\big(t,q_i(t)\big)u_x\big(t,q_i(t)\big)=p_i\sum_{j,k=1}^np_jp_k\operatorname{sgn}(q_i-q_j)e^{-\vert q_i-q_j\vert-\vert q_i-q_k\vert}.\end{cases} \end{align} Similar expressions exist for periodic peakons and multipeakons, but we shall not pursue this direction in this work. On the other hand, equation \eqref{novikov_eq} can be rewritten in a \textit{compact} form in terms of its \emph{momentum density} as \begin{align}\label{nov_eq_y} y_t+u^2y_x+3uu_xy=0, \quad \hbox{where} \quad y:=u-u_{xx}, \end{align} which can be regarded as a cubic nonlinear generalization of the celebrated Camassa-Holm (CH) equation \cite{CH,FuFo}, \begin{align}\label{CH} u_t-u_{txx}=uu_{xxx}+2u_xu_{xx}-3uu_x,\quad \hbox{equivalently} \quad y_t+uy_x+2u_xy=0, \end{align} or the Degasperis-Procesi (DP) equation \cite{DP}, \begin{align}\label{DP} u_t-u_{txx}=uu_{xxx}+3u_xu_{xx}-4uu_x, \quad \hbox{equivalently} \quad y_t+uy_x+3u_xy=0. \end{align} It is worth noticing that the last three equations, written in terms of their momentum densities, are transport equations for $y(t)$. As a consequence, initial data with signed initial momentum density give rise to solutions with the same property. This is one of the key points in proving that smooth and sufficiently fast decaying initial data with signed initial momentum density give rise to global solutions.
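The multipeakon system \eqref{multipeak} can be integrated numerically; the following sketch (ours, with the convention $\operatorname{sgn}(0)=0$) uses a standard fourth-order Runge--Kutta step, checks that a single peakon travels with speed $c=p^2$ as in \eqref{peakons_nov_def}, and checks that the energy $E(u)=2\sum_{i,j}p_ip_je^{-\vert q_i-q_j\vert}$, obtained by evaluating \eqref{cons_e} on the peakon ansatz, is conserved along a two-peakon solution:

```python
import numpy as np

def rhs(state, n):
    """Right-hand side of the multipeakon system: q_i' = u(q_i)^2,
    p_i' = -p_i u(q_i) u_x(q_i), with u = sum_j p_j exp(-|x - q_j|)."""
    q, p = state[:n], state[n:]
    dq, dp = np.zeros(n), np.zeros(n)
    for i in range(n):
        w = np.exp(-np.abs(q[i] - q))            # exp(-|q_i - q_j|)
        u = np.dot(p, w)                         # u(q_i)
        ux = -np.dot(p * np.sign(q[i] - q), w)   # u_x(q_i), with sgn(0) = 0
        dq[i] = u * u
        dp[i] = -p[i] * u * ux
    return np.concatenate([dq, dp])

def rk4(state, n, dt, steps):
    """Classical fourth-order Runge-Kutta integration of the ODE system."""
    for _ in range(steps):
        k1 = rhs(state, n)
        k2 = rhs(state + 0.5 * dt * k1, n)
        k3 = rhs(state + 0.5 * dt * k2, n)
        k4 = rhs(state + dt * k3, n)
        state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

def energy(q, p):
    """E(u) = 2 sum_{ij} p_i p_j exp(-|q_i - q_j|) for u = sum p_i e^{-|x-q_i|}."""
    return 2 * sum(p[i] * p[j] * np.exp(-abs(q[i] - q[j]))
                   for i in range(len(q)) for j in range(len(q)))

# a single peakon keeps p = sqrt(c) and its position moves with speed c
c = 2.0
s = rk4(np.array([0.0, np.sqrt(c)]), 1, 1e-3, 1000)   # integrate to t = 1
assert np.isclose(s[0], c) and np.isclose(s[1], np.sqrt(c))

# two peakons with positive momenta: E(u) is conserved along the flow
s0 = np.array([-5.0, 0.0, 1.5, 0.5])                  # (q1, q2, p1, p2)
E0 = energy(s0[:2], s0[2:])
s1 = rk4(s0, 2, 1e-3, 2000)                           # integrate to t = 2
assert np.isclose(energy(s1[:2], s1[2:]), E0, rtol=1e-6)
```

The initial data and step sizes here are illustrative choices; for both momenta positive the momentum density is a non-negative measure, so no collision occurs on the integration interval.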
\medskip Regarding the CH and the DP equations, both can be derived as models for the propagation of unidirectional shallow water waves over a flat bottom by writing the Green-Naghdi equations in Lie-Poisson Hamiltonian form and then making an asymptotic expansion which preserves the Hamiltonian structure \cite{AlLa,CH,CoLa,Jo}. Moreover, both of them can be written in Hamiltonian form as \[ \partial_tE'(u)=-\partial_xF'(u), \] where for the Camassa-Holm equation $E(u)$ and $F(u)$ are given by \[ E_{CH}(u):=\int u^2+u_x^2 \quad \hbox{and} \quad F_{CH}(u):=\int u^3+uu_x^2 \] while for the Degasperis-Procesi equation they are given by \[ E_{DP}(u):=\int yv=\int 5v^2+4v_x^2+v_{xx}^2 \quad \hbox{and}\quad F_{DP}(u):=\int u^3, \] where $v:=(4-\partial_x^2)^{-1}u$. Moreover, both of them belong to the so-called $b$-family introduced by Degasperis, Holm and Hone in \cite{DeHoHo}, \[ u_t-u_{txx}=bu_xu_{xx}+uu_{xxx}-(b+1)uu_x. \] In \cite{MiNo} it was shown that the $b$-family corresponds to an integrable equation only when $b=2,3$, which corresponds exactly to the CH and the DP equations, respectively. \medskip On the other hand, the Novikov equation, as well as the CH and the DP equations, can also be written in nonlocal form in the following way. From now on we shall denote by $p(x)$ the fundamental solution of $1-\partial_x^2$ in $\mathbb{R}$, that is $p:=\tfrac{1}{2}e^{-\vert x\vert}$. Then, we can rewrite \eqref{novikov_eq} as \begin{align}\label{nov_eq_2} u_t+u^2u_x=-p*\left(3uu_xu_{xx}+2u_x^3+3u^2u_x\right), \end{align} which can be understood as a nonlocal perturbation of a Burgers-type equation \[ u_t+\tfrac{1}{3}(u^3)_x=0, \] or more generally as a nonlinear nonlocal transport equation. This latter fact has many implications; for instance, from the blow-up criteria for transport equations one deduces that singularities are caused by the focusing of characteristics.
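Numerically, the convolution with $p=\tfrac{1}{2}e^{-\vert x\vert}$ in \eqref{nov_eq_2} amounts to inverting the Helmholtz operator $1-\partial_x^2$; a minimal sketch (ours, assuming a periodic domain large enough for boundary effects to be negligible) applies the Fourier symbol $1/(1+k^2)$ and cross-checks it against the explicit convolution:

```python
import numpy as np

N, L = 1024, 40.0
x = (np.arange(N) - N // 2) * (L / N)        # grid centered at x = 0
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

def helmholtz_inv(f):
    """p * f = (1 - d_x^2)^{-1} f, computed with the symbol 1/(1 + k^2)."""
    return np.real(np.fft.ifft(np.fft.fft(f) / (1 + k**2)))

f = np.exp(-x**2)                             # a smooth test function
u = helmholtz_inv(f)

# check that (1 - d_x^2) u reproduces f (exact up to roundoff)
residual = np.real(np.fft.ifft((1 + k**2) * np.fft.fft(u))) - f
assert np.max(np.abs(residual)) < 1e-10

# the same inverse realized as a convolution with p(x) = exp(-|x|)/2
p = 0.5 * np.exp(-np.abs(x))
conv = np.real(np.fft.ifft(np.fft.fft(np.fft.ifftshift(p)) * np.fft.fft(f)))
conv *= L / N                                 # Riemann-sum normalization
assert np.max(np.abs(conv - u)) < 1e-3        # agree up to discretization error
```

The small discrepancy between the two routes comes from sampling the kink of $p$ at the origin; it shrinks with the grid spacing.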
It is worth noticing that, in order to give peakons and multi-peakons a precise meaning as (weak) solutions of the Novikov equation, it is necessary to rewrite equation \eqref{novikov_eq} in the nonlocal form \eqref{nov_eq_2}. In fact, due to their lack of smoothness, they cannot be understood as strong solutions of the equation\footnote{Another way of defining peakons and multi-peakons as weak solutions of the Novikov equation is by rewriting \eqref{novikov_eq} in a derivative form.}. \medskip At this point it is clear that the Novikov equation shares many of its remarkable analytic properties with both the CH and the DP equations, such as the existence of a Lax pair, complete integrability and a bi-Hamiltonian structure \cite{DP,HoWa}; moreover, all of them exhibit both peaked traveling waves and the phenomenon of wave breaking \cite{CH,ChGuLiQu,CoLa,DP,No}. The latter means that the wave profile remains bounded while its slope becomes unbounded. As the authors explain in \cite{ChGuLiQu}, understanding the wave-breaking mechanism is not only of fundamental importance from a mathematical point of view but also of great physical interest, since it would help to provide a key mechanism for localizing energy in conservative systems by forming one or several small-scale spots. Finally, we remark that peakon solutions of the CH and the DP equations have a slightly different form from those of the Novikov equation, given by \begin{align}\label{ch_dp_peakon_def} \widetilde{\varphi}_c(x-ct)=c\varphi(x-ct):=ce^{-\vert x-x_0-ct\vert}, \qquad c\in\mathbb{R}\setminus\{0\},\ x_0\in\mathbb{R}. \end{align} It is worth noticing that, in sharp contrast with the Novikov equation, CH and DP peakons can move in both directions, left and right, simply by changing the sign of $c$, while all Novikov peakons and anti-peakons move to the right.
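Since the nonlocal formulation \eqref{nov_eq_2} rests on $p=\tfrac{1}{2}e^{-\vert x\vert}$ being the fundamental solution of $1-\partial_x^2$, i.e. $\int_{\mathbb{R}} p(x)\big(\phi-\phi''\big)(x)\,dx=\phi(0)$ for any test function $\phi$, here is a quick numerical check of this identity with the (arbitrarily chosen) Gaussian $\phi(x)=e^{-x^2}$ (a sketch, assuming \texttt{scipy}):

```python
import numpy as np
from scipy.integrate import quad

p = lambda x: 0.5 * np.exp(-np.abs(x))             # claimed fundamental solution of 1 - d^2/dx^2
phi = lambda x: np.exp(-x**2)                       # smooth, rapidly decaying test function
phi_xx = lambda x: (4.0 * x**2 - 2.0) * np.exp(-x**2)

f = lambda x: p(x) * (phi(x) - phi_xx(x))
# Split the integral at x = 0, where p has a corner, for better quadrature accuracy.
val = quad(f, -np.inf, 0.0)[0] + quad(f, 0.0, np.inf)[0]
# Since (1 - d^2/dx^2) p = delta_0, the integral must equal phi(0) = 1.
```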
\medskip Regarding the stability of these peaked solitary waves, the first proof of orbital stability was given in the Camassa-Holm case for $H^1$-perturbations, assuming that the associated momentum density defines a non-negative Radon measure \cite{CoMo2}. The orbital stability for perturbations in the whole energy space $H^1(\mathbb{R})$ was proved by a direct approach by Constantin and Strauss in \cite{CoSt} (see also \cite{LiLi} for a proof in the Degasperis-Procesi case). Later, following the ideas in \cite{CoSt,LiLi}, Liu et al. proved the orbital stability of Novikov's peakon solutions under the additional assumption of non-negative initial momentum density \cite{LiLiQu}. In this work we shall prove that this latter hypothesis can be dropped (see Theorem \ref{MT4} below). \medskip From a physical point of view, all of these peakon solutions \eqref{peakons_nov_def} and \eqref{ch_dp_peakon_def} reveal some similarities to the well-known Stokes waves of greatest height, i.e. traveling waves of maximum possible amplitude that are solutions to the governing equations for irrotational water waves \cite{Co,To}. These traveling waves (Stokes waves) are smooth everywhere except at the crest, where the lateral tangents differ. It is therefore important, from both a physical and a mathematical point of view, to study these types of solutions. \subsection{Initial data space} Before stating our main results we need to introduce the functional spaces to which our initial data shall belong. Following the ideas of \cite{CoMo,EM1,EM2,Mo,Pa} we define \[ Y:=\big\{u\in H^1(\mathbb{R}): \ u-u_{xx}\in\mathcal{M}_b\big\}, \] where $\mathcal{M}_b$ denotes the space of Radon measures with finite total variation on $\mathbb{R}$. Moreover, from now on we shall denote by $Y_+$ the subspace defined by $Y_+:=\{u\in Y: \ u-u_{xx}\in\mathcal{M}_{b}^+\}$, where $\mathcal{M}_{b}^+$ denotes the space of non-negative finite Radon measures on $\mathbb{R}$.
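A basic feature of $Y_+$, proved below via the representation $v=p*y$ of a function in terms of its momentum density $y=v-v_{xx}\geq0$, is the pointwise bound $\vert v_x\vert\leq v$. A quick numerical illustration with the (arbitrarily chosen) non-negative density $y(x)=e^{-x^2}$ (a sketch, assuming \texttt{scipy}; the sample points are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

y = lambda s: np.exp(-s**2)   # arbitrary non-negative momentum density

def v_and_vx(x):
    # v(x)  = (1/2)(I_minus + I_plus) and v_x(x) = (1/2)(I_plus - I_minus),
    # where I_minus, I_plus are the two one-sided exponential averages of y.
    # Since y >= 0, both integrals are non-negative, whence |v_x| <= v.
    I_minus = quad(lambda s: np.exp(s - x) * y(s), -np.inf, x)[0]
    I_plus = quad(lambda s: np.exp(x - s) * y(s), x, np.inf)[0]
    return 0.5 * (I_minus + I_plus), 0.5 * (I_plus - I_minus)

checks = []
for xx in (-2.0, -0.5, 0.0, 0.7, 3.0):
    v_val, vx_val = v_and_vx(xx)
    checks.append(abs(vx_val) <= v_val)
```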
A crucial remark in what follows is that, for any function $v\in C_0^\infty(\mathbb{R})$ we have \begin{align}\label{positive_mom_1} v(x)&=\dfrac{1}{2}\int_{-\infty}^x e^{x'-x}(v-v_{xx})(x')dx'+\dfrac{1}{2}\int_x^\infty e^{x-x'}(v-v_{xx})(x')dx' \end{align} and \begin{align}\label{positive_mom_2} v_x(x)&=-\dfrac{1}{2}\int_{-\infty}^x e^{x'-x}(v-v_{xx})(x')dx'+\dfrac{1}{2}\int_x^\infty e^{x-x'}(v-v_{xx})(x')dx'. \end{align} Therefore, if $v-v_{xx}\geq 0$ on $\mathbb{R}$ we conclude that $\vert v_x\vert\leq v$. Thus, by density of $C_0^\infty(\mathbb{R})$ in $Y$, we deduce the same property for functions $v\in Y_+$. \begin{rem} We recall the following standard estimate, which shall be useful in the sequel: \[ \Vert u\Vert_{W^{1,1}}=\Vert p*(u- u_{xx})\Vert_{W^{1,1}}\lesssim \Vert u-u_{xx}\Vert_{\mathcal{M}}, \] and hence it also holds that \[ \Vert u_{xx}\Vert_{\mathcal{M}}\leq \Vert u\Vert_{L^1}+\Vert u-u_{xx}\Vert_{\mathcal{M}}. \] Thus, we have $ Y(\mathbb{R})\hookrightarrow \left\{u\in W^{1,1}(\mathbb{R}): \, u_x\in \mathrm{BV}(\mathbb{R})\right\}$, where $\mathrm{BV}(\mathbb{R})$ denotes the space of functions with bounded variation. \end{rem} \subsection{Main results} As mentioned before, in this work we intend to address both the orbital and the asymptotic stability problem for a train of peakons. \subsubsection{Orbital stability in the energy space} Our first result is an improvement of the orbital stability property for the single peakon solution. Indeed, by some slight improvements and modifications of the proof in \cite{LiLiQu}, we shall show that the sign assumption on the momentum density is artificial, and hence can be removed. This is essentially an observation based on the fact that the proof follows the one in \cite{CoSt} for the Camassa-Holm equation. \begin{thm}[Orbital stability of peakons in the energy space]\label{MT4} Let $c>0$ be fixed.
There exists $ 0<\varepsilon^\star\ll \min\{1,\sqrt{c}\}$ small enough such that if \[ u\in L^\infty((-T,T), H^1(\mathbb{R})\cap W^{1,4}(\mathbb{R})) \] is a solution to the Novikov equation \eqref{nov_eq_2} emanating from initial data $u_0\in H^1(\mathbb{R})\cap W^{1,4}(\mathbb{R})$ satisfying \begin{align}\label{initial_cond_hyp_peakon} \left\Vert u_0-\varphi_{c}\right\Vert_{H^1}+\Vert u_{0,x}-\varphi_c'\Vert_{L^4}\leq \varepsilon^4 ,\quad \hbox{for some}\quad 0<\varepsilon<\varepsilon^\star, \end{align} such that $E(\cdot)$ and $F(\cdot)$ are conserved along the trajectory, then the following estimate holds: \[ \quad \sup_{t\in[-T,T]}\Vert u(t)-\varphi_c(\cdot-\xi(t))\Vert_{H^1}\leq 2c^{3/8}\big(4+\textbf{c}\big)\varepsilon, \ \quad \textbf{c}:=\max\{1,c^{3/8}\}, \] where $\xi(t)\in\mathbb{R}$ is any point where the function $u(t,\cdot)$ attains its maximum. \end{thm} \begin{rem} It is worth noticing that, in contrast to the Camassa-Holm case, continuity of the solution with respect to time is not needed here. Specifically, we only need the quantities $E(\cdot)$ and $ F(\cdot) $ to be conserved. Indeed, for any $ v\in H^1\cap W^{1,4}$, it holds that \[ \Vert v-\varphi\Vert_{H^1} \to 0 \quad\hbox{as}\quad \vert E(v)-E(\varphi)\vert+\vert F(v)-F(\varphi)\vert \to 0, \] whereas the analogous result $\Vert v-\widetilde\varphi\Vert_{H^1} \to 0 $ as $ \vert E_{CH}(v)-E_{CH}(\widetilde\varphi)\vert +\vert F_{CH}(v)-F_{CH}(\widetilde\varphi)\vert \to 0 $ only holds if we additionally assume that $ v$ belongs to some $ L^\infty$-neighborhood of $ \widetilde\varphi$. \end{rem} Once we have proved the orbital stability of a single peakon in some functional space, we may consider the orbital stability problem for a train of peakons under the same hypotheses. In this regard, we obtain the analogue of the last theorem for peakon train solutions of the Novikov equation.
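The proof of Theorem \ref{MT4} makes repeated use of the explicit values $E(\varphi_c)=2c$ and $F(\varphi_c)=\tfrac{4}{3}c^2$ (recalled in Section \ref{sec_MT4}, where $E(u)=\int u^2+u_x^2$ and $F(u)=\int u^4+2u^2u_x^2-\tfrac{1}{3}u_x^4$). These are elementary integrals; a symbolic double check exploiting the evenness of $\varphi_c$ (a sketch, assuming \texttt{sympy}):

```python
from sympy import symbols, exp, sqrt, integrate, oo, simplify, Rational

x = symbols('x', positive=True)
c = symbols('c', positive=True)

# On (0, oo) the peakon is phi_c(x) = sqrt(c) * exp(-x); every integrand below
# is even in x, so the integral over R is twice the integral over (0, oo).
phi = sqrt(c) * exp(-x)
phix = phi.diff(x)

E = 2 * integrate(phi**2 + phix**2, (x, 0, oo))
F = 2 * integrate(phi**4 + 2 * phi**2 * phix**2 - Rational(1, 3) * phix**4, (x, 0, oo))
# Expected: E = 2c and F = (4/3) c^2
```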
\begin{thm}[Orbital stability of a train of peakons in the energy space]\label{MT2} Let $c_1,\dots,c_n$ be $n$ real numbers such that $0<c_1<...<c_n$. There exist $\varepsilon^\star>0$ small enough, $L_0>0$ and a universal constant $C>0$ such that if for some $0<T\leq +\infty$, \[ u\in C([0,T),H^1(\mathbb{R}))\cap L^\infty([0,T),W^{1,4}(\mathbb{R})) \] is a solution to the Novikov equation \eqref{nov_eq_2} emanating from initial data $u_0\in H^1(\mathbb{R})\cap W^{1,4}(\mathbb{R})$ such that $E(\cdot)$ and $F(\cdot)$ are conserved along the trajectory and satisfying \begin{align}\label{initial_cond_hyp_train} \left\Vert u_0-\sum_{i=1}^n \varphi_{c_i}(\cdot-z_i)\right\Vert_{H^1}+\left\Vert u_{0,x}-\sum_{i=1}^n \varphi_{c_i}'(\cdot-z_i)\right\Vert_{L^4}\leq \varepsilon^4 ,\quad \hbox{with}\quad 0<\varepsilon<\varepsilon^\star, \end{align} for some array of numbers $\{z_i\}_{i=1}^n\subset\mathbb{R}$ with $z_{i+1}-z_i\geq L$ where $L\geq L_0$, then the following holds: There exist $C^1$ functions $x_1(t),...,x_n(t):[0,T)\to\mathbb{R}$ such that \begin{align}\label{orb_concl} \sup_{t\in[0,T)}\left\Vert u(t,\cdot)-\sum_{i=1}^n\varphi_{c_i}(\cdot-x_i(t))\right\Vert_{H^1}\lesssim \varepsilon+L^{-1/8}. \end{align} Additionally, we have $x_{i+1}(t)-x_i(t)>\tfrac{L}{2}$ for all $t\in[0,T)$. \end{thm} \begin{rem} Notice that the local existence assumptions of Theorems \ref{MT4} and \ref{MT2} are satisfied, in particular, for initial data in $Y(\mathbb{R})$ (see Theorem \ref{theorem_lwp} below). \end{rem} On the other hand, in \cite{HoLuSz} Hone et al. studied the asymptotic behavior of multipeakon solutions in the case when no antipeakons are allowed. In particular, the limits of $p_i(t)$ and $\dot{q}_i(t)$ in \eqref{multipeak} as $t$ goes to $+\infty$ are determined.
As a corollary of the previous theorem, together with the study made in \cite{HoLuSz}, we obtain the orbital stability of the whole manifold \[ \mathcal{N}:=\left\{v(x)=\sum_{i=1}^n p_ie^{-\vert x-q_i\vert}: \ p_1,...,p_n\in \mathbb{R}_+, \ q_1<...<q_n\right\}. \] Concretely, we have the following result: \begin{cor}[Orbital stability of not-well-ordered multi-peakons]\label{cor_MT2} Let $p_1^0,...,p_n^0$ be $n$ positive real numbers and $q_1^0<...<q_n^0$. For any $\alpha>0$ and any $\delta>0$ there exists $\varepsilon>0$ such that for any initial data $u_0\in Y_+(\mathbb{R})$ satisfying \begin{align}\label{hyp_cor_orb} \left\Vert u_0-\sum_{i=1}^np_i^0\exp\big(-\vert \cdot-q_i^0\vert\big)\right\Vert_{H^1}\leq \varepsilon \quad \hbox{with}\quad \Vert y_0\Vert_{\mathcal{M}}\leq \alpha, \end{align} the following holds: For all times $t\in\mathbb{R}$ we have \begin{align}\label{first_part_cor} \inf_{\vec{q}\in\mathbb{R}^n,\,\vec{p}\in\mathbb{R}_+^n}\left\Vert u(t,\cdot)-\sum_{i=1}^np_i\exp\big(-\vert \cdot-q_i\vert\big)\right\Vert_{H^1}\leq \delta. \end{align} Moreover, there exists $T>0$ sufficiently large such that \[ \hbox{for all }\, t\geq T, \quad \inf_{\vec{q}\in\mathcal{G}}\left\Vert u(t,\cdot)-\sum_{i=1}^n\lambda_i\exp\big(-\vert \cdot-q_i\vert\big)\right\Vert_{H^1}\leq \delta, \] and \[ \hbox{for all }\, t\leq- T, \quad \inf_{\vec{q}\in\mathcal{G}}\left\Vert u(t,\cdot)-\sum_{i=1}^n\lambda_{n+1-i}\exp\big(-\vert \cdot-q_i\vert\big)\right\Vert_{H^1}\leq \delta, \] where $\mathcal{G}:=\{\vec{q}\in\mathbb{R}^n: \ q_1<...<q_n\}$ and the parameters $0<\lambda_1<...<\lambda_n$ are the square roots of the eigenvalues of the matrix $TPEP$, where: \[ P:=\mathrm{diag}\left(p_1^0,...,p_n^0\right), \quad E:=\left(e^{-\vert q_i^0-q_j^0\vert}\right)_{i,j=1}^n \ \hbox{ and } \,\ T:=\left(1+\mathrm{sgn}(j-k)\right)_{j,k=1}^n.
\] \end{cor} \subsubsection{Asymptotic stability results} Regarding the asymptotic stability of peaked traveling waves, the case of a single peakon has recently been addressed by the author via a proof based on a rigidity property of the Novikov equation (see \cite{Pa}, Theorems $1.2$ and $1.3$). \begin{thm}[\cite{Pa}]\label{AS_single_peakon} Let $c>0$ be fixed. There exists a universal constant $0<\varepsilon^\star\ll1$ such that for any $\beta\in(0,c)$ and initial data $u_0\in Y_+$ satisfying \begin{align}\label{AS_smallness_hip} \Vert u_0-\varphi_c\Vert_{H^1}\leq \varepsilon^\star\Big(\tfrac{\beta}{c}\Big)^8, \end{align} the following property holds: There exists $c^*>0$ with $\vert c-c^*\vert\ll c$ and a $C^1$ function $x:\mathbb{R}\to\mathbb{R}$ satisfying $\dot{x}(t)\to c^*$ as $t\to+\infty$ and such that \[ u(t,\cdot+x(t))\rightharpoonup \varphi_{c^*} \ \hbox{ in }\ H^1(\mathbb{R}) \ \hbox{ as } \ t\to+\infty, \] where\footnote{By this we mean that $u\in C(\mathbb{R},H^1(\mathbb{R}))$ with $y\in C_{ti}(\mathbb{R},\mathcal{M}_b(\mathbb{R}))$. See Definition \ref{def_c_ti} below.} $u\in C_{ti}(\mathbb{R},Y_+)$ is the global weak solution to equation \eqref{nov_eq_2} associated to $u_0$. Moreover, for any $z\in\mathbb{R}$ the following strong convergence holds: \begin{align}\label{AS_strong_h1_conv_peakon_mthm} \lim_{t\to+\infty}\Vert u(t)-\varphi_{c^*}(\cdot-x(t))\Vert_{H^1((-\infty,z)\cup(\beta t,+\infty))}=0. \end{align} \end{thm} As mentioned before, the main ingredient in the proof of Theorem \ref{AS_single_peakon} is a rigidity property of the Novikov equation ensuring that every $H^1$-almost localized solution (cf. Definition $1.1$ in \cite{Pa}) to equation \eqref{nov_eq_2} is actually a peakon. This latter property has been proved by introducing a new Lyapunov functional not related to the (not conserved) momentum of the equation. The main result of the present work is the asymptotic stability of peakon train solutions.
\begin{thm}[Asymptotic stability of a train of peakons]\label{MT5} Let $c_1,\dots,c_n$ be $n$ positive real numbers satisfying $c_1<...<c_n$ and $\beta\in(0,\tfrac{c_1}{4})$. There exist $L_0>0$ sufficiently large and $\varepsilon^\star>0$ small enough such that if a solution $u\in C_{ti}(\mathbb{R},Y_+(\mathbb{R}))$ to the Novikov equation associated to some initial data $u_0\in Y_+(\mathbb{R})$ satisfies \begin{align}\label{smallness_hyp_train_peakons} \left\Vert u_0-\sum_{i=1}^n\varphi_{c_i}(\cdot-z_i)\right\Vert_{H^1}\leq \varepsilon^4, \quad \hbox{with } \ 0<\varepsilon<\varepsilon^\star, \end{align} for some $\{z_i\}_{i=1}^n\subset\mathbb{R}$ satisfying $z_{i+1}-z_i\geq L$ for some $L\geq L_0$, then the following holds: There exist $n$ positive real numbers $c_1^\star<...<c_n^\star$ and $C^1$ functions $x_1^\star,...,x_n^\star:\mathbb{R}\to\mathbb{R}$ such that for all $i=1,...,n,$ \begin{align*} \dot{x}^\star_i(t)\to c_i^\star \ \hbox{ as } \ t\to+\infty \ \ \hbox{ and } \ \ u\big(t,\cdot+x_i^\star(t)\big)\rightharpoonup \varphi_{c_i^\star} \ \hbox{ in } \ H^1 \ \hbox{ as } \ t\to+\infty. \end{align*} Moreover, for any $z\in\mathbb{R}$ the following strong convergence holds: \begin{align}\label{MT_5_conclusions} \lim_{t\to+\infty}\left\Vert u(t)-\sum_{i=1}^n\varphi_{c_i^\star}\big(\cdot-x_i^\star(t)\big)\right\Vert_{H^1(\mathcal{A}_t)}=0, \quad \hbox{with } \ \mathcal{A}_t:=(-\infty,z)\cup(\beta t,+\infty). \end{align} \end{thm} Finally, gathering the latter theorem with the asymptotics obtained by Hone et al. in \cite{HoLuSz}, we obtain the following asymptotic stability result for the whole manifold $\mathcal{N}$. \begin{cor}[Asymptotic stability of not-well-ordered multi-peakons]\label{cor_MT5} Let $p_1^0,...,p_n^0$ be $n$ positive real numbers, $q_1^0,...,q_n^0$ any sequence of real numbers satisfying $q_1^0<...<q_n^0$, and let us consider $0<\lambda_1<...<\lambda_n$ defined as in Corollary \ref{cor_MT2}.
For any $\alpha>0$ and any $\delta>0$ there exists $\varepsilon>0$ sufficiently small such that if $u_0\in Y_+(\mathbb{R})$ satisfies \[ \left\Vert u_0-\sum_{i=1}^np_i^0\exp\big(-\vert \cdot-q_i^0\vert\big)\right\Vert_{H^1}\leq \varepsilon \quad \hbox{with}\quad \Vert y_0\Vert_{\mathcal{M}}\leq \alpha, \] then the following holds: There exist $0<p_1<...<p_n$ and $C^1$ functions $q_1,...,q_n:\mathbb{R}\to\mathbb{R}$ satisfying \[ \quad \vert p_i-\lambda_i\vert\leq\delta \quad \hbox{and}\quad \lim_{t\to+\infty}\dot{q}_i(t)=p_i^2, \quad \hbox{for all } i=1,...,n, \] such that \[ u(t)-\sum_{i=1}^n\varphi_{p_i}(\cdot-q_i(t))\xrightarrow{t\to+\infty} 0 \ \hbox{ in } \ H^1\left(\big(\tfrac{\lambda_1}{4}t,+\infty\big)\right). \] \end{cor} \begin{rem} Notice that, since the Novikov equation \eqref{nov_eq_2} is invariant under the transformation $u(t,x)\mapsto -u(t,x)$, we also deduce the orbital and asymptotic stability results for a train of antipeakon profiles with perturbations in the class of $H^1$ functions with momentum density belonging to $\mathcal{M}_b^-(\mathbb{R})$. \end{rem} \subsection{Organization of this paper} This paper is organized as follows. In Section \ref{preliminaries} we introduce some definitions and state a series of results needed in our analysis, for instance, the local and global well-posedness results in the class of solutions we shall work with. In Section \ref{sec_MT4} we prove the orbital stability result for a single peakon solution. In Section \ref{sec_MT2}, following the ideas of the previous section, we prove the orbital stability of a train of peakons. Finally, in Section \ref{sec_MT5} we prove the asymptotic stability of peakon trains for $Y_+(\mathbb{R})$ perturbations.
\medskip \section{Preliminaries}\label{preliminaries} \subsection{Preliminaries and definitions} In order to carry out regularization arguments, in the sequel we shall need the following family of functions: Let $\{\rho_n\}_{n\in\mathbb{N}}$ be the family of mollifiers defined by \begin{align}\label{def_rho} \rho_n(x):=n\left(\int_\mathbb{R}\rho(\xi)d\xi\right)^{-1}\rho(nx), \quad \hbox{ where } \quad \rho(x):=\begin{cases} e^{\frac{1}{x^2-1}} & \hbox{for } \vert x\vert<1 \\ 0 & \hbox{for } \vert x\vert\geq 1. \end{cases} \end{align} It is worth noticing that for any $n\in\mathbb{N}$ we have $\Vert \rho_n\Vert_{L^1}=1$. On the other hand, from now on we shall denote by $C_b(\mathbb{R})$ the set of bounded continuous functions on $\mathbb{R}$, and by $C_c(\mathbb{R})$ the set of compactly supported continuous functions on $\mathbb{R}$. Throughout this paper we shall also need the following definitions. \begin{defn}[Weak convergence of measures]\label{def_weakly_conv} We say that a sequence $\{\nu_n\}\subseteq\mathcal{M}$ converges weakly towards $\nu\in\mathcal{M}$, which we shall denote by $\nu_n\rightharpoonup \nu$, if \[ \langle \nu_n,\phi\rangle\to\langle \nu,\phi\rangle, \quad \hbox{for any }\, \phi\in C_c(\mathbb{R}). \] \end{defn} \begin{rem}\label{weak_weakstar_conv} Notice that we are adopting the standard measure-theoretic notation for the \emph{weak convergence} of measures. Nevertheless, we recall that from a functional-analytic point of view this convergence corresponds to the weak-* convergence on Banach spaces. \end{rem} \begin{defn}[Tight and weak continuity of measure-valued functions]\label{def_c_ti} Let $I\subseteq \mathbb{R}$ be an interval. \begin{enumerate} \item We say that $f\in C_{ti}(I,\mathcal{M}_b)$ if for any $\phi\in C_b(\mathbb{R})$ the map $t\mapsto\langle f(t),\phi\rangle$ is continuous on $I$.
\item We say that $f\in C_w(I,\mathcal{M})$ if for any $\phi\in C_c(\mathbb{R})$ the map $t\mapsto\langle f(t),\phi\rangle$ is continuous on $I$. \end{enumerate} \end{defn} \begin{defn}[Weak convergence in $C_{ti}(I)$] Let $I\subseteq \mathbb{R}$ be an interval. We say that a sequence $f_n\rightharpoonup f$ in $C_{ti}(I,\mathcal{M}_b)$ if for any $\phi \in C_b(\mathbb{R})$ we have \[ \langle f_n(\cdot),\phi\rangle \to \langle f(\cdot),\phi\rangle \,\hbox{ in } C(I). \] \end{defn} \subsection{Global well-posedness} In the proof of Theorem \ref{MT5} we shall need to approximate non-smooth solutions of equation \eqref{nov_eq_2} by sequences of smooth solutions. In this regard, we shall need a global well-posedness result for a class of smooth solutions. In \cite{WuYi}, following the ideas of the seminal work of Constantin and Escher \cite{CoEs} on the Camassa-Holm equation, Wu and Yin proved global well-posedness of smooth solutions for initial data with non-negative momentum density. \begin{thm}[\cite{WuYi}]\label{GWP_smooth} Let $u_0\in H^s$ for $s\geq 3$, with non-negative momentum density $y_0$ belonging to $L^{1}(\mathbb{R})$. Then, equation \eqref{novikov_eq} has a unique global strong solution \[ u\in C(\mathbb{R},H^s(\mathbb{R}))\cap C^1(\mathbb{R},H^{s-1}(\mathbb{R})). \] Moreover, $E(u)$ and $F(u)$ are two conservation laws. Additionally, denoting by $y(t):=u(t)-u_{xx}(t)$ the momentum density, we have that $y(t)$ and $u(t)$ are non-negative for all times $t\in\mathbb{R}$ and $\vert u_x(t,\cdot)\vert\leq u(t,\cdot)$ on $\mathbb{R}$. \end{thm} Unfortunately, since peakon profiles do not belong\footnote{Actually, they do not belong to $W^{1+\frac{1}{p},p}(\mathbb{R})$ for any $p\in [1,+\infty)$.
However, peakon profiles do belong to $W^{1,\infty}(\mathbb{R})$, where $W^{1,\infty}(\mathbb{R})$ denotes the space of Lipschitz functions.} to $H^{3/2}(\mathbb{R})$, they do not enter into this framework either, and hence this theorem is not useful for our purposes. Nevertheless, by following the work of Constantin and Molinet \cite{CoMo}, in the same work Wu and Yin also proved a global well-posedness theorem for a class of functions containing peakons. This result shall be crucial in our analysis. However, we shall need a slightly improved version of this theorem, which we state below. \begin{thm}[\cite{WuYi}]\label{theorem_gwp} Let $u_0\in H^1(\mathbb{R})$ be a function satisfying $y_0:=(u_0-u_{0,xx})\in\mathcal{M}_b^+(\mathbb{R})$. Then, the following properties hold: \begin{enumerate} \item[1.] \emph{\textbf{Uniqueness and global existence:}} There exists a global weak solution \[ u\in C(\mathbb{R},H^1(\mathbb{R}))\cap C^1(\mathbb{R},L^2(\mathbb{R})), \] associated to the initial data $u(0)=u_0$ such that its momentum density \[ y(t,\cdot):=u(t,\cdot)-u_{xx}(t,\cdot)\in C_{ti}(\mathbb{R},\mathcal{M}_b^+(\mathbb{R})). \] Additionally, $E(u)$ and $F(u)$ are conservation laws. Moreover, the solution is unique in the class \[ \{f\in C(\mathbb{R},H^1(\mathbb{R}))\}\cap\{f-f_{xx}\in L^\infty(\mathbb{R},\mathcal{M}_b^+)\}. \] \item[2.] \emph{\textbf{Continuity with respect to the initial data in $H^1(\mathbb{R})$:}} For any sequence $\{u_{0,n}\}_{n\in\mathbb{N}}$ bounded in $Y_+(\mathbb{R})$ such that $ u_{0,n}\to u_0 \,\hbox{ in } H^1(\mathbb{R})$, the following holds: For any $T>0$, the family of solutions $\{u_{n}\}$ to equation \eqref{nov_eq_2} associated to $\{u_{0,n}\}$ satisfies \begin{align}\label{convergence_h1_ti} u_n\to u \,\hbox{ in }\, C([-T,T],H^1(\mathbb{R})) \quad \hbox{and} \quad y_{n}\rightharpoonup y \,\hbox{ in }\, C_{ti}([-T,T],\mathcal{M}).
\end{align} \end{enumerate} \end{thm} \begin{proof} We refer to \cite{Mo,Mo2}, Propositions $2.2$, for a proof of this theorem in both the Camassa-Holm and the $b$-family case. Notice that the same proof applies to the Novikov equation, thanks to Theorem \ref{GWP_smooth} and the fact that the first point of the statement was proven in \cite{WuYi}, except for the fact that $y\in C_{ti}(\mathbb{R},\mathcal{M}_b^+)$, which can be proven in exactly the same fashion as in \cite{Mo}. \end{proof} \subsection{Local well-posedness} To study the orbital stability of a train of peakons, since we are not assuming positivity of the momentum density, we shall need a suitable well-posedness theorem. In this regard, in \cite{HiHo} the following local well-posedness result for smooth solutions was derived. \begin{thm}[\cite{HiHo}]\label{lwp_smooth} Let $u_0\in H^s$ with $s>\tfrac{3}{2}$. Then, there exist $T>0$ and a unique solution $ u\in C([0,T],H^s(\mathbb{R}))$ to equation \eqref{nov_eq_2} associated to $u_0$. Moreover, the data-to-solution map depends continuously on $u_0$. \end{thm} Nevertheless, as we discussed before, neither peakons nor peakon trains belong to this class of initial data. However, in \cite{Da} Danchin noticed that, in the Camassa-Holm case, the maximal existence time of a solution $u\in H^s(\mathbb{R})$ for $s>\tfrac{3}{2}$ is bounded from below by a positive number depending only on the total variation of the initial momentum density, which allowed him to obtain local weak solutions without the positivity assumption on the initial momentum density. The following theorem states the analogous result for the Novikov equation. \begin{thm}\label{theorem_lwp} Let $u_0\in H^1(\mathbb{R})$ be a function satisfying $y_0:=(u_0-u_{0,xx})\in\mathcal{M}_b(\mathbb{R})$. Then, there exists a time $T=T(\Vert y_0\Vert_{\mathcal{M}})>0$ and a unique solution $u\in C_{ti}([-T,T],Y(\mathbb{R}))$ to equation \eqref{nov_eq_2} associated to the initial data $u_0$.
Moreover, the functionals $E(\cdot)$ and $F(\cdot)$ are conserved along the trajectory. Additionally, if $y_0\in\mathcal{M}_b$ is sign-definite, that is, if $y_0\in\mathcal{M}_b^+$ or $-y_0\in\mathcal{M}_b^+$, then the solution $u(t)$ is global in time. Furthermore, if $\{u_{0,n}\}$ is a sequence in $Y(\mathbb{R})$ satisfying $u_{0,n}\to u_0$ in $Y(\mathbb{R})$, then the corresponding sequence of solutions $\{u_n\}$ to the Novikov equation emanating from $u_{0,n}$ satisfies $u_n\to u$ in $C([-T,T],H^1(\mathbb{R}))$. \end{thm} \begin{proof} As we discussed before, the proof of local existence for weak solutions is mainly contained in \cite{Da} for the Camassa-Holm case. Nevertheless, for the sake of completeness, since it has not been shown for the Novikov equation, we sketch its proof. The idea is to combine the local well-posedness Theorem \ref{lwp_smooth} for smooth solutions with an a priori estimate of the total variation of the momentum density. In fact, let us fix $T<2 \Vert y_0\Vert_{\mathcal{M}}^{-2}$ and consider the family of smooth initial data $u_{n,0}:=\rho_n*u_0\in H^\infty$. Notice that by Young's inequality we have $\Vert y_{n,0}\Vert_{L^1}\leq \Vert y_0\Vert_{\mathcal{M}}$. Then, by Theorem \ref{lwp_smooth} there exists a unique smooth solution \[ u_n\in C([0,T_n),H^\infty(\mathbb{R})) \ \, \hbox{ such that } \ \, u_n\big\vert_{t=0}=u_{n,0}. \] On the other hand, by \eqref{nov_eq_y} we know that $y_n:=u_n-u_{n,xx}$ solves \[ y_{n,t}+u_n^2y_{n,x}+3u_nu_{n,x}y_{n}=0 \ \, \hbox{ and hence } \, \ \partial_t\vert y_{n}\vert +\partial_x(u_n^2\vert y_{n}\vert)=-\vert y_{n}\vert u_nu_{n,x}. \] Thus, after integration in the space variable and by using that $\Vert u_n\Vert_{L^\infty},\Vert u_{n,x}\Vert_{L^\infty}\leq \tfrac{1}{2}\Vert y_n\Vert_{L^1}$ we deduce \[ \dfrac{d}{dt}\Vert y_n(t)\Vert_{L^1}\leq \dfrac{1}{4}\Vert y_n(t)\Vert_{L^1}^3, \] which leads us to \[ \Vert y_n(t)\Vert_{L^1}\leq \left(\dfrac{2\Vert y_{0}\Vert_{\mathcal{M}}^2}{2-t\Vert y_{0}\Vert^2_{\mathcal{M}}}\right)^{1/2}.
\] Finally, notice that the previous estimate gives us a uniform bound on $\Vert u_{n,x}\Vert_{L^\infty}$ for $t\leq\min\{T_n,2\Vert y_0\Vert_{\mathcal{M}}^{-2}\}$, where $T_n$ denotes the maximal existence time of $u_n(t)$. Hence, due to the blow-up criteria, for all $n\in\mathbb{N}$ this uniform bound leads us to \[ \int_0^{T_{y_0}}\Vert u_{n,x}(t)\Vert_{L^\infty}dt<+\infty, \ \, \hbox{ and hence } \ \, T_n\geq T_{y_0}>T, \] where $T_{y_0}:=2\Vert y_0\Vert_{\mathcal{M}}^{-2}$. This establishes the a priori estimate, which yields local existence. The remaining part of the proof follows from standard compactness arguments to justify the passage to the limit. We refer to \cite{Da}, Theorem $5$, for a detailed proof of this last part (the interested reader may also consult \cite{Mo4}, Theorem $4$). \end{proof} \smallskip \section{Orbital stability of peakons in the energy space}\label{sec_MT4} In this section we show that, with some slight improvements and modifications of the proof in \cite{LiLiQu}, we are able to obtain Theorem \ref{MT4}. For the sake of simplicity we split the proof into several lemmas, which we shall state and prove in the next subsection. \subsection{General estimates}\label{general_estimates_peakon} In this subsection we shall prove some general formulas and estimates holding for any function belonging to $H^1(\mathbb{R})\cap W^{1,4}(\mathbb{R})$, which are the minimum requirements for $E(u)$ and $F(u)$ to be well-defined. \begin{lem}\label{lemma_three_one} For any $v\in H^1(\mathbb{R})$ and any $z\in\mathbb{R}$ we have \begin{align}\label{formula_energy} E(v)-E(\varphi_c)=\Vert v-\varphi_c(\cdot-z)\Vert_{H^1}^2+4\sqrt{c}(v(z)-\sqrt{c}).
\end{align} \end{lem} \begin{rem} Notice that the previous lemma ensures that the minimum of the $H^1(\mathbb{R})$-distance between $v$ and the set $\{\varphi_c(\cdot-\xi):\ \xi\in\mathbb{R}\}$ is reached exactly at any point $\xi\in\mathbb{R}$ where $v(\cdot)$ attains its maximum. \end{rem} \begin{proof} The proof follows from direct computations. In fact, recalling that $\varphi_c(\cdot-z)$ satisfies $\varphi_c''=\varphi_c-2\sqrt{c}\,\delta_z$ in the distributional sense, after integration by parts we obtain \begin{align*} \Vert v-\varphi_c(\cdot-z)\Vert_{H^1}^2&=E(v)+E(\varphi_c)-2\int v(x)\varphi_c(x-z)dx-2\int v_x(x)\varphi_c'(x-z)dx \\ & =E(v)+E(\varphi_c)-4\sqrt{c}v(z)=E(v)-E(\varphi_c)-4\sqrt{c}(v(z)-\sqrt{c}). \end{align*} The proof is complete. \end{proof} \begin{lem}\label{lemma_three_two} Let $v\in H^1(\mathbb{R})\cap W^{1,4}(\mathbb{R})$ and let $\xi\in\mathbb{R}$ be any point where $v(\cdot)$ attains its maximum, that is, $v(\xi)=\max_\mathbb{R} v(x)$. Then, denoting this quantity by $M:=v(\xi)$ we have \[ F(v)\leq \dfrac{4}{3} M^2E(v)-\dfrac{4}{3}M^4. \] \end{lem} \begin{proof} Let us start by introducing some notation. From now on we denote by $g$ and $h$ the functions given by \[ g(x):=\begin{cases} v(x)-v_x(x), & x<\xi, \\ v(x)+v_x(x), & x>\xi, \end{cases} \qquad h(x):=\begin{cases} v^2-\tfrac{2}{3}vv_x-\tfrac{1}{3}v_x^2, & x<\xi, \\ v^2+\tfrac{2}{3}vv_x-\tfrac{1}{3}v_x^2, & x>\xi. \end{cases} \] Then, on the one hand, by direct computations we have \begin{align*} \int h(x)g^2(x)dx&=\int_{-\infty}^{\xi} \big(v^2-\tfrac{2}{3}vv_x-\tfrac{1}{3}v_x^2\big)(v-v_x)^2dx \\ & \quad +\int_\xi^{+\infty}\big(v^2+\tfrac{2}{3}vv_x-\tfrac{1}{3}v_x^2\big)(v+v_x)^2dx=:\mathrm{I}+\mathrm{II}.
\end{align*} Thus, rearranging and simplifying terms we obtain \begin{align*} \mathrm{I}&=\int_{-\infty}^{\xi}\big(v^4+2v^2v_x^2-\tfrac{1}{3}v_x^4-\tfrac{8}{3}v^3v_x\big)dx=\int_{-\infty}^{\xi}\big(v^4+2v^2v_x^2-\tfrac{1}{3}v_x^4\big)dx-\dfrac{2}{3}M^4. \end{align*} Similarly, rearranging and simplifying terms we get \begin{align*} \mathrm{II}=\int_\xi^{+\infty}\big(v^4+2v^2v_x^2-\tfrac{1}{3}v_x^4\big)dx-\dfrac{2}{3}M^4. \end{align*} Plugging the last two formulas together we obtain \begin{align}\label{F_first_id} \int h(x)g^2(x)dx=F(v)-\dfrac{4}{3}M^4. \end{align} On the other hand, by using $a^2+b^2\geq 2ab$ we have \[ v^2\pm\dfrac{2}{3}vv_x-\dfrac{1}{3}v_x^2\leq \dfrac{4}{3}v^2. \] Therefore, by using the latter inequality we deduce $h(x)\leq \tfrac{4}{3}v^2$ and hence \begin{align}\label{F_second_id} \int h(x)g^2(x)dx&\leq \dfrac{4}{3}\int v^2(x)g^2(x)dx=\dfrac{4}{3}\int_{-\infty}^\xi v^2(v-v_x)^2dx+\dfrac{4}{3}\int_\xi^{+\infty}v^2(v+v_x)^2dx\nonumber \\ & \leq \dfrac{4}{3}M^2\int_{-\infty}^\xi \big(v^2+v_x^2-2vv_x\big)dx+\dfrac{4}{3}M^2\int_\xi^{+\infty}\big(v^2+v_x^2+2vv_x\big)dx\nonumber \\ & = \dfrac{4}{3}M^2E(v)-\dfrac{8}{3}M^4. \end{align} Gathering \eqref{F_first_id} with \eqref{F_second_id} we obtain the desired result. \end{proof} The following lemma gives us an estimate of the distance between the evaluations of $E$ and $F$ at $u$ and $\varphi_c$ in terms of the distance between $u_0$ and $\varphi_c$ in $H^1\cap W^{1,4}$. \begin{lem}\label{E_F_differences} Let $v\in H^1(\mathbb{R})\cap W^{1,4}(\mathbb{R})$ be any function satisfying $\Vert v-\varphi_c\Vert_{H^1}+\Vert v_x-\varphi_c'\Vert_{L^4}<\epsilon$ for some $0<\epsilon\ll\min\{1,c\}$. Then, \begin{align*} \vert E(v)-E(\varphi_c)\vert\leq4\sqrt{c}\epsilon \quad \hbox{and}\quad \vert F(v)-F(\varphi_c)\vert \leq 120c^{3/2}\epsilon. \end{align*} \end{lem} \begin{proof} Let us start by estimating the difference of the energies.
First of all, we recall that \begin{align}\label{recall_E_F} E(\varphi_c)=2c \quad \hbox{and}\quad F(\varphi_c)=\tfrac{4}{3}c^2. \end{align} Hence, by using the hypothesis, the triangle inequality and the previous identities we obtain \[ \Vert v\Vert_{H^1}\leq \Vert \varphi_c\Vert_{H^1}+\epsilon\leq 2\sqrt{c} \quad \hbox{and}\quad \Vert v_x\Vert_{L^4}\leq \Vert \varphi_c'\Vert_{L^4}+\epsilon\leq \sqrt{c}. \] Then, by using the reverse triangle inequality we get \begin{align*} \vert E(v)-E(\varphi_c)\vert &\leq \big(\Vert v\Vert_{H^1}+\Vert \varphi_c\Vert_{H^1}\big)\Big\vert\Vert v\Vert_{H^1}-\Vert \varphi_c\Vert_{H^1}\Big\vert \\ & \leq 4\sqrt{c}\Vert v-\varphi_c\Vert_{H^1}\leq 4\sqrt{c}\epsilon. \end{align*} For the second estimate let us start by rearranging the integral terms involved in $F(\cdot)$. In fact, it is easy to see that \begin{align*} \vert F(v)-F(\varphi_c)\vert&=\left\vert\int \left(v^4+2v^2v_x^2-\tfrac{1}{3}v_x^4\right)-\int\left(\varphi_c^4+2\varphi_c^2\varphi_c'^2-\tfrac{1}{3}\varphi_c'^4\right)\right\vert \\ & \leq \left\vert\int \big(v^2+v_x^2\big)^2-\big(\varphi_c^2+\varphi_c'^2\big)^2\right\vert+\dfrac{4}{3}\left\vert\int \big(v_x^4-\varphi_c'^4\big)\right\vert=:\mathrm{I}+\mathrm{II}. \end{align*} Now notice that, on the one hand, by using H\"older's and the triangle inequality, together with the Sobolev embedding $H^1\hookrightarrow L^4$, we obtain \begin{align*} \mathrm{I}&=\left\vert\int \big(v^2+v_x^2+\varphi_c^2+\varphi_c'^2\big)\big(v^2+v_x^2-\varphi_c^2-\varphi_c'^2\big)\right\vert \\ &=\left\vert\int \big(v^2+v_x^2+\varphi_c^2+\varphi_c'^2\big)\Big((v^2-\varphi_c^2)+(v_x^2-\varphi_c'^2)\Big)\right\vert \\ & \leq 2\big(\Vert v\Vert_{W^{1,4}}^2+\Vert \varphi_c\Vert_{W^{1,4}}^2\big)\big(\Vert v+\varphi_c\Vert_{L^4}\Vert v-\varphi_c\Vert_{L^4}+\Vert v_x+\varphi_c'\Vert_{L^4}\Vert v_x-\varphi_c'\Vert_{L^4}\big) \\ & \leq 100c^{3/2}\epsilon.
\end{align*} On the other hand, by using H\"older's and the triangle inequalities again we get \begin{align*} \mathrm{II}&=\dfrac{4}{3}\left\vert\int (v_x^4-\varphi_c'^4)\right\vert=\dfrac{4}{3}\left\vert\int (v_x^2+\varphi_c'^2)(v_x^2-\varphi_c'^2)\right\vert \\ & = \dfrac{4}{3}\left\vert\int (v_x^2+\varphi_c'^2)(v_x+\varphi_c')(v_x-\varphi_c')\right\vert \\ & \leq 2\Vert v_x +\varphi_c'\Vert_{L^4}^3\Vert v_x-\varphi_c'\Vert_{L^4}\leq 20c^{3/2}\epsilon. \end{align*} Gathering both estimates we obtain the desired result. The proof is complete. \end{proof} We finish this section by estimating the remaining term in formula \eqref{formula_energy} when choosing $\xi\in\mathbb{R}$ to be the natural candidate to study the orbital stability of $\varphi_c$. \begin{lem}\label{lemma_three_four} Let $v\in H^1(\mathbb{R})\cap W^{1,4}(\mathbb{R})$ be arbitrary and let us denote by $M:=\max_\mathbb{R} v(\cdot)$. Assume that for some $0<\epsilon\ll\min\{1,c\}$ the following estimates are satisfied \begin{align}\label{hyp_lemma_three_four} \vert E(v)-E(\varphi_c)\vert\leq4\sqrt{c}\epsilon \quad \hbox{and}\quad \vert F(v)-F(\varphi_c)\vert \leq 120c^{3/2}\epsilon. \end{align} Then, the following estimate holds \[ \vert M-\sqrt{c}\vert\leq 10c^{1/4}\epsilon^{1/2}. \] \end{lem} \begin{proof} First of all we recall that, by Lemma \ref{lemma_three_two}, we have \[ F(v)-\dfrac{4}{3} M^2E(v)+\dfrac{4}{3}M^4\leq0 \ \,\hbox{ and hence }\ \, M^4-M^2E(v)+\dfrac{3}{4}F(v)\leq0. \] Now, let us introduce the fourth-order polynomials $P(q)$ and $\widetilde{P}(q)$ given by \[ P(q):=q^4-E(v)q^2+\dfrac{3}{4}F(v) \quad \hbox{and}\quad \widetilde{P}(q):=q^4- E(\varphi_c)q^2+\dfrac{3}{4}F(\varphi_c). \] By evaluating the latter polynomial at $M$ and rearranging terms we have \[ \widetilde{P}(M)=P(M)+\big(E(v)-E(\varphi_c)\big)M^2-\dfrac{3}{4}\big(F(v)-F(\varphi_c)\big).
\] At this point it is worth noticing that, due to both identities in \eqref{recall_E_F}, we can rewrite $\widetilde{P}(q)$ as: \[ \widetilde{P}(q)= \left(q+\sqrt{c}\right)^2\left(q-\sqrt{c}\right)^2. \] Noticing that $M\geq 0$ and due to the fact that (by Lemma \ref{lemma_three_two}) $P(M)\leq 0$, we deduce that \begin{align}\label{final_estimate} c(M-\sqrt{c})^2\leq (M+\sqrt{c})^2(M-\sqrt{c})^2\leq \big(E(v)-E(\varphi_c)\big)M^2-\dfrac{3}{4}\big(F(v)-F(\varphi_c)\big). \end{align} Therefore, by using both hypotheses in \eqref{hyp_lemma_three_four} and due to the fact that $M\leq 2\sqrt{c}$, we conclude \[ \sqrt{c}\vert M-\sqrt{c}\vert\leq 10c^{3/4}\epsilon^{1/2}. \] The proof is complete. \end{proof} \subsection{Proof of Theorem \ref{MT4}} With all the previous lemmas we are able to conclude the proof of Theorem \ref{MT4}. First of all, we recall that since $E(\cdot)$ and $F(\cdot)$ are conserved along the trajectory, for any $t\in [0,T)$ we have \begin{align}\label{cons_E_F} E(u(t))=E(u_0) \quad \hbox{and}\quad F(u(t))=F(u_0). \end{align} Now, notice that by applying Lemma \ref{lemma_three_one}, for any time $t\in[0,T)$ we have \begin{align}\label{norm_equality} \Vert u(t)-\varphi_c(\cdot-\xi(t))\Vert_{H^1}^2=E(u_0)-E(\varphi_c)-4\sqrt{c}\Big(v(t,\xi(t))-\sqrt{c}\Big), \end{align} where $\xi(t)$ denotes any space-point where $v(t,\cdot)$ attains its maximum, i.e. $M(t)=v(t,\xi(t))=\max_\mathbb{R} v(t,\cdot)$. On the other hand, by using Lemma \ref{E_F_differences} with $u_0$ and $\epsilon=\varepsilon^4$, together with the conservation laws \eqref{cons_E_F}, we deduce that $u(t)$ satisfies the hypotheses of Lemma \ref{lemma_three_four} for all times $t\in[0,T)$. Finally, notice that the right-hand side of estimate \eqref{final_estimate} can be bounded in terms of conserved quantities (together with the uniform bound $M(t)\leq 2\sqrt{c}$), and hence we obtain \[ \vert M(t)-\sqrt{c}\vert\leq 10c^{1/4}\varepsilon^2.
\] Therefore, plugging the latter inequality into \eqref{norm_equality} we conclude \[ \Vert u(t)-\varphi_c(\cdot-\xi(t))\Vert_{H^1}^2\leq 4\sqrt{c}\varepsilon^4+40c^{3/4}\varepsilon^2. \] The proof is complete. \qed \medskip \section{Orbital stability of a train of peakons}\label{sec_MT2} The proof of the orbital stability of peakon trains follows ideas similar to those shown for the single peakon. Thus, as in the previous section, for the sake of simplicity we shall split the proof of Theorem \ref{MT2} into several lemmas, which we shall state and prove in the next subsection. \medskip On the other hand, in order to make the computations more transparent we shall need to introduce some extra notation for sums of peakons. From now on, for any choice of speeds $(c_1,...,c_n)\in \mathbb{R}_+^n$ and any vector $\vec{z}:=(z_1,...,z_n)\in\mathbb{R}^n$ we shall denote by $R_{\vec{z}}$ the sum of $n$ peakons with speeds $c_1,...,c_n$ centered at $z_1,...,z_n$ respectively, that is, \[ R_{\vec{z}}(x):=\sum_{i=1}^n\varphi_{c_i}(x-z_i)=\sum_{i=1}^n\sqrt{c_i}e^{-\vert x-z_i\vert }. \] Now, before getting into the proof we shall need a modulation lemma in order to ensure that no strong interactions between different peakons occur. \subsection{Modulation around multipeakons}\label{modulation_section} Let $\alpha$ and $L$ be any pair of positive real numbers. We consider the neighborhood of radius $\alpha$ around the sum of $n$ well-ordered peakons with speeds $c_1,...,c_n$ separated by at least $L$, i.e., \begin{align}\label{tubular} \mathcal{U}(\alpha,L):=\left\{u\in H^1(\mathbb{R}):\ \inf_{x_j-x_{j-1}>L}\left\Vert u-\sum_{i=1}^n\varphi_{c_i}(\cdot-x_i)\right\Vert_{H^1}<\alpha\right\}.
\end{align} By a bootstrapping argument and due to the continuity of the map $t\mapsto u(t)$ from $[0,T)$ into $H^1(\mathbb{R})$, in order to conclude Theorem \ref{MT2} it is sufficient to prove that the following holds: There exist $C>0$, $\varepsilon^\star>0$ and $L_0>0$ such that for all $L\geq L_0$ and $\varepsilon\in(0,\varepsilon^\star)$, if a solution $u\in C([0,T),H^1(\mathbb{R}))\cap L^\infty([0,T),W^{1,4}(\mathbb{R}))$ to the Novikov equation \eqref{nov_eq_2} satisfying the hypotheses of Theorem \ref{MT2} is such that there exists $t^*\in (0,T)$ with the property: \begin{align}\label{neigh_assumption} u(t)\in\mathcal{U}\left(C(\varepsilon+L^{-1/8}),\tfrac{1}{2}L\right),\ \, \hbox{ for all } \ t\in[0,t^*], \end{align} then, at $t=t^*$ we have \begin{align}\label{neigh_conclusion} u(t^*)\in\mathcal{U}\left(\tfrac{C}{2}(\varepsilon+L^{-1/8}),\tfrac{2}{3}L\right). \end{align} Therefore, in the rest of this section we shall assume that \eqref{neigh_assumption} holds for some $\varepsilon\in(0,\varepsilon^\star)$ and some $L>L_0$, and we shall prove that under these hypotheses we have \eqref{neigh_conclusion}. \medskip The next lemma ensures that the different bumps of $u(t)$ that are individually close to a peakon move away from each other as time evolves. This shall be crucial in the sequel. \begin{lem}\label{mod_lemma} Let $u\in C([0,T),H^1(\mathbb{R}))\cap L^\infty([0,T),W^{1,4}(\mathbb{R}))$ be a solution to the Novikov equation \eqref{nov_eq_2} satisfying the assumptions of Theorem \ref{MT2}.
There exist $\alpha_0>0$ small enough and $L_0>0$ sufficiently large such that for any $0<\alpha<\alpha_0$ the following holds: If for some $t^*\in(0,T)$ the solution $u(t)$ satisfies \begin{align}\label{tubular_neigh} u(t)\in \mathcal{U}(\alpha,\tfrac{1}{2}L) \ \, \hbox{ for all } \ \,t\in[0,t^*], \end{align} then there exist $C^1$ functions $\widetilde{x}_1,...,\widetilde{x}_n:[0,t^*]\to\mathbb{R}$ such that \[ u(t,x)=\sum_{i=1}^n\varphi_{c_i}\big(x-\widetilde{x}_i(t)\big)+v(t,x), \] where $\{\widetilde{x}_i\}_{i=1}^n$ are chosen in such a way that for all $t\in[0,t^*]$ the following orthogonality conditions hold \begin{align}\label{mod_orthogonality} \int_\mathbb{R} v(t,x)\big(\rho_{n_0}*\varphi_{c_i}'\big)(\cdot-\widetilde{x}_i(t))dx=0, \ \, \hbox{ for all } \, i=1,...,n, \end{align} where $\rho_n$ is defined in \eqref{def_rho} and $n_0\in\mathbb{N}$ satisfies: \begin{align}\label{orth_cond_def} \hbox{For all }\, -\tfrac{1}{2}\leq y\leq \tfrac{1}{2}, \quad \int \varphi(\cdot-y)\big(\rho_{n_0}*\varphi'\big)=0 \ \iff \ y=0. \end{align} Moreover, with this choice of shifts there exists $C>0$ such that for all $t\in[0,t^*]$ we have: \begin{align}\label{mod_bound} \left\Vert u(t)-\sum_{i=1}^n\varphi_{c_i}\big(\cdot-\widetilde{x}_i(t)\big)\right\Vert_{H^1}\leq C\alpha^{1/2}. \end{align} Furthermore, for all $i=1,\ldots,n$ (respectively $i=2,\ldots,n$) and all $t\in[0,t^*]$ the following estimates hold: \begin{align}\label{mod_parameter_bound} \left\vert \dot{\widetilde{x}}_i(t)-c_i\right\vert \leq C\alpha^{1/2} \quad \hbox{and }\quad \widetilde{x}_i(t)-\widetilde{x}_{i-1}(t)\geq \tfrac{3}{4}L+\tfrac{1}{2}(c_i-c_{i-1})t.
\end{align} Additionally, by setting the family of time-dependent intervals $\mathcal{J}_i(t):=[y_i(t),y_{i+1}(t)]$, where \begin{align}\label{def_y_i_intervals} y_1=-\infty, \quad y_{n+1}=+\infty, \ \, \hbox{ and }\ \, y_i(t)=\tfrac{1}{2}\big(\widetilde{x}_{i-1}(t)+\widetilde{x}_i(t)\big), \end{align} then, for all $t\in[0,t^*]$ there exists $x_i(t)\in\mathcal{J}_i(t)$ for $i=1,...,n$, such that \begin{align*} u\big(t,x_i(t)\big)=\max_{x\in\mathcal{J}_i(t)}u(t,\cdot) \quad \hbox{and}\quad \big\vert x_i(t)-\widetilde{x}_i(t)\big\vert\leq \tfrac{1}{12}L. \end{align*} \end{lem} \begin{proof} See the appendix, Section \ref{mod_appendix}. \end{proof} \subsection{Almost monotonicity property} By using the previous lemma we shall define modified energy functionals measuring the energy to the right of each bump of $u(t)$. In fact, from Lemma \ref{mod_lemma} we deduce the existence of $C^1$ functions $\widetilde{x}_1,...,\widetilde{x}_n$ satisfying \eqref{mod_orthogonality}-\eqref{mod_parameter_bound}. From now on we shall denote by $\Psi_{i,K}$ the family of weight functions given by \begin{align}\label{def_Psi_i_K} \Psi_{i,K}(t,x):=\Psi\left(\dfrac{x-y_i(t)}{K}\right) \quad \hbox{where}\quad \Psi(x):=\dfrac{2}{\pi}\arctan\left(e^x\right), \end{align} where the family $\{y_i\}_{i=1}^n$ is defined in \eqref{def_y_i_intervals}. Now, for each $i=1,...,n$ and $K>1$, we define the modified energy functional \begin{align*} \mathcal{I}_{i,K}(t)=\mathcal{I}_{i,K}\big(u(t)\big):=\int \big( u^2+u_x^2\big)(t,x)\Psi_{i,K}(t,x)dx. \end{align*} As we discussed before, the idea behind these functionals is to measure the energy of $u(t)$ to the right of each bump. In particular, since $\Psi\geq\tfrac{1}{2}$ on $[0,+\infty)$, for all times $t\in[0,T)$ we have \[ \mathcal{I}_{i,K}(t)\geq \dfrac{1}{2}\Vert u(t)\Vert_{H^1(y_i(t),+\infty)}^2. \] Finally, let us fix $\sigma_0:=\tfrac{1}{4}\min\{c_1,c_2-c_1,...,c_n-c_{n-1}\}$. The following lemma gives us the almost monotonicity property of the energy to the right.
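Before stating it, we note that the elementary properties of the weight $\Psi(x)=\tfrac{2}{\pi}\arctan(e^x)$ invoked above (namely $\Psi(0)=\tfrac{1}{2}$, $\Psi(x)+\Psi(-x)=1$, $\Psi'(x)=\tfrac{1}{\pi}\operatorname{sech}(x)$, and the resulting lower bound for the localized energy) can be sanity-checked numerically. The following sketch is purely illustrative and plays no role in the argument; the sample function $u$ is arbitrary.

```python
import math

def Psi(x):
    # the weight of \eqref{def_Psi_i_K}: (2/pi) * arctan(e^x)
    return (2.0 / math.pi) * math.atan(math.exp(x))

def dPsi(x):
    # closed form of the derivative: sech(x)/pi
    return 1.0 / (math.pi * math.cosh(x))

# Psi(0) = 1/2 and Psi(x) + Psi(-x) = 1, since arctan(s) + arctan(1/s) = pi/2
assert abs(Psi(0.0) - 0.5) < 1e-12
assert all(abs(Psi(x) + Psi(-x) - 1.0) < 1e-12 for x in (-3.0, 0.7, 5.0))

# finite-difference check of Psi'(x) = sech(x)/pi
h = 1e-6
for x in (-2.0, 0.0, 1.5):
    fd = (Psi(x + h) - Psi(x - h)) / (2.0 * h)
    assert abs(fd - dPsi(x)) < 1e-8

# localized-energy lower bound: since Psi >= 1/2 on [0, +oo),
#   int (u^2 + u_x^2) Psi dx >= (1/2) int_0^{+oo} (u^2 + u_x^2) dx,
# checked by a Riemann sum for the (arbitrary) sample u(x) = exp(-x^2)
def u(x):
    return math.exp(-x * x)

def ux(x):
    return -2.0 * x * math.exp(-x * x)

dx = 0.001
xs = [-10.0 + dx * k for k in range(20001)]
dens = [u(x) ** 2 + ux(x) ** 2 for x in xs]
weighted = sum(d * Psi(x) for d, x in zip(dens, xs)) * dx
right_half = sum(d for d, x in zip(dens, xs) if x >= 0.0) * dx
assert weighted >= 0.5 * right_half
```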
\begin{lem}\label{AM_orb_train} Let $u\in C([0,T),H^1(\mathbb{R}))\cap L^\infty([0,T),W^{1,4}(\mathbb{R}))$ be a solution to the Novikov equation \eqref{nov_eq_2} satisfying \eqref{mod_bound} on $[0,t^*]$. Then, there exist $\alpha_0>0$ small enough and $L_0>0$, only depending on $c_1$, such that if $\alpha<\alpha_0$ and $L\geq L_0$ then, for any $4\leq K\lesssim L^{1/2}$, the following holds \begin{align}\label{AM_energy_orbital} \mathcal{I}_{i,K}(t)-\mathcal{I}_{i,K}(0)\leq O\left(e^{-L/8K}\right), \ \, \hbox{ for all } \, i=2,...,n, \, \hbox{ and all } \,t\in[0,t^*]. \end{align} \end{lem} \begin{proof} See the appendix, Section \ref{AM_orb_train_appendix}. \end{proof} \subsection{General estimates} In this subsection we shall prove some general formulas and estimates that hold for any function belonging to $H^1(\mathbb{R})\cap W^{1,4}(\mathbb{R})$. All of these formulas and estimates are just localized versions of the ones in Section \ref{general_estimates_peakon}. In this regard we shall need the following definitions. Let us consider the family of functions $\Phi_i(t,x)$ given by \[ \Phi_1(t,x):=1-\Psi_{2,K}(t,x), \quad \Phi_n(t,x):=\Psi_{n,K}(t,x) \ \, \hbox{ and } \ \, \Phi_i(t,x):=\big(\Psi_{i,K}-\Psi_{i+1,K}\big)(t,x) \ \hbox{ for } \ i=2,\ldots,n-1. \] It is important to point out that this family of functions satisfies $\sum_{i=1}^n\Phi_{i}\equiv1$. On the other hand, notice that for $L,K>0$ large enough and every $i\neq j$ we have \begin{align*} \vert 1-\Phi_{i}\vert\leq 4e^{-L/4K} \ \hbox{ and } \ \vert \Phi_{j}\vert \leq 4e^{-L/4K} \ \, \hbox{ on } \ \, \left[\widetilde{x}_i-\tfrac{L}{4},\widetilde{x}_i+\tfrac{L}{4}\right]. \end{align*} In the sequel we shall need localized versions of the conservation laws in \eqref{cons_e}. In this regard, from now on we denote by $E_i$ and $F_i$ the localized functionals given by \begin{align*} E_i(u)&:=\int \big(u^2+u_x^2\big)(t,x)\Phi_i(t,x)dx, \\ F_i(u)&:=\int \left(u^4+2u^2u_x^2-\dfrac{1}{3}u_x^4\right)(t,x)\Phi_i(t,x)dx.
\end{align*} The next lemma gives us a global identity which shall be crucial in the sequel. \begin{lem}\label{lemm_four_three} For any vector $\vec{z}\in\mathbb{R}^n$ satisfying $z_i-z_{i-1}>\tfrac{1}{2}L$ and any function $v\in H^1(\mathbb{R})$ we have \begin{align}\label{formula_train_energy} E(v)-\sum_{i=1}^nE(\varphi_{c_i})=\Vert v-R_{\vec{z}}\Vert_{H^1}^2+4\sum_{i=1}^n\sqrt{c_i}\big(v(z_i)-\sqrt{c_i}\big)+O\left(e^{-L/4}\right). \end{align} \end{lem} \begin{proof} The proof follows from direct computations. In fact, recalling that $\varphi_{c_i}'(x)=-\operatorname{sgn}(x)\varphi_{c_i}(x)$ and integrating by parts we obtain \begin{align*} E\big(v-R_{\vec{z}}\big)&=E(v)+E(R_{\vec{z}})-2\sum_{i=1}^n\int v(x)\varphi_{c_i}(x-z_i)+v_x(x)\varphi'_{c_i}(x-z_i)dx \\&=E(v)+E(R_{\vec{z}})-2\sum_{i=1}^n\int v(x)\varphi_{c_i}(x-z_i)dx \\ & \quad -2\sum_{i=1}^n\int_{-\infty}^{z_i} v_x(x)\varphi_{c_i}(x-z_i)dx+2\sum_{i=1}^n\int_{z_i}^{+\infty}v_x(x)\varphi_{c_i}(x-z_i)dx \\ & = E(v)+E(R_{\vec{z}})-4\sum_{i=1}^n\sqrt{c_i}v(z_i). \end{align*} On the other hand, notice that since $z_i-z_{i-1}\geq \tfrac{1}{2}L$ we have \[ E(R_{\vec{z}})=\sum_{i=1}^n E(\varphi_{c_i})+O\left(e^{-L/4}\right)=2\sum_{i=1}^nc_i+O\left(e^{-L/4}\right). \] Gathering the last two formulas we obtain the desired result. Notice that the implicit constant involved in $O\left(e^{-L/4}\right)$ only depends on $c_1,...,c_n$. The proof is complete. \end{proof} \textbf{Important:} From now on we fix $K=\tfrac{1}{8}L^{1/2}$. \medskip The following lemma is the localized version of Lemma \ref{lemma_three_two} and shall be crucial in the sequel. \begin{lem}\label{lemm_four_four} Let $u(t,x)$ be the solution of the Novikov equation \eqref{nov_eq_2} associated to $u_0\in H^1(\mathbb{R})\cap W^{1,4}(\mathbb{R})$, satisfying the hypotheses of Lemma \ref{mod_lemma} on $[0,t^*]$ with $\alpha$ given by \eqref{neigh_assumption}.
Then, for all $t\in[0,t^*]$ the following inequality holds: \begin{align}\label{F_E_exp} F_i(u)\leq \dfrac{4}{3}M_i^2E_i(u)-\dfrac{4}{3}M_i^4+\Vert u_0\Vert_{H^1}^4O\left(L^{-1/2}\right), \end{align} where $M_i$ denotes the local maximum $M_i:=\max\{u(t,x):\ x\in\mathcal{J}_i(t)\}$. \end{lem} \begin{proof} First of all let us introduce some notation. For each $i=1,...,n$ we define $g_i$ and $h_i$ to be the functions given by \[ g_i(t,x):=\begin{cases} u-u_x, & x<x_i(t), \\ u+u_x, & x>x_i(t), \end{cases} \qquad h_i(t,x):=\begin{cases} u^2-\tfrac{2}{3}uu_x-\tfrac{1}{3}u_x^2, & x<x_i(t), \\ u^2+\tfrac{2}{3}uu_x-\tfrac{1}{3}u_x^2, & x>x_i(t). \end{cases} \] Then, on the one hand, by direct computations we have \begin{align*} \int h_i(t,x)g_i^2(t,x)\Phi_i(t,x)dx&=\int_{-\infty}^{x_i}\left(u^2-\tfrac{2}{3}uu_x-\tfrac{1}{3}u_x^2\right)(u-u_x)^2\Phi_i(t,x) \\ & \quad +\int_{x_i}^{+\infty}\left(u^2+\tfrac{2}{3}uu_x-\tfrac{1}{3}u_x^2\right)(u+u_x)^2\Phi_i(t,x)=:\mathrm{I}+\mathrm{II}. \end{align*} Now, by integration by parts we obtain \begin{align*} \mathrm{I}&=\int_{-\infty}^{x_i} \big(u^4+2u^2u_x^2-\tfrac{1}{3}u_x^4-\tfrac{8}{3}u^3u_x\big)\Phi_i(t,x)dx \\ & =\int_{-\infty}^{x_i} \big(u^4+2u^2u_x^2-\tfrac{1}{3}u_x^4\big)\Phi_i(t,x)dx+\dfrac{2}{3}\int_{-\infty}^{x_i}u^4\Phi_i'(t,x)dx-\dfrac{2}{3}M_i^4\Phi_i(t,x_i). \end{align*} Similarly, by integration by parts again we get \begin{align*} \mathrm{II}=\int_{x_i}^{+\infty} \big(u^4+2u^2u_x^2-\tfrac{1}{3}u_x^4\big)\Phi_i(t,x)dx-\dfrac{2}{3}\int_{x_i}^{+\infty}u^4\Phi_i'(t,x)dx-\dfrac{2}{3}M_i^4\Phi_i(t,x_i). \end{align*} Plugging the last two formulas together we deduce \begin{align}\label{formula_h_g} \int h_i(t,x)g_i^2(t,x)\Phi_i(t,x)dx=F_i(u)-\dfrac{4}{3}M_i^4\Phi_i(x_i)+\dfrac{2}{3}\int_{-\infty}^{x_i}u^4\Phi_i'dx-\dfrac{2}{3}\int_{x_i}^{+\infty}u^4\Phi_i'dx.
\end{align} On the other hand, notice that by using $a^2+b^2\geq 2ab$ we have \[ u(t,x)^2\pm \dfrac{2}{3}u(t,x)u_x(t,x)-\dfrac{1}{3}u_x^2(t,x)\leq \dfrac{4}{3}u^2(t,x). \] Thus, by using the latter inequality we deduce $h_i(t,x)\leq \tfrac{4}{3}u^2(t,x)$ and hence, by using \eqref{neigh_assumption}, we get \begin{align*} \int h_ig_i^2\Phi_i&\leq \dfrac{4}{3}\int u^2g_i^2\Phi_i \leq \dfrac{4}{3}M_i^2\int g_i^2\Phi_i+O\left(e^{-L^{1/2}}\right) \\ & =\dfrac{4}{3}M_i^2E_i(u)-\dfrac{8}{3}M_i^4\Phi_i(x_i)+\dfrac{4}{3}M_i^2\int_{-\infty}^{x_i}u^2\Phi_i'-\dfrac{4}{3}M_i^2\int_{x_i}^{+\infty}u^2\Phi_i'+O\left(e^{-L^{1/2}}\right). \end{align*} Now it is important to notice that, since $\vert x_i-\widetilde{x}_i\vert\leq\tfrac{1}{12}L$, we deduce that $\Phi_i(x_i)=1+O\big(e^{-L^{1/2}}\big)$. Therefore, gathering the latter inequality with \eqref{formula_h_g} we obtain \begin{align}\label{final_est_F_i_lemma} F_i(u)&\leq \dfrac{4}{3}M_i^2E_i(u)-\dfrac{4}{3}M_i^4+\dfrac{4}{3}M_i^2\int_{-\infty}^{x_i}u^2\Phi_i'-\dfrac{4}{3}M_i^2\int_{x_i}^{+\infty}u^2\Phi_i'\nonumber \\ & \quad -\dfrac{2}{3}\int _{-\infty}^{x_i}u^4\Phi_i'+\dfrac{2}{3}\int_{x_i}^{+\infty}u^4\Phi_i'+O\big(e^{-L^{1/2}}\big). \end{align} Finally, we recall that since $K=\tfrac{1}{8}L^{1/2}$ we have $\vert \Phi_i'\vert\leq \tfrac{C}{K}=O(L^{-1/2})$. Therefore, we conclude the proof of \eqref{F_E_exp} by plugging the latter estimate for $\Phi_i'$ on $\mathbb{R}$ into \eqref{final_est_F_i_lemma}. \end{proof} As a consequence of the previous lemmas we obtain the following result. \begin{lem} Under the hypotheses of Lemma \ref{mod_lemma} and considering the functions $x_1(t),...,x_n(t)$ constructed in that lemma, the following holds: For all $t\in[0,t^*]$ we have \begin{align}\label{lemma_tubular} \left\Vert u(t)-\sum_{i=1}^n\varphi_{c_i}(\cdot-x_i(t))\right\Vert_{H^1}\leq O(\sqrt{\alpha})+O\left(e^{-L/8}\right).
\end{align} \end{lem} \begin{proof} In fact, recalling that by hypothesis we have $u(t)\in\mathcal{U}(\alpha,\tfrac{1}{2}L)$ for all $t\in[0,t^*]$, on account of Lemma \ref{mod_lemma} there exist $\widetilde{x}_1(t),...,\widetilde{x}_n(t)$ such that $\widetilde{x}_i(t)\in\mathcal{J}_i(t)$ and \[ \left\Vert u(t)-\sum_{i=1}^n\varphi_{c_i}(\cdot-\widetilde{x}_i(t))\right\Vert_{H^1}=O(\sqrt{\alpha}). \] Finally, recalling that $u\big(t,x_i(t)\big)=\max_{\mathcal{J}_i(t)}u(t,\cdot)$, by applying Lemma \ref{lemm_four_three} we conclude \begin{align*} \left\Vert u(t)-\sum_{i=1}^n\varphi_{c_i}(\cdot-x_i(t))\right\Vert_{H^1}^2&=\left\Vert u(t)-\sum_{i=1}^n\varphi_{c_i}(\cdot-\widetilde{x}_i(t))\right\Vert_{H^1}^2 \\ & \quad -4\sum_{i=1}^n\sqrt{c_i}\big(u(t,x_i(t))-u(t,\widetilde{x}_i(t))\big)+O\left(e^{-L/4}\right) \\ & \leq O(\alpha)+O\left(e^{-L/4}\right), \end{align*} which leads us to the desired result. \end{proof} Finally, the next lemma gives us a more accurate estimate of the last non-negligible term in formula \eqref{formula_train_energy} once we choose $\{z_i\}_{i=1}^n$ to be the natural candidate to study the orbital stability of a train of peakons, that is, the family of local maxima $\{x_i\}_{i=1}^n$ given by Lemma \ref{mod_lemma}. \begin{lem}\label{lem_four_six} Let $u(t,x)$ be the solution of the Novikov equation \eqref{nov_eq_2} associated to $u_0\in H^1(\mathbb{R})\cap W^{1,4}(\mathbb{R})$, satisfying the hypotheses of Lemma \ref{mod_lemma} on $[0,t^*]$ with $\alpha$ given by \eqref{neigh_assumption}. Then, for all $t\in[0,t^*]$ we have \begin{align} \sum_{i=1}^n\sqrt{c_i}\left\vert M_i-\sqrt{c_i}\right\vert\leq O(\varepsilon^2)+O\left(L^{-1/4}\right). \end{align} \end{lem} \begin{proof} In fact, first of all we recall that for every $i=1,...,n$ the associated peakon profile satisfies \[ E(\varphi_{c_i})=2c_i \quad \hbox{and}\quad F(\varphi_{c_i})=\tfrac{4}{3}c_i^2.
\] Hence, due to the fact that $M_i$ is positive and by using Lemma \ref{lemm_four_four}, we have \begin{align*} c_i\big(M_i-\sqrt{c_i}\big)^2&\leq \big(M_i+\sqrt{c_i}\big)^2\big(M_i-\sqrt{c_i}\big)^2=M_i^4-M_i^2E(\varphi_{c_i})+\dfrac{3}{4}F(\varphi_{c_i}) \\ & \leq \Big(M_i^2E_i(u(t))-M_i^2E(\varphi_{c_i})\Big)-\dfrac{3}{4}\Big(F_i(u(t))-F(\varphi_{c_i})\Big)+O\big(L^{-1/2}\big). \end{align*} Therefore, adding up all the inequalities for $i=1,...,n$ and rearranging terms we obtain \begin{align*} \sum_{i=1}^nc_i(M_i-\sqrt{c_i})^2&\leq \sum_{i=1}^nM_i^2\big(E_i(u(t))-E_i(u_0)\big)-\sum_{i=1}^nM_i^2\big(E(\varphi_{c_i})-E_i(u_0)\big) \\ & \quad -\dfrac{3}{4}\sum_{i=1}^n\big(F_i(u(t))-F(\varphi_{c_i})\big)+O\big(L^{-1/2}\big)=:\mathrm{I}+\mathrm{II}+\mathrm{III}+O(L^{-1/2}). \end{align*} For the sake of readability we now split the proof into three steps, one devoted to bounding each sum. \medskip \textbf{Step 1:} In this first step we bound $\mathrm{I}$. First of all notice that by \eqref{lemma_tubular} and the continuous embedding $H^1(\mathbb{R})\hookrightarrow L^\infty(\mathbb{R})$ we immediately conclude, for all $t\in[0,t^*]$, \[ M_i(t)=\sqrt{c_i}+O(\sqrt{\alpha})+O\left(e^{-L/8}\right) \ \,\hbox{ and hence } \ \, 0<M_1<...<M_n. \] On the other hand, by using Abel's transformation, the almost monotonicity Lemma \ref{AM_orb_train} and the above estimate, we conclude \begin{align*} \mathrm{I}&=M_n^2(t)\sum_{i=1}^n\big(E_i(u(t))-E_i(u_0)\big)-\sum_{j=1}^{n-1}(M_{j+1}^2(t)-M_j^2(t))\sum_{i=1}^j\big(E_i(u(t))-E_i(u_0)\big) \\ & = \sum_{i=1}^{n-1}\big(M_{i+1}^2(t)-M_i^2(t)\big)\big(\mathcal{I}_{i+1,K}(t)-\mathcal{I}_{i+1,K}(0)\big)\leq O\left(e^{-\sqrt{L}}\right), \end{align*} which gives us an admissible estimate. \medskip \textbf{Step 2:} Now we intend to bound $\mathrm{II}$.
In fact, by using the exponential decay of each $\varphi_{c_i}$ and each $\Phi_i$, due to hypothesis \eqref{initial_cond_hyp_train} and by using the reverse triangle inequality, we obtain \begin{align*} \sum_{i=1}^n \big(E_i(u_0)-E(\varphi_{c_i})\big)&\leq \sum_{i=1}^n \Big\vert\Vert u_0\Vert_{H^1(\mathcal{J}_i(0))}^2-\Vert\varphi_{c_i}(\cdot-x_i(0))\Vert_{H^1(\mathcal{J}_i(0))}^2\Big\vert+O\left(e^{-\sqrt{L}}\right) \\ & \leq \sum_{i=1}^n \Big(\Vert u_0\Vert_{H^1(\mathcal{J}_i(0))}+\Vert\varphi_{c_i}(\cdot-x_i(0))\Vert_{H^1(\mathcal{J}_i(0))}\Big)\cdot \\ & \qquad \cdot \Vert u_0-\varphi_{c_i}(\cdot-x_i(0))\Vert_{H^1(\mathcal{J}_i(0))}+O\left(e^{-\sqrt{L}}\right) \\ &\leq O(\varepsilon^4)+O\left(e^{-\sqrt{L}}\right). \end{align*} Therefore, recalling that $M_i\leq \Vert u_0\Vert_{L^\infty}\leq \Vert u_0\Vert_{H^1}$, we conclude \begin{align*} \mathrm{II}=-\sum_{i=1}^nM_i^2\big(E(\varphi_{c_i})-E_i(u_0)\big)\leq O(\varepsilon^4)+O\left(e^{-\sqrt{L}}\right). \end{align*} \textbf{Step 3:} Finally, to estimate the last term we start by rearranging terms. In fact, notice that each term in $\mathrm{III}$ can be rewritten as \begin{align*} \big\vert F_i(u_0)-F(\varphi_{c_i})\big\vert&=\Big\vert\int \left(u_0^4+2u_0^2u_{0,x}^2-\dfrac{1}{3}u_{0,x}^4\right)(x)\Phi_i(0,x)dx \\ & \qquad -\int \left(\varphi_{c_i}^4+2\varphi_{c_i}^2\varphi_{c_i}'^2-\dfrac{1}{3}\varphi_{c_i}'^4\right)(\cdot-x_i(0))dx\Big\vert \\ & \leq \left\vert\int (u_0^2+u_{0,x}^2)^2\Phi_i(0,x)-(\varphi_{c_i}^2+\varphi_{c_i}'^2)^2(\cdot-x_i(0))dx\right\vert \\ & \qquad + \dfrac{4}{3}\left\vert\int \big( u_{0,x}^4\Phi_i(0,x)-\varphi_{c_i}'^4(\cdot-x_i(0))\big)dx\right\vert =: \mathrm{III}_1+\mathrm{III}_2. \end{align*} For the sake of readability, from now on we shall write $\varphi_{c_i}$ for $\varphi_{c_i}(\cdot-x_i(0))$.
That being said, notice that by using the exponential decay of both $\varphi_{c_i}$ and $\Phi_i$, H\"older's and the triangle inequalities together with the Sobolev embedding $H^1\hookrightarrow L^4$, we obtain \begin{align*} \mathrm{III}_1&\leq\left\vert\int_{\mathcal{J}_i(0)}\left(u_0^2+u_{0,x}^2+\varphi_{c_i}^2+\varphi_{c_i}'^2\right)\left(u_0^2+u_{0,x}^2-\varphi_{c_i}^2-\varphi_{c_i}'^2\right)dx\right\vert+O\left(e^{-\sqrt{L}}\right) \\ & \leq 2\Vert u_0^2+u_{0,x}^2+\varphi_{c_i}^2+\varphi_{c_i}'^2\Vert_{L^2\left(\mathcal{J}_i(0)\right)} \Big(\Vert u_0+\varphi_{c_i}\Vert_{L^4\left(\mathcal{J}_i(0)\right)}\Vert u_0-\varphi_{c_i}\Vert_{L^4\left(\mathcal{J}_i(0)\right)}+ \\ & \quad +\Vert u_{0,x}+\varphi_{c_i}'\Vert_{L^4\left(\mathcal{J}_i(0)\right)} \Vert u_{0,x}-\varphi_{c_i}'\Vert_{L^4\left(\mathcal{J}_i(0)\right)} \Big)+O\left(e^{-\sqrt{L}}\right) \\ & \leq O\big(\Vert u_0-\varphi_{c_i}\Vert_{H^1\left(\mathcal{J}_i(0)\right)}+\Vert u_{0,x}-\varphi_{c_i}'\Vert_{L^4\left(\mathcal{J}_i(0)\right)}\big)+O\left(e^{-\sqrt{L}}\right) \\ & \leq O\big(\varepsilon^4\big)+O\left(e^{-\sqrt{L}}\right). \end{align*} Similarly, due to the exponential decay of $\varphi_{c_i}$ and $\Phi_i$, by using H\"older's and the triangle inequalities we get \begin{align*} \mathrm{III}_2&\leq\dfrac{4}{3}\left\vert\int_{\mathcal{J}_i(0)} \big(u_{0,x}^4-\varphi_{c_i}'^4\big)dx\right\vert+O\left(e^{-\sqrt{L}}\right) \\ & =\dfrac{4}{3}\left\vert\int_{\mathcal{J}_i(0)}\big(u_{0,x}^2+\varphi_{c_i}'^2\big)\big(u_{0,x}+\varphi_{c_i}'\big)\big(u_{0,x}-\varphi_{c_i}'\big)dx\right\vert+O\left(e^{-\sqrt{L}}\right) \\ & \leq 2\Vert u_{0,x}+\varphi_{c_i}'\Vert_{L^4\left(\mathcal{J}_i(0)\right)}^3\Vert u_{0,x}-\varphi_{c_i}'\Vert_{L^4\left(\mathcal{J}_i(0)\right)}+O\left(e^{-\sqrt{L}}\right) \\ & \leq O(\varepsilon^4)+O\left(e^{-\sqrt{L}}\right). \end{align*} Adding up all the previous estimates we conclude the proof of the lemma.
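As an aside, the generalized H\"older step with exponents $(2,4,4)$ used for $\mathrm{III}_1$ and $\mathrm{III}_2$ can be sanity-checked on discrete data; the sketch below is purely illustrative (the sample vectors are arbitrary) and plays no role in the proof.

```python
import random

random.seed(0)

# arbitrary discrete samples standing in for u_{0,x} and phi_{c_i}' on a grid
f = [random.uniform(-1.0, 1.0) for _ in range(400)]
g = [random.uniform(-1.0, 1.0) for _ in range(400)]
dx = 0.01  # grid spacing of the (illustrative) quadrature

def L2(v):
    return (sum(t * t for t in v) * dx) ** 0.5

def L4(v):
    return (sum(t ** 4 for t in v) * dx) ** 0.25

# |int (f^4 - g^4)| = |int (f^2 + g^2)(f + g)(f - g)|
#                  <= ||f^2 + g^2||_{L^2} ||f + g||_{L^4} ||f - g||_{L^4},
# which is Hoelder's inequality with exponents 2, 4, 4 (1/2 + 1/4 + 1/4 = 1)
lhs = abs(sum(a ** 4 - b ** 4 for a, b in zip(f, g)) * dx)
rhs = (L2([a * a + b * b for a, b in zip(f, g)])
       * L4([a + b for a, b in zip(f, g)])
       * L4([a - b for a, b in zip(f, g)]))
assert lhs <= rhs
```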
\end{proof} \subsection{Proof of Theorem \ref{MT2}} First of all, notice that by using \eqref{initial_cond_hyp_train} and the reverse triangle inequality we deduce \begin{align*} \left\vert E(u_0)-E\left(R_{\vec{z}_0}\right)\right\vert&=\big(\Vert u_0\Vert_{H^1}+\Vert R_{\vec{z}_0}\Vert _{H^1}\big)\Big\vert \Vert u_0\Vert_{H^1}-\Vert R_{\vec{z}_0}\Vert_{H^1}\Big\vert \\ & \leq \big(\Vert u_0\Vert_{H^1}+\Vert R_{\vec{z}_0}\Vert _{H^1}\big) \Vert u_0- R_{\vec{z}_0}\Vert_{H^1} =O(\varepsilon^4). \end{align*} On the other hand, by using Lemma \ref{lemm_four_three} together with Lemma \ref{lem_four_six} as well as the previous estimate, recalling that $E\big(R_{\vec{z}_0}\big)=\sum_{i=1}^n E(\varphi_{c_i})+O\big(e^{-L/4}\big)$, we obtain \begin{align*} \left\Vert u(t^*)-\sum_{i=1}^n\varphi_{c_i}\big(\cdot-x_i(t^*)\big)\right\Vert_{H^1}^2&=E(u_0)-\sum_{i=1}^n E(\varphi_{c_i}) \\ & \quad -4\sum_{i=1}^n\sqrt{c_i}\big(M_i(t^*)-\sqrt{c_i}\big)+O\left(e^{-\sqrt{L}}\right) \\ & = O\left(\varepsilon^4\right)+O\left(\varepsilon^2\right)+O\left(L^{-1/4}\right)=O\left(\varepsilon^2\right)+O\left(L^{-1/4}\right). \end{align*} In other words, there exists $\widetilde{C}>0$ such that \[ \left\Vert u(t^*)-\sum_{i=1}^n\varphi_{c_i}\big(\cdot-x_i(t^*)\big)\right\Vert_{H^1}^2\leq \widetilde{C}\left(\varepsilon^2+L^{-1/4}\right). \] Therefore, by taking $C$, the constant appearing in \eqref{neigh_assumption}, so that $C^2=4\widetilde{C}$, we conclude the proof of the theorem.
\qed \subsection{Proof of Corollary \ref{cor_MT2}}\label{sub_cor_multipeakon} As we mentioned in the introduction, the Novikov equation \eqref{nov_eq_2} possesses multi-peakon-antipeakon solutions given by \[ u(t,x)=\sum_{i=1}^np_i(t)e^{-\vert x-q_i(t)\vert}, \quad n\in\mathbb{N}, \] where the pairs $(p_i,q_i)\in\mathbb{R}^2$ satisfy the Hamiltonian system of ODEs: \begin{align*} \begin{cases} \dfrac{dq_i}{dt}=u^2(q_i)=\displaystyle\sum_{j,k=1}^np_jp_ke^{-\vert q_i-q_j\vert-\vert q_i-q_k\vert}, \\ \displaystyle\dfrac{dp_i}{dt}=-p_iu(q_i)u_x(q_i)=p_i\sum_{j,k=1}^np_jp_k\operatorname{sgn}(q_i-q_j)e^{-\vert q_i-q_j\vert-\vert q_i-q_k\vert}.\end{cases} \end{align*} It is easy to check that the local solutions of this differential system can be uniquely extended as long as the $q_i$'s remain different from each other. In fact, if for some time $t^*$ and some $i\neq j$ we have $q_i(t^*)=q_j(t^*)$, then uniqueness fails, and this breakdown leads to the usually subtle question of how to continue solutions beyond the collision. In the Camassa-Holm case this latter question is rather well-understood (see for instance \cite{BrCo,BrCo2,HoRa,HoRa2}). However, in Theorem $4.5$ of \cite{HoLuSz}, Hone et al. proved that if at the initial time all the $p_i$'s are positive, i.e. there are only peakons, then all the $q_i$'s stay different from each other for all times. Of course, the analogous statement for the case where there are only antipeakons also holds; however, this is no longer true if we allow the existence of peakons and antipeakons simultaneously. More precisely, Hone et al. proved that if at the initial time \begin{align}\label{initial_hyp_cor_p_i} p_1^0,...,p_n^0>0 \quad \hbox{and}\quad q_1^0<...<q_n^0, \end{align} then \eqref{initial_hyp_cor_p_i} holds for all times $t\in\mathbb{R}$. In particular, under these hypotheses the different peakons never overlap each other.
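The preservation of ordering and positivity, as well as the conservation of $E$ along the multipeakon flow, can be observed numerically on the system above. The sketch below is purely illustrative (the initial data are chosen arbitrarily); it evaluates $E$ in closed form via the standard identity $\langle e^{-\vert\cdot-a\vert},e^{-\vert\cdot-b\vert}\rangle_{H^1}=2e^{-\vert a-b\vert}$.

```python
import math

def sgn(x):
    return (x > 0) - (x < 0)

def rhs(p, q):
    # the Hamiltonian system above:
    #   dq_i/dt = u(q_i)^2,
    #   dp_i/dt = p_i * sum_{j,k} p_j p_k sgn(q_i - q_j) e^{-|q_i-q_j|-|q_i-q_k|}
    n = len(p)
    u = [sum(p[j] * math.exp(-abs(q[i] - q[j])) for j in range(n)) for i in range(n)]
    dq = [u[i] ** 2 for i in range(n)]
    dp = [p[i] * sum(p[j] * p[k]
                     * sgn(q[i] - q[j])
                     * math.exp(-abs(q[i] - q[j]) - abs(q[i] - q[k]))
                     for j in range(n) for k in range(n))
          for i in range(n)]
    return dp, dq

def rk4_step(p, q, dt):
    # one classical Runge-Kutta step for the coupled (p, q) system
    k1p, k1q = rhs(p, q)
    k2p, k2q = rhs([a + dt / 2 * b for a, b in zip(p, k1p)],
                   [a + dt / 2 * b for a, b in zip(q, k1q)])
    k3p, k3q = rhs([a + dt / 2 * b for a, b in zip(p, k2p)],
                   [a + dt / 2 * b for a, b in zip(q, k2q)])
    k4p, k4q = rhs([a + dt * b for a, b in zip(p, k3p)],
                   [a + dt * b for a, b in zip(q, k3q)])
    p = [a + dt / 6 * (b + 2 * c + 2 * d + e) for a, b, c, d, e in zip(p, k1p, k2p, k3p, k4p)]
    q = [a + dt / 6 * (b + 2 * c + 2 * d + e) for a, b, c, d, e in zip(q, k1q, k2q, k3q, k4q)]
    return p, q

def energy(p, q):
    # E(u) = ||u||_{H^1}^2 for u = sum_i p_i e^{-|x-q_i|}
    n = len(p)
    return 2.0 * sum(p[i] * p[j] * math.exp(-abs(q[i] - q[j]))
                     for i in range(n) for j in range(n))

# a taller/faster peakon placed behind a smaller/slower one (arbitrary data)
p, q = [1.2, 0.8], [-5.0, 5.0]
E0 = energy(p, q)
ordered = positive = True
for _ in range(20000):  # integrate up to t = 20 with dt = 1e-3
    p, q = rk4_step(p, q, 1e-3)
    ordered = ordered and q[0] < q[1]
    positive = positive and p[0] > 0.0 and p[1] > 0.0
assert ordered and positive       # ordering and positivity are preserved
assert abs(energy(p, q) - E0) < 1e-4  # E is conserved along the flow
```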
For example, if a taller peakon travels behind a smaller one then, due to their different speeds, they eventually get close enough for the taller one to transfer part of its energy to the smaller one; the smaller one then becomes the taller one and the two peakons remain well ordered. \medskip Regarding the asymptotics of $(p_i,q_i)(t)$, in \cite{HoLuSz} Hone et al. also proved that under these hypotheses the following equalities hold: \begin{align}\label{asymptotics_p_q_i} \lim_{t\to+\infty} p_i^2(t)=\lim_{t\to+\infty}\dot{q}_i(t)=\lambda_i^2 \quad \hbox{ and }\quad \lim_{t\to-\infty} p_i^2(t)=\lim_{t\to-\infty}\dot{q}_i(t)=\lambda_{n+1-i}^2, \end{align} where we recall that the values $\lambda_i$ correspond to the square roots of the eigenvalues of the matrix $TPEP$ defined in the statement of Corollary \ref{cor_MT2}. \medskip Now, let $\delta>0$ be small but fixed and let us consider $\big(p_i(0),q_i(0)\big)$ satisfying \eqref{initial_hyp_cor_p_i}. Hence, from the asymptotics of $p_i(t)$ and $q_i(t)$ we deduce the existence of a sufficiently large time $T\gg1$ such that for all $i=2,\ldots,n$ we have \[ q_i(T)-q_{i-1}(T)\geq L \quad \hbox{and}\quad q_{i-1}(-T)-q_i(-T)\geq L, \] with $L$ being any positive number satisfying $L>2\max\big\{L_0,\left(\tfrac{\delta}{\textbf{A}}\right)^{-8}\big\}$, where $\textbf{A}=2\max\{1,A\}$ and $A>0$ is the implicit constant involved in \eqref{orb_concl}. Thus, by using the second item in Theorem \ref{theorem_gwp} we deduce that there exists $\varepsilon>0$ small enough, only depending on $T$ and $\delta$, such that for any initial data $u_0\in Y_+(\mathbb{R})$ satisfying \eqref{hyp_cor_orb} the following holds: For all $t\in[-T,T]$ we have \begin{align}\label{bound_compact_time} \left\Vert u(t)-\sum_{i=1}^np_i(t)e^{-\vert x-q_i(t)\vert}\right\Vert_{H^1}\leq \left(\dfrac{\delta}{2\textbf{A}}\right)^4.
\end{align} On the other hand, due to the fact that the Novikov equation \eqref{nov_eq_2} is invariant under the transformation $(t,x)\mapsto(-t,-x)$, Theorem \ref{MT2} also holds when replacing $t$ by $-t$, $z_i$ by $-z_i$ and $x_i(t)$ by $-x_i(-t)$. This gives us the same stability result backwards in time for a train of peakons ordered in the reverse order to that in the statement of Theorem \ref{MT2}. Therefore, by gathering \eqref{initial_hyp_cor_p_i} with \eqref{bound_compact_time} together with Theorem \ref{MT2} we conclude \eqref{first_part_cor}. \medskip Now, to prove the second part of the corollary, it is enough to notice that by using \eqref{asymptotics_p_q_i}, and making $T$ bigger if necessary, without loss of generality we may also assume that \[ \vert p_i(T)-\lambda_i\vert \leq \dfrac{1}{100n}\left(\dfrac{\delta}{\textbf{A}}\right)^4 \quad \hbox{and}\quad \vert p_i(-T)-\lambda_{n+1-i}\vert \leq \dfrac{1}{100n}\left(\dfrac{\delta}{\textbf{A}}\right)^4. \] Thus, by using \eqref{bound_compact_time} we obtain that \[ \left\Vert u(T,\cdot)-\sum_{i=1}^n\lambda_ie^{-\vert x-q_i(T)\vert}\right\Vert_{H^1}+ \left\Vert u(-T,\cdot)-\sum_{i=1}^n\lambda_{n+1-i}e^{-\vert x-q_i(-T)\vert}\right\Vert_{H^1}\leq \left(\dfrac{\delta}{\textbf{A}}\right)^4. \] Hence, we conclude by using Theorem \ref{MT2}. The proof is complete. \qed \medskip \section{Asymptotic stability of a train of peakons}\label{sec_MT5} In this section we aim to prove Theorem \ref{MT5}. Notice that gathering this latter result with the asymptotics for multipeakons stated in subsection \ref{sub_cor_multipeakon} and Corollary \ref{cor_MT2}, we immediately obtain Corollary \ref{cor_MT5}. \medskip From now on we shall follow Molinet's ideas for the Camassa-Holm equation (see \cite{Mo}), which are based on the proof of asymptotic stability of the sum of $n$ solitons for the gKdV equation (see \cite{MaMeTs}).
\subsection{Almost monotonicity lemma}\label{sec_six_one} In the rest of this paper we shall need to explicitly study the behavior of the solution $u(t)$ on both the left and the right portions of space. We recall that the weight function $\Psi$ is given by \begin{align}\label{psi_def_2} \Psi(x):=\dfrac{2}{\pi}\arctan\left(\exp(\tfrac{x}{6})\right), \quad \hbox{so that}\quad \Psi(x)\to 1 \ \hbox{ as }\ x\to+\infty. \end{align} Now, let us fix (for the rest of this paper) some parameters: \begin{align}\label{parameters} \varepsilon^\star:=\left(\dfrac{\min\{1,\sigma\}}{2^{18}}\right)^8, \ \ L_0:=\left(2^{18}\max\{1,\sigma^{-1}\}\right)^{32}, \end{align} and $\sigma:=\mathbf{C}\min\{c_2-c_1,...,c_n-c_{n-1},\beta\}$, where $\mathbf{C}:=\min\{1,\widetilde{C}^{-1}\}$ and $\widetilde{C}>0$ is the implicit constant involved in \eqref{orb_concl}. \begin{lem}[Almost monotonicity of the energy]\label{tech_lem_mon_exp} Assume that we are under the hypotheses and notation of Theorem \ref{MT5} and Lemma \ref{mod_lemma}. Additionally, set $\delta_1,...,\delta_n\in(0,1)$ and define the family of differentiable functions $z_1,...,z_n:\mathbb{R}\to\mathbb{R}$ as follows: \[ \delta_1:=1-\tfrac{\beta}{4c_1}, \quad z_1(t):=\tfrac{\beta}{2}t \quad \hbox{and} \quad \delta_i:=\tfrac{5}{8}(c_i-c_{i-1}), \quad z_i(t):=(1-\delta_i)\widetilde{x}_i(t). \] Then, there exists $R_0>0$ sufficiently large such that for all $t\in\mathbb{R}$ it holds \begin{align}\label{condition_tail} \Vert u(t)\Vert_{L^\infty(x-\widetilde{x}_n(t)>R_0)}\leq \dfrac{(1-\delta_n)c_n}{\mathbf{b}},\ \hbox{ where } \ \mathbf{b}:=2^6\max\{1,\Vert u_0\Vert_{H^1}\}.
\end{align} Moreover, for any $R>R_0$ the following property holds: For each $i=1,...,n$ there exists $t_R^i>0$ only depending on $R$ such that for any $t_0^i\geq t_R^i$, defining the modified energy functionals \begin{align*} \mathrm{I}_{i,t_0^i}^{\pm R}:=\int\big(u^2+u_x^2\big)(t,x)\Psi\big(\cdot-z_{i}^{\pm R}(t)\big)dx \ \,\hbox{ where } \ \, z_{i}^{\pm R}(t):=\widetilde{x}_i(t_0^i)\pm R+z_i(t)-z_i(t_{0}^i), \end{align*} the following estimates hold: \begin{align}\label{AM_right_n_energy} \forall t\leq t_0^n, \ \, \mathrm{I}_{n,t_0^n}^R(t_0^n)-\mathrm{I}_{n,t_0^n}^R(t)\leq Ce^{-R/6}, \ \hbox{ and } \,\ \forall t\geq t_0^n, \, \ \mathrm{I}_{n,t_0^n}^{-R}(t)-\mathrm{I}_{n,t_0^n}^{-R}(t_0^n)\leq Ce^{-R/6}. \end{align} Moreover, for any $i=1,...,n-1$ we have \begin{align}\label{AM_right_i_energy} \mathrm{I}_{i,t_0^i}^{-R}(t)-\mathrm{I}_{i,t_0^i}^{-R}(t_0^i)\leq Ce^{-R/24}, \ \hbox{ for all } t\geq t_0^i. \end{align} \end{lem} \begin{proof} The proof is essentially contained in the proof of Lemma \ref{AM_orb_train}, which in turn is essentially contained in the proof of Lemma $3.2$ in \cite{Pa}. However, for the sake of completeness we give a sketch of the proof in the appendix. See Section \ref{sec_ap_tech_lem}. \end{proof} Now, before going further we shall need to introduce some additional notation. For $v\in Y$ and $R>0$ we define the functionals $\mathcal{J}_l^R$ and $\mathcal{J}_r^R$ given by \begin{align*} \mathcal{J}_r^R:=\left\langle v^2+v_x^2,\Psi(\cdot-R)\right\rangle \ \hbox{ and }\ \, \mathcal{J}_l^R:=\left\langle v^2+v_x^2,1-\Psi(\cdot+R)\right\rangle.
\end{align*} Now, under the notation of the previous lemma, notice that for this choice of parameters, from the definitions of $\mathcal{J}_r^R$ and $\mathrm{I}_{i}^R$ we immediately obtain that \[ \mathcal{J}_{n,r}^R(t):=\mathcal{J}_r^R\big(u(t,\cdot+\widetilde{x}_n(t))\big)\geq \mathrm{I}_{n}^R(t), \quad \forall t\leq t_0^n, \] where $\mathrm{I}_{n}^R(t)$ is the functional defined in Lemma \ref{tech_lem_mon_exp}. Moreover, notice that in particular we have $\mathcal{J}_{n,r}^R\big(t_0^n\big)=\mathrm{I}_{n}^R(t_0^n)$. Thus, by using \eqref{AM_right_n_energy} we obtain \begin{align}\label{ineq_J_r} \mathcal{J}_{r}^R\big(u(t_0^n,\cdot+\widetilde{x}_n(t_0^n))\big)\leq \mathcal{J}_{r}^R\big(u(t,\cdot+\widetilde{x}_n(t))\big)+C e^{-\frac{R}{6}}, \quad \forall t\leq t_0^n, \end{align} where $C>0$ is the constant appearing in \eqref{AM_right_n_energy}. On the other hand, for the sake of notation we also introduce the functional $\widetilde{\mathrm{I}}_{i}^R(t)$ given by \[ \widetilde{\mathrm{I}}_{i}^R(t):=\left\langle u^2+u_x^2,1-\Psi\big(\cdot-\delta_i\widetilde{x}_i(t_0^i)+R-(1-\delta_i)\widetilde{x}_i(t)\big)\right\rangle=E(u)-\mathrm{I}_{i}^{-R}(t), \] where the parameter $\delta_i>0$ is defined in Lemma \ref{tech_lem_mon_exp}. Notice that due to the energy conservation together with inequality \eqref{AM_right_n_energy} we deduce \begin{align}\label{a_m_left_energy} \widetilde{\mathrm{I}}_{i}^R(t)\geq \widetilde{\mathrm{I}}_{i}^R(t_0^i)-Ce^{-R/6}. \end{align} Therefore, from the energy conservation and the previous inequality we deduce that for all $i=1,...,n$ and all $t\geq t_0^i$ we have \begin{align}\label{ineq_J_l} \mathcal{J}_l^R\big(u(t,\cdot+\widetilde{x}_i(t))\big)\geq \mathcal{J}_l^R\left(u\left(t_0^i,\cdot+\widetilde{x}_i(t_0^i)\right)\right)-Ce^{-\frac{R}{6}}. \end{align} The case of $\mathcal{J}_{i,r}^R$ is more subtle and its proof is the aim of the following lemma.
\begin{lem}\label{tech_lem_left} Assume we are under the hypotheses and notation of Lemma \ref{tech_lem_mon_exp}. For $i=1,...,n-1$ define the following modified energy functionals \begin{align*} \mathcal{J}_{i,r}^R(t):=\int \big(u^2+u_x^2\big)(t,x)\Psi\big(\cdot-\widetilde{x}_i(t)-R\big)dx. \end{align*} Then, for any $R>0$ and all pairs $(t,t_0)$ satisfying $t_R^{i+1}\leq t\leq t_0$ the following inequality holds: \begin{align}\label{ineq_i_J_r} \mathcal{J}_{i,r}^R(t_0)\leq \mathcal{J}_{i,r}^R(t)+Ce^{-R/24}, \end{align} where $\{t_R^i\}_{i=1}^n$ are defined in the proof of Lemma \ref{tech_lem_mon_exp} (see \eqref{t_i_R_def}). \end{lem} \begin{proof} See the appendix, Section \ref{ap_tech_left}. \end{proof} \subsection{End of the proof of Theorem \ref{MT5}} The following lemma is the analogue, for a train of peakons, of the convergence result for a single peakon. \begin{lem}\label{convergence_result} For every $i=1,...,n$ the following strong convergence holds: \begin{align}\label{local_strong_conv} u\big(t,\cdot+\widetilde{x}_i(t)\big)-\rho_i(t)\varphi\to 0 \ \hbox{ in } \ H^1_{loc}(\mathbb{R}) \ \hbox{ as } \ t\to+\infty, \end{align} where $\rho_i(t):=u\big(t,x_i(t)\big)$, i.e. the maximum of $u(t)$ over the set $\mathcal{J}_i(t)$ defined in Lemma \ref{mod_lemma}. Moreover, in the case $i=n$ the following holds: For every $A>0$ we have \begin{align}\label{conv_two} u\big(t,\cdot+\widetilde{x}_n(t)\big)-\rho_n(t)\varphi\to0 \ \hbox{ in } \ H^1\big((-A,+\infty)\big) \ \hbox{ as } \ t\to+\infty. \end{align} \end{lem} \begin{proof} This is a consequence of Proposition $4.2$ in \cite{Pa} and the second inequality in \eqref{mod_parameter_bound}. In fact, notice that the proof of that proposition only requires Lemmas \ref{tech_lem_mon_exp} and \ref{tech_lem_left} of the present work to hold. Since the proof follows exactly the same lines as the ones in Proposition $4.2$ in \cite{Pa}, we only sketch it for the case $i=n$.
In fact, following exactly the same lines it is possible to show that for any increasing sequence $t_n\to+\infty$ there exists a subsequence $\{t_{n_k}\}$ and a function $u_0^\star\in Y_+$ such that as $k\to+\infty$ we have \begin{align}\label{weak_strong_conv} u\big(t_{n_k},\cdot+\widetilde{x}_n(t_{n_k})\big)\rightharpoonup u_0^\star \ \hbox{ in } \ H^1(\mathbb{R}) \quad \hbox{and}\quad u\big(t_{n_k},\cdot+\widetilde{x}_n(t_{n_k})\big)\to u_0^\star \ \hbox{ in } \ H^{1^-}_{loc}(\mathbb{R}). \end{align} Then, by using the almost monotonicity inequalities \eqref{ineq_J_r} and \eqref{ineq_J_l} we can prove that the weak solution to equation \eqref{nov_eq_2} associated with $u_0^\star$ is actually an $H^1$-almost localized solution, and hence it is a peakon (cf. Theorem $1.3$ in \cite{Pa}). Therefore, there exist $x_0\in\mathbb{R}$ and $c_n^*>0$ such that \[ u_0^\star=\varphi_{c_n^*}(\cdot-x_0). \] On the other hand, notice that due to the local strong $L^2$ convergence we deduce that for every compact set $K\subset \mathbb{R} $ we have \[ \lim_{k\to+\infty}\Vert u(t_{n_k},\cdot+\widetilde{x}_n(t_{n_k}))-\varphi_{c_n^*}\Vert_{L^2(K)}=0.
\] Moreover, due to the fact that $\vert v_x\vert\leq v$ for any $v\in Y_+$ we deduce \[ \liminf_{k\to+\infty}\Vert u_x(t_{n_k},\cdot+\widetilde{x}_n(t_{n_k}))\Vert_{L^2(K)}\leq \lim_{k\to+\infty}\Vert u(t_{n_k},\cdot+\widetilde{x}_n(t_{n_k}))\Vert_{L^2(K)}=\Vert \varphi_{c_n^*}\Vert_{L^2(K)}. \] Hence, by using that $\Vert \varphi'\Vert_{L^2(K)}=\Vert \varphi\Vert_{L^2(K)}$ we obtain \[ \liminf_{k\to+\infty}\Vert u(t_{n_k},\cdot+\widetilde{x}_n(t_{n_k}))\Vert_{H^1(K)}^2\leq2\Vert \varphi_{c_n^*}\Vert_{L^2(K)}^2=\Vert \varphi_{c_n^*}\Vert_{H^1(K)}^2. \] Thus, by a standard result in Functional Analysis we know that the weak convergence result together with the previous inequality implies that \begin{align}\label{strong_h1} u(t_{n_k},\cdot+\widetilde{x}_n(t_{n_k}))-\varphi_{c_n^*}\to0 \ \hbox{ in }\ H^1_{loc} \ \hbox{ as } \ k\to+\infty. \end{align} Finally, let us prove the strong $H^1$ convergence in $(-A,+\infty)$ for any fixed $A>0$. In fact, first of all, notice that the weak convergence result \eqref{weak_strong_conv} together with the uniform estimate \eqref{mod_bound} and the definition of $\varepsilon^\star$ implies that \[ \Vert \varphi_{c_n^\star}(\cdot-x_0)-\varphi_{c_n}\Vert_{H^1}\leq C\varepsilon^\star \quad \hbox{and} \quad \vert c_n-c_n^*\vert\ll \sigma, \] and hence, by using the local strong convergence \eqref{strong_h1} we infer that $\vert x_0\vert \ll 1$. On the other hand, notice that the weak convergence result \eqref{weak_strong_conv} forces $u_0^\star$ to satisfy the $n$-th orthogonality condition \eqref{mod_orthogonality}. Therefore, by using \eqref{orth_cond_def} we obtain that $x_0$ has to be equal to zero. Finally, notice that the convergence result \eqref{strong_h1} together with \eqref{mod_bound} implies that \[ \sqrt{c_n^*}=\lim_{k\to+\infty}\max_{\mathcal{J}_n(t_{n_k})}u(t_{n_k}).
\] Thus, defining $\rho_n(t):=\max\{ u(t,x): \ x\in\mathcal{J}_n(t)\} $ we deduce that as $k\to+\infty$ we have \[ u(t_{n_k},\cdot+\widetilde{x}_n(t_{n_k}))-\rho_n(t_{n_k})\varphi\rightharpoonup0 \ \hbox{ in }\ H^1. \] Since this is the only possible limit we conclude that as $t\to+\infty$ we have \begin{align}\label{weak_conv_H1_peakon} u(t,\cdot+\widetilde{x}_n(t))-\rho_n(t)\varphi\rightharpoonup 0 \ \hbox{ in } \ H^1 \ \ \hbox{ and } \ \ u(t,\cdot+\widetilde{x}_n(t))-\rho_n(t)\varphi\to 0 \ \hbox{ in } \ H^{1}_{loc}. \end{align} Now, we claim that the latter convergence result implies that for any fixed $A>0$, as $t\to+\infty$, the following convergence holds: \begin{align}\label{local_strong_h1} u(t,\cdot+\widetilde{x}_n(t))-\rho_n(t)\varphi\to0 \ \hbox{ in } \ H^1((-A,+\infty)). \end{align} In fact, let $\delta>0$ be fixed and consider $R\gg1$ sufficiently large such that \[ \mathcal{J}_r^R\big(u(0,\cdot+\widetilde{x}_n(0))\big)<\delta \quad \hbox{ and } \quad Ce^{-R/6}<\delta, \] where $C>0$ is the constant involved in \eqref{ineq_J_r}. Then, from the almost monotonicity of the energy at the right \eqref{ineq_J_r} we infer that \[ \mathcal{J}_r^R\big(u(t,\cdot+\widetilde{x}_n(t))\big)<2\delta, \ \hbox{ for all } \, t\in\mathbb{R}. \] Then, the latter inequality together with the local strong convergence in $H^1_{loc}$ given in \eqref{weak_conv_H1_peakon} immediately implies that, for any $A>0$ we have \begin{align}\label{H1_conv_right} u(t,\cdot+\widetilde{x}_n(t))-\rho_n(t)\varphi\xrightarrow{t\to+\infty}0 \ \hbox{ in } \ H^1((-A,+\infty)), \end{align} which is precisely \eqref{local_strong_h1}. The sketch of the proof is complete.
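Let us also record, for the reader's convenience, the elementary Hilbert-space fact used to obtain \eqref{strong_h1}: if $v_k\rightharpoonup v$ weakly in a Hilbert space $H$ and $\limsup_{k\to+\infty}\Vert v_k\Vert_H\leq \Vert v\Vert_H$, then, since $\langle v_k,v\rangle_H\to\Vert v\Vert_H^2$, \[ \limsup_{k\to+\infty}\Vert v_k-v\Vert_H^2=\limsup_{k\to+\infty}\big(\Vert v_k\Vert_H^2-2\langle v_k,v\rangle_H+\Vert v\Vert_H^2\big)\leq 0, \] that is, $v_k\to v$ strongly in $H$. In the argument above this is applied with $H=H^1(K)$ for each compact set $K$, along a subsequence realizing the $\liminf$.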
\end{proof} \textbf{Important:} Notice that in the same fashion as in \eqref{weak_strong_conv}, following the same lines as in Proposition $4.2$ in \cite{Pa} and by using the rigidity theorem, we deduce that for any increasing sequence $t_n\to+\infty$ there exist a subsequence $\{t_{n_k}\}$ and positive numbers $c_1^*,...,c_n^*$ such that \begin{align} u\big(t_{n_k},\cdot+\widetilde{x}_i(t_{n_k})\big)-\varphi_{c_i^*}\to 0 \ \hbox{ in } \ H^1_{loc}(\mathbb{R}) \ \hbox{ as } \ k\to+\infty. \end{align} Now, before going further let us introduce some notation. From now on and for $i=1,...,n$ we shall denote by $W_i$ and $w_i$ the functions defined by \begin{align}\label{def_W_w_i} W_i:=\sum_{j=i}^n \sqrt{c_j^*}\varphi(\cdot-\widetilde{x}_j(t))\quad \hbox{and}\quad w_i:=u-W_i. \end{align} Additionally, we define the following modified energy functional \[ \widetilde{\mathcal{I}}(t):=\int\left(u^2+u_x^2\right)(t,\cdot)\Psi\big(\cdot-y_n(t)\big)dx, \] which corresponds to $\mathrm{I}_n^{y_n(t_{R}^n)-x_n(t_R^n)}(t)$ by redefining $z_n(\cdot)$ as $y_n(\cdot)$ in Lemma \ref{tech_lem_mon_exp}, where $\{y_i\}_{i=1}^n$ are defined in Lemma \ref{mod_lemma} and we change $y_1(t):=\tfrac{\beta}{2}$. Now, notice that for $t\geq t_R^n$ we have that $x_n(t_R^n)-y_n(t_R^n)\geq R\geq R_0$, and hence, by the same proof as in Lemma \ref{tech_lem_mon_exp} we deduce that $\widetilde{\mathcal{I}}$ is almost non-increasing for $t\geq t_R^n$. \medskip The following two lemmas about the convergence of $\rho_i(t)$ and $\dot{\widetilde{x}}_i(t)$ in the case of the fastest peakon (i.e. $i=n$) follow the same lines as the ones for the single peakon case (cf. \cite{Pa}; see also \cite{Mo}). However, for the sake of completeness we prove them anyway. \begin{lem}\label{lem_convergence_rho_n} As $t$ goes to $+\infty$ the following convergence holds: \[ \rho_n(t)\to \sqrt{c_n^*}.
\] In particular, as a consequence of the previous convergence, as $t$ goes to $+\infty$ we also have: \[ \int \left(\big(u-\sqrt{c_n^*}\varphi(\cdot-\widetilde{x}_n(t))\big)^2+\big(u_x-\sqrt{c_n^*}\varphi'(\cdot-\widetilde{x}_n(t))\big)^2\right)\Psi\big(\cdot-y_n(t)\big)\to 0. \] \end{lem} \begin{proof} In fact, let $\epsilon>0$ be arbitrarily small but fixed and consider $R\gg1$ sufficiently large such that $Ce^{-R/6}<\epsilon$. Then, by using \eqref{ineq_J_l} as well as the energy conservation we obtain that for all $t>t'>t_0^n$ we have \[ \int \big(u^2+u_x^2\big)(t)\Psi\big(x-\widetilde{x}_n(t)+R\big)\leq\epsilon+ \int \big(u^2+u_x^2\big)(t')\Psi\big(x-\widetilde{x}_n(t')+R\big). \] On the other hand, due to the strong convergence result \eqref{conv_two} and the exponential localization of both $\varphi$ and $\Psi$, we infer that there exists $t_0\gg1$ sufficiently large such that for all $t\geq t_0$ we have \[ \left\vert\int \big(u^2+u_x^2\big)(t)\Psi(x-\widetilde{x}_n(t)+R)-\rho_n^2(t)E(\varphi)\right\vert\leq\epsilon. \] Combining the last two inequalities we conclude that for any pair of times $(t,t')\in\mathbb{R}^2$ satisfying $t>t'>\max\{t_0,t_0^n\}$ we have \[ \rho_n^2(t)E(\varphi)\leq \rho_n^2(t')E(\varphi)+3\epsilon. \] Since $\epsilon>0$ was arbitrary, the latter inequality forces $\rho_n(t)$ to have a limit at $+\infty$, and thus \[ \lim_{t\to+\infty}\rho_n(t)=\sqrt{c_n^*}, \] which finishes the proof of the lemma. \end{proof} \begin{lem}\label{lema_convergence_x_dot_n} As $t$ goes to $+\infty$ the following convergence holds: $\dot{\widetilde{x}}_n(t)\to c_n^*$. \end{lem} \begin{proof} First of all, let us start by recalling and introducing some notation: \[ w_1(t)=u-\sum_{j=1}^n\varphi_{c_j^*}(\cdot-\widetilde{x}_j(t)), \ \ \omega_i:=\varphi_{c_i^*}(\cdot-\widetilde{x}_i(t)) \ \hbox{ and } \ \omega_{n_0}^i:=(\rho_{n_0}*\varphi_{c_i^*})(\cdot-\widetilde{x}_i(t)).
\] Now, on the one hand, by differentiating the $n$-th equation in \eqref{mod_orthogonality} with respect to time and by using that $\varphi$ satisfies the equation $\varphi-\varphi''=2\delta$ we obtain \[ \int w_{1,t}\omega_{n_0,x}^n=\dot{\widetilde{x}}_n\int w_1(t,x)\omega_{n_0}^n(t,x)dx-2\sqrt{c_n^*}\dot{\widetilde{x}}_n\int w_1(t,x)\rho_{n_0}\big(x-\widetilde{x}_n(t)\big)dx. \] On the other hand, by using that each peakon $\varphi_{c_i^*}(x-c_i^*t)$ solves \eqref{nov_eq_2} we infer that each $\omega_i(t,x)$ satisfies the following equation: \[ \omega_{i,t}+(\dot{\widetilde{x}}_i-c_i^*)\omega_{i,x}+\omega_i^2\omega_{i,x}=p_x*\Big(\omega_i^3+\dfrac{3}{2}\omega_i\omega_{i,x}^2\Big)-\dfrac{1}{2}p*\omega_{i,x}^3. \] Therefore, using that $u(t)$ also solves \eqref{nov_eq_2}, by replacing $u=w_1+\sum_{i=1}^n\omega_i$ and then using the equation satisfied by each $\omega_i$ we obtain \begin{align}\label{mod_huge_eq_11} w_{1,t}-\sum_{j=1}^n(\dot{\widetilde{x}}_j-c_j^*)\omega_{j,x}&=-\left(w_1+\sum_{j=1}^n\omega_j\right)^2w_{1,x}-w_1^2\sum_{j=1}^n\omega_{j,x}-2w_1\sum_{j,k=1}^n\omega_j\omega_{k,x}\nonumber \\ & \quad -\sum_{\substack{j,k,\ell=1 \\ (k,\ell)\neq(j,j)}}^n\omega_j\omega_k\omega_{\ell,x}-p_x*w_1^3-3\sum_{j=1}^np_x*(w_1^2\omega_j)\nonumber \\ & \quad -3\sum_{j,k=1}^np_x*w_1\omega_j\omega_k-\sum_{\substack{j,k,\ell=1 \\ (k,\ell)\neq(j,j)}}^np_x*\omega_j\omega_k\omega_\ell \nonumber \\ & \quad -\dfrac{3}{2}p_x*(w_1w_{1,x}^2)-3\sum_{j=1}^np_x*w_1w_{1,x}\omega_{j,x}-\dfrac{3}{2}\sum_{j,k=1}^np_x*w_1\omega_{j,x}\omega_{k,x}\nonumber \\ & \quad -\dfrac{3}{2}\sum_{j=1}^np_x*w_{1,x}^2\omega_j-3\sum_{j,k=1}^np_x*w_{1,x}\omega_j\omega_{k,x}\nonumber \\ & \quad -\dfrac{3}{2}\sum_{\substack{j,k,\ell=1 \\ (k,\ell)\neq(j,j)}}^np_x*\omega_j\omega_{k,x}\omega_{\ell,x}-\dfrac{1}{2}p*w_{1,x}^3-\dfrac{3}{2}\sum_{j=1}^np*w_{1,x}^2\omega_{j,x}\nonumber \\ & \quad -\dfrac{3}{2}\sum_{j,k=1}^n p*w_{1,x}\omega_{j,x}\omega_{k,x}-\dfrac{1}{2}\sum_{\substack{j,k,\ell=1 \\ (k,\ell)\neq(j,j)}}^n
p*\omega_{j,x}\omega_{k,x}\omega_{\ell,x}. \end{align} On the other hand, notice that due to \eqref{conv_two}, inequality \eqref{mod_parameter_bound} and the exponential decay of both $\omega_i$ and $\omega_{n_0}^n$ we deduce that $\Vert \omega_i \omega_{n_0}^n\Vert_{L^1}+\Vert \omega_{i,x}\omega_{n_0}^n\Vert_{L^1}\to 0$ as $t\to+\infty$ for $i\neq n$ and \[ \Vert w_1^2\omega_{n_0,x}^n\Vert_{L^1}+\Vert w_{1,x}^2\omega_{n_0,x}^n\Vert_{L^1}+\int \vert w_1\omega_{n_0}^n\vert dx+\int \vert w_1\rho_{n_0}\big(x-\widetilde{x}_n(t)\big)\vert dx\to0 \ \hbox{ as }\ t\to+\infty. \] Therefore, by taking the $L^2$-inner product of equation \eqref{mod_huge_eq_11} with $\omega_{n_0,x}^n$ and noticing that $\langle \omega_{n,x}(t),\omega_{n_0,x}^n(t)\rangle_{L^2,L^2}\equiv \mathrm{constant}>0$ for all times $t\in\mathbb{R}$ we conclude \[ \dot{\widetilde{x}}_n-c_n^*\to 0 \ \hbox{ as }\ t\to+\infty. \] The proof is complete. \end{proof} Finally, it only remains to prove the analogues of Lemmas \ref{lem_convergence_rho_n} and \ref{lema_convergence_x_dot_n} for the cases $i=1,...,n-1$. This is the aim of the remaining part of this subsection.
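\medskip Let us point out that the two previous lemmas furnish the starting point of the induction carried out below: the second convergence in Lemma \ref{lem_convergence_rho_n} provides the inductive hypothesis \eqref{inductive_hypothesis} in the case $i^\star+1=n$, while Lemmas \ref{lem_convergence_rho_n} and \ref{lema_convergence_x_dot_n} give the corresponding convergences $\rho_n(t)\to\sqrt{c_n^*}$ and $\dot{\widetilde{x}}_n(t)\to c_n^*$.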
\medskip \textbf{Inductive argument:} Now we proceed by an inductive argument, that is, from now on we assume that for some $i^\star\in\{1,...,n-1\}$ it holds that \begin{align}\label{inductive_hypothesis} &\int \left(u-\sum_{j=i^\star+1}^n\sqrt{c_j^*}\varphi(\cdot-\widetilde{x}_j(t))\right)^2\Psi\big(\cdot-y_{i^\star+1}(t)\big)dx\nonumber \\ & \qquad +\int\left(u_x-\sum_{j=i^\star+1}^n\sqrt{c_j^*}\varphi'(\cdot-\widetilde{x}_j(t))\right)^2\Psi\big(\cdot-y_{i^\star+1}(t)\big)dx\to 0 \end{align} and we intend to prove that, as $t$ goes to $+\infty$, this implies that $\dot{\widetilde{x}}_i(t)\to c_i^*$, $\rho_i(t)\to \sqrt{c_i^*}$ and \begin{align}\label{inductive_conclusion} &\int \left(u-\sum_{j=i^\star}^n\sqrt{c_j^*}\varphi(\cdot-\widetilde{x}_j(t))\right)^2\Psi\big(\cdot-y_{i^\star}(t)\big)dx\nonumber \\ & \qquad +\int\left(u_x-\sum_{j=i^\star}^n\sqrt{c_j^*}\varphi'(\cdot-\widetilde{x}_j(t))\right)^2\Psi\big(\cdot-y_{i^\star}(t)\big)dx\to 0. \end{align} For the sake of simplicity we split the proof into six steps, the first five of which are devoted to proving the inductive argument. First, we prove an extra monotonicity property. The second step establishes the analogue of the convergence result \eqref{conv_two} in the case $i\neq n$. Then, we prove the convergences of the scaling and velocity parameters. In step five we conclude the inductive argument by proving \eqref{inductive_conclusion}. Finally, the last step is devoted to concluding the convergence result \eqref{MT_5_conclusions} on the first set in $\mathcal{A}_t$. For the sake of simplicity, from now on we drop the superscript in $i^\star$ and simply write $i$. \medskip \textbf{Step 1:} Let $w_{i+1}:=u-W_{i+1}$ (see \eqref{def_W_w_i} for the definition of $\{W_i\}_{i=1}^n$).
We claim that both functionals $\mathcal{J}_l^R\big(w_{i+1}(t,\cdot+\widetilde{x}_i(t))\big)$ and $\mathcal{J}_r^R\big(w_{i+1}(t,\cdot+\widetilde{x}_i(t))\big)$ enjoy the almost monotonicity properties \eqref{ineq_J_l}-\eqref{ineq_i_J_r} for $t\geq \tau_R^i\geq t_R^i$, where $\tau_R^i$ is a sufficiently large parameter to be fixed. In fact, first of all notice that due to \eqref{mod_parameter_bound} and \eqref{inductive_hypothesis} we deduce that for every $\epsilon>0$ there exists $t_\epsilon^i\gg1$ sufficiently large such that for all $t\geq t_\epsilon^i$ we have \[ \left\vert \mathcal{J}_l^R\Big(u\big(t,\cdot+\widetilde{x}_i(t)\big)\Big)-\mathcal{J}_l^R\Big(w_{i+1}\big(t,\cdot+\widetilde{x}_i(t)\big)\Big)\right\vert\leq \epsilon, \] which proves the assertion for $\mathcal{J}_l^R$. Now, in order to deal with the second case we start by rewriting $\mathcal{J}_r^R$ as \begin{align*} \mathcal{J}_r^R\big(u(t,\cdot+\widetilde{x}_i(t))\big)&=\int \big(u^2+u_x^2\big)\Psi\big(\cdot-\widetilde{x}_i(t)-R\big)\Big(1-\Psi\big(\cdot-y_{i+1}(t)\big)\Big) \\ & \quad +\int \big(u^2+u_x^2\big)\Psi\big(\cdot-\widetilde{x}_i(t)-R\big)\Psi\big(\cdot-y_{i+1}(t)\big)=:\mathbf{I}+\mathbf{II}. \end{align*} Thus, on the one hand, by using \eqref{mod_parameter_bound} again we have \begin{align*} \mathbf{I}(t)-\int \big(w_{i+1}^2+w_{i+1,x}^2\big)\Psi\big(\cdot-\widetilde{x}_i(t)-R\big)\Big(1-\Psi\big(\cdot-y_{i+1}(t)\big)\Big)\to 0 \ \hbox{ as } \ t\to+\infty, \end{align*} while on the other hand, by using the inductive hypothesis \eqref{inductive_hypothesis} together with \eqref{mod_parameter_bound} we deduce that as $t$ goes to $+\infty$ we have \begin{align*} \mathbf{II}(t)-\int \big(w_{i+1}^2+w_{i+1,x}^2\big)\Psi\big(\cdot-\widetilde{x}_i(t)-R\big)\Psi\big(\cdot-y_{i+1}(t)\big)\to \sum_{j=i+1}^nE(\varphi_{c_j^*}). \end{align*} Therefore, by gathering both convergences we conclude the claim for $\tau_R^i\gg t_\epsilon^i$ sufficiently large.
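Let us note, for later use, that the limiting energies appearing above are explicit: since the peakon profile satisfies $\varphi-\varphi''=2\delta$, we have $\varphi(x)=e^{-\vert x\vert}$ and $\vert \varphi'\vert=\varphi$ almost everywhere, so that for $\varphi_c:=\sqrt{c}\,\varphi$ a direct computation gives \[ E(\varphi_c)=\int_{\mathbb{R}}\big(\varphi_c^2+\varphi_{c,x}^2\big)dx=2c\int_{\mathbb{R}}e^{-2\vert x\vert}dx=2c, \] and hence $\sum_{j=i+1}^nE(\varphi_{c_j^*})=2\sum_{j=i+1}^nc_j^*$.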
\medskip \textbf{Step 2:} Now we claim that for all $A>0$ the following strong convergence holds: \[ u\big(t,\cdot+\widetilde x_i(t)\big)-\rho_i(t)\varphi-W_{i+1}\big(t,\cdot+\widetilde{x}_i(t)\big)\to 0 \ \hbox{ in }\ H^1\big((-A,+\infty)\big) \ \hbox{ as }\ t\to+\infty. \] In fact, it is enough to recall that due to the inductive hypothesis \eqref{inductive_hypothesis} we have that for any $\epsilon>0$ arbitrarily small, there exists $t_\epsilon^i\gg1$ sufficiently large such that for all $t\geq t_\epsilon^i$ we have \begin{align*} \int \big(w_{i+1}^2+w_{i+1,x}^2\big)(t,x)\Psi\big(x-y_{i+1}(t)\big)dx<\dfrac{\epsilon}{3}. \end{align*} Moreover, due to \eqref{mod_parameter_bound}, by making $t_\epsilon^i$ bigger if necessary we can also assume that for any $t\geq t_\epsilon^i$ it holds: \[ \int \big(\varphi^2+\varphi_x^2\big)\big(x-\widetilde{x}_i(t)\big)\Psi\big(x-y_{i+1}(t)\big)dx<\dfrac{\epsilon}{3}. \] Therefore, by gathering the above inequalities together with the almost monotonicity result for $ \mathcal{J}_r^R\big(w_{i+1}(t,\cdot+\widetilde x_i(t))\big)$ with $R=y_{i+1}(t_\epsilon^i)-\widetilde x_i(t_\epsilon^i)$ and the strong convergence result \eqref{local_strong_conv}, we conclude that for all $A>0$ fixed we have \begin{align*} u\big(t,\cdot+\widetilde x_i(t)\big)-\rho_i(t)\varphi-W_{i+1}\big(t,\cdot+\widetilde{x}_i(t)\big)\to 0 \ \hbox{ in }\ H^1\big((-A,+\infty)\big) \ \hbox{ as }\ t\to+\infty, \end{align*} which proves the claim. \medskip \textbf{Step 3:} Our aim now is to prove the convergence of the scaling parameter $\rho_i(t)$.
In fact, first of all notice that due to \eqref{mod_parameter_bound}, the exponential decay of $\varphi$, $\varphi'$ and $\Psi$ and the latter strong convergence result in $H^1((-A,+\infty))$, we deduce that for any $\delta>0$ there exist $R_\delta>1$ and $t_\delta^i>1$ sufficiently large such that \[ \left\vert\int\big(w_{i+1}^2+w_{i+1,x}^2\big)(t,x)\Psi\big(x-\widetilde{x}_i(t)+R_\delta\big)dx-\rho_i^2(t)E\big(\varphi\big)\right\vert\leq \delta \ \, \hbox{ for all } \, \ t\geq t_\delta^i. \] Then, the almost monotonicity of $\mathcal{J}_l^{R_\delta}(\cdot)$ implies that $\rho_i(t)\to \sqrt{c_i^*}$ as $t\to+\infty$ for some $c_i^*$ close to $c_i$. In fact, from the almost monotonicity of $\mathcal{J}_l^{R_\delta}\big(w_{i+1}(t,\cdot+\widetilde{x}_i(t))\big)$ and the latter inequality it follows that \[ \rho_i^2(t)E(\varphi)\leq \rho_i^2(t')E(\varphi)+3\delta, \quad \hbox{for all } \ t\geq t'\geq t_\delta^i. \] Since $\delta>0$ is arbitrary, this forces $\rho_i(\cdot)$ to have a limit at $+\infty$, which ends the proof. Notice that, in particular, the following convergence holds: \begin{align}\label{convergence_right} u\big(t,\cdot+\widetilde x_i(t)\big)-\sqrt{c_i^*}\varphi-W_{i+1}\big(t,\cdot+\widetilde{x}_i(t)\big)\to 0 \ \hbox{ in }\ H^1\big((-A,+\infty)\big) \ \hbox{ as }\ t\to+\infty. \end{align} \smallskip \textbf{Step 4:} Now we intend to prove that $\dot{\widetilde{x}}_i(t)\to c_i^*$ as $t\to+\infty$. We point out that the proof is essentially contained in the proof of Lemma \ref{lema_convergence_x_dot_n}. In fact, notice that by differentiating the $i$-th equation in \eqref{mod_orthogonality} with respect to time and recalling that $\varphi$ satisfies $\varphi-\varphi''=2\delta$ we obtain \[ \int w_{1,t}\omega_{n_0,x}^i=\dot{\widetilde{x}}_i\int w_1(t,x)\omega_{n_0}^i(t,x)dx-2\dot{\widetilde{x}}_i\sqrt{c_i^*}\int w_1(t,x)\rho_{n_0}\big(x-\widetilde{x}_i(t)\big)dx.
\] On the other hand, notice that due to \eqref{convergence_right}, inequality \eqref{mod_parameter_bound} and the exponential decay of both $\omega_i$ and $\omega_{n_0}^i$ we deduce that $\Vert \omega_j \omega_{n_0}^i\Vert_{L^1}+\Vert \omega_{j,x}\omega_{n_0}^i\Vert_{L^1}\to 0$ as $t\to+\infty$ for $j\neq i$ and \[ \Vert w_1^2\omega_{n_0,x}^i\Vert_{L^1}+\Vert w_{1,x}^2\omega_{n_0,x}^i\Vert_{L^1}+\Vert w_1\omega_{n_0}^i\Vert_{L^1}+\Vert w_1\rho_{n_0}(\cdot-\widetilde{x}_i(t))\Vert_{L^1}\to0 \ \hbox{ as }\ t\to+\infty. \] Therefore, by taking the $L^2$-inner product of equation \eqref{mod_huge_eq_11} with $\omega_{n_0,x}^i$ and noticing that $\langle \omega_{i,x}(t),\omega_{n_0,x}^i(t)\rangle_{L^2,L^2}\equiv \mathrm{constant}>0$ for all times $t\in\mathbb{R}$, we conclude \[ \dot{\widetilde{x}}_i-c_i^*\to 0 \ \hbox{ as }\ t\to+\infty. \] \textbf{Step 5:} Now we intend to conclude the inductive proof, which at the same time proves the convergence result \eqref{MT_5_conclusions} on the second set in $\mathcal{A}_t$. In fact, first of all let us recall that, from the previous steps, for any $A>0$ the following holds as $t\to+\infty$: \[ u\big(t,\cdot+\widetilde{x}_i(t)\big)-\varphi_{c_i^*}-W_{i+1}\big(t,\cdot+\widetilde{x}_i(t)\big)\to0 \, \hbox{ in }\, H^1((-A,+\infty)). \] Now, let $\eta>0$ be arbitrarily small but fixed.
Let us consider $R\gg1$ sufficiently large such that \begin{align}\label{smallness_varphi_psi_proooof} \Vert \varphi\Vert_{H^1\left(\left(-\infty,-\frac{R}{2}\right)\right)}^2<\eta \quad \hbox{and}\quad \Vert \Psi-1\Vert_{L^\infty\left(\left(\frac{R}{2},+\infty\right)\right)}<\eta. \end{align} Hence, by the previous convergence results we deduce the existence of a time $t_0>0$ sufficiently large for which $\widetilde{x}_i(t_0)>R$ and such that for all $t\geq t_0$ we have \[ \left\Vert u\big(t,\cdot+\widetilde{x}_i(t)\big)-\varphi_{c^*_i}-W_{i+1}\big(t,\cdot+\widetilde{x}_i(t)\big)\right\Vert_{H^1\left(\left(-\frac{R}{2},+\infty\right)\right)}<\eta. \] Moreover, by making both $R$ and $t_0$ bigger if necessary we can also assume that for all $y\geq R$ and all $t\geq t_0$ it holds \[ \left\vert\sum_{j=i}^nE(\varphi_{c_j^*})-\int \big(W_i^2+W_{i,x}^2\big)\Psi\big(\cdot-\widetilde{x}_i(t)+y\big)dx\right\vert<\eta. \] On the other hand, by using \eqref{smallness_varphi_psi_proooof}, inequality \eqref{mod_parameter_bound} and the previous inequalities we deduce that for all $y\geq R$ and all $t\geq t_0$ we have \begin{align}\label{energy_bound} \left\vert \sum_{j=i}^nE(\varphi_{c_j^*})-\int \left(u(t,\cdot+\widetilde{x}_i(t))W_{i}+u_x(t,\cdot+\widetilde{x}_i(t))W_{i,x}\right)\Psi(\cdot+y)\right\vert\lesssim \eta. \end{align} Now, we consider the following trajectories: \[ z_1(t):=\tfrac{\beta}{2}t \quad \hbox{and}\quad z_j(t):=\tfrac{3}{4}x_{j-1}(t)+\tfrac{1}{4}x_j(t), \quad j=2,...,n-1. \] Notice that, with this specific choice of the curves $z_j$, the functional $\mathrm{I}_{i,t_0}^{-R}$ defined in Lemma \ref{tech_lem_mon_exp} satisfies the almost monotonicity property.
Thus, by using inequality \eqref{AM_right_i_energy} we obtain that for all $t\geq t_0$ it holds \begin{align*} &\int \left(u^2+u_x^2\right)(t,\cdot)\Psi\left(\cdot -z_i^{-R}(t)\right)\leq Ce^{-R/24} +\int \left(u^2+u_x^2\right)(t_0,\cdot)\Psi\left(\cdot -z_i^{-R}(t_0)\right), \end{align*} where $z_i^{-R}(t)=\widetilde{x}_i(t_0)-R+z_i(t)-z_i(t_0)$. On the other hand, by straightforward computations we have \begin{align*} \int (w_i^2+w_{i,x}^2)(t,\cdot)\Psi\left(\cdot-z_i^{-R}(t)\right)&=\int \left(u^2+u_x^2\right)(t,\cdot)\Psi\left(\cdot -z_i^{-R}(t)\right) \\ & \quad +\int \left(W_i^2+W_{i,x}^2\right)(t,\cdot)\Psi\left(\cdot -z_i^{-R}(t)\right) \\ & \quad -2\int \left(uW_i+u_xW_{i,x}\right)(t,\cdot)\Psi\left(\cdot -z_i^{-R}(t)\right) \\ & =:\mathrm{I}+\mathrm{II}+\mathrm{III}. \end{align*} Moreover, notice that due to \eqref{mod_parameter_bound} and the definition of $\{z_j\}_{j=1}^{n-1}$ we deduce that for all $t\geq t_0$ we have \[ \widetilde{x}_i(t)-\widetilde{x}_i(t_0)-z_i(t)+z_i(t_0)+R\geq R. \] Hence, by using inequality \eqref{energy_bound} and then the exponential decay of $\varphi$ we get \begin{align*} \mathrm{I}+\mathrm{II}+\mathrm{III}&\leq\int \left(u^2+u_x^2\right)(t_0,\cdot)\Psi\left(\cdot -\widetilde{x}_i(t_0)+R\right)+Ce^{-R/24} \\ & \quad+\int \left(W_i^2+W_{i,x}^2\right)(t_0,\cdot)\Psi\left(\cdot -\widetilde{x}_i(t_0)+R\right)+Ce^{-R/24} \\ & \quad -2\int \left(uW_i+u_xW_{i,x}\right)(t_0,\cdot)\Psi\left(\cdot -\widetilde{x}_i(t_0)+R\right)+C\eta \\ & \lesssim \int \big(w_i^2+w_{i,x}^2\big)(t_0,\cdot)\Psi\left(\cdot-\widetilde{x}_i(t_0)+R\right)+e^{-R/24}+\eta \\ & \lesssim \eta+e^{-R/24}, \end{align*} where we have used the exponential decay of $\varphi$ to obtain the latter inequality. Finally, notice that we can take $R\gg1$ sufficiently large and $t_1>t_0$ such that for all $i=2,...,n-1$ and all $t\geq t_1$ it holds \[ z_1^{-R}(t)\leq \tfrac{\beta}{2} \quad \hbox{and} \quad z_i^{-R}(t)\leq y_i(t).
\] Therefore, recalling that if $i=1$ we defined $y_1(t):=\tfrac{\beta}{2}$ (see the beginning of this section), we conclude that for all $t\geq t_1$ we have \[ \int (w_{i}^2+w_{i,x}^2)(t,\cdot)\Psi\left(\cdot-y_i(t)\right)\lesssim \eta, \] which completes the proof of the claim. \medskip \textbf{Step 6:} Finally, it only remains to prove the convergence in $(-\infty,z)$ for any fixed $z\in\mathbb{R}$. This is a consequence of a more general property, noticed by Molinet in \cite{Mo3}, ensuring that all the energy of solutions associated with initial data in $Y_+$ travels to the right. In fact, we shall prove the following lemma, which immediately concludes the proof of Theorem \ref{MT5}. \begin{lem}[\cite{Pa}]\label{traveling_energy_lem} For any $u_0\in Y_+$ and any $z\in\mathbb{R}$, the solution $u\in C(\mathbb{R},H^1(\mathbb{R}))$ to equation \eqref{nov_eq_2} associated with $u_0$ satisfies \begin{align*} \lim_{t\to+\infty}\Vert u(t)\Vert_{H^1((-\infty,z))}=0. \end{align*} \end{lem} \begin{proof}[Proof of Lemma \ref{traveling_energy_lem}] This lemma has already been proved for the Novikov equation in \cite{Pa}. However, for the sake of completeness we prove it again. First of all notice that, for $\Psi$ defined in \eqref{psi_def_2}, for any fixed time $t\in\mathbb{R}$ the map \[ z\mapsto\int \big(u^2+u_x^2\big)(t,x)\Psi(x-z)dx \] defines a decreasing continuous bijection from $\mathbb{R}$ onto $(0,\Vert u_0\Vert_{H^1}^2)$. Hence, for any fixed $0<\gamma<\Vert u_0\Vert_{H^1}^2$, we deduce that the map $x_\gamma:\mathbb{R}\to\mathbb{R}$ defined by the equation \begin{align}\label{def_xgamma} \int \big(u^2+u_x^2\big)(t,x)\Psi(x-x_\gamma(t))dx=\gamma, \end{align} is well-defined. Moreover, since $u\in C(\mathbb{R},H^1(\mathbb{R}))$ we deduce that $x_\gamma$ is a continuous function.
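Let us briefly justify the bijectivity claimed above. From \eqref{psi_def_2} a direct computation gives \[ \Psi'(x)=\dfrac{1}{3\pi}\,\dfrac{e^{x/6}}{1+e^{x/3}}\leq \dfrac{1}{3\pi}e^{-\vert x\vert/6}, \qquad \Psi'(x)>0, \] so that, for $u_0\not\equiv0$, the map above is strictly decreasing in $z$, while, by dominated convergence together with the conservation of the energy $E(u(t))=\Vert u_0\Vert_{H^1}^2$, it tends to $\Vert u_0\Vert_{H^1}^2$ as $z\to-\infty$ and to $0$ as $z\to+\infty$.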
Now, notice that in order to conclude the proof of the lemma it is enough to show that for any $\gamma\in(0,\Vert u_0\Vert_{H^1}^2)$ we have \begin{align}\label{lim_final} \lim_{t\to+\infty} x_\gamma(t)=+\infty. \end{align} For the sake of readability we split the proof of the latter property into two steps. \medskip \textbf{Step 1:} First we claim that for any $\Delta>0$ and any $t\in\mathbb{R}$ we have \begin{align}\label{claim_step} x_\gamma(t+\Delta)-x_\gamma(t)\geq \dfrac{2}{5}\int_t^{t+\Delta}\int u^2(s,x)\Psi'(\cdot-x_\gamma(s))dx\,ds >0. \end{align} In fact, notice that by continuity with respect to the initial data it is enough to prove the claim for solutions $u\in C^\infty(\mathbb{R},H^\infty(\mathbb{R}))\cap L^\infty(\mathbb{R},H^1(\mathbb{R}))$. On the other hand, as an application of the Implicit Function Theorem we deduce that $x_\gamma(t)$ is of class $C^1$. In fact, let us define the functional \[ \psi(v,z):=\int \big(v^2+v_x^2\big)\Psi(\cdot-z)dx. \] Notice that $\psi$ clearly defines a $C^1$ function on $H^1(\mathbb{R})\times \mathbb{R}$. Moreover, notice that since any function $v\in Y_+\setminus\{0\}$ cannot vanish at any point $x\in\mathbb{R}$, we deduce that for any function $v\in H^\infty\cap Y_+$ and any $z\in\mathbb{R}$ we have \[ \dfrac{\partial\psi}{\partial z}=-\int \big(v^2+v_x^2\big)\Psi'(\cdot-z)<0. \] Thus, recalling equation \eqref{dt_I_J_i} from the proof of Lemma \ref{AM_orb_train}, we obtain \begin{align*} \dot{x}_\gamma\int \big(u^2+u_x^2\big)\Psi'(\cdot-x_\gamma)&=\int u^2u_x^2\Psi'+\int \{p*(3uu_x^2+2u^3)\}u\Psi'+\int \{p_x*u_x^3\}u\Psi'. \end{align*} Now, due to the fact that $\vert v_x\vert\leq v$ for any $v\in Y_+$ we deduce $p*uu_x^2+p_x*u_x^3\geq 0$. On the other hand, since $u(t)$ is positive, from Lemma $3.7$ in \cite{Pa} we deduce \[ p*(3uu_x^2+5u^3)\geq 2u^3, \ \hbox{ and in particular } \ p*(2uu_x^2+2u^3)\geq \tfrac{4}{5}u^3. 
\] Hence, by using again that $\vert v_x\vert\leq v$ for any $v\in Y_+$ and the previous inequalities we get \begin{align*} 2\dot{x}_\gamma\int u^2\Psi'(\cdot-x_\gamma)&\geq \int u^2u_x^2\Psi'+\dfrac{4}{5}\int u^4\Psi'. \end{align*} Therefore, due to the non-negativity of $\Psi'$ together with the fact that $\Vert \Psi'\Vert_{L^1}=1$ and by using H\"older's inequality we get \begin{align*} \dot{x}_\gamma(t)\geq \dfrac{2}{5}\int u^2\Psi'(\cdot-x_\gamma(t))dx. \end{align*} Integrating in time between $t$ and $t+\Delta$ we conclude the claim. \medskip \textbf{Step 2:} Now we intend to conclude the proof of \eqref{lim_final}. First of all notice that from the claim of the previous step we obtain, in particular, that $x_\gamma(\cdot)$ is increasing and hence it has a limit $x_\gamma^\infty\in \mathbb{R}\cup\{+\infty\}$, i.e. \begin{align*} \lim_{t\to+\infty}x_\gamma(t)=x_\gamma^\infty. \end{align*} Thus, the proof of \eqref{lim_final} reduces to proving that $x_\gamma^\infty=+\infty$. In fact, let us proceed by contradiction, i.e. let us suppose that $x_\gamma^\infty\in\mathbb{R}$. Then, notice that the latter hypothesis together with equation \eqref{def_xgamma} and the fact that $\vert u_x\vert\leq u\leq \Vert u_0\Vert_{H^1}$ for all $(t,x)\in\mathbb{R}^2$ implies \begin{align}\label{contrad_2} \lim_{t\to+\infty}\int\big(u^2+u_x^2\big)(t,x)\Psi(\cdot-x_\gamma(t))=\lim_{t\to+\infty}\int \big(u^2+u_x^2\big)(t,x)\Psi(\cdot-x_\gamma^\infty)=\gamma. \end{align} On the other hand, by taking $\Delta=1$, from \eqref{claim_step} and the convergence of $x_\gamma(t)$ we deduce \[ \lim_{t\to+\infty}\int_{t}^{t+1}\int u^2\Psi'(\cdot-x_\gamma(t))=\lim_{t\to+\infty}\int_t^{t+1}\int u^2\Psi'(\cdot-x_\gamma^\infty)=0. 
\] Notice that the latter equality together with the fact that $\vert v_x\vert\leq v$ for all $v\in Y_+$ implies, in particular, that there exists a sequence of times $t_n\to+\infty$ such that for any compact set $K\subset\mathbb{R}$ the following holds: \begin{align}\label{zero_compact} \lim_{n\to+\infty}\Vert u(t_n)\Vert_{L^\infty(K)}=0. \end{align} Now we choose any $\gamma<\gamma'<\Vert u_0\Vert_{H^1}^2$, arbitrary but fixed. Then, we consider the compact set \[ K:=[x^\infty_\gamma-M,x_\gamma^\infty+M], \] with $M\gg1$ sufficiently large such that $x_\gamma^\infty-M<x_{\gamma'}(0)$. Thus, by using \eqref{zero_compact}, the monotonicity of $t\mapsto x_{\gamma'}(t)$ and recalling that $x_{\gamma'}(0)<x_{\gamma}(0)$ we conclude \[ \lim_{n\to+\infty}\int \big(u^2+u_x^2\big)(t_n,x)\Psi(\cdot-x_{\gamma}^\infty)=\gamma'. \] However, this contradicts hypothesis \eqref{contrad_2}, which ends the proof of the lemma. \end{proof} \medskip \section{Appendix} \subsection{Proof of Lemma \ref{mod_lemma}}\label{mod_appendix} Let $\vec{z}=(z_1,...,z_n)\in\mathbb{R}^n$ be fixed and satisfy $z_i-z_{i-1}>L$. Consider the functionals given by the orthogonality conditions we are looking for, i.e., for each $i=1,...,n$ consider the functional given by \begin{align*} Y_i(y_1,...,y_n,u):=\int \big(u-R_{\vec{z}+\vec{y}}\big)\partial_x(\rho_m*\varphi_{c_i})(\cdot-z_i-y_i)dx. \end{align*} Notice that each $ Y_i:\mathbb{R}^n\times H^1\to \mathbb{R}$ defines a $\mathcal{C}^1$ functional in a neighborhood of $(0,...,0,R_{\vec{z}})$. Moreover, for any $\vec{z}\in\mathbb{R}^n$ we have $Y_i(0,...,0,R_{\vec z})=0$. For the sake of simplicity, from now on we denote by $Y$ the functional given by \[ Y(y_1,...,y_n,u):=\big(Y_1(y_1,...,y_n,u),...,Y_n(y_1,...,y_n,u)\big). 
\] Now, notice that for each $i=1,...,n$ we have \[ \dfrac{\partial Y_i}{\partial y_i}=\int\Big(u_x-\sum_{j\neq i}^n\partial_x\varphi_{c_j}(\cdot-z_j-y_j)\Big)\partial_x(\rho_m*\varphi_{c_i})(\cdot-z_i-y_i)dx, \] and for each $j=1,...,n$ with $j\neq i$ we have \[ \dfrac{\partial Y_i}{\partial y_j}=\int \partial_{x}\varphi_{c_j}(\cdot-z_j-y_j)\partial_x(\rho_m*\varphi_{c_i})(\cdot-z_i-y_i)dx. \] In particular, notice that there exists a constant $\textbf{C}>0$ depending only on $m\in\mathbb{N}$ such that for all $i=1,...,n$ and all $\vec{z}\in\mathbb{R}^n$ we have \[ \dfrac{\partial Y_i}{\partial y_i}(0,...,0,R_{\vec{z}})=\int \varphi_{c_i}'(\cdot-z_i)(\rho_m'*\varphi_{c_i})(\cdot-z_i)= \textbf{C}c_i\geq \textbf{C}c_1. \] On the other hand, by using the exponential decay of $\varphi$ and due to the fact that $\vert z_i-z_j\vert>L$ whenever $i\neq j$, we infer that for $L$ large enough we have \begin{align*} \left\vert\dfrac{\partial Y_i}{\partial y_j}(0,...,0,R_{\vec{z}})\right\vert&=\left\vert\int\varphi_{c_j}(\cdot-z_j)(\rho_m''*\varphi_{c_i})(\cdot-z_i)\right\vert \\ &\leq \left \vert \int \varphi_{c_j}(\cdot-z_j)(\rho_m*\varphi_{c_i})(\cdot-z_i)\right\vert+\left\vert\int\varphi_{c_j}(\cdot-z_j)\rho_m(\cdot-z_i)\right\vert \\ & =O\left(e^{-L/4}\right). \end{align*} Hence, for $L\gg1$ large enough we deduce that $D_{\vec{y}}Y(0,...,0,R_{\vec{z}})=D+P$, where $D$ is an invertible diagonal matrix and \[ \Vert D^{-1}\Vert\leq \dfrac{1}{\textbf{C}c_1} \quad\hbox{and} \quad \Vert P\Vert= O\left(e^{-L/4}\right). \] Thus, there exists $L_0>1$ such that for all $L>L_0$ and all $\vec{z}\in\mathbb{R}^n$ satisfying $z_i-z_{i-1}>L$, the Jacobian $D_{\vec{y}}Y(0,...,0,R_{\vec{z}})$ defines an invertible matrix. Hence, by using the Implicit Function Theorem we infer the existence of positive constants $\delta>0$, $C_0>0$ and $C^1$ functions $(y_1,...,y_n)$ defined in a $H^1$-neighborhood of $R_{\vec{z}}$ with values in a neighborhood of zero, i.e. 
\[ y_1,...,y_n:B_{H^1}(R_{\vec{z}},\delta)\to B_\mathbb{R}(0,C_0\delta), \] which are uniquely determined by the equation \[ Y\big(y_1(u),...,y_n(u),u\big)=0 \quad \hbox{for any } \ u\in B_{H^1}\big(R_{\vec{z}},\delta\big). \] In particular, there exists a constant $K_0>0$ such that if $u\in B_{H^1}\left(R_{\vec{z}},\delta_*\right)$ for some $0<\delta_*\leq\delta$, then \begin{align}\label{small_neigh} \sum_{i=1}^n\vert y_i(u)\vert\leq K_0\delta_*. \end{align} It is worth noticing that $\delta$ and $K_0$ only depend on $c_1$ and $L_0$ but not on the point $\vec{z}\in\mathbb{R}^n$. Thus, for $u\in B_{H^1}(R_{\vec{z}},\delta)$ we can set $\widetilde{x}_i(u):=z_i+y_i(u)$. Hence, assuming that $\delta\leq \tfrac{L_0}{8K_0}$, we infer that $\widetilde{x}_1,...,\widetilde{x}_n$ are $C^1$ functions on $B_{H^1}(R_{\vec{z}},\delta_*)$ and satisfy \begin{align}\label{distances_ap} \widetilde{x}_i(u)-\widetilde{x}_{i-1}(u)=z_i-z_{i-1}+y_i(u)-y_{i-1}(u)>\dfrac{L}{2}-2K_0\delta_*>\dfrac{L}{4}. \end{align} Now we intend to define the modulation of $u$. In fact, let us consider $\alpha_0<\tfrac{1}{2}\delta$ to be chosen later. Then, for all $L\geq L_0$ and any $0<\alpha<\alpha_0$, we define the modulation of $u$ in the following way: we cover the trajectory of $u$ by a finite number of open balls, \[ \left\{u(t):\ t\in[0,t_0]\right\}\subset\bigcup_{k=1,...,N}B_{H^1}\left(R_{\vec{z}_k},2\alpha\right). \] It is important to notice that, since $0<\alpha<\alpha_0<\tfrac{1}{2}\delta$, the functions $\widetilde{x}_i(u)$ are uniquely determined for \[ u\in B\left(R_{\vec{z}_k},2\alpha\right)\cap B\left(R_{\vec{z}_{k'}},2\alpha\right). \] Therefore, we can define the functions $t\mapsto \widetilde{x}_i(t)$ on $[0,t_0]$ by setting $\widetilde{x}_i(t):=\widetilde{x}_i(u(t))$. Thus, by construction \begin{align}\label{ort_proof} \int\Big(u(t,\cdot)-\sum_{j=1}^n\varphi_{c_j}\big(\cdot-\widetilde{x}_j(t)\big)\Big)\partial_x(\rho_m*\varphi_{c_i})\big(\cdot-\widetilde{x}_i(t)\big)dx=0. 
\end{align} On the other hand, letting $k$ be such that at time $t$ we have $u(t)\in B(R_{\vec{z}_k},2\alpha)$, by direct computation we get \begin{align*} \left\Vert u(t)-\sum_{i=1}^n\varphi_{c_i}\big(\cdot-\widetilde{x}_i(t)\big)\right\Vert_{H^1}&\leq \left\Vert u(t)-\sum_{i=1}^n\varphi_{c_i}\big(\cdot-z_{i,k}\big)\right\Vert_{H^1} \\ & \quad +\left\Vert\sum_{i=1}^n\big(\varphi_{c_i}\big(\cdot-\widetilde{x}_i(t)\big)-\varphi_{c_i}\big(\cdot-z_{i,k}\big)\big)\right\Vert_{H^1} \\ & \leq 2\alpha+\Big(2\sum_{i=1}^nE(\varphi_{c_i})-2\sum_{i=1}^n\int \varphi_{c_i}(\cdot-\widetilde{x}_i(t))\varphi_{c_i}(\cdot-z_{i,k}) \\ & \quad -2\sum_{i=1}^n\int \varphi_{c_i}'(\cdot-\widetilde{x}_i(t))\varphi_{c_i}'(\cdot-z_{i,k})\Big)^{1/2} =:2\alpha+\mathrm{II}. \end{align*} By using \eqref{small_neigh}, the identity $E(\varphi_{c_i})=2c_i$, integration by parts and recalling that $\varphi''=\varphi-2\delta$, we obtain \begin{align*} \mathrm{II}&\leq2\sum_{i=1}^n\big(c_i\big(1-e^{-\vert z_{i,k}-\widetilde{x}_i(t)\vert}\big)\big)^{1/2}= 2\sum_{i=1}^n\sqrt{c_i}\left(1-e^{-\vert z_{i,k}-\widetilde{x}_i(t)\vert}\right)^{1/2} \\ & \leq 2\sum_{i=1}^n\sqrt{c_i}\left(1-e^{-2K_0\alpha}\right)^{1/2}\leq 2\sqrt{2K_0\alpha}\sum_{i=1}^n\sqrt{c_i}=O\left(\sqrt{\alpha}\right). \end{align*} Therefore, for $\alpha\ll1$ small enough we conclude that for all $t\in[0,t_0]$ it holds \begin{align}\label{small_mod_ap_proof} \left\Vert u(t)-\sum_{i=1}^n\varphi_{c_i}\big(\cdot-\widetilde{x}_i(t)\big)\right\Vert_{H^1}=O\left(\sqrt{\alpha}\right). \end{align} Now, we split the proof of the remaining inequalities into four steps. \medskip \textbf{Step 1:} Now we intend to prove the first inequality in \eqref{mod_parameter_bound}. 
For the sake of simplicity let us start by defining some auxiliary variables: For each $i=1,...,n$ we define the functions $v,w_i,w_m^i$ as \[ v(t):=u(t)-\sum_{i=1}^n\varphi_{c_i}(\cdot-\widetilde{x}_i(t)),\quad w_i:=\varphi_{c_i}\big(\cdot-\widetilde{x}_i(t)\big) \ \hbox{ and } \ w_{m}^i(t):=(\rho_m*\varphi_{c_i})\big(\cdot-\widetilde{x}_i(t)\big). \] Then, by differentiating \eqref{ort_proof} and recalling that $\varphi-\varphi''=2\delta$ we obtain \begin{align*} \left\vert \int v_t(t,x)w_{m,x}^i(t,x)dx\right\vert&=\left\vert\dot{\widetilde{x}}_i(t)\langle w_{m,xx}^i,v\rangle_{H^{-1},H^1}\right\vert\leq \left\vert\dot{\widetilde{x}}_i(t)\right\vert O\big(\Vert v(t)\Vert_{H^1}\big) \\ & \leq \left\vert\dot{\widetilde{x}}_i(t)-c_i\right\vert O\big(\Vert v(t)\Vert_{H^1}\big)+O\big(\Vert v(t)\Vert_{H^1}\big). \end{align*} On the other hand, by using that $\varphi$ solves \eqref{nov_eq_2} we infer that each $w_i$ satisfies the following equation: \[ w_{i,t}+\left(\dot{\widetilde{x}}_i-c_i\right)w_{i,x}+w_i^2w_{i,x}=-p_x*\left(w_i^3+\dfrac{3}{2}w_iw_{i,x}^2\right)-\dfrac{1}{2}p*w_{i,x}^3. \] Thus, by using that $u(t)$ also solves \eqref{nov_eq_2}, replacing $u=v+w_1+...+w_n$ and then using the equation satisfied by each $w_i$ we get \begin{align}\label{mod_huge_eq_ap} v_{t}-\sum_{j=1}^n(\dot{\widetilde{x}}_j-c_j)w_{j,x}&=-\left(v+\sum_{j=1}^nw_j\right)^2v_{x}-v^2\sum_{j=1}^nw_{j,x}-2v\sum_{j,k=1}^nw_jw_{k,x}\nonumber \\ & \quad -\sum_{\substack{j,k,\ell=1 \\ (k,\ell)\neq(j,j)}}^nw_jw_kw_{\ell,x}-p_x*v^3-3\sum_{j=1}^np_x*(v^2w_j)\nonumber \\ & \quad -3\sum_{j,k=1}^np_x*vw_jw_k-\sum_{\substack{j,k,\ell=1 \\ (k,\ell)\neq(j,j)}}^np_x*w_jw_kw_\ell \nonumber \\ & \quad -\dfrac{3}{2}p_x*vv_{x}^2-3\sum_{j=1}^np_x*vv_{x}w_{j,x}-\dfrac{3}{2}\sum_{j,k=1}^np_x*vw_{j,x}w_{k,x}\nonumber \\ & \quad -\dfrac{3}{2}\sum_{j=1}^np_x*v_x^2w_j-3\sum_{j,k=1}^np_x*v_xw_jw_{k,x}\nonumber \\ & \quad -\dfrac{3}{2}\sum_{\substack{j,k,\ell=1 \\ 
(k,\ell)\neq(j,j)}}^np_x*w_jw_{k,x}w_{\ell,x}-\dfrac{1}{2}p*v_x^3-\dfrac{3}{2}\sum_{j=1}^np*v_x^2w_{j,x}\nonumber \\ & \quad -\dfrac{3}{2}\sum_{j,k=1}^n p*v_xw_{j,x}w_{k,x}-\dfrac{1}{2}\sum_{\substack{j,k,\ell=1 \\ (k,\ell)\neq(j,j)}}^n p*w_{j,x}w_{k,x}w_{\ell,x}. \end{align} On the other hand, notice that due to \eqref{small_mod_ap_proof}, inequality \eqref{distances_ap} and the exponential decay of both $w_i$ and $w_{m}^i$ we deduce that $\Vert w_j w_{m}^i\Vert_{L^1}+\Vert w_{j,x}w_{m}^i\Vert_{L^1}=O\left(\exp(-L/4)\right)$ for $j\neq i$ and \[ \Vert v^2w_{m,x}^i\Vert_{L^1}+\Vert v_{x}^2w_{m,x}^i\Vert_{L^1}+\Vert vw_{m}^i\Vert_{L^1}+\Vert v\rho_{m}(\cdot-\widetilde{x}_i(t))\Vert_{L^1}=O\left(\sqrt{\alpha}\right)+O\left(e^{-L/4}\right). \] Hence, by taking the $L^2$-inner product of equation \eqref{mod_huge_eq_ap} with $w_{m,x}^i$ and noticing that there exists a constant $\textbf{a}>0$ such that $\langle w_{i,x}(t),w_{m,x}^i(t)\rangle_{L^2,L^2}\equiv \textbf{a}c_i>0$ for all times $t\in[0,t_0]$, we obtain \[ \left\vert\dot{\widetilde{x}}_i-c_i\right\vert\Big(\textbf{a}c_i+O\left(\sqrt{\alpha}\right)\Big)=O\left(\sqrt{\alpha}\right)+O\left(e^{-L/4}\right). \] Therefore, by taking $0<\alpha_0\ll1$ small enough and $L_0\gg1$ sufficiently large we conclude the first inequality in \eqref{mod_parameter_bound}. \medskip \textbf{Step 2:} Now we intend to prove the second inequality in \eqref{mod_parameter_bound}. In fact, it is enough to notice that from what we proved in the last step, by using \eqref{initial_cond_hyp_train} and \eqref{small_neigh}, after integration in time we obtain \[ \widetilde{x}_i(t)-\widetilde{x}_{i-1}(t)\geq L-2K_0\alpha_0+\dfrac{c_i-c_{i-1}}{2}t\geq \dfrac{3}{4}L+\dfrac{c_i-c_{i-1}}{2}t, \] which finishes the proof of \eqref{mod_parameter_bound}. \medskip \textbf{Step 3:} This step is devoted to proving the last part of the statement. 
In fact, notice that by using \eqref{small_mod_ap_proof} together with Sobolev's embedding we infer that for any time $t\in[0,t_0]$ we have \[ u(t,x)=R_{\widetilde{x}(t)}(x)+O\left(\sqrt{\alpha}\right). \] Now, on the one hand, notice that by applying the previous formula at $x_i(t):=\arg\max_{J_i}u(t)$ and by using the second inequality in \eqref{mod_parameter_bound} we obtain \[ u(t,x_i(t))=\sqrt{c_i}+O\left(\sqrt{\alpha}\right)+O\left(e^{-L/4}\right)\geq \tfrac{2}{3}\sqrt{c_i}. \] On the other hand, notice that for any $x\in J_i\setminus[\widetilde{x}_i(t)-\tfrac{1}{12}L,\widetilde{x}_i(t)+\tfrac{1}{12}L]$ we have \[ u(t,x)\leq \sqrt{c_i}e^{-L/12}+O\left(\sqrt{\alpha}\right)+O\left(e^{-L/4}\right)\leq \tfrac{1}{2}\sqrt{c_i}. \] Therefore, the previous two inequalities ensure that $x_i(t)\in[\widetilde{x}_i(t)-\tfrac{1}{12}L,\widetilde{x}_i(t)+\tfrac{1}{12}L]$, which concludes the proof of the lemma. \medskip \textbf{Step 4:} Finally, it only remains to prove \eqref{orth_cond_def} for $n_0\in\mathbb{N}$ large enough. In fact, it is enough to notice that \[ \int \varphi'(x)\varphi'(x-y)dx=(1-\vert y\vert)e^{-\vert y\vert}. \] Thus, for $n_0\in\mathbb{N}$ large enough we have \[ \dfrac{d}{dy}\int \varphi(\rho_{n_0}*\varphi)'(\cdot-y)=\int \varphi'(\rho_{n_0}*\varphi')(\cdot-y)\geq \dfrac{1}{4}e^{-1/2} \quad \hbox{on} \quad \left[-\tfrac{1}{2},\tfrac{1}{2}\right]. \] Therefore, the mapping $y\mapsto \int_\mathbb{R}\varphi(\rho_{n_0}*\varphi)'(\cdot-y)$ is increasing on $[-\tfrac{1}{2},\tfrac{1}{2}]$, and hence there exists $n_0\in\mathbb{N}$ satisfying \eqref{orth_cond_def}. Then, we conclude the proof by choosing $m=n_0$. \qed \medskip \subsection{Proof of Lemma \ref{AM_orb_train}}\label{AM_orb_train_appendix} The following computations can be made rigorous by standard approximation and density arguments, considering, for instance, the convolution of $u_0$ with the family of mollifiers $\rho_n$ defined in \eqref{def_rho} and using the second statement in Theorem \ref{theorem_lwp}. 
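The explicit peakon-overlap identities used in Step 4 above can be sanity-checked numerically. For the profile $\varphi(x)=e^{-\vert x\vert}$, a direct computation gives $\int\varphi'(x)\varphi'(x-y)dx=(1-\vert y\vert)e^{-\vert y\vert}$, which is strictly positive for $\vert y\vert<1$; the following sketch verifies this by quadrature (the midpoint rule and the truncation interval are implementation choices, not part of the argument).

```python
import math

def dphi(x):
    # Weak derivative of the peakon profile phi(x) = e^{-|x|}:
    # phi'(x) = -sign(x) e^{-|x|} (almost everywhere).
    return -math.copysign(1.0, x) * math.exp(-abs(x))

def overlap(y, L=25.0, n=200000):
    # Midpoint-rule approximation of \int phi'(x) phi'(x - y) dx.
    h = 2.0 * L / n
    return sum(dphi(-L + (k + 0.5) * h) * dphi(-L + (k + 0.5) * h - y)
               for k in range(n)) * h

# Expected closed form: (1 - |y|) e^{-|y|}; its positivity on [-1/2, 1/2]
# is what makes the map in Step 4 increasing for n_0 large.
```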
We refer to \cite{EM1} for a complete justification of this argument. \medskip Now, our aim is to prove inequality \eqref{AM_energy_orbital} by integrating its time derivative. Hence, by taking the time derivative directly from the definition of $\mathcal{I}_{i,K}(t)$ we obtain \begin{align} \dfrac{d}{dt}\mathcal{I}_{i,K}(t)&=2\int \big(uu_t+u_xu_{xt}\big)\Psi_{i,K}-\dot y_i(t)\int \big(u^2(t)+u_x^2(t)\big)\Psi_{i,K}'\nonumber \\ & =:\mathrm{J}-\dot y_i(t)\int \big(u^2(t)+u_x^2(t)\big)\Psi_{i,K}'.\label{dt_I_thm} \end{align} By using both equations \eqref{novikov_eq} and \eqref{nov_eq_2} and by integrating by parts we get \begin{align*} \mathrm{J}&=2\int \big(u_t-u_{txx}\big)u\Psi_{i,K}-2\int uu_{tx}\Psi_{i,K}' \\ & =2\int \big(3uu_xu_{xx}+u^2u_{xxx}-4u^2u_x\big)u\Psi_{i,K} \\ & \quad +2\int (u^2u_{xx}+2uu_x^2+p_x*(3uu_xu_{xx}+2u_x^3+3u^2u_x))u\Psi_{i,K}' \\ & =4\int u^2u_x^2\Psi_{i,K}'+2\int u^4\Psi_{i,K}'+2\int \{p_x*(3uu_xu_{xx}+2u_x^3+3u^2u_x)\}u\Psi_{i,K}'. \end{align*} On the other hand, recalling that for any $L^2$ function $f:\mathbb{R}\to\mathbb{R}$ we have $p*f_x=p_x*f$, and by using that $p$ is the fundamental solution of $(1-\partial_x^2)$, we obtain \begin{align*} 2p_x*(3uu_xu_{xx}+2u_x^3+3u^2u_x)=-2u^3-3uu_x^2+3p*uu_x^2+2p*u^3+p_x*u_x^3. \end{align*} Thus, by plugging this into \eqref{dt_I_thm} we get \begin{align} \dfrac{d}{dt}\mathcal{I}_{i,K}(t)&=-\dot y_i(t)\int \big(u^2+u_x^2\big)\Psi_{i,K}'+\int u^2u_x^2\Psi_{i,K}'\nonumber \\ & \qquad +\int \{p*(3uu_x^2+2u^3)\}u\Psi_{i,K}'+\int \{p_x*u_x^3\}u\Psi_{i,K}'\nonumber \\ & =-\dot y_i(t)\int \big(u^2+u_x^2\big)\Psi_{i,K}'+\mathrm{J}_1+\mathrm{J}_2+\mathrm{J}_3.\label{dt_I_J_i} \end{align} In order to bound $\mathrm{J}_k$, for $k=1,2,3$, we split $\mathbb{R}$ into two complementary regions related to the size of $u(t)$. In fact, for $i=2,...,n$ let us consider the family of time-dependent intervals $D_i$ \[ D_i(t):=\big[\widetilde{x}_{i-1}(t)+\tfrac{L}{4},\,\widetilde x_i(t)-\tfrac{L}{4}\big]. 
\] Hence, with these definitions, by splitting the space into $D_i$ and $D_i^c$ we can rewrite $\mathrm{J}_1$ as \[ \mathrm{J}_1=\int_{D_i} u^2u_x^2\Psi_{i,K}'+\int_{D_i^c} u^2u_x^2\Psi_{i,K}'=:\mathrm{J}_1^1+\mathrm{J}_1^2. \] Now notice that, on the one hand, by using \eqref{mod_bound} we deduce that for all $t\in[0,t^*]$ we have \begin{align*} \Vert u(t)\Vert_{L^\infty(D_i)}&\leq \sum_{i=1}^n\left\Vert \varphi_{c_i}\big(\cdot-\widetilde x_i(t)\big)\right\Vert_{L^\infty(D_i)}+\left\Vert u(t)-\sum_{i=1}^n\varphi_{c_i}\big(\cdot-\widetilde x_i(t)\big)\right\Vert_{L^\infty(D_i)} \\ &\leq Ce^{-L/8}+O\big(\sqrt\alpha\big). \end{align*} Thus, for $\alpha>0$ small enough $\mathrm{J}_1^1$ can be absorbed into the first integral term in \eqref{dt_I_J_i}. Now, on the other hand, by using the definition of the family $y_i$ in \eqref{def_y_i_intervals} and by using inequality \eqref{mod_parameter_bound}, we deduce \[ \hbox{for any }\,x\in D_i^c \,\hbox{ we have } \ \vert x-y_i(t)\vert\geq \tfrac{1}{2}\big(\widetilde x_i(t)-\widetilde x_{i-1}(t)\big)-\tfrac{L}{4}\geq \tfrac{1}{2}(c_i-c_{i-1})t+\tfrac{L}{8}. \] Therefore, by using the exponential decay of $\Psi_{i,K}'$, we deduce that for all $t\in[0,t^*]$ we have \begin{align}\label{J_one_two_ap} \mathrm{J}_1^2=\int_{D_i^c}u^2u_x^2\Psi'_{i,K}\leq \dfrac{C}{K}\Vert u_0\Vert_{H^1}^4e^{-\frac{1}{K}(\sigma_0 t+L/8)}. \end{align} Now, in order to deal with $\mathrm{J}_2$ we proceed in a similar fashion. First, we split $\mathrm{J}_2$ into two different integrals by using the definition of $D_i$. Concretely, we write \[ \mathrm{J}_2=\int_{D_i}\{p*(3uu_x^2+2u^3)\}u\Psi_{i,K}'+\int_{D_i^c}\{p*(3uu_x^2+2u^3)\}u\Psi_{i,K}'=:\mathrm{J}_2^1+\mathrm{J}_2^2. \] Now notice that in order to follow the previous procedure we need to deal with the self-adjoint operator $(p*\cdot)$. 
However, it is enough to notice that for $K>4$, by using the definition of $\Psi_{i,K}$ in \eqref{def_Psi_i_K} we immediately obtain \[ (1-\partial_x^2)\Psi_{i,K}'\geq \left(1-\dfrac{10}{K^2}\right)\Psi_{i,K}', \ \,\hbox{ and hence } \ \, (1-\partial_x^2)^{-1}\Psi_{i,K}'\leq \left(1-\dfrac{10}{K^2}\right)^{-1}\Psi_{i,K}'. \] Thus, by using the previous estimate and proceeding in a similar fashion as before we obtain \[ \mathrm{J}_2^1\lesssim \Vert u(t)\Vert_{L^\infty(D_i)}^2\int_{D_i}\big(u^2+u_x^2\big)(1-\partial_x^2)^{-1}\Psi_{i,K}'\lesssim \Vert u(t)\Vert_{L^\infty(D_i)}^2\int\big(u^2+u_x^2\big)\Psi_{i,K}'. \] Hence, for $\alpha>0$ small enough this term can be absorbed into the first integral term in \eqref{dt_I_J_i}. Finally, by using the exponential decay of $\Psi_{i,K}'$ and the definition of $D_i$ we get \[ \mathrm{J}_2^2=\int_{D_i^c}\{p*(3uu_x^2+2u^3)\}u\Psi_{i,K}'\leq \dfrac{C}{K}\Vert u_0\Vert_{H^1}^4e^{-\frac{1}{K}(\sigma_0t+L/8)}. \] The remaining term can be bounded in exactly the same fashion. Therefore, gathering all the previous estimates we get \begin{align*} \dfrac{d}{dt}\mathcal{I}_{i,K}(t)\leq -\dfrac{c_1}{4}\int \big(u^2+u_x^2\big)\Psi_{i,K}'+\dfrac{C}{K}\Vert u_0\Vert_{H^1}^4e^{-\frac{1}{K}(\sigma_0t+L/8)}. \end{align*} Integrating the previous inequality between $0$ and $t$ we conclude. The proof is complete. \qed \medskip \subsection{Proof of Lemma \ref{tech_lem_mon_exp}}\label{sec_ap_tech_lem} For the sake of simplicity, we split the proof into three steps. The first of them is devoted to proving inequality \eqref{AM_right_n_energy}, while the other two aim to prove \eqref{AM_right_i_energy}. \medskip \textbf{Step 1:} First of all notice that by considering $R_0>0$ sufficiently large so that \begin{align}\label{R_condition} nc_ne^{-R_0}<\dfrac{\sigma}{2^{18}}, \end{align} and combining this inequality together with \eqref{orb_concl} and the definitions in \eqref{parameters}, we immediately deduce that \eqref{condition_tail} is satisfied. 
Now, we set $t_R^n$ to be \[ t_R^n:=\max\Big\{\{0\}\cup\big\{t\geq 0: \ \widetilde{x}_n(t)-\widetilde{x}_{n-1}(t)=2R\big\}\Big\}. \] On the other hand, recall that from the proof of Lemma \ref{AM_orb_train} (c.f. \eqref{dt_I_J_i}) we have \begin{align}\label{derivative_ap} \dfrac{d}{dt}\mathrm{I}_{n,t_0^n}^R=-\dot{z}_n^R(t)\int \big(u^2+u_x^2\big)\Psi'(\cdot-z_n^R(t))dx+\mathrm{J}_1+\mathrm{J}_2+\mathrm{J}_3. \end{align} Thus, by splitting the space into two regions: \[ \mathbb{R}=\big(-\infty,\widetilde{x}_n(t)+R_0\big]\cup\big[\widetilde{x}_n(t)+R_0,+\infty)=:D_1\cup D_2, \] we deduce that for any $x\leq \widetilde{x}_n(t)+R_0$ and any $t\leq t_0^n$ we have \[ x-z_n^R(t)\leq R_0-R-\delta_n(t_0^n-t), \ \, \hbox{ and hence } \ \ \mathrm{J}_1^1\leq \Vert u_0\Vert_{H^1}^4e^{\frac{1}{6}R_0-\frac{1}{6}R-\frac{1}{6}\delta_n(t_0^n-t)}, \] where $\mathrm{J}_1^1$ is the portion of $\mathrm{J}_1$ associated with $D_1$. On the other hand, by using \eqref{condition_tail} and proceeding in the same fashion as in the proof of Lemma \ref{AM_orb_train} we deduce that $\mathrm{J}_1^2$ can be absorbed into the first integral term in \eqref{derivative_ap}. The remaining terms can be treated in exactly the same fashion. Therefore, by integration in time we conclude the first inequality in \eqref{AM_right_n_energy}. \medskip Now we intend to prove the second inequality in \eqref{AM_right_n_energy}. In fact, by using the first inequality in \eqref{mod_parameter_bound} we deduce that for all $R\geq R_0$ we have \[ \vert c_n-\dot{\widetilde{x}}_n(t)\vert+\vert c_{n-1}-\dot{\widetilde{x}}_{n-1}(t)\vert\leq \dfrac{1}{12}(c_n-c_{n-1}), \ \ \hbox{ for all } t\geq 0. \] Moreover, defining the time-dependent interval $\Pi_n(t):=(\tfrac{5}{6}\widetilde{x}_{n-1}(t)+\tfrac{1}{6}\widetilde{x}_n(t),\widetilde{x}_n(t)-R_0)$ we deduce from this choice of parameters that \[ \Vert u(t)\Vert_{L^\infty(\Pi_n(t))}\leq \dfrac{(1-\delta_n)c_n}{\mathbf{b}}, \ \ \hbox{ for all } \ t\geq t_R^n. 
\] Thus, gathering the above information we deduce that for $x\leq \tfrac{5}{6}\widetilde{x}_{n-1}(t)+\tfrac{1}{6}\widetilde{x}_n(t)$ and all $t_0^n\geq t_R^n$ we have \begin{align*} x-z_n^{-R}(t)&=x-\widetilde{x}_n(t)+R+(\widetilde{x}_n(t)-z_n(t))-\big(\widetilde{x}_n(t_0^n)-z_n(t_0^n)\big) \\ & \leq -\tfrac{5}{6}\big(\widetilde{x}_n(t)-\widetilde{x}_{n-1}(t)\big)+R+\delta_nc_n(t-t_0^n) \\ & \leq -\tfrac{5}{3}R-\tfrac{11}{12}(c_n-c_{n-1})(t-t_0^n)+R+\tfrac{5}{8}(c_n-c_{n-1})(t-t_0^n) \\ & \leq -\dfrac{2}{3}R-\dfrac{1}{4}(c_n-c_{n-1})(t-t_0^n). \end{align*} Hence, proceeding in the same fashion as before, splitting the space into two regions \[ \mathbb{R}=\big(-\infty,\tfrac{5}{6}\widetilde{x}_{n-1}(t)+\tfrac{1}{6}\widetilde{x}_n(t)\big]\cup\big[\tfrac{5}{6}\widetilde{x}_{n-1}(t)+\tfrac{1}{6}\widetilde{x}_n(t),+\infty\big)=:D_1\cup D_2, \] we deduce that for any $x\leq \tfrac{5}{6}\widetilde{x}_{n-1}(t)+\tfrac{1}{6}\widetilde{x}_n(t)$ and all $t\geq t_0^n$ we have \[ \Psi\big(x-z_n^{-R}(t)\big)\lesssim \exp\left(-\tfrac{R}{9}-\tfrac{1}{48}(c_n-c_{n-1})(t-t_0^n)\right). \] Therefore, proceeding exactly in the same fashion as before, for $k=1,2,3$ we can bound \[ \mathrm{J}_k^1\lesssim \Vert u_0\Vert_{H^1}^4 e^{-\frac{R}{9}-\frac{1}{48}(c_n-c_{n-1})(t-t_0^n)}\quad \hbox{and} \quad \mathrm{J}_k^2\leq \dfrac{c_n}{2^6}\int \big(u^2+u_x^2\big)\Psi'(\cdot-z_n^R)dx. \] Integrating in time we obtain the desired result, which concludes the proof of \eqref{AM_right_n_energy}. \medskip \textbf{Step 2:} Our aim now is to prove \eqref{AM_right_i_energy} in the case $i=2,...,n$. 
In fact, it is enough to notice that, by defining $t_R^i$ to be \begin{align}\label{t_i_R_def} t_R^i:=\max\Big\{\{0\}\cup \big\{t\geq 0: \ \widetilde{x}_i(t)-\widetilde{x}_{i-1}(t)=2R\big\}\Big\}, \end{align} we deduce that for all $t\geq t_R^i$ \[ \Vert u(t)\Vert_{L^\infty(\Pi_i(t))}\leq \dfrac{(1-\delta_i)c_i}{2^6}, \quad \hbox{where} \quad \Pi_i(t):=\left(\tfrac{5}{6}\widetilde{x}_{i-1}(t)+\tfrac{1}{6}\widetilde{x}_i(t),\widetilde{x}_i(t)-R_0\right), \] which, in light of Step 1, is enough to prove the desired result. \medskip \textbf{Step 3:} Finally, in the case $i=1$ it is enough to notice that for all $t\in\mathbb{R}$ we have \[ \Vert u(t)\Vert_{L^\infty((-\infty,\widetilde{x}_1(t)-R_0))}\leq \dfrac{(1-\delta_1)c_1}{2^6}. \] Again, in light of Step 1 we conclude the desired result by following the same procedure. The proof is complete. \qed \medskip \subsection{Proof of Lemma \ref{tech_lem_left}}\label{ap_tech_left} Let $R$ be any positive real number and consider any $t_0\in\mathbb{R}$ satisfying $t_0>t_R^{i+1}$. Now we define $\widetilde{t}_0$ by \[ \widetilde{t}_0:=\max\Big\{\left\{t^{i+1}_R\right\}\cup\left\{t\in[t^{i+1}_R,t_0]: \ \widetilde{x}_i(t_0)+R+\tfrac{3}{4}\big(\widetilde{x}_i(t)-\widetilde{x}_i(t_0)\big)=y_{i+1}(t)\right\}\Big\}. \] Hence, by the definition of $\{y_i\}_{i=1}^n$ in \eqref{def_y_i_intervals} and the definition of $\widetilde{t}_0$ above, we immediately obtain that on $\left[\widetilde{t}_0,t_0\right]$ it holds \[ \widetilde{x}_i(t_0)+R+\tfrac{3}{4}\big(\widetilde{x}_i(t)-\widetilde{x}_i(t_0)\big)\leq y_{i+1}(t). \] Now we set $z_{i}^R(t):=\widetilde{x}_i(t_0)+R+\tfrac{3}{4}\big(\widetilde{x}_i(t)-\widetilde{x}_i(t_0)\big)$. 
Thus, by the above inequality and by using \eqref{mod_parameter_bound} we obtain that for any $t\geq \widetilde{t}_0$ and any $x\geq \tfrac{3}{8}\widetilde{x}_i(t)+\tfrac{5}{8}\widetilde{x}_{i+1}(t)$ it holds \[ x-z_{i}^R(t)\geq x-y_{i+1}(t)\geq \tfrac{1}{8}\widetilde{x}_{i+1}(t)-\tfrac{1}{8}\widetilde{x}_i(t)\geq \tfrac{R}{4}+\tfrac{1}{2^4}(c_{i+1}-c_{i})\big(t-\widetilde{t}_0\big). \] As before, notice that the latter inequality leads us to \[ \Psi\left(x-z_{i}^R(t)\right)\leq \exp\left(-\frac{R}{24}-\frac{(c_{i+1}-c_i)\big(t-\widetilde{t}_0\big)}{2^7}\right), \] which gives us the main information needed to obtain \eqref{J_one_two_ap}. Now, for the sake of simplicity let us define $\Pi_i(t):=\big(\widetilde{x}_i(t)+R,\frac{3}{8}\widetilde{x}_i(t)+\frac{5}{8}\widetilde{x}_{i+1}(t)\big)$. Hence, by the definition of both $\varepsilon_0$ and $\sigma$ in \eqref{parameters} and by using \eqref{orb_concl} we obtain \[ \Vert u(t)\Vert_{L^\infty\left(\Pi_i(t)\right)}<\frac{c_i}{2^7} \quad \hbox{for all }\,t\geq \widetilde{t}_0. \] Thus, by defining the modified energy functional \begin{align}\label{def_modified_ap} \mathcal{I}_{i}^R(t):=\int\big(u^2+u_x^2\big)(t,x)\Psi\big(\cdot-z_{i}^R(t)\big)dx, \end{align} and proceeding as in Lemmas \ref{AM_orb_train} and \ref{tech_lem_mon_exp}, we deduce that for all $t\in[\widetilde{t}_0,t_0]$ \[ \mathcal{I}_{i}^R(t_0)-\mathcal{I}_{i}^R(t)\leq Ce^{-R/24}. \] Therefore, by the same arguments as those in the middle of Section \ref{sec_six_one} we conclude \[ \mathcal{J}_{i,r}^R(t_0)\leq \mathcal{J}_{i,r}^R(t)+Ce^{-R/24},\quad \hbox{for all } \ \widetilde{t}_0\leq t\leq t_0. \] Hence, it only remains to prove that the latter inequality holds for $t_R^{i+1}\leq t\leq t_0$. First of all, notice that if $\widetilde{t}_0=t_R^{i+1}$ we are done. Otherwise, by the definition of $\widetilde{t}_0$ and $z_i^R$ we must have $z_i^R\big(\widetilde{t}_0\big)=y_{i+1}\big(\widetilde{t}_0\big)$. 
Then, if this is the case, it is enough to notice that \[ \widetilde{x}_{i+1}(t_R^{i+1})-y_{i+1}(t_R^{i+1})=\tfrac{1}{2}\widetilde{x}_{i+1}(t_R^{i+1})-\tfrac{1}{2}\widetilde{x}_{i}(t_R^{i+1})\geq R. \] Thus, by using $\widetilde{x}_{i+1}(t_R^{i+1})-y_{i+1}(t_R^{i+1})$ in place of $R$ in \eqref{def_modified_ap} and by redefining $z_i$ to be $z_i(t)=y_{i+1}(t)$, we obtain that for all $t\in[t_R^{i+1},\widetilde{t}_0]$ it holds: \[ \int \big(u^2+u_x^2\big)\Psi\big(\cdot-y_{i+1}(\widetilde{t}_0)\big)\leq \int \big(u^2+u_x^2\big)\Psi\big(\cdot-y_{i+1}(t)\big)+Ce^{-R/10}. \] Finally, since $t\geq t_R^{i+1}$, we have \[ \int \big(u^2+u_x^2\big)\Psi\big(\cdot-y_{i+1}(t)\big)\leq \mathcal{J}_{i,r}^R(t), \] from which we obtain the desired result for $t\in [t_R^{i+1},t_0]$. The proof is complete. \qed \medskip \textbf{Acknowledgements:} The author is very grateful to Professor Luc Molinet for his encouragement in solving this problem and for many remarkably useful conversations. \medskip
\section{Introduction} Edge-preserving smoothing (EPS) has attracted strong interest in the fields of image processing and computer vision. Predominantly, it appears in manipulation tasks that require decomposing an image into a piecewise smooth layer and a detail layer. These layered signals can be recombined to match various application goals, e.g., detail enhancement and tone mapping \cite{Farbman2008}. Recent works on joint smoothing provide a new paradigm, enabling various applications such as dense correspondence \cite{Rhemann2011}, joint upsampling \cite{Kopf2007,Park2011}, and texture removal \cite{Xu2012}. The problems of segmentation \cite{Kim2008} and visual saliency \cite{Perazzi2012} may also be interpreted as joint smoothing problems. The basic idea of joint smoothing is to provide structural guidance of how the smoothing should be performed. Thus, it is assumed that the guidance signal has enough information to alter structures in the input image. In this context, global optimization-based methods \cite{Farbman2008,Bi2015,Ham2015} are advocated. They find an optimal solution to an objective function that consists of a data fidelity term and a prior term. Thanks to such a global formulation, the optimization-based methods show state-of-the-art performance compared with local EPS approaches, which typically have the form of weighted averaging \cite{Farbman2008}. However, this outperformance is achieved only at the price of high computational cost, mainly arising from solving the global objective function. The optimization-based methods are still an order of magnitude slower than local ones \cite{He2013,Gastal2011,Chen2007}, even with recent acceleration techniques \cite{Krishnan2013,Wang2008,Afonso2010}. Progress in hardware will make efficient implementations possible, but image resolutions are likely to increase as well. Accordingly, a different optimization technique is needed for highly efficient global EPS methods. 
In this paper, we formulate a global EPS (e.g., based on weighted least squares or weighted total variation) as an equivalent constrained optimization problem. This formulation results in a sequence of sub-problems that are much easier to optimize. The computational efficiency of our approach is due to a variable splitting technique. However, unlike previous splitting-based methods\footnote{The most expensive operation in previous methods, in the sense of alternating minimization \cite{Wang2008,Afonso2010} or half-quadratic optimization \cite{Xu2011}, is the fast Fourier transform, with $O(n\log n)$ complexity.} \cite{Wang2008,Afonso2010}, our formulation enables the use of a highly efficient, linear time algorithm in both the weighted least squares (WLS) and weighted total variation (WTV) problems. As a result, our approach has a time complexity linear in the number of image pixels. Another appealing aspect is that it converges in only a few iterations. We also propose fast iteratively re-weighted algorithms for objective functions using a non-convex prior term. Note that the previous splitting-based methods in \cite{Wang2008,Afonso2010} are not directly applicable to many non-convex priors of practical relevance \cite{Schmidt2014}. \section{Problem Formulation and Analysis} EPS can be formulated as finding the minimizer of an objective function. We can find the solution by using popular iterative methods \cite{Saad2003} or splitting-based approaches \cite{Wang2008,Afonso2010}, depending on the prior terms used in the objective function. We start with a basic formulation for EPS to provide some intuition. In the following, the subscript $p$ denotes the 2D spatial location of a pixel.
Given an input image $f$ and a guidance image $g$, a desired output $u$ is obtained by minimizing the following objective function: \begin{equation} E\left( u \right) = \sum\limits_p {\left( {(u - f)_p^2 + \lambda \sum\limits_{j \in \{ 1,2\} } {{w_{j,p}}\phi {{({D_j}u)}_p}} } \right)}, \end{equation} where ${D_1}$ and ${D_2}$ are discrete implementations of the derivative in the horizontal and vertical directions, respectively, and $\lambda$ controls the strength of the smoothing. $g$ can be the input image $f$ itself or a different signal correlated with $f$. The weight $w_{j,p}=\exp ( - ({D_j}g)_p^2/\kappa )$ is defined using $g$ and a constant $\kappa$. The potential function $\phi$ and the weight function $w$ allow one to employ various image priors that behave differently in preserving or smoothing image features. We always assume that $f$ and $g$ have the same width ($W$) and height ($H$). \subsection{WLS for EPS} When $\phi = {\tau^2}$, the objective function of (1) becomes quadratic, and it corresponds to the WLS framework \cite{Farbman2008}. The minimizer $u$ satisfies the following linear system: \begin{equation} \bigg( {{\bf{I}} + \lambda \sum\limits_{j \in \{ 1,2\} } {{\bf{D}}_j^T{{\bf{W}}_j}{\bf{D}}_j^{}} }\bigg){\bf{u}} = {\bf{f}}, \end{equation} where ${{\bf{D}}_j}$ is a discrete difference matrix, ${{\bf{W}}_j}$ is a diagonal matrix containing the weights ${w_j}$, and ${\bf{I}}$ is an identity matrix. Iterative solvers such as the Jacobi and conjugate gradient methods \cite{Saad2003} can be applied to the sparse linear system (2). Since these methods consist of matrix-vector multiplications, each iteration runs in time linear in the image size. However, the number of iterations required to achieve a particular accuracy depends on the matrix dimension ($HW\times HW$) \cite{Krishnan2013}, and hence the computational cost is considerable.
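For reference, the system (2) can be assembled and solved directly with an off-the-shelf sparse solver; the numpy/scipy sketch below (function and variable names are ours, purely illustrative) serves as a correctness baseline rather than an efficient method:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def forward_diff(n):
    # 1D forward-difference matrix with a zero last row (Neumann boundary)
    main = np.r_[-np.ones(n - 1), 0.0]
    return sp.diags([main, np.ones(n - 1)], [0, 1], format='csr')

def wls_smooth_direct(f, g, lam, kappa):
    # Direct solve of (2): (I + lam * sum_j Dj^T Wj Dj) u = f
    H, W = f.shape
    D1 = sp.kron(sp.eye(H), forward_diff(W), format='csr')  # horizontal
    D2 = sp.kron(forward_diff(H), sp.eye(W), format='csr')  # vertical
    A = sp.eye(H * W, format='csr')
    for D in (D1, D2):
        w = np.exp(-np.asarray(D @ g.ravel()) ** 2 / kappa)  # weights w_{j,p}
        A = A + lam * (D.T @ sp.diags(w) @ D)
    return spla.spsolve(A.tocsc(), f.ravel()).reshape(H, W)
```

The direct solve scales poorly with image size, which is exactly what motivates the decomposition developed in Section 3.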
Despite recent progress in preconditioning techniques, all existing solvers are still an order of magnitude slower than the state-of-the-art local EPS approaches \cite{He2013,Gastal2011}. Moreover, the cost of constructing the preconditioner may outweigh the speed gain from the improved conditioning. Similarly, the preconditioning techniques \cite{Krishnan2013} may not accelerate EPS algorithms using the iteratively re-weighted least squares (IRLS) method \cite{Chartrand2008}. The intermediate linear system of the IRLS method varies across \emph{external} iterations, and thus a series of preconditioners must be constructed, leading to substantial computational overhead. \subsection{WTV for EPS} The WTV prior, $\phi = |\tau|$, often preserves object boundaries better \cite{Xu2012,Bi2015}, but it incurs a higher computational cost than the WLS prior $\phi = {\tau^2}$. A common approach to minimizing the WTV objective function is to exploit variable-splitting and penalty techniques, as follows \cite{Wang2008}: \begin{equation} E\left( {u,v,\beta } \right) = \sum\limits_p {\left( {(u - f)_p^2 + \sum\limits_{j \in \{ 1,2\} } {\left( {\frac{\beta }{2}({v_j} - {D_j}u)_p^2 + \lambda {w_{j,p}}{{\left| {{v_j}} \right|}_p}} \right)} } \right)} , \end{equation} where $\beta$ is a penalty parameter. Auxiliary variables $v_1$ and $v_2$ are introduced for an alternating minimization of the data and prior terms. When $v=(v_1,v_2)$ is fixed, minimizing the objective function of (3) with respect to ${u}$ can be done in closed form, and vice versa. The solver is hence iteratively applied while updating $u$, $v$, and $\beta$: \begin{equation} \begin{array}{l} {u^{t + 1}} = \mathop {\arg \min }\limits_u E(u,{v^t},{\beta ^t}),\\ {v^{t + 1}} = \mathop {\arg \min }\limits_v E({u^{t + 1}},v,{\beta ^t}),\\ {\beta ^{t + 1}} = \alpha {\beta ^t}, \end{array} \end{equation} where $\alpha > 1$ is a continuation parameter.
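The $v$-update in (4) decouples per pixel and has the classical closed-form soft-thresholding (shrinkage) solution; a minimal numpy sketch (names are illustrative):

```python
import numpy as np

def shrink(x, theta):
    # elementwise soft-thresholding:
    # argmin_v 0.5*(v - x)^2 + theta*|v|  =  sign(x)*max(|x| - theta, 0)
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def update_v(Dju, w, lam, beta):
    # v_j-update of (4): per pixel, argmin_v (beta/2)*(v - Dj u)^2 + lam*w*|v|
    return shrink(Dju, lam * w / beta)
```

Since this step is a closed-form elementwise operation, the cost of scheme (4) is dominated by the $u$-subproblem discussed next.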
Since ${v^{t+1}}$ can be obtained by \emph{soft-thresholding} \cite{Wang2008,Afonso2010,Bi2015}, the computational cost primarily lies in the $u$-subproblem: \vspace{5pt} \begin{equation} \bigg( {{\bf{I}} + \frac{\beta }{2}\sum\limits_{j \in \{ 1,2\} } {{\bf{D}}_j^T{\bf{D}}_j^{}} } \bigg){{\bf{u}}^{t + 1}} = \frac{\beta }{2}\sum\limits_{j \in \{ 1,2\} } {{\bf{D}}_j^T{\bf{v}}_j^t} + {\bf{f}}. \end{equation} Although numerous methods, such as the alternating direction method of multipliers (ADMM) \cite{Afonso2010} and split-Bregman (SB) \cite{Bi2015}, have been proposed, they use the fast Fourier transform\footnote{The linear system (5) can be diagonalized by the FFT. Thus, solving (5) requires three FFT calls.} (FFT) to update the variable $u$. As a result, the computational cost required for solving (3) is $O(n\log n)$ per iteration ($n=HW$). \section{Method} In this section, we first propose an efficient method of minimizing (1) when $\phi = \tau^2$ or $|\tau|$. We then apply it to solve non-convex objective functions. The key idea is to decompose (1) along each spatial dimension with a proper variable splitting, and then to use a constrained optimization technique. Consider the following optimization problem with a linear equality constraint: \begin{equation} \mathop {\min }\limits_{\scriptstyle\;\;\;{\kern 1pt} u,v\hfill\atop \scriptstyle\,{\rm{s}}{\rm{.t}}{\rm{. u = v}}\hfill} \sum\limits_p {\left( {\sum\limits_{o \in \{ u,v\} } {\frac{1}{2}(o - f)_p^2 + } \lambda \left( {{w_{1,p}}\phi {{({D_1}u)}_p} + {w_{2,p}}\phi {{({D_2}v)}_p}} \right)} \right),} \end{equation} where $v$ denotes an auxiliary variable. In our formulation, the size of $v$ is equal to that of the input $f$. It is then clear that (6) is equivalent to the original problem (1) under the constraint $u=v$.
The penalty decomposition method\footnote{We have tested the augmented Lagrangian method \cite{Nocedal2006} to avoid using large $\beta$ values (continuation), but found that it did not improve the convergence rate, while increasing the memory demand (due to the Lagrange multipliers).} \cite{Nocedal2006} associated with (6) can be written as: \begin{equation} \mathop {\min }\limits_{u,v} \sum\limits_p {\left( {\sum\limits_{o \in \{ u,v\} } {\frac{1}{2}(o - f)_p^2 + \frac{\beta }{2}(u - v)_p^2 + } \lambda \left( {{w_{1,p}}\phi {{({D_1}u)}_p} + {w_{2,p}}\phi {{({D_2}v)}_p}} \right)} \right).} \end{equation} This problem can be solved with block coordinate descent by minimizing (7) with respect to $u$ and $v$ alternately\footnote{For scalar variables, $\mathop {\arg \min }\limits_e a{(e - c)^2} + b(e - d)^2 = \mathop {\arg \min }\limits_e (a + b){(e - \frac{{ac + bd}}{{a + b}})^2}$.}: \begin{equation} {u^{t + 1}} = \mathop {\arg \min }\limits_u \sum\limits_p {\left( {(u - \tilde f)_p^2 + \frac{2\lambda }{{1 + {\beta ^t}}}{w_{1,p}}\phi {{({D_1}u)}_p}} \right),} \end{equation} \begin{equation} {v^{t + 1}} = \mathop {\arg \min }\limits_v \sum\limits_p {\left( {(v - \bar f)_p^2 + \frac{2\lambda }{{1 + {\beta ^t}}}{w_{2,p}}\phi {{({D_2}v)}_p}} \right),} \end{equation} where $\tilde f = {(1 + {\beta ^t})^{ - 1}}(f + {\beta ^t}{v^t})$, $\bar f = {(1 + {\beta ^t})^{ - 1}}(f + {\beta ^t}{u^{t +1}})$, and ${\beta ^{t + 1}} = \alpha {\beta ^t}$. Now we are ready to show the advantage of our formulation. As $D_1$ represents a difference operator along the horizontal axis, we can decompose (8) into sub-problems defined over 1D horizontal signals only.
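The passage from (7) to (8) uses the scalar identity in the footnote: the data term and the coupling term merge into a single quadratic centered at $\tilde f$. This can be checked numerically (a purely illustrative sanity check, not part of the algorithm):

```python
import numpy as np

# Check: 0.5*(u - f)^2 + 0.5*beta*(u - v)^2
#      = 0.5*(1 + beta)*(u - ftilde)^2 + const,
# with ftilde = (f + beta*v)/(1 + beta), as used in (8).
f, v, beta = 1.3, -0.7, 4.0
ftilde = (f + beta * v) / (1.0 + beta)
u = np.linspace(-3.0, 3.0, 101)
lhs = 0.5 * (u - f) ** 2 + 0.5 * beta * (u - v) ** 2
rhs = 0.5 * (1.0 + beta) * (u - ftilde) ** 2
diff = lhs - rhs  # constant in u: both sides share the same minimizer ftilde
```

Dividing the merged quadratic by $(1+\beta^t)/2$ normalizes the data term to $(u-\tilde f)^2$ and rescales $\lambda$ by $2/(1+\beta^t)$, which is exactly the form of (8).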
By introducing a 1D slack variable $z$, we have: \begin{equation} {u^{t + 1,h}} = \mathop {\arg \min }\limits_z \sum\limits_x {\left( {(z - {{\tilde f}^h})_x^2 + \frac{{2\lambda }}{{1 + {\beta ^t}}}w_{1,x}^h\phi {{({D_1}z)}_x}} \right),} \end{equation} where the superscript $h$ denotes a horizontal signal along the $x$ dimension ($x = 1, \ldots ,{W}$). This 1D optimization process is repeated for all horizontal signals ($H$ in number). Note that a similar result can be obtained for the auxiliary variable $v$. In this case, (9) is decomposed into sub-problems with 1D vertical signals along the $y$ dimension ($y = 1, \ldots ,{H}$). \QEDB To the best of our knowledge, the variable splitting technique has previously been applied in such a way that $[{D_1}u, {D_2}u] = [v_1,v_2]$, yielding an $O(n\log n)$ algorithm ($n=HW$), as explained in Section 2. The previous variable splitting \cite{Afonso2010,Bi2015,Xu2011} aims to move $D_1u$ and $D_2u$ out of the potential function $\phi$ by introducing the auxiliary variables $v_1$ and $v_2$, respectively. In contrast, we utilize it to decompose the original problem (1) into a series of 1D sub-problems. This not only leads to easily solvable sub-problems, but also significantly improves the convergence rate of the algorithm (see Section 4). In the following, we present an efficient method of solving the objective (10) defined with a 1D horizontal signal. Its vertical counterpart can be optimized in the same manner. \subsection{Fast 1D solver} \subsubsection{WLS} With $\phi=\tau^2$, (10) can be rewritten with a 1D horizontal signal ${\tilde f^h}$ and a guide signal $g^h$ as \begin{equation} \mathop {\arg\min }\limits_z \sum\limits_x {\left( {(z - {{\tilde f}^h})_x^2 + \tilde w_{1,x}^h({D_1}z)_x^2} \right)} , \end{equation} where $\tilde w_{1,x}^h = {(1 + {\beta ^t})^{ - 1}}(2\lambda w_{1,x}^h)$.
The 1D output $z$ that minimizes the above equation is obtained by solving the following linear system of size $W \times W$: \begin{equation} \left( {{\bf{I}} + {\bf{D}}_1^T{\bf{\tilde W}}_1^h{\bf{D}}_1^{}} \right){{\bf{z}}^h} = {{\bf{\tilde f}}^h}. \end{equation} Note that the size of ${{\bf{D}}_1^{}}$ and ${\bf{I}}$ is $W \times W$, not $HW \times HW$ as in (2). Interestingly, the problem (12) becomes much easier to solve than (2), since the system matrix ${{\bf{I}} + {\bf{D}}_1^T{\bf{\tilde W}}_1^h{\bf{D}}_1^{}}$ is tridiagonal. We can solve this equation with $O(n)$ cost ($n=W$) by the Thomas algorithm \cite{Golub1996}, which consists of a forward elimination and a backward substitution step. More details can be found in the supplementary material. \subsubsection{WTV} When $\phi = |\tau|$, (10) is written as follows: \begin{equation} \mathop {\arg\min }\limits_z \sum\limits_x {\left( {(z - {{\tilde f}^h})_x^2 + \tilde w_{1,x}^h{{\left| {{D_1}z} \right|}_x}} \right)} . \end{equation} The IRLS algorithm can be applied to solve (13) approximately \cite{Chartrand2008}. However, there exists a non-iterative, $O(n)$ method for the 1D total variation \cite{Condat2013}. While this method was designed to solve the 1D (unweighted) total variation, it can be extended to minimize the 1D WTV. Note that this extension enables a fast iteratively re-weighted L1 (IRL1) algorithm for 2D image smoothing. See Section 3.3 for details. We introduce the (Fenchel-Moreau) dual form of (13) as follows: \begin{equation} \mathop {\min }\limits_s \sum\limits_x {({{\tilde f}^h} - D_1^Ts)_x^2} ,\;\; {\rm{s.t.}}\;\;{\left| s \right|_x} \le {\tilde w}_{1,x}^h, \;\;{s_1} = {s_W} = 0,\; \end{equation} where $s$ is the dual variable. Once the solution ${s^*}$ of the problem (14) is found, we can recover the solution ${z^*}$ of its primal form by \begin{equation} z_x^* = \tilde f_x^h - s_x^* + s_{x - 1}^*,\;\;{\rm{ for }}\;\;1 \le x \le W.
\end{equation} The optimality condition characterizing the solutions $z^*$ and ${s^*}$ is then expressed as \begin{equation} \left\{ {\begin{array}{ll} {s_x^* = \tilde w_{1,x}^h} & {{\rm{if}}\;z_{x + 1}^* > z_x^*}\\ {s_x^* = - \tilde w_{1,x}^h} & {{\rm{if}}\;z_{x + 1}^* < z_x^*}\\ {s_x^* \in [ - \tilde w_{1,x}^h,\tilde w_{1,x}^h]} & {{\rm{if}}\;z_{x + 1}^* = z_x^*.} \end{array}} \right. \end{equation} More details about the derivation of (14) and (16) are available in the supplementary material. \begin{algorithm} \caption{Fast global image smoothing}\label{euclid} \begin{algorithmic}[1] \Procedure{Fast image smoothing using $\phi$}{} \State Initialize $u^{(t=1)}=v^{(t=1)}=f,$ $\beta^1$, $\alpha$ \For{$t=1:T$} \For{$y=1:H$} \State $\tilde f^{h}(x) = {(1 + {\beta ^t})^{ - 1}}(f(x,y) + {\beta ^t}{v^t(x,y)})$ for all $(x = 1, \ldots ,{{W}})$ \State ${\tilde w^h_1}(x) = {(1 + {\beta ^t})^{ - 1}}(2\lambda {w_1}(x,y))$ for all $x$ \State Compute $z$ minimizing (11) or (13) according to $\phi$ \State $u^{t+1}(x,y) = z(x)$ for all $x$ \EndFor \For{$x=1:W$} \State $\bar f^{v}(y) = {(1 + {\beta ^t})^{ - 1}}(f(x,y) + {\beta ^t}{u^{t+1}(x,y)})$ for all $(y = 1, \ldots ,{{H}})$ \State ${\tilde w^v_2}(y) = {(1 + {\beta ^t})^{ - 1}}(2\lambda {w_2}(x,y))$ for all $y$ \State Compute $z$ minimizing (11) or (13) according to $\phi$ \State $v^{t+1}(x,y) = z(y)$ for all $y$ \EndFor \State ${\beta ^{t + 1}} = \alpha {\beta ^t}$ \EndFor \EndProcedure \end{algorithmic} \end{algorithm} \begin{figure*}[t] \centering \renewcommand{\thesubfigure}{} \subfigure[(a) The WLS smoothing] {\includegraphics[width=0.49\linewidth]{figure/figure7/fwls_alpha.png}}\hfill \subfigure[(b) The WTV smoothing] {\includegraphics[width=0.49\linewidth]{figure/figure7/fwl1_alpha}}\hfill \caption{The energy evolutions of our approach, depending on the continuation parameter $\alpha$.
(a) Our fast WLS and (b) our fast WTV. $\lambda$ and $\kappa$ are set to $400$ and $7.65$, respectively.} \label{img:2} \end{figure*} \subsection{2D Smoothing algorithm and properties} The proposed algorithm for 2D image smoothing is summarized in Algorithm 1. Given an input $f$, a guidance $g$, and a smoothing parameter $\lambda$, the global 1D smoothing operations are performed sequentially along the horizontal and vertical directions. At each iteration, the input for 1D horizontal smoothing is re-calculated as the linear combination of the auxiliary variable $v^t$ and $f$ (line 5). In the same manner, 1D vertical smoothing is applied to the linear combination of $u^{t+1}$ and $f$ (line 11). These steps propagate the smoothing effect from the other direction. The 1D smoothing parameter gradually decreases by a factor of $2/(1 + {\beta ^t})$ (lines 6, 12). The algorithm is terminated after $t=T$ iterations. Although the original formulation (1) is decomposed into a series of sub-problems, we compute exact solutions of the sub-objectives (8) and (9) via their 1D decompositions. This property considerably accelerates the convergence of the algorithm. Moreover, since these sub-problems are decoupled, a straightforward parallelization is possible. \begin{prop} As $t \to \infty $, Algorithm 1 is convergent. \end{prop} \paragraph*{Proof} See the supplementary material. \QEDB \vspace{3pt} Our approach is not rotationally invariant, as the original formulation (1) aligns the smoothness term with each axis individually. For the WLS smoothing, we do not observe any visible artifacts in our experiments. A similar observation can be found in \cite{Farbman2008}. When $\phi = |\tau|$, the model prefers object boundaries that are minimal in the Manhattan ($L_1$) distance. This may give blocky artifacts at region boundaries.
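For concreteness, the alternating structure of Algorithm 1 in the WLS case ($\phi=\tau^2$) can be sketched in a few dozen lines; the numpy code below is an illustrative sketch of ours (not the authors' released implementation), with the 1D subproblem (12) solved by the Thomas algorithm:

```python
import numpy as np

def thomas(a, b, c, d):
    # Solves a tridiagonal system: a = sub-diagonal (a[0] unused),
    # b = diagonal, c = super-diagonal (c[-1] unused), d = right-hand side.
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    z = np.empty(n)
    z[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # backward substitution
        z[i] = dp[i] - cp[i] * z[i + 1]
    return z

def solve_wls_1d(f, w, lam):
    # Solves (I + lam * D1^T diag(w) D1) z = f for a 1D signal f;
    # w holds the n-1 edge weights between neighboring samples.
    lw = lam * w
    b = 1.0 + np.r_[0.0, lw] + np.r_[lw, 0.0]   # diagonal
    a = np.r_[0.0, -lw]                         # sub-diagonal
    c = np.r_[-lw, 0.0]                         # super-diagonal
    return thomas(a, b, c, f)

def fast_global_smoother(f, w1, w2, lam, T=5, beta=1.0, alpha=4.0):
    # Algorithm 1 with phi = tau^2; w1: (H, W-1) horizontal edge weights,
    # w2: (H-1, W) vertical edge weights.
    u, v = f.copy(), f.copy()
    H, W = f.shape
    for _ in range(T):
        s = 1.0 / (1.0 + beta)
        lam_t = 2.0 * lam * s
        for y in range(H):                      # horizontal pass (lines 4-9)
            u[y] = solve_wls_1d(s * (f[y] + beta * v[y]), w1[y], lam_t)
        for x in range(W):                      # vertical pass (lines 10-15)
            v[:, x] = solve_wls_1d(s * (f[:, x] + beta * u[:, x]), w2[:, x], lam_t)
        beta *= alpha                           # continuation
    return v
```

Each 1D solve costs $O(W)$ or $O(H)$, so one full iteration is $O(HW)$, matching the linear complexity claimed above; the row and column loops are also trivially parallelizable.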
To alleviate these blocky artifacts, Algorithm 1 can be customized to perform smoothing using an 8-neighborhood system $\mathcal{N}_8$: we introduce two more auxiliary variables (by symmetry) and additionally iterate diagonal and anti-diagonal passes\footnote{All results in this paper are, however, obtained using the $\mathcal{N}_4$ neighborhood system.}. \subsubsection{Continuation parameter $\alpha$} Our approach uses a continuation parameter $\alpha$ to gradually increase the penalty parameter $\beta$ at each iteration. This ensures that the auxiliary variable $v$ stays close to $u$, and thus that Algorithm 1 is convergent (Proposition 1). Fig. 1 shows the energy evolutions of the WLS and WTV with different values of $\alpha=2$, $4$, and $8$. Using a large $\alpha$ makes the convergence faster, but the objective value after convergence becomes slightly higher than with a small $\alpha$. In this paper, we set $\alpha=4$, considering the trade-off between speed and accuracy. \subsection{Extension to fast iteratively re-weighted algorithms} Our approach can be extended to a wider class of potential functions by using iteratively re-weighted algorithms \cite{Ochs2015}. Consider a general heavy-tailed potential $\psi$; the original objective function (1) is then written as \begin{equation} E\left( u \right) = \sum\limits_p {\left( {(u - f)_p^2 + \lambda \sum\limits_{j \in \{ 1,2\} } {{w_{j,p}}\psi {{({D_j}u)}_p}} } \right)}. \end{equation} The principle of iteratively re-weighted algorithms is to find a convex function that serves as an upper bound of (17). Different constructions of this bound yield different explicit algorithms, i.e., IRLS vs. IRL1. In the following, we will use $k = \left( {1, \ldots ,K} \right)$ to represent the \emph{external} iteration index used to minimize (17). \subsubsection{Fast IRLS (FIRLS)} The IRLS approach uses the fact that many potentials can be seen as minima of quadratic upper bounds \cite{Ochs2015}.
Using an intermediate estimate ${u^{k}}$, we obtain the following update rule: \begin{equation} {u^{k + 1}} = \mathop {\arg \min }\limits_u \sum\limits_p {\left( {(u - f)_p^2 + \lambda \sum\limits_{j \in \{ 1,2\} } {{a_{j,p}}({D_j}u)_p^2} } \right)} , \end{equation} where ${a_{j,p}} = {w_{j,p}}\frac{{\psi '{{({D_j}{u^k})}_p}}}{{2{{({D_j}{u^k})}_p}}}$. Minimizing (18) is equivalent to solving the linear system (2), except that the Laplacian matrix is computed with $a_{j,p}$. Thus, we can obtain $u^{k+1}$ using our fast WLS solver with $T$ \emph{internal} iterations. This extension is attractive in several aspects. First, our approach can be applied to a range of applications that require sparse image gradients and sharp edges (see Section 5). Second, our approach involves only simple arithmetic operations for solving (18). In contrast, most existing linear solvers \cite{Saad2003} suffer from additionally computing a preconditioner at each \emph{external} iteration, as the intermediate linear system varies with $k=(1,...,K)$. \subsubsection{Fast IRL1 (FIRL1)} Theoretically, the IRLS algorithm can be applied only when the potential function is well approximated by a quadratic upper bound \cite{Ochs2015}. This, however, does not cover interesting functions such as $\psi = \log (1 + |\tau|)$ and ${L_p}$ ($p<1$), which are concave and non-differentiable at the origin. Here we extend our fast WTV solver to the FIRL1 algorithm: \begin{equation} {u^{k + 1}} = \mathop {\arg \min }\limits_u \sum\limits_p {\left( {(u - f)_p^2 + \lambda \sum\limits_{j \in \{ 1,2\} } {{b_{j,p}}\left| {{D_j}u} \right|_p} } \right)} , \end{equation} where ${b_{j,p}} = {w_{j,p}}\partial \psi {({D_j}{u^k})_p}$, $\psi$ is concave, and $\partial$ denotes a sub-gradient. The algorithm exploits the convex function obtained by linearizing the potential $\psi$. Note that (19) serves as an upper bound of (17) due to the concavity of $\psi$.
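For the two potentials used later in the paper, the re-weighting steps in (18) and (19) reduce to simple elementwise formulas; an illustrative numpy sketch (the closed forms below follow by differentiating the stated potentials):

```python
import numpy as np

def irls_weight_welsch(du, w, sigma):
    # a_{j,p} = w * psi'(du)/(2*du) for psi(t) = sigma*(1 - exp(-t^2/sigma));
    # since psi'(t) = 2*t*exp(-t^2/sigma), the ratio simplifies to exp(-du^2/sigma)
    return w * np.exp(-du ** 2 / sigma)

def irl1_weight_log(du, w):
    # b_{j,p} for psi(t) = log(1 + |t|): the linearization weight is 1/(1 + |du|)
    return w / (1.0 + np.abs(du))
```

Both weights shrink on large gradients of the current estimate $u^k$, which is what encourages sparse gradients and sharp edges.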
Each IRL1 iteration is thus a WTV problem, which can be solved efficiently using our solver. The pseudo-code for our fast iteratively re-weighted algorithms is provided in Algorithm 2. \begin{algorithm} \caption{Fast iteratively re-weighted algorithms}\label{euclid} \begin{algorithmic}[1] \Procedure{Fast image smoothing using $\psi$}{} \State Initialize $u^{(k=1)}$ \For{$k=1:K$} \State Construct the sub-problem (18) or (19) \State Compute $u^{k+1}$ using Algorithm 1 \EndFor \EndProcedure \end{algorithmic} \end{algorithm} \section{Experimental Validation} We evaluate the convergence and runtime performance of our solver. The experiments are performed on a single Intel i7 3.4GHz CPU. Our approach is easy to implement, and we will release the source code on a public website. All compared methods have been implemented in MATLAB with a MEX interface. The primary parameters in our approach are set as $\beta^1=1$ and $\alpha=4$. At each iteration, we gradually increase $\beta$ by a factor of $\alpha$. The smoothing parameters $\lambda$ and $\kappa$ are set to $400$ and $7.65$, respectively, for all methods (the range of image values is $[0\thicksim255]$). For the convergence analysis of the WLS smoothing, we compare our approach (FWLS) with the conjugate gradient (CG), Gauss-Seidel (GS), Jacobi iteration (JI), and successive over-relaxation (SOR) methods. We use the SuiteSparse library \cite{suitesparse} to solve the triangular systems arising from GS and SOR. In the case of the WTV smoothing, our approach (FWTV) is compared to the split Bregman method (SB) \cite{Bi2015,Goldstein2009} and the classical penalty decomposition (classical-PD) \cite{Wang2008}. The FFTW library \cite{FFTW} is used to solve the $u$-subproblem of (5) for the SB and classical-PD methods. We show in Fig. 2(a) how the WLS objective evolves at each iteration (all methods are initialized with the input $f$).
Although each iteration of CG, GS, JI, and SOR runs in linear time, these methods require a very large number of iterations to converge. In contrast, our solver converges in a few iterations. The result for the WTV is shown in Fig. 2(b). Likewise, our approach converges much faster than SB and classical-PD. Note that these methods have $O(n\log n)$ complexity per iteration ($n=H\times W$), while our solver runs in linear time. The input image is shown in the inset of Fig. 1. \begin{figure*}[t] \centering \renewcommand{\thesubfigure}{} \subfigure[(a) The WLS smoothing] {\includegraphics[width=0.49\linewidth]{figure/figure1/123.png}}\hfill \subfigure[(b) The WTV smoothing] {\includegraphics[width=0.492\linewidth]{figure/figure1/666.png}}\hfill \caption{Comparison of the objective decrease versus the number of iterations. (a) the WLS smoothing and (b) the WTV smoothing. Our approach rapidly reduces the objective value of (1), and converges after a few iterations (the input image is shown in the inset of Fig. 1). $\beta^{1}$ and $\alpha$ are set to $1$ and $4$, respectively.} \label{img:2} \end{figure*} \begin{figure*}[t] \centering \renewcommand{\thesubfigure}{} \subfigure[] {\includegraphics[width=1\linewidth]{figure/figure5/ssim2.png}}\hfill \vspace{-15pt} \caption{The average SSIM indexes \cite{Wang2004} for the WTV smoothing, as a function of the number of iterations.
(a) The average SSIM values on 100 natural images, (b)-(d) the visualization of differences between the reference and the results of each method (at 20 iterations).} \label{img:2} \end{figure*} \begin{table}[] \centering \caption{Runtime comparison for different methods (in seconds)} \label{my-label} \begin{tabular}{|c| >{\columncolor[HTML]{EFEFEF}}c |c|c|c| >{\columncolor[HTML]{EFEFEF}}c |c|c|c|} \hline Image size & \cellcolor[HTML]{EFEFEF} & PCG \cite{Krishnan2013} & MATLAB ``$\;\backslash$ " & FWLS & \cellcolor[HTML]{EFEFEF} & SB \cite{Bi2015,Goldstein2009} & FWTV \\ \cline{1-1} \cline{3-5} \cline{7-8} $427\times640$ & \cellcolor[HTML]{EFEFEF} & 0.85 & 0.95 & 0.07 & \cellcolor[HTML]{EFEFEF} & 1.47 & 0.25 \\ \cline{1-1} \cline{3-5}\cline{7-8} $660\times800$ & \cellcolor[HTML]{EFEFEF} & 1.58 & 1.97 & 0.13 & \cellcolor[HTML]{EFEFEF} & 2.56 & 0.43 \\ \cline{1-1} \cline{3-5}\cline{7-8} $923\times1128$ & \multirow{-4}{*}{\cellcolor[HTML]{EFEFEF}WLS} & 3.04 & 4.14 & 0.28 & \multirow{-4}{*}{\cellcolor[HTML]{EFEFEF}WTV} & 6.69 & 1.02 \\ \hline \end{tabular} \end{table} To further demonstrate the effectiveness of our approach, we additionally collect 100 natural images from the BSDS500 dataset \cite{Arbelaez11} and compare the WLS smoothing results. The smoothing result obtained by the MATLAB ``$\;\backslash$ " command is used as the reference. The average value of the structural similarity (SSIM) index \cite{Wang2004} is plotted as a function of the number of iterations in Fig. 3(a). The difference images between the reference and the results at 20 iterations are also visualized in Fig. 3(b)-(e). In general, we find the proposed approach yields satisfactory results after $T=3\sim5$ iterations. The average SSIM values at $T=3$, $5$, and $20$ are 0.9896, 0.9963, and 0.9975, respectively. The runtime comparison is shown in Table 1. In our methods, the number of iterations $T$ is fixed to 5 based on the above experiment.
For the WLS smoothing, our approach is compared with the state-of-the-art preconditioning method (PCG) \cite{Krishnan2013} and the MATLAB ``$\;\backslash$ " operator. We note that the MATLAB ``$\;\backslash$ " operator uses the sparse Cholesky decomposition in the SuiteSparse library \cite{suitesparse}. The result for the PCG \cite{Krishnan2013} is obtained from source code provided by the author. It should be noted that although the preconditioning method \cite{Krishnan2013} improves the rate of convergence significantly, constructing the preconditioner requires a large setup time, about 2.6 seconds for a 1$M$-pixel image. The stopping criterion of the SB \cite{Bi2015,Goldstein2009} is ${\left\| {{u^{k + 1}} - {u^k}} \right\|_2}<0.1$. Our approach is an order of magnitude faster than the competing methods. Next, we present the analysis of our fast iteratively re-weighted algorithms. Using the FIRLS, we first minimize the objective\footnote{The same image as in Fig. 1 is used, and $\sigma$ is set to 7.65.} (17) with $\psi = \sigma (1 - \exp ( - \frac{{{\tau^2}}}{\sigma }))$. Fig. 4(a) shows that the convergence rate of the FIRLS method differs depending on the number of inner iterations $T$. For the FIRL1, we use $\psi = \log (1 + |\tau|)$, since the IRLS is not well suited for potential functions without a quadratic behavior around 0 (Fig. 4(b)). Overall, we observe that the iteratively re-weighted algorithms get stuck in bad local minima if the internal iterations are not carried out sufficiently. The objective value even fluctuates with $T=1$, as shown by the red line in Fig. 4(a). It is thus crucial to solve the sub-problems of iteratively re-weighted algorithms until reaching a certain level of accuracy. In general, we find $K=5$, $T=5$ is a good choice for both algorithms\footnote{In this example, the average per-pixel difference, i.e., ${\left\| {{u^{k + 1}} - {u^k}} \right\|_2}$, is 0.15 after $K=5$ external iterations (the range of image values is $[0\thicksim255]$).}.
Many heavy-tailed potential functions $\psi$ are non-convex, and thus a warm initialization for $u^1$ may further improve the convergence of our fast iteratively re-weighted algorithms. \begin{figure*}[t] \centering \renewcommand{\thesubfigure}{} \subfigure[(a) FIRLS $\psi = \sigma (1 - \exp ( - \frac{{{\tau^2}}}{\sigma }))$] {\includegraphics[width=0.49\linewidth]{figure/figure2/IRLSinner2.png}}\hfill \subfigure[(b) FIRL1 $\psi = \log (1 + |\tau|)$] {\includegraphics[width=0.49\linewidth]{figure/figure2/IRL1inner2.png}}\hfill \caption{Convergence of the iteratively re-weighted algorithms depending on the number of inner iterations $T$ (see legend). (a) the FIRLS and (b) the FIRL1. Each convex surrogate function is solved with our FWLS and FWTV solvers. If the internal iterations are not performed sufficiently, the iteratively re-weighted algorithms get stuck in bad local minima.} \label{img:2} \end{figure*} \begin{figure*}[t] \centering \renewcommand{\thesubfigure}{} \subfigure[] {\includegraphics[width=1\linewidth]{figure/figure3/texture.png}}\hfill \vspace{-15pt} \caption{Examples of texture removal. For each method, the running times are reported in seconds (yellow). The image sizes are $1254\times 1067$ (top) and $495\times 536$ (bottom), respectively.} \label{img:2} \end{figure*} \begin{figure*}[t] \centering \renewcommand{\thesubfigure}{} \subfigure[] {\includegraphics[width=1\linewidth]{figure/figure4/partition2.png}}\hfill \vspace{-20pt} \caption{Examples of content-based color quantization. Our approach shows performance comparable to the state-of-the-art $L_0$ minimization \cite{Nguyen2015}. The running time is also reported in seconds (the image size is $494\times 371$).} \label{img:2} \vspace{-10pt} \end{figure*} \begin{figure*}[t] \centering \renewcommand{\thesubfigure}{} \subfigure[] {\includegraphics[width=1\linewidth]{figure/figure6/transfer.png}}\hfill \vspace{-20pt} \caption{Style transfer effect.
The edge map of the input is used as the guidance image. The image size is $640\times 480$. The input image (a) is taken from \cite{Zhang20142}.} \label{img:2} \end{figure*} \section{Applications} Our approach is flexible and can be applied to several tasks using EPS. In this section, we apply our method to texture removal, content-based color quantization, and style transfer. To this end, various image priors, i.e., potential functions and weights $w$, are exploited to match the different application goals. All results and runtimes in the comparisons are obtained from source code provided by the authors. The parameters are carefully tuned through extensive experiments. Additional results and other applications, such as scale-space filtering and image denoising, are available in the supplementary material. \subsection{Texture removal} For texture removal, we employ the model proposed in \cite{Ham2015}: $\psi$ is set to $\sigma (1 - \exp ( - {{{\tau^2}}}/{\sigma }))$ and $g=G*f$, where $G$ is a Gaussian kernel with standard deviation 2. This type of guidance image is very effective, since textures on objects are usually small-scale structures. The smoothing parameters $\kappa$ and $\sigma$ are fixed to 5 and 7.65, respectively, but $\lambda$ varies according to the image. In this setting, we minimize the objective (17) using our FIRLS solver. Fig. 5 shows examples of texture removal obtained by the rolling guidance filter (RGF) \cite{Zhang2014}, the weighted median filter (WMF) \cite{Zhang20142}, the relative total variation (RTV) \cite{Xu2012}, and our FIRLS. The RGF \cite{Zhang2014} is implemented using the fast bilateral filter \cite{Chen2007}. The proposed method is even faster than the texture smoothing tools based on fast local filtering (RGF \cite{Zhang2014}, WMF \cite{Zhang20142}), while outperforming them in the subjective evaluation.
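The guidance construction described above is straightforward to reproduce; a minimal scipy sketch (parameter values from this section; function names are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def texture_guidance_weights(f, kappa=5.0, sigma_g=2.0):
    # g = G * f with a Gaussian of std 2; small-scale texture is blurred away,
    # so the weights w_j = exp(-(D_j g)^2 / kappa) stay high inside textured
    # regions (allowing smoothing) and drop only at large-scale edges.
    g = gaussian_filter(f, sigma_g)
    w1 = np.exp(-np.diff(g, axis=1) ** 2 / kappa)  # horizontal edge weights
    w2 = np.exp(-np.diff(g, axis=0) ** 2 / kappa)  # vertical edge weights
    return g, w1, w2
```

These weights then enter the FIRLS iterations exactly as $w_{j,p}$ does in (17).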
Note that minimizing the objective function in the RTV \cite{Xu2012} requires solving a large linear system iteratively\footnote{In the RTV \cite{Xu2012}, the prior term is defined in a form of total variation, resulting in a nonlinear system of equations. This can easily be addressed using a fixed-point iteration.}. Thus, it can also be accelerated by our FWLS solver. \subsection{Color quantization} (Content-based) color quantization attempts to increase the sparsity of colors while maintaining the overall structure of images. This is useful for many vision tasks, including image retrieval and segmentation. We use the sparsity prior $\psi = \log (1 + |\tau|)$ to reduce the number of colors. The guidance image is not used in this application, i.e., $w=1$. With this, we minimize the objective (17) using our FIRL1 solver. Results and comparisons are provided in Fig. 6. Our solver shows performance comparable to the state-of-the-art $L_0$ minimization \cite{Xu2011,Nguyen2015}, while running faster. The result in Fig. 6(c) has been obtained using the region fusion (RF) approach \cite{Nguyen2015}, which is tailored to $L_0$ minimization only. \subsection{Style transfer} Any feature of the input image, or of another image, can be taken as the guidance $g$. To transfer the structure of $g$ to the input image $f$, we use our FWLS and FWTV solvers. Fig. 7 shows a style transfer example obtained by the weighted median filter (WMF) \cite{Zhang20142} and our FWLS. The edge map of $f$ is used as the guidance image $g$. The window size of the WMF \cite{Zhang20142} is set to $5\times5$, taking about 0.4 seconds. Our FWLS and FWTV solvers take only 0.08 and 0.28 seconds, respectively, for an image of size $640\times 480$. \section{Conclusions} We introduce a highly efficient splitting-based method for global EPS. Unlike previous splitting-based methods, our formulation enables linear time solvers for the WLS and WTV problems.
Our solver converges quickly, and its runtime is comparable to state-of-the-art local EPS approaches. We also propose fast iteratively re-weighted algorithms for a non-convex objective function. Our approach is flexible, and thus is applicable to a variety of applications. \section{Acknowledgements} \pagebreak \bibliographystyle{splncs}
\section{Introduction}\label{section intro} Let $G$ be a connected real reductive group; precisely, a finite cover of a closed connected transpose-stable subgroup of $GL(n,\bR)$ with complexified Lie algebra $\mathfrak{g}$. Let $K$ be a maximal compact subgroup of $G$. Write $\mathfrak{g}=\mathfrak{k}+\mathfrak{p}$ for the corresponding Cartan decomposition of $\mathfrak{g}$, where $\mathfrak{k}$ is the complexified Lie algebra of $K$. \begin{subequations}\label{se:cohintro} Let $T\subset K$ be a maximal torus, so that $H_c = G^T$ is a maximally compact Cartan subgroup, with Lie algebra $\mathfrak{h}_{c}$. Let $\Lambda\subset\widehat{H_{c}}\subset\mathfrak{h}_{c}^{\star}$ be the lattice of weights of finite-dimensional $(\mathfrak{g},K)$-modules. For a fixed $\lambda_{0}\in\mathfrak{h}_{c}^{\star}$ regular, a family of virtual $(\mathfrak{g},K)$-modules $X_\lambda$, $\lambda\in\lambda_{0}+\Lambda$, is called {\it coherent} if for each $\lambda$, $X_\lambda$ has infinitesimal character $\lambda$, and for any finite-dimensional $(\mathfrak{g},K)$-module $F$, and for any $\lambda$, \begin{equation}\label{cohintro} X_\lambda\otimes F = \sum_{\mu\in\Delta(F)} X_{\lambda+\mu}, \end{equation} where $\Delta(F)$ denotes the multiset of all weights of $F$. (A more complete discussion appears in Section \ref{section coherent}.) The reason for studying coherent families is that if $X$ is any irreducible $(\mathfrak{g},K)$-module of infinitesimal character $\lambda_0$, then there is a {\it unique} coherent family with the property that \begin{equation}\label{existscoherent} X_{\lambda_0} = X. \end{equation} For any invariant of Harish-Chandra modules, one can therefore ask how the invariant of $X_\lambda$ changes with $\lambda \in \lambda_0 + \Lambda$. The nature of this dependence is then a new invariant of $X$. 
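As a toy illustration of \eqref{cohintro} (ours, not from the text): take $\mathfrak{g}=\mathfrak{s}\mathfrak{l}(2,\bC)$ and $F$ the standard two-dimensional module, so $\Delta(F)=\{1,-1\}$ in the usual coordinate on the weight lattice; coherence is then the two-term identity

```latex
\[
  X_{\lambda}\otimes F \;=\; X_{\lambda+1}+X_{\lambda-1}
  \qquad (\lambda\in\lambda_{0}+\Lambda).
\]
```

Imposing identities of this kind for all finite-dimensional $F$ simultaneously is what ties the members $X_\lambda$ of the family together.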
This idea is facilitated by the fact that \begin{equation} \text{$X_\lambda$ is irreducible or zero whenever $\lambda$ is integrally dominant;} \end{equation} zero is possible only for singular $\lambda$. (See for example \cite{V2}, sections 7.2 and 7.3.) The notion of ``integrally dominant'' is recalled in \eqref{intdom}; we write $(\lambda_0+\Lambda)^+$ for the cone of integrally dominant elements. We may therefore define \begin{equation}\label{annintro} \Ann(X_{\lambda}) = \text{annihilator in $U(\mathfrak{g})$ of $X_\lambda$} \qquad (\lambda \in (\lambda_0 + \Lambda)^+). \end{equation} The ideal $\Ann(X_\lambda)$ is primitive if $X_\lambda$ is irreducible, and equal to $U(\mathfrak{g})$ if $X_\lambda = 0$. Write $\rk(U(\mathfrak{g})/\Ann(X_{\lambda}))$ for the Goldie rank of the algebra $U(\mathfrak{g})/\Ann(X_{\lambda})$. Let $W_\mathfrak{g}$ be the Weyl group of $\mathfrak{g}$ with respect to $\mathfrak{h}_c$. Joseph proved that the $\bN$-valued map defined on integrally dominant $\lambda \in \lambda_{0}+\Lambda$ by \begin{equation}\label{goldieintro} \lambda\mapsto \rk(U(\mathfrak{g})/\Ann(X_{\lambda})) \end{equation} extends to a $W_{\mathfrak{g}}$-harmonic polynomial $P_{X}$ on $\mathfrak{h}_{c}^{\star}$ called the {\it Goldie rank polynomial} for $X$. The polynomial $P_X$ is homogeneous of degree $\sharp R_{\mathfrak{g}}^{+}-\mathop{\hbox {Dim}}\nolimits(X)$, where $\sharp R_{\mathfrak{g}}^{+}$ denotes the number of positive $\mathfrak{h}_{c}$-roots in $\mathfrak{g}$ and $\mathop{\hbox {Dim}}\nolimits(X)$ is the Gelfand-Kirillov dimension of $X$. Moreover, $P_{X}$ generates an irreducible representation of $W_{\mathfrak{g}}$. See \cite{J1I}, \cite{J1II} and \cite{J1III}. There is an interpretation of the $W_{\mathfrak{g}}$-representation generated by $P_X$ in terms of the Springer correspondence.
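As a sanity check on the degree count (ours, not from the text): for $\mathfrak{g}=\mathfrak{s}\mathfrak{l}(2,\bC)$ one has $\sharp R_{\mathfrak{g}}^{+}=1$. A discrete series module $X$ has Gelfand-Kirillov dimension 1, so $P_{X}$ is a nonzero constant, while a finite-dimensional $X$ has Gelfand-Kirillov dimension 0 and $P_{X}$ is linear; indeed $U(\mathfrak{g})/\Ann(X_{\lambda})$ is then a matrix algebra, whose Goldie rank is given by the Weyl dimension formula:

```latex
\[
  \rk\bigl(U(\mathfrak{g})/\Ann(X_{\lambda})\bigr)
  \;=\;\mathop{\hbox{dim}}\nolimits X_{\lambda}
  \;=\;\langle\alpha^{\vee},\lambda\rangle
  \qquad(\text{finite-dimensional case, }\mathfrak{g}=\mathfrak{s}\mathfrak{l}(2,\bC)).
\]
```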
For all $\lambda\in (\lambda_0+\Lambda)^+$ such that $X_\lambda \ne 0$ (so for example for all integrally dominant regular $\lambda$), the associated variety ${\mathcal V}(\mathop{\hbox {gr}}\nolimits(\Ann(X_{\lambda})))$ (defined by the associated graded ideal of $\Ann(X_{\lambda})$, in the symmetric algebra $S(\mathfrak{g})$) is the Zariski closure of a single nilpotent $G_{\bC}$-orbit ${\mathcal O}$ in $\mathfrak{g}^{\star}$, independent of $\lambda$. (Here $G_\bC$ is a connected complex reductive algebraic group having Lie algebra $\mathfrak{g}$.) Barbasch and Vogan proved that the Springer representation of $W_{\mathfrak{g}}$ attached to ${\mathcal O}$ coincides with the $W_{\mathfrak{g}}$-representation generated by the Goldie rank polynomial $P_{X}$ (see \cite{BV1}). Here is another algebro-geometric interpretation of $P_{X}$. Write \begin{equation}\label{eq:Korbit} {\mathcal O}\cap (\mathfrak{g}/\mathfrak{k})^* = \coprod_{j=1}^r {\mathcal O}^j \end{equation} for the decomposition into (finitely many) orbits of $K_{\mathbb C}$. (Here $K_{\mathbb C}$ is the complexification of $K$.) Then the associated cycle of each $X_\lambda$ is \begin{equation}\label{multintro} \Ass(X_\lambda) = \coprod_{j=1}^r m^j_X(\lambda) \overline{{\mathcal O}^j} \qquad (\lambda \in (\lambda_0 + \Lambda)^+) \end{equation} (see Definition 2.4, Theorem 2.13, and Corollary 5.20 in \cite{V3}). The component multiplicity $m^j_X(\lambda)$ is a function taking nonnegative integer values, and extends to a polynomial function on $\mathfrak{h}_{c}^*$. We call this polynomial the {\it multiplicity polynomial} for $X$ on the orbit ${\mathcal O}^j$. The connection with the Goldie rank polynomial is that each $m^j_X$ is a scalar multiple of $P_X$; this is a consequence of the proof of Theorem 5.7 in \cite{J1II}. 
On the other hand, Goldie rank polynomials can be interpreted in terms of the asymptotics of the global character $\mathop{\hbox {ch}}\nolimits_{\mathfrak{g}}(X_{\lambda})$ of $X_{\lambda}$ on a maximally split Cartan subgroup $H_{s} \subset G$ with Lie algebra $\mathfrak{h}_{s,0}$. Namely, if $x \in \mathfrak{h}_{s,0}$ is a generic regular element, King proved that the map \begin{equation}\label{kingintro} \lambda\mapsto \lim_{t\rightarrow 0}t^{\mathop{\hbox {Dim}}\nolimits(X)}\mathop{\hbox {ch}}\nolimits_{\mathfrak{g}}(X_{\lambda})(\exp tx) \end{equation} on $\lambda_{0}+\Lambda$ extends to a polynomial $C_{X,x}$ on $\mathfrak{h}^{\star}_{c}$. We call this polynomial {\it King's character polynomial}. It coincides with the Goldie rank polynomial $P_{X}$ up to a constant factor depending on $x$ (see \cite{K1}). More precisely, as a consequence of \cite{SV}, one can show that there is a formula \begin{equation}\label{eq:multchar} C_{X,x} = \sum_{j=1}^r a^j_x m^j_{X}; \end{equation} the constants $a^j_x$ are independent of $X$, and this formula is valid for any $(\mathfrak{g},K)$-module whose annihilator has associated variety contained in $\overline{\mathcal O}$. The polynomial $C_{X,x}$ expresses the dependence on $\lambda$ of the leading term in the Taylor expansion of the numerator of the character of $X_\lambda$ on the maximally split Cartan $H_{s}$. \end{subequations} In this paper, we assume that $G$ and $K$ have equal rank. Under this assumption, we use Dirac index to obtain the analog of King's asymptotic character formula (\ref{kingintro}), or equivalently of the Goldie rank polynomial (\ref{goldieintro}), in the case when $H_s$ is replaced by a compact Cartan subgroup $T$ of $G$. In the course of doing this, we first prove a translation principle for the Dirac index. 
To define the notions of Dirac cohomology and index, we first recall that there is a {\it Dirac operator} $D\in U(\mathfrak{g})\otimes C(\mathfrak{p})$, where $C(\mathfrak{p})$ is the Clifford algebra of $\mathfrak{p}$ with respect to an invariant non-degenerate symmetric bilinear form $B$ (see Section \ref{section setting}). If $S$ is a spin module for $C(\mathfrak{p})$ then $D$ acts on $Y\otimes S$ for any $(\mathfrak{g},K)$-module $Y$. The {\it Dirac cohomology} $H_{D}(Y)$ of $Y$ is defined as \begin{equation*} H_{D}(Y)=\mathop{\hbox{Ker}}\nolimits D / \mathop{\hbox{Ker}}\nolimits D\cap\text{Im} D. \end{equation*} It is a module for the spin double cover $\widetilde{K}$ of $K$. Dirac cohomology was introduced by Vogan in the late 1990s (see \cite{V4}) and turned out to be an interesting invariant attached to $(\mathfrak{g},K)$-modules (see \cite{HP2} for a thorough discussion). We would now like to study how Dirac cohomology varies over a coherent family. This is however not possible; since Dirac cohomology is not an exact functor, it cannot be defined for virtual $(\mathfrak{g},K)$-modules. To fix this problem, we will replace Dirac cohomology by the Dirac index. (We note that there is a relationship between Dirac cohomology and translation functors; see \cite{MP}, \cite{MP08}, \cite{MP09}, \cite{MP10}.) \begin{subequations}\label{se:cohindex} Let $\mathfrak{t}$ be the complexified Lie algebra of the compact Cartan subgroup $T$ of $G$. Then $\mathfrak{t}$ is a Cartan subalgebra of both $\mathfrak{g}$ and $\mathfrak{k}$. In this case, the spin module $S$ for $\widetilde{K}$ is the direct sum of two pieces $S^+$ and $S^-$, and the Dirac cohomology $H_D(Y)$ breaks up accordingly into $H_D(Y)^+$ and $H_D(Y)^-$. If $Y$ is admissible and has infinitesimal character, define the {\it Dirac index of $Y$} to be the virtual $\widetilde{K}$-module \begin{equation} I(Y)= H_D(Y)^+-H_D(Y)^-. 
\end{equation} This definition can be extended to arbitrary finite length modules (not necessarily with infinitesimal character), replacing $H_D$ by the higher Dirac cohomology of \cite{PS}. See Section \ref{section index}. Then $I$, considered as a functor from finite length $(\mathfrak{g},K)$-modules to virtual $\widetilde{K}$-modules, is additive with respect to short exact sequences (see Corollary \ref{exact} and the discussion below (\ref{index formula})), so it makes sense also for virtual $(\mathfrak{g},K)$-modules. Furthermore, $I$ satisfies the following property (Proposition \ref{main}): for any finite-dimensional $(\mathfrak{g},K)$-module $F$, \begin{equation*} I(Y\otimes F)=I(Y)\otimes F. \end{equation*} Let now $\{X_{\lambda}\}_{\lambda \in \lambda_{0}+\Lambda}$ be a coherent family of virtual $(\mathfrak{g},K)$-modules. By a theorem of Huang and Pand{\v{z}}i{\'c}, the $\mathfrak{k}$-infinitesimal character of any $\widetilde{K}$-type contributing to the Dirac cohomology $H_D(Y)$ of an irreducible $(\mathfrak{g},K)$-module $Y$ is $W_{\mathfrak{g}}$-conjugate to the $\mathfrak{g}$-infinitesimal character of $Y$ (see Theorem \ref{HPmain}). In terms of the virtual representations $\widetilde{E}$ of $\widetilde{K}$ defined in Section \ref{section coherent}, the conclusion is that we may write \begin{equation}\label{indexformula} I(X_{\lambda_{0}})=\sum_{w\in W_\mathfrak{g}} a_w \widetilde{E}_{w\lambda_{0}} \end{equation} with $a_w\in \bZ$. Then, for any $\nu\in\Lambda$, we have (Theorem \ref{translindex}): \begin{equation} \label{indextransintro} I(X_{\lambda_{0}+\nu})=\sum_{w\in W_\mathfrak{g}} a_w \widetilde{E}_{w(\lambda_{0}+\nu)} \end{equation} with the same coefficients $a_w$. It follows that $I(X_{\lambda_{0}})\neq 0$ implies $I(X_{\lambda_{0}+\nu})\neq 0$, provided both $\lambda_{0}$ and $\lambda_{0}+\nu$ are regular for $\mathfrak{g}$ (Corollary \ref{nonzeroindex}).
Combining the translation principle for Dirac index \eqref{indextransintro} with the Weyl dimension formula for $\mathfrak{k}$, we conclude that the map \begin{equation}\label{indexintro} \lambda_{0}+\Lambda\rightarrow\bZ,\qquad \lambda\mapsto\mathop{\hbox {dim}}\nolimits I(X_{\lambda}) \end{equation} extends to a $W_{\mathfrak{g}}$-harmonic polynomial $Q_{X}$ on $\mathfrak{t}^{\star}$ (see Section \ref{section Weyl group}). We call the polynomial $Q_X$ the {\it index polynomial} attached to $X$ and $\lambda_{0}$. If $Q_X$ is nonzero, its degree is equal to the number $\sharp R_{\mathfrak{k}}^{+}$ of positive $\mathfrak{t}$-roots in $\mathfrak{k}$. More precisely, $Q_X$ belongs to the irreducible representation of $W_{\mathfrak{g}}$ generated by the Weyl dimension formula for $\mathfrak{k}$ (Proposition \ref{harmonic}). Furthermore, the coherent continuation representation generated by $X$ must contain a copy of the index polynomial representation (Proposition \ref{wequi}). We also prove that the index polynomial vanishes for small representations. Namely, if the Gelfand-Kirillov dimension $\mathop{\hbox {Dim}}\nolimits(X)$ is less than the number $\sharp R_{\mathfrak{g}}^{+}-\sharp R_{\mathfrak{k}}^{+}$ of positive noncompact $\mathfrak{t}$-roots in $\mathfrak{g}$, then $Q_{X}=0$ (Proposition \ref{indexzero}). An important feature of the index polynomial is the fact that $Q_{X}$ is the exact analogue of King's character polynomial (\ref{kingintro}), but attached to the character on the compact Cartan subgroup instead of the maximally split Cartan subgroup (see Section \ref{section Goldie rank}).
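For instance (our toy check, not from the text): for $G=SL(2,\bR)$ the algebra $\mathfrak{k}$ is abelian, so $\sharp R_{\mathfrak{k}}^{+}=0$ and any nonzero $Q_{X}$ is a constant. For a discrete series module $X$, the index $I(X_{\lambda})$ is, up to sign, a single one-dimensional $\widetilde{K}$-type, so $Q_{X}=\pm 1$; for a finite-dimensional $X$ we have $\mathop{\hbox {Dim}}\nolimits(X)=0<\sharp R_{\mathfrak{g}}^{+}-\sharp R_{\mathfrak{k}}^{+}=1$, and indeed

```latex
\[
  \mathop{\hbox{dim}}\nolimits I(X_{\lambda})
  \;=\;\mathop{\hbox{dim}}\nolimits(X_{\lambda})
  \bigl(\mathop{\hbox{dim}}\nolimits S^{+}-\mathop{\hbox{dim}}\nolimits S^{-}\bigr)
  \;=\;0,
\]
```

so $Q_{X}=0$, consistent with the vanishing criterion.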
In fact, $Q_{X}$ expresses the dependence on $\lambda$ of the (possibly zero) leading term in the Taylor expansion of the numerator of the character of $X_\lambda$ on the compact Cartan $T$: for any $y\in \mathfrak{t}_0$ regular, we have \begin{equation*} \lim_{t\to 0+} t^{\sharp R_{\mathfrak{g}}^{+}-\sharp R_{\mathfrak{k}}^{+}} \mathop{\hbox {ch}}\nolimits_\mathfrak{g}(X_\lambda)(\exp ty)=\textstyle{\frac{\prod_{\alpha\in R_\mathfrak{k}^+}\alpha(y)}{\prod_{\alpha\in R_\mathfrak{g}^+}\alpha(y)}}\, Q_X(\lambda). \end{equation*} In particular, if $G$ is semisimple of Hermitian type, and if $X$ is the $(\mathfrak{g},K)$-module of a holomorphic discrete series representation, then the index polynomial $Q_{X}$ coincides, up to a scalar multiple, with the Goldie rank polynomial $P_{X}$ (Proposition \ref{holods}). Moreover, if $X$ is the $(\mathfrak{g},K)$-module of any discrete series representation (for $G$ not necessarily Hermitian), then $Q_X$ and $P_X$ are both divisible by the product of linear factors corresponding to the roots generated by the $\tau$-invariant of $X$ (Proposition \ref{tau}). Recall that the $\tau$-invariant of the $(\mathfrak{g},K)$-module $X$ consists of the simple roots $\alpha$ such that the translate of $X$ to the wall defined by $\alpha$ is zero (see Section 4 in \cite{V1} or Chapter 7 in \cite{V2}). Recall the formula \eqref{eq:multchar} relating King's character polynomial to the multiplicity polynomials for the associated cycle. In Section 7, we conjecture a parallel relationship between the index polynomial $Q_X$ and the multiplicity polynomials. For that, we must assume that the $W_{\mathfrak{g}}$-representation generated by the Weyl dimension formula for $\mathfrak{k}$ corresponds to a nilpotent $G_{\bC}$-orbit ${\mathcal O}_{K}$ via the Springer correspondence. (At the end of Section \ref{orbits}, we list the classical groups for which this assumption is satisfied.) 
Then we conjecture (Conjecture \ref{conj}): if ${\mathcal V}(\mathop{\hbox {gr}}\nolimits(\Ann(X)))\subset \overline{{\mathcal O}_K}$, then \begin{equation}\label{eq:multindex} Q_{X}=\sum_{j}c^j m_{X}^{j}. \end{equation} Here the point is that the coefficients $c^j$ should be integers independent of $X$. We check that this conjecture holds in the case when $G=SL(2,\bR)$ and also when $G=SU(1,n)$ with $n\geq 2$. In what follows, we make a few remarks on the significance of the above conjecture. Associated varieties are a beautiful and concrete invariant for representations, but they are too crude to distinguish representations well. For example, all holomorphic discrete series have the same associated variety. Goldie rank polynomials and multiplicity functions both offer more information, but the information is somewhat difficult to compute and to interpret precisely. The index polynomial is easier to compute and interpret; it can be computed from knowing the restriction to $K$, and conversely, it contains fairly concrete information about the restriction to $K$. In the setting of (\ref{eq:multindex}) (that is, for fairly small representations), the conjecture says that the index polynomial should be built from multiplicity polynomials in a very simple way. The conjecture implies that, for these small representations, the index polynomial must be a multiple of the Goldie rank polynomial. This follows from the fact that each $m^j_X$ is a multiple of $P_X$, mentioned below (\ref{multintro}). The interesting thing about this is that the index polynomial is perfectly well-defined for {\it larger} representations as well. In some sense it is defining something like ``multiplicities'' for $\mathcal{O}_K$ even when $\mathcal{O}_K$ is not a leading term. This is analogous to a result of Barbasch, which says that one can define for any character expansion a number that is the multiplicity of the zero orbit for finite-dimensional representations.
In the case of discrete series, this number turns out to be the formal degree (and so is something really interesting). This indicates that the index polynomial is an example of an interesting ``lower order term'' in a character expansion. We can hope that a fuller understanding of all such lower order terms could be a path to extending the theory of associated varieties to a more complete invariant of representations. \end{subequations} \section{Setting}\label{section setting} Let $G$ be a finite cover of a closed connected transpose-stable subgroup of $GL(n,\bR)$, with Lie algebra $\mathfrak{g}_{0}$. We denote by $\Theta$ the Cartan involution of $G$ corresponding to the usual Cartan involution of $GL(n,\bR)$ (the transpose inverse). Then $K=G^\Theta$ is a maximal compact subgroup of $G$. Let $\mathfrak{g}_{0}=\mathfrak{k}_{0}\oplus\mathfrak{p}_{0}$ be the Cartan decomposition of $\mathfrak{g}_{0}$, with $\mathfrak{k}_0$ the Lie algebra of $K$. Let $B$ be the trace form on $\mathfrak{g}_{0}$. Then $B$ is positive definite on $\mathfrak{p}_{0}$ and negative definite on $\mathfrak{k}_{0}$, and $\mathfrak{p}_{0}$ is the orthogonal of $\mathfrak{k}_{0}$ with respect to $B$. We shall drop the subscript `0' on real vector spaces to denote their complexifications. Thus $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$ denotes the Cartan decomposition of the complexified Lie algebra of $G$. The linear extension of $B$ to $\mathfrak{g}$ is again denoted by $B$. Let $G_{\bC}$ be a connected reductive algebraic group over $\bC$ with Lie algebra $\mathfrak{g}$. Let $C(\mathfrak{p})$ be the Clifford algebra of $\mathfrak{p}$ with respect to $B$ and let $U(\mathfrak{g})$ be the universal enveloping algebra of $\mathfrak{g}$. The Dirac operator $D$ is defined as \[ D=\sum_i b_i\otimes d_i\in U(\mathfrak{g})\otimes C(\mathfrak{p}), \] where $b_i$ is any basis of $\mathfrak{p}$ and $d_i$ is the dual basis with respect to $B$.
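As a minimal worked example (ours, not from the text): for $\mathfrak{g}_{0}=\mathfrak{s}\mathfrak{l}(2,\bR)$ we have $\mathfrak{k}_{0}=\mathfrak{s}\mathfrak{o}(2)$ and $\mathfrak{p}_{0}$ the symmetric traceless matrices, with basis $b_{1}=\left(\begin{smallmatrix}1&0\\0&-1\end{smallmatrix}\right)$, $b_{2}=\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$. The trace form gives $B(b_{i},b_{j})=2\delta_{ij}$, so $d_{i}=b_{i}/2$ and

```latex
\[
  D \;=\; \tfrac{1}{2}\,b_{1}\otimes b_{1}
        + \tfrac{1}{2}\,b_{2}\otimes b_{2}
  \;\in\; U(\mathfrak{g})\otimes C(\mathfrak{p}).
\]
```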
Then $D$ is independent of the choice of the basis $b_i$ and is $K$-invariant. Moreover, the square of $D$ is given by the following formula due to Parthasarathy \cite{P}: \begin{equation} \label{Dsquared} D^2=-(\mathop{\hbox {Cas}}\nolimits_\mathfrak{g}\otimes 1+\|\rho_\mathfrak{g}\|^2)+(\mathop{\hbox {Cas}}\nolimits_{\mathfrak{k}_\Delta}+\|\rho_\mathfrak{k}\|^2). \end{equation} Here $\mathop{\hbox {Cas}}\nolimits_\mathfrak{g}$ is the Casimir element of $U(\mathfrak{g})$ and $\mathop{\hbox {Cas}}\nolimits_{\mathfrak{k}_\Delta}$ is the Casimir element of $U(\mathfrak{k}_\Delta)$, where $\mathfrak{k}_\Delta$ is the diagonal copy of $\mathfrak{k}$ in $U(\mathfrak{g})\otimes C(\mathfrak{p})$, defined using the obvious embedding $\mathfrak{k}\hookrightarrow U(\mathfrak{g})$ and the usual map $\mathfrak{k}\to\mathfrak{s}\mathfrak{o}(\mathfrak{p})\to C(\mathfrak{p})$. See \cite{HP2} for details. If $X$ is a $(\mathfrak{g},K)$-module, then $D$ acts on $X\otimes S$, where $S$ is a spin module for $C(\mathfrak{p})$. The Dirac cohomology of $X$ is the module \[ H_D(X)=\mathop{\hbox{Ker}}\nolimits D / \mathop{\hbox{Ker}}\nolimits D\cap\text{Im} D \] for the spin double cover $\widetilde{K}$ of $K$. If $X$ is unitary or finite-dimensional, then \[ H_D(X)=\mathop{\hbox{Ker}}\nolimits D=\mathop{\hbox{Ker}}\nolimits D^2. \] The following result of \cite{HP1} was conjectured by Vogan \cite{V3}. Let $\mathfrak{h}=\mathfrak{t}\oplus\mathfrak{a}$ be a fundamental Cartan subalgebra of $\mathfrak{g}$. We view $\mathfrak{t}^*\subset\mathfrak{h}^*$ by extending functionals on $\mathfrak{t}$ by 0 over $\mathfrak{a}$. Denote by $R_{\mathfrak{g}}$ (resp. $R_{\mathfrak{k}}$) the set of $(\mathfrak{g},\mathfrak{h})$-roots (resp. $(\mathfrak{k},\mathfrak{t})$-roots). We fix compatible positive root systems $R^{+}_{\mathfrak{g}}$ and $R^{+}_{\mathfrak{k}}$ for $R_\mathfrak{g}$ and $R_\mathfrak{k}$ respectively. 
In particular, this determines the half-sums of positive roots $\rho_\mathfrak{g}$ and $\rho_\mathfrak{k}$ as usual. Write $W_{\mathfrak{g}}$ (resp. $W_{\mathfrak{k}}$) for the Weyl group associated with $(\mathfrak{g},\mathfrak{h})$-roots (resp. $(\mathfrak{k},\mathfrak{t})$-roots). \begin{thm} \label{HPmain} Let $X$ be a $(\mathfrak{g},K)$-module with infinitesimal character corresponding to $\Lambda\in\mathfrak{h}^*$ via the Harish-Chandra isomorphism. Assume that $H_D(X)$ contains the irreducible $\widetilde{K}$-module $E_\gamma$ with highest weight $\gamma\in\mathfrak{t}^*$. Then $\Lambda$ is equal to $\gamma+\rho_\mathfrak{k}$ up to conjugation by the Weyl group $W_\mathfrak{g}$. In other words, the $\mathfrak{k}$-infinitesimal character of any $\widetilde{K}$-type contributing to $H_D(X)$ is $W_\mathfrak{g}$-conjugate to the $\mathfrak{g}$-infinitesimal character of $X$. \end{thm} \section{Dirac index} \label{section index} Throughout the paper we assume that $\mathfrak{g}$ and $\mathfrak{k}$ have equal rank, i.e., that there is a compact Cartan subalgebra $\mathfrak{h}=\mathfrak{t}$ in $\mathfrak{g}$. In this case, $\mathfrak{p}$ is even-dimensional, so (as long as $\mathfrak{p} \neq\{0\}$) the spin module $S$ for the spin group $\mathop{\hbox {Spin}}\nolimits(\mathfrak{p})$ (and therefore for $\widetilde{K}$) is the direct sum of two pieces, which we will call $S^+$ and $S^-$. To say which is which, it is enough to choose an $SO(\mathfrak{p})$-orbit of maximal isotropic subspaces of $\mathfrak{p}$. We will sometimes make such a choice by fixing a positive root system $\Delta^+(\mathfrak{g},\mathfrak{t})$ for $\mathfrak{t}$ in $\mathfrak{g}$, and writing $\mathfrak{n}=\mathfrak{n}_\mathfrak{k} + \mathfrak{n}_\mathfrak{p}$ for the corresponding sum of positive root spaces. Then $\mathfrak{n}_\mathfrak{p}$ is a choice of maximal isotropic subspace of $\mathfrak{p}$. 
The full spin module may be realized using $\mathfrak{n}_\mathfrak{p}$ as $S \simeq \bigwedge\mathfrak{n}_\mathfrak{p}$, with the $C(\mathfrak{p})$-action defined so that elements of $\mathfrak{n}_\mathfrak{p}$ act by wedging, and elements of the dual isotropic space $\mathfrak{n}_\mathfrak{p}^-$ corresponding to the negative roots act by contracting. (Details may be found for example in \cite{Chev} at the beginning of Chapter 3.) In particular, the action of $C(\mathfrak{p})$ respects parity of degrees: odd elements of $C(\mathfrak{p})$ carry $\bigwedge^{\text{even}}\mathfrak{n}_\mathfrak{p}$ to $\bigwedge^{\text{odd}}\mathfrak{n}_\mathfrak{p}$ and so on. Because $\mathop{\hbox {Spin}}\nolimits(\mathfrak{p}) \subset C_{\text{even}}(\mathfrak{p})$, it follows that $\mathop{\hbox {Spin}}\nolimits(\mathfrak{p})$ preserves the decomposition \[ S \simeq \bigwedge \mathfrak{n}_\mathfrak{p} = \bigwedge\nolimits^{\!\!\text{even}}\mathfrak{n}_\mathfrak{p} \oplus \bigwedge\nolimits^{\!\!\text{odd}}\mathfrak{n}_\mathfrak{p} \overset{\text{def.}}= S^+ \oplus S^-. \] The group $\widetilde{K}$ acts on $S$ as usual, through the map $\widetilde{K}\to\mathop{\hbox {Spin}}\nolimits(\mathfrak{p})\subset C(\mathfrak{p})$, and hence also the Lie algebra $\mathfrak{k}$ acts, through the map $\alpha:\mathfrak{k}\to \mathfrak{s}\mathfrak{o}(\mathfrak{p})\hookrightarrow C(\mathfrak{p})$. We call these actions of $\widetilde{K}$ and $\mathfrak{k}$ the spin actions. It should however be noted that although we wrote $S\simeq \bigwedge\mathfrak{n}_\mathfrak{p}$, the $\mathfrak{t}$-weights of $S$ for the spin action are not the weights of $\bigwedge\mathfrak{n}_\mathfrak{p}$, i.e., the sums of distinct roots in $\mathfrak{n}_\mathfrak{p}$, but rather these weights shifted by $-(\rho_\mathfrak{g}-\rho_\mathfrak{k})$. This difference comes from the construction of the map $\alpha$ and the action of $C(\mathfrak{p})$ on $S$. 
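A toy check of this weight shift (ours): for $\mathfrak{g}=\mathfrak{s}\mathfrak{l}(2,\bC)$ with compact Cartan subalgebra $\mathfrak{t}$, the single positive root $\alpha$ is noncompact, so $\mathfrak{n}_{\mathfrak{p}}$ is one-dimensional, $\rho_{\mathfrak{g}}=\alpha/2$ and $\rho_{\mathfrak{k}}=0$. For the spin action we then get

```latex
\[
  S^{+}=\textstyle\bigwedge^{0}\mathfrak{n}_{\mathfrak{p}}
  \ \text{of weight}\ -\tfrac{\alpha}{2},
  \qquad
  S^{-}=\mathfrak{n}_{\mathfrak{p}}
  \ \text{of weight}\ -\tfrac{\alpha}{2}+\alpha=\tfrac{\alpha}{2},
\]
```

i.e., weights $\mp1$ in the coordinate $\langle\alpha^{\vee},\,\cdot\,\rangle$.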
In particular, the weights of $S^+ \simeq \bigwedge^{\text{even}}\mathfrak{n}_\mathfrak{p} $ are \[ -\rho_\mathfrak{g} + \rho_\mathfrak{k} + \text{(sum of an even number of distinct roots in } \mathfrak{n}_\mathfrak{p}). \] Similarly, the weights of $S^- \simeq \bigwedge^{\text{odd}}\mathfrak{n}_\mathfrak{p}$ are \[ -\rho_\mathfrak{g} + \rho_\mathfrak{k} + \text{(sum of an odd number of distinct roots in } \mathfrak{n}_\mathfrak{p}). \] The Dirac operator $D$ interchanges $X\otimes S^+$ and $X\otimes S^-$ for any $(\mathfrak{g},K)$-module $X$. (That is because it is of degree 1 in the Clifford factor.) It follows that the Dirac cohomology $H_D(X)$ also breaks up into even and odd parts, which we denote by $H_D(X)^+$ and $H_D(X)^-$ respectively. If $X$ is of finite length, then $H_D(X)$ is finite-dimensional, as follows from (\ref{Dsquared}), which implies that $\mathop{\hbox{Ker}}\nolimits D^2$ is finite-dimensional for any admissible module $X$. If $X$ is of finite length and has infinitesimal character, then we define the Dirac index of $X$ as the virtual $\widetilde{K}$-module \begin{equation} \label{defindex wrong} I(X)= H_D(X)^+-H_D(X)^-. \end{equation} The first simple but important fact is the following proposition, which is well known for the case of discrete series or finite-dimensional modules. \begin{prop} \label{propindex} Let $X$ be a finite length $(\mathfrak{g},K)$-module with infinitesimal character. Then there is an equality of virtual $\widetilde{K}$-modules \[ X\otimes S^+ - X\otimes S^- = I(X). \] \end{prop} \begin{proof} By Parthasarathy's formula for $D^2$ (\ref{Dsquared}), $X\otimes S$ breaks into a direct sum of eigenspaces for $D^2$: \[ X\otimes S=\sum_\lambda (X\otimes S)_\lambda. \] Since $D^2$ is even in the Clifford factor, this decomposition is compatible with the decomposition into even and odd parts, i.e., \[ (X\otimes S)_\lambda=(X\otimes S^+)_\lambda \oplus (X\otimes S^-)_\lambda, \] for any eigenvalue $\lambda$ of $D^2$. 
Since $D$ commutes with $D^2$, it preserves each eigenspace. Since $D$ also switches parity, we see that $D$ defines maps \[ D_\lambda:(X\otimes S^{\pm})_\lambda \to (X\otimes S^{\mp})_\lambda \] for each $\lambda$. If $\lambda\neq 0$, then $D_\lambda$ is clearly an isomorphism (with inverse $\frac{1}{\lambda}D_\lambda$), and hence \[ X\otimes S^+ - X\otimes S^- = (X\otimes S^+)_0 - (X\otimes S^-)_0. \] Since $D$ is a differential on $\mathop{\hbox{Ker}}\nolimits D^2$, and the cohomology of this differential is exactly $H_D(X)$, the statement now follows from the Euler-Poincar\'e principle. \end{proof} \begin{cor} \label{exact} Let \[ 0\to U\to V\to W\to 0 \] be a short exact sequence of finite length $(\mathfrak{g},K)$-modules, and assume that $V$ has infinitesimal character (so that $U$ and $W$ must have the same infinitesimal character as $V$). Then there is an equality of virtual $\widetilde{K}$-modules \[ I(V) = I(U) + I(W). \] \end{cor} \begin{proof} This follows from the formula in Proposition \ref{propindex}, since the left hand side of that formula clearly satisfies the additivity property. \end{proof} To study the translation principle, we need to deal with modules $X\otimes F$, where $X$ is a finite length $(\mathfrak{g},K)$-module, and $F$ is a finite-dimensional $(\mathfrak{g},K)$-module. Therefore, Proposition \ref{propindex} and Corollary \ref{exact} are not sufficient for our purposes, because they apply only to modules with infinitesimal character. Namely, if $X$ is of finite length and has infinitesimal character, then $X\otimes F$ is of finite length, but it typically cannot be written as a direct sum of modules with infinitesimal character. Rather, some of the summands of $X\otimes F$ only have generalized infinitesimal character. 
Recall that $\chi:Z(\mathfrak{g})\to\mathbb{C}$ is the generalized infinitesimal character of a $\mathfrak{g}$-module $V$ if there is a positive integer $N$ such that \[ (z-\chi(z))^N=0\quad\text{on }V,\qquad \text{for every }z\in Z(\mathfrak{g}), \] where $Z(\mathfrak{g})$ denotes the center of $U(\mathfrak{g})$. Here is an example showing that Proposition \ref{propindex} and Corollary \ref{exact} can fail for modules with generalized infinitesimal character. \begin{ex}[\cite{PS}, Section 2] {\rm Let $G=SU(1,1)\cong SL(2,\mathbb{R})$, so that $K=S(U(1)\times U(1))\cong U(1)$, and $\mathfrak{g}=\mathfrak{s}\mathfrak{l}(2,\mathbb{C})$. Then there is an indecomposable $(\mathfrak{g},K)$-module $P$ fitting into the short exact sequence \[ 0\to V_0\to P\to V_{-2}\to 0, \] where $V_0$ is the (reducible) Verma module with highest weight $0$, and $V_{-2}$ is the (irreducible) Verma module with highest weight $-2$. One can describe the $\mathfrak{g}$-action on $P$ very explicitly, and see that $\mathop{\hbox {Cas}}\nolimits_\mathfrak{g}$ does not act by a scalar on $P$, so $P$ does not have infinitesimal character. Using calculations similar to \cite{HP2}, 9.6.5, one checks that for the index defined by (\ref{defindex wrong}) the following holds: \begin{equation} \label{indexP} I(P)=-\mathbb{C}_1;\qquad I(V_0)=-\mathbb{C}_1;\qquad I(V_{-2})=-\mathbb{C}_{-1}, \end{equation} where $\mathbb{C}_{1}$ (respectively $\mathbb{C}_{-1}$) denotes the one-dimensional $\widetilde{K}$-module of weight $1$ (respectively $-1$). So Corollary \ref{exact} fails for $P$. It follows that Proposition \ref{propindex} must also fail. This can also be seen directly, by computing $P\otimes S^+-P\otimes S^-$. } \end{ex} The reason for the failure of both Proposition \ref{propindex} and Corollary \ref{exact} is the fact that the generalized 0-eigenspace for $D$ contains two Jordan blocks for $D$, one of length 1 and the other of length 3.
The block of length 3 does contribute to $P\otimes S^+-P\otimes S^-$, but not to $I(P)$. With this in mind, a modified version of Dirac cohomology, called ``higher Dirac cohomology'', has been recently defined by Pand\v zi\'c and Somberg \cite{PS}. It is defined as $H(X)=\bigoplus_{k\in\mathbb{Z}_+} H^k(X)$, where \[ H^k(X) = \mathop{\hbox {Im}}\nolimits D^{2k}\cap \mathop{\hbox{Ker}}\nolimits D \big/ \mathop{\hbox {Im}}\nolimits D^{2k+1}\cap\mathop{\hbox{Ker}}\nolimits D. \] For a module $X$ with infinitesimal character, $H(X)$ is the same as $H_D(X)$; in general, $H(X)$ contains $H_D(X)=H^0(X)$. If $X$ is an arbitrary finite length module, then $H(X)$ is composed of contributions from all odd length Jordan blocks in the generalized 0-eigenspace for $D$. It follows that if we let $H(X)^\pm$ be the even and odd parts of $H(X)$, and define the stable index as \begin{equation} \label{defindex right} I(X)=H(X)^+-H(X)^-, \end{equation} then Proposition \ref{propindex} holds for any module $X$ of finite length, i.e., \begin{equation} \label{index formula} I(X)= X\otimes S^+-X\otimes S^- \end{equation} (\cite{PS}, Theorem 3.4). It follows that the index defined in this way is additive with respect to short exact sequences (\cite{PS}, Corollary 3.5), and it therefore makes sense for virtual $(\mathfrak{g},K)$-modules, i.e., it is well defined on the Grothendieck group of the abelian category of finite length $(\mathfrak{g},K)$-modules. Let us also mention that there is an analogue of Theorem \ref{HPmain} for $H(X)$ (\cite{PS}, Theorem 3.3). There is another way to define the index that circumvents completely the discussion of defining Dirac cohomology in the right way. Namely, one can simply use the statement of Proposition \ref{propindex}, or (\ref{index formula}), as the definition of the index $I(X)$. It is clear that with such a definition the index does make sense for virtual $(\mathfrak{g},K)$-modules.
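To spell the discrepancy out (our summary of the example above): additivity over the short exact sequence $0\to V_0\to P\to V_{-2}\to 0$, together with the values recorded in \eqref{indexP}, would force

```latex
\[
  I(P) \;=\; I(V_{0})+I(V_{-2}) \;=\; -\mathbb{C}_{1}-\mathbb{C}_{-1},
\]
```

whereas the naive definition (\ref{defindex wrong}) gives $I(P)=-\mathbb{C}_{1}$ only; the missing summand $-\mathbb{C}_{-1}$, carried by the Jordan block of length 3, is precisely what the stable index \eqref{defindex right} restores.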
Moreover, one shows as above that all the eigenspaces of $D^2$ with nonzero eigenvalue cancel out in (\ref{index formula}), so what is left is a finite combination of $\widetilde{K}$-types, appearing in the 0-eigenspace for $D^2$. Whichever of these two definitions of $I(X)$ we adopt, we will from now on work with the Dirac index $I(X)$, defined for any virtual $(\mathfrak{g},K)$-module $X$, and satisfying (\ref{index formula}). \section{Coherent families} \label{section coherent} Let $T$ be a compact Cartan subgroup of $G$ with complexified Lie algebra $\mathfrak{t}$, and fix a regular element $\lambda_0\in\mathfrak{t}^*$. We denote by $\Lambda\subset\widehat{T}\subset\mathfrak{t}^*$ the lattice of weights of finite-dimensional representations of $G$ (equivalently, of finite-dimensional $(\mathfrak{g},K)$-modules). A family of virtual $(\mathfrak{g},K)$-modules $X_\lambda$, $\lambda\in\lambda_{0}+\Lambda$, is called {\it coherent} if \begin{enumerate} \item $X_\lambda$ has infinitesimal character $\lambda$; and \item for any finite-dimensional $(\mathfrak{g},K)$-module $F$, and for any $\lambda\in\lambda_{0}+\Lambda$, \begin{equation} \label{coh} X_\lambda\otimes F = \sum_{\mu\in\Delta(F)} X_{\lambda+\mu}, \end{equation} where $\Delta(F)$ denotes the multiset of all weights of $F$. \end{enumerate} See \cite{V2}, Definition 7.2.5. The reason that we may use coherent families based on the compact Cartan $T$, rather than the maximally split Cartan used in \cite{V2}, is our assumption that $G$ is connected. A virtual $(\mathfrak{g},K)$-module $X$ with regular infinitesimal character $\lambda_{0}\in\mathfrak{t}^{*}$ can be placed in a unique coherent family as above (see Theorem 7.2.7 in \cite{V2}, and the references therein; this is equivalent to \eqref{existscoherent}).
Using this, one can define an action of the integral Weyl group $W(\lambda_{0})$ attached to $\lambda_{0}$ on the set ${\mathcal M}(\lambda_{0})$ of virtual $(\mathfrak{g},K)$-modules with infinitesimal character $\lambda_{0}$. Recall that $W(\lambda_{0})$ consists of those elements $w\in W_\mathfrak{g}$ for which $\lambda_0-w\lambda_0$ is a sum of roots. If we write $Q$ for the root lattice, then the condition for $w$ to be in $W(\lambda_0)$ is precisely that $w$ preserves the lattice coset $\lambda_{0}+Q$ (see \cite{V2}, Section 7.2). Then for $w\in W(\lambda_0)$, we set \begin{equation*} w\cdot X\overset{\text{def.}}= X_{w^{-1}(\lambda_{0})}. \end{equation*} We view ${\mathcal M}(\lambda_{0})$ as a lattice (a free $\mathbb{Z}$-module) with basis the (finite) set of irreducible $(\mathfrak{g},K)$-modules of infinitesimal character $\lambda_{0}$. A decomposition into irreducible components of this $W(\lambda_{0})$-representation, known as the {\it coherent continuation} representation, was obtained by Barbasch and Vogan (see \cite{BV1b}). The study of coherent continuation representations is important for a deeper understanding of coherent families. A weight $\lambda \in \lambda_0 + \Lambda$ is called {\it integrally dominant} if \begin{equation}\label{intdom} \langle\alpha^\vee,\lambda\rangle \ge 0 \ \text{whenever $\langle \alpha^\vee,\lambda_0 \rangle \in {\mathbb{N}}$} \qquad (\alpha \in R_\mathfrak{g}). \end{equation} Recall from the introduction that we write $(\lambda_0 + \Lambda)^+$ for the cone of integrally dominant weights. The notion of coherent families is closely related to the Jantzen-Zuckerman translation principle.
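For instance, for $\mathfrak{g}=\mathfrak{s}\mathfrak{l}(2,\mathbb{C})$ the integral Weyl group is easily made explicit (a standard computation, included here only for illustration):

```latex
% Identify \lambda\in\mathfrak{t}^* with the number \langle\lambda,\alpha^\vee\rangle,
% so that the root lattice Q corresponds to 2\mathbb{Z}. Since s\lambda=-\lambda,
\lambda_0 - s\lambda_0 = 2\lambda_0 \in Q = 2\mathbb{Z}
\iff \lambda_0\in\mathbb{Z},
\qquad\text{hence}\qquad
W(\lambda_0)=
\begin{cases}
W_\mathfrak{g}=\{1,s\}, & \lambda_0\in\mathbb{Z},\\[2pt]
\{1\}, & \lambda_0\notin\mathbb{Z}.
\end{cases}
```

This matches Example \ref{exds_sl2} below, where $\lambda_0=n_0$ is an integer and $W(\lambda_0)=\{1,s\}$.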
For example, if $\lambda$ is regular and $\lambda$ and $\lambda+\nu$ belong to the same integral Weyl chamber (the definition of integral Weyl chambers is recalled below), then $X_{\lambda+\nu}$ can be obtained from $X_\lambda$ by a translation functor, i.e., by tensoring with the finite-dimensional module $F_\nu$ with extremal weight $\nu$ and then taking the component with generalized infinitesimal character $\lambda+\nu$. The following observation is crucial for obtaining the translation principle for Dirac index. \begin{prop} \label{main} Suppose $X$ is a virtual $(\mathfrak{g},K)$-module and $F$ a finite-dimensional $(\mathfrak{g},K)$-module. Then \[ I(X\otimes F)=I(X)\otimes F. \] \end{prop} \begin{proof} By Proposition \ref{propindex} and (\ref{index formula}), \[ I(X\otimes F)=X\otimes F\otimes S^+ - X\otimes F\otimes S^-, \] while \[ I(X)\otimes F= (X\otimes S^+ - X\otimes S^-)\otimes F. \] The right-hand sides of these two expressions are clearly equal. \end{proof} \noindent Combining Proposition \ref{main} with (\ref{coh}), we obtain \begin{cor} \label{cohindex} Let $X_\lambda$, $\lambda\in\lambda_{0}+\Lambda$, be a coherent family of virtual $(\mathfrak{g},K)$-modules and let $F$ be a finite-dimensional $(\mathfrak{g},K)$-module. Then \begin{equation} \label{cohindexformula} I(X_\lambda)\otimes F=\sum_{\mu\in\Delta(F)} I(X_{\lambda+\mu}). \end{equation} \qed \end{cor} \noindent This says that the family $\{I(X_\lambda)\}_{\lambda\in\lambda_{0}+\Lambda}$ of virtual $\widetilde{K}$-modules has some coherence properties, but it is not a coherent family for $\widetilde{K}$, as $I(X_\lambda)$ does not have $\mathfrak{k}$-infinitesimal character $\lambda$. Also, the identity (\ref{cohindexformula}) is valid only for a $(\mathfrak{g},K)$-module $F$, and not for an arbitrary $\widetilde{K}$-module $F$. Using standard reasoning, as in \cite{V2}, Section 7.2, we can now analyze the relationship between Dirac index and translation functors.
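In the simplest case $G=SL(2,\mathbb{R})$, the identity (\ref{cohindexformula}) can be checked directly at the level of formal $\widetilde{K}$-characters, using the fact, recalled in Example \ref{exds_sl2} below, that the index of the discrete series coherent family $D^+_n$ is the one-dimensional module $E_n$. The sketch below is purely illustrative (characters are represented as weight-multiplicity dictionaries, and the choice $\lambda=7$, $\nu=4$ is arbitrary):

```python
from collections import Counter

def mult(ch1, ch2):
    # product of formal weight characters (finite Laurent "polynomials" in e^1)
    out = Counter()
    for w1, m1 in ch1.items():
        for w2, m2 in ch2.items():
            out[w1 + w2] += m1 * m2
    return out

def E(n):
    # character of the one-dimensional K~-module of weight n
    return Counter({n: 1})

def F(m):
    # character of the finite-dimensional sl(2)-module with highest weight m:
    # weights m, m-2, ..., -m, each with multiplicity one
    return Counter({w: 1 for w in range(-m, m + 1, 2)})

# Check (cohindexformula) for the family with I(D^+_n) = E_n and F = F_nu:
lam, nu = 7, 4
lhs = mult(E(lam), F(nu))
rhs = Counter()
for mu in range(-nu, nu + 1, 2):
    rhs.update(E(lam + mu))
assert lhs == rhs
print(sorted(lhs.items()))  # [(3, 1), (5, 1), (7, 1), (9, 1), (11, 1)]
```

Since each $E_n$ is one-dimensional, the check reduces to multiplying a monomial by the character of $F_\nu$, which is exactly why this case is the simplest one.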
\begin{subequations}\label{Kcoherent} We first define some virtual representations of $\widetilde{K}$. Our choice of positive roots $R^+_\mathfrak{k}$ for $T$ in $K$ defines a Weyl denominator function \begin{equation}\label{Weyldenominator} d_\mathfrak{k}(\exp(y)) = \prod_{\alpha\in R^+_\mathfrak{k}} (e^{\alpha(y)/2} - e^{-\alpha(y)/2}) \end{equation} on an appropriate cover of $T$. For $\gamma\in \Lambda+\rho_\mathfrak{g}$, the Weyl numerator \begin{equation*} N_\gamma = \sum_{w\in W_\mathfrak{k}} \operatorname{sgn}(w) e^{w\gamma} \end{equation*} is a function on another double cover of $T$. According to Weyl's character formula, the quotient \begin{equation} \mathop{\hbox {ch}}\nolimits_{\mathfrak{k},\gamma} = N_\gamma/d_\mathfrak{k} \end{equation} extends to a class function on all of $\widetilde{K}$. Precisely, $\mathop{\hbox {ch}}\nolimits_{\mathfrak{k},\gamma}$ is the character of a virtual genuine representation $\widetilde{E}_\gamma$ of $\widetilde{K}$: \begin{equation} \widetilde{E}_\gamma = \begin{cases} \operatorname{sgn}(x)\left(\text{irreducible of highest weight $x\gamma - \rho_\mathfrak{k}$}\right) &\text{if $\gamma$ is regular for $R_\mathfrak{k}$,}\\ 0 &\text{if $\gamma$ is singular for $R_\mathfrak{k}$,} \end{cases} \end{equation} where in the first case $x$ denotes the unique element of $W_\mathfrak{k}$ such that $x\gamma$ is dominant regular for $R^+_\mathfrak{k}$. It is convenient to extend this definition to all of $\mathfrak{t}^*$ by \begin{equation} \widetilde{E}_\lambda = 0 \qquad (\lambda \notin \Lambda+\rho_\mathfrak{g}). \end{equation} With this definition, the Huang-Pand\v zi\'c infinitesimal character result clearly guarantees what we wrote in \eqref{indexformula}: \begin{equation*} I(X_{\lambda_{0}})=\sum_{w\in W_\mathfrak{g}} a_w \widetilde{E}_{w\lambda_{0}}. \end{equation*} We could restrict the sum to those $w$ for which $w\lambda_0$ is dominant for $R^+_\mathfrak{k}$, and get a unique formula in which $a_w$ is the multiplicity of the $\widetilde{K}$-representation of highest weight $w\lambda_0 - \rho_\mathfrak{k}$ in $I(X_{\lambda_0})$.
But for the proof of the next theorem, it is more convenient to allow a more general expression. \end{subequations} \begin{thm} \label{translindex} Suppose $\lambda_0\in\mathfrak{t}^*$ is regular for $\mathfrak{g}$. Let $X_\lambda$, $ \lambda\in\lambda_{0}+\Lambda$, be a coherent family of virtual $(\mathfrak{g},K)$-modules based on $\lambda_0 + \Lambda$. By Theorem \ref{HPmain}, we can write \begin{equation} \label{indexatlambda} I(X_{\lambda_{0}})=\sum_{w\in W_\mathfrak{g}} a_w \widetilde{E}_{w\lambda_{0}}, \end{equation} where $\widetilde{E}$ denotes the family of finite-dimensional virtual $\widetilde{K}$-modules defined in \eqref{Kcoherent}, and $a_w$ are integers. Then for any $\nu\in\Lambda$, \begin{equation} \label{indexatlambda+nu} I(X_{\lambda_{0}+\nu})=\sum_{w\in W_\mathfrak{g}} a_w \widetilde{E}_{w(\lambda_{0}+\nu)}, \end{equation} with the same coefficients $a_w$. \end{thm} \begin{proof} We proceed in three steps. {\it Step 1:} suppose both $\lambda_{0}$ and $\lambda_{0}+\nu$ belong to the same integral Weyl chamber, which we can assume to be the dominant one. Let $F_\nu$ be the finite-dimensional $(\mathfrak{g},K)$-module with extremal weight $\nu$. Let us take the components of (\ref{cohindexformula}), written for $\lambda=\lambda_0$, with $\mathfrak{k}$-infinitesimal characters which are $W_\mathfrak{g}$-conjugate to $\lambda_{0}+\nu$. By Theorem \ref{HPmain}, any summand $I(X_{\lambda_{0}+\mu})$ of the RHS of (\ref{cohindexformula}) is a combination of virtual modules with $\mathfrak{k}$-infinitesimal characters which are $W_\mathfrak{g}$-conjugate to $\lambda_{0}+\mu$. By \cite{V2}, Lemma 7.2.18 (b), $\lambda_{0}+\mu$ can be $W_\mathfrak{g}$-conjugate to $\lambda_{0}+\nu$ only if $\mu=\nu$. Thus we are picking exactly the summand $I(X_{\lambda_{0}+\nu})$ of the RHS of (\ref{cohindexformula}). 
We now determine the components of the LHS of (\ref{cohindexformula}) with $\mathfrak{k}$-infinitesimal characters which are $W_\mathfrak{g}$-conjugate to $\lambda_{0}+\nu$. Since $\widetilde{E}$ is a coherent family for $\widetilde{K}$, and $F_\nu$ can be viewed as a finite-dimensional $\widetilde{K}$-module, one has \[ \widetilde{E}_{w\lambda_{0}}\otimes F_\nu=\sum_{\mu\in\Delta(F_\nu)}\widetilde{E}_{w\lambda_{0}+\mu}. \] The $\mathfrak{k}$-infinitesimal character of $\widetilde{E}_{w\lambda_{0}+\mu}$ is $w\lambda_{0}+\mu$, so the components we are looking for must satisfy $w\lambda_{0}+\mu = u(\lambda_{0}+\nu)$, or equivalently \[ \lambda_{0}+w^{-1}\mu=w^{-1}u(\lambda_{0}+\nu), \] for some $u\in W_\mathfrak{g}$. Using \cite{V2}, Lemma 7.2.18 (b) again, we see that $w^{-1}u$ must fix $\lambda_{0}+\nu$, and $w^{-1}\mu$ must be equal to $\nu$. So $\mu=w\nu$, and the component $\widetilde{E}_{w\lambda_{0}+\mu}$ is in fact $\widetilde{E}_{w(\lambda_{0}+\nu)}$. So (\ref{indexatlambda+nu}) holds in this case. {\it Step 2:} suppose that $\lambda_{0}$ and $\lambda_{0}+\nu$ lie in two neighbouring chambers, with a common wall defined by a root $\alpha$, and such that $\lambda_{0}+\nu=s_{\alpha}(\lambda_{0})$. Assume further that for any weight $\mu$ of $F_\nu$, $\lambda_{0}+\mu$ belongs to one of the two chambers. Geometrically this means that $\lambda_{0}$ is close to the wall defined by $\alpha$ and sufficiently far from all other walls and from the origin. We tensor (\ref{indexatlambda}) with $F_{\nu}$. By (\ref{cohindexformula}) and the fact that $\widetilde{E}$ is a coherent family for $\widetilde{K}$, we get \begin{equation*} \sum_{\mu\in\Delta(F_{\nu})}I(X_{\lambda_{0}+\mu})=\sum_{w\in W_{\mathfrak{g}}}a_{w}\sum_{\mu\in\Delta(F_{\nu})}\widetilde{E}_{w(\lambda_{0}+\mu)}. \end{equation*} By our assumptions, the only $\lambda_{0}+\mu$ conjugate to $\lambda_{0}+\nu$ via $W_{\mathfrak{g}}$ are $\lambda_{0}+\nu$ and $\lambda_{0}$.
Picking the corresponding parts from the above equation, we get \begin{equation*} I(X_{\lambda_{0}+\nu})+cI(X_{\lambda_{0}})=\sum_{w\in W_{\mathfrak{g}}}a_{w}\big(c\widetilde{E}_{w\lambda_{0}}+\widetilde{E}_{w(\lambda_{0}+\nu)}\big) \end{equation*} where $c$ is the multiplicity of the zero weight of $F_\nu$. Subtracting $c$ times (\ref{indexatlambda}), this implies (\ref{indexatlambda+nu}), so the theorem is proved in this case. {\it Step 3:} to get from an arbitrary regular $\lambda_{0}$ to an arbitrary $\lambda_{0}+\nu$, we first apply Step 1 to get from $\lambda_{0}$ to all elements of $\lambda_{0}+\Lambda$ in the same (closed) chamber. Then we apply Step 2 to pass to an element of a neighbouring chamber, then Step 1 again to get to all elements of that chamber, and so on. \end{proof} \begin{cor} \label{nonzeroindex} In the setting of Theorem \ref{translindex}, assume that both $\lambda_{0}$ and $\lambda_{0}+\nu$ are regular for $\mathfrak{g}$. Assume also that $I(X_{\lambda_{0}})\neq 0$, i.e., at least one of the coefficients $a_w$ in (\ref{indexatlambda}) is nonzero. Then $I(X_{\lambda_{0}+\nu})\neq 0$. \end{cor} \begin{proof} This follows immediately from Theorem \ref{translindex} and the fact that $\widetilde{E}_{w(\lambda_{0}+\nu)}$ cannot be zero, since $w(\lambda_{0}+\nu)$ is regular for $\mathfrak{g}$ and hence also for $\mathfrak{k}$. \end{proof} \section{Index polynomial and coherent continuation representation} \label{section Weyl group} As in the previous section, let $\lambda_{0}\in\mathfrak{t}^{\star}$ be regular. For each $X\in {\mathcal M}(\lambda_{0})$, there is a unique coherent family $\{X_{\lambda}\mid \lambda\in\lambda_{0}+\Lambda\}$ such that $X_{\lambda_{0}}=X$. Define a function $Q_X\colon \lambda_{0}+\Lambda \to\mathbb{Z}$ by setting \begin{equation} \label{dim} Q_{X}(\lambda)= \mathop{\hbox {dim}}\nolimits I(X_\lambda)\quad (\lambda\in\lambda_{0}+\Lambda).
\end{equation} Notice that $Q_{X}$ depends both on $X$ {\it and} on the choice of representative $\lambda_{0}$ for the infinitesimal character of $X$; replacing $\lambda_{0}$ by $w_{1}\lambda_{0}$ translates $Q_{X}$ by $w_{1}$. By Theorem \ref{translindex} and the Weyl dimension formula for $\mathfrak{k}$, $Q_{X}$ is a polynomial function in $\lambda$. (Note that taking dimension is additive with respect to short exact sequences of finite-dimensional modules, so it makes sense for virtual finite-dimensional modules.) We call the function $Q_{X}$ the {\it index polynomial} associated with $X$ (or $\{X_{\lambda}\}$). Recall that a polynomial on $\mathfrak{t}^*$ is called $W_\mathfrak{g}$-harmonic if it is annihilated by any $W_\mathfrak{g}$-invariant constant-coefficient differential operator on $\mathfrak{t}^*$ without constant term (see \cite{V1}, Lemma 4.3). \begin{prop} \label{harmonic} For any $(\mathfrak{g},K)$-module $X$ as above, the index polynomial $Q_X$ is $W_\mathfrak{g}$-harmonic. If $Q_X\neq 0$, then it is homogeneous of degree equal to the number of positive roots for $\mathfrak{k}$; more precisely, it belongs to the irreducible representation of $W_{\mathfrak{g}}$ generated by the Weyl dimension formula for $\mathfrak{k}$. \end{prop} \begin{proof} The last statement follows from $(\ref{indexatlambda+nu})$: by the Weyl dimension formula, $Q_X(\lambda)=\sum_{w\in W_\mathfrak{g}} a_w \mathop{\hbox {dim}}\nolimits\widetilde{E}_{w\lambda}$ is a linear combination of $W_{\mathfrak{g}}$-translates of the Weyl dimension polynomial for $\mathfrak{k}$, so it lies in the representation of $W_{\mathfrak{g}}$ generated by that polynomial. The remaining statements are immediate consequences. \end{proof} Recall the natural representation of $W(\lambda_{0})$ (or indeed of all of $W_{\mathfrak{g}}$) on the vector space $S(\mathfrak{t})$ of polynomial functions on $\mathfrak{t}^*$, $$ (w\cdot P)(\lambda)=P(w^{-1}\lambda). $$ The (irreducible) representation of $W(\lambda_{0})$ generated by the dimension formula for $\mathfrak{k}$ is called the {\it index polynomial representation}.
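Since $Q_X$ is built from dimensions of $\widetilde{K}$-types via the Weyl dimension formula for $\mathfrak{k}$, it may be worth recording a quick independent check of that formula in a small case. The sketch below is purely illustrative (the sample weights are arbitrary choices): for $\mathfrak{k}=\mathfrak{u}(3)$, with positive roots $e_p-e_q$ ($p<q$) and $\rho_\mathfrak{k}=(1,0,-1)$, it compares the dimension formula against a direct count of Gelfand-Tsetlin patterns.

```python
from fractions import Fraction
from itertools import combinations

def weyl_dim_u3(lam):
    # Weyl dimension formula for u(3): prod <lam, alpha> / <rho, alpha>
    # over positive roots e_p - e_q (p < q), with rho = (1, 0, -1)
    rho = (1, 0, -1)
    num = den = Fraction(1)
    for p, q in combinations(range(3), 2):
        num *= lam[p] - lam[q]
        den *= rho[p] - rho[q]
    return num / den

def gt_count(m):
    # dimension of the U(3)-irreducible with highest weight m = (m1 >= m2 >= m3),
    # counted as the number of Gelfand-Tsetlin patterns with top row m
    m1, m2, m3 = m
    return sum(a - b + 1                       # choices of c with a >= c >= b
               for a in range(m2, m1 + 1)
               for b in range(m3, m2 + 1))

# lam strictly decreasing integers <=> highest weight lam - rho dominant
samples = [(5, 2, 0), (4, 1, -3), (7, 3, 1)]
for lam in samples:
    m = (lam[0] - 1, lam[1], lam[2] + 1)       # lam - rho
    assert weyl_dim_u3(lam) == gt_count(m)
print([gt_count((l[0] - 1, l[1], l[2] + 1)) for l in samples])  # [15, 42, 24]
```

The agreement reflects the standard fact that Gelfand-Tsetlin patterns parametrize a basis of the irreducible $U(3)$-module.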
\begin{prop} \label{wequi} The map \begin{equation*} {\mathcal M}(\lambda_{0})\rightarrow S(\mathfrak{t}),\qquad X\mapsto Q_{X} \end{equation*} intertwines the coherent continuation representation of $W(\lambda_0)$ with the action on polynomials. In particular, if $Q_{X}\neq 0$, then the coherent continuation representation generated by $X$ must contain a copy of the index polynomial representation. \end{prop} \begin{proof} Let $\{X_\lambda\}$ be the coherent family corresponding to $X$. Then for a fixed $w\in W(\lambda_0)$, the coherent family corresponding to $w\cdot X$ is $\lambda_{0}+\nu\mapsto X_{w^{-1}(\lambda_{0}+\nu)}$ (see \cite{V2}, Lemma 7.2.29 and its proof). It follows that \begin{eqnarray*} (w\cdot Q_{X})(\lambda)&=&Q_{X}(w^{-1}\cdot\lambda)\\ &=&\mathop{\hbox {dim}}\nolimits I(X_{w^{-1}\lambda})\\ &=&Q_{w\cdot X}(\lambda), \end{eqnarray*} i.e., the map $X\mapsto Q_{X}$ is $W(\lambda_{0})$-equivariant. The rest of the proposition is now clear. \end{proof} \begin{ex} \label{exfd} {\rm Let $F$ be a finite-dimensional $(\mathfrak{g},K)$-module. The corresponding coherent family is $\{F_\lambda\}$ from \cite{V2}, Example 7.2.12. In particular, every $F_\lambda$ is finite-dimensional up to sign, or 0. By Proposition \ref{propindex} and (\ref{index formula}), for any $F_\lambda$, \[ \mathop{\hbox {dim}}\nolimits I(F_\lambda)=\mathop{\hbox {dim}}\nolimits(F_\lambda\otimes S^+-F_\lambda\otimes S^-)=\mathop{\hbox {dim}}\nolimits F_\lambda(\mathop{\hbox {dim}}\nolimits S^+-\mathop{\hbox {dim}}\nolimits S^-)=0, \] since $S^+$ and $S^-$ have the same dimension (as long as $\mathfrak{p}\neq 0$). It follows that \[ Q_{F}(\lambda)=0. \] (Note that the index itself is a nonzero virtual module, but its dimension is zero. This may be a little surprising at first, but it is quite possible for virtual modules.) 
This means that in this case Proposition \ref{wequi} gives no information about the coherent continuation representation (which is in this case a copy of the sign representation of $W_\mathfrak{g}$ spanned by $F$).} \end{ex} \begin{ex} \label{exds_sl2} {\rm Let $G=SL(2,\mathbb{R})$, so that weights correspond to integers. Let $\lambda_{0}=n_0$ be a positive integer. There are four irreducible $(\mathfrak{g},K)$-modules with infinitesimal character $n_0$: the finite-dimensional module $F$, the holomorphic discrete series $D^+$ of lowest weight $n_0+1$, the antiholomorphic discrete series $D^-$ of highest weight $-n_0-1$, and the irreducible principal series representation $P$. The coherent family $F_n$ corresponding to $F$ is defined by setting $F_n$ to be the finite-dimensional module with highest weight $n-1$ if $n>0$, $F_0=0$, and if $n<0$, $F_n=-F_{-n}$. Thus $s\cdot F=-F$, i.e., $F$ spans a copy of the sign representation of $W(\lambda_{0})=\{1,s\}$. As we have seen, the index polynomial corresponding to $F$ is zero. By \cite{V2}, Example 7.2.13, the coherent family $D_n^+$ corresponding to $D^+$ is given as follows: for $n\geq 0$, $D^+_n$ is the irreducible lowest weight $(\mathfrak{g},K)$-module with lowest weight $n+1$, and for $n<0$, $D^+_n$ is the sum of $D^+_{-n}$ and the finite-dimensional module $F_{-n}$. It is easy to see that for each $n\in\mathbb{Z}$, $I(D^+_n)$ is the one-dimensional $\widetilde K$-module $E_n$ with weight $n$. So the index polynomial $Q_{D^+}$ is the constant polynomial $1$. Moreover, $s\cdot D^+=D^++F$. One similarly checks that the coherent family $D_n^-$ corresponding to $D^-$ is given as follows: for $n\geq 0$, $D^-_n$ is the irreducible highest weight $(\mathfrak{g},K)$-module with highest weight $-n-1$, and for $n<0$, $D^-_n=D^-_{-n}+F_{-n}$. For each $n\in\mathbb{Z}$, $I(D^-_n)=-E_{-n}$, so the index polynomial $Q_{D^-}$ is the constant polynomial $-1$. Moreover, $s\cdot D^-=D^-+F$. 
Finally, one checks that the coherent family corresponding to $P$ consists entirely of principal series representations, that the $W(\lambda_{0})$-action on $P$ is trivial, and that the corresponding index polynomial is 0. Putting all this together, we see that the coherent continuation representation at $n_0$ consists of three trivial representations, spanned by $F+D^++D^-$, $D^+-D^-$ and $P$, and one sign representation, spanned by $F$. The index polynomial representation is the trivial representation spanned by the constant polynomials. The map $X\mapsto Q_X$ sends $P$, $F$ and $F+D^++D^-$ to zero, and $D^+-D^-$ to the constant polynomial $2$. } \end{ex} The conclusion of Example \ref{exfd} about the index polynomials of finite-dimensional representations being zero can be generalized as follows. \begin{prop} \label{indexzero} Let $X$ be a $(\mathfrak{g},K)$-module as above, with Gelfand-Kirillov dimension $\mathop{\hbox {Dim}}\nolimits(X)$. If $\mathop{\hbox {Dim}}\nolimits(X)<\sharp R_{\mathfrak{g}}^{+}-\sharp R_{\mathfrak{k}}^{+}$, then $Q_X=0$. \end{prop} \begin{proof} We need to recall the setting of \cite{BV2}, Section 2, in particular their Theorem 2.6(b) (taken from \cite{J1II}). Namely, to any irreducible representation $\sigma$ of $W_\mathfrak{g}$ one can associate its degree, i.e., the minimal integer $d$ such that $\sigma$ occurs in the $W_\mathfrak{g}$-representation $S^d(\mathfrak{t})$. Theorem 2.6(b) of \cite{BV2} says that the degree of any $\sigma$ occurring in the coherent continuation representation attached to $X$ must be at least equal to $\sharp R_{\mathfrak{g}}^{+}-\mathop{\hbox {Dim}}\nolimits(X)$. By assumption, the degree of $Q_X$, $\sharp R_{\mathfrak{k}}^{+} $, is smaller than $\sharp R_{\mathfrak{g}}^{+}-\mathop{\hbox {Dim}}\nolimits(X)$. On the other hand, if $Q_X\neq 0$, then by Proposition \ref{wequi} the index polynomial representation has to occur in the coherent continuation representation. It follows that $Q_X$ must be zero.
\end{proof} \begin{ex} {\rm Wallach modules for $Sp(2n,\mathbb{R})$, $SO^*(2n)$ and $U(p,q)$, studied in \cite{HPP}, all have nonzero index, but their index polynomials are zero. This can also be checked explicitly from the results of \cite{HPP}, at least in low-dimensional cases. The situation here is as in Example \ref{exfd}; the nonzero Dirac index has zero dimension. In particular, the conclusion $Q_X=0$ in Proposition \ref{indexzero} does not imply that $I(X)=0$. } \end{ex} We note that in the proof of Proposition \ref{indexzero}, we are applying the results of \cite{BV2} to $(\mathfrak{g},K)$-modules, although they are stated in \cite{BV2} for highest weight modules. This is indeed possible by results of Casian \cite{C}. We explain this in more detail. Let ${\mathcal B}$ be the flag variety of $\mathfrak{g}$ consisting of all the Borel subalgebras of $\mathfrak{g}$. For a point $x\in{\mathcal B}$, write $\mathfrak{b}_{x}=\mathfrak{h}_{x}+\mathfrak{n}_{x}$ for the corresponding Borel subalgebra, with nilradical $\mathfrak{n}_{x}$ and Cartan subalgebra $\mathfrak{h}_{x}$. Define a functor $\Gamma_{\mathfrak{b}_{x}}$ from the category of $\mathfrak{g}$-modules into the category of $\mathfrak{g}$-modules which are $\mathfrak{b}_x$-locally finite, by \begin{equation*} \Gamma_{\mathfrak{b}_{x}}M=\big\{\text{$\mathfrak{b}_{x}$-locally finite vectors in $M$}\big\}. \end{equation*} Write $\Gamma^{q}_{\mathfrak{b}_{x}}$, $q\geq 0$, for its right derived functors. Instead of considering the various $\mathfrak{b}_{x}$, $x\in{\mathcal B}$, it is convenient to fix a Borel subalgebra $\mathfrak{b}=\mathfrak{h}+\mathfrak{n}$ of $\mathfrak{g}$ and twist the module $M$. By a twist of $M$ we mean the following: if $\pi$ is the $\mathfrak{g}$-action on $M$ and $\sigma$ is an automorphism of $\mathfrak{g}$, then the twist of $\pi$ by $\sigma$ is the $\mathfrak{g}$-action $\pi\circ\sigma$ on $M$.
Then Casian's generalized Jacquet functors $J_{\mathfrak{b}_{x}}^{q}$ are functors from the category of $\mathfrak{g}$-modules into the category of $\mathfrak{g}$-modules which are $\mathfrak{b}$-locally finite, given by \begin{equation*} J_{\mathfrak{b}_{x}}^{q}M=\Big\{\Gamma_{\mathfrak{b}_{x}}^{q}\mathop{\hbox {Hom}}\nolimits_{\mathbb{C}}(M,\mathbb{C})\Big\}^{0} \end{equation*} where the superscript `0' means that the $\mathfrak{g}$-action is twisted by some inner automorphism of $\mathfrak{g}$, to make it $\mathfrak{b}$-locally finite instead of $\mathfrak{b}_{x}$-locally finite. In case $\mathfrak{b}_{x}$ is the Borel subalgebra corresponding to an Iwasawa decomposition of $G$, $J^0_{\mathfrak{b}_x}$ is the usual Jacquet functor of \cite{BB}, while the $J_{\mathfrak{b}_{x}}^q$ vanish for $q>0$. The functors $J_{\mathfrak{b}_{x}}^{q}$ make sense on the level of virtual $(\mathfrak{g},K)$-modules and induce an injective map \begin{equation*} X\mapsto \sum_{x\in{\mathcal B}/K}\sum_{q}(-1)^{q}J_{\mathfrak{b}_{x}}^{q}X \end{equation*} from virtual $(\mathfrak{g},K)$-modules into virtual $\mathfrak{g}$-modules which are $\mathfrak{b}$-locally finite. Note that the above sum is well defined, since the $J_{\mathfrak{b}_{x}}^{q}$ depend only on the $K$-orbit of $x$ in $\mathcal{B}$. An important feature of the functors $J_{\mathfrak{b}_{x}}^{q}$ is the fact that they satisfy the following identity, relating the $\mathfrak{n}_{x}$-homology of $X$ with the $\mathfrak{n}$-cohomology of the modules $J_{\mathfrak{b}_{x}}^{q}X$ (see page 6 in \cite{C}): \begin{equation*} \sum_{p,q\geq 0}(-1)^{p+q}\mathop{\hbox {tr}}\nolimits_\mathfrak{h} H^{p}(\mathfrak{n},J_{\mathfrak{b}_{x}}^{q}X)=\sum_{q}(-1)^{q}\mathop{\hbox {tr}}\nolimits_\mathfrak{h} H_{q}(\mathfrak{n}_{x},X)^{0}. \end{equation*} Here the superscript `0' is the appropriate twist interchanging $\mathfrak{h}_{x}$ with $\mathfrak{h}$, and $\mathop{\hbox {tr}}\nolimits_\mathfrak{h}$ denotes the formal trace of the $\mathfrak{h}$-action.
More precisely, if $Z$ is a locally finite $\mathfrak{h}$-module with finite-dimensional weight components $Z_\mu$, $\mu\in\mathfrak{h}^*$, then \[ \mathop{\hbox {tr}}\nolimits_\mathfrak{h} Z=\sum_{\mu\in\mathfrak{h}^*} \mathop{\hbox {dim}}\nolimits Z_\mu\, e^\mu. \] Using this and Osborne's character formula, the global character of $X$ on an arbitrary $\theta$-stable Cartan subgroup can be read off from the characters of the $J^q_{\mathfrak{b}_{x}}X$ (see \cite{C} and \cite{C2}). In particular, we deduce that if $\tau$ is an irreducible representation of the Weyl group $W_{\mathfrak{g}}$ occurring in the coherent continuation representation attached to $X$, then $\tau$ occurs in the coherent continuation representation attached to $J_{\mathfrak{b}_{x}}^{q}X$ for some $q\geq 0$ and some Borel subalgebra $\mathfrak{b}_{x}$. Moreover, from the definitions, one has $\mathop{\hbox {Dim}}\nolimits(X)\geq \mathop{\hbox {Dim}}\nolimits(J_{\mathfrak{b}_{x}}^{q}X)$. Applying the results in \cite{BV2} to the module $J_{\mathfrak{b}_{x}}^{q}X$, we deduce that \begin{equation*} d^{o}(\tau)\geq \sharp R^{+}_{\mathfrak{g}}-\mathop{\hbox {Dim}}\nolimits(J_{\mathfrak{b}_{x}}^{q}X)\geq \sharp R^{+}_{\mathfrak{g}}-\mathop{\hbox {Dim}}\nolimits(X), \end{equation*} where $d^{o}(\tau)$ is the degree of $\tau$. \section{Index polynomials and Goldie rank polynomials} \label{section Goldie rank} Recall that $H_s$ denotes a maximally split Cartan subgroup of $G$ with complexified Lie algebra $\mathfrak{h}_s$. As in Section \ref{section coherent}, we let $X$ be a module with regular infinitesimal character $\lambda_{0}\in\mathfrak{h}_s^{\star}$, and $\{X_{\lambda}\}_{\lambda\in\lambda_0+\Lambda}$ the corresponding coherent family on $H_s$.
With notation from (\ref{annintro}) and (\ref{goldieintro}), Joseph proved that the mapping \begin{equation*} \lambda\mapsto P_{X}(\lambda)=\rk (U(\mathfrak{g})/\Ann(X_{\lambda})) \end{equation*} extends to a $W_\mathfrak{g}$-harmonic polynomial on $\mathfrak{h}_s^*$, homogeneous of degree $\sharp R^{+}_{\mathfrak{g}}-\mathop{\hbox {Dim}}\nolimits(X)$, where $\mathop{\hbox {Dim}}\nolimits(X)$ is the Gelfand-Kirillov dimension of $X$ (see \cite{J1I}, \cite{J1II} and \cite{J1III}). He also found relations between the Goldie rank polynomial $P_X$ and Springer representations, and (less directly) Kazhdan-Lusztig polynomials (see \cite{J2} and \cite{J3}). Recall from \eqref{kingintro} King's analytic interpretation of the Goldie rank polynomial: for $x\in \mathfrak{h}_{s,0}$ regular, the expression \begin{equation} \label{charpol} \lim_{t\to 0+} t^d\mathop{\hbox {ch}}\nolimits_\mathfrak{g}(X_\lambda)(\exp tx) \end{equation} is zero if $d$ is an integer greater than $\mathop{\hbox {Dim}}\nolimits(X)$; and if $d=\mathop{\hbox {Dim}}\nolimits(X)$, it is (for generic $x$) a nonzero polynomial $C_{X,x}$ in $\lambda$ called the character polynomial. Up to a constant, this character polynomial is equal to the Goldie rank polynomial attached to $X$. In other words, the Goldie rank polynomial expresses the dependence on $\lambda$ of the leading term in the Taylor expansion of the numerator of the character of $X_\lambda$ on the maximally split Cartan $H_{s}$. For more details, see \cite{K1} and also \cite{J1II}, Corollary 3.6. The next theorem shows that the index polynomial we studied in Section \ref{section Weyl group} is the exact analogue of King's character polynomial, but attached to the character on the compact Cartan subgroup instead of the maximally split Cartan subgroup. \begin{thm} \label{ind=char} Let $X$ be a $(\mathfrak{g},K)$-module with regular infinitesimal character and let $X_\lambda$ be the corresponding coherent family on the compact Cartan subgroup.
Write $r_\mathfrak{g}$ (resp. $r_\mathfrak{k}$) for the number of positive $\mathfrak{t}$-roots for $\mathfrak{g}$ (resp. $\mathfrak{k}$). Suppose $y\in \mathfrak{t}_0$ is any regular element. Then the limit \begin{equation} \label{indexpol} \lim_{t\to 0+} t^d \mathop{\hbox {ch}}\nolimits_\mathfrak{g}(X_\lambda)(\exp ty) \end{equation} is zero if $d$ is an integer bigger than $r_\mathfrak{g}-r_\mathfrak{k}$. If $d=r_\mathfrak{g}-r_\mathfrak{k}$, then the limit \eqref{indexpol} is equal to \[ \textstyle{\frac{\prod_{\alpha\in R_\mathfrak{k}^+}\alpha(y)}{\prod_{\alpha\in R_\mathfrak{g}^+}\alpha(y)}}\, Q_X(\lambda), \] where $Q_X$ is the index polynomial attached to $X$ as in (\ref{dim}). In other words, the index polynomial, up to an explicit constant, expresses the dependence on $\lambda$ of the (possibly zero) leading term in the Taylor expansion of the numerator of the character of $X_\lambda$ on the compact Cartan $T$. \end{thm} \begin{proof} The restriction to $K$ of any $G$-representation has a well defined distribution character, known as the $K$-character. The restriction of this $K$-character to the set of elliptic $G$-regular elements in $K$ is a function, equal to the function giving the $G$-character (see \cite{HC}, and also \cite{AS}, (4.4) and the appendix). Therefore Proposition \ref{propindex} and (\ref{index formula}) imply \begin{equation*} \mathop{\hbox {ch}}\nolimits_\mathfrak{g}(X_\lambda)(\exp ty)=\frac{\mathop{\hbox {ch}}\nolimits_\mathfrak{k}(I(X_\lambda))}{\mathop{\hbox {ch}}\nolimits_\mathfrak{k}(S^+-S^-)}(\exp ty). \end{equation*} Also, it is clear that \[ \lim_{t\to 0+} \mathop{\hbox {ch}}\nolimits_\mathfrak{k}(I(X_\lambda))(\exp ty)=\mathop{\hbox {ch}}\nolimits_\mathfrak{k}(I(X_\lambda))(e)=\mathop{\hbox {dim}}\nolimits I(X_\lambda)=Q_X(\lambda). 
\] Therefore the limit (\ref{indexpol}) is equal to \[ \lim_{t\to 0+} t^d \frac{\mathop{\hbox {ch}}\nolimits_\mathfrak{k}(I(X_\lambda))(\exp ty)}{\mathop{\hbox {ch}}\nolimits_\mathfrak{k}(S^+-S^-)(\exp ty)}=Q_X(\lambda)\lim_{t\to 0+} \frac{t^d}{\mathop{\hbox {ch}}\nolimits_\mathfrak{k}(S^+-S^-)(\exp ty)}. \] On the other hand, it is well known and easy to check that \[ \mathop{\hbox {ch}}\nolimits_\mathfrak{k}(S^+-S^-)=\frac{d_\mathfrak{g}}{d_\mathfrak{k}}, \] where $d_\mathfrak{g}$ (resp. $d_\mathfrak{k}$) denotes the Weyl denominator for $\mathfrak{g}$ (resp. $\mathfrak{k}$). It is immediate from the product formula \eqref{Weyldenominator} that \[ d_\mathfrak{g}(\exp ty)=t^{r_\mathfrak{g}}\prod_{\alpha\in R_\mathfrak{g}^+} \alpha(y) + \text{ higher order terms in } t \] and similarly \[ d_\mathfrak{k}(\exp ty)=t^{r_\mathfrak{k}}\prod_{\alpha\in R_\mathfrak{k}^+}\alpha(y) + \text{ higher order terms in } t. \] So we see that \[ \lim_{t\to 0+} \frac{t^d}{\mathop{\hbox {ch}}\nolimits_\mathfrak{k}(S^+-S^-)(\exp ty)}=\lim_{t\to 0+} t^{d-r_\mathfrak{g}+r_\mathfrak{k}} \frac{\prod_{\alpha\in R_\mathfrak{k}^+}\alpha(y)}{\prod_{\alpha\in R_\mathfrak{g}^+} \alpha(y)}. \] The theorem follows. \end{proof} We are now going to consider some examples (of discrete series representations) where we compare the index polynomial and the Goldie rank polynomial. To do so, we identify the compact Cartan subalgebra with the maximally split one using a Cayley transform. Recall that if $X$ is a discrete series representation with Harish-Chandra parameter $\lambda$, then \[ I(X)= \pm H_D(X)= \pm E_\lambda, \] where $E_\lambda$ denotes the $\widetilde{K}$-type with infinitesimal character $\lambda$. (The sign depends on the relation between the positive system defined by $\lambda$ and the fixed one used in Section \ref{section index} to define the index. See \cite{HP1}, Proposition 5.4, or \cite{HP2}, Corollary 7.4.5.)
The index polynomial $Q_X$ is then given by the Weyl dimension formula for this $\widetilde{K}$-type, i.e., by \begin{equation} \label{indexds} Q_X(\lambda)=\prod_{\alpha\in R_\mathfrak{k}^+} \frac{\langle\lambda,\alpha\rangle}{\langle\rho_\mathfrak{k},\alpha\rangle}. \end{equation} Comparing this with \cite{K2}, Proposition 3.1, we get: \begin{prop} \label{holods} Suppose $G$ is linear, semisimple and of Hermitian type. Let $X$ be the $(\mathfrak{g},K)$-module of a holomorphic discrete series representation. Then the index polynomial $Q_X$ coincides with the Goldie rank polynomial $P_X$ up to a scalar multiple. \end{prop} Of course, $Q_X$ is not always equal to $P_X$, since the degrees of these two polynomials are different in most cases. In the following we consider the example of discrete series representations for $SU(n,1)$. The choice is dictated by the existence of explicit formulas for the Goldie rank polynomials computed in \cite{K2}. The discrete series representations for $SU(n,1)$ with a fixed infinitesimal character can be parametrized by integers $i\in [0,n]$. To see how this works, we introduce some notation. First, we take for $K$ the group $S(U(n)\times U(1))\cong U(n)$. The compact Cartan subalgebra $\mathfrak{t}$ consists of diagonal matrices, and we identify it with $\mathbb{C}^{n+1}$ in the usual way. We make the usual choice for the dominant $\mathfrak{k}$-chamber $C$: it consists of those $\lambda\in\mathbb{C}^{n+1}$ for which \[ \lambda_1\geq\lambda_2\geq\dots \geq\lambda_n. \] Then $C$ is the union of $n+1$ $\mathfrak{g}$-chambers $D_0,\dots,D_n$, where $D_0$ consists of $\lambda\in C$ such that $\lambda_{n+1}\leq \lambda_n$, $D_n$ consists of $\lambda\in C$ such that $\lambda_{n+1}\geq \lambda_1$, and for $1\leq i\leq n-1$, \[ D_i=\{\lambda\in C\,\big|\, \lambda_{n-i}\geq \lambda_{n+1}\geq\lambda_{n-i+1}\}. 
\] Now for $i\in [0,n]$, and for $\lambda\in D_i$, which is regular for $\mathfrak{g}$ and analytically integral for $K$, we denote by $X_\lambda(i)$ the discrete series representation with Harish-Chandra parameter $\lambda$. We use the same notation for the corresponding $(\mathfrak{g},K)$-module. For $i=0$, $X_\lambda(i)$ is holomorphic and this case is settled by Proposition \ref{holods}; the result is that both the index polynomial and the Goldie rank polynomial are proportional to the Vandermonde determinant \begin{equation} \label{vandermonde} V(\lambda_1,\dots,\lambda_n)=\prod_{1\leq p<q\leq n}(\lambda_p-\lambda_q). \end{equation} The case $i=n$ of antiholomorphic discrete series representations is analogous. For $1\leq i\leq n-1$, the index polynomial of $X_\lambda(i)$ is still given by (\ref{vandermonde}). On the other hand, the character polynomial is up to a constant multiple given by the formula (6.5) of \cite{K2}, as the sum of two determinants. We note that King's expression can be simplified and that the character polynomial of $X_\lambda(i)$ is in fact equal to \begin{equation} \label{chards} \left|\begin{matrix} \lambda_1^{n-2}&\dots&\lambda_{n-i}^{n-2}&\lambda_{n-i+1}^{n-2}&\dots&\lambda_n^{n-2} \cr \lambda_1^{n-3}&\dots&\lambda_{n-i}^{n-3}&\lambda_{n-i+1}^{n-3}&\dots&\lambda_n^{n-3} \cr \vdots&&\vdots&\vdots&&\vdots \cr \lambda_1&\dots&\lambda_{n-i}&\lambda_{n-i+1}&\dots&\lambda_n \cr 1&\dots&1&0&\dots&0 \cr 0&\dots&0&1&\dots&1 \end{matrix} \right| \end{equation} \smallskip \noindent up to a constant multiple. For $i=1$, (\ref{chards}) reduces to the Vandermonde determinant $V(\lambda_1,\dots,\lambda_{n-1})$. Similarly, for $i=n-1$, we get $V(\lambda_2,\dots,\lambda_n)$. In these cases, the Goldie rank polynomial divides the index polynomial. For $2\leq i\leq n-2$, the Goldie rank polynomial is more complicated. 
For example, if $n=4$ and $i=2$, (\ref{chards}) becomes \[ -(\lambda_1-\lambda_2)(\lambda_3-\lambda_4)(\lambda_1+\lambda_2-\lambda_3-\lambda_4), \] and this does not divide the index polynomial. For $n=5$ and $i=2$, (\ref{chards}) becomes \begin{multline*} -(\lambda_1-\lambda_2)(\lambda_1-\lambda_3)(\lambda_2-\lambda_3)(\lambda_4-\lambda_5) \\ (\lambda_1\lambda_2+\lambda_1\lambda_3-\lambda_1\lambda_4-\lambda_1\lambda_5+\lambda_2\lambda_3 -\lambda_2\lambda_4-\lambda_2\lambda_5-\lambda_3\lambda_4-\lambda_3\lambda_5+\lambda_4^2+\lambda_4\lambda_5+\lambda_5^2), \end{multline*} and one can check that the quadratic factor is irreducible. More generally, for any $n\geq 4$ and $2\leq i\leq n-2$, the Goldie rank polynomial (\ref{chards}) is divisible by $(\lambda_p-\lambda_q)$ whenever $1\leq p<q\leq n-i$ or $n-i+1\leq p<q\leq n$. This is proved by subtracting the $q$th column from the $p$th column. On the other hand, if $1\leq p\leq n-i<q\leq n$, we claim that (\ref{chards}) is not divisible by $(\lambda_p-\lambda_q)$. Indeed, we can substitute $\lambda_q=\lambda_p$ into (\ref{chards}) and subtract the $q$th column from the $p$th column. After this we develop the determinant with respect to the $p$th column. The resulting sum of two determinants is equal to the Vandermonde determinant $V(\lambda_1,\dots,\lambda_{p-1},\lambda_{p+1},\dots,\lambda_n)$, and this is not identically zero. This proves that for $X=X_\lambda(i)$ the greatest common divisor of $P_X$ and $Q_X$ is \begin{equation} \label{gcd} \prod_{1\leq p<q\leq n-i}(\lambda_p-\lambda_q)\prod_{n-i+1\leq r<s\leq n}(\lambda_r-\lambda_s). \end{equation} Comparing with the simple roots $\Psi_i$ corresponding to the chamber $D_i$ described on p. 294 of \cite{K2}, we see that the linear factors of (\ref{gcd}) correspond to roots generated by the compact part of $\Psi_i$. 
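The small cases above are easy to check numerically. The following Python sketch (the function name is ours, for illustration only) evaluates the determinant (\ref{chards}) and compares it, at a sample point, with the factorization displayed above for $n=4$, $i=2$.

```python
import numpy as np

def chards_det(lam, i):
    """Evaluate the determinant (chards) for SU(n,1): its rows are the
    power rows lam^(n-2), ..., lam^1, followed by the two 0/1 indicator
    rows supported on the first n-i and the last i columns."""
    lam = np.asarray(lam, dtype=float)
    n = len(lam)
    power_rows = [lam ** k for k in range(n - 2, 0, -1)]
    row_a = np.concatenate([np.ones(n - i), np.zeros(i)])
    row_b = np.concatenate([np.zeros(n - i), np.ones(i)])
    return np.linalg.det(np.vstack(power_rows + [row_a, row_b]))

# For n = 4 and i = 2 the text factors (chards) as
# -(l1 - l2)(l3 - l4)(l1 + l2 - l3 - l4); check at a sample point:
l1, l2, l3, l4 = 1.0, 2.0, 3.0, 5.0
lhs = chards_det([l1, l2, l3, l4], 2)
rhs = -(l1 - l2) * (l3 - l4) * (l1 + l2 - l3 - l4)
assert np.isclose(lhs, rhs)   # both are 10 at this point
```

Evaluating at random sample points in the same way also confirms the divisibility statements above, since a polynomial identity that holds at sufficiently many generic points holds identically.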
On the other hand, the set of compact roots in $\Psi_i$ is equal to the $\tau$-invariant of $X_\lambda(i)$, as proved in \cite{HS}, Proposition 3.6 (see also \cite{K1}, Remark 4.5). Recall that the $\tau$-invariant of a $(\mathfrak{g},K)$-module $X$ consists of the simple roots $\alpha$ such that the translate of $X$ to the wall defined by $\alpha$ is 0; see \cite{V1}, Section 4. In particular, we have checked a special case of the following proposition. \begin{prop} \label{tau} Assume that $G$ is a real reductive Lie group in the Harish-Chandra class and that $G$ and $K$ have equal rank. Let $X$ be the discrete series representation of $G$ with Harish-Chandra parameter $\lambda$. Then the index polynomial $Q_X$ and the Goldie rank polynomial $P_X$ are both divisible by the product of linear factors corresponding to the roots generated by the $\tau$-invariant of $X$. \end{prop} \begin{proof} The $\tau$-invariant of $X$ is still given as above, as the compact part of the simple roots corresponding to $\lambda$. In particular, the roots generated by the $\tau$-invariant are all compact, and the corresponding factors divide $Q_X$, which is given by (\ref{indexds}). On the other hand, by \cite{V1}, Proposition 4.9, the Goldie rank polynomial is always divisible by the factors corresponding to roots generated by the $\tau$-invariant. We note that the result in \cite{V1} is about the Bernstein degree polynomial, which is up to a constant factor equal to the Goldie rank polynomial by \cite{J1II}, Theorem 5.7. \end{proof} Note that for $G=SU(n,1)$, the result we obtained is stronger than the conclusion of Proposition \ref{tau}. Namely, we proved that the product of linear factors corresponding to the roots generated by the $\tau$-invariant of $X$ is in fact the greatest common divisor $R$ of $P_X$ and $Q_X$. We note that it is easy to calculate the degrees of all the polynomials involved. Namely, if $2\leq i\leq n-2$, the degree of $R$ is $\binom{i}{2}+\binom{n-i}{2}$. 
Since $\mathop{\hbox {Dim}}\nolimits(X)=2n-1$ (see \cite{K2}), and $\sharp R_{\mathfrak{g}}^{+}=\binom{n+1}{2}$, the degree of $P_X$ is $\binom{n-1}{2}$. It follows that the degree of $P_X/R$ is $i(n-i)-(n-1)$. On the other hand, since the degree of $Q_X$ is $\sharp R_{\mathfrak{k}}^{+}=\binom{n}{2}$, the degree of $Q_X/R$ is $i(n-i)$. \section{Index polynomials and nilpotent orbits} \label{orbits} \begin{subequations}\label{Korbit} Assume again that we are in the setting \eqref{se:cohintro} of the introduction, so that $Y=Y_{\lambda_0}$ is an irreducible $(\mathfrak{g},K)$-module. (We use a different letter from the $X$ in the introduction as a reminder that we will soon be imposing some much stronger additional hypotheses on $Y$.) Recall from \eqref{multintro} the expression \begin{equation \Ass(Y_\lambda) = \coprod_{j=1}^r m^j_Y(\lambda) \overline{{\mathcal O}^j} \qquad (\lambda \in (\lambda_0 + \Lambda)^+), \end{equation} and the fact that each $m^j_Y$ extends to a polynomial function on $\mathfrak{t}^*$, which is a multiple of the Goldie rank polynomial: \begin{equation} m^j_Y = a^j_Y P_Y, \end{equation} with $a^j_Y$ a nonnegative rational number depending on $Y$. On the other hand, the Weyl dimension formula for $\mathfrak{k}$ defines a polynomial on the dual of the compact Cartan subalgebra $\mathfrak{t}^*$ in $\mathfrak{g}$, with degree equal to the cardinality $\sharp R_{\mathfrak{k}}^{+}$ of positive roots for $\mathfrak{k}$. Write $\sigma_{K}$ for the representation of the Weyl group $W_{\mathfrak{g}}$ generated by this polynomial. Suppose that $\sigma_{K}$ is a Springer representation, i.e., it is associated with a nilpotent $G_{\bC}$-orbit ${\mathcal O}_{K}$: \begin{equation}\label{assum1} \sigma_K \overset{\text{Springer}}\longleftrightarrow {\mathcal O}_K \subset \mathfrak{g}^*. \end{equation} Here $G_\mathbb{C}$ denotes a connected complex reductive algebraic group having Lie algebra $\mathfrak{g}$. 
Assume also that there is a Harish-Chandra module $Y$ of regular infinitesimal character $\lambda_0$ such that \begin{equation}\label{assum2} {\mathcal V}(\mathop{\hbox {gr}}\nolimits(\Ann(Y)))=\overline{{\mathcal O}_{K}}. \end{equation} Recall from the discussion before (\ref{eq:Korbit}) that ${\mathcal V}(\mathop{\hbox {gr}}\nolimits(\Ann(Y)))$ is the variety associated with the graded ideal of $\Ann(Y)$ in the symmetric algebra $S(\mathfrak{g})$. Our assumptions force the degree of the Goldie rank polynomial $P_Y$ attached to $Y$ to be \[ \sharp R_{\mathfrak{g}}^{+} - \mathop{\hbox {Dim}}\nolimits(Y)=\sharp R_{\mathfrak{g}}^{+}-\half\mathop{\hbox {dim}}\nolimits {\mathcal O}_{K}=\half(\mathop{\hbox {dim}}\nolimits \mathcal{N}-\mathop{\hbox {dim}}\nolimits\mathcal{O}_K)=\sharp R_{\mathfrak{k}}^{+}, \] where $\mathcal{N}$ denotes the cone of nilpotent elements in $\mathfrak{g}^{\star}$. In other words, the Goldie rank polynomial $P_Y$ has the same degree as the index polynomial $Q_Y$. We conjecture that for representations attached to ${\mathcal O}_K$, the index polynomial admits an expression analogous to \eqref{eq:multchar}. \end{subequations} \begin{conj} \label{conj} Assume that the $W_{\mathfrak{g}}$-representation $\sigma_K$ generated by the Weyl dimension formula for $\mathfrak{k}$ corresponds to a nilpotent $G_{\bC}$-orbit ${\mathcal O}_{K}$ via the Springer correspondence. Then for each $K_\bC$-orbit ${\mathcal O}_{K}^{j}$ on ${\mathcal O}_{K}\cap (\mathfrak{g}/\mathfrak{k})^{\star}$, there exists an integer $c_{j}$ such that for any Harish-Chandra module $Y$ for $G$ satisfying ${\mathcal V}(\mathop{\hbox {gr}}\nolimits(\Ann(Y))) \subset \overline{O_{K}}$, we have \begin{equation*} Q_{Y}=\sum_{j}c_{j}m_{Y}^j. \end{equation*} Here $Q_{Y}$ is the index polynomial attached to $Y$ as in Section \ref{section Weyl group}. \end{conj} \begin{ex} {\rm Consider $G=SL(2,\bR)$ with $K=SO(2)$. 
Then $\sigma_{K}$ is the trivial representation of $W_{\mathfrak{g}}\simeq\bZ/2\bZ$ and ${\mathcal O}_{K}$ is the principal nilpotent orbit. ${\mathcal O}_{K}$ has two real forms ${\mathcal O}_{K}^{1}$ and ${\mathcal O}_{K}^{2}$. One checks from our computations in Example \ref{exds_sl2} and from the table below that $c_{1}=1$ and $c_{2}=-1$. This shows that the conjecture is true in the case when $G=SL(2,\bR)$. \vspace*{0.2cm}\\ \begin{center} \begin{tabular}{|l|c|r|} \hline $\hspace*{2.5cm}Y$ & ${\mathcal V}(Y)$ & $Q_{Y}$ \\ \hline finite-dimensional modules & $\{0\}$ & $0$ \\ \hline holomorphic discrete series & ${\mathcal O}_{K}^{1}$ & $1$ \\ \hline antiholomorphic discrete series & ${\mathcal O}_{K}^{2}$ & $-1$ \\ \hline principal series & ${\mathcal O}_{K}^{1}\cup {\mathcal O}_{K}^{2}$ & $0$ \\ \hline \end{tabular} \end{center} \vspace*{0.5cm} Here ${\mathcal V}(Y)\subset {\mathcal V}(\mathop{\hbox {gr}}\nolimits(\Ann(Y)))$ is the associated variety of $Y$. } \end{ex} \begin{ex} \label{ex_su1n} {\rm Let $n>1$ and let $G=SU(1,n)$ with $K=U(n)$. Then ${\mathcal O}_K$ is the minimal nilpotent orbit of dimension $2n$. It has two real forms ${\mathcal O}_{K}^{1}$ and ${\mathcal O}_{K}^{2}$. The holomorphic and antiholomorphic discrete series representations $Y^1_\lambda$ and $Y^2_\lambda$ all have Gelfand-Kirillov dimension equal to $n$. By \cite{Ch}, Corollary 2.13, the respective associated cycles are equal to \[ \Ass(Y^i_\lambda)=m^i_{Y^i}(\lambda) {\mathcal O}_{K}^{i},\qquad i=1,2, \] with the multiplicity $m^i_{Y^i}(\lambda)$ equal to the dimension of the lowest $K$-type of $Y^i_\lambda$. The index of the holomorphic discrete series representations is the lowest $K$-type shifted by a one dimensional representation of $K$ with weight $\rho(\mathfrak{p}^-)$, so it has the same dimension as the lowest $K$-type. The situation for the antiholomorphic discrete series representations is analogous, but there is a minus sign. 
Hence \[ m^i_{Y^i}(\lambda) = (-1)^{i-1}Q_{Y^i}(\lambda),\qquad i=1,2. \] This already forces the coefficients $c_1$ and $c_2$ from Conjecture \ref{conj} to be 1 and -1 respectively. Since ${\mathcal O}_K$ is the minimal orbit, it follows that for infinite-dimensional $Y$, \[ {\mathcal V}(\mathop{\hbox {gr}}\nolimits(\Ann(Y))) \subseteq \overline{O_{K}}\quad\Rightarrow\quad {\mathcal V}(\mathop{\hbox {gr}}\nolimits(\Ann(Y))) = \overline{O_{K}}. \] \medskip If ${\mathcal V}(\mathop{\hbox {gr}}\nolimits(\Ann(Y))) = \overline{O_{K}}$ and $Y$ is irreducible, then $\mathcal{V}(Y)$ must be either $\overline{\mathcal{O}_K^1}$ or $\overline{\mathcal{O}_K^2}$. This follows from minimality of $\mathcal{O}_K$ and from \cite{V3}, Theorem 1.3. Namely, the codimension of the boundary of $\mathcal{O}_K^i$ in $\overline{\mathcal{O}_K^i}$ is $n\geq 2$. On the other hand, by \cite{KO}, Lemma 3.5, $\mathcal{V}(Y)=\overline{\mathcal{O}_K^i}$ implies $Y$ is holomorphic if $i=1$, respectively antiholomorphic if $i=2$. Let us assume $i=1$; the other case is analogous. It is possible to write $Y$ as a $\mathbb{Z}$-linear combination of generalized Verma modules; see for example \cite{HPZ}, Proposition 3.6. So we see that it is enough to check the conjecture assuming $Y$ is a generalized Verma module. In this case, one easily computes that $I(Y)$ is the lowest $K$-type of $Y$ shifted by the one dimensional $\widetilde{K}$-module with weight $\rho(\mathfrak{p}^-)$; see \cite{HPZ}, Lemma 3.2. So the index polynomial is the dimension of the lowest $K$-type. By \cite{NOT}, Proposition 2.1, this is exactly the same as the multiplicity $m^1_Y$ of $\overline{\mathcal{O}_K^1}$ in the associated cycle. This proves the conjecture in this case (with $c_1=1$). 
} \end{ex} Whenever $G$ is a simple group with a Hermitian symmetric space, the associated varieties ${\mathcal O}_K^1$ and ${\mathcal O}_K^2$ of holomorphic and antiholomorphic discrete series are real forms of a complex orbit ${\mathcal O}_K$ attached by the Springer correspondence to $\sigma_K$. The argument above proves Conjecture \ref{conj} for holomorphic and antiholomorphic representations. But in general there can be many more real forms of ${\mathcal O}_K$, and the full statement of Conjecture \ref{conj} is not so accessible. \medskip We mention that neither of the two assumptions (\ref{assum1}) and (\ref{assum2}) above is automatically fulfilled. Below, we list the classical groups for which the assumption (\ref{assum1}) is satisfied, i.e., the classical groups for which $\sigma_{K}$ is a Springer representation. To check whether $\sigma_K$ is a Springer representation, we proceed as follows (see \cite{Car}, Chapters 11 and 13): \begin{itemize} \item[(i)] we identify $\sigma_K$ as a Macdonald representation; \item[(ii)] we compute the symbol of $\sigma_K$; \item[(iii)] we write down the partition associated with this symbol; \item[(iv)] we check whether the partition corresponds to a complex nilpotent orbit.
\end{itemize} Recall that complex nilpotent orbits in classical Lie algebras are in one-to-one correspondence with the set of partitions $\lbrack d_1,\cdots,d_k\rbrack$ with $d_1\geq d_2\geq\cdots\geq d_k\geq 1$ such that (see \cite{CM}, Chapter 5): \begin{itemize} \item[$\bullet$] $d_{1}+d_{2}+\cdots+d_{k}=n$, when $\mathfrak{g}\simeq\mathfrak{s}\mathfrak{l}(n,\mathbb{C})$; \item[$\bullet$] $d_{1}+d_{2}+\cdots+d_{k}=2n+1$ and the even $d_j$ occur with even multiplicity, when $\mathfrak{g}\simeq\mathfrak{s}\mathfrak{o} (2n+1,\mathbb{C})$; \item[$\bullet$] $d_{1}+d_{2}+\cdots+d_{k}=2n$ and the odd $d_j$ occur with even multiplicity, when $\mathfrak{g}\simeq\mathfrak{s}\mathfrak{p}(2n,\mathbb{C})$; \item[$\bullet$] $d_{1}+d_{2}+\cdots+d_{k}=2n$ and the even $d_j$ occur with even multiplicity, when $\mathfrak{g}\simeq\mathfrak{s}\mathfrak{o} (2n,\mathbb{C})$; except that the partitions having all the $d_j$ even and occurring with even multiplicity are each associated to {\em two} orbits. \end{itemize} For example, when $G=SU(p,q)$, with $q\geq p\geq 1$, the Weyl group $W_\mathfrak{g}$ is the symmetric group $S_{p+q}$, and $W_\mathfrak{k}$ can be identified with the subgroup $S_p\times S_q$. The representation $\sigma_K$ is parametrized, as a Macdonald representation, by the partition $\lbrack 2^p,1^{q-p}\rbrack$ (see \cite{M} or Proposition 11.4.1 in \cite{Car}). This partition corresponds to a $2pq$-dimensional nilpotent orbit, so $\sigma_K$ is Springer. Note that when $\mathfrak{g}$ is of type $A_n$, there is no symbol to compute, and any irreducible representation of $W_\mathfrak{g}$ is a Springer representation. When $G=SO_e(2p,2p+1)$, with $p\geq 1$, the group $W_\mathfrak{k}$ is generated by a root subsystem of type $D_p\times B_p$. 
In this case, $\sigma_K$ is parametrized by the pair of partitions $(\lbrack \alpha\rbrack,\lbrack\beta\rbrack)=(\lbrack 1^p\rbrack,\lbrack1^p\rbrack)$ and its symbol is the array \[ \begin{pmatrix} 0&&2&&3&&\cdots&&p+1\cr &1&&2&&\cdots &&p& \end{pmatrix}. \] (See \cite{L} or Proposition 11.4.2 in \cite{Car}.) The partition of $4p+1$ associated with this symbol is $\lbrack 3,2^{2p-2},1^2\rbrack$. This partition corresponds to a $2p(2p+1)$-dimensional nilpotent orbit, i.e., $\sigma_K$ is a Springer representation. When $G=Sp(p,q;\mathbb{R})$, with $q> p\geq 1$, the Weyl group $W_\mathfrak{k}$ is generated by a root subsystem of type $C_p\times C_q$ so that $\sigma_K$ is parametrized by the pair of partitions $(\lbrack \alpha\rbrack,\lbrack\beta\rbrack)=(\lbrack \emptyset\rbrack,\lbrack2^p,1^{q-p}\rbrack)$. Its symbol is the array \[ \begin{pmatrix} 0&&1&&2&&\cdots&&q\cr &1&&2&&\cdots &&q+1& \end{pmatrix}, \] where in the second line there is a jump from $q-p$ to $q-p+2$. (See \cite{L} or Proposition 11.4.3 in \cite{Car}.) The partition of $2p+2q$ associated with this symbol is $\lbrack 3,2^{2p-2},1^{2(q-p)+1}\rbrack$. This partition does not correspond to a nilpotent orbit, i.e., $\sigma_K$ is not a Springer representation. 
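The parity conditions recalled above are mechanical to verify. As an illustration, here is a small Python sketch (the helper name is ours) that tests whether a partition labels a complex nilpotent orbit in a classical Lie algebra, and reproduces the two computations just carried out: $\lbrack 3,2^{2p-2},1^{2}\rbrack$ does label an orbit in $\mathfrak{s}\mathfrak{o}(4p+1,\mathbb{C})$, while $\lbrack 3,2^{2p-2},1^{2(q-p)+1}\rbrack$ fails the parity test in $\mathfrak{s}\mathfrak{p}(2p+2q,\mathbb{C})$.

```python
from collections import Counter

def is_nilpotent_orbit_partition(parts, lie_type, n):
    """Check whether the weakly decreasing partition `parts` labels a
    complex nilpotent orbit, following the classification recalled above
    (cf. [CM], Chapter 5).

    lie_type: 'A' for sl(n, C)    -> parts sum to n
              'B' for so(2n+1, C) -> sum 2n+1, even parts have even mult.
              'C' for sp(2n, C)   -> sum 2n,   odd parts have even mult.
              'D' for so(2n, C)   -> sum 2n,   even parts have even mult.
    """
    mult = Counter(parts)
    total = sum(parts)
    if lie_type == 'A':
        return total == n
    if lie_type == 'B':
        return total == 2 * n + 1 and all(
            mult[d] % 2 == 0 for d in mult if d % 2 == 0)
    if lie_type == 'C':
        return total == 2 * n and all(
            mult[d] % 2 == 0 for d in mult if d % 2 == 1)
    if lie_type == 'D':
        return total == 2 * n and all(
            mult[d] % 2 == 0 for d in mult if d % 2 == 0)
    raise ValueError(lie_type)

# SO_e(2p, 2p+1) with p = 3: the partition [3, 2^(2p-2), 1^2] of
# 4p+1 = 13 labels an orbit in so(13, C), i.e. type B with n = 2p = 6.
p = 3
parts_B = [3] + [2] * (2 * p - 2) + [1, 1]
print(is_nilpotent_orbit_partition(parts_B, 'B', 2 * p))   # True

# Sp(p, q; R) with p = 2, q = 3: the partition [3, 2^(2p-2), 1^(2(q-p)+1)]
# of 2p+2q = 10 fails in sp(10, C) (type C, n = p+q = 5), since the odd
# part 3 occurs with odd multiplicity.
p, q = 2, 3
parts_C = [3] + [2] * (2 * p - 2) + [1] * (2 * (q - p) + 1)
print(is_nilpotent_orbit_partition(parts_C, 'C', p + q))   # False
```

The same routine can be run against each row of the table below.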
\scriptsize{\begin{table}[ht] \addtolength{\tabcolsep}{-6pt} \centering \scalebox{0.81}{ \begin{tabular}{c c c c c} \hline\hline & & & &\\${\bf G}$ & {\bf Generator for $\sigma_{K}$}& {\bf Springer ?} & ${\bf {\mathcal O}_{K}}$ & ${\bf \mathop{\hbox {dim}}\nolimits_{\bb C}({\mathcal O}_{K})}$\\[0.5ex] & & & &\\ \hline\hline & & & &\\ $SU(p,q)$, $q\geq p\geq 1$&\tiny{$\prod\limits_{\stackrel{1\leq i<j\leq p}{p+1\leq i<j\leq p+q}}(X_{i}-X_{j})$ for $p\geq 2$} &\tiny{Yes}&\tiny{$\lbrack 2^p,1^{q-p}\rbrack$}&\tiny{$2pq$} \\[8ex] & $\prod\limits_{2\leq i<j\leq q+1}(X_{i}-X_{j})$ for $q\geq 2$, $p=1$ & & (minimal orbit if $p=1$)& \\[5ex] & $\sigma_{K}$ is trivial for $p=q=1$& & (principal orbit if $p=q=1$)& \\ & & & &\\ \hline\hline & & & &\\ $SO_{e}(2p,2p+1)$, $p\geq 1$& $\prod\limits_{\stackrel{1\leq i<j\leq p}{p+1\leq i<j\leq 2p}}(X_{i}^{2}-X_{j}^{2})\prod\limits_{p+1\leq i\leq 2p}X_{i}$ for $p\geq 2$&Yes &$\lbrack 3,2^{2p-2},1^{2}\rbrack$& $2p(2p+1)$ \\[8ex] & $X_{2}$ for $p=1$& &(subregular orbit if $p=1$) & \\ & & & &\\ \hline\hline & & & &\\ $SO_{e}(2p,2p-1)$, $p\geq 1$\;\;\;\;\;\;& $\prod\limits_{\stackrel{1\leq i<j\leq p}{p+1\leq i<j\leq 2p-1}}(X_{i}^{2}-X_{j}^{2})\prod\limits_{i=p+1}^{2p-1} X_{i}$ for $p\geq 2$&Yes &$\lbrack 3,2^{2p-2}\rbrack$& $2p(2p-1)$ \\ & $\sigma_{K}$ is trivial for $p=1$& &(principal orbit if $p=1$)& \\ & & & &\\ \hline\hline & & & &\\ $SO_{e}(2,2q+1)$, $q\geq 2$& $\prod\limits_{2\leq i<j\leq q+1}(X_{i}^{2}-X_{j}^{2})\prod\limits_{i=2}^{q+1} X_{i}$&Yes &$\lbrack 3,1^{2q}\rbrack$& $2(2q+1)$ \\ & & & & \\ & & & &\\ \hline\hline & & & &\\ $SO_{e}(2p,2q+1)$ &$\prod\limits_{\stackrel{1\leq i<j\leq p}{p+1\leq i<j\leq p+q}}(X_{i}^{2}-X_{j}^{2})\prod\limits_{i=p+1}^{p+q}X_{i}$ &Yes & $\lbrack 3,2^{2p-2},1^{2(q-p)+2}\rbrack$ &$2p(2q+1)$ \\ $q\geq p+1\geq 3$& & & & \\ & & & &\\ \hline\hline & & & &\\ $SO_{e}(2p,2q+1)$ &$\prod\limits_{\stackrel{1\leq i<j\leq p}{p+1\leq i<j\leq p+q}}(X_{i}^{2}-X_{j}^{2})\prod\limits_{i=p+1}^{p+q} X_{i}$ for $q\geq
2$&No & {\huge -} &{\huge -} \\[8ex] $p\geq q+2\geq 2$ & $X_{p+1}\prod\limits_{1\leq i<j\leq p}(X_{i}^{2}-X_{j}^{2})$ for $q=1$& & & \\[5ex] & $\prod\limits_{1\leq i<j\leq p}(X_{i}^{2}-X_{j}^{2})$ for $q=0$& & &\\ & & & &\\ \hline\hline & & & &\\ $Sp(2n,\bb R)$, $n\geq 1$ &$\prod\limits_{1\leq i<j\leq n}(X_{i}-X_{j})$ for $n\geq 2$& Yes & $\lbrack 2^n\rbrack$&$n(n+1)$ \\[5ex] & $\sigma_{K}$ is trivial for $n=1$& & (principal orbit if $n=1$)& \\ & & & &\\ \hline\hline & & & &\\ $Sp(p,q;\bb R)$, $q\geq p\geq 1$ & \tiny{$\prod\limits_{\stackrel{1\leq i<j\leq p}{p+1\leq i<j\leq p+q}}(X_{i}^{2}-X_{j}^{2})\prod\limits_{i=1}^{p+q} X_{i}$ for $p\geq 2$} & No & {\huge -} & {\huge -} \\[8ex] & $\prod\limits_{2\leq i<j\leq q+1}(X_{i}^{2}-X_{j}^{2})\prod\limits_{1\leq i\leq q+1}X_{i}$ for $q\geq 2$, $p=1$& & \\[5ex] & $X_{1}X_{2}$ for $p=q=1$ & && \\ & & & &\\ \hline\hline & & & &\\ $SO_{e}(2p,2q)$, $q\geq p\geq 1$ & $\prod\limits_{\stackrel{1\leq i<j\leq p}{p+1\leq i<j\leq p+q}}(X_{i}^{2}-X_{j}^{2})$ for $p\geq 2$& Yes &$\lbrack 3,2^{2p-2},1^{2(q-p)+1}\rbrack$&$4pq$ \\[8ex] & $\prod\limits_{2\leq i<j\leq q+1}(X_{i}^{2}-X_{j}^{2})$ for $q\geq 2$, $p=1$ & & & \\[5ex] & $\sigma_{K}$ is trivial for $p=q=1$& & (principal orbit for $p=q=1$) & \\ & & & &\\ \hline\hline & & & &\\ $SO^\star(2n)$, $n\geq 1$ & $\prod\limits_{1\leq i<j\leq n}(X_{i}-X_{j})$ for $n\geq 2$& Yes & $\lbrack 2^{n}\rbrack$ for $n$ even & $n(n-1)$ \\[5ex] & $\sigma_{K}$ is trivial for $n=1$ & & $\lbrack 2^{n-1},1^{2}\rbrack$ for $n$ odd & \\ & & & (trivial orbit if $n=1$)& \\ & & & (minimal orbit if $n=3$)& \\ [1ex] \hline\hline\end{tabular} }\label{tableSpringer} \end{table}} \normalsize \clearpage The following theorem provides a sufficient condition for both assumptions (\ref{assum1}) and (\ref{assum2}) to hold. In contrast with the previous table, it includes exceptional groups. 
\begin{thm}\label{OK} Suppose $G$ is connected semisimple, $T$ is a compact Cartan subgroup in $G$ contained in $K$, and $\lambda_0$ is the Harish-Chandra parameter for a discrete series representation $Y_0$ of $G$. Assume that the set of integral roots for $\lambda_0$ is precisely the set of compact roots, i.e., \begin{equation}\label{exception} \{\alpha \in \Delta(\mathfrak{g},\mathfrak{t})|\lambda_0(\alpha^\vee) \in {\mathbb Z} \}=\Delta(\mathfrak{k},\mathfrak{t}). \end{equation} Then $\sigma_K$ is the Springer representation for a complex nilpotent orbit ${\mathcal O}_K$. Let $\{Y_{\lambda_0+\mu} | \mu\in\Lambda\}$ be the Hecht-Schmid coherent family of virtual representations corresponding to $Y_0$ and form the virtual representation $$Y \overset{\text{def.}}= \sum_{w \in W_\mathfrak{k}} (-1)^w Y_{w\lambda_0}.$$ Then $Y$ is a nonzero integer combination of irreducible representations having associated variety of annihilator equal to $\overline{{\mathcal O}_K}$. \end{thm} \begin{proof} The character of $Y$ on the compact Cartan $T$ is a multiple (by the cardinality of $W_\mathfrak{k}$) of the character of $Y_0$. Consequently the character of $Y$ on $T$ is not zero, so $Y$ is not zero. By construction the virtual representation $Y$ transforms under the coherent continuation action of the integral Weyl group $W(\lambda_0) = W_\mathfrak{k}$ by the sign character of $W(\lambda_0)$. By the theory of $\tau$-invariants of Harish-Chandra modules, it follows that every irreducible constituent of $Y$ must have every simple integral root in its $\tau$-invariant. At any regular infinitesimal character $\lambda_0$ there is a unique maximal primitive ideal $J(\lambda_0)$, characterized by having every simple integral root in its $\tau$-invariant.
The Goldie rank polynomial for this ideal is a multiple of \[ q_0(\lambda) = \prod_{\langle\alpha^\vee,\lambda_0 \rangle \in \mathbb{N}} \langle \alpha^\vee,\lambda\rangle; \] so the Goldie rank polynomial for every irreducible constituent of $Y$ is a multiple of $q_0$. The Weyl group representation generated by $q_0$ is $\sigma_K$ (see \eqref{Korbit}); so by \cite{BV1}, it follows that the complex nilpotent orbit ${\mathcal O}_0$ attached to the maximal primitive ideal $J(\lambda_0)$ must correspond to $\sigma_K$ as in \eqref{assum1}. At the same time, we have seen that the (nonempty!) set of irreducible constituents of the virtual representation $Y$ all satisfy \eqref{assum2}. \end{proof} Theorem \ref{OK} applies to any real form of $E_6$, $E_7$ and $E_8$, and more generally to any equal rank real form of one root length. It applies as well to $G_2$ (both split and compact forms; indeed, the theorem applies to the compact form of any $G$, and in that case ${\mathcal O}_K =\{0\}$). However, for the split $F_4$ and taking $\lambda_0$ a discrete series parameter for the nonlinear double cover, the integral root system (type $C_4$) strictly contains the compact roots (type $C_3 \times C_1$). So the above theorem does not apply to split $F_4$. Nevertheless the representation $\sigma_K$ does correspond to a (special) nilpotent orbit ${\mathcal O}_K$. At regular integral infinitesimal character, there are (according to the representation-theoretic software {\tt atlas}; see \cite{atlas}) exactly $27$ choices for an irreducible representation $Y$ as in (\ref{assum2}). There are two real forms of the orbit ${\mathcal O}_K$. The $Y$'s come in three families (``two-sided cells'') of nine representations each, with essentially the same associated variety in each family. One of the three families contains an $A_\mathfrak{q}(\lambda)$ (with Levi of type $B_3$) and therefore has associated variety equal to one of the two real forms.
In particular, the condition (\ref{exception}) is sufficient but not necessary for assumptions (\ref{assum1}) and (\ref{assum2}) to hold. Note that for rank one $F_4$, the representation $\sigma_K$ is not in the image of the Springer correspondence. For the classical groups, Theorem \ref{OK} applies to all the cases of one root length, explaining all the ``yes'' answers in Table \ref{tableSpringer} for types $A$ and $D$. In the case of two root lengths, the hypothesis of Theorem \ref{OK} can be satisfied in the noncompact case exactly when $G$ is Hermitian symmetric (so the cases $SO_e(2,2n-1)$ and $Sp(2n,\mathbb{R})$; more precisely, for appropriate nonlinear coverings of these groups). We do not know a simple general explanation for the remaining ``yes'' answers in the table. Just as for $F_4$, the integral root systems for a discrete series parameter $\lambda_0$ are too large for Theorem \ref{OK}: in the case of $SO_e(2p,2q+1)$, for example, the root system for $K$ is $D_p\times B_q$, but (for $p\ge 2$) the integral root system cannot be made smaller than $B_p \times B_q$.
\section{Introduction} \begin{figure}[ht] \centering \includegraphics[width=1.0\textwidth]{figures/introduction.jpg} \caption{\small{ Comparison between non-local (NL) and \textbf{compact generalized non-local (CGNL)} networks on recognizing an action video of kicking the ball. Given the \emph{reference patch} (green rectangle) in the first frame, we visualize for each method the highly related responses in the other frames by thresholding the feature space. The CGNL network outperforms the original NL network in capturing the ball, which is not only at a long-range distance from the reference patch but also corresponds to different channels in the feature map. }} \label{fig:introduction} \end{figure} Capturing spatio-temporal dependencies between spatial pixels or temporal frames plays a key role in the tasks of fine-grained object and action classification. Modeling such interactions among images and videos is the major topic of various feature extraction techniques, including SIFT, LBP, Dense Trajectory~\cite{dense_trajectories}, etc. In the past few years, deep neural networks have automated the feature design pipeline by stacking multiple end-to-end convolutional or recurrent modules, each of which models correlations within local spatial or temporal regions. In general, capturing long-range dependencies among images or videos still requires stacking many of these modules, which greatly hinders learning and inference efficiency. Recent work~\cite{understanding_receptive_field} also suggests that stacking more layers cannot always enlarge the effective receptive field enough to capture sufficient local relations. Inspired by the classical non-local means for image filtering, the recently proposed non-local neural network~\cite{non-local} addresses this challenge by directly modeling the correlation between any two positions in the feature maps in a single module.
Without bells and whistles, the non-local method can greatly improve the performance of existing networks on many video classification benchmarks. Despite its strong performance, the original non-local network only considers the global spatio-temporal correlation by merging channels, and it might miss the subtle but important cross-channel clues for discriminating fine-grained objects or actions. For instance, the body, the ball and their interaction are all necessary for describing the action of kicking the ball in Fig.~\ref{fig:introduction}, while the original non-local operation learns to focus on the body-part relations but neglects the body-ball interactions that usually correspond to different channels of the input features. To improve the effectiveness in fine-grained object and action recognition tasks, this work extends the non-local module by learning explicit correlations among all of the elements across the channels. First, this extension scales up the representation power of the non-local operation to attend to the interactions between subtle object parts (\emph{e.g.}, the body and ball in Fig.~\ref{fig:introduction}). Second, we propose a compact representation for various kernel functions to address the heavy computational burden. We show that, as a self-contained module, the compact generalized non-local (CGNL) module provides steady improvements in classification tasks. Third, we also investigate grouped CGNL blocks, which model the correlations across channels within each group. We evaluate the proposed CGNL method on the tasks of fine-grained classification and action recognition.
Extensive experimental results show that: 1) The CGNL network is as easy to optimize as the original non-local network; 2) Compared with the non-local module, the CGNL module captures richer features and denser clues for prediction, as shown in Figure~\ref{fig:introduction}, which leads to results substantially better than those of the original non-local module. Moreover, in the appendix of additional experiments, the CGNL network also achieves higher accuracy than the baseline on the large-scale ImageNet dataset~\cite{imagenet}. \section{Related Works} \textbf{Channel Correlations:} Sharing the same convolution kernel among the channels of a layer in a ConvNet~\cite{lenet-5}, which aggregates the channels of the feature maps by sum pooling, can be seen as a basic way to capture correlations among channels. The SENet~\cite{senet} may be the first work that explicitly models the interdependencies between the channels of its spatial features. It aims to select the useful feature maps and suppress the others, but only considers the global information of each channel. Inspired by~\cite{non-local}, we present the generalized non-local (GNL) module, which generalizes the non-local (NL) module to learn the correlations between any two positions across the channels. Compared to the SENet, we model the interdependencies among channels in an explicit and dense manner. \textbf{Compact Representation:} After further investigation, we find that the non-local module contains a second-order feature space (Sect.~\ref{subsect:general non-local formulation}), which has been widely used in previous computer vision tasks, \textit{e.g.}, SIFT~\cite{sift}, Fisher encoding~\cite{fisher}, bilinear models~\cite{bilinear,cbp} and segmentation~\cite{order2_seg}. However, such a second-order feature space involves high dimensions and heavy computational burdens.
In the area of kernel learning~\cite{scholkopf2001learning}, there are many prior works, such as compact bilinear pooling (CBP)~\cite{cbp}, that use Tensor Sketching~\cite{tensor-sketching} to address this problem. However, this type of method is not ideal, because it cannot keep the computation light for the various sizes of the sketching vectors. Fortunately, the whole non-local operation can be viewed as a trilinear form, which can be computed efficiently via the associative law of matrix multiplication. For other types of pairwise functions, such as the embedded Gaussian or the RBF~\cite{rbf}, we propose a tight approximation by using the Taylor expansion. \section{Approach} In this section, we introduce a general formulation of the proposed generalized non-local operation. We then show that the original non-local operation and bilinear pooling are special cases of this formulation. After that, we illustrate that the generalized non-local operation can be seen as a trilinear matrix product and show how to implement our generalized non-local (GNL) module in a compact representation. \subsection{Review of Non-local Operation} \label{subsect:general non-local formulation} We begin by briefly reviewing the original non-local operation~\cite{non-local} in matrix form. Suppose that an image or video is given to the network and let $\mathbf{X} \in \mathbb{R}^{N \times C}$ denote (see notation\footnote{Bold capital letters denote a matrix $\mathbf{X}$, bold lower-case letters a column vector $\mathbf{x}$. $\mathbf{x}_i$ represents the $i^{th}$ column of the matrix $\mathbf{X}$. $x_{ij}$ denotes the scalar in the $i^{th}$ row and $j^{th}$ column of the matrix $\mathbf{X}$. All non-bold letters represent scalars. $\mathbf{1}_{m} \in \mathbb{R}^{m}$ is a vector of ones. $\mathbf{I}_n \in \mathbb{R}^{n \times n}$ is an identity matrix. $\vect(\mathbf{X})$ denotes the vectorization of matrix $\mathbf{X}$.
$\mathbf{X} \circ \mathbf{Y}$ and $\mathbf{X} \otimes \mathbf{Y}$ are the Hadamard and Kronecker products of matrices.}) the input feature map of the non-local module, where $C$ is the number of channels. For notational clarity, we collapse all the spatial (width $W$ and height $H$) and temporal (video length $T$) positions into one dimension, \emph{i.e.}, $N=HW$ or $N=HWT$. To capture long-range dependencies across the whole feature map, the original non-local operation computes the response $\mathbf{Y} \in \mathbb{R}^{N \times C}$ as the weighted sum of the features at all positions, \begin{align} \label{eq:nl} \mathbf{Y} = f\big(\theta(\mathbf{X}), \phi(\mathbf{X}) \big) g(\mathbf{X}), \end{align} where $\theta(\cdot), \phi(\cdot), g(\cdot)$ are learnable transformations on the input. In \cite{non-local}, the authors suggest using $1\times1$ or $1 \times 1 \times 1$ convolutions for simplicity, \emph{i.e.}, the transformations can be written as \begin{align} \label{eq:linear} \theta(\mathbf{X}) = \mathbf{X} \mathbf{W}_\theta \in \mathbb{R}^{N \times C}, \quad \phi(\mathbf{X}) = \mathbf{X} \mathbf{W}_\phi \in \mathbb{R}^{N \times C}, \quad g(\mathbf{X}) = \mathbf{X} \mathbf{W}_g \in \mathbb{R}^{N \times C}, \end{align} parameterized by the weight matrices $\mathbf{W}_\theta, \mathbf{W}_\phi, \mathbf{W}_g \in \mathbb{R}^{C \times C}$, respectively. The pairwise function $f(\cdot, \cdot) : \mathbb{R}^{N \times C} \times \mathbb{R}^{N \times C} \rightarrow \mathbb{R}^{N \times N}$ computes the affinity between all positions (space or space-time). There are multiple choices for $f$, among which the dot product is perhaps the simplest, \emph{i.e.}, \begin{align} \label{eq:f} f\big( \theta(\mathbf{X}), \phi(\mathbf{X}) \big) = \theta(\mathbf{X}) \phi(\mathbf{X})^\top.
\end{align} Plugging Eq.~\ref{eq:linear} and Eq.~\ref{eq:f} into Eq.~\ref{eq:nl} yields a trilinear interpretation of the non-local operation, \begin{align} \label{eq:tri} \mathbf{Y} = \mathbf{X} \mathbf{W}_\theta \mathbf{W}_\phi^\top \mathbf{X}^\top \mathbf{X} \mathbf{W}_g, \end{align} where the pairwise matrix $\mathbf{X} \mathbf{W}_\theta \mathbf{W}_\phi^\top \mathbf{X}^\top \in \mathbb{R}^{N \times N}$ encodes the similarity between any two locations of the input feature. The non-local operation can be related to the self-attention module~\cite{VaswaniSPUJGKP17}, based on the fact that each position (row) in the result $\mathbf{Y}$ is a linear combination of all the positions (rows) of $\mathbf{X} \mathbf{W}_g$, weighted by the corresponding row of the pairwise matrix. \subsection{Review of Bilinear Pooling} Analogous to the conventional kernel trick~\cite{scholkopf2001learning}, the idea of bilinear pooling~\cite{bilinear} has recently been adopted in ConvNets to enhance the feature representation in various tasks, such as fine-grained classification, person re-identification, and action recognition. In brief, bilinear pooling models pairwise feature interactions using the explicit outer product at the final classification layer: \begin{align} \label{eq:bil} \mathbf{Z} = \mathbf{X}^\top \mathbf{X} \in \mathbb{R}^{C \times C}, \end{align} where $\mathbf{X} \in \mathbb{R}^{N \times C}$ is the input feature map generated by the last convolutional layer. Each element of the final descriptor, $z_{c_1 c_2} = \sum_n x_{n c_1} x_{n c_2}$, sum-pools over all locations $n = 1, \cdots, N$ the bilinear product $x_{n c_1} x_{n c_2}$ of the corresponding channel pair $c_1, c_2 = 1, \cdots, C$.
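The sum-pooled outer-product view of Eq.~\ref{eq:bil} is easy to verify numerically. A minimal NumPy sketch with toy sizes (the shapes are illustrative, not those of a real network):

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 8, 4                        # toy sizes: N positions, C channels
X = rng.standard_normal((N, C))    # stand-in for the last conv layer's feature map

# Eq. (bil): the C x C bilinear descriptor as a single matrix product
Z = X.T @ X

# element-wise view: z_{c1 c2} sum-pools the product x_{n c1} * x_{n c2} over n
Z_loop = np.zeros((C, C))
for n in range(N):
    Z_loop += np.outer(X[n], X[n])

assert np.allclose(Z, Z_loop)
```

Note that the descriptor is symmetric and discards all spatial information, since the position index $n$ is summed out.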
Despite the distinct design motivation, it is interesting to see that bilinear pooling (Eq.~\ref{eq:bil}) can be viewed as a special case of the second-order term (Eq.~\ref{eq:f}) in the non-local operation if we set \begin{align} \theta(\mathbf{X}) = \mathbf{X}^\top \in \mathbb{R}^{C \times N}, \quad \phi(\mathbf{X}) = \mathbf{X}^{\top} \in \mathbb{R}^{C \times N}. \end{align} \subsection{Generalized Non-local Operation} The original non-local operation aims to directly capture long-range dependencies between any two positions in one layer. However, such dependencies are encoded in a joint location-wise matrix $f(\theta(\mathbf{X}), \phi(\mathbf{X}))$ that aggregates all channel information together. On the other hand, channel-wise correlations have recently been explored in both discriminative~\cite{bilinear} and generative~\cite{Ustyuzhaninov2017WhatDI} models through covariance analysis across channels. Inspired by these works, we generalize the original non-local operation to model long-range dependencies between any positions of any channels. We first reshape the outputs of the transformations (Eq.~\ref{eq:linear}) on $\mathbf{X}$ by merging the channel dimension into the position dimension: \begin{align} \label{eq:linear_g} \theta(\mathbf{X}) = \vect(\mathbf{X} \mathbf{W}_\theta) \in \mathbb{R}^{NC}, \phi(\mathbf{X}) = \vect(\mathbf{X} \mathbf{W}_\phi) \in \mathbb{R}^{NC}, g(\mathbf{X}) = \vect(\mathbf{X} \mathbf{W}_g) \in \mathbb{R}^{NC} . \end{align} By lifting the row space of the underlying transformations, our generalized non-local (GNL) operation pursues the same goal as Eq.~\ref{eq:nl} and computes the response $\mathbf{Y} \in \mathbb{R}^{N \times C}$ as: \begin{align} \label{eq:tri_g} \vect(\mathbf{Y}) = f \big( \vect(\mathbf{X} \mathbf{W}_\theta), \vect(\mathbf{X} \mathbf{W}_\phi) \big) \vect(\mathbf{X} \mathbf{W}_g).
\end{align} Compared to the original non-local operation (Eq.~\ref{eq:tri}), GNL utilizes a more general pairwise function $f(\cdot, \cdot): \mathbb{R}^{NC} \times \mathbb{R}^{NC} \rightarrow \mathbb{R}^{NC \times NC}$ that can differentiate between pairs at the same location but in different channels. This richer similarity greatly augments the non-local operation in discriminating fine-grained object parts or action snippets, which usually correspond to channels of the input feature. Compared to bilinear pooling (Eq.~\ref{eq:bil}), which can only be used after the last convolutional layer, GNL maintains the input size and can thus be flexibly plugged between any network blocks. In addition, bilinear pooling neglects spatial correlations, which are preserved in GNL. Recently, the idea of dividing channels into groups has been established as a very effective technique for increasing the capacity of ConvNets. Well-known examples include Xception~\cite{xception}, MobileNet~\cite{mobilenet}, ShuffleNet~\cite{shufflenet}, ResNeXt~\cite{resnext} and Group Normalization~\cite{gn}. Given its simplicity, we also adopt the channel grouping idea in GNL by grouping all $C$ channels into $G$ groups, each of which contains $C'=C/G$ channels of the input feature. We then perform the GNL operation independently for each group to compute $\mathbf{Y}'$ and concatenate the results along the channel dimension to restore the full response $\mathbf{Y}$. \subsection{Compact Representation} \label{subsect:compact represnetation} A straightforward implementation of GNL (Eq.~\ref{eq:tri_g}) is prohibitive because of the $NC \times NC$ pairwise matrix, whose size grows quadratically with the channel number $C$. Although the channel grouping technique reduces the channel number from $C$ to $C/G$, the overall computational complexity is still much higher than that of the original non-local operation.
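To make the cost concrete, here is a naive NumPy sketch of the GNL operation of Eq.~\ref{eq:tri_g} with a dot-product affinity (toy sizes; the random weights are stand-ins for the learned $1\times1$ convolutions), which materializes the full $NC \times NC$ pairwise matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 8, 4                                  # toy sizes; real feature maps have NC in the thousands
X = rng.standard_normal((N, C))
W_theta, W_phi, W_g = (rng.standard_normal((C, C)) for _ in range(3))

# Eq. (linear_g): vec(.) flattens each transformed N x C map into an NC-vector
theta = (X @ W_theta).reshape(-1)
phi   = (X @ W_phi).reshape(-1)
g     = (X @ W_g).reshape(-1)

# dot-product affinity: the full NC x NC pairwise matrix of Eq. (tri_g)
f = np.outer(theta, phi)
assert f.shape == (N * C, N * C)             # quadratic in both N and C

Y = (f @ g).reshape(N, C)                    # response, restored to N x C
```

Even at these toy sizes, `f` is already $32 \times 32$; at realistic sizes ($NC \sim 10^5$) it would not fit in memory, which is exactly the problem the compact representation below addresses.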
To mitigate this problem, this section proposes a compact representation that leads to an affordable approximation of GNL. Let us denote $\bm{\theta} = \vect(\mathbf{X} \mathbf{W}_\theta)$, $\bm{\phi} = \vect(\mathbf{X} \mathbf{W}_\phi)$ and $\bm{g} = \vect(\mathbf{X} \mathbf{W}_g)$, each of which is an $NC$-dimensional column vector. Without loss of generality, we assume $f$ is a general kernel function (\emph{e.g.}, RBF, bilinear, etc.) that computes an $NC \times NC$ matrix whose elements can be approximated by a Taylor series up to a certain order $P$, \begin{align} \label{eq:taylor_series} \big[ f(\bm{\theta}, \bm{\phi}) \big]_{ij} \approx \sum_{p=0}^{P} \alpha_p^2 (\theta_i \phi_j)^p. \end{align} The coefficients $\alpha_p$ can be computed in closed form once the kernel function is known. Taking the RBF kernel as an example, \begin{align} \label{eq:rbf} [f(\bm{\theta}, \bm{\phi})]_{ij} &= \exp(-\gamma \| \theta_i - \phi_j \|^2) \approx \sum_{p=0}^{P} \beta \frac{(2\gamma)^p}{p!} (\theta_i \phi_j)^p, \end{align} where $\alpha_p^2 = \beta \frac{(2\gamma)^p}{p!}$, $\beta = \exp \big( -\gamma( \|\bm{\theta}\|^2 + \|\bm{\phi}\|^2 ) \big)$ is a constant, and $\beta = \exp(-2\gamma)$ if the input vectors $\bm{\theta}$ and $\bm{\phi}$ are $\ell_2$-normalized. By introducing two matrices, \begin{align} \bm{\Theta} = [\alpha_0 \bm{\theta}^0, \cdots, \alpha_{P} \bm{\theta}^{P}] \in \mathbb{R}^{NC \times (P+1)}, \quad \bm{\Phi} = [\alpha_0 \bm{\phi}^0, \cdots, \alpha_{P} \bm{\phi}^{P}] \in \mathbb{R}^{NC \times (P+1)}, \end{align} our compact generalized non-local (CGNL) operation approximates Eq.~\ref{eq:tri_g} via a trilinear equation, \begin{align} \label{eq:final} \vect(\mathbf{Y}) \approx \bm{\Theta} \bm{\Phi}^\top \bm{g}. \end{align} At first glance, the above approximation still involves the computation of a large pairwise matrix $\bm{\Theta} \bm{\Phi}^\top \in \mathbb{R}^{NC \times NC}$.
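The quality of the truncated expansion in Eq.~\ref{eq:rbf} can be checked numerically. The sketch below uses the per-element prefactor $\exp(-\gamma(\theta_i^2+\phi_j^2))$ (an assumption made for this illustration, in place of the constant $\beta$) together with the hyperparameters $\gamma=10^{-4}$ and $P=3$ used later in the ablations:

```python
import numpy as np
from math import factorial

gamma, P = 1e-4, 3                     # values used in the ablation experiments
rng = np.random.default_rng(0)
theta = rng.uniform(-1.0, 1.0, 32)     # toy NC-dimensional vectors
phi   = rng.uniform(-1.0, 1.0, 32)

# exact RBF affinity: [f]_ij = exp(-gamma * (theta_i - phi_j)^2)
exact = np.exp(-gamma * (theta[:, None] - phi[None, :]) ** 2)

# truncated Taylor series of exp(2 * gamma * theta_i * phi_j), as in Eq. (rbf)
prod = theta[:, None] * phi[None, :]
series = sum((2 * gamma) ** p / factorial(p) * prod ** p for p in range(P + 1))
prefactor = np.exp(-gamma * (theta[:, None] ** 2 + phi[None, :] ** 2))
approx = prefactor * series

err = np.max(np.abs(exact - approx))   # truncation error is O((2*gamma)^(P+1) / (P+1)!)
assert err < 1e-12
```

With such a small $\gamma$, a third-order truncation is already accurate to machine precision, which is consistent with the choice of $P=3$ in the experiments.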
Fortunately, the order of the Taylor series is usually small, $P \ll NC$. According to the associative law, we can alternatively compute the vector $\bm{z} = \bm{\Phi}^\top \bm{g} \in \mathbb{R}^{P+1}$ first and then calculate $\bm{\Theta} \bm{z}$, with a much smaller complexity of $\mathcal{O}(NC(P + 1))$. From another viewpoint, the process of squeezing the bilinear form $\bm{\Phi}^\top \bm{g}$ into scalars is conceptually related to the SE module~\cite{senet}. \begin{table}[ht] % % \begin{minipage}[h]{0.54\textwidth} \textbf{Complexity analysis:} Table~\ref{table:computation complexity} compares the computational complexity of the CGNL network with that of the GNL network. Directly computing the GNL operation is unaffordable because of its huge complexity of $\mathcal{O}(2(NC)^2)$ in both time and space. Instead, our compact method dramatically reduces this heavy calculation to $\mathcal{O}(NC(P+1))$. \end{minipage} \hskip 5pt plus 1fil % % \begin{minipage}[h]{0.42\textwidth} \caption{\small{ Complexity comparison of the GNL and CGNL operations, where $N$ and $C$ indicate the numbers of positions and channels, respectively. }} \centering \scriptsize \begin{tabular}[h]{lcc} \toprule & General NL Method & CGNL Method \\ \cmidrule(r){2-3} Strategy & $f\big(\bm{\Theta} \bm{\Phi}^\top\big) \bm{g}$ & $\bm{\Theta} \bm{\Phi}^\top \bm{g}$ \\ Time & $\mathcal{O}(2(NC)^2)$ & $\mathcal{O}(NC(P+1))$ \\ Space & $\mathcal{O}(2(NC)^2)$ & $\mathcal{O}(NC(P+1))$ \\ \bottomrule \label{table:computation complexity} \end{tabular} \end{minipage} \end{table} \subsection{Implementation Details} \begin{figure}[H] \centering \includegraphics[width=1.0\textwidth]{figures/cgnl.pdf} \caption{\small{ \textbf{Grouped compact generalized non-local (CGNL) module.} The feature maps are shown with the shapes of their tensors, \textit{e.g.}, $[C,N]$, where $N = THW$ or $N=HW$.
The feature maps are divided along the channel dimension into multiple groups after three conv layers whose kernel size and stride both equal 1 ($\text{k}=1, \text{s}=1$). The channel dimension is split into groups of $C'=C/G$ channels, where $G$ is the number of groups. The compact representation of the generalized non-local module is built within each group. $P$ indicates the order of the Taylor expansion of the kernel function. }} \label{fig:cgnl} \end{figure} Fig.~\ref{fig:cgnl} illustrates how the CGNL module processes a feature map $\mathbf{X}$ of size $N \times C$, where $N = H \times W$ or $N = T \times H \times W$. $\mathbf{X}$ is first fed into three $1 \times 1 \times 1$ convolutional layers, described by the weights $W_\theta, W_\phi, W_g$ in Eq.~\ref{eq:linear_g}, respectively. To improve the capacity of the network, the channel grouping idea~\cite{resnext,gn} is then applied to divide the transformed features along the channel dimension into $G$ groups. As shown in Fig.~\ref{fig:cgnl}, we approximate the GNL operation (Eq.~\ref{eq:tri_g}) for each group using the Taylor series according to Eq.~\ref{eq:final}. To achieve generality and compatibility with existing neural network blocks, the CGNL block is implemented by wrapping Eq.~\ref{eq:tri_g} in an identity mapping of the input, as in residual learning~\cite{resnet}: \begin{align} \label{eq:cgnl residual output} \mathbf{Z} = \operatorname{concat}(\operatorname{BN}(\mathbf{Y}' \mathbf{W}_z)) + \mathbf{X}, \end{align} where $\mathbf{W}_z \in \mathbb{R}^{C \times C}$ denotes a $1 \times 1$ or $1 \times 1 \times 1$ convolution layer followed by Batch Normalization~\cite{bn} in each group. \section{Experiments} \label{sec:experiments} \subsection{Datasets} We evaluate the CGNL network on multiple tasks, including fine-grained classification and action recognition. For fine-grained classification, we experiment on the Birds-200-2011 (CUB) dataset~\cite{welinder2010caltech}, which contains 11,788 images of 200 bird categories.
For action recognition, we experiment on two challenging datasets, Mini-Kinetics~\cite{mini-kinetics} and UCF101~\cite{ucf101}. The Mini-Kinetics dataset contains 200 action categories. Because some video links are unavailable to download, we use 78,265 videos for training and 4,986 videos for validation. The UCF101 dataset contains 101 actions, which are separated into 25 groups with 4--7 videos of each action per group. \subsection{Baselines} Given their steady performance and efficiency, the ResNet~\cite{resnet} series (ResNet-50 and ResNet-101) is adopted as our baselines. For video tasks, we keep the same architecture configuration as~\cite{non-local}, where the temporal dimension is trivially addressed by max pooling. Following~\cite{non-local}, the convolutional layers in the baselines are implemented as $1 \times k \times k$ kernels, and we insert our CGNL blocks into the network to turn them into compact generalized non-local (CGNL) networks. We investigate the configurations of adding 1 and 5 blocks. \cite{non-local} suggests that adding 1 block on $res4$ is slightly better than the alternatives, so our experiments of adding 1 block all target $res4$ of ResNet. The experiments of adding 5 blocks, on the other hand, are configured by inserting 2 blocks on $res3$ and 3 blocks on $res4$, to every other residual block in ResNet-50 and ResNet-101. \textbf{Training:} We use the models pretrained on ImageNet~\cite{imagenet} to initialize the weights. The frames of a video are extracted in a \emph{dense} manner. Following~\cite{non-local}, we generate 32-frame input clips for the models by first randomly cropping out 64 consecutive frames from the full-length video and then dropping every other frame. This way of choosing the 32-frame input clips can be viewed as a temporal augmentation. The crop size for each clip is distributed evenly between 0.08 and 1.25 of the original image, and its aspect ratio is chosen randomly between 3/4 and 4/3.
Finally, we resize the crop to $224$. We use a weight decay of $0.0001$ and a momentum of $0.9$ by default. The strategy of gradual warmup is used in the first ten epochs. Dropout~\cite{dropout} with ratio $0.5$ is inserted between the average pooling layer and the last fully-connected layer. To stay consistent with~\cite{non-local}, we initialize the weight and bias of the BatchNorm (BN) layer to zero in both the CGNL and NL blocks~\cite{train_imagenet_in_1h}. To train the networks on the CUB dataset, we follow the same training strategy as above, but with a final crop size of 448. \textbf{Inference:} The models are tested immediately after training is finished. In~\cite{non-local}, spatially fully-convolutional inference \footnote{https://github.com/facebookresearch/video-nonlocal-net} is used for NL networks. For video clips, the shorter side is resized to 256 pixels, and 3 crops are used to cover the entire spatial extent along the longer side. The final prediction is the averaged softmax score over all clips. For fine-grained classification, we use a single center crop of size 448. \subsection{Ablation Experiments} \begin{table}[t] \caption{\small{ \textbf{Ablations.} Top1 and top5 accuracy ($\%$) on various datasets. }} \begin{minipage}[t]{0.32\textwidth} \begin{subtable}[t]{1.0\textwidth} \caption{\small{ Results of adding 1 CGNL block on CUB. The dot-product kernel achieves the best result; the accuracies of the other kernels are close to the baseline. }} \label{table:kernel functions} \scriptsize \centering \begin{tabularx}{\textwidth}{lcc} \toprule model & top1 & top5 \\ \midrule R-50 & 84.05 & 96.00 \\ \cmidrule(r){2-3} Dot Product & 85.14 & 96.88 \\ Gaussian RBF & 84.10 & 95.78 \\ Embedded Gaussian & 84.01 & 96.08 \\ \bottomrule \end{tabularx} \end{subtable} \begin{subtable}[t]{1.0\textwidth} \caption{\small{ Comparison results on UCF-101. Note that the CGNL network here is not grouped along channels.
}} \label{table:ucf results} \scriptsize \centering \begin{tabularx}{\textwidth}{llll} \toprule model && top1 & top5 \\ \midrule R-50 && 81.62 & 94.62 \\ \cmidrule(r){2-4} + 1 NL block && 82.88 & 95.74 \\ + 1 CGNL block && 83.38 & 95.42 \\ \bottomrule \end{tabularx} \end{subtable} \end{minipage}\hskip 5pt plus 1fil \begin{minipage}[t]{0.66\textwidth} \begin{subtable}[t]{1.0\textwidth} \caption{\small{ Results of channel-grouped CGNL networks on CUB. A few groups can boost the performance, but too many groups tend to prevent the CGNL block from capturing the correlations between positions across channels. }} \label{table:groups on cub} \scriptsize \centering \vskip 4pt plus 1fil \begin{tabular}{llll} \toprule \multicolumn{1}{c}{model} & groups & top1 & top5 \\ \midrule R-101 & - & 85.05 & 96.70 \\ \cmidrule(r){2-4} \multirow{4}{*}{+ 1 CGNL} & 1 & 86.17 & 97.82 \\ \multirow{4}{*}{block} & 4 & 86.24 & 97.05 \\ & 8 & 86.35 & 97.86 \\ & 16 & 86.13 & 96.75 \\ & 32 & 86.04 & 96.69 \\ \bottomrule \end{tabular} \hskip 3pt plus 1fil \begin{tabular}{llll} \toprule \multicolumn{1}{c}{model} & groups & top1 & top5 \\ \midrule R-101 & - & 85.05 & 96.70 \\ \cmidrule(r){2-4} \multirow{4}{*}{+ 5 CGNL} & 1 & 86.01 & 95.97 \\ \multirow{4}{*}{block} & 4 & 86.19 & 96.07 \\ & 8 & 86.24 & 97.23 \\ & 16 & 86.43 & 98.89 \\ & 32 & 86.10 & 97.13 \\ \bottomrule \end{tabular} \end{subtable} \begin{subtable}[t]{1.0\textwidth} \caption{\small{ Results of grouped CGNL networks on Mini-Kinetics. More groups clearly help the CGNL networks improve the top1 accuracy.
}} \label{table:groups on mini-kinetics} \scriptsize \centering \vskip 1.9pt plus 0fil \begin{tabular}[t]{llll} \toprule model & groups & top1 & top5 \\ \midrule R-50 & - & 75.54 & 92.16 \\ \cmidrule(r){2-4} \multirow{2}{*}{+ 1 CGNL} & 1 & 77.16 & 93.56 \\ \multirow{2}{*}{block} & 4 & 77.56 & 93.00 \\ & 8 & 77.76 & 93.18 \\ \bottomrule \end{tabular} \hskip 3pt plus 1fil \begin{tabular}[t]{llll} \toprule model & groups & top1 & top5 \\ \midrule R-101 & - & 77.44 & 93.18 \\ \cmidrule(r){2-4} \multirow{2}{*}{+ 1 CGNL} & 1 & 78.79 & 93.64 \\ \multirow{2}{*}{block} & 4 & 79.06 & 93.54 \\ & 8 & 79.54 & 93.84 \\ \bottomrule \end{tabular} \end{subtable} \end{minipage} \end{table} \textbf{Kernel Functions:} We use three popular kernel functions in our ablation studies, namely the dot product, the embedded Gaussian, and the Gaussian RBF. For the dot product, Eq.~\ref{eq:final} holds exactly and is computed directly. For the embedded Gaussian, the coefficient $\alpha_p^2$ in Eq.~\ref{eq:taylor_series} is $\frac{1}{p!}$. For the Gaussian RBF, the corresponding formula is given in Eq.~\ref{eq:rbf}. We expand the Taylor series to third order, and the hyperparameter $\gamma$ for the RBF is set to $1e\text{-}4$~\cite{kernel-pooling}. Table~\ref{table:kernel functions} suggests that the dot product is the best kernel function for CGNL networks. This experimental observation is consistent with~\cite{non-local}. The other kernel functions we used, the embedded Gaussian and the Gaussian RBF, bring only small improvements. Therefore, we choose the dot product as the main experimental configuration for the other tasks. \textbf{Grouping:} The grouping strategy is another important technique. On Mini-Kinetics, Table~\ref{table:groups on mini-kinetics} shows that grouping can bring higher accuracy. The improvements brought by adding groups are larger than those from reducing the channel reduction ratio. The best top1 accuracy is achieved by splitting into 8 groups for the CGNL networks.
On the other hand, it is worthwhile to ask whether more groups always improve the results, and Table~\ref{table:groups on cub} gives the answer: too many groups hamper the performance improvements. This is actually expected, as the affinity in the CGNL block considers points across channels. Splitting the channels into a few groups restricts the optimization in a helpful way and eases the training. However, if too many groups are adopted, the affinity is hindered from capturing the rich correlations between elements across the channels. \begin{figure}[ht] \begin{minipage}[]{0.32\textwidth} \centering \includegraphics[width=1.\textwidth]{figures/cgnl_2_resblock_a.jpg} \caption{\small{ The workflow of our CGNL block. The corresponding formula is shown below in a blue tinted box. }} \label{fig:cgnl_2_resblock_a} \end{minipage} \hskip 3pt plus 1fil \begin{minipage}[]{0.32\textwidth} \centering \includegraphics[width=1.\textwidth]{figures/cgnl_2_resblock_b.jpg} \caption{\small{ The workflow of the simple residual block for comparison. The corresponding formula is shown below in a blue tinted box. }} \label{fig:cgnl_2_resblock_b} \end{minipage} \hskip 3pt plus 1fil \begin{minipage}[h]{0.3\textwidth} \captionof{table}{\small{ Comparison of the CGNL block and the simple residual block on the CUB dataset. }} \centering \scriptsize \begin{tabular}[t]{lll} \toprule model & top1 & top5 \\ \midrule R-50 & 84.05 & 96.00 \\ \cmidrule(r){2-3} + 1 Residual Block & 84.11 & 96.23 \\ + 1 CGNL block & 85.14 & 96.88 \\ \bottomrule \end{tabular} \label{table:cgnl_2_resblock} \end{minipage} \end{figure} \textbf{Comparison of CGNL Block to Simple Residual Block:} A possible concern about the effectiveness is that the scalars from $\bm{\Phi}^\top \bm{g}$ in Eq.~\ref{eq:final} could be wiped out by the BN layer.
Indeed, according to Algorithm 1 in~\cite{bn}, the output for an input $\bm{\Theta}$ weighted by the scalar $s=\bm{\Phi}^\top \bm{g}$ can be written as $O=\frac{s\bm{\Theta}-E(s\bm{\Theta})}{\sqrt{Var(s\bm{\Theta})}} * \gamma + \beta =\frac{s\bm{\Theta}-sE(\bm{\Theta})}{\sqrt{s^2Var(\bm{\Theta})}} * \gamma + \beta =\frac{\bm{\Theta}-E(\bm{\Theta})}{\sqrt{Var(\bm{\Theta})}} * \gamma + \beta$. At first glance, the scalar $s$ is totally erased by BN in this derivation. However, the \emph{de facto} operation of a convolutional module aggregates the features in a fixed order. Before being passed into the BN layer, the scalar $s$ has already been absorbed into the input features $\bm{\Theta}$ and then transformed into a different feature space by the learnable parameter $\mathbf{W}_z$. In other words, it is $\mathbf{W}_z$ that ``protects'' $s$ from being erased by BN via the convolutional operation. To address this concern, we further compare adding 1 CGNL block (with the dot-product kernel), as in Fig.~\ref{fig:cgnl_2_resblock_a}, against adding 1 simple residual block, as in Fig.~\ref{fig:cgnl_2_resblock_b}, on the CUB dataset in Table~\ref{table:cgnl_2_resblock}. The top1 accuracy of $84.11\%$ obtained by adding a simple residual block is slightly better than the $84.05\%$ of the baseline, but still worse than the $85.14\%$ obtained by adding a linearly kernelized CGNL module. We think that the marginal improvement ($84.05\%\rightarrow84.11\%$) is due to the additional parameters of the simple residual block. \begin{figure}[ht] \centering \includegraphics[width=1.\textwidth]{figures/cub_vis.jpg} \caption{\small{ Result analysis of the NL block and our CGNL block on CUB. Column 1: the input images with a small \emph{reference patch} (green rectangle), which is used to find the highly related patches (white rectangles). Column 2: the highly related clues for prediction in each feature map found by the NL network. The dimension of the self-attention space in the NL block is $N \times N$, where $N=HW$.
So its visualization has only one column. Columns 3 to 7: the most related patches computed by our compact generalized non-local module. We first pick a reference position in the space of $\bm{g}$, then use the corresponding vectors in $\bm{\Theta}$ and $\bm{\Phi}$ to compute the attention maps with a threshold (here we use 0.7). Last column: the ground truth of body parts. The highly related areas found by the CGNL network easily cover all of the annotated parts that provide the prediction clues. }} \label{fig:cub visualization} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=1.\textwidth]{figures/video_vis.jpg} \caption{\small{ Visualization with feature heatmaps. We select a reference patch (\emph{green rectangle}) in one frame, then visualize the highly related areas by heatmaps. The CGNL network captures denser relationships in the feature space than the NL network. }} \label{fig:video visualization} \end{figure} \subsection{Main Results} \begin{table}[t] \caption{\small{ \textbf{Main results.} Top1 and top5 accuracy ($\%$) on various datasets. }} \begin{minipage}[t]{0.3\textwidth} \begin{subtable}[t]{1.0\textwidth} \caption{\small{ Main validation results on Mini-Kinetics. The CGNL networks are built with 8 groups. }} \label{table:mini-kinetics main results} \scriptsize \centering \begin{tabularx}{\textwidth}{lll} \toprule model & top1 & top5 \\ \midrule R-50 & 75.54 & 92.16 \\ \cmidrule(r){2-3} + 1 NL block & 76.53 & 92.90 \\ + 1 CGNL block & 77.76 & 93.18 \\ \cmidrule(r){2-3} + 5 NL block & 77.53 & 94.00 \\ + 5 CGNL block & 78.79 & 94.37 \\ \midrule R-101 & 77.44 & 93.18 \\ \cmidrule(r){2-3} + 1 NL block & 78.02 & 93.86 \\ + 1 CGNL block & 79.54 & 93.84 \\ \cmidrule(r){2-3} + 5 NL block & 79.21 & 93.21 \\ + 5 CGNL block & 79.88 & 93.37 \\ \bottomrule \end{tabularx} \end{subtable} \end{minipage}\hskip 5pt plus 1fil \begin{minipage}[t]{0.67\textwidth} \begin{subtable}[t]{1.0\textwidth} \caption{\small{ Results on CUB.
The CGNL networks use 8 channel groups. }} \label{table:cub main results} \scriptsize \centering \begin{tabularx}{.48\textwidth}{llll} \toprule model && top1 & top5 \\ \midrule R-50 && 84.05 & 96.00 \\ \cmidrule(r){2-4} + 1 NL block && 84.79 & 96.76 \\ + 1 CGNL block && 85.14 & 96.88 \\ \cmidrule(r){2-4} + 5 NL block && 85.10 & 96.18 \\ + 5 CGNL block && 85.68 & 96.69 \\ \bottomrule \end{tabularx} \hskip 8pt plus 0fil \begin{tabularx}{.48\textwidth}{llll} \toprule model && top1 & top5 \\ \midrule R-101 && 85.05 & 96.70 \\ \cmidrule(r){2-4} + 1 NL block && 85.49 & 97.04 \\ + 1 CGNL block && 86.35 & 97.86 \\ \cmidrule(r){2-4} + 5 NL block && 86.10 & 96.35 \\ + 5 CGNL block && 86.24 & 97.23 \\ \bottomrule \end{tabularx} \end{subtable} \vskip 1.5pt plus 0fil \begin{subtable}[t]{1.0\textwidth} \caption{\small{ Results on COCO. 1 NL or 1 CGNL block is added to Mask R-CNN. }} \label{table:mask rcnn} \scriptsize \centering \begin{tabularx}{\textwidth}{llccc|ccc} \toprule model && $\text{AP}^{\text{box}}$ & $\text{AP}_{\text{50}}^{\text{box}}$ & $\text{AP}_{\text{75}}^{\text{box}}$ & $\text{AP}^{\text{mask}}$ & $\text{AP}_{\text{50}}^{\text{mask}}$ & $\text{AP}_{\text{75}}^{\text{mask}}$ \\ \midrule Baseline && 34.47 & 54.87 & 36.58 & 30.44 & 51.55 & 31.95 \\ \cmidrule(r){2-8} + 1 NL block && 35.02 & 55.79 & 37.54 & 30.23 & 52.40 & 32.77 \\ + 1 CGNL block && 35.70 & 56.07 & 38.69 & 31.22 & 52.44 & 32.67 \\ \bottomrule \end{tabularx} \end{subtable} \end{minipage}\hskip 0pt plus 1fil \end{table} Table~\ref{table:mini-kinetics main results} shows that although adding 5 NL blocks or 5 CGNL blocks to the baseline networks both improves the accuracy, the improvement from the CGNL network is larger. The same applies to Table~\ref{table:ucf results} and Table~\ref{table:cub main results}. In the experiments on the UCF101 and CUB datasets, we similarly observe that adding 5 CGNL blocks provides the best results for both R-50 and R-101.
Table~\ref{table:mini-kinetics main results} shows the main results on the Mini-Kinetics dataset. Compared to the baseline R-50, whose top1 accuracy is $75.54\%$, adding 1 NL block brings an improvement of about 1.0\%. Similar results can be found in the experiments based on R-101, where adding 1 CGNL block provides an improvement of more than 2\%, which is larger than that of adding 1 NL block. Table~\ref{table:ucf results} shows the main results on the UCF101 dataset, where adding 1 CGNL block achieves higher accuracy than adding 1 NL block. Table~\ref{table:cub main results} shows the main results on the CUB dataset. To understand the effects brought by the CGNL network, we show visualization analyses in Fig.~\ref{fig:cub visualization} and Fig.~\ref{fig:video visualization}. Additionally, to investigate the capacity and the generalization ability of our CGNL network, we test it on the tasks of object detection and instance segmentation. We add 1 NL block or 1 CGNL block to the R-50 backbone of Mask R-CNN~\cite{mask-rcnn}. Table~\ref{table:mask rcnn} shows the main results on the COCO2017 dataset~\cite{coco2017}: the performance of adding 1 CGNL block is still better than that of adding 1 NL block. We observe that adding a CGNL block always obtains better results than adding an NL block with the same number of blocks. These experiments suggest that considering the correlations between any two positions across the channels can significantly improve performance over the original non-local method. \section{Conclusion} We have introduced a simple approximated formulation of the compact generalized non-local operation, and have validated it on the tasks of fine-grained classification and action recognition from RGB images. Our formulation allows for explicit modeling of rich interdependencies between any positions across channels in the feature space.
To ease the heavy computation of the generalized non-local operation, we propose a compact representation based on simple matrix products, using a Taylor expansion for multiple kernel functions. It is easy to implement and requires few additional parameters, making it an attractive alternative to the original non-local block, which only considers the correlations between two positions along specific channels. Our model produces competitive or state-of-the-art results on various benchmark datasets. \section*{Appendix: Experiments on ImageNet} As a general method, the CGNL block is compatible with complementary techniques developed for the image task of fine-grained classification, the temporal-feature task of action recognition, and the basic task of object detection. \begin{table}[h] \caption{Results on ImageNet. Best top1 and top5 accuracy ($\%$).} \centering \scriptsize \begin{tabular}{lll} \toprule model & top1 & top5 \\ \midrule R-50 & 76.15 & 92.87 \\ \cmidrule(r){2-3} + 1 CGNL block & 77.69 & 93.64 \\ + 1 CGNLx block & 77.32 & 93.46 \\ \midrule R-152 & 78.31 & 94.06 \\ \cmidrule(r){2-3} + 1 CGNL block & 79.53 & 94.59 \\ + 1 CGNLx block & 79.37 & 94.47 \\ \bottomrule \end{tabular} \label{table:imagenet} \end{table} In this appendix, we further report the results of our spatial CGNL network on the large-scale ImageNet~\cite{imagenet} dataset, which has 1.2 million training images and 50,000 validation images in 1000 object categories. The training strategy and configurations of our CGNL networks are kept the same as those in Sec.~\ref{sec:experiments}, except that the input crop size here is 224. To better demonstrate the generality of our CGNL network, we investigate adding both 1 dot-product CGNL block and 1 Gaussian RBF CGNL block (denoted CGNLx) in Table~\ref{table:imagenet}. We compare these models with two strong baselines, R-50 and R-152.
In Table~\ref{table:imagenet}, all the best top1 and top5 accuracies are reported under single center-crop testing. The CGNL networks beat the base models by more than 1 point, regardless of whether the dot product or the Gaussian RBF serves as the kernel function in the CGNL module. \clearpage {\small \bibliographystyle{ieee}
\section{Introduction}\label{s-intro} For the face poset~\(P\) of a locally finite regular CW complex, Curry proved in \autocite[Theorem~7.7]{Curry18} that a canonical equivalence \begin{equation} \label{e-26eeba67} \mathbb{D}\colon\mathrm{h}{\operatorname{D}^{\textnormal{b}}(\operatorname{Fun}(P,\Cat{Vect}))} \longrightarrow\mathrm{h}{\operatorname{D}^{\textnormal{b}}(\operatorname{Fun}(P^{\operatorname{op}},\Cat{Vect}))} \end{equation} between triangulated categories\footnote{ Here \(\mathrm{h}\) denotes the underlying triangulated category of a stable \(\infty\)-category. } exists, where \(\Cat{Vect}\) denotes the category of vector spaces over a field. Note that the polysimplicial case was proven before by Schneider in \autocite[Proposition~2]{Schneider98}. This leads to the following: \begin{question}\label{a27f3b95f9} Under what conditions on~\(P\) do we have such a duality equivalence? \end{question} To answer this question, we must give a definition of ``such a duality equivalence'' so that the desired equivalence is not an extra datum but a property. \begin{example}\label{dd32f45da1} Let \(P\) be the poset generated by the relations \(0\leq1\leq2\) and \(0\leq1'\leq2\) on the set \(\{0,1,1',2\}\). An isomorphism \(P\simeq P^{\operatorname{op}}\) gives an equivalence \(\operatorname{Fun}(P,\Cat{Sp})\simeq\operatorname{Fun}(P^{\operatorname{op}},\Cat{Sp})\), but the two isomorphisms give different equivalences. We can also use the suspension functor to get many more equivalences. \end{example} Curry's proof rules out this example, but his argument depends on a somewhat arbitrary choice of a dualizing complex. To get some insights, let us look at a similar equivalence in general topology. Let \(X\) be a locally compact Hausdorff space. 
In \autocite[Section~5.5.5]{LurieHA}, Lurie states Verdier duality as a canonical equivalence \begin{equation*} \mathbb{D}\colon\operatorname{Shv}_{\Cat{Sp}}(X)\longrightarrow\operatorname{cShv}_{\Cat{Sp}}(X) \end{equation*} between the \(\infty\)-categories of spectrum-valued sheaves and cosheaves. The original construction is complicated, but as we see in \cref{s-ch}, a simpler explanation exists: We assume that \(X\) is compact for simplicity and write \(p\colon X\to{*}\) and \(d\colon X\to X\times X\) for the projection and the diagonal, respectively. Then the composites \begin{gather} \label{e-e91f4124} \phantom, \Cat{Sp} \xrightarrow{p^*}\operatorname{Shv}_{\Cat{Sp}}(X) \xrightarrow{d_*}\operatorname{Shv}_{\Cat{Sp}}(X\times X) \simeq\operatorname{Shv}_{\Cat{Sp}}(X)\otimes\operatorname{Shv}_{\Cat{Sp}}(X),\\ \label{e-e94d496b} \operatorname{Shv}_{\Cat{Sp}}(X)\otimes\operatorname{Shv}_{\Cat{Sp}}(X) \simeq\operatorname{Shv}_{\Cat{Sp}}(X\times X) \xrightarrow{d^*}\operatorname{Shv}_{\Cat{Sp}}(X) \xrightarrow{p_*}\Cat{Sp} \end{gather} constitute a duality datum for the self-duality in~\(\Cat{Pr}_{\textnormal{st}}\), the symmetric monoidal \(\infty\)-category of presentable stable \(\infty\)-categories.\footnote{ We prove in \cref{ss-ps} that \(\operatorname{Shv}_{\Cat{Sp}}(X)\) is rigid, which is stronger than this claim. } Now let us get back to our problem and take a finite poset~\(P\). It is known (see \cref{ss-alex}) that when \(X\) is its Alexandroff space \(\operatorname{Alex}(P)\) (see \cref{e407564b20}), we have \(\operatorname{Shv}_{\Cat{Sp}}(X)\simeq\operatorname{Fun}(P,\Cat{Sp})\) and \(\operatorname{cShv}_{\Cat{Sp}}(X)\simeq\operatorname{Fun}(P^{\operatorname{op}},\Cat{Sp})\). Moreover, in this case, both \cref{e-e91f4124,e-e94d496b} are in \(\Cat{Pr}_{\textnormal{st}}\). However, these do not form a duality datum unless \(P\) is discrete: \begin{example}\label{3d8480c654} Assume \(X=\operatorname{Alex}(P)\) for a finite nondiscrete poset~\(P\). 
Pick an element \(p\in P\) that is minimal among nonminimal elements. Let \(F\colon P\to\Cat{Sp}\) be the extension of \(\mathbf{S}\) by zero along \(\{p\}\hookrightarrow P\). Then the value at~\(p\) of the image of~\(F\) under \begin{equation*} \operatorname{Shv}_{\Cat{Sp}}(X) \xrightarrow{\operatorname{id}\otimes\text{\cref{e-e91f4124}}} \operatorname{Shv}_{\Cat{Sp}}(X) \otimes\operatorname{Shv}_{\Cat{Sp}}(X) \otimes\operatorname{Shv}_{\Cat{Sp}}(X) \xrightarrow{\cref{e-e94d496b}\otimes{\operatorname{id}}} \operatorname{Shv}_{\Cat{Sp}}(X) \end{equation*} is given by the limit \(\projlim(F)\), which is a coproduct of \((\#(P_{<p})-1)\) copies of \(\Sigma^{-1}\mathbf{S}\) and never equivalent to~\(F(p)=\mathbf{S}\). \end{example} Therefore we give up on using \cref{e-e91f4124} and instead ask whether the composite \begin{equation} \label{e-6dded2c2} \phantom, \operatorname{Shv}_{\Cat{Sp}}(X) \xrightarrow{\text{--}\otimes\operatorname{Shv}_{\Cat{Sp}}(X)} [\operatorname{Shv}_{\Cat{Sp}}(X),\operatorname{Shv}_{\Cat{Sp}}(X)\otimes\operatorname{Shv}_{\Cat{Sp}}(X)] \xrightarrow{[\operatorname{id},\text{\cref{e-e94d496b}}]} [\operatorname{Shv}_{\Cat{Sp}}(X),\Cat{Sp}], \end{equation} where \([\text{--},\text{--}]\) denotes the internal mapping object in \(\Cat{Pr}_{\textnormal{st}}\), is an equivalence. Thus we reach the following: \begin{definition}\label{89fa6d69dd} We call a finite poset \(P\) \emph{Verdier} if the pair \((\operatorname{Fun}(P,\Cat{Sp}),\Gamma)\) is a commutative Frobenius algebra (cf. \cref{ss-dualizable}) in \(\Cat{Pr}_{\textnormal{st}}\).\footnote{ This condition is a~priori different from the requirement that \cref{e-6dded2c2} is an equivalence, but it turns out to be equivalent by \cref{a3ad5064a0} as \(\operatorname{Shv}_{\Cat{Sp}}(X)\simeq\operatorname{Fun}(P,\Cat{Sp})\) is compactly generated in this case. } \end{definition} In other words, \(P\) is Verdier if and only if \cref{e-e94d496b} is a ``perfect pairing''.
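Computations like the one in \cref{3d8480c654} are mechanically checkable in small cases: over \(\mathbf{Q}\), the derived limit of a functor on a finite poset is computed by the normalized Bousfield--Kan cochain complex, whose degree-\(n\) term is spanned by the strict chains \(u_0<\dots<u_n\) on which the functor is nonzero at the top element. The following Python sketch implements this for functors that are extensions by zero of the constant functor \(\mathbf{Q}\) on a convex subset (so all nonzero transition maps are identities); the function names are ours, and working over \(\mathbf{Q}\) of course only detects rational phenomena.

```python
import numpy as np

def chains(elements, lt, length):
    """All strictly increasing tuples of the given length."""
    if length == 1:
        return [(u,) for u in elements]
    return [c + (v,) for c in chains(elements, lt, length - 1)
            for v in elements if lt(c[-1], v)]

def lim_betti(elements, lt, support, max_deg=3):
    """Dimensions over Q of the cohomology of the derived limit of the
    functor that is Q on the convex subset `support` (with identity
    transition maps) and 0 elsewhere, via the normalized Bousfield-Kan
    cochain complex.  The degree-n term is spanned by chains
    u_0 < ... < u_n with u_n in `support`; dropping the top element of a
    chain contributes only when the new top element still lies in
    `support` (otherwise the transition map is zero)."""
    basis = {n: [c for c in chains(elements, lt, n + 1) if c[-1] in support]
             for n in range(max_deg + 2)}
    diff = {}
    for n in range(max_deg + 1):
        index = {c: i for i, c in enumerate(basis[n])}
        mat = np.zeros((len(basis[n + 1]), len(basis[n])))
        for j, c in enumerate(basis[n + 1]):
            for i in range(len(c)):
                face = c[:i] + c[i + 1:]
                if face[-1] in support:
                    mat[j, index[face]] += (-1) ** i
        diff[n] = mat
    rank = lambda m: np.linalg.matrix_rank(m) if m.size else 0
    return [int(len(basis[n]) - rank(diff[n]) - (rank(diff[n - 1]) if n else 0))
            for n in range(max_deg + 1)]
```

For \(P=\{a,b,p\}\) with \(a,b<p\) and the functor supported at \(\{p\}\), this returns a one-dimensional contribution in degree~\(1\) and nothing else, matching the single copy of \(\Sigma^{-1}\mathbf{S}\) (here \(\#(P_{<p})=2\)) predicted in \cref{3d8480c654}; for the Verdier poset \(\{a,b\}^{\triangleright}\) and either interval \([p,\top]\), all dimensions vanish, as the vanishing condition of \cref{main} requires.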
Our main result rephrases the Verdier property in terms of the Gorenstein* property, a concept used in combinatorial commutative algebra: \begin{Theorem}\label{main} For a finite poset~\(P\), the following are equivalent: \begin{enumerate}[label=(\roman*)] \item \label{i-verdier} The poset \(P\) is Verdier. \item \label{i-gorenstein} For each \(p\in P\), the full subposet \(P_{<p}\) is Gorenstein* (over~\(\mathbf{Z}\)), i.e., its geometric realization is a generalized homology sphere (see \cref{ss-g}). \item \label{i-vanishing} For each \(p<q\) in~\(P\), the limit of \(\mathbf{Z}_{[p,q]}\), the extension of the constant functor~\(\mathbf{Z}\) by zero along \([p,q]\hookrightarrow P\) vanishes. (Or equivalently, the limit of \(\mathbf{S}_{[p,q]}\) vanishes; see \cref{sz}.) \end{enumerate} \end{Theorem} The existence of an equivalence \(\operatorname{Fun}(P,\Cat{Sp})\simeq\operatorname{Fun}(P^{\operatorname{op}},\Cat{Sp})\) for~\(P\) satisfying \cref{i-gorenstein} or its variant where \(\Cat{Sp}\) is replaced by \(\operatorname{D}(\mathbf{Z})\) may not surprise experts. What is novel is the formulation itself, with which an if-and-only-if statement becomes possible. \begin{corollary}\label{cc3fc211f5} A finite poset~\(P\) is Gorenstein* if and only if \(P^{\triangleright}\) is Verdier. \end{corollary} \begin{example}\label{3cad4d5d3f} As proven in \autocite[Proposition~3.1]{Bjorner84}, a finite poset~\(P\) is the face poset of some regular CW complex if and only if the geometric realization of \(P_{<p}\) is homeomorphic to a sphere for each~\(p\). Hence any finite face poset is Verdier. In particular, we have an equivalence \(\operatorname{Fun}(P,\Cat{Sp})\simeq\operatorname{Fun}(P^{\operatorname{op}},\Cat{Sp})\). 
\end{example} In light of this example, the equivalence \(\text{\cref{i-verdier}}\Leftrightarrow\text{\cref{i-gorenstein}}\) in \cref{main} can be informally summarized by the following: \begin{slogan}\label{dcf6d450ff} A finite poset enjoys Verdier duality if and only if it is homologically CW. \end{slogan} Of course, there is an example that is not a face poset: \begin{example}\label{a8f461525b} Let \(P\) be the face poset of a triangulation of a homology sphere that is not a sphere. Then \(P^{\triangleright}\) is Verdier, but it is not the face poset of any regular CW complex. \end{example} \begin{remark}\label{e14ab4a7a8} In this paper, we work over~\(\mathbf{S}\) (or~\(\mathbf{Z}\)) for simplicity, but our argument is valid over other coefficients. For example, for a field~\(k\) and a finite poset~\(P\), the functor \(\projlim\colon\operatorname{Fun}(P,\operatorname{D}(k))\to\operatorname{D}(k)\) makes \(\operatorname{Fun}(P,\operatorname{D}(k))\) a commutative Frobenius algebra in the \(\infty\)-category of \(k\)-linear (presentable) stable \(\infty\)-categories if and only if \(P_{<p}\) is Gorenstein* over~\(k\) for any \(p\in P\). \end{remark} As a byproduct of our proof, we find the following generalization of \autocite[Proposition~1.2.4.3]{LurieHA}, which may be of independent interest. \begin{Theorem}\label{0b4fe90159} Let \(P\) be a Gorenstein* finite poset. Then for any stable \(\infty\)-category \(\cat{C}\), a diagram \((P^{\triangleright})^{\triangleleft} \simeq(P^{\triangleleft})^{\triangleright} \to\cat{C}\) is a limit if and only if it is a colimit. \end{Theorem} We can handle the locally finite case by a limit argument; precisely, we show the following: \begin{Theorem}\label{lf-arbitrary} Let \(P\) be a poset such that \(P_{\geq p}\) is finite and \(P_{<p}\) is finite and Gorenstein* for each \(p\in P\) (e.g., the face poset of a locally finite regular CW complex).
Then there is a canonical equivalence \begin{equation*} \phantom, \mathbb{D}\colon \operatorname{Fun}(P,\Cat{Sp}) \longrightarrow\operatorname{Fun}(P^{\operatorname{op}},\Cat{Sp}), \end{equation*} which is pointwise given by \(\mathbb{D}(F)\colon p\mapsto\projlim_{q\in P}\operatorname{Map}(p,q)\otimes F(q)\), where \(\otimes\) denotes the copower. \end{Theorem} \begin{remark}\label{4c27a68603} In \cref{lf-arbitrary}, the requirement that \(P_{\geq p}\) be finite cannot be dropped: Let \(P\) be the poset given in \autocite[Example~A.13]{ttg-fun}. Then \(P_{<p}\) is Gorenstein* for any \(p\); in fact, \(P\) is the face poset of a regular CW structure of~\(S^{\infty}\). However, the assignment described in the statement does not preserve compact objects, thus does not lift to an equivalence. \end{remark} Our formulation gives us more than aesthetic satisfaction. For instance, this unified view of the two duality theorems enables us to study the interaction between stratification and Verdier duality. A sample application is the following: \begin{Theorem}\label{str} Let \(X\to\operatorname{Alex}(P)\) be a stratification of a compact Hausdorff space, where \(P\) is a Verdier finite poset. Suppose that the inverse image \(\operatorname{Shv}_{\Cat{Sp}}(\operatorname{Alex}(P))\to\operatorname{Shv}_{\Cat{Sp}}(X)\) is fully faithful. Then our duality functor \(\operatorname{Fun}(P,\Cat{Sp})\to\operatorname{Fun}(P^{\operatorname{op}},\Cat{Sp})\) can be canonically identified with the composite \begin{equation*} \phantom, \operatorname{Shv}_{\Cat{Sp}}(\operatorname{Alex}(P)) \xrightarrow{f^*} \operatorname{Shv}_{\Cat{Sp}}(X) \xrightarrow{\mathbb{D}} \operatorname{cShv}_{\Cat{Sp}}(X) \xrightarrow{f_+} \operatorname{cShv}_{\Cat{Sp}}(\operatorname{Alex}(P)), \end{equation*} where \(\mathbb{D}\) is the Verdier duality equivalence for~\(X\) and \(f_+\) is the cosheaf pushforward.
\end{Theorem} \begin{example}\label{8fb3bcf249} In a nice situation, the (space-valued) inverse image \(\operatorname{Shv}(\operatorname{Alex}(P))\to\operatorname{Shv}(X)\) is fully faithful and its image consists of constructible sheaves; see \autocite[Section~3]{ClausenJansen}\footnote{ Beware that this part contains a minor error; see \cref{c204a3567d}. } for a precise statement. For example, we can show that the assumption of \cref{str} is satisfied when \(X\) is a finite regular CW complex and \(P\) is its face poset. \end{example} This paper is organized as follows: We develop necessary tools on duality in \cref{s-duality} and on poset (co)homology in \cref{s-poset}. Then we prove \cref{main} by showing \(\text{\cref{i-gorenstein}}\Rightarrow\text{\cref{i-vanishing}}\), \(\text{\cref{i-vanishing}}\Leftrightarrow\text{\cref{i-verdier}}\), and \(\text{\cref{i-verdier}}\Rightarrow\text{\cref{i-gorenstein}}\) in \cref{ss-van,ss-fin,ss-up}, respectively. We also show \cref{0b4fe90159} in \cref{ss-up}. After that, we study its variants in \cref{s-v} and in particular prove \cref{lf-arbitrary}. In \cref{s-ch}, we study Verdier duality for locally compact Hausdorff spaces from a formal standpoint. It motivates our formulation and is used to obtain \cref{str}. \subsection*{Conventions} For a poset~\(P\), we write \(P_{\bot}\) and \(P_{\top}\) for the posets obtained by adding the least element~\(\bot\) and the greatest element~\(\top\), respectively. When we regard \(P\) as an \(\infty\)-category, these correspond to its left and right cones (\(P^{\triangleleft}\) and~\(P^{\triangleright}\)) in \autocite[Notation~1.2.8.4]{LurieHTT}. We also write \(P_{\bot,\top}\) for the one obtained by adding both. The empty face (or \((-1)\)-face) is not included in our face poset, but we regard \(S^{-1}=\emptyset\) as a sphere. We use the closed symmetric monoidal structure on~\(\Cat{Pr}\) given in \autocite[Section~4.8.1]{LurieHA}. 
For an \(\infty\)-topos~\(\cat{X}\) and a presentable \(\infty\)-category~\(\cat{C}\), the \(\infty\)-categories of \(\cat{C}\)-valued sheaves \(\operatorname{Shv}_{\cat{C}}(\cat{X})\) and cosheaves \(\operatorname{cShv}_{\cat{C}}(\cat{X})\) are identified with \(\cat{C}\otimes\cat{X}\) and \([\cat{X},\cat{C}]\), respectively. Concretely, their objects can be regarded as limit-preserving functors \(\cat{X}^{\operatorname{op}}\to\cat{C}\) and colimit-preserving functors \(\cat{X}\to\cat{C}\), respectively. We write \(f_+\dashv f^+\) for the pushforward-pullback adjunction for cosheaves. The global section functor, i.e., the cohomology functor, is denoted by~\(\Gamma\), not by \(\mathrm{R}\Gamma\). \subsection*{Acknowledgments} While working on this project, I was at the University of Tokyo and the Max Planck Institute for Mathematics and was partially supported by the Hausdorff Center for Mathematics. I thank them for the hospitality. \section{General facts on duality}\label{s-duality} \subsection{A useful criterion}\label{ss-dualizable} Recall that for objects \(A\), \(A^{\vee}\) and a morphism \(e\colon A^{\vee}\otimes A\to\mathbf{1}\) in a symmetric monoidal \(\infty\)-category, we say that \(e\) is a \emph{counit} of a duality between \(A\) and~\(A^{\vee}\) if for any objects \(C\) and~\(D\) the composite \begin{equation*} \operatorname{Map}(C,D\otimes A^{\vee}) \xrightarrow{\text{--}\otimes A} \operatorname{Map}(C\otimes A,D\otimes A^{\vee}\otimes A) \xrightarrow{\operatorname{Map}(C\otimes A,D\otimes e)} \operatorname{Map}(C\otimes A,D) \end{equation*} is an equivalence. \begin{lemma}\label{a3ad5064a0} Let \(A\) and \(A^{\vee}\) be objects and \(e\colon A^{\vee}\otimes A\to\mathbf{1}\) a morphism in a closed symmetric monoidal \(\infty\)-category. 
If \(A\) is dualizable and the composite \begin{equation} \label{e-90ab6b96} A^{\vee} \simeq[\mathbf{1},A^{\vee}] \xrightarrow{\text{--}\otimes A}[A,A^{\vee}\otimes A] \xrightarrow{[A,e]}[A,\mathbf{1}] \end{equation} is an equivalence, then \(e\) is a counit. Here \([\text{--},\text{--}]\) denotes the mapping object functor. \end{lemma} \begin{proof} By the definition of \([\text{--},\text{--}]\), it suffices to show that the morphism \begin{equation*} C\otimes A^{\vee} \simeq[\mathbf{1},C\otimes A^{\vee}] \xrightarrow{\text{--}\otimes A}[A,C\otimes A^{\vee}\otimes A] \xrightarrow{[A,C\otimes e]}[A,C] \end{equation*} is an equivalence for every~\(C\). Since \(A\) is dualizable, this morphism is equivalent to the one obtained by applying \(C\otimes\text{--}\) to \cref{e-90ab6b96}. \end{proof} \subsection{Functorialities}\label{ss-pot} For a commutative algebra \(A\) and a morphism \(l\colon A\to\mathbf{1}\) (of objects) in a closed symmetric monoidal \(\infty\)-category, we can form a morphism \(A\to[A,\mathbf{1}]\) as in \cref{e-90ab6b96} by letting \(e\) be the composite \(l\circ m\) where \(m\colon A\otimes A\to A\) is the multiplication. We discuss the (\(1\)-categorical) naturality of this assignment \((A,l)\mapsto(A\to[A,\mathbf{1}])\). Note that the pair \((A,l)\) is called a \emph{commutative Frobenius algebra} if \(e\) is a counit. \begin{proposition}\label{00461cedb4} Suppose that \(f\colon A\to B\) is a morphism of commutative algebras in a symmetric monoidal \(\infty\)-category and that \(g\colon B\to A\) is an \(A\)-linear morphism. 
Then for every morphism \(l\colon A\to\mathbf{1}\) and any objects \(C\) and~\(D\), there are commutative squares \begin{align*} \begin{tikzcd}[ampersand replacement=\&] \operatorname{Map}(C,D\otimes A)\ar[r]\ar[d,"(D\otimes f)\circ\text{--}"']\& \operatorname{Map}(C\otimes A,D)\ar[d,"\text{--}\circ(C\otimes g)"]\\ \operatorname{Map}(C,D\otimes B)\ar[r]\& \operatorname{Map}(C\otimes B,D)\rlap, \end{tikzcd} && \begin{tikzcd}[ampersand replacement=\&] \operatorname{Map}(C,D\otimes B)\ar[r]\ar[d,"(D\otimes g)\circ\text{--}"']\& \operatorname{Map}(C\otimes B,D)\ar[d,"\text{--}\circ(C\otimes f)"]\\ \operatorname{Map}(C,D\otimes A)\ar[r]\& \operatorname{Map}(C\otimes A,D)\rlap, \end{tikzcd} \end{align*} where the horizontal morphisms are the ones associated to \((A,l)\) and \((B,l\circ g)\), respectively. Moreover, if the symmetric monoidal structure is closed, the same thing holds when the mapping spaces are replaced by the internal mapping objects. \end{proposition} \begin{example}\label{50e6a4f108} Let \(K\) be an \(\infty\)-category and \(i\colon K_0\hookrightarrow K\) be an inclusion of a sieve. A direct computation shows that \(f =i^*\colon\operatorname{Fun}(K,\Cat{Sp})\to\operatorname{Fun}(K_0,\Cat{Sp})\) and its right adjoint \(g\) satisfy the assumptions of \cref{00461cedb4} in~\(\Cat{Pr}_{\textnormal{st}}\). \end{example} \begin{example}\label{b5898d2f13} Let \(K\) be an \(\infty\)-category and \(j\colon K_1\hookrightarrow K\) be an inclusion of a cosieve. A direct computation shows that \(f =j^*\colon\operatorname{Fun}(K,\Cat{S})\to\operatorname{Fun}(K_1,\Cat{S})\) and its left adjoint \(g\) satisfy the assumptions of \cref{00461cedb4} in~\(\Cat{Pr}\). This is a special case of \cref{f7ea8c5d40} below. \end{example} \begin{example}\label{68011c91d6} Let \(p\colon\cat{Y}\to\cat{X}\) be a proper geometric morphism between \(\infty\)-toposes. 
According to \cref{90680bda5b}, \(f=p^*\colon\operatorname{Shv}_{\Cat{Sp}}(\cat{X})\to\operatorname{Shv}_{\Cat{Sp}}(\cat{Y})\) and its right adjoint~\(g\) satisfy the assumptions of \cref{00461cedb4} in~\(\Cat{Pr}_{\textnormal{st}}\). \end{example} \begin{example}\label{f7ea8c5d40} Let \(j\colon\cat{Y}\to\cat{X}\) be an étale geometric morphism between \(\infty\)-toposes. As noted in \autocite[Remark~6.3.5.2]{LurieHTT}, \(f=j^*\colon\operatorname{Shv}(\cat{X})\to\operatorname{Shv}(\cat{Y})\) and its left adjoint~\(g\) satisfy the assumptions of \cref{00461cedb4} in~\(\Cat{Pr}\). \end{example} \begin{proof}[Proof of \cref{00461cedb4}] In this proof, in order to simplify the notation, we write \((\text{--},\text{--})\) for \(\operatorname{Map}(C\otimes\text{--},D\otimes\text{--})\) or \([C\otimes\text{--},D\otimes\text{--}]\) if the symmetric monoidal structure is closed. For the first square, we construct the \(2\)-cells in the diagram \begin{equation*} \begin{tikzcd} &&(A,A\otimes A)\ar[r]\ar[d,"{(g,A\otimes A)}"]& (A,A)\ar[r]\ar[d,"{(g,A)}"]& (A,\mathbf{1})\ar[d,"{(g,\mathbf{1})}"]\\ (\mathbf{1},A)\ar[r]\ar[d,"{(\mathbf{1},f)}"']\ar[rru]& (B,A\otimes B)\ar[r]\ar[d,"{(B,f\otimes B)}"']& (B,A\otimes A)\ar[r]& (B,A)\ar[r]& (B,\mathbf{1})\rlap.\\ (\mathbf{1},B)\ar[r]& (B,B\otimes B)\ar[r]& (B,B)\ar[ru] \end{tikzcd} \end{equation*} We obtain the triangle and the four rectangles by naturality. We also obtain the pentagon by the linearity of~\(g\) and the functoriality of \((B,\text{--})\). 
For the second square, we construct the \(2\)-cells in the diagram \begin{equation*} \begin{tikzcd} &&(B,B\otimes B)\ar[r]\ar[d,"{(f,B\otimes B)}"]& (B,B)\ar[r]\ar[d,"{(f,B)}"]& (B,A)\ar[r]\ar[d,"{(f,A)}"]& (B,\mathbf{1})\ar[d,"{(f,\mathbf{1})}"]\\ (\mathbf{1},B)\ar[r]\ar[d,"{(\mathbf{1},g)}"']\ar[rru]& (A,B\otimes A)\ar[r]\ar[d,"{(A,g\otimes A)}"']& (A,B\otimes B)\ar[r]& (A,B)\ar[r]& (A,A)\ar[r]& (A,\mathbf{1})\rlap.\\ (\mathbf{1},A)\ar[r]& (A,A\otimes A)\ar[rrru] \end{tikzcd} \end{equation*} We obtain the upper triangle and the four rectangles by naturality. We also obtain the lower triangle by the linearity of~\(g\) and the functoriality of \((A,\text{--})\). \end{proof} We record the following obvious consequence: \begin{corollary}\label{ee41f58df5} In the situation of \cref{00461cedb4}, assume furthermore that \((A,l)\) is Frobenius and that \(f\circ g\) is homotopic to the identity. Then \((B,l\circ g)\) is also Frobenius. \end{corollary} We also have the following variant: \begin{lemma}\label{e894284df5} Suppose that \(f\colon A\to B\) is a morphism of commutative algebras in a symmetric monoidal \(\infty\)-category. Then for every morphism \(m\colon B\to\mathbf{1}\) and any objects \(C\) and~\(D\), there is a commutative square \begin{equation*} \begin{tikzcd} \operatorname{Map}(C,D\otimes A)\ar[r]\ar[d,"(D\otimes f)\circ\text{--}"']& \operatorname{Map}(C\otimes A,D)\\ \operatorname{Map}(C,D\otimes B)\ar[r]& \operatorname{Map}(C\otimes B,D)\ar[u,"\text{--}\circ(C\otimes f)"']\rlap, \end{tikzcd} \end{equation*} where the horizontal morphisms are the ones associated to \((A,m\circ f)\) and \((B,m)\), respectively. Moreover, if the symmetric monoidal structure is closed, the same thing holds when the mapping spaces are replaced by the internal mapping objects. \end{lemma} \begin{proof} We use the same notation as in the proof of \cref{00461cedb4}. 
We construct the \(2\)-cells in the diagram \begin{equation*} \begin{tikzcd} (\mathbf{1},A)\ar[r]\ar[d,"{(\mathbf{1},f)}"']& (A,A\otimes A)\ar[r]\ar[d,"{(A,f\otimes f)}"']& (A,A)\ar[d,"{(A,f)}"']\ar[rd]& {}\\ (\mathbf{1},B)\ar[rd]& (A,B\otimes B)\ar[r]& (A,B)\ar[r]& (A,\mathbf{1})\\ {}& (B,B\otimes B)\ar[u,"{(f,B\otimes B)}"']\ar[r]& (B,B)\ar[r]\ar[u,"{(f,B)}"']& (B,\mathbf{1})\rlap.\ar[u,"{(f,\mathbf{1})}"'] \end{tikzcd} \end{equation*} We obtain the upper square since \(f\) is a morphism of commutative algebras. We also obtain the other cells by naturality. \end{proof} \section{Homotopy theory of posets}\label{s-poset} \subsection{Poset cohomology}\label{ss-h-poset} \begin{definition}\label{69fc302def} For a poset~\(P\), we write \(\lvert P\rvert\) for the geometric realization (as a topological space) of its nerve and \(\operatorname{\Delta}(P)\) for its \emph{order complex}, i.e., the abstract simplicial complex consisting of finite (nonempty) chains in~\(P\). Note that \(\lvert P\rvert\) is canonically homeomorphic to the geometric realization of \(\operatorname{\Delta}(P)\). \end{definition} In this subsection, we study how the cohomology of~\(\lvert P\rvert\) and that of~\(P\), i.e., the sheaf cohomology of \(\operatorname{Fun}(P,\Cat{S})\), are related. We first recall the following from \autocite[Section~A.1]{LurieHA}: \begin{definition}\label{6c3f6be635} We say that an \(\infty\)-topos \(\cat{X}\) has \emph{constant shape} if the shape \(\operatorname{Sh}\cat{X}\) is corepresentable. If \(\cat{X}_{/X}\) has constant shape for every \(X\in\cat{X}\), we say that \(\cat{X}\) is \emph{locally of constant shape}. According to \autocite[Proposition~A.1.18]{LurieHA}, this is equivalent to the condition that the constant sheaf functor \(\cat{S}\to\cat{X}\) admits a left adjoint. \end{definition} \begin{example}\label{940a2e3d8c} The presheaf \(\infty\)-topos of an \(\infty\)-category is locally of constant shape. 
Its shape is the image under the left adjoint of \(\Cat{S}\hookrightarrow\Cat{Cat}_{\infty}\). \end{example} \begin{example}\label{df7429c55a} The sheaf \(\infty\)-topos of a CW complex is locally of constant shape. Its shape is the homotopy type. In fact, any CW complex is locally of singular shape in the sense of \autocite[Section~A.4]{LurieHA} as any open subspace is homotopy equivalent to a CW complex. \end{example} \begin{proposition}\label{cb58ae3d6a} If an \(\infty\)-topos \(\cat{X}\) is locally of constant shape, for any spectrum~\(E\), the canonical morphism \([\Sigma^{\infty}_+\operatorname{Sh}\cat{X},E] \to\Gamma(\cat{X};E)\) is an equivalence. Here \([\text{--},\text{--}]\) denotes the mapping spectrum. \end{proposition} \begin{proof} Let \(p\colon\cat{X}\to\Cat{S}\) denote the projection. By assumption, \(p^*\) admits a left adjoint~\(p_!\). If we regard objects in \(\operatorname{Shv}_{\Cat{Sp}}(\text{--})\) as limit-preserving functors \((\text{--})^{\operatorname{op}}\to\Cat{Sp}\), the spectrum-valued pullback \(\Cat{Sp}\to\operatorname{Shv}_{\Cat{Sp}}(\cat{X})\) is given as the precomposition with~\((p_!)^{\operatorname{op}}\). Therefore, \(\Gamma(\cat{X};E)\simeq p_*p^*E\) is given by the value of \(E\colon\Cat{S}^{\operatorname{op}}\to\Cat{Sp}\) at \(p_!p^*{*}\simeq\operatorname{Sh}\cat{X}\), which is the cohomology \([\Sigma^{\infty}_+\operatorname{Sh}\cat{X},E]\). \end{proof} \begin{corollary}\label{afc4ccfff7} For a poset~\(P\) and a spectrum~\(E\), we have a functorial (both in~\(P\) and in~\(E\)) equivalence \(\Gamma(P;E)\simeq\Gamma(\lvert P\rvert;E)\), where \(E\) denotes the constant sheaves on the \(\infty\)-toposes \(\operatorname{Fun}(P,\Cat{S})\) and \(\operatorname{Shv}(\lvert P\rvert)\), respectively. \end{corollary} \begin{proof} This follows from \cref{940a2e3d8c,df7429c55a,cb58ae3d6a}.
\end{proof} \begin{remark}\label{f2eee8dfff} In fact, at least if \(P_{\geq p}\) is finite for \(p\in P\), we can construct a canonical geometric morphism \(\operatorname{Shv}(\lvert P\rvert)\to\operatorname{Fun}(P,\Cat{S})\) whose inverse image functor is fully faithful. This shows that we can take any functor \(E\colon P\to\Cat{Sp}\) as a coefficient in the statement of \cref{afc4ccfff7}, but we do not need this generality in this paper. \end{remark} \subsection{Gorenstein* posets}\label{ss-g} We first recall the following notion from combinatorial commutative algebra. See \autocite[Chapter~II]{Stanley96} for a textbook account, which in particular explains where the name comes from. \begin{definition}\label{fe7384d8c5} We call an \(n\)-dimensional\footnote{Here \(n\) can be \(-1\), so that the empty complex is Gorenstein*.} finite abstract simplicial complex \emph{Gorenstein*} if its geometric realization is a generalized homology \(n\)-sphere, i.e., an (integral) homology \(n\)-manifold having the (integral) homology of an \(n\)-sphere. \end{definition} The following definition is a variant of the definition of a Cohen--Macaulay poset given in \autocite[Section~3]{Baclawski80}. \begin{definition}\label{8e20e56844} We call a finite poset~\(P\) \emph{Gorenstein*} if for every \(p<q\) in \(P_{\bot,\top}\) the interval \((p,q)\) has the (integral) homology of a sphere\footnote{ We regard \(S^{-1}=\emptyset\) as a sphere. }. \end{definition} By definition if \(P\) is a Gorenstein* finite poset then \((p,q)\) is Gorenstein* for every \(p<q\in P_{\bot,\top}\). \begin{lemma}\label{37f851549b} Any maximal chain of a Gorenstein* finite poset~\(P\) has the same length. In other words, \(P_{\bot,\top}\) admits a rank function\footnote{ A \emph{rank function} on a finite poset~\(P\) is a function \(r\colon P\to\mathbf{Z}\) such that \(r(q)=r(p)+1\) if \(q\) is an immediate successor of~\(p\). }. 
\end{lemma} \begin{proof} This holds more generally for Cohen--Macaulay finite posets; see \autocite[Proposition~3.1]{Baclawski80}. \end{proof} These two definitions are compatible: \begin{proposition}\label{35d43b2aa2} For a finite poset~\(P\), it is Gorenstein* if and only if \(\operatorname{\Delta}(P)\) is Gorenstein*. \end{proposition} We omit the proof since it is a straightforward variant of \autocite[Proposition~3.3]{Baclawski80}. \begin{corollary}\label{594984da1c} For a finite abstract simplicial complex, it is Gorenstein* if its underlying poset is Gorenstein*. \end{corollary} We later need the following lemma, as we prefer cohomology: \begin{lemma}\label{94f6eb4680} For a finite poset, the Gorenstein* condition can be checked via cohomology instead of homology; i.e., \(P\) is Gorenstein* if and only if \((p,q)\) has the cohomology of a sphere for \(p<q\) in \(P_{\bot,\top}\). \end{lemma} \begin{proof} By definition \(P\) is Gorenstein* if and only if so is \(P^{\operatorname{op}}\). Hence the desired result follows from the self-duality of the \(\infty\)-category of perfect complexes over~\(\mathbf{Z}\). \end{proof} \subsection{A vanishing result}\label{ss-van} \begin{definition}\label{96f462d60f} Let \(P\) be a poset and \(E\) a spectrum. For \(p\leq q\) in~\(P\), we let \(E_{[p,q]}\in\operatorname{Fun}(P,\Cat{Sp})\) denote the functor obtained from the constant functor \(E\in\operatorname{Fun}([p,q],\Cat{Sp})\) by left Kan extending along \([p,q]\hookrightarrow P_{\leq q}\) and then right Kan extending along \(P_{\leq q}\hookrightarrow P\). If \(E\in\operatorname{D}(\mathbf{Z})\), we use the same symbol for the element in \(\operatorname{Fun}(P,\operatorname{D}(\mathbf{Z}))\) determined similarly. \end{definition} \begin{proposition}\label{271a46a032} For a Gorenstein* finite poset~\(P\), for every \(p<q\) in \(P_{\top}\) the cohomology \(\Gamma(P_{\top};\mathbf{Z}_{[p,q]})\) vanishes. \end{proposition} We later prove the converse. 
\begin{proof} Since \(\Gamma(P_{\top};\mathbf{Z}_{[p,q]}) \simeq\Gamma((P_{\top})_{\leq q};\mathbf{Z}_{[p,q]})\) holds and \((P_{\top})_{<q}\) is also Gorenstein*, we can assume \(q=\top\). By \cref{35d43b2aa2}, there is a unique rank function \(r\colon P_{\bot,\top}\to\mathbf{Z}\) satisfying \(r(\bot)=-1\). Then \(\lvert P\rvert\) is a generalized homology \((r(\top)-1)\)-sphere. If \(r(\top)=1\) holds, \(P\) is the discrete poset with two elements and the result can be directly checked. So we henceforth assume \(r(\top)>1\). In what follows, we repeatedly use \cref{afc4ccfff7}. Let \(\mathbf{Z}_{\geq p}\) denote the left Kan extension of the constant functor with value \(\mathbf{Z}\) along \(P_{\geq p}\hookrightarrow P\). Then the pullback diagram \begin{equation*} \begin{tikzcd} \Gamma(P_{\top};\mathbf{Z}_{[p,\top]})\ar[r]\ar[d]& \Gamma(P_{\top};\mathbf{Z})\ar[d,"f"]\\ \Gamma(P;\mathbf{Z}_{\geq p})\ar[r,"g"]& \Gamma(P;\mathbf{Z}) \end{tikzcd} \end{equation*} can be formed in \(\operatorname{D}(\mathbf{Z})\). Here \(f\) is induced by \(P\to P_{\top}\). As this morphism of posets induces an isomorphism on \(H^0(\lvert\text{--}\rvert;\mathbf{Z})\), we see that \(f\) induces an isomorphism on~\(\pi_0\). On the other hand, \(\Gamma(P;\mathbf{Z}_{\geq p})\) is computed as the relative cohomology \(\operatorname{fib}(\Gamma(P;\mathbf{Z})\to\Gamma(P\setminus P_{\geq p};\mathbf{Z}))\). By the Lefschetz duality theorem, it is the dual of \(\Sigma^{r(\top)-1}\Gamma(P_{\geq p};\mathbf{Z})\) in \(\operatorname{D}(\mathbf{Z})\), which is \(\Sigma^{1-r(\top)}\mathbf{Z}\), and \(g\) induces an isomorphism on~\(\pi_{1-r(\top)}\) as \(\lvert P_{\geq p}\rvert\) is connected. Therefore, \(f\) and~\(g\) can be identified with the two direct summand inclusions of \(\Gamma(P;\mathbf{Z})\simeq\mathbf{Z}\oplus\Sigma^{1-r(\top)}\mathbf{Z}\), from which \(\Gamma(P_{\top};\mathbf{Z}_{[p,\top]})\simeq0\) follows. 
\end{proof} \begin{proof}[Proof of \(\text{\cref{i-gorenstein}}\Rightarrow\text{\cref{i-vanishing}}\) of \cref{main}] This follows from \cref{271a46a032}. \end{proof} We note that this vanishing also holds when \(\mathbf{Z}\) is replaced by~\(\mathbf{S}\): \begin{lemma}\label{sz} For a finite poset~\(P\) and \(p\leq q\) in \(P\), the cohomology \(\Gamma(P;\mathbf{Z}_{[p,q]})\) vanishes if and only if so does \(\Gamma(P;\mathbf{S}_{[p,q]})\). \end{lemma} \begin{proof} Note that if a spectrum \(E\) is nonzero and bounded below, \(E\otimes\mathbf{Z}\) is also nonzero; this can be seen by considering the smallest \(i\) such that \(\pi_iE\) is nonzero. Hence the desired result follows from \(\Gamma(P;\mathbf{Z}_{[p,q]})\simeq\Gamma(P;\mathbf{S}_{[p,q]})\otimes\mathbf{Z}\) and the fact that \(\Gamma(P;\mathbf{S}_{[p,q]})\) is bounded below, both of which follow from the finiteness of~\(P\). \end{proof} \section{Verdier duality for finite posets}\label{s-main} \subsection{Recollements}\label{ss-recolle} We refer the reader to \autocite[Section~A.8]{LurieHA} for a discussion on recollements using \(\infty\)-categories. When we say \(\cat{C}_0\) and \(\cat{C}_1\) form a recollement, \(\cat{C}_0\) is supposed to be the ``closed'' part; i.e., the \(\cat{C}_1\)-localization annihilates~\(\cat{C}_0\). We abuse terminology to say the two functors \(\cat{C}_0\hookrightarrow\cat{C}\) and \(\cat{C}_1\hookrightarrow\cat{C}\) determine a recollement when \(\cat{C}\) is a recollement of their images. We recall the following standard fact: \begin{lemma}\label{a9ca3865c3} Consider a presentable stable \(\infty\)-category \(\cat{C}\) and suppose that \(j\colon P_1\hookrightarrow P\) is the inclusion of an upward-closed full subposet with complement \(i\colon P_0\hookrightarrow P\). Let \(i_*\) and~\(j_*\) denote the right Kan extension along~\(i\) and~\(j\) and \(j_!\) the left Kan extension along~\(j\).
Then the following hold for the functor \(\infty\)-category \(\operatorname{Fun}(P,\cat{C})\): \begin{enumerate} \item \label{i-sheaf} The functors \(i_*\) and \(j_*\) form a recollement. \item \label{i-cosheaf} The functors \(j_!\) and \(i_*\) also form a recollement. \end{enumerate} \end{lemma} \begin{proof} Both can be easily checked by using \autocite[Proposition~A.8.20]{LurieHA} and observing that \(i_*\) and \(j_!\) are given as the extension-by-zero functors. \end{proof} \begin{lemma}\label{2953dbadf2} Consider a left exact functor \(f\colon\cat{C}\to\cat{C}'\) and suppose that \(\cat{C}\) and \(\cat{C}'\) are recollements of \(\cat{C}_0\) and~\(\cat{C}_1\) and of \(\cat{C}_0'\) and~\(\cat{C}_1'\), respectively. Furthermore, assume the following: \begin{itemize} \item The functor \(f\) restricts to define equivalences \(\cat{C}_0\to\cat{C}_0'\) and \(\cat{C}_1\to\cat{C}_1'\). \item The morphism \(L_0'\circ f\to f\circ L_0\) obtained from the first condition is an equivalence. \end{itemize} Then \(f\) itself is an equivalence. \end{lemma} \begin{proof} This follows from \autocite[Proposition~A.8.14]{LurieHA}. \end{proof} \subsection{Duality and the vanishing condition}\label{ss-fin} We prove that conditions \cref{i-verdier,i-vanishing} in \cref{main} are equivalent. We start with a pointwise description of~\(\mathbb{D}\): \begin{lemma}\label{f6072744fd} Let \(K\) be a finite \(\infty\)-category so that \(\projlim\colon\operatorname{Fun}(K,\Cat{Sp})\to\Cat{Sp}\) is a morphism in \(\Cat{Pr}_{\textnormal{st}}\).
Then the potential duality functor \begin{equation*} \mathbb{D}\colon \operatorname{Fun}(K,\Cat{Sp})\longrightarrow [\operatorname{Fun}(K,\Cat{Sp}),\Cat{Sp}]\simeq\operatorname{Fun}(K^{\operatorname{op}},\Cat{Sp}) \end{equation*} induced by the composite \( \operatorname{Fun}(K,\Cat{Sp}) \otimes\operatorname{Fun}(K,\Cat{Sp}) \xrightarrow{\text{--}\otimes\text{--}} \operatorname{Fun}(K,\Cat{Sp}) \xrightarrow{\projlim}\Cat{Sp} \) (cf.~\cref{e-90ab6b96}) is objectwise given by \begin{equation} \label{e-8b7f0497} \phantom, F\longmapsto\biggl( k\longmapsto\projlim_{l\in K}\operatorname{Map}(k,l)\otimes F(l) \biggr), \end{equation} where \(\otimes\) denotes the copower. \end{lemma} \begin{proof} By definition, \(\mathbb{D}\) is the composite \(\operatorname{Fun}(K,\Cat{Sp})\to \operatorname{Fun}(K^{\operatorname{op}}\times K\times K,\Cat{Sp})\to \operatorname{Fun}(K^{\operatorname{op}},\Cat{Sp})\), where the first and second maps are objectwise given by \(F\mapsto((k,l,m)\mapsto\operatorname{Map}(k,l)\otimes F(m))\) and \(G\mapsto\bigl(k\mapsto\projlim_lG(k,l,l)\bigr)\), respectively. \end{proof} \begin{proof}[Proof of \(\text{\cref{i-vanishing}}\Rightarrow\text{\cref{i-verdier}}\) of \cref{main}] We show by induction on~\(\#P\) that \(\mathbb{D}_P\) is an equivalence, which is equivalent to the Verdier property by \cref{a3ad5064a0}. If \(P=\emptyset\), the claim is obvious. We assume \(\#P>0\) and pick a maximal element \(m\in P\). 
Since \(\{m\}\) is upward closed, by applying \cref{a9ca3865c3} to \(j\colon\{m\}\hookrightarrow P\) and \(i\colon P\setminus\{m\}\hookrightarrow P\), we can form two recollements, which fit into a diagram \begin{equation} \label{e-a7e63710} \begin{tikzcd} \operatorname{Fun}(P\setminus\{m\},\Cat{Sp})\ar[r,"i_*",hook]\ar[d,"\mathbb{D}_{P\setminus\{m\}}"']& \operatorname{Fun}(P,\Cat{Sp})\ar[d,"\mathbb{D}_{P}"']& \Cat{Sp}\ar[l,"j_*"',hook']\ar[d,dashed]\\ \operatorname{Fun}((P\setminus\{m\})^{\operatorname{op}},\Cat{Sp})\ar[r,"i_+",hook]& \operatorname{Fun}(P^{\operatorname{op}},\Cat{Sp})& \Cat{Sp}\rlap,\ar[l,"j_?"',hook'] \end{tikzcd} \end{equation} where the identification \(\operatorname{Fun}(\{m\},\Cat{Sp}) \simeq\Cat{Sp}\simeq\operatorname{Fun}(\{m\}^{\operatorname{op}},\Cat{Sp})\) is made and \(j_?\) denotes the right adjoint of~\(j^+\). \Cref{00461cedb4} applied to \cref{50e6a4f108} says that the left square commutes and that the canonical morphism \(i^?\circ\mathbb{D}_P\to\mathbb{D}_{P\setminus\{m\}}\circ i^*\) is an equivalence, where \(i^?\) denotes the left adjoint of~\(i_+\). We conclude the proof by applying \cref{2953dbadf2}. Since \(\mathbb{D}_{P\setminus\{m\}}\) is an equivalence by our inductive hypothesis, it remains to check that \(\mathbb{D}_P\) restricts to define the dashed arrow and that it is an equivalence. Equivalently, we need to show that the composite \begin{equation*} \Cat{Sp} \xrightarrow{j_*}\operatorname{Fun}(P,\Cat{Sp}) \xrightarrow{\mathbb{D}_P}\operatorname{Fun}(P^{\operatorname{op}},\Cat{Sp}) \xrightarrow{\text{restriction}}\operatorname{Fun}(\{p\}^{\operatorname{op}},\Cat{Sp}) \simeq\Cat{Sp} \end{equation*} is zero for \(p\neq m\) and an equivalence for \(p=m\). Note that since this functor is colimit-preserving, it is determined by its value at~\(\mathbf{S}\), for which we write \(E_p\). \Cref{f6072744fd} says that \(E_p\) is computed as \(\projlim_{q\in P}\operatorname{Map}(p,q)\otimes(j_*\mathbf{S})(q)\).
If \(p\nleq m\), the spectrum \(E_p\) is zero as \((j_*\mathbf{S})(q)=0\) holds for \(q\nleq m\). If \(p\leq m\), the spectrum \(E_p\) is equivalent to the cohomology \(\Gamma(P;\mathbf{S}_{[p,m]})\). Hence \(E_p\) is zero for \(p<m\) by the assumption~\cref{i-vanishing} and \cref{sz}. Therefore, it remains to compute \(E_m\simeq\Gamma(P;\mathbf{S}_{[m,m]})\). We pick a maximal chain \(p_0<\dotsb<p_r=m\) in \(P\). For \(i=1\), \dots,~\(r\), by using \(\Gamma(P;\mathbf{S}_{[p_{i-1},p_{i}]})=0\), we have \begin{equation*} \phantom. \Gamma(P;\mathbf{S}_{[p_i,p_i]}) \simeq\operatorname{fib}\bigl(\Gamma(P;\mathbf{S}_{[p_{i-1},p_i]}) \to\Gamma(P;\mathbf{S}_{[p_{i-1},p_{i-1}]})\bigr) \simeq\Sigma^{-1}\Gamma(P;\mathbf{S}_{[p_{i-1},p_{i-1}]}). \end{equation*} Thus we have \(\Gamma(P;\mathbf{S}_{[m,m]})=\Sigma^{-r}\mathbf{S}\). Hence the dashed arrow in \cref{e-a7e63710} is identified with the functor \(\Sigma^{-r}\), which is an equivalence. \end{proof} \begin{proof}[Proof of \(\text{\cref{i-verdier}}\Rightarrow\text{\cref{i-vanishing}}\) of \cref{main}] We proceed by induction on~\(\#P\). If \(P=\emptyset\), the claim is obvious. We assume \(P\neq\emptyset\) and pick a maximal element~\(m\). Then \(P\setminus\{m\}\) is also Verdier by \cref{ee41f58df5} applied to \cref{50e6a4f108}. Hence by our inductive hypothesis, it suffices to show that \(\Gamma(P;\mathbf{S}_{[p,m]})\) vanishes for any \(p<m\). We now form the diagram~\cref{e-a7e63710}, but the dashed arrow already exists in this case since \(\mathbb{D}_P\) is an equivalence. As we have observed in the above proof, the existence of the dashed arrow in particular means that \(\Gamma(P;\mathbf{S}_{[p,m]})\) vanishes for \(p<m\), which by \cref{sz} is what we wanted to show. \end{proof} \subsection{The Gorenstein* condition from duality}\label{ss-up} We finally complete the proof of \cref{main}.
The main ingredient is the following nontrivial observation: \begin{proposition}\label{b644f9030c} If a finite poset~\(P\) is Verdier, \(P_{>p}\) is also Verdier for any \(p\in P\). \end{proposition} The proof requires the following trivial observations: \begin{lemma}\label{6e50e2411f} Let \(P\) be a finite poset. For \(p\in P\), let \(\mathbf{S}_{\leq p}\) denote the right Kan extension of the constant functor with value~\(\mathbf{S}\) along \(P_{\leq p}\hookrightarrow P\). Then \(\operatorname{Fun}(P,\Cat{Sp})\) is generated by the functors \(\mathbf{S}_{\leq p}\) for \(p\in P\) under colimits and shifts. \end{lemma} \begin{proof} We proceed by induction on \(\#P\). There is nothing to prove if \(P=\emptyset\). Assume otherwise and pick a maximal element \(m\in P\). Let \(\cat{C}\subset\operatorname{Fun}(P,\Cat{Sp})\) be the full subcategory generated by the functors \(\mathbf{S}_{\leq p}\) under colimits and shifts. The inductive hypothesis implies that \(F\in\operatorname{Fun}(P,\Cat{Sp})\) is in \(\cat{C}\) if \(F(m)\) is zero. Hence for any \(F\in\operatorname{Fun}(P,\Cat{Sp})\), the fiber of \(F\to F(m)\otimes\mathbf{S}_{\leq m}\) is in \(\cat{C}\). Since \(F(m)\otimes\mathbf{S}_{\leq m}\) is also in \(\cat{C}\) by construction, we have \(F\in\cat{C}\). \end{proof} \begin{lemma}\label{b334c9cd53} Let \(P\) be a (not necessarily finite) poset. Then \(E_{[\bot,p]} \colon P_{\bot}\to\Cat{Sp}\) is a limit diagram for any \(p\in P\) and any \(E\in\Cat{Sp}\). \end{lemma} \begin{proof} We can assume that \(p\) is the greatest element by replacing~\(P\) with \(P_{\leq p}\). Then the result follows from \cref{afc4ccfff7} or, more directly, the observation that now \(P\) is weakly contractible. \end{proof} \begin{proof}[Proof of \cref{b644f9030c}] By induction, it suffices to consider the case where \(p\) is minimal. Let \(j\) denote the inclusion \(P_{>p}\hookrightarrow P\) and \(j_!\colon\operatorname{Fun}(P_{>p},\Cat{Sp}) \to\operatorname{Fun}(P,\Cat{Sp})\) the left Kan extension functor.
\Cref{ee41f58df5} applied to \cref{b5898d2f13} says that the pair \((\operatorname{Fun}(P_{>p},\Cat{Sp}), \Gamma_P\circ j_!)\) is a commutative Frobenius algebra. Hence it suffices to construct an equivalence \(\Gamma_P\circ j_!\simeq \Sigma^{-1}\circ\Gamma_{P_{>p}}\) in \(\operatorname{Fun}(\operatorname{Fun}(P_{>p},\Cat{Sp}),\Cat{Sp})\). We write \(j\) as the composite \(P_{>p}\xhookrightarrow{k}P_{\geq p}\xhookrightarrow{l}P\). Then we have a morphism \(j_!\simeq l_!\circ k_!\to l_!\circ k_*\), where \((\text{--})_!\) and \((\text{--})_*\) denote the left and right Kan extension functors. By applying \(\Gamma_P\circ\text{--}\), we obtain \(\Gamma_P\circ j_!\to\Gamma_P\circ l_!\circ k_*\). Since its cofiber can be computed as \begin{equation*} \phantom, \Gamma_P\circ l_!\circ\operatorname{cofib}(k_!\to k_*) \simeq \Gamma_P\circ l_*\circ\operatorname{cofib}(k_!\to k_*) \simeq \Gamma_{P_{\geq p}}\circ\operatorname{cofib}(k_!\to k_*) \simeq \Gamma_{P_{>p}} , \end{equation*} where we use the minimality of~\(p\), we are reduced to showing that \(\Gamma_P\circ l_!\circ k_*\) is zero. Let \(\cat{C}\) denote the full subcategory of \(\operatorname{Fun}(P_{\geq p},\Cat{Sp})\) spanned by the limit diagrams. We need to show that \(\Gamma_P\circ l_!\) is zero on~\(\cat{C}\). We now observe that \(\cat{C}\) is generated under colimits and shifts by \(\mathbf{S}_{[p,q]}\) for \(q\in P_{>p}\): First, they are indeed limit diagrams by \cref{b334c9cd53}. Then it follows from \cref{6e50e2411f} that \(\operatorname{Fun}(P_{>p},\Cat{Sp})\) is generated by their restrictions. Therefore, we need to show that \((\Gamma_P\circ l_!)(\mathbf{S}_{[p,q]}) \simeq\Gamma_P(\mathbf{S}_{[p,q]})\) is zero, but this follows from~\cref{i-vanishing} of \cref{main}; note that we have already proven \(\text{\cref{i-verdier}}\Rightarrow\text{\cref{i-vanishing}}\) in \cref{ss-fin}.
\end{proof} \begin{proof}[Proof of \(\text{\cref{i-verdier}}\Rightarrow\text{\cref{i-gorenstein}}\) of \cref{main}] Let \(P\) be a Verdier finite poset. According to \cref{94f6eb4680}, it suffices to show that the open interval \((p,q)\) has the (integral) \emph{cohomology} of a sphere for \(p<q\) in \(P_{\bot}\). Since we know that \(P_{\leq q}\) is Verdier from \(\text{\cref{i-verdier}}\Leftrightarrow\text{\cref{i-vanishing}}\), we can assume that \(q\) is the greatest element of~\(P\). We can also assume that \(p=\bot\) by \cref{b644f9030c}. Hence it remains to compute the cohomology of~\(P_{<q}\), which is the fiber of \(\Gamma(P;\mathbf{Z})\to\Gamma(P;\mathbf{Z}_{[q,q]})\). If \(P\) is a singleton, it is obviously zero. Otherwise, \cref{b334c9cd53} says \(\Gamma(P;\mathbf{Z})\simeq\mathbf{Z}\) and the last part of the proof of \(\text{\cref{i-vanishing}}\Rightarrow\text{\cref{i-verdier}}\) says that \(\Gamma(P;\mathbf{Z}_{[q,q]})\) is some positive desuspension of~\(\mathbf{Z}\). Therefore, \(P_{<q}\) has the cohomology of a sphere. \end{proof} We then obtain \cref{0b4fe90159} as a bonus: \begin{proof}[Proof of \cref{0b4fe90159}] By the standard (stable) Yoneda argument, we can assume \(\cat{C}=\Cat{Sp}\). Since \(P^{\operatorname{op}}\) is also Gorenstein*, it suffices to show that any limiting diagram \(P_{\bot,\top}\to\Cat{Sp}\) is colimiting. Then as in the proof of \cref{b644f9030c}, it suffices to show that \(\mathbf{S}_{[\bot,p]}\in\operatorname{Fun}(P_{\bot,\top},\Cat{Sp})\) is colimiting for \(p\in P_{\top}\). Since this is trivial for \(p=\top\) as \(P_{\bot}\) is weakly contractible, we assume otherwise. By the self-duality of the \(\infty\)-category of finite spectra, we are reduced to showing that \(\mathbf{S}_{[p,\bot]}\in\operatorname{Fun}((P_{\bot,\top})^{\operatorname{op}},\Cat{Sp})\) is limiting.
As \(p\neq\top\), this is equivalent to the vanishing of the cohomology of \(\mathbf{S}_{[p,\bot]}\in\operatorname{Fun}((P_{\bot})^{\operatorname{op}},\Cat{Sp})\), which follows from \cref{main}. \end{proof} \section{Variants}\label{s-v} In this section, we prove \cref{lf-arbitrary} and show that our duality can be regarded as a topological sheaf-cosheaf duality. \subsection{For locally finite posets}\label{ss-lf} The equivalence \cref{e-26eeba67} exists for the face poset of a locally finite regular CW complex. We extend our duality to cover that case. \begin{definition}\label{8771652871} We say that a poset~\(P\) is \emph{locally finite} if \(P_{\geq p}\) is finite for every \(p\in P\). \end{definition} This terminology is justified by considering the Alexandroff topology of~\(P\) (see \cref{e407564b20}). \begin{definition}\label{3908ebac89} For a poset~\(P\), we write \(\operatorname{P}^{\textnormal{fin}}(P)\) for the poset of finite subsets. We write \(\operatorname{Down}(P)\) for the poset of sieves, i.e., downward-closed full subposets. We consider the functor \begin{equation} \label{e-8591975d} \operatorname{P}^{\textnormal{fin}}(P)^{\triangleright}\longrightarrow \operatorname{Down}(P) \end{equation} given by \(S\mapsto \bigcup_{s\in S}P_{\leq s}\) and \({\infty}\mapsto P\). \end{definition} We prove that for a nice poset~\(P\), the presheaf \(\infty\)-categories of \(P\) and of its opposite can be recovered from those of full subposets of the form \(\bigcup_{s\in S}P_{\leq s}\) for finite~\(S\) by taking colimits in~\(\Cat{Pr}\). \begin{proposition}\label{8d61256fe2} For any poset~\(P\) and any presentable \(\infty\)-category~\(\cat{C}\), the diagram given by the composite \begin{equation*} \operatorname{P}^{\textnormal{fin}}(P)^{\triangleright} \xrightarrow{\text{\cref{e-8591975d}}} \operatorname{Down}(P) \xrightarrow{(\operatorname{PShv}_{\cat{C}}(\text{--}),\text{--}_!)} \Cat{Pr} \end{equation*} is colimiting.
\end{proposition} \begin{proof} First, note that the diagram \(\operatorname{P}^{\textnormal{fin}}(P)^{\triangleright}\to\operatorname{Down}(P)\to\Cat{Poset}\) is colimiting. Since \(\operatorname{P}^{\textnormal{fin}}(P)\) is filtered, its composite with \(\Cat{Poset}\hookrightarrow\Cat{Cat}_{\infty}\) is also colimiting, from which the result follows. \end{proof} \begin{proposition}\label{13a6000b87} Suppose that \(P\) is a locally finite poset and \(\cat{C}\) is a compactly generated pointed \(\infty\)-category. Then the diagram given by the composite \begin{equation*} \operatorname{P}^{\textnormal{fin}}(P)^{\triangleright} \xrightarrow{\text{\cref{e-8591975d}}} \operatorname{Down}(P) \xrightarrow{(\operatorname{Fun}(\text{--},\cat{C}),\text{--}_*)} \Cat{Pr} \end{equation*} is colimiting. Here the second arrow is well defined by \cref{0ec8cf3aa6} below. \end{proposition} The proof requires several lemmas: \begin{lemma}\label{cbb999bf61} For a poset~\(P\), let \(\operatorname{Down}^{\textnormal{fin}}(P)\) be the image of \(\operatorname{P}^{\textnormal{fin}}(P)\) under \cref{e-8591975d}. Then \(\operatorname{P}^{\textnormal{fin}}(P)\to\operatorname{Down}^{\textnormal{fin}}(P)\) is cofinal. \end{lemma} \begin{proof} This follows from Joyal's version of Quillen's theorem~A and the fact that a nonempty poset having binary joins is weakly contractible. \end{proof} \begin{lemma}\label{0ec8cf3aa6} Let \(i\colon K_0\hookrightarrow K\) be a sieve inclusion of \(\infty\)-categories and \(\cat{C}\) a presentable \(\infty\)-category. Then the right Kan extension functor \(i_*\colon\operatorname{Fun}(K_0,\cat{C})\hookrightarrow\operatorname{Fun}(K,\cat{C})\) preserves weakly contractible colimits. In particular, \(i_*\) preserves colimits if \(\cat{C}\) is pointed. \end{lemma} \begin{proof} Let \(F\colon J^{\triangleright}\to\operatorname{Fun}(K_0,\cat{C})\) be a colimit diagram where \(J\) is weakly contractible.
We need to show that \(i_*(F(\text{--}))(k) \colon J^{\triangleright}\to\cat{C}\) is colimiting for any \(k\in K\). If \(k\in K_0\), the diagram is equivalent to \((F(\text{--}))(k)\), which is colimiting since so is~\(F\). If \(k\notin K_0\), the diagram is equivalent to the constant diagram with value~\(*\), which is colimiting since \(J\) is weakly contractible. \end{proof} \begin{lemma}\label{b8ebfe9ce2} Let \(P\) be a poset and \(\cat{C}\) a compactly generated \(\infty\)-category. Then any compact object of \(\operatorname{Fun}(P,\cat{C})\) is a left Kan extension of its restriction to some finite full subposet. If \(P\) is finite, the full subcategory of compact objects is the essential image of the inclusion \(\operatorname{Fun}(P,\cat{C}^{\omega})\hookrightarrow\operatorname{Fun}(P,\cat{C})\). \end{lemma} \begin{proof} These follow from \autocite[Corollary~2.11 and Proposition~2.8]{ttg-fun}\footnote{ Beware that the assumption \(\bigcup_{j\in J}K_j=K\) is missing in the statement of \autocite[Corollary~2.11]{ttg-fun} }, respectively. \end{proof} \begin{lemma}\label{fa97259e33} Let \(P\) be a locally finite poset and \(\cat{C}\) a compactly generated pointed \(\infty\)-category. Then for any \(P_0\in\operatorname{Down}(P)\), the right Kan extension functor \(i_*\colon\operatorname{Fun}(P_0,\cat{C})\hookrightarrow\operatorname{Fun}(P,\cat{C})\) preserves compact objects. \end{lemma} \begin{proof} Let \(p\in P_0\) be an element and \(C\) a compact object of~\(\cat{C}\). Since \(i_*\) preserves (finite) colimits by \cref{0ec8cf3aa6}, it suffices to show that \(F=i_*(j(p)\otimes C)\) is compact, where \(j\) denotes the Yoneda embedding \(P_0^{\operatorname{op}}\hookrightarrow\operatorname{Fun}(P_0,\Cat{S})\). Now we compute \(F(q)\) for \(q\in P\): If \(q\in P_{\geq p}\cap P_0\), it is \(C\). If \(q\notin P_0\), it is final and thus initial since \(\cat{C}\) is pointed. Otherwise, it is initial. 
This computation shows that \(F\rvert_{P\setminus P_{\geq p}}\) is initial, which means that \(F\) is the left Kan extension of \(F\rvert_{P_{\geq p}}\), as \(P_{\geq p}\) is upward closed. This computation also shows that \(F\rvert_{P_{\geq p}}\) takes compact values, which means by \cref{b8ebfe9ce2} that \(F\rvert_{P_{\geq p}}\) is compact, as \(P_{\geq p}\) is finite. Hence the desired result follows. \end{proof} \begin{lemma}\label{8eca80f4cb} Let \(P\) be a locally finite poset and \(\cat{C}\) a compactly generated pointed \(\infty\)-category. Then every compact object in \(\operatorname{Fun}(P,\cat{C})\) is a right Kan extension of its restriction to \(\bigcup_{s\in S}P_{\leq s}\) for some \(S\in\operatorname{P}^{\textnormal{fin}}(P)\). \end{lemma} \begin{proof} Let \(F\) be a compact object. By \cref{b8ebfe9ce2}, there is a finite full subposet~\(Q\) such that \(F\) can be identified with the left Kan extension of \(F\rvert_Q\) along \(Q\hookrightarrow P\). We take \(S=\bigcup_{q\in Q}P_{\geq q}\), which is finite since \(P\) is locally finite, and consider the inclusion \(i\colon P_S=\bigcup_{s\in S}P_{\leq s}\hookrightarrow P\). Since \(P_S\) contains~\(Q\), the morphism \(i_!i^*F\to F\) is an equivalence. Hence it suffices to show that the composite \(i_!i^*F\to F\to i_*i^*F\) is an equivalence. As its restriction to~\(P_S\) is an equivalence, we consider \(p\notin P_S\). Then \((i_!i^*F)(p)\) is initial since no \(q\in Q\) satisfies \(q\leq p\) and \((i_*i^*F)(p)\) is final since \(P_S\) is downward closed. Since \(\cat{C}\) is pointed, the desired claim follows. \end{proof} \begin{proof}[Proof of \cref{13a6000b87}] According to \cref{fa97259e33}, the diagram actually lands in \(\Cat{Pr}_{\omega}\), the \(\infty\)-category of compactly generated \(\infty\)-categories and functors preserving colimits and compact objects.
Since the inclusion \(\Cat{Pr}_{\omega}\hookrightarrow\Cat{Pr}\) preserves colimits by \autocite[Theorem~5.5.3.18 and Proposition~5.5.7.6]{LurieHTT}, it suffices to show that its restriction \(\operatorname{P}^{\textnormal{fin}}(P)^{\triangleright} \to\Cat{Pr}_{\omega}\) is colimiting. Furthermore, since \(\operatorname{P}^{\textnormal{fin}}(P)\) is filtered, it suffices to show that its composite with \((\text{--})^{\omega} \colon\Cat{Pr}_{\omega}\to\Cat{Cat}_{\infty}\) is colimiting. Then the desired claim follows from \cref{cbb999bf61,8eca80f4cb}. \end{proof} One might think that the desired equivalence could be immediately obtained from \cref{8d61256fe2,13a6000b87} by taking the colimit of the assignment given by \begin{equation*} \phantom, \operatorname{P}^{\textnormal{fin}}(P)\ni S\longmapsto \bigl(\mathbb{D}_{P_S} \colon\operatorname{Fun}(P_S,\Cat{Sp}) \to \operatorname{Fun}(P_S^{\operatorname{op}},\Cat{Sp}) \bigr)\in\operatorname{Fun}(\Delta^1,\Cat{Pr}) , \end{equation*} where \(P_S\) denotes \(\bigcup_{s\in S}P_{\leq s}\). However, what we have proven in \cref{ss-pot} is not sufficient to construct such a functor directly. We avoid this issue by first constructing the desired functor~\(\mathbb{D}\) for~\(P\): \begin{definition}\label{eb22338bf4} For a locally finite poset~\(P\), we define \(\Gamma_{\textnormal{cpt}}\colon\operatorname{Fun}(P,\Cat{Sp})\to\Cat{Sp}\) as the colimit of the functor \(\operatorname{P}^{\textnormal{fin}}(P)\to\operatorname{Fun}(\Delta^1,\Cat{Pr})\) given by \(S\mapsto(\Gamma\colon\operatorname{Fun}(P_S,\Cat{Sp})\to\Cat{Sp})\), where \(P_S\) is as above. Note that the source is identified with \(\operatorname{Fun}(P,\Cat{Sp})\) by \cref{13a6000b87} and the target is identified with \(\Cat{Sp}\) by \cref{cbb999bf61} and the fact that \(\operatorname{Down}^{\textnormal{fin}}(P)\) is weakly contractible. From this, we obtain \(\mathbb{D}\colon\operatorname{Fun}(P,\Cat{Sp})\to\operatorname{Fun}(P^{\operatorname{op}},\Cat{Sp})\) as in \cref{ss-pot}.
\end{definition} Now the following two results imply \cref{lf-arbitrary}: \begin{proposition}\label{924159442f} Let \(P\) be a locally finite poset and \(F\colon P\to\Cat{Sp}\) a functor. If \(P_{\leq p}\) is finite for each \(p\in P\), the functor \(\mathbb{D}(F)\colon P^{\operatorname{op}}\to\Cat{Sp}\) is pointwise given by \(p\mapsto\projlim_{q\in P}\operatorname{Map}(p,q)\otimes F(q)\). \end{proposition} \begin{proof} We fix~\(p\) and vary~\(F\). Then \(F\mapsto\projlim_{q\in P}\operatorname{Map}(p,q)\otimes F(q)\) preserves colimits by the finiteness assumption on~\(P\). Hence we can assume that \(F\) is compact. By \cref{8eca80f4cb}, we can find \(S\in\operatorname{P}^{\textnormal{fin}}(P)\) such that \(F\) is a right Kan extension of its restriction to \(\bigcup_{s\in S}P_{\leq s}\). By replacing \(S\) with \(S\cup\{p\}\), we can assume \(p\in S\). Then the desired result follows from \cref{f6072744fd} since \(\bigcup_{s\in S}P_{\leq s}\) is finite by assumption. \end{proof} \begin{theorem}\label{7873d83c85} Let \(P\) be a locally finite poset. If \(P_{<p}\) is finite and Verdier for each \(p\in P\), the pair \((\operatorname{Fun}(P,\Cat{Sp}),\Gamma_{\textnormal{cpt}})\) is a commutative Frobenius algebra in \(\Cat{Pr}_{\textnormal{st}}\). In particular, \(\mathbb{D}\) is an equivalence. \end{theorem} \begin{proof} By \cref{a3ad5064a0}, we only need to show that \(\mathbb{D}\) is an equivalence. For \(S\in\operatorname{P}^{\textnormal{fin}}(P)\), let \(P_S\) denote \(\bigcup_{s\in S}P_{\leq s} \in\operatorname{Down}(P)\). We regard \(\mathbb{D}\) as an object of \(\operatorname{Fun}(\Delta^1,\Cat{Pr}_{\textnormal{st}})\) and consider the (essential) poset of subobjects \(\operatorname{Sub}(\mathbb{D})\). By \cref{00461cedb4} applied to \cref{50e6a4f108}, each \(S\in\operatorname{P}^{\textnormal{fin}}(P)\) determines \(\mathbb{D}_{P_S}\in\operatorname{Sub}(\mathbb{D})\). 
Hence we obtain the morphism of posets \(\operatorname{P}^{\textnormal{fin}}(P)\to\operatorname{Sub}(\mathbb{D})\). Then we consider the composite \begin{equation*} \phantom, \operatorname{P}^{\textnormal{fin}}(P)^{\triangleright} \longrightarrow\operatorname{Sub}(\mathbb{D}) \longrightarrow\operatorname{Fun}(\Delta^1,\Cat{Pr}_{\textnormal{st}}), \end{equation*} where we set \(\infty\mapsto\mathbb{D}\) in the first arrow. This is colimiting by \cref{13a6000b87,8d61256fe2}. By assumption and \cref{main}, the functor \(\mathbb{D}_{P_S}\) is an equivalence for \(S\in\operatorname{P}^{\textnormal{fin}}(P)\). Therefore, \(\mathbb{D}\) is also an equivalence. \end{proof} \begin{remark}\label{53c4ee6c29} We can define the lower shriek functor for a morphism between posets satisfying the condition of \cref{lf-arbitrary} as the functor corresponding to the cosheaf pushforward under the duality equivalences. See \cref{0f134964dc} for the locally compact Hausdorff case. \end{remark} \subsection{In terms of sheaves}\label{ss-alex} We explain that our duality for a poset can be interpreted as a sheaf-cosheaf duality over its Alexandroff space, which we recall as follows: \begin{definition}\label{e407564b20} The \emph{Alexandroff space} \(\operatorname{Alex}(P)\) of a poset~\(P\) is the topological space whose underlying set is that of~\(P\) and whose open sets are the upward-closed subsets. \end{definition} We recall the following fact, which was first proven in \autocite[Example~A.11]{ttg-fun}. \begin{theorem}[Aoki]\label{7af5d021e3} The assignment \(F\mapsto(p\mapsto F(P_{\geq p}))\) determines the inverse image functor of a geometric morphism \begin{equation} \phantom. \label{e-5a26004a} \operatorname{Fun}(P,\Cat{S})=\operatorname{PShv}(P^{\operatorname{op}})\longrightarrow\operatorname{Shv}(\operatorname{Alex}(P)). 
\end{equation} This identifies \(\operatorname{Shv}(\operatorname{Alex}(P))\) as the bounded reflection of \(\operatorname{PShv}(P^{\operatorname{op}})\) and \(\operatorname{PShv}(P^{\operatorname{op}})\) as the hypercompletion of \(\operatorname{Shv}(\operatorname{Alex}(P))\). \end{theorem} Note that this geometric morphism is not an equivalence in general; see \autocite[Example~A.13]{ttg-fun}. However, it is an equivalence in the situation we are interested in: \begin{proposition}\label{f5b86f5e98} If \(P\) is a locally finite poset, \cref{e-5a26004a} is an equivalence. \end{proposition} \begin{proof} According to \autocite[Example~A.12]{ttg-fun}, this is true for finite posets. By \cref{7af5d021e3}, the morphism \cref{e-5a26004a} is an equivalence if and only if \(\operatorname{Shv}(\operatorname{Alex}(P))\) is hypercomplete. Since \(\operatorname{Shv}(\operatorname{Alex}(P))\) can be written as a colimit of \(\operatorname{Shv}(\operatorname{Alex}(P_{\geq p_1}\cap\dotsb\cap P_{\geq p_n}))\) for \(p_1\), \dots,~\(p_n\in P\) and \(n\geq1\) in the \(\infty\)-category of \(\infty\)-toposes, \(\operatorname{Shv}(\operatorname{Alex}(P))\) is hypercomplete when \(P\) is locally finite. \end{proof} \begin{remark}\label{c29c50ee4d} Note that by using \autocite[Corollary~2.6]{AsaiShah} instead of \autocite[Example~A.12]{ttg-fun} in the proof, we can obtain this result for a wider class of posets. \end{remark} \begin{remark}\label{c204a3567d} It is a consequence of \autocite[Theorem~3.4]{ClausenJansen} that the morphism \cref{e-5a26004a} is an equivalence for a poset satisfying the ascending chain condition. 
However, they use the ``geometric morphism'' \(\operatorname{Shv}(X)\to\operatorname{PShv}(P^{\operatorname{op}})\) constructed in \autocite[page~27]{ClausenJansen} for a stratification \(X\to\operatorname{Alex}(P)\), which is not geometric in general; the trivial stratification on~\(\operatorname{Alex}(P)\) for the poset~\(P\) in \autocite[Example~A.13]{ttg-fun} gives a counterexample. Nevertheless, when \(P\) is locally finite, \cref{f5b86f5e98} shows that the morphism is indeed geometric. \end{remark} Hence \cref{lf-arbitrary}, which we have seen in \cref{ss-lf}, says the following: \begin{theorem}\label{8d0ab00507} Let \(P\) be a locally finite poset such that \(P_{<p}\) is finite and Gorenstein* for each \(p\in P\). Then there is a canonical equivalence \begin{equation*} \phantom. \mathbb{D}\colon \operatorname{Shv}_{\Cat{Sp}}(\operatorname{Alex}(P)) \longrightarrow\operatorname{cShv}_{\Cat{Sp}}(\operatorname{Alex}(P)). \end{equation*} \end{theorem} \section{Verdier duality for proper separated \texorpdfstring{\(\infty\)}{\textinfty}-toposes}\label{s-ch} The sheaf-cosheaf duality for locally compact Hausdorff spaces, which is often called covariant Verdier duality, was studied in \autocite[Section~5.5.5]{LurieHA}. In this section, we first prove its generalization using more abstract methods. Then we prove \cref{str} using our formulation. In future work, we will study a relative variant. \subsection{Proper separated \texorpdfstring{\(\infty\)}{\textinfty}-toposes}\label{ss-ps} Following \autocite[C2.4.16]{Elephant}, we say that a geometric morphism is \emph{Beck--Chevalley} if any pullback satisfies the Beck--Chevalley condition; i.e., the (unstable) proper base change theorem holds. Recall that in \autocite[Section~7.3.1]{LurieHTT} a geometric morphism is called \emph{proper} if its arbitrary base change is Beck--Chevalley. 
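To spell out the Beck--Chevalley condition (we recall the standard formulation only for the reader's convenience): given a pullback square of \(\infty\)-toposes
\begin{equation*}
\begin{tikzcd}
\cat{Y}'\ar[r,"g'"]\ar[d,"f'"']&\cat{Y}\ar[d,"f"]\\
\cat{X}'\ar[r,"g"]&\cat{X}\rlap,
\end{tikzcd}
\end{equation*}
the condition asks that the exchange transformation \(g^*\circ f_*\to f'_*\circ g'^*\), i.e., the mate of the equivalence \(g'^*\circ f^*\simeq f'^*\circ g^*\), be an equivalence.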
\begin{definition}\label{53e4be788e} An \(\infty\)-topos \(\cat{X}\) is called \emph{separated} if its diagonal \(\cat{X}\to\cat{X}\times\cat{X}\) is proper. \end{definition} \begin{remark}\label{ea7702d67f} Consider a geometric morphism \(\cat{Y}\to\cat{X}\) between \(n\)-toposes. If the geometric morphism \(\operatorname{Shv}(\cat{Y})\to\operatorname{Shv}(\cat{X})\) between \(\infty\)-toposes is proper, its arbitrary base change is Beck--Chevalley in the \((n+1)\)-category of \(n\)-toposes, but not vice versa. This is why in \(1\)-topos theory we usually call a geometric morphism \emph{tidy} when its arbitrary base change is Beck--Chevalley in the \(2\)-category of \(1\)-toposes. The same remark applies to the notion of separatedness. \end{remark} However, the following is proven in \autocite[Theorem~7.3.1.16]{LurieHTT}: \begin{example}[Lurie]\label{884b0546f5} The sheaf \(\infty\)-topos of a compact Hausdorff space is proper and separated. \end{example} We recall the following notion, which was introduced in \autocite[Appendix~D]{Gaitsgory15}: \begin{definition}[Gaitsgory]\label{95976481a6} A presentably symmetric monoidal stable \(\infty\)-category \(\cat{C}\) is called \emph{rigid} if the unit \(u\colon\Cat{Sp}\to\cat{C}\) admits a colimit-preserving right adjoint and the multiplication \(m\colon\cat{C}\otimes\cat{C}\to\cat{C}\) admits a \(\cat{C}\otimes\cat{C}\)-linear\footnote{ Here the colimit-preserving property is included in the definition of linearity. } right adjoint. \end{definition} If \(\cat{C}\) is rigid, it is easy to see that \(u^{\textnormal{R}}\circ m\) and \(m^{\textnormal{R}}\circ u\) constitute a duality datum in \(\Cat{Pr}\), where \(\text{--}^{\textnormal{R}}\) indicates the right adjoint. In particular, \((\cat{C},u^{\textnormal{R}}\circ m)\) is a commutative Frobenius algebra. \begin{theorem}\label{2bd8a0da2f} If \(\cat{X}\) is a proper separated \(\infty\)-topos, then \(\operatorname{Shv}_{\Cat{Sp}}(\cat{X})\) is rigid. 
\end{theorem} \begin{corollary}\label{600698211c} The pair \((\operatorname{Shv}_{\Cat{Sp}}(\cat{X}),\Gamma)\) is a commutative Frobenius algebra in \(\Cat{Pr}_{\textnormal{st}}\) for any proper separated \(\infty\)-topos~\(\cat{X}\). \end{corollary} \begin{proof}[Proof of \cref{2bd8a0da2f}] According to \autocite[Example~4.8.1.19]{LurieHA}, the binary product of \(\infty\)-toposes can be computed as their tensor product in \(\Cat{Pr}\). Hence the result follows from \cref{90680bda5b} below. \end{proof} \begin{lemma}\label{90680bda5b} Let \(f\colon\cat{Y}\to\cat{X}\) be a proper morphism of \(\infty\)-toposes. Then \(f^*\colon \operatorname{Shv}_{\Cat{Sp}}(\cat{X})\to\operatorname{Shv}_{\Cat{Sp}}(\cat{Y})\) admits a \(\operatorname{Shv}_{\Cat{Sp}}(\cat{X})\)-linear right adjoint. \end{lemma} \begin{proof} According to \autocite[Remark~7.3.1.5]{LurieHTT}, the direct image functor \(\cat{Y}\to\cat{X}\) preserves filtered colimits. Hence \(f_*\colon\operatorname{Shv}_{\Cat{Sp}}(\cat{Y})\to\operatorname{Shv}_{\Cat{Sp}}(\cat{X})\) preserves colimits. Now we consider the diagram \begin{equation*} \begin{tikzcd}[column sep=large] \cat{Y}\ar[r,"\text{graph}"]\ar[d,"f"']& \cat{Y}\times\cat{X}\ar[r,"\operatorname{pr}_1"]\ar[d,"f\times{\operatorname{id}}"]& \cat{Y}\ar[d,"f"]\\ \cat{X}\ar[r,"\text{diagonal}"]& \cat{X}\times\cat{X}\ar[r,"\operatorname{pr}_1"]& \cat{X} \end{tikzcd} \end{equation*} in the \(\infty\)-category of \(\infty\)-toposes. Since the right and outer squares are cartesian, so is the left one. According to \autocite[Example~4.8.1.19]{LurieHA}, the binary product of \(\infty\)-toposes can be computed as their tensor product in \(\Cat{Pr}\). Therefore, since \(f\times{\operatorname{id}}\) is Beck--Chevalley, for any \(F\in\operatorname{Shv}_{\Cat{Sp}}(\cat{X})\) and \(G\in\operatorname{Shv}_{\Cat{Sp}}(\cat{Y})\) the canonical morphism \(f_*G\otimes F\to f_*(G\otimes f^*F)\) is an equivalence.
\end{proof} \subsection{The locally compact case}\label{ss-local} The following result is derived from \cref{2bd8a0da2f} by using \cref{ee41f58df5} applied to \cref{f7ea8c5d40}: \begin{theorem}\label{bb6fe1df93} Let \(j\colon\cat{U}\hookrightarrow\cat{X}\) be an open subtopos of a proper separated \(\infty\)-topos. Then the pair \((\operatorname{Shv}_{\Cat{Sp}}(\cat{U}),\Gamma_{\cat{X}}\circ j_!)\) is a commutative Frobenius algebra in \(\Cat{Pr}_{\textnormal{st}}\). \end{theorem} Here the composite \(\Gamma_{\cat{X}}\circ j_!\) depends on~\(j\), not only on~\(\cat{U}\), but there is a canonical choice for locally compact spaces: \begin{definition}\label{02c27cb552} Let \(X\) be a locally compact Hausdorff space. We define the \emph{global sections with compact support} functor \(\Gamma_{\textnormal{cpt}}\) as the composite \(p_*\circ j_!\) where \(j\colon X\hookrightarrow X_{\infty}\) is the inclusion into its one-point compactification and \(p\colon X_{\infty}\to{*}\) is the projection. Then \cref{bb6fe1df93} says that \((\operatorname{Shv}_{\Cat{Sp}}(X),\Gamma_{\textnormal{cpt}})\) is a commutative Frobenius algebra. We let \(\mathbb{D}\colon\operatorname{Shv}(X)\to\operatorname{cShv}(X)\) denote the associated equivalence (cf. \cref{ss-pot}). \end{definition} \begin{remark}\label{12b6445923} One can prove Verdier duality for locally compact Hausdorff spaces by a method similar to the one we have used in \cref{ss-lf} for locally finite posets: Namely, the sheaf and cosheaf \(\infty\)-categories of a locally compact Hausdorff space can be written as colimits in \(\Cat{Pr}\) of those of its compact subspaces. We leave the details to the interested reader.
\end{remark} We give an objectwise description of \(\mathbb{D}\) to justify calling our functor ``Verdier duality'': \begin{proposition}\label{bdd139fe5d} For a locally compact Hausdorff space~\(X\) and a spectrum-valued sheaf \(F\in\operatorname{Shv}_{\Cat{Sp}}(X)\), the cosheaf \(\mathbb{D}(F)\) is pointwise given by \(U\mapsto \injlim_{K\subset U}\operatorname{fib}(F(X)\to F(X\setminus K))\), where \(K\) runs over compact subsets. \end{proposition} Note that Lurie's equivalence also has this pointwise formula; see \autocite[Proposition~5.5.5.10]{LurieHA}. \begin{proof} First suppose that \(X\) is compact. Let \(j\) denote the inclusion \(U\hookrightarrow X\). By definition, \(\mathbb{D}(F)(U)\) is the global section of \((j_!\mathbf{S}_U)\otimes F\). Let \(i\) denote the inclusion \(X\setminus U\hookrightarrow X\). Then by recollement, \(\mathbb{D}(F)(U)\) is equivalent to the global section of \(\operatorname{fib}(F\to i_*i^*F)\). Hence it is written as \(\injlim_{V\supset X\setminus U} \operatorname{fib}(F(X)\to F(V))\), where \(V\) runs over open subsets. As \(X\) is compact, this coincides with the desired description. We proceed to the general case. Let \(j\colon X\hookrightarrow X_{\infty}\) denote the inclusion into the one-point compactification and \(i\) the inclusion of the point at infinity. \Cref{00461cedb4} applied to \cref{f7ea8c5d40} says \(j_+\circ\mathbb{D}_X\simeq\mathbb{D}_{X_{\infty}}\circ j_!\). Hence \(\mathbb{D}_X(F)(U)\) can be computed as \begin{equation*} \phantom{,} (j_+\circ\mathbb{D}_X)(F)(U) \simeq(\mathbb{D}_{X_{\infty}}\circ j_!)(F)(U) \simeq\injlim_{K\subset U} \operatorname{fib}\bigl((j_!F)(X_{\infty})\to(j_!F)(X_{\infty}\setminus K)\bigr), \end{equation*} where we use the compact case. By recollement, the desired result follows from the vanishing of \(\operatorname{fib}((i_*i^*j_*F)(X_{\infty}) \to(i_*i^*j_*F)(X_{\infty}\setminus K))\) for each \(K\), which follows from \(K\subset X\).
\end{proof} \begin{remark}\label{0f134964dc} Let \(f\colon Y\to X\) be a continuous map between locally compact Hausdorff spaces. As in \autocite[Remark~9.4.6]{GaitsgoryLurie}, we can define the lower shriek functor~\(f_!\) as the composite \((\mathbb{D}_X)^{-1}\circ f_+\circ\mathbb{D}_Y\). One could check its standard properties by applying \cref{00461cedb4} to \cref{68011c91d6,f7ea8c5d40}. To describe further functorial properties of this construction, one could use the technology presented in \autocite[Chapter~7]{GaitsgoryRozenblyum171}. However, beware that it is built on unproven results in \((\infty,2)\)-category theory. \end{remark} \subsection{Application: Verdier duality and stratification}\label{ss-str} We prove the following generalization of \cref{str}: \begin{theorem}\label{a292d59cdb} Let \(P\) be a finite poset and \(f\colon\cat{X}\to\operatorname{Shv}(\operatorname{Alex}(P))\) a geometric morphism. Suppose that \(P\) is Verdier, that \(\cat{X}\) is proper and separated, and that the spectrum-valued inverse image \(f^*\colon\operatorname{Shv}_{\Cat{Sp}}(\operatorname{Alex}(P))\to\operatorname{Shv}_{\Cat{Sp}}(\cat{X})\) is fully faithful. Then we have \(\mathbb{D}_P\simeq f_+\circ\mathbb{D}_{\cat{X}}\circ f^*\). \end{theorem} \begin{remark}\label{7014e1bd6e} The assumption is satisfied when the \emph{space-valued} inverse image \(\operatorname{Shv}(\operatorname{Alex}(P))\to\cat{X}\) is fully faithful: This can be seen by considering objects of \(\operatorname{Shv}_{\Cat{Sp}}(\text{--})\) as left exact functors \((\Cat{Sp}^{\omega})^{\operatorname{op}}\to\text{--}\). \end{remark} \begin{proof}[Proof of \cref{a292d59cdb}] We have \(\Gamma_{\operatorname{Alex}(P)} \simeq\Gamma_{\operatorname{Alex}(P)}\circ f_*\circ f^* \simeq\Gamma_{\cat{X}}\circ f^*\). Hence the desired result follows from \cref{e894284df5}. \end{proof} \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} Under the paradigm of the \gls{IoT}, the number of connected devices is increasing dramatically. IoT devices are mostly battery-limited and transmit short packets in a sporadic and uncoordinated manner~\cite{Chen2020_massiveAccess,Wu2020_massiveAccess}. This calls for new theoretical frameworks that help to understand the fundamental limits of massive random access and provide guidelines for system design. Polyanskiy~\cite{PolyanskiyISIT2017massive_random_access} proposed a novel formulation for the massive uncoordinated access problem with three key assumptions: i) all users employ a common codebook and the decoder only aims to return a list of messages without recovering users' identities; ii) the error event is defined per user and the error probability is averaged over the users; iii) each user sends a fixed number of information bits within a finite frame length. Under this formulation, traditional as well as novel random access protocols \cite{Berioli2016NOW} yield achievability bounds. In \cite{PolyanskiyISIT2017massive_random_access}, an achievability bound for the Gaussian \gls{MAC} was derived and it was shown that modern random access schemes exhibit a large gap to this bound. This gap has since been reduced in, e.g., \cite{Ordentlich2017low_complexity_random_access,Vem2019,Fengler2019sparcs,Amalladinne2020unsourced,Amalladinne2020,Pradhan2020}. Polyanskiy's framework has been extended to the quasi-static fading channel~\cite{Kowshik2020}, the multiple-antenna channel~\cite{Fengler2019nonBayesian}, \revisee{and a setup with common alarm messages~\cite{Stern2019}.} In Polyanskiy's achievability bound, the number of active users is fixed and known to the receiver, an assumption that has practical shortcomings. Since \gls{IoT} devices access the channel at random times and in a grant-free manner, the number of active users varies over time, and hence, it is typically unknown to the receiver.
Therefore, the bound in \cite{PolyanskiyISIT2017massive_random_access} may be an overoptimistic benchmark for random-access schemes that are designed to work with an unknown number of active users. Moreover, when the number of active users is unknown, the decoder needs to determine the list size. Choosing a list size smaller than the number of active users will result in \glspl{MD}\textemdash i.e., transmitted messages that are not included in the decoded list\textemdash whereas choosing it larger than the number of active users will result in \glspl{FA}\textemdash i.e., decoded messages that were not transmitted. Furthermore, additional \glspl{MD} and \glspl{FA} may occur in the decoding process. There is a trade-off between \gls{MD} and \gls{FA} probabilities. A decoder that always outputs the whole codebook will never misdetect, but has \gls{FA} probability close to one; similarly, a decoder that always outputs an empty set will never raise a \gls{FA} but always misdetects. Characterizing the \gls{MD}--\gls{FA} trade-off is a fundamental engineering challenge that was not addressed in \cite{PolyanskiyISIT2017massive_random_access}. An achievability bound for the Gaussian \gls{MAC} with an unknown number of active users was presented in \cite{Effros2018ISIT}. However, the authors consider the joint-user error event instead of the per-user error event, and thus, \gls{MD} and \gls{FA} are not explicitly considered. In short, a random-coding bound accounting for both \gls{MD} and \gls{FA}, which can serve as a benchmark for common-codebook massive uncoordinated random access with random user activity, is still missing. Most of the practical algorithms that have been proposed for common-codebook massive random access require knowledge of the number of active users. Advanced ALOHA schemes, such as irregular repetition slotted ALOHA~(IRSA)~\cite{Liva2011IRSA}, can also operate when the number of active users is unknown.
However, research on modern random access protocols~\cite{Berioli2016NOW}, such as IRSA, has traditionally focused on characterizing and minimizing the packet loss rate, which accounts only for \gls{MD}. The scheme proposed in \cite{Vem2019} also addressed \gls{MD} only. Minimizing the \gls{MD} probability alone can entail a high \gls{FA} probability. In~\cite{Decurninge2020}, a tensor-based communication scheme was proposed, \revise{and both \gls{MD} and \gls{FA} probabilities are reported in the performance evaluation. Another scheme \revisee{for which} both \gls{MD} and \gls{FA} probabilities \revisee{are reported} was recently proposed in~\cite{fengler2020pilot} for the quasi-static fading \gls{MAC} \revisee{and for the case in which} the receiver has a large number of antennas.} In this work, we extend Polyanskiy's bound to the case where the number of active users is {\em random} and {\em unknown}. To this end, we first extend the definition of a random-access code provided in~\cite{PolyanskiyISIT2017massive_random_access} to account for both \gls{MD} and \gls{FA} probabilities. Then, we derive a random-coding bound for the Gaussian \gls{MAC}. Unlike the scheme in~\cite{PolyanskiyISIT2017massive_random_access}, our decoder does not assume knowledge of the number of active users, and thus cannot use it to set the decoded list size. Instead, we let our decoder decide the best list size within a predetermined interval around an estimated value of the number of active users. \revisee{Our decoding metric is similar to that used in \cite{Stern2019}. However, different from \cite{Stern2019}, we limit the decoded list size to be in an interval to avoid overfitting.} \revisee{Compared with the bound in \cite{PolyanskiyISIT2017massive_random_access}}, our bound suggests that the lack of knowledge of the number of active users entails a small penalty in power efficiency. 
Furthermore, \revise{we apply our bound to \revisee{characterize} \gls{MD} and \gls{FA} in slotted ALOHA with multi-packet reception (SA-MPR). Using our bound, we \revisee{benchmark the energy efficiency of} SA-MPR and \revisee{of the} massive random access schemes \revisee{proposed in \cite{Fengler2019sparcs,Amalladinne2020unsourced}}.} For instance, for a system with $\revise{300}$ active users on average, to achieve both \gls{MD} and \gls{FA} probabilities below $10^{-1}$, the required energy per bit \revisee{predicted by} our achievability bound is \revise{$0.65$~dB higher than that \revisee{predicted by} the bound for a known number of active users \cite{PolyanskiyISIT2017massive_random_access}. In the same setting, the required energy per bit predicted by our bound is $9$~dB, $4.1$~dB, and $3.6$~dB lower than that of the SA-MPR bound, the scheme based on sparse regression code (SPARC)~\cite{Fengler2019sparcs}, and an enhancement of SPARC~\cite{Amalladinne2020unsourced}, respectively.} \subsubsection*{Notation} Random quantities are denoted with non-italic letters with sans-serif font, e.g., a scalar $\vect{r}{x}$ and a vector $\rvVec{v}$. Deterministic quantities are denoted with italic letters, e.g., a scalar $x$ and a vector $\bm{v}$. The Euclidean norm is denoted by $\|\cdot\|$. We use $\mathfrak{P}({\mathcal A})$ to denote the set of all subsets of ${\mathcal A}$; $[n]$ denotes the set of integers $\{1,\dots,n\}$ if $n \ge 1$ and $[n] \triangleq \emptyset$ if $n=0$; $[m:n] \triangleq \{m,m+1,\dots,n\}$ if $m \le n$ and $[m:n] \triangleq \emptyset$ if $m>n$; $x^+ \triangleq \max\{x,0\}$; $\ind{\cdot}$ is the indicator function. The sets of natural and complex numbers are denoted by $\mathbb{N}$ and $\mathbb{C}$, respectively.
We denote the Gamma function by $\Gamma(x) \triangleq \int_{0}^{\infty}z^{x-1}e^{-z}dz$, and the lower and upper incomplete Gamma functions by $\gamma(x,y) \triangleq \int_{0}^{y}z^{x-1}e^{-z}dz$ and $\Gamma(x,y) \triangleq \int_{y}^{\infty}z^{x-1}e^{-z}dz$, respectively. \section{Random-Access Channel} \label{sec:channel} We consider a \gls{MAC} in which a random set of $\vect{r}{K}_{\rm a}$ users transmit their messages to a receiver over $n$ uses of a stationary memoryless channel. Let $\vect{r}{x}_k \in {\mathcal X}$ be the transmitted signal of user $k$ in a channel use. Given $\vect{r}{K}_{\rm a} = K_{\rm a}$, the channel law is given by $P_{\vect{r}{y} \,\vert\, \vect{r}{x}_1,\dots,\vect{r}{x}_{{K}_{\rm a}}}$. Thus this random-access channel is characterized by the \gls{PMF} $P_{\vect{r}{K}_{\rm a}}$ of $\vect{r}{K}_{\rm a}$ and by the set of conditional probabilities $\{P_{\vect{r}{y} \,\vert\, \vect{r}{x}_1,\dots,\vect{r}{x}_{{K}_{\rm a}}} \colon {\mathcal X}^{K_{\rm a}} \to {\mathcal Y} \}_{K_{\rm a} \in \mathbb{N}}$. As in~\cite{PolyanskiyISIT2017massive_random_access}, we assume that the channel law is permutation invariant. We further assume that the receiver does not know the realizations of $\vect{r}{K}_{\rm a}$. As in~\cite{PolyanskiyISIT2017massive_random_access}, our model differs from the classical \gls{MAC} in that the total number of users is not limited, all users employ the same codebook, and the receiver decodes up to a permutation of messages. However, as opposed to~\cite{PolyanskiyISIT2017massive_random_access}, where the number of active users is assumed to be fixed and known, we assume that $\vect{r}{K}_{\rm a}$ is random and unknown. We therefore need to account for both \gls{MD} and \gls{FA}. We next rigorously define the \gls{MD} and \gls{FA} probabilities, as well as the notion of a random-access code. 
\begin{definition}[Random-access code] \label{def:code} Consider a random-access channel characterized by $\big\{P_{\vect{r}{K}_{\rm a}}, \{P_{\vect{r}{y} \,\vert\, \vect{r}{x}_1,\dots,\vect{r}{x}_{{K}_{\rm a}}}\}_{K_{\rm a} \in \mathbb{N}}\big\}$. \revisee{An $(M,n,\epsilon_{\rm MD},\epsilon_{\rm FA})$ random-access code for this channel, where $M$ and $n$ are positive integers and $\epsilon_{\rm MD},\epsilon_{\rm FA} \in (0,1)$, consists of:} \begin{itemize}[leftmargin=*] \item \revisee{A random variable $\vect{r}{U}$ defined on a set ${\mathcal U}$ that is revealed to both the transmitter and the receiver before the start of the transmission. This random variable acts as common randomness and allows for the use of randomized coding strategies.} \item \revisee{An encoder mapping $f\colon {\mathcal U} \times [M] \to {\mathcal X}^n$ defining the transmitted codeword $\rvVec{x}_i = f(\vect{r}{U},\vect{r}{w}_i)$ of user $i$ for a given message $\vect{r}{w}_i$, which is assumed to be uniformly distributed over $[M]$.} \item \revisee{A decoding function $g\colon {\mathcal U} \times {\mathcal Y}^n \to \mathfrak{P}([M])$ providing an estimate $\widehat{{\mathcal W}} = \{\hat{\vect{r}{w}}_1,\dots,\hat{\vect{r}{w}}_{|\widehat{{\mathcal W}}|}\} = g(\vect{r}{U},\rvVec{y})$ of the list of transmitted messages, where $\rvVec{y} = [\vect{r}{y}(1) \dots \vect{r}{y}(n)]^{\scriptscriptstyle\mathsf{T}}$ denotes the channel output sequence.} \end{itemize} Let $\widetilde{{\mathcal W}} = \{\widetilde{\vect{r}{w}}_1,\dots,\widetilde{\vect{r}{w}}_{|\widetilde{{\mathcal W}}|}\}$ denote the set of distinct elements of $\{{\vect{r}{w}}_1,\dots,{\vect{r}{w}}_{\vect{r}{K}_{\rm a}}\}$.
\revisee{We assume that the decoding function satisfies the following constraints on the \gls{MD} and \gls{FA} probabilities: \begin{align} \!\!\!P_{\rm MD} &\triangleq \E\Bigg[{\ind{|\widetilde{{\mathcal W}}| \ne 0} \cdot \frac{1}{|\widetilde{{\mathcal W}}|} \sum_{i=1}^{|\widetilde{{\mathcal W}}|} \P[\widetilde{\vect{r}{w}}_i \!\notin\! \widehat{{\mathcal W}}]}\!\Bigg] \!\le \epsilon_{\rm MD}, \label{eq:def_pMD}\\ \!\!\!P_{\rm FA} &\triangleq \E\Bigg[{\ind{|\widehat{{\mathcal W}}| \ne 0} \cdot \frac{1}{|\widehat{{\mathcal W}}|} \sum_{i=1}^{|\widehat{{\mathcal W}}|} \P[\hat{\vect{r}{w}}_i \notin {{\mathcal W}}]}\Bigg] \!\le \epsilon_{\rm FA}. \label{eq:def_pFA} \end{align} The expectations in \eqref{eq:def_pMD} and \eqref{eq:def_pFA} are with respect to the sizes of $\widetilde{{\mathcal W}}$ and $\widehat{{\mathcal W}}$, respectively.} \end{definition} In the random-access code defined in~\cite{PolyanskiyISIT2017massive_random_access}, the decoder outputs a list of messages of size equal to the number of active users, which is assumed to be known. In such a setup, a \gls{MD} implies a \gls{FA}, and vice versa. Hence, the two types of errors become indistinguishable. In our setup, the decoded list size can be different from the number of transmitted messages. Hence, we introduce explicitly the \gls{MD} and \gls{FA} probabilities. This allows us to characterize the \gls{MD}--\gls{FA} trade-off. Hereafter, we consider the Gaussian \gls{MAC} with $\{P_{\vect{r}{y}|\vect{r}{x}_1,\dots,\vect{r}{x}_{{K}_{\rm a}}}\}$ imposed by $ \rvVec{y} = \sum_{i=1}^{\vect{r}{K}_{\rm a}}\rvVec{x}_i + \rvVec{z}, $ where $\{\rvVec{x}_i\}_{i=1}^{\vect{r}{K}_{\rm a}}$ are the transmitted signals over $n$ channel uses and $\rvVec{z} \sim {\mathcal C}{\mathcal N}(\mathbf{0},\mat{\mathrm{I}}_n)$ is the Gaussian noise independent of $\{\rvVec{x}_i\}_{i=1}^{\vect{r}{K}_{\rm a}}$. We consider the power constraint $\|\rvVec{x}_i\|^2 \le nP, \forall i \in [\vect{r}{K}_{\rm a}]$.
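To make the per-user error metrics of Definition~\ref{def:code} concrete, the following minimal Python sketch computes the per-snapshot fractions inside the expectations in \eqref{eq:def_pMD} and \eqref{eq:def_pFA}; the function name and the toy message lists are ours, chosen purely for illustration.

```python
def md_fa_fractions(transmitted, decoded):
    """Per-snapshot fractions inside the expectations of the MD/FA
    definitions: the fraction of distinct transmitted messages missing
    from the decoded list (MD), and the fraction of decoded messages
    that were never transmitted (FA).  Empty lists contribute zero,
    matching the indicator functions in the definitions."""
    W = set(transmitted)      # distinct transmitted messages (W-tilde)
    W_hat = set(decoded)      # decoded list
    p_md = len(W - W_hat) / len(W) if W else 0.0
    p_fa = len(W_hat - W) / len(W_hat) if W_hat else 0.0
    return p_md, p_fa

# Example: messages {1,2,3} sent, decoder returns {2,3,4}:
# message 1 is misdetected, message 4 is a false alarm.
print(md_fa_fractions([1, 2, 3], [2, 3, 4]))  # -> (1/3, 1/3)
```

Averaging these fractions over the random user activity, the codebook, and the channel noise yields $P_{\rm MD}$ and $P_{\rm FA}$.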
\section{Random-Coding Bound} \label{sec:RCU} The random-coding bound in~\cite[Th.~1]{PolyanskiyISIT2017massive_random_access} is derived by constructing a random-coding scheme as follows. Let ${\mathcal W} = \{\vect{r}{w}_1, \dots, \vect{r}{w}_{K_{\rm a}}\} \subset [M]$ be the set of transmitted messages. Each active user picks randomly a codeword $\vect{c}_{\vect{r}{w}_i}$ from a common codebook containing $M$ codewords $\vect{c}_1,\dots,\vect{c}_M$ drawn independently from the distribution ${\mathcal C}{\mathcal N}(\mathbf{0},P'\mat{\mathrm{I}}_n)$ for a fixed $P' < P$. To convey message $\vect{r}{w}_i$, user $i$ transmits $\vect{c}_{\vect{r}{w}_i}$ provided that $\|\vect{c}_{\vect{r}{w}_i}\|^2 \le nP$. Otherwise, it transmits the all-zero codeword. The receiver employs a minimum distance decoder where the decoded list is $\widehat{{\mathcal W}} = \arg\min_{\widehat{{\mathcal W}} \subset [M], |\widehat{{\mathcal W}}| = K_{\rm a}} \|c(\widehat{{\mathcal W}}) - \rvVec{y}\|^2$, with $c({\mathcal W}) \triangleq \sum_{i\in {\mathcal W}} \vect{c}_{i}$. The error analysis involves manipulations of unions of the pairwise error events via a change of measure and the application of the Chernoff bound combined with Gallager's $\rho$-trick~\cite[p.~136]{Gallager1968information}. An alternative bound is also obtained by writing the pairwise error event as an inequality involving information densities, and by applying a property of the information density given in~\cite[Cor.~17.1]{Polyanskiy2019lecture}. In the following, we derive a similar random-coding bound for the case in which $\vect{r}{K}_{\rm a}$ is random and unknown to the receiver. Specifically, we consider a random-coding scheme with the same encoder as in \cite{PolyanskiyISIT2017massive_random_access}. 
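The Gaussian random encoder just described admits a compact numerical sketch (Python/NumPy; the function names, dimensions, and seed below are ours, not part of the scheme): every user indexes a common i.i.d.\ ${\mathcal C}{\mathcal N}(\mathbf{0},P'\mat{\mathrm{I}}_n)$ codebook and falls back to the all-zero codeword whenever the power constraint $\|\vect{c}_w\|^2 \le nP$ is violated.

```python
import numpy as np

def draw_codebook(M, n, P_prime, rng):
    """Common codebook: M i.i.d. CN(0, P' I_n) codewords (rows)."""
    return np.sqrt(P_prime / 2) * (
        rng.standard_normal((M, n)) + 1j * rng.standard_normal((M, n)))

def encode(codebook, w, P):
    """Transmit c_w if ||c_w||^2 <= nP; otherwise send all zeros."""
    n = codebook.shape[1]
    c = codebook[w]
    return c if np.linalg.norm(c) ** 2 <= n * P else np.zeros_like(c)

rng = np.random.default_rng(0)
C = draw_codebook(M=2**8, n=100, P_prime=0.9, rng=rng)  # back-off P' < P
x = encode(C, w=17, P=1.0)                              # ||x||^2 <= nP always
```

The probability of the all-zero fallback is exactly $\Gamma(n,nP/P')/\Gamma(n)$, which is the term appearing in $p_0$ of the bound below; the back-off $P' < P$ keeps it small.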
However, since the receiver does not know $\vect{r}{K}_{\rm a}$, we let the decoder estimate $\vect{r}{K}_{\rm a}$ from $\rvVec{y}$, then decide the best list size within an interval around the initial estimate of $\vect{r}{K}_{\rm a}$. Specifically, given the channel output $\vect{y}$, the receiver estimates $\vect{r}{K}_{\rm a}$ as \begin{align} K_{\rm a}' = \arg\min_{K \in [K_l:K_u]} m(\vect{y},K), \end{align} where $m(\vect{y},K)$ is a suitably chosen metric, and $K_l$ and $K_u$ are suitably chosen lower and upper limits on $K_{\rm a}'$, respectively. Then, given $K_{\rm a}'$, the receiver decodes the list of messages as \begin{equation} \label{eq:decoder_Ka'} \widehat{{\mathcal W}} = \arg\min_{\widehat{{\mathcal W}} \subset [M], \underline{K_{\rm a}'} \le |\widehat{{\mathcal W}}| \le \overline{K_{\rm a}'}} \|c(\widehat{{\mathcal W}}) - \rvVec{y}\|^2, \end{equation} where \revisee{$\underline{K_{\rm a}'} \triangleq \max\{K_l,K_{\rm a}'-r\}$ and $\overline{K_{\rm a}'}\triangleq \min\{K_u,K_{\rm a}'+r\}$ with a chosen nonnegative integer $r$}. \revisee{We refer to $r$ as the {\em decoding radius}.} An error analysis of this random-coding scheme conducted along the same lines as in \cite{PolyanskiyISIT2017massive_random_access} leads to the following result. \begin{theorem}[Random-coding bound, $\vect{r}{K}_{\rm a}$ random and unknown] \label{thm:RCU_unknownKa} Fix $P' < P$, \revise{$r$, $K_{l}$, and $K_{u}$ ($K_{l} \le K_{u}$)}.
For the $\vect{r}{K}_{\rm a}$-user Gaussian \gls{MAC} with $\vect{r}{K}_{\rm a} \sim P_{\vect{r}{K}_{\rm a}}$, there exists an $(M,n,\epsilon_{\rm MD},\epsilon_{\rm FA})$ random-access code satisfying the power constraint $P$ and \begin{align} \epsilon_{\rm MD} &= \sum_{K_{\rm a} =\max\{K_{l},1\}}^{K_{u}} \bigg(P_{\vect{r}{K}_{\rm a}}(K_{\rm a}) \sum_{K_{\rm a}' = K_{l}}^{K_{u}} \sum_{t\in {\mathcal T}}\frac{t+(K_{\rm a}-\overline{K_{\rm a}'})^+}{K_{\rm a}} \notag \\ &\qquad \cdot\min\{p_t,q_t, \xi(K_{\rm a},K_{\rm a}')\} \bigg) + p_0, \label{eq:eps_MD}\\ \epsilon_{\rm FA} &= \sum_{K_{\rm a} =K_{l}}^{K_{u}} \bigg(P_{\vect{r}{K}_{\rm a}}(K_{\rm a}) \sum_{K_{\rm a}' = K_{l}}^{K_{u}} \sum_{t\in {\mathcal T}} \sum_{t' \in {\mathcal T}_t} \notag \\ &\qquad \frac{t'+(\underline{K_{\rm a}'}-K_{\rm a})^+}{K_{\rm a} - t - {(K_{\rm a} - \overline{K_{\rm a}'})}^+ + t' + {(\underline{K_{\rm a}'}-K_{\rm a})}^+} \notag \\ &\qquad \cdot \min\{p_{t,t'}, q_{t,t'}, \xi(K_{\rm a},K_{\rm a}')\} \bigg) + p_0, \label{eq:eps_FA} \end{align} where \begin{align} p_0 &= 2 - \sum_{K_{\rm a} = K_{l}}^{K_{u}}P_{\vect{r}{K}_{\rm a}}(K_{\rm a}) - \E_{\vect{r}{K}_{\rm a}}\left[\frac{M!}{M^{\vect{r}{K}_{\rm a}}(M-\vect{r}{K}_{\rm a})!} \right] \notag \\ &\quad + \E[\vect{r}{K}_{\rm a}] \frac{\Gamma(n,nP/P')}{\Gamma(n)}, \label{eq:p0}\\ p_t &= \sum_{t'\in \overline{{\mathcal T}}_t} p_{t,t'}, \label{eq:pt}\\ p_{t,t'} &= e^{-n E(t,t')}, \label{eq:ptt} \\ E(t,t') &= \max_{\rho,\rho_1 \in [0,1]} -\rho\rho_1 t' R_1 - \rho_1 R_2 + E_0(\rho,\rho_1), \label{eq:Ett} \\ \!\!\!\! 
E_0(\rho,\rho_1) &= \max_{\lambda} \rho_1 a + \ln(1-\rho_1 P_2 b), \label{eq:E0}\\ a &= \rho \ln(1+ P' t' \lambda) + \ln(1+ P't \mu), \label{eq:a}\\ b &= \rho\lambda -\frac{\mu}{1+ P't\mu}, \label{eq:b} \\ \mu &= \frac{\rho \lambda}{1+P't'\lambda}, \\ P_2 &= 1+ \big((K_{\rm a} - \overline{K_{\rm a}'})^+ + (\underline{K_{\rm a}'} - K_{\rm a})^+\big)P', \label{eq:P2}\\ R_1 &= \frac{1}{nt'} \ln\binom{M - \max\{K_{\rm a},\underline{K_{\rm a}'}\}}{t'}, \label{eq:R1} \\ R_2 &= \frac{1}{n} \ln \binom{\min\{K_{\rm a}, \overline{K_{\rm a}'}\}}{t}, \\ q_t &= \inf_{\gamma} \bigg(\!\P[\vect{r}{I}_{t} \!\le\! \gamma] + \sum_{t'\in \overline{{\mathcal T}}_t}\! \exp(n(t'R_1 \!+\! R_2) \!-\! \gamma)\!\bigg), \label{eq:qt}\\ q_{t,t'} &= \inf_{\gamma} \Big(\P[\vect{r}{I}_{t} \!\le\! \gamma] + \exp(n(t'R_1 \!+\! R_2) \!-\! \gamma)\Big), \label{eq:qtt} \\ {\mathcal T} &= [0:\min\{\overline{K_{\rm a}'},K_{\rm a},M\!-\!\underline{K_{\rm a}'} \!-\! (K_{\rm a} \!-\! \overline{K_{\rm a}'})^+\}], \label{eq:T} \\ {\mathcal T}_t &= \big[\big({(K_{\rm a} - \overline{K_{\rm a}'})}^+ - {(\underline{K_{\rm a}'} - K_{\rm a})}^+ + \max\{\underline{K_{\rm a}'},1\} \big. \big. \notag \\ &\quad \quad \big. \big. - K_{\rm a} + t\big)^+ : u_t\big], \label{eq:Tt} \\ \overline{{\mathcal T}}_t &= \big[\big({(K_{\rm a} \!-\! \overline{K_{\rm a}'})}^+ - {(K_{\rm a}\!-\!\underline{K_{\rm a}'})}^+ + t\big)^+ : u_t \big], \label{eq:Tbart} \\ u_t &= \min\big\{{(\overline{K_{\rm a}'} - K_{\rm a})}^+ - {(\underline{K_{\rm a}'}-K_{\rm a})}^+ + t, \big.\notag \\ &\quad \quad \big. \overline{K_{\rm a}'} - {(\underline{K_{\rm a}'}\!-\!K_{\rm a})}^+, M-\max\{\underline{K_{\rm a}'},K_{\rm a}\}\big\}, \\ \!\!\!\!\!\!\xi(K_{\rm a},K_{\rm a}') &= \min_{K\colon K \ne K_{\rm a}'} \P[m\left(\rvVec{y}_0,K_{\rm a}' \right) < m\left(\rvVec{y}_0,K\right)]. \label{eq:xi} \end{align} In~\eqref{eq:xi}, $\rvVec{y}_0 \sim {\mathcal C}{\mathcal N}(\mathbf{0},(1+K_{\rm a}P')\mat{\mathrm{I}}_n)$.
The random variable $\vect{r}{I}_t$ in~\eqref{eq:qt} and \eqref{eq:qtt} is defined as \begin{equation} \label{eq:def_It} \vect{r}{I}_t \triangleq \!\!\min_{{\mathcal W}_{02} \subset [(K_{\rm a} - \overline{K_{\rm a}'})^+ + 1:K_{\rm a}] \atop |{\mathcal W}_{02}| = t}\!\! \imath_t(c({\mathcal W}_{01}') + c({\mathcal W}_{02});\rvVec{y} \,\vert\, c([K_{\rm a}] \setminus {\mathcal W}_0)), \end{equation} where ${\mathcal W}_{01}' = [K_{\rm a} + 1: \underline{K_{\rm a}'}]$, ${\mathcal W}_0 = [(K_{\rm a} - \overline{K_{\rm a}'})^+] \cup {\mathcal W}_{02}$, and \begin{align} &\imath_t(c({\mathcal W}_{0});\rvVec{y} \,\vert\, c({\mathcal W} \setminus {\mathcal W}_0)) \notag \\ &= n \ln(1+(t+(K_{\rm a}\!\!-\overline{K_{\rm a}'})^+)P') + \frac{\|\rvVec{y} - c({\mathcal W} \setminus {\mathcal W}_0)\|^2}{1+(t+(K_{\rm a}\!-\!\overline{K_{\rm a}'})^+)P'} \notag \\ &\quad - \|\rvVec{y} - c({\mathcal W}_0) - c({\mathcal W} \setminus {\mathcal W}_0)\|^2. \label{eq:infor_den} \end{align} \end{theorem} Some remarks are in order. \begin{enumerate}[leftmargin=*,label={\roman*)}] \item The parameters $K_{l}$ and $K_{u}$ can be taken to be the essential infimum and the essential supremum of $\vect{r}{K}_{\rm a}$, respectively. In numerical evaluation, it is often convenient to set $K_{l}$ to be the largest value and $K_{u}$ the smallest value for which $\sum_{K_{\rm a} = K_{l}}^{K_{u}}P_{\vect{r}{K}_{\rm a}}(K_{\rm a})$ exceeds a predetermined threshold. \item The term $1 - \E_{\vect{r}{K}_{\rm a}}\left[\frac{M!}{M^{\vect{r}{K}_{\rm a}}(M-\vect{r}{K}_{\rm a})!} \right]$ in $p_0$ can be upper-bounded by $\E_{\vect{r}{K}_{\rm a}}\big[\binom{\vect{r}{K}_{\rm a}}{2}/M\big]$ as in \cite{PolyanskiyISIT2017massive_random_access}. \item The term $R_1$ in~\eqref{eq:R1} can be upper-bounded by $ \frac{1}{n} \ln (M - \max\{K_{\rm a},\underline{K_{\rm a}'}\}) - \frac{1}{nt'} \ln t'!$, which allows for a stable computation when $M - \max\{K_{\rm a},\underline{K_{\rm a}'}\}$ is large.
\item The optimal $\lambda$ in~\eqref{eq:E0} is given by the largest real root of the cubic function $c_1x^3 + c_2x^2 + c_3x + c_4$ with \begin{align} c_1 &= -\rho \rho_1(\rho\rho_1 + 1)t'P'P_2P_3^2,\\ c_2 &= \rho\rho_1 t'P'P_3^2 - \rho\rho_1(3-\rho_1)t'P'P_2P_3 \notag \\ &\quad -\rho\rho_1(\rho_1+1)P_2P_3^2,\\ c_3 &= (2\rho-1)\rho_1 t'P'P_3 + \rho_1 P_3^2 - 2\rho\rho_1 P_2P_3, \\ c_4 &= (\rho-1)\rho_1 t'P' + \rho_1P_3, \end{align} where $P_2$ is given by~\eqref{eq:P2} and $P_3 \triangleq (t' + \rho t)P'$. \item \revise{If the number of active users is fixed to $K_{\rm a}$, by letting $K_{\rm a}' = K_{\rm a}$ with probability $1$ and \revisee{by} setting the decoding radius $r$ to $0$, one obtains from Theorem~\ref{thm:RCU_unknownKa} a trivial generalization of \cite[Th.~1]{PolyanskiyISIT2017massive_random_access} to the complex case.} \end{enumerate} \begin{proof}[Proof of Theorem~\ref{thm:RCU_unknownKa}] We next present a sketch of the proof. The full proof can be found in Appendix~\ref{app:proof}. Denote by ${\mathcal W}_0$ the set of misdetected messages, i.e., ${\mathcal W}_0 \triangleq {\mathcal W} \setminus \widehat{{\mathcal W}}$, and by ${\mathcal W}_0'$ the set of falsely alarmed messages, i.e., ${\mathcal W}_0' \triangleq \widehat{{\mathcal W}} \setminus {\mathcal W}$. The \gls{MD} and \gls{FA} probabilities, given in~\eqref{eq:eps_MD} and \eqref{eq:eps_FA}, respectively, can be expressed as $P_{\rm MD} = \E[\ind{|{\mathcal W}| \ne 0} \cdot \md]$ and $P_{\rm FA} = \E[\ind{|\widehat{{\mathcal W}}| \ne 0} \cdot \fa]$. 
At the cost of adding a constant bounded by $p_0$ given in~\eqref{eq:p0}, we first replace the measure over which the expectation is taken by the one under which: i) there are at least $K_{l}$ and at most $K_{u}$ active users; ii) $\widetilde{\vect{r}{w}}_1,\dots,\widetilde{\vect{r}{w}}_{\vect{r}{K}_{\rm a}}$ are sampled uniformly without replacement from $[M]$; iii) $\rvVec{x}_i = \vect{c}_{\vect{r}{w}_i}$ for all $i$, instead of $\rvVec{x}_i = \vect{c}_{\vect{r}{w}_i} \ind{\|\vect{c}_{\vect{r}{w}_i}\|^2 \le nP}$. Let $K_{\rm a} \to K_{\rm a}'$ denote the event that the estimation step outputs $K_{\rm a}'$ while $K_{\rm a}$ users are active. Given $K_{\rm a} \to K_{\rm a}'$, note that if $\overline{K_{\rm a}'} < K_{\rm a}$, the decoder commits at least $K_{\rm a} - \overline{K_{\rm a}'}$ \glspl{MD}; if $\underline{K_{\rm a}'} > K_{\rm a}$, the decoder commits at least $\underline{K_{\rm a}'} - K_{\rm a}$ \glspl{FA}. We let ${\mathcal W}_0 = {\mathcal W}_{01} \cup {\mathcal W}_{02}$ where ${\mathcal W}_{01}$ denotes the list of $(\vect{r}{K}_{\rm a} - \overline{K_{\rm a}'})^+$ \textit{initial} \glspl{MD} due to insufficient decoded list size, and ${\mathcal W}_{02}$ the \textit{additional} \glspl{MD} occurring during decoding. Similarly, let ${\mathcal W}_0' = {\mathcal W}_{01}' \cup {\mathcal W}_{02}'$ where ${\mathcal W}_{01}'$ denotes the list of $(\underline{K_{\rm a}'}-\vect{r}{K}_{\rm a})^+$ \textit{initial} \glspl{FA} due to excessive decoded list size, and ${\mathcal W}_{02}'$ the \textit{additional} \glspl{FA}. Fig.~\ref{fig:venn} depicts the relation between these sets.
\begin{figure} \centering \begin{tikzpicture}[thick,scale=0.95, every node/.style={scale=0.95}] \def\radius{2cm} \def\radiusB{0.9*\radius} \def\mycolorbox#1{\textcolor{#1}{\rule{2ex}{2ex}}} \colorlet{colori}{gray!80} \colorlet{colorii}{gray!20} \coordinate (ceni) at (0,0); \coordinate[xshift=1.05*\radius] (cenii); \coordinate (edge1a) at (-\radiusB,0.1cm); \coordinate(edge1b) at (\radiusB,-.2cm); \coordinate (edge2a) at (\radius-\radiusB-.1cm,.1cm); \coordinate (edge2b) at (\radius+\radiusB+.3cm,-.2cm); \draw[fill=colori,fill opacity=0.5] (ceni) circle (\radiusB); \draw[fill=colorii,fill opacity=0.5] (cenii) circle (\radius); \draw (ceni) circle (\radiusB); \draw (edge1a) to (edge2a); \draw (edge1b) to (edge2b); \draw[-latex] (-\radius,-0.6*\radius) node[below,xshift=-.4cm,text width=2cm,align=center] {\small Transmitted messages ${\mathcal W}$} -- (-0.77*\radius,-0.5*\radius); \draw[-latex] (2.13*\radius,-0.6*\radius) node[below,xshift=.3cm,text width=2cm,align=center] {\small Decoded messages $\widehat{{\mathcal W}}$} -- (1.93*\radius,-0.5*\radius); \node[yshift=1.1*\radius,xshift=-1.25cm,text width=4cm,align=center] {\small \glspl{MD}: \\ ${\mathcal W}_0 = {\mathcal W}_{01} \cup {\mathcal W}_{02} = {\mathcal W} \setminus \widehat{{\mathcal W}}$}; \node[yshift=1.2*\radius,xshift=1.25cm,text width=4cm,align=center] at (cenii) {\small \glspl{FA}: \\ ${\mathcal W}_0' = {\mathcal W}'_{01} \cup {\mathcal W}'_{02} = \widehat{{\mathcal W}} \setminus {\mathcal W}$}; \node[xshift=.93cm,text width=1.2cm,align=center] at (ceni) {\small correctly decoded messages ${\mathcal W} \cap \widehat{{\mathcal W}}$}; \node[yshift=.8\radius,xshift=-.62cm,text width=2.1cm,align=center] at (ceni) {\small $~~~~(\vect{r}{K}_{\rm a}-\overline{K_{\rm a}'})^+$ initial \glspl{MD} $~~~{\mathcal W}_{01}~~~$}; \node[yshift=-.8\radius,xshift=-.7\radius,text width=1.5cm,align=center] at (ceni) {\small additional \glspl{MD} $~~~{\mathcal W}_{02}~~~$}; \node[yshift=.7\radius,xshift=.6\radius,text 
width=1.5cm,align=center] at (cenii) {\small $(\underline{K_{\rm a}'}-\vect{r}{K}_{\rm a})^+$ initial \glspl{FA} $~~~{\mathcal W}'_{01}~~~$}; \node[yshift=-1cm,xshift=.7\radius,text width=1.5cm,align=center] at (cenii) {\small additional \glspl{FA} $~~~{\mathcal W}'_{02}~~~$}; \end{tikzpicture} \caption{A diagram depicting the relation between the defined sets of messages.} \label{fig:venn} \end{figure} Using the above definitions, the set of transmitted messages is ${\mathcal W} = {\mathcal W}_{01} \cup {\mathcal W}_{02} \cup ({\mathcal W} \setminus {\mathcal W}_0)$, and the received signal is $\rvVec{y} = c({\mathcal W}_{01}) + c({\mathcal W}_{02}) + c({\mathcal W} \setminus {\mathcal W}_0) + \rvVec{z}$. Since the messages in ${\mathcal W}_{01}$ are always misdetected and the messages in ${\mathcal W}_{01}'$ are always falsely alarmed, the best approximation of ${\mathcal W}$ that the decoder can produce is ${\mathcal W}_{02} \cup ({\mathcal W} \setminus {\mathcal W}_0) \cup {\mathcal W}_{01}'$. However, under the considered error event ${\mathcal W} \to \widehat{{\mathcal W}}$, the actual decoded list is ${\mathcal W}'_{02} \cup ({\mathcal W} \setminus {\mathcal W}_0) \cup {\mathcal W}_{01}'$. Therefore, ${\mathcal W} \to \widehat{{\mathcal W}}$ implies the event $F({\mathcal W}_{01},{\mathcal W}_{02},{\mathcal W}_{01}',{\mathcal W}_{02}') \triangleq \big\{\|c({\mathcal W}_{01}) + c({\mathcal W}_{02})- c({\mathcal W}_{01}') - c({\mathcal W}_{02}') + \rvVec{z}\|^2 < \|c({\mathcal W}_{01}) - c({\mathcal W}_{01}') + \rvVec{z}\|^2 \big\}$. 
It follows that, after the change of measure, $P_{\rm MD}$ and $P_{\rm FA}$ can be bounded as \begin{align} P_{\rm MD} &\le \sum_{K_{\rm a} =\max\{K_{l},1\}}^{K_{u}} \!\!\bigg(P_{\vect{r}{K}_{\rm a}}(K_{\rm a}) \sum_{K_{\rm a}' = K_{l}}^{K_{u}} \sum_{t\in {\mathcal T}}\frac{t+(K_{\rm a}-\overline{K_{\rm a}'})^+}{K_{\rm a}} \notag \\ &\qquad \cdot \P[|{\mathcal W}_{02}| = t, K_{\rm a} \to K_{\rm a}'] \bigg) + p_0, \\ P_{\rm FA} &\le \sum_{K_{\rm a} =K_{l}}^{K_{u}} \bigg(P_{\vect{r}{K}_{\rm a}}(K_{\rm a}) \sum_{K_{\rm a}' = K_{l}}^{K_{u}} \sum_{t\in {\mathcal T}} \sum_{t' \in {\mathcal T}_t} \notag \\ &\qquad \frac{t+(\underline{K_{\rm a}'} - K_{\rm a})^+}{K_{\rm a} \!-\! t \!-\! {(K_{\rm a} \!-\! \overline{K_{\rm a}'})}^+ \!\!+\! t' \!+\! {(\underline{K_{\rm a}'}\!-\!K_{\rm a})}^+\!} \notag \\ &\qquad \cdot \P[|{\mathcal W}_{02}| = t, |{\mathcal W}_{02}'| = t', K_{\rm a} \to K_{\rm a}'] \bigg) \!+\! p_0, \label{eq:tmp853} \end{align} where ${\mathcal T}$ and ${\mathcal T}_t$ are given by~\eqref{eq:T} and \eqref{eq:Tt}, respectively. The constraint $t \in {\mathcal T}$ holds because the number of \glspl{MD}, given by $t+{(K_{\rm a}-\overline{K_{\rm a}'})}^+$, is upper-bounded by the total number of transmitted messages $K_{\rm a}$, and by $M-\underline{K_{\rm a}'}$ (since at least $\underline{K_{\rm a}'}$ messages are returned). The constraint $t' \in {\mathcal T}_t$ holds because: i) the decoded list size, given by $K_{\rm a} - t - {(K_{\rm a} - \overline{K_{\rm a}'})}^+ + t' + {(\underline{K_{\rm a}'}-K_{\rm a})}^+$, must be in $[\underline{K_{\rm a}'} : \overline{K_{\rm a}'}]$ and must be positive since the event $|\widehat{{\mathcal W}}| = 0$ results in no \gls{FA} by definition; ii) the number of \glspl{FA}, given by $t'+ (\underline{K_{\rm a}'}-K_{\rm a})^+$, is upper-bounded by the number of messages that are not transmitted $M-K_{\rm a}$, and by the maximal number of decoded messages $\overline{K_{\rm a}'}$. 
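As a concrete (and purely illustrative) rendering of these counting constraints, the admissible values of $t$ and $t'$ can be enumerated directly; the names \texttt{K\_lo} and \texttt{K\_hi} below stand for $\underline{K_{\rm a}'}$ and $\overline{K_{\rm a}'}$, and the sketch encodes only the conditions stated above, not the exact definitions in~\eqref{eq:T} and \eqref{eq:Tt}.

```python
# Enumerate the numbers of additional MDs (t) and FAs (t') consistent with
# the constraints described in the text. K_lo/K_hi denote the decoder's
# minimum/maximum list size. This is a sketch of the stated conditions only.
def T_set(K_a, K_lo, K_hi, M):
    md0 = max(K_a - K_hi, 0)              # initial MDs
    t_max = min(K_a, M - K_lo) - md0      # total MDs <= K_a and <= M - K_lo
    return list(range(0, t_max + 1))

def Tt_set(t, K_a, K_lo, K_hi, M):
    md0 = max(K_a - K_hi, 0)
    fa0 = max(K_lo - K_a, 0)              # initial FAs
    out = []
    for tp in range(0, M - K_a + 1):
        size = K_a - t - md0 + tp + fa0   # decoded list size
        if K_lo <= size <= K_hi and size > 0 and tp + fa0 <= min(M - K_a, K_hi):
            out.append(tp)
    return out

print(T_set(5, 4, 6, 100))        # [0, 1, 2, 3, 4, 5]
print(Tt_set(1, 4, 4, 6, 100))    # [1, 2, 3]
```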
\revisee{Let $A(K_{\rm a},K_{\rm a}') \triangleq \{\rvVec{y} \colon K_{\rm a}' = \arg\min_{K \in [K_l : K_u]} m(\rvVec{y},K) \}$.} Since the event $K_{\rm a} \to K_{\rm a}'$ implies $|\widehat{{\mathcal W}}| \in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]$ and $A(K_{\rm a},K_{\rm a}')$, we have that \begin{align*} &\P[|{\mathcal W}_{02}| = t, K_{\rm a} \to K_{\rm a}'] \notag \\ &\le \P[{|{\mathcal W}_{02}| = t, |\widehat{{\mathcal W}}|\in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}],A(K_{\rm a},K_{\rm a}')}] \\ &\le \min\big\{\P\big[{|{\mathcal W}_{02}| \!=\! t, |\widehat{{\mathcal W}}|\in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]}\big], \P[A(K_{\rm a},K_{\rm a}')] \big\}. \end{align*} Similarly, $\P[{|{\mathcal W}_{02}| = t,|{\mathcal W}_{02}'| = t', K_{\rm a} \to K_{\rm a}'}] \le \min\Big\{\P\Big[|{\mathcal W}_{02}| \!=\! t, |{\mathcal W}_{02}'| \!=\! t', |\widehat{{\mathcal W}}|\!\in\! [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]\Big]$, $\P[A(K_{\rm a},K_{\rm a}')] \Big\}.$ Under the new measure, $\rvVec{y} \sim {\mathcal C}{\mathcal N}(\mathbf{0},(1+K_{\rm a} P')\mat{\mathrm{I}}_n)$. Thus, we can show that $\P[A(K_{\rm a},K_{\rm a}')]$ is upper-bounded by $\xi(K_{\rm a},K_{\rm a}')$ given by~\eqref{eq:xi}. To establish \eqref{eq:eps_MD} and \eqref{eq:eps_FA}, we proceed as in \cite{PolyanskiyISIT2017massive_random_access}: we write the events $\{|{\mathcal W}_{02}| \!=\! t, |\widehat{{\mathcal W}}| \!\in\! [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]\}$ and $\{|{\mathcal W}_{02}| \!=\! t,|{\mathcal W}_{02}'| \!=\! t', |\widehat{{\mathcal W}}|\!\in\! [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]\}$ as unions of $F({\mathcal W}_{01},{\mathcal W}_{02},{\mathcal W}_{01}',{\mathcal W}_{02}')$ events, and bound their probabilities by $\min\{p_t, q_t\}$ and $\min\{p_{t,t'}, q_{t,t'}\}$, respectively.
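To make the quantity $\P[A(K_{\rm a},K_{\rm a}')]$ concrete, the following Monte Carlo sketch estimates it for the energy-based estimator introduced in Proposition~\ref{prop:xi} below, under the new measure $\rvVec{y} \sim {\mathcal C}{\mathcal N}(\mathbf{0},(1+K_{\rm a}P')\mat{\mathrm{I}}_n)$; all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

# Monte Carlo estimate of P[A(K_a, K_a')] for the energy-based estimator
# m(y, K) = | ||y||^2 - n(1 + K P') |, with y ~ CN(0, (1 + K_a P') I_n)
# under the new measure. All parameter values are illustrative.
rng = np.random.default_rng(0)
n, Pp = 2000, 0.05                  # blocklength and power P' (assumed values)
K_a, K_lo, K_hi = 10, 0, 30
trials = 20000

# ||y||^2 for y ~ CN(0, s2 I_n) is distributed as (s2/2) * chi^2 with 2n dof
s2 = 1.0 + K_a * Pp
y2 = (s2 / 2.0) * rng.chisquare(2 * n, size=trials)

Ks = np.arange(K_lo, K_hi + 1)
m = np.abs(y2[:, None] - n * (1.0 + Ks[None, :] * Pp))
K_hat = Ks[np.argmin(m, axis=1)]

p_correct = np.mean(K_hat == K_a)              # empirical P[A(K_a, K_a)]
p_close = np.mean(np.abs(K_hat - K_a) <= 2)    # estimate within +/- 2
print(p_correct, p_close)
```

With these (arbitrary) values the estimator concentrates around the true $K_{\rm a}$, which is the behavior that $\xi(K_{\rm a},K_{\rm a}')$ captures analytically.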
\revisee{Finally, to guarantee that {\em both} \eqref{eq:eps_MD} and \eqref{eq:eps_FA} are satisfied, we allow for randomized coding strategies, by introducing the variable $\vect{r}{U}$, which acts as common randomness. Proceeding as in \cite[Th.~19]{Polyanskiy2011feedback}, one can show that it is sufficient to perform randomization across (at most) three deterministic codes, i.e., $|{\mathcal U}|\le 3$.} \end{proof} In the following proposition, we derive $\xi(K_{\rm a},K_{\rm a}')$ for two different estimators of $\vect{r}{K}_{\rm a}$. \begin{proposition} \label{prop:xi} For the \gls{ML} estimation of $\vect{r}{K}_{\rm a}$, i.e., $m(\vect{y},K) \!=\! -\ln p_{\rvVec{y} | \vect{r}{K}_{\rm a}}(\vect{y} | K)$, $\xi(K_{\rm a}, K_{\rm a}')$ is given by \begin{align} \label{eq:xi_ML} \xi(K_{\rm a},K_{\rm a}') &\triangleq \min_{K:\; K \ne K_{\rm a}'} \!\Big(\ind{K \!<\! K_{\rm a}'}\frac{\Gamma(n,\zeta(K,K_{\rm a},K_{\rm a}'))}{\Gamma(n)} \notag \\ & \qquad ~~ + \ind{K \!>\! K_{\rm a}'}\frac{\gamma(n,\zeta(K,K_{\rm a},K_{\rm a}'))}{\Gamma(n)}\Big), \end{align} with \begin{align} \zeta(K,K_{\rm a},K_{\rm a}') &\triangleq n \ln\left(\frac{1+KP'}{1+K_{\rm a}'P'}\right)(1+K_{\rm a}P')^{-1} \notag \\ &\quad \cdot \left(\frac{1}{1+K_{\rm a}'P'}-\frac{1}{1+KP'}\right)^{-1}. \label{eq:zeta_ML} \end{align} For an energy-based estimation of $\vect{r}{K}_{\rm a}$ with $m(\vect{y},K) = |\|\vect{y}\|^2 - n(1 + KP')|$, $\xi(K_{\rm a},K_{\rm a}')$ is given by~\eqref{eq:xi_ML} with \begin{align} \zeta(K,K_{\rm a},K_{\rm a}') \triangleq \frac{n}{1+K_{\rm a}P'}\left(1+\frac{K + K_{\rm a}'}{2}P'\right). \label{eq:zeta_energy} \end{align} \end{proposition} \begin{proof} See Appendix~\ref{proof:xi}. \end{proof} \revisee{The decoding radius $r$ can be optimized according to the target \gls{MD} and \gls{FA} probabilities. A large decoding radius reduces the initial \glspl{MD} and \glspl{FA} at the cost of overfitting, especially at low SNR values.
Specifically, when the noise dominates, increasing $r$ increases the chance that the decoder~\eqref{eq:decoder_Ka'} returns a list whose sum is closer in Euclidean distance to the noise than to the sum of the transmitted codewords.} Our random-coding bound can also be applied to \revisee{SA-MPR} to investigate the resulting \gls{MD}--\gls{FA} trade-off. Consider an \revisee{SA-MPR} scheme where a length-$n$ frame is divided into $L$ slots and each user randomly chooses a slot in which to transmit. For $K_{\rm a}$ active users, the number of users transmitting in a slot follows a Binomial distribution with parameters $(K_{\rm a},1/L)$. The \gls{PMF} of the number of active users per slot, denoted by $\vect{r}{K}_{\rm {SA}}$, is given by $P_{\vect{r}{K}_{\rm {SA}}}(K_{\rm SA}) = \sum_{K_{\rm a}} P_{\vect{r}{K}_{\rm a}}(K_{\rm a}) \binom{K_{\rm a}}{K_{\rm SA}} L^{-K_{\rm SA}}\left(1-\frac{1}{L}\right)^{K_{\rm a} - K_{\rm SA}}$. Existing analyses of slotted ALOHA usually assume that the decoder can perfectly detect whether zero, one, or more than one signal has been transmitted in a slot. Furthermore, it is usually assumed that a collision-free slot leads to successful message decoding. However, the larger the number of slots, the shorter the slot length over which a user transmits its signal. To account for both detection and decoding errors, in Corollary~\ref{coro:slotted_ALOHA} below, we apply our decoder in a slot-by-slot manner, and obtain a random-coding bound similar to Theorem~\ref{thm:RCU_unknownKa}. \begin{corollary} \label{coro:slotted_ALOHA} For the Gaussian \gls{MAC} with the number of active users following $P_{\vect{r}{K}_{\rm a}}$ and frame length $n$, an SA-MPR scheme with $L$ slots can achieve the \gls{MD} and \gls{FA} probabilities given in~\eqref{eq:eps_MD} and \eqref{eq:eps_FA}, respectively, with codebook size $M$, codeword length $n/L$, power constraint $PL$, and per-slot number of active users following $P_{\vect{r}{K}_{\rm {SA}}}$.
\end{corollary} \section{Numerical Evaluation} \label{sec:numerical} In this section, we numerically evaluate the proposed random-coding bound and compare it with \revisee{the} random-access \revisee{bound/}schemes \revisee{in~\cite{PolyanskiyISIT2017massive_random_access,Fengler2019sparcs,Amalladinne2020unsourced}}. We assume that $\vect{r}{K}_{\rm a}$ follows a Poisson distribution. We consider $k \triangleq \log_2 M = 128$ bits and $n = 19200$ complex channel uses (i.e., $38400$ real degrees of freedom). The power is given in terms of the average bit energy $\EbNo \triangleq nP/k$.
\begin{figure}[t!]
\centering
% TikZ plot: required $\EbNo$ (dB) vs. average number of active users $\E[\vect{r}{K}_{\rm a}]$ for Theorem~\ref{thm:RCU_unknownKa}, the bound in \cite[Th.~1]{PolyanskiyISIT2017massive_random_access}, SA-MPR and SA-MPR with slot-index coding (Corollary~\ref{coro:slotted_ALOHA}), SPARC~\cite{Fengler2019sparcs}, and enhanced SPARC~\cite{Amalladinne2020unsourced}.
\caption{The required $\EbNo$ to achieve $\max\{P_{\rm MD}, P_{\rm FA}\} \le 0.1$ vs. $\E[\vect{r}{K}_{\rm a}]$ for $k=128$ bits, $n = 19200$ channel uses, and $\vect{r}{K}_{\rm a} \sim \mathrm{Poisson}(\E[\vect{r}{K}_{\rm a}])$. We compare our random-coding bound (Theorem~\ref{thm:RCU_unknownKa}) and SA-MPR bound (Corollary~\ref{coro:slotted_ALOHA}) with the bound in~\cite[Th.~1]{PolyanskiyISIT2017massive_random_access} for $\vect{r}{K}_{\rm a}$ known, and two practical schemes, namely, the SPARC scheme~\cite{Fengler2019sparcs} and its enhancement~\cite{Amalladinne2020unsourced}. Solid lines represent schemes/bounds with $\vect{r}{K}_{\rm a}$ unknown; dashed lines represent schemes/bounds with $\vect{r}{K}_{\rm a}$ known.}
\label{fig:EbN0_vs_EKa}
\end{figure}
In Fig.~\ref{fig:EbN0_vs_EKa}, we compare our random-coding bound with that of Polyanskiy~\cite{PolyanskiyISIT2017massive_random_access} in terms of the required $\EbNo$ such that neither $P_{\rm MD}$ nor $P_{\rm FA}$ exceeds $0.1$. For our bound, we consider the \gls{ML} estimator of $\vect{r}{K}_{\rm a}$ and zero decoding radius, i.e., $\overline{K_{\rm a}'} = \underline{K_{\rm a}'} = K_{\rm a}'$.
Numerical evaluation suggests that this choice is optimal for these target MD and FA probabilities. \revise{We choose $K_l$ to be the largest value and $K_u$ the smallest value for which $\P[{\vect{r}{K}_{\rm a} \notin [K_l:K_u]}] < 10^{-9}$. The $q_t$ and $q_{t,t'}$ terms are evaluated for $t = 1$ and $K_{\rm a} \le 50$ only.} For the bound in \cite[Th.~1]{PolyanskiyISIT2017massive_random_access}, we average over the Poisson distribution of $\vect{r}{K}_{\rm a}$. This corresponds to the scenario where $\vect{r}{K}_{\rm a}$ is random but known. As can be seen, the extra $\EbNo$ required due to the lack of knowledge of $\vect{r}{K}_{\rm a}$ is about $0.5$--$0.7$~dB. In Fig.~\ref{fig:EbN0_vs_EKa}, we also show the performance of the SA-MPR bound given in Corollary~\ref{coro:slotted_ALOHA}, where we optimize $L$ and the decoding radius for each $\E[\vect{r}{K}_{\rm a}]$. We also consider the possibility of encoding $\lfloor \log_2 L\rfloor$ extra bits for each user in the slot index, and assume perfect decoding of these bits. We refer to this scheme as SA-MPR with slot-index coding. We also evaluate the performance of two practical schemes, namely: \begin{itemize} \item the SPARC scheme proposed in~\cite{Fengler2019sparcs}, which employs a concatenated coding framework with an inner approximate message passing~(AMP) decoder followed by an outer tree decoder. \item an enhancement of the SPARC scheme proposed in~\cite{Amalladinne2020unsourced}, which we refer to as enhanced SPARC. This scheme introduces belief propagation between the inner AMP decoder and the outer tree decoder in an iterative manner. \end{itemize} Note that the SPARC and enhanced SPARC schemes were proposed for the Gaussian \gls{MAC} with a \textit{known} number of active users. To adapt these schemes to the case where $\vect{r}{K}_{\rm a}$ is unknown, we employ an energy-based estimate of $\vect{r}{K}_{\rm a}$ and then treat this estimate as the true $\vect{r}{K}_{\rm a}$ in the decoding process.
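For concreteness, the energy-based estimate used in this adaptation can be sketched as follows. Since $m(\vect{y},K) = |\|\vect{y}\|^2 - n(1+KP')|$ is piecewise linear in $K$, the minimization reduces to rounding $(\|\vect{y}\|^2/n - 1)/P'$ to the nearest admissible integer; all parameter values below are illustrative.

```python
import numpy as np

# Sketch of the energy-based estimate of K_a used to adapt SPARC-type decoders:
# K_hat = argmin_K | ||y||^2 - n(1 + K P') |, i.e., round (||y||^2/n - 1)/P'
# to the nearest integer and clip to the admissible range. Values illustrative.
def estimate_Ka(y, Pp, K_lo, K_hi):
    n = y.size
    val = (np.linalg.norm(y) ** 2 / n - 1.0) / Pp
    K_hat = int(round(float(val)))
    return min(max(K_hat, K_lo), K_hi)

rng = np.random.default_rng(1)
n, Pp, K_a = 19200, 0.01, 50
s2 = 1.0 + K_a * Pp            # received per-symbol power with K_a active users
y = np.sqrt(s2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
K_hat = estimate_Ka(y, Pp, 0, 300)
print(K_hat)                   # close to the true K_a = 50
```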
From Fig.~\ref{fig:EbN0_vs_EKa}, we see that SA-MPR, even with slot-index coding, becomes power inefficient as $\E[\vect{r}{K}_{\rm a}]$ increases. The enhanced SPARC scheme achieves the closest performance to our bound for $\E[\vect{r}{K}_{\rm a}] \ge 100$. It outperforms the original SPARC scheme by about $0.5$~dB for large $\E[\vect{r}{K}_{\rm a}]$. In Fig.~\ref{fig:Pe_vs_EbN0}, we plot the bounds on the \gls{MD} and \gls{FA} probabilities in Theorem~\ref{thm:RCU_unknownKa} (with \gls{ML} estimation of $\vect{r}{K}_{\rm a}$) as a function of $\EbNo$ for different decoding radii. We observe that decoding with a small radius performs better in the low $\EbNo$ regime, where noise overfitting is the bottleneck. Increasing the decoding radius improves the performance in the moderate and high $\EbNo$ regimes, where setting $r=0$ results in a high error floor due to the initial \glspl{MD} and \glspl{FA}. The error floor can be characterized analytically (see Appendix~\ref{app:error_floor}). \begin{figure}[t!]
\centering
% TikZ plot: bounds $\epsilon_{\rm MD}$ (solid) and $\epsilon_{\rm FA}$ (dashed) vs. $\EbNo$ (dB) for decoding radii $r = 0, 1, 2$, together with the corresponding analytical error floors (dash-dotted horizontal lines).
\caption{The bounds on the \gls{MD} and \gls{FA} probabilities vs. $\EbNo$ for $k=128$ bits, $n = 19200$ channel uses, and $\vect{r}{K}_{\rm a} \sim \mathrm{Poisson}(50)$.}
\label{fig:Pe_vs_EbN0}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
We proposed a formulation for massive uncoordinated access where both the identity and the number of active users are unknown. We derived a random-coding bound for the Gaussian multiple access channel that reveals a trade-off between misdetection and false alarm. Our bound \revisee{provides an estimate of} the penalty in terms of energy efficiency due to the lack of knowledge of the number of active users, and serves as a benchmark to assess the performance of practical schemes. Possible future work includes extending our bound to the \gls{MAC} with fading and multiple antennas. \section*{Acknowledgement} This work has been supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP). \appendices \section{Proof of Theorem~\ref{thm:RCU_unknownKa}} \label{app:proof} The following well-known results will be used in our proof. \begin{lemma}[{Change of measure~\cite[Lemma~4]{Ohnishi2020novel}}] \label{lem:change_measure} Let $p$ and $q$ be two probability measures. Consider a random variable $\vect{r}{x}$ supported on ${\mathcal H}$ and a function $f \colon {\mathcal H} \to [0,1]$.
It holds that \begin{align} \E_p[f(\vect{r}{x})] \le \E_q[f(\vect{r}{x})] + d_{\rm TV}(p,q) \end{align} where $d_{\rm TV}(p,q)$ denotes the total variation distance between $p$ and $q$. \end{lemma} \begin{lemma}[{Chernoff bound~\cite[Th. 6.2.7]{DeGroot2012ProbStats}}] \label{lem:Chernoff} For a random variable $\vect{r}{x}$ with moment-generating function $\E[e^{t \vect{r}{x}}]$ defined for all $|t| \le b$, it holds for all $\lambda \in [0,b]$ that \begin{align} \P[\vect{r}{x} \le x] \le e^{\lambda x} \E[e^{-\lambda \vect{r}{x}}]. \end{align} \end{lemma} \begin{lemma} [{Gallager's $\rho$-trick~\cite[p.~136]{Gallager1968information}}] \label{lem:Gallager} It holds that $\P[\cup_i A_i] \le (\sum_{i} \P[A_i])^\rho$ for every $\rho \in [0,1]$. \end{lemma} \begin{lemma} \label{lem:chi2} Let $\rvVec{x} \sim {\mathcal C}{\mathcal N}(\vect{\mu},\sigma^2\mat{\mathrm{I}}_n)$. For all $\gamma > -\frac{1}{\sigma^2}$, it holds that \begin{align} \E[e^{-\gamma \|\rvVec{x}\|^2}] = (1+\gamma\sigma^2)^{-n} \exp\bigg(-\frac{\gamma\|\vect{\mu}\|^2}{1+\gamma\sigma^2}\bigg). \label{eq:tmp363} \end{align} \end{lemma} We present next an error analysis of the random-coding scheme introduced in Section~\ref{sec:RCU}. Denote by ${\mathcal W}_0$ the set of misdetected messages, i.e., ${\mathcal W}_0 \triangleq {\mathcal W} \setminus \widehat{{\mathcal W}}$, and by ${\mathcal W}_0'$ the set of falsely alarmed messages, i.e., ${\mathcal W}_0' \triangleq \widehat{{\mathcal W}} \setminus {\mathcal W}$. The \gls{MD} and \gls{FA} probabilities, defined respectively in~\eqref{eq:eps_MD} and \eqref{eq:eps_FA}, can be expressed as the average fraction of misdetected and falsely alarmed messages, respectively, i.e., \begin{align} P_{\rm MD} &= \E[\ind{|{\mathcal W}| \ne 0} \cdot \md], \label{eq:pMD}\\ P_{\rm FA} &= \E[\ind{|\widehat{{\mathcal W}}| \ne 0} \cdot \fa]. 
\label{eq:pFA} \end{align} \subsection{A Change of Measure} \label{sec:change_measure} Recall that $|{\mathcal W}|$ is the number of {\em distinct} transmitted messages. Since multiple transmitters may pick the same codeword to transmit, $|{\mathcal W}|$ can be smaller than $\vect{r}{K}_{\rm a}$. Since both $\ind{|{\mathcal W}| \ne 0} \cdot \frac{|{\mathcal W}_0|}{|{\mathcal W}|}$ and $\ind{|\widehat{{\mathcal W}}| \ne 0} \cdot \frac{|{\mathcal W}_0'|}{|\widehat{{\mathcal W}}|}$ are nonnegative and upper-bounded by one, we can apply Lemma~\ref{lem:change_measure} to these random quantities. Specifically, we replace the measure over which the expectation is taken by the one under which: i) there are at least $K_{l}$ and at most $K_{u} \ge \overline{K_{\rm a}'}$ active users, i.e., $K_{l}\le\vect{r}{K}_{\rm a} \le K_{u}$; ii) $\widetilde{\vect{r}{w}}_1,\dots,\widetilde{\vect{r}{w}}_{\vect{r}{K}_{\rm a}}$ are sampled uniformly without replacement from $[M]$, i.e., $|{\mathcal W}| = \vect{r}{K}_{\rm a}$; iii) $\rvVec{x}_i = \vect{c}_{\vect{r}{w}_i}, \forall i$ (instead of $\rvVec{x}_i = \vect{c}_{\vect{r}{w}_i} \ind{\|\vect{c}_{\vect{r}{w}_i}\|^2 \le nP}$). It then follows from \cite[Eq. (41)]{Kowshik2020fundamental} that the total variation between the true measure and the new one is upper-bounded by $\P[{\vect{r}{K}_{\rm a} \notin [K_{l}:K_u]}] + \P[ |{\mathcal W}| < \vect{r}{K}_{\rm a}] + \P[\overline{U}]$, where $U \triangleq \{\|\vect{c}_{\vect{r}{w}_i}\|^2 \le nP, \forall i \in [\vect{r}{K}_{\rm a}] \}$ and $\overline{U}$ denotes the complement of~$U$. We compute these probabilities as follows: \begin{itemize}[leftmargin=*] \item To compute the first probability, we simply use that $\P[{\vect{r}{K}_{\rm a} \notin [K_{l}:K_u]}] = 1 - \sum_{K_{\rm a} = K_{l}}^{K_{u}}P_{\vect{r}{K}_{\rm a}}(K_{\rm a})$. \item Consider a given $\vect{r}{K}_{\rm a} = K_{\rm a}$. 
Since $\widetilde{\vect{r}{w}}_1,\dots,\widetilde{\vect{r}{w}}_{K_{\rm a}}$ are drawn uniformly and independently from $[M]$, there are $M^{K_{\rm a}}$ possible $K_{\rm a}$-tuples. Among them, $\frac{M!}{(M-K_{\rm a})!}$ tuples have nonduplicate elements. Therefore, $\P[ |{\mathcal W}| = K_{\rm a} \,\vert\, \vect{r}{K}_{\rm a} = K_{\rm a}] = \frac{M!}{(M-K_{\rm a})!} \frac{1}{M^{K_{\rm a}}}$. As a consequence, $\P[ |{\mathcal W}| < \vect{r}{K}_{\rm a}] = 1 - \P[ |{\mathcal W}| = \vect{r}{K}_{\rm a}] = 1- \E_{\vect{r}{K}_{\rm a}}\Big[\frac{M!}{M^{\vect{r}{K}_{\rm a}}(M-\vect{r}{K}_{\rm a})!}\Big]$.\footnote{In~\cite{PolyanskiyISIT2017massive_random_access}, $\P[ |{\mathcal W}| < K_{\rm a}]$ is upper-bounded by $\binom{K_{\rm a}}{2}/M$ using the union bound.} \item The probability $\P[\overline{U}]$ can finally be evaluated as \begin{align} \P[\overline{U}] &= \E_{\vect{r}{K}_{\rm a}}\Bigg[{\P[\bigcup_{i=1}^{\vect{r}{K}_{\rm a}} \{\|\vect{c}_{\vect{r}{w}_i}\|^2 > nP\}]}\Bigg] \\ &\le \E_{\vect{r}{K}_{\rm a}} \Bigg[\sum_{i=1}^{\vect{r}{K}_{\rm a}}{\P[\|\vect{c}_{\vect{r}{w}_i}\|^2 > nP]}\Bigg] \label{eq:tmp675}\\ &= \E[\vect{r}{K}_{\rm a}] \frac{\Gamma(n,nP/P')}{\Gamma(n)}, \label{eq:tmp676} \end{align} where \eqref{eq:tmp675} follows from the union bound and \eqref{eq:tmp676} holds since $\|\vect{c}_{\vect{r}{w}_i}\|^2$ follows the Gamma distribution with shape $n$ and scale $P'$. \end{itemize} From the above calculations, we deduce that the total variation between the two measures is upper-bounded by $p_0$ defined in~\eqref{eq:p0}. Applying Lemma~\ref{lem:change_measure} to the random quantities $\ind{|{\mathcal W}| \ne 0} \cdot \frac{|{\mathcal W}_0|}{|{\mathcal W}|}$ and $\ind{|\widehat{{\mathcal W}}| \ne 0} \cdot \frac{|{\mathcal W}_0'|}{|\widehat{{\mathcal W}}|}$, we implicitly consider the new measure from now on, at a cost of adding $p_0$ to their original expectations.
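The three terms whose sum gives $p_0$ are straightforward to evaluate numerically. The sketch below does so for an illustrative parameter set; a small codebook size $M$ is used so that the collision term is visibly nonzero (for the $M = 2^{128}$ used in Section~\ref{sec:numerical}, it is negligible).

```python
import math
from scipy.stats import poisson
from scipy.special import gammaincc

# Numerical sketch of the three terms whose sum bounds the total variation
# distance (p_0 in the paper). Parameter values are illustrative; M is kept
# small so that the collision term is visibly nonzero.
M, n = 2 ** 16, 19200
lam = 50                          # K_a ~ Poisson(lam)
K_l, K_u = 10, 100
P, Pp = 0.01, 0.0095              # power constraint P and codebook power P' < P

# i) K_a outside [K_l : K_u]
p_range = 1.0 - (poisson.cdf(K_u, lam) - poisson.cdf(K_l - 1, lam))

# ii) some message index drawn twice: 1 - E[ M! / (M^{K_a} (M - K_a)!) ]
p_dup = 1.0 - sum(
    poisson.pmf(K, lam) * math.exp(sum(math.log1p(-i / M) for i in range(K)))
    for K in range(0, 301)        # range covers essentially all Poisson mass
)

# iii) some codeword violates the power constraint:
#      E[K_a] * Gamma(n, nP/P') / Gamma(n)  (regularized incomplete gamma)
p_pow = lam * gammaincc(n, n * P / Pp)

print(p_range, p_dup, p_pow)
```

Only the collision term is non-negligible here; with $M = 2^{128}$ all three terms are essentially zero for these $K_l$, $K_u$, and $P/P'$.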
It remains to bound the \gls{MD} and \gls{FA} probabilities given in~\eqref{eq:pMD} and \eqref{eq:pFA}, respectively, under the new measure. For the sake of clarity, in Appendix~\ref{sec:special_case}, we shall prove bounds on $P_{\rm MD}$ and $P_{\rm FA}$ for a special case where i) $\vect{r}{K}_{\rm a}$ and $\vect{r}{K}'_{\rm a}$ are fixed and $r = 0$, i.e., there are always $K_{\rm a}$ users transmitting and the decoder always outputs a list of size $K'_{\rm a}$; ii) $K'_{\rm a} < \min\{K_{\rm a}, M-K_{\rm a}\}$. Then, in Appendix~\ref{sec:general_case}, we shall show how to extend the proof to the general case where $\vect{r}{K}_{\rm a}$ and $\vect{r}{K}'_{\rm a}$ are random and $r \ge 0$. \subsection{A Special Case} \label{sec:special_case} In the aforementioned special case, \eqref{eq:eps_MD} and \eqref{eq:eps_FA} become \begin{align} \epsilon_{\rm MD} &= \sum_{t = 0}^{K_{\rm a}'}\frac{t+K_{\rm a}-K_{\rm a}'}{K_{\rm a}} \min\{p_{t,t},q_{t,t}\} + p_0, \label{eq:eps_MD_simp}\\ \epsilon_{\rm FA} &= \sum_{t = 0}^{K_{\rm a}'} \frac{t}{K'_{\rm a}} \min\{p_{t,t}, q_{t,t}\} + p_0, \label{eq:eps_FA_simp} \end{align} where $p_{t,t}$ and $q_{t,t}$ will be derived next. We now show that $\epsilon_{\rm MD}$ and $\epsilon_{\rm FA}$ are indeed upper bounds on $P_{\rm MD}$ and $P_{\rm FA}$, respectively, in this special case. Observe that since the decoded list size $K'_{\rm a}$ is smaller than the number of transmitted messages $K_{\rm a}$, at least $K_{\rm a} - {K_{\rm a}'}$ messages are misdetected by default, and there can be $t \in [0:K_{\rm a}']$ additional \glspl{MD} occurring during the decoding process.
Due to symmetry, we can assume without loss of generality that ${\mathcal W} = [K_{\rm a}]$ and that the list of messages that are initially misdetected due to insufficient decoded list size is ${\mathcal W}_{01} = [{K}_{\rm a} - {K_{\rm a}'}]$.\footnote{Note that due to the ambiguity of the users' identities, this does not imply that the messages from a set of specific users are always misdetected.} Furthermore, let ${\mathcal W}_{02} = {\mathcal W}_{0} \setminus {\mathcal W}_{01}$ denote the set of $t$ additional \glspl{MD}. Note that ${\mathcal W}_{02}$ is a generic subset of $[K_{\rm a} - {K_{\rm a}'} + 1:K_{\rm a}]$. Note also that $t$ is the number of \glspl{FA}, i.e., $|{\mathcal W}_0'| = t$. The set of transmitted messages can thus be expressed as ${\mathcal W} = {\mathcal W}_{01} \cup {\mathcal W}_{02} \cup ({\mathcal W} \setminus {\mathcal W}_0)$, and the received signal is $\rvVec{y} = c({\mathcal W}_{01}) + c({\mathcal W}_{02}) + c({\mathcal W} \setminus {\mathcal W}_0) + \rvVec{z}$. Since the messages in ${\mathcal W}_{01}$ are always misdetected, the best approximation of ${\mathcal W}$ that the decoder can produce is ${\mathcal W}_{02} \cup ({\mathcal W} \setminus {\mathcal W}_0)$. However, under the considered error event ${\mathcal W} \to \widehat{{\mathcal W}}$, the actual decoded list is ${\mathcal W}_{0}' \cup ({\mathcal W} \setminus {\mathcal W}_0)$. Therefore, ${\mathcal W} \to \widehat{{\mathcal W}}$ implies that $\|\rvVec{y} - c({\mathcal W}_{0}') - c({\mathcal W} \setminus {\mathcal W}_0)\|^2 < \|\rvVec{y} - c({\mathcal W}_{02}) - c({\mathcal W} \setminus {\mathcal W}_0)\|^2$, which is equivalent to \begin{align} \|c({\mathcal W}_{01}) + c({\mathcal W}_{02})- c({\mathcal W}_{0}') + \rvVec{z}\|^2 < \|c({\mathcal W}_{01}) + \rvVec{z}\|^2. \label{eq:eventF_simp} \end{align} We denote by $F({\mathcal W}_{01},{\mathcal W}_{02},{\mathcal W}_{0}')$ the event that ${\mathcal W}_{01},{\mathcal W}_{02},{\mathcal W}_{0}'$ satisfy~\eqref{eq:eventF_simp}.
We now compute the expectations in~\eqref{eq:pMD} and \eqref{eq:pFA}. Recall that, under assumptions just stated, we have $|{\mathcal W}_0| = t + K_{\rm a} - K'_{\rm a}$, $|{\mathcal W}_0'| = |{\mathcal W}_{02}| = t$, and $|\widehat{{\mathcal W}}| = K'_{\rm a}$. It follows from \eqref{eq:pMD} and \eqref{eq:pFA} that, after the change of measure in Appendix~\ref{sec:change_measure}, $P_{\rm MD}$ and $P_{\rm FA}$ can be bounded as \begin{align} P_{\rm MD} &\le \sum_{t=0}^{K'_{\rm a}} \frac{t+K_{\rm a}-{K_{\rm a}'}}{K_{\rm a}} \P[|{\mathcal W}_{02}| = t] + p_0, \label{eq:tmp850_simp}\\ P_{\rm FA} &\le \sum_{t=0}^{K'_{\rm a}} \frac{t}{K_{\rm a}'} \P[|{\mathcal W}_{02}| = t] + p_0, \label{eq:tmp853_simp} \end{align} Next, we proceed to bound $\P[|{\mathcal W}_{02}| = t]$. This is done following two approaches. The first approach is based on error exponent analyses, resulting in the term $p_{t,t}$ in~\eqref{eq:eps_MD_simp}. The second approach is a variation of the dependence testing (DT) bound \cite[Th.~17]{Polyanskiy2010}, resulting in $q_{t,t}$ in~\eqref{eq:eps_MD_simp}. 
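Before deriving $p_{t,t}$ and $q_{t,t}$, it may help to see how they are combined: once per-$t$ bounds are available, \eqref{eq:eps_MD_simp} and \eqref{eq:eps_FA_simp} are plain weighted sums. A minimal sketch with made-up placeholder values for $p_{t,t}$, $q_{t,t}$, and $p_0$ (the real quantities come from the two derivations that follow):

```python
def eps_md_fa(Ka, Kap, p, q, p0):
    # Weighted sums of (eq:eps_MD_simp) and (eq:eps_FA_simp):
    # t extra MDs on top of the Ka - Ka' unavoidable ones, and t FAs
    m = [min(p[t], q[t]) for t in range(Kap + 1)]
    eps_md = sum((t + Ka - Kap) / Ka * m[t] for t in range(Kap + 1)) + p0
    eps_fa = sum(t / Kap * m[t] for t in range(Kap + 1)) + p0
    return eps_md, eps_fa

# placeholder per-t bounds, decaying in t (illustrative only)
Ka, Kap, p0 = 10, 8, 1e-4
p = [10.0 ** (-t) for t in range(Kap + 1)]
q = [0.5 * 10.0 ** (-t) + 1e-3 for t in range(Kap + 1)]
eps_md, eps_fa = eps_md_fa(Ka, Kap, p, q, p0)
```

Note that $\epsilon_{\rm MD}$ can never drop below the $t = 0$ term, which reflects the $K_{\rm a} - K_{\rm a}'$ unavoidable misdetections.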
\subsubsection{The Error-Exponent-Based Approach} \label{sec:1st_approach} By writing the event $|{\mathcal W}_{02}| = t$ as the union of the pairwise error events $F({\mathcal W}_{01},{\mathcal W}_{02},{\mathcal W}_{0}')$, we have that \begin{align} &\P[{|{\mathcal W}_{02}| = t}] \notag \\ &= \P[\bigcup_{{\mathcal W}_{02} \subset [K_{\rm a} - {K_{\rm a}'} + 1:K_{\rm a}] \atop |{\mathcal W}_{02}| = t} \bigcup_{{\mathcal W}_{0}' \subset [K_{\rm a}+1:M] \atop |{\mathcal W}_{0}'| = t} \!F({\mathcal W}_{01},{\mathcal W}_{02},{\mathcal W}_{0}')]\!. \label{eq:tmp901_simp} \end{align} Next, given $c({\mathcal W}_{01})$, $c({\mathcal W}_{02})$, and $\rvVec{z}$, it holds for every $\lambda > -\frac{1}{tP'}$ that \begin{align} &\P[F({\mathcal W}_{01},{\mathcal W}_{02},{\mathcal W}_{0}')] \notag \\ &\le e^{\lambda \|c({\mathcal W}_{01}) + \rvVec{z}\|^2} \notag \\ &\quad \cdot \E_{c({\mathcal W}_{0}')}\Big[e^{-\lambda \|c({\mathcal W}_{01}) + c({\mathcal W}_{02})- c({\mathcal W}_{0}') + \rvVec{z}\|^2}\Big] \label{eq:tmp766_simp}\\ &= e^{\lambda \|c({\mathcal W}_{01}) + \rvVec{z}\|^2} (1+\lambda tP')^{-n} \notag \\ &\quad \cdot \exp\bigg(-\frac{\lambda\|c({\mathcal W}_{01}) + c({\mathcal W}_{02}) + \rvVec{z}\|^2}{1+\lambda t P'}\bigg), \label{eq:tmp768_simp} \end{align} where \eqref{eq:tmp766_simp} follows from the Chernoff bound in Lemma~\ref{lem:Chernoff}, and \eqref{eq:tmp768_simp} follows by computing the expectation in~\eqref{eq:tmp766_simp} using Lemma~\ref{lem:chi2}.
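The expectation step from \eqref{eq:tmp766_simp} to \eqref{eq:tmp768_simp} rests on the Gaussian MGF identity of Lemma~\ref{lem:chi2}: for $c \sim {\mathcal C}{\mathcal N}(\mathbf{0}, s\mat{\mathrm{I}}_n)$ and a fixed vector $v$, $\E[e^{-\lambda\|v - c\|^2}] = (1+\lambda s)^{-n} e^{-\lambda\|v\|^2/(1+\lambda s)}$. A Monte Carlo sanity check of this identity, with $s$ playing the role of $tP'$ and arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, lam, N = 4, 1.0, 0.3, 200_000

# fixed vector v (plays the role of c(W01) + c(W02) + z in the text)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# c ~ CN(0, s I_n): each complex entry has total variance s
c = np.sqrt(s / 2) * (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n)))

mc = np.mean(np.exp(-lam * np.sum(np.abs(v - c) ** 2, axis=1)))
closed = (1 + lam * s) ** (-n) * np.exp(-lam * np.sum(np.abs(v) ** 2) / (1 + lam * s))
```

With $2\times 10^5$ samples the empirical mean matches the closed form to well under a percent.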
Next, we apply Gallager's $\rho$-trick in Lemma~\ref{lem:Gallager} to get that, given $c({\mathcal W}_{01})$, $c({\mathcal W}_{02})$, and $\rvVec{z}$, it holds for every $\rho \in [0,1]$ that \begin{align} &\P[\bigcup_{{\mathcal W}_{0}' \subset [K_{\rm a}+1:M] \atop |{\mathcal W}_{0}'| = t} F({\mathcal W}_{01},{\mathcal W}_{02},{\mathcal W}_{0}')] \\ &\le \binom{M-K_{\rm a}}{t}^\rho (1+\lambda tP')^{-n\rho} \notag \\ &\quad \cdot \exp\Bigg(\lambda \rho \bigg(\|c({\mathcal W}_{01}) + \rvVec{z}\|^2 -\frac{\|c({\mathcal W}_{01}) \!+\! c({\mathcal W}_{02}) \!+\! \rvVec{z}\|^2}{1+\lambda t P'}\bigg)\Bigg). \label{eq:tmp803_simp} \end{align} Taking the expectation over $c({\mathcal W}_{02})$ using Lemma~\ref{lem:chi2}, we obtain for given $c({\mathcal W}_{01})$ and $\rvVec{z}$ that \begin{align} &\P[\bigcup_{{\mathcal W}_{0}' \subset [K_{\rm a}+1:M] \atop |{\mathcal W}_{0}'| = t} F({\mathcal W}_{01},{\mathcal W}_{02},{\mathcal W}_{0}')] \notag \\ &\le \binom{M-K_{\rm a}}{t}^\rho (1\!+\!\lambda tP')^{-n\rho} \Big(1+\frac{\lambda \rho t P'}{1\!+\!\lambda tP'}\Big)^{-n} \notag \\ &\quad \cdot \exp\Bigg(\!\lambda\rho \bigg(1-\frac{1}{1+\lambda P't(1+\rho)}\bigg)\|c({\mathcal W}_{01}) + \rvVec{z}\|^2\Bigg) \\ &= \binom{M-K_{\rm a}}{t}^\rho \exp\left(b_0\|c({\mathcal W}_{01}) + \rvVec{z}\|^2 - na_0\right), \label{eq:tmp811} \end{align} where $a_0$ and $b_0$ are given by taking $t' = t$ in~\eqref{eq:a} and \eqref{eq:b}, respectively. 
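Gallager's $\rho$-trick used above, $\P[\cup_i A_i] \le \big(\sum_i \P[A_i]\big)^\rho$ for $\rho \in [0,1]$, interpolates between the trivial bound $1$ at $\rho = 0$ and the union bound at $\rho = 1$; this is what lets the combinatorial factors enter raised to the power $\rho$. A toy numerical check on nested Gaussian tail events $A_i = \{Z > c_i\}$, chosen so that $\P[\cup_i A_i]$ is known exactly:

```python
import math

def gauss_tail(c):
    # P[Z > c] for Z ~ N(0, 1)
    return 0.5 * math.erfc(c / math.sqrt(2))

cs = [1.0, 1.5, 2.0, 2.5]
p_union = gauss_tail(min(cs))          # nested events: union = {Z > min c_i}
union_sum = sum(gauss_tail(c) for c in cs)

# the rho-trick bound holds for every rho in [0, 1]
bounds = [union_sum ** rho for rho in (0.0, 0.25, 0.5, 0.75, 1.0)]
```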
Now applying Gallager's $\rho$-trick again, we obtain that, for every $\rho_1 \in [0,1]$, \begin{align} &\P[\bigcup_{{\mathcal W}_{02} \subset [K_{\rm a} - {K_{\rm a}'} + 1:K_{\rm a}] \atop |{\mathcal W}_{02}| = t} \bigcup_{{\mathcal W}_{0}' \subset [K_{\rm a}+1:M] \atop |{\mathcal W}_{0}'| = t} F({\mathcal W}_{01},{\mathcal W}_{02},{\mathcal W}_{0}')] \notag \\ &\le \binom{K_{\rm a}'}{t}^{\rho_1} \binom{M-K_{\rm a}}{t}^{\rho\rho_1} \notag \\ &\quad \cdot \E[\exp\left(\rho_1 b_0\|c({\mathcal W}_{01}) + \rvVec{z}\|^2 - n\rho_1 a_0\right)] \label{eq:tmp797_simp} \\ &= \binom{K_{\rm a}'}{t}^{\rho_1} \binom{M-K_{\rm a}}{t}^{\rho\rho_1} e^{-n\rho_1 a_0} \big(1-\rho_1P_2b_0\big)^{-n}, \label{eq:tmp800_simp} \end{align} where the last equality follows by computing the expectation in~\eqref{eq:tmp797_simp} jointly over $c({\mathcal W}_{01})$ and $\rvVec{z}$ with the help of Lemma~\ref{lem:chi2}, and $P_2= 1+(K_{\rm a} - K_{\rm a}')P'$. Finally, plugging the result into \eqref{eq:tmp901_simp}, we obtain \begin{align} &\P[|{\mathcal W}_{02}| = t] \notag \\ &\le \binom{K_{\rm a}'}{t}^{\rho_1} \binom{M-K_{\rm a}}{t}^{\rho\rho_1} e^{-n\rho_1 a_0} \big(1-\rho_1P_2b_0\big)^{-n} \\ &\triangleq p_{t,t}. \label{eq:tmp1148_simp} \end{align} \subsubsection{The DT-Based Approach} \label{sec:2nd_approach} Next, we present an alternative bound on $\P[{|{\mathcal W}_{02}| = t}]$. Consider the channel law $P_{\rvVec{y} \,\vert\, c({\mathcal W}_{0}), c({\mathcal W} \setminus {\mathcal W}_0)}$ with input $c({\mathcal W}_{0})$ and output $\rvVec{y}$ where $|{\mathcal W}_{02}| = t$. The corresponding information density~\cite[Def.
17.1]{Polyanskiy2019lecture} is given by \begin{align} &\imath_t(c({\mathcal W}_{0});\rvVec{y} \,\vert\, c({\mathcal W} \setminus {\mathcal W}_0)) \notag \\ &= n \ln(1+(t+K_{\rm a}-K_{\rm a}')P') + \frac{\|\rvVec{y} - c({\mathcal W} \setminus {\mathcal W}_0)\|^2}{1+(t+K_{\rm a}-K_{\rm a}')P'} \notag \\ &\quad - \|\rvVec{y} - c({\mathcal W}_0) - c({\mathcal W} \setminus {\mathcal W}_0)\|^2. \end{align} Notice that the event $F({\mathcal W}_{01},{\mathcal W}_{02},{\mathcal W}_{0}')$ defined in~\eqref{eq:eventF_simp} is equivalent to $\{\imath_t(c({\mathcal W}_{0}');\rvVec{y} \,\vert\, c({\mathcal W} \setminus {\mathcal W}_0)) > \imath_t(c({\mathcal W}_{02});\rvVec{y} \,\vert\, c({\mathcal W} \setminus {\mathcal W}_0))\}.$ Let \begin{align} \label{eq:def_It_1_simp} \vect{r}{I}_t \triangleq \min_{{\mathcal W}_{02} \subset [K_{\rm a} - {K_{\rm a}'} + 1:K_{\rm a}] \atop |{\mathcal W}_{02}| = t} \imath_t(c({\mathcal W}_{02});\rvVec{y} \,\vert\, c({\mathcal W} \setminus {\mathcal W}_0)). \end{align} For an arbitrary fixed $\gamma$, it follows that \begin{align} &\P[{|{\mathcal W}_{02}| = t}] \notag \\ &=\P[\vect{r}{I}_t \le \gamma]\P[{|{\mathcal W}_{02}| = t \;\big|\; \vect{r}{I}_t \le \gamma}] \notag \\ &\quad + \P[\vect{r}{I}_t > \gamma]\P[{|{\mathcal W}_{02}| = t \;\big|\; \vect{r}{I}_t > \gamma}] \\ &\le \P[\vect{r}{I}_t \le \gamma] + \P[{|{\mathcal W}_{02}| = t \;\big|\; \vect{r}{I}_t > \gamma}] \label{eq:tmp838_simp}\\ &= \P[\vect{r}{I}_t \le \gamma] \notag \\ &\quad+ \P\bigg[ \bigcup_{{\mathcal W}_{02} \subset [K_{\rm a} - {K_{\rm a}'} + 1:K_{\rm a}] \atop |{\mathcal W}_{02}| = t} \bigcup_{{\mathcal W}_{0}' \subset [K_{\rm a}+1:M] \atop |{\mathcal W}_{0}'| = t} \bigg. \notag \\ &\qquad \qquad \bigg. \big\{\imath_t(c({\mathcal W}_{0}');\rvVec{y} \,\vert\, c({\mathcal W} \setminus {\mathcal W}_0)) \bigg. \notag \\ &\qquad \qquad \bigg.
> \imath_t(c({\mathcal W}_{02});\rvVec{y} \,\vert\, c({\mathcal W} \setminus {\mathcal W}_0))\big\} \;\big|\; \vect{r}{I}_t > \gamma\bigg] \label{eq:tmp814_simp} \\ &\le \P[\vect{r}{I}_t \le \gamma] \notag \\ &\quad+ \P\bigg[\bigcup_{{\mathcal W}_{02} \subset [K_{\rm a} - {K_{\rm a}'} + 1:K_{\rm a}] \atop |{\mathcal W}_{02}| = t} \bigcup_{{\mathcal W}_{0}' \subset [K_{\rm a}+1:M] \atop |{\mathcal W}_{0}'| = t} \bigg. \notag \\ &\qquad \qquad \bigg. \big\{\imath_t(c({\mathcal W}_{0}');\rvVec{y} \,\vert\, c({\mathcal W} \setminus {\mathcal W}_0)) > \gamma\big\}\bigg]. \label{eq:tmp383_simp} \end{align} Here, \eqref{eq:tmp814_simp} follows by writing explicitly the event $\{|{\mathcal W}_{02}| = t\}$, and \eqref{eq:tmp383_simp} follows by relaxing the inequality inside the second probability. Using that $\P[\imath(x;\vect{r}{y}) > \gamma] \le e^{-\gamma}, \forall x$~\cite[Cor.~17.1]{Polyanskiy2019lecture}, we obtain \begin{align} \P[\imath_t(c({\mathcal W}_{0}');\rvVec{y} \,\vert\, c({\mathcal W} \setminus {\mathcal W}_0)) > \gamma] \le e^{-\gamma}. \end{align} Then, by applying the union bound and taking the infimum over $\gamma$, we conclude that \begin{align} &\P[{|{\mathcal W}_{02}| = t}] \notag \\ &\le \inf_{\gamma} \Bigg( \P[\vect{r}{I}_t \le \gamma] + \binom{K_{\rm a}'}{t} \binom{M-K_{\rm a}}{t} e^{-\gamma} \Bigg) \\ &\triangleq q_{t,t}. \label{eq:tmp1201_simp} \end{align} This concludes the DT-based approach. It follows from \eqref{eq:tmp1148_simp} and \eqref{eq:tmp1201_simp} that $\P[|{\mathcal W}_{02}| = t] \le \min\left\{p_{t,t}, q_{t,t} \right\}$. Introducing this bound into \eqref{eq:tmp850_simp} and \eqref{eq:tmp853_simp}, we obtain that the \gls{MD} and \gls{FA} probabilities, averaged over the Gaussian codebook ensemble, are upper-bounded by $\epsilon_{\rm MD}$ and $\epsilon_{\rm FA}$ given in~\eqref{eq:eps_MD_simp} and \eqref{eq:eps_FA_simp}, respectively.
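The infimum over $\gamma$ above trades off the information-density tail $\P[\vect{r}{I}_t \le \gamma]$ against the counting term $\binom{K_{\rm a}'}{t}\binom{M-K_{\rm a}}{t}e^{-\gamma}$. The sketch below evaluates this trade-off on a coarse grid, using a Gaussian surrogate for the distribution of $\vect{r}{I}_t$ (the true distribution depends on the codebook; all numbers here are hypothetical):

```python
import math

def dt_bound(mu, sigma, count, gammas):
    # inf over a grid of gamma of P[I <= gamma] + count * e^{-gamma},
    # with I modeled as N(mu, sigma^2) purely for illustration
    def cdf(g):
        return 0.5 * math.erfc((mu - g) / (sigma * math.sqrt(2)))
    return min(cdf(g) + count * math.exp(-g) for g in gammas)

Kap, t, M, Ka = 50, 2, 10_000, 100
count = math.comb(Kap, t) * math.comb(M - Ka, t)  # binom(Ka', t) * binom(M - Ka, t)
gammas = [0.5 * i for i in range(1, 120)]
q = dt_bound(mu=40.0, sigma=5.0, count=count, gammas=gammas)
```

The optimizing $\gamma$ sits roughly where the two terms balance, i.e., a few units above $\ln$ of the counting factor.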
\subsection{The General Case} \label{sec:general_case} We now explain how the result in the special case considered in the previous subsection can be extended to the general case where $\vect{r}{K}_{\rm a}$ and $\vect{r}{K}'_{\rm a}$ are random and $r \ge 0$. For random $\vect{r}{K}_{\rm a}$ and $\vect{r}{K}'_{\rm a}$, one has to take into account all the possible combinations of the number of transmitted messages and decoded messages when computing the expectations in~\eqref{eq:pMD} and \eqref{eq:pFA}. Consider the event that $K_{\rm a}$ users are active and the estimation of $K_{\rm a}$ results in $K_{\rm a}'$, which we denote by $K_{\rm a} \to K_{\rm a}'$. As in the special case, we assume without loss of generality that ${\mathcal W} = [{K}_{\rm a}]$. Furthermore, due to symmetry, we let ${\mathcal W}_{01} = [({K}_{\rm a} - \overline{K_{\rm a}'})^+]$ denote the list of $({K}_{\rm a} - \overline{K_{\rm a}'})^+$ initial \glspl{MD} due to insufficient decoded list size, and ${\mathcal W}_{02} = {\mathcal W}_0 \setminus {\mathcal W}_{01}$ the $t$ additional \glspl{MD} occurring during the decoding process. Note also that, if $\underline{K_{\rm a}'} > K_{\rm a}$, the decoder always outputs more than $K_{\rm a}$ messages. Hence, at least $\underline{K_{\rm a}'} - K_{\rm a}$ decoded messages are falsely alarmed. Due to symmetry, let ${\mathcal W}_{01}' = [{K}_{\rm a} + 1: \underline{K_{\rm a}'}]$ denote the list of $(\underline{K_{\rm a}'}-{K}_{\rm a})^+$ initial \glspl{FA} due to excessive decoded list size, and ${\mathcal W}_{02}' = {\mathcal W}_0' \setminus {\mathcal W}_{01}'$ the $t'$ additional \glspl{FA} occurring during the decoding process. See Fig.~\ref{fig:venn} for a diagram depicting the relation between these sets of messages.
Under these assumptions, ${\mathcal W}_{02}$ and ${\mathcal W}_{02}'$ are generic subsets of $[({K}_{\rm a} - \overline{K_{\rm a}'})^+ + 1:{K}_{\rm a}]$ and $[\max\{{K}_{\rm a},\underline{K_{\rm a}'}\}+1 : M]$, respectively. Note that in the special case considered in Appendix~\ref{sec:special_case}, $t$ can take any value from $0$ to $K'_{\rm a}$, and $t' = t$. In the general case, instead: \begin{itemize} \item The possible values of $t$ are given by ${\mathcal T}$ defined in~\eqref{eq:T}. This is because the number of \glspl{MD}, given by $t+{(K_{\rm a}-\overline{K_{\rm a}'})}^+$, is upper-bounded by the total number of transmitted messages $K_{\rm a}$, and by $M-\underline{K_{\rm a}'}$ (since at least $\underline{K_{\rm a}'}$ messages are returned). \item Given $t$, the integer $t'$ takes values in $\overline{{\mathcal T}}_t$ defined in~\eqref{eq:Tbart} because: i) the decoded list size, given by $K_{\rm a} - t - {(K_{\rm a} - \overline{K_{\rm a}'})}^+ + t' + {(\underline{K_{\rm a}'}-K_{\rm a})}^+$, must be in $[\underline{K_{\rm a}'} : \overline{K_{\rm a}'}]$; ii) the number of \glspl{FA}, given by $t'+ (\underline{K_{\rm a}'}-K_{\rm a})^+$, is upper-bounded by the number of messages that are not transmitted $M-K_{\rm a}$, and by the maximal number of decoded messages $\overline{K_{\rm a}'}$. \item If the decoded list size is further required to be strictly positive, then $t'$ takes values in ${\mathcal T}_t$ defined in~\eqref{eq:Tt}. \end{itemize} Using the above definitions, the best approximation of ${\mathcal W}$ that the decoder can produce is ${\mathcal W}_{02} \cup ({\mathcal W} \setminus {\mathcal W}_0) \cup {\mathcal W}_{01}'$, while the actual decoded list, under ${\mathcal W} \to \widehat{{\mathcal W}}$, is ${\mathcal W}'_{02} \cup ({\mathcal W} \setminus {\mathcal W}_0) \cup {\mathcal W}_{01}'$.
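The range constraints just described can be made concrete by brute-force enumeration. The sketch below uses hypothetical values for $K_{\rm a}$, $\underline{K_{\rm a}'}$, $\overline{K_{\rm a}'}$, and $M$ (the actual sets ${\mathcal T}$ and $\overline{{\mathcal T}}_t$ are defined in \eqref{eq:T} and \eqref{eq:Tbart}); it lists the admissible pairs $(t, t')$ and, when $\underline{K_{\rm a}'} = \overline{K_{\rm a}'} = K_{\rm a}' < K_{\rm a}$, recovers the constraint $t' = t$ of the special case in Appendix~\ref{sec:special_case}:

```python
def pos(x):
    return max(x, 0)

def admissible_pairs(Ka, Kl, Ku, M):
    # Enumerate (t, t') consistent with the constraints in the text:
    # - #MDs t + (Ka - Ku)^+ is at most min(Ka, M - Kl)
    # - #FAs t' + (Kl - Ka)^+ is at most min(M - Ka, Ku)
    # - decoded-list size Ka - t - (Ka - Ku)^+ + t' + (Kl - Ka)^+ lies in [Kl, Ku]
    pairs = []
    for t in range(min(Ka, M - Kl) - pos(Ka - Ku) + 1):
        for tp in range(min(M - Ka, Ku) - pos(Kl - Ka) + 1):
            size = Ka - t - pos(Ka - Ku) + tp + pos(Kl - Ka)
            if Kl <= size <= Ku:
                pairs.append((t, tp))
    return pairs

general = admissible_pairs(Ka=5, Kl=3, Ku=6, M=20)  # random-size decoded list
special = admissible_pairs(Ka=5, Kl=3, Ku=3, M=20)  # fixed list size Ka' = 3 < Ka
```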
Therefore, ${\mathcal W} \to \widehat{{\mathcal W}}$ implies that $\|\rvVec{y} - c({\mathcal W}'_{02}) - c({\mathcal W} \setminus {\mathcal W}_0) - c({\mathcal W}_{01}')\|^2 < \|\rvVec{y} - c({\mathcal W}_{02}) - c({\mathcal W} \setminus {\mathcal W}_0) - c({\mathcal W}_{01}')\|^2$, which is equivalent to \begin{multline} \|c({\mathcal W}_{01}) + c({\mathcal W}_{02})- c({\mathcal W}_{01}') - c({\mathcal W}_{02}') + \rvVec{z}\|^2 \\ < \|c({\mathcal W}_{01}) - c({\mathcal W}_{01}') + \rvVec{z}\|^2. \label{eq:eventF} \end{multline} We denote by $F({\mathcal W}_{01},{\mathcal W}_{02},{\mathcal W}_{01}',{\mathcal W}_{02}')$ the event that ${\mathcal W}_{01},{\mathcal W}_{02},{\mathcal W}_{01}',{\mathcal W}_{02}'$ satisfy~\eqref{eq:eventF}. We now compute the expectations in $P_{\rm MD}$ and $P_{\rm FA}$. Given $|{\mathcal W}_{02}| = t$ and $|{\mathcal W}_{02}'| = t'$, we have that $|{\mathcal W}_0| = t+(\vect{r}{K}_{\rm a} - \overline{K_{\rm a}'})^+$, $|{\mathcal W}_0'| = t' + (\underline{K_{\rm a}'}-\vect{r}{K}_{\rm a})^+$, and $|\widehat{{\mathcal W}}| = \vect{r}{K}_{\rm a} - t - (\vect{r}{K}_{\rm a} - \overline{K_{\rm a}'})^+ + t' + (\underline{K_{\rm a}'}-\vect{r}{K}_{\rm a})^+$. It follows from \eqref{eq:pMD} and \eqref{eq:pFA} that, after the change of measure in Appendix~\ref{sec:change_measure}, $P_{\rm MD}$ and $P_{\rm FA}$ can be bounded as \begin{align} P_{\rm MD} &\le \sum_{K_{\rm a} =\max\{K_{l},1\}}^{K_{u}} P_{\vect{r}{K}_{\rm a}}(K_{\rm a}) \sum_{K_{\rm a}' = K_{l}}^{K_{u}} \sum_{t\in {\mathcal T}}\frac{t+(K_{\rm a}-\overline{K_{\rm a}'})^+}{K_{\rm a}} \notag \\ &\qquad \quad \cdot \P[|{\mathcal W}_{02}| = t, K_{\rm a} \to K_{\rm a}'] \notag \\ &\quad + p_0, \label{eq:tmp850}\\ P_{\rm FA} &\le \sum_{K_{\rm a} =K_{l}}^{K_{u}} P_{\vect{r}{K}_{\rm a}}(K_{\rm a}) \sum_{K_{\rm a}' = K_{l}}^{K_{u}} \sum_{t\in {\mathcal T}} \sum_{t' \in {\mathcal T}_t} \notag \\ &\qquad \quad \frac{t'+(\underline{K_{\rm a}'} - K_{\rm a})^+}{K_{\rm a} \!-\! t \!-\!
{(K_{\rm a} \!-\! \overline{K_{\rm a}'})}^+ \!\!+\! t' \!+\! {(\underline{K_{\rm a}'}\!-\!K_{\rm a})}^+\!} \notag \\ &\qquad \quad \cdot \P[|{\mathcal W}_{02}| = t, |{\mathcal W}_{02}'| = t', K_{\rm a} \to K_{\rm a}'] \notag \\ &\quad + p_0. \label{eq:tmp853} \end{align} Next, we proceed to bound the joint probabilities $\P[|{\mathcal W}_{02}| = t,K_{\rm a} \to K_{\rm a}']$ and $\P[|{\mathcal W}_{02}| = t,|{\mathcal W}_{02}'| = t',K_{\rm a} \to K_{\rm a}']$. Let \begin{align} \label{eq:def_A} A(K_{\rm a},K_{\rm a}') \triangleq \{m(\rvVec{y},K_{\rm a}') < m(\rvVec{y},K), \forall K \ne K_{\rm a}'\}. \end{align} Since the event $K_{\rm a} \to K_{\rm a}'$ implies that $|\widehat{{\mathcal W}}| \in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]$ and $A(K_{\rm a},K_{\rm a}')$, we have \begin{align} &\P[|{\mathcal W}_{02}| = t, K_{\rm a} \to K_{\rm a}'] \notag \\ &\le \P[{{|{\mathcal W}_{02}| = t, |\widehat{{\mathcal W}}|\in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}}],A(K_{\rm a},K_{\rm a}')}] \\ &\le \min\left\{\P[{|{\mathcal W}_{02}| \!=\! t, |\widehat{{\mathcal W}}|\in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]}], \P[A(K_{\rm a},K_{\rm a}')] \right\}, \label{eq:tmp883} \end{align} where \eqref{eq:tmp883} follows from the fact that the joint probability is upper-bounded by each of the individual probabilities. Similarly, we can show that \begin{align} &\P[|{\mathcal W}_{02}| = t,|{\mathcal W}_{02}'| = t', K_{\rm a} \to K_{\rm a}'] \notag \\ &\le \min\Big\{\P[{|{\mathcal W}_{02}| = t, |{\mathcal W}_{02}'| = t', |\widehat{{\mathcal W}}|\in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]}], \Big. \notag \\ &\qquad\qquad\Big. \P[A(K_{\rm a},K_{\rm a}')] \Big\}. \label{eq:tmp1054} \end{align} We next present the bounds on $\P[A(K_{\rm a},K_{\rm a}')]$, $\P[{|{\mathcal W}_{02}| \!=\! 
t, |\widehat{{\mathcal W}}| \in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]}]$, and $\P[{|{\mathcal W}_{02}| = t, |{\mathcal W}_{02}'| = t', |\widehat{{\mathcal W}}| \in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]}]$. \subsubsection{Bound on $\P[A(K_{\rm a},K_{\rm a}')]$} We have \begin{align} \P[A(K_{\rm a},K_{\rm a}')] &= \P[m(\rvVec{y},K_{\rm a}') < m(\rvVec{y},K), \forall K \ne K_{\rm a}'] \\ &\le\min_{K\colon K \ne K_{\rm a}'}\P[m(\rvVec{y},K_{\rm a}') < m(\rvVec{y},K)] \\ &= \xi(K_{\rm a},K_{\rm a}'), \label{eq:tmp1077} \end{align} where $\xi(K_{\rm a},K_{\rm a}')$ is given by~\eqref{eq:xi}, and \eqref{eq:tmp1077} holds since, under the new measure, $\rvVec{y}$ follows the ${\mathcal C}{\mathcal N}(\mathbf{0},(1+K_{\rm a} P')\mat{\mathrm{I}}_n)$ distribution. \subsubsection{Bounds on $\P[{|{\mathcal W}_{02}| = t, |\widehat{{\mathcal W}}| \in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]}]$} \label{sec:bound_tMDs} As in Appendix~\ref{sec:special_case}, we follow two approaches to bound $\P[{|{\mathcal W}_{02}| = t, |\widehat{{\mathcal W}}| \in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]}]$. The first approach is based on error exponent analyses and the second approach is based on the DT bound. In the first approach, we write the event $\{|{\mathcal W}_{02}| = t, |\widehat{{\mathcal W}}| \in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]\}$ as the union of the pairwise events and obtain \begin{align} &\P[{|{\mathcal W}_{02}| = t, |\widehat{{\mathcal W}}| \in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]}] \notag \\ &= \P\Bigg(\bigcup_{t' \in \overline{{\mathcal T}}_t} \bigcup_{{\mathcal W}_{02} \subset [(K_{\rm a} - \overline{K_{\rm a}'})^+ + 1:K_{\rm a}] \atop |{\mathcal W}_{02}| = t} \bigcup_{{\mathcal W}_{02}' \subset [\max\{K_{\rm a},\underline{K_{\rm a}'}\}+1:M] \atop |{\mathcal W}_{02}'| = t'} \Bigg. \notag \\ &\qquad \qquad \Bigg.F({\mathcal W}_{01},{\mathcal W}_{02},{\mathcal W}_{01}',{\mathcal W}_{02}') \Bigg).
\label{eq:tmp901} \end{align} Then, by applying the Chernoff bound, Gallager's $\rho$-trick, and Lemma~\ref{lem:chi2} following similar steps as in Appendix~\ref{sec:1st_approach}, we obtain \begin{align} \P[{|{\mathcal W}_{02}| = t, |\widehat{{\mathcal W}}| \in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]}] \le p_t \label{eq:tmp1148} \end{align} with $p_t$ given by~\eqref{eq:pt}. In the second approach, we consider the channel law $P_{\rvVec{y} \,\vert\, c({\mathcal W}_{0}), c({\mathcal W} \setminus {\mathcal W}_0)}$ with input $c({\mathcal W}_{0})$ and output $\rvVec{y}$ where $|{\mathcal W}_{02}| = t$. The corresponding information density $\imath_t(c({\mathcal W}_{0});\rvVec{y} \,\vert\, c({\mathcal W} \setminus {\mathcal W}_0))$ is defined in~\eqref{eq:infor_den}. Notice that the event $F({\mathcal W}_{01},{\mathcal W}_{02},{\mathcal W}_{01}',{\mathcal W}_{02}')$ defined in~\eqref{eq:eventF} is equivalent to $\{\imath_t(c({\mathcal W}_{01}')+c({\mathcal W}_{02}');\rvVec{y} \,\vert\, c({\mathcal W} \setminus {\mathcal W}_0)) > \imath_t(c({\mathcal W}_{01}') + c({\mathcal W}_{02});\rvVec{y} \,\vert\, c({\mathcal W} \setminus {\mathcal W}_0))\}.$ Then, by proceeding as in Appendix~\ref{sec:2nd_approach}, it follows that \begin{align} \P[{|{\mathcal W}_{02}| = t, |\widehat{{\mathcal W}}| \in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]}] \le q_t \label{eq:tmp1201} \end{align} with $q_t$ given by~\eqref{eq:qt}. 
\subsubsection{Bounds on $\P[{|{\mathcal W}_{02}| = t, |{\mathcal W}_{02}'| = t', |\widehat{{\mathcal W}}| \in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]}]$} First, we have that \begin{align} &\P[{|{\mathcal W}_{02}| = t, |{\mathcal W}_{02}'| = t', |\widehat{{\mathcal W}}| \in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]}] \notag \\ &= \P\Bigg[\bigcup_{{\mathcal W}_{02} \subset [(K_{\rm a} - \overline{K_{\rm a}'})^+ + 1:K_{\rm a}] \atop |{\mathcal W}_{02}| = t} \bigcup_{{\mathcal W}_{02}' \subset [\max\{K_{\rm a},\underline{K_{\rm a}'}\}+1:M] \atop |{\mathcal W}_{02}'| = t'} \bigg.\notag \\ &\qquad \quad \bigg. F({\mathcal W}_{01},{\mathcal W}_{02},{\mathcal W}_{01}',{\mathcal W}_{02}')\Bigg]. \label{eq:tmp365} \end{align} Notice that $\P[{|{\mathcal W}_{02}| = t, |{\mathcal W}_{02}'| = t', |\widehat{{\mathcal W}}| \in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]}]$ differs from $\P[{|{\mathcal W}_{02}| = t, |\widehat{{\mathcal W}}| \in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]}]$ in~\eqref{eq:tmp901} only in the absence of the union $\bigcup_{t'\in \overline{{\mathcal T}}_t}$. By applying the Chernoff bound, Gallager's $\rho$-trick, and Lemma~\ref{lem:chi2} following similar steps as in Appendix~\ref{sec:1st_approach}, we obtain that \begin{align} \P[{|{\mathcal W}_{02}| = t, |{\mathcal W}_{02}'| = t', |\widehat{{\mathcal W}}| \in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]}] \le p_{t,t'} \label{eq:tmp1217} \end{align} with $p_{t,t'}$ given by~\eqref{eq:ptt}. Alternatively, bounding $\P\Big[|{\mathcal W}_{02}| = t, |{\mathcal W}_{02}'| = t', |\widehat{{\mathcal W}}| \in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]\Big]$ using the information-density property as in Appendix~\ref{sec:2nd_approach}, we obtain \begin{align} \P[{|{\mathcal W}_{02}| = t, |{\mathcal W}_{02}'| = t', |\widehat{{\mathcal W}}| \in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]}] \le q_{t,t'} \label{eq:tmp1226} \end{align} with $q_{t,t'}$ given by~\eqref{eq:qtt}.
\vspace{.3cm} From \eqref{eq:tmp883}, \eqref{eq:tmp1077}, \eqref{eq:tmp1148}, and \eqref{eq:tmp1201}, we obtain that \begin{align} \P[|{\mathcal W}_{02}| = t, K_{\rm a} \to K_{\rm a}'] \le \min\left\{p_t, q_t, \xi(K_{\rm a},K_{\rm a}') \right\}. \end{align} From \eqref{eq:tmp1054}, \eqref{eq:tmp1077}, \eqref{eq:tmp1217}, and \eqref{eq:tmp1226}, we obtain that \begin{align} &\P[|{\mathcal W}_{02}| = t, |{\mathcal W}_{02}'| = t', K_{\rm a} \to K_{\rm a}'] \notag \\ &\le \min\left\{p_{t,t'}, q_{t,t'}, \xi(K_{\rm a},K_{\rm a}') \right\}. \end{align} Substituting these bounds on $\P[|{\mathcal W}_{02}| = t, K_{\rm a} \to K_{\rm a}']$ and $\P[|{\mathcal W}_{02}| = t, |{\mathcal W}_{02}'| = t', K_{\rm a} \to K_{\rm a}']$ into \eqref{eq:tmp850} and \eqref{eq:tmp853}, we deduce that the \gls{MD} and \gls{FA} probabilities, averaged over the Gaussian codebook ensemble, are upper-bounded by $\epsilon_{\rm MD}$ and $\epsilon_{\rm FA}$ given in~\eqref{eq:eps_MD} and \eqref{eq:eps_FA}, respectively. Finally, proceeding as in \cite[Th.~19]{Polyanskiy2011feedback}, one can show that there exists a randomized coding strategy that achieves \eqref{eq:eps_MD} and \eqref{eq:eps_FA} and involves time-sharing among three deterministic codes. \section{Proof of Proposition~\ref{prop:xi}} \label{proof:xi} The \gls{PDF} of $\rvVec{y}_0$ is given by \begin{align} p_{\rvVec{y}_0}(\vect{y}_0) = \frac{1}{\pi^n (1+K_{\rm a} P')^n} \exp\left(-\frac{\|\vect{y}_0\|^2}{1+K_{\rm a} P'}\right). \end{align} Therefore, with the \gls{ML} estimation of $\vect{r}{K}_{\rm a}$, we have that \begin{align} m(\rvVec{y}_0,K) &= -\ln p_{\rvVec{y}_0}(\vect{y}_0) \\ &= \frac{\|\vect{y}_0\|^2}{1+K P'} + n\ln(1+K P') + n \ln \pi. 
\end{align} As a consequence, the event $m\left(\rvVec{y}_0,K_{\rm a}'\right) < m\left(\rvVec{y}_0,K\right)$ can be written as $\frac{\|\rvVec{y}_0\|^2}{1+K_{\rm a}' P'} + n\ln(1+K_{\rm a}' P') < \frac{\|\rvVec{y}_0\|^2}{1+K P'} + n\ln(1+K P')$, or equivalently, \begin{equation} \|\rvVec{y}_0\|^2 \left(\frac{1}{1+K_{\rm a}'P'} - \frac{1}{1+KP'}\right) < n \ln\left(\frac{1+KP'}{1+K_{\rm a}'P'}\right). \label{eq:eventKa} \end{equation} Using the fact that $\|\rvVec{y}_0\|^2$ follows a Gamma distribution with shape $n$ and scale $1+K_{\rm a} P'$, we deduce that $\xi(K_{\rm a},K_{\rm a}')$ is given by~\eqref{eq:xi_ML} with $ \zeta(K,K_{\rm a},K_{\rm a}')$ given by~\eqref{eq:zeta_ML}. For the energy-based estimation with $m(\vect{y},K) = |\|\vect{y}\|^2 - n(1 + KP')|$, after some manipulations, the event $m\left(\rvVec{y}_0,K_{\rm a}'\right) < m\left(\rvVec{y}_0,K\right)$ is equivalent to \begin{align} \begin{cases} \|\rvVec{y}_0\|^2 > n\left(1 + \frac{K_{\rm a} + K_{\rm a}'}{2}P'\right), &\text{if~} K_{\rm a}' < K_{\rm a}, \\ \|\rvVec{y}_0\|^2 < n\left(1 + \frac{K_{\rm a} + K_{\rm a}'}{2}P'\right), &\text{if~} K_{\rm a}' > K_{\rm a}. \end{cases} \end{align} Thus, from the Gamma distribution of $\|\rvVec{y}_0\|^2$, we deduce that $\xi(K_{\rm a},K_{\rm a}')$ is given by~\eqref{eq:xi_ML} with $\zeta(K,K_{\rm a},K_{\rm a}')$ given by~\eqref{eq:zeta_energy}. \section{Error Floor Analysis} \label{app:error_floor} For the decoder considered in Theorem~\ref{thm:RCU_unknownKa}, the initial \glspl{MD} and \glspl{FA} are unavoidable. On the other hand, the additional \glspl{MD} and \glspl{FA} can be reduced as the power $P$ increases. As $P\!\to\! \infty$, by assuming that no additional \gls{MD} or \gls{FA} occurs on top of these initial \glspl{MD} or \glspl{FA}, we obtain lower bounds on $\epsilon_{\rm MD}$ and $\epsilon_{\rm FA}$ as follows. 
\begin{proposition}[Asymptotic lower bounds on $\epsilon_{\rm MD}$ and $\epsilon_{\rm FA}$] With ML or energy-based estimation of $\vect{r}{K}_{\rm a}$, it holds that \begin{align} &\lim_{P\to\infty} \epsilon_{\rm MD} \notag \\ &\ge \bar{\epsilon}_{\rm MD} \notag \\ &= \!\sum_{K_{\rm a} =\max\{K_{l},1\}}^{K_{u}} \!\bigg(\!P_{\vect{r}{K}_{\rm a}}(K_{\rm a}) \!\sum_{K_{\rm a}' = K_{l}}^{K_{u}} \! \frac{(K_{\rm a}\!-\!\overline{K_{\rm a}'})^+\!}{K_{\rm a}} {\xi}(K_{\rm a},K_{\rm a}') \! \bigg) \!+\! \bar{p},\! \label{eq:eps_MD_floor}\\ &\lim_{P\to\infty} \epsilon_{\rm FA} \notag \\ &\ge \bar{\epsilon}_{\rm FA} \notag \\ &= \!\sum_{K_{\rm a} =K_{l}}^{K_{u}} \!\bigg(\!P_{\vect{r}{K}_{\rm a}}(K_{\rm a}) \notag \\ & \qquad \cdot \sum_{K_{\rm a}' = K_{l}}^{K_{u}} \! \frac{(\underline{K_{\rm a}'}-K_{\rm a})^+}{K_{\rm a} \!-\! {(K_{\rm a} \!-\! \overline{K_{\rm a}'})}^+ \!+\! {(\underline{K_{\rm a}'}\!-\!K_{\rm a})}^+} {\xi}(K_{\rm a},K_{\rm a}') \! \bigg) \!+\! \bar{p}, \label{eq:eps_FA_floor} \end{align} where $\bar{p} = 2 - \sum_{K_{\rm a} = K_{l}}^{K_{u}}P_{\vect{r}{K}_{\rm a}}(K_{\rm a}) - \E_{\vect{r}{K}_{\rm a}}\left[\frac{M!}{M^{\vect{r}{K}_{\rm a}}(M-\vect{r}{K}_{\rm a})!} \right]$, and $\xi(K_{\rm a},K_{\rm a}')$ is given by~\eqref{eq:xi_ML} with $\zeta(K,K_{\rm a},K_{\rm a}') = n \ln\big(\frac{K}{K_{\rm a}'}\big) K_{\rm a}^{-1}\big(\frac{1}{K_{\rm a}'} - \frac{1}{K}\big)^{-1}$ for ML estimation of $\vect{r}{K}_{\rm a}$ and $\zeta(K,K_{\rm a},K_{\rm a}') = n\frac{K+K_{\rm a}'}{2 K_{\rm a}}$ for energy-based estimation of $\vect{r}{K}_{\rm a}$. \end{proposition} \begin{proof} First, the optimal value of $P'$ minimizing the bounds must grow with $P$ since otherwise $\tilde{p}$ will be large. Therefore, as $P\to \infty$, we can assume without loss of optimality that $P' \to \infty$. Next, when $t = t' = 0$, we can verify that $a = b = 0$, thus $E_0(\rho,\rho_1) = 0$ and $E(0,0) = 0$, achieved with $\rho = \rho_1 = 0$.
Therefore, $p_0 = p_{0,0} = e^{-n\cdot 0} = 1$. We can also verify that $q_0$ and $q_{0,0}$ both converge to $1$ as $P' \to \infty$. When $P' \to \infty$, $\xi(K_{\rm a},K_{\rm a}')$ given in Proposition~\ref{prop:xi} converges to the right-hand side of \eqref{eq:xi_ML} with $\zeta(K,K_{\rm a},K_{\rm a}') = n \ln\big(\frac{K}{K_{\rm a}'}\big) K_{\rm a}^{-1}\big(\frac{1}{K_{\rm a}'} - \frac{1}{K}\big)^{-1}$ for ML estimation of $\vect{r}{K}_{\rm a}$ and $\zeta(K,K_{\rm a},K_{\rm a}') = n\frac{K+K_{\rm a}'}{2 K_{\rm a}}$ for energy-based estimation of $\vect{r}{K}_{\rm a}$. Furthermore, the last term in $\tilde{p}$ given by~\eqref{eq:p0} vanishes and thus $\tilde{p} \to \bar{p}$. Finally, the lower bounds $\bar{\epsilon}_{\rm MD}$ and $\bar{\epsilon}_{\rm FA}$ follow by substituting the asymptotic values of $p_0$, $q_0$, $p_{0,0}$, $q_{0,0}$, $\xi(K_{\rm a},K_{\rm a}')$, and $\tilde{p}$ computed above into $\epsilon_{\rm MD}$ and $\epsilon_{\rm FA}$, and by setting $\min\{p_t,q_t\}$ to zero for $t \ne 0$, and setting $\min\{p_{t,t'},q_{t,t'}\}$ to zero for $(t,t')\ne (0,0)$. \end{proof} We remark that the lower bounds in~\eqref{eq:eps_MD_floor} and~\eqref{eq:eps_FA_floor} are tight for typical IoT settings. Indeed, equalities in~\eqref{eq:eps_MD_floor} and~\eqref{eq:eps_FA_floor} hold if the probability of having additional \glspl{MD} and \glspl{FA} vanishes, i.e., $\min\{p_t,q_t\} \to 0$ for $t \ne 0$ and $\min\{p_{t,t'},q_{t,t'}\} \to 0$ for $(t,t') \ne (0,0)$ as $P\to\infty$. With $\rho = \rho_1 = 1$, the optimal $\lambda$ in~\eqref{eq:E0} is given by $\lambda \!=\! 1/(2P_2)$. Thus, by replacing the maximization over $\rho$ and $\rho_1$ in~\eqref{eq:Ett} with $\rho = \rho_1 = 1$, we obtain that $E(t,t') \ge -t' R_1 - R_2 + \ln\big(1+\frac{(t+t')P'}{4P_2}\big)$.
It follows that \begin{align} p_{t,t'} &\le \binom{M\!-\!\max\{K_{\rm a}, \underline{K_{\rm a}'}\}}{t'} \binom{\min\{K_{\rm a}, \overline{K_{\rm a}'}\}}{t} \notag \\ &\quad \cdot \left(1+\frac{(t+t')P'}{4P_2}\right)^{-n}. \label{eq:bound_ptt} \end{align} If $K_{\rm a} \in [\underline{K_{\rm a}'}:\overline{K_{\rm a}'}]$, i.e., $P_2 = 1$, the right-hand side of~\eqref{eq:bound_ptt} vanishes as $P' \to\infty$. Otherwise, the right-hand side of~\eqref{eq:bound_ptt} converges to \begin{align} \bar{p}_{t,t'} &= \binom{M-\max\{K_{\rm a}, \underline{K_{\rm a}'}\}}{t'} \binom{\min\{K_{\rm a}, \overline{K_{\rm a}'}\}}{t} \notag \\ &\quad \cdot \bigg(1+\frac{t+t'}{4((K_{\rm a} - \overline{K_{\rm a}'})^+ + (\underline{K_{\rm a}'} - K_{\rm a})^+)}\bigg)^{-n} \\ &\le M^{t'} K_{\rm a}^t \bigg(1+\frac{t+t'}{4((K_{\rm a} - \overline{K_{\rm a}'})^+ + (\underline{K_{\rm a}'} - K_{\rm a})^+)}\bigg)^{-n}. \end{align} Observe that $\bar{p}_{t,t'}$ is small if $n$ is relatively large compared to $\ln M$ and $\ln K_{\rm a}$, which is true for relevant values of $n$, $M$, and $K_{\rm a}$ in the IoT. Specifically, in typical IoT scenarios, $M$ and $K_{\rm a}$ are on the order of $10^2$, while $K_{\rm a}/n$ ranges from $10^{-4}$ to $10^{-3}$\textemdash see~\cite{PolyanskiyISIT2017massive_random_access} and \cite[Rem.~3]{Zadik2019}. For example, with $(M,n) = (2^{100}, 15000)$ and $K_{\rm a} \le 300$ as considered in~\cite{PolyanskiyISIT2017massive_random_access} and many follow-up works, if $(K_{\rm a} - \overline{K_{\rm a}'})^+ + (\underline{K_{\rm a}'} - K_{\rm a})^+ \le 20$, then $\bar{p}_{t,t'} < 10^{-128}$ for every $t\le 300$ and $t' \le 300$. As a consequence, $p_{t,t'}$ and $p_t$ are very small. We conclude that $\lim\limits_{P\to\infty} \epsilon_{\rm MD}$ and $\lim\limits_{P\to\infty} \epsilon_{\rm FA}$ closely approach $\bar{\epsilon}_{\rm MD}$ and $\bar{\epsilon}_{\rm FA}$, respectively.
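As a quick numerical sanity check of the last bound, its base-10 logarithm can be evaluated directly (a sketch; the helper name is ours, and the parameter values are those of the example above):

```python
import math

def log10_pbar_bound(t, tp, log2_M, K_a, n, gap):
    """log10 of the bound M^{t'} * K_a^t * (1 + (t + t')/(4*gap))^{-n},
    where gap upper-bounds (K_a - K_a'_upper)^+ + (K_a'_lower - K_a)^+."""
    return (tp * log2_M * math.log10(2.0)
            + t * math.log10(K_a)
            - n * math.log10(1.0 + (t + tp) / (4.0 * gap)))

# Example parameters: M = 2^100, n = 15000, K_a = 300, gap = 20.
exponent = log10_pbar_bound(1, 1, 100, 300, 15000, 20)
```

For one additional MD and one additional FA ($t = t' = 1$), the exponent is about $-128$: the exponential decay in $n$ overwhelms the combinatorial prefactors.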
In other words, $\bar{\epsilon}_{\rm MD}$ and $\bar{\epsilon}_{\rm FA}$ essentially characterize the error floors of ${\epsilon}_{\rm MD}$ and ${\epsilon}_{\rm FA}$, respectively, as $P\to \infty$.
\section{Introduction} Since their introduction in the late 1990s, WLANs (Wireless Local Area Networks) have rapidly overtaken wired networks to become the primary means of connecting devices to the Internet. According to Cisco \cite{cisco2019wp}, they will account for 57\% of the Internet traffic in 2022, compared to 22\% and 21\% for mobile and wired networks, respectively. The current WLAN architecture is defined by the IEEE standard 802.11 (commercially known as Wi-Fi). APs (Access Points) are the centerpiece of this setup, serving as relays for wireless devices; we refer to the latter as STAs (Stations) throughout this paper. Typically, each AP is equipped with a wired interface giving access to the LAN and then the Internet, as well as a wireless interface providing connectivity to nearby STAs through radio communications. Space on the radio spectrum is a scarce resource, as it is often shared by multiple WLANs. The radio bands used by the IEEE 802.11 standard (currently 2.4 and 5 GHz, soon to be joined by 6 GHz) are divided into channels. Different APs can then be assigned to different, orthogonal channels, enabling them to transmit at the same time without interfering with each other. Equally important, thanks to the limited range of radio waves, APs configured on the same radio channel can transmit concurrently provided that they are sufficiently far away from each other. This ability was central to the success of WLANs and is commonly known as the spatial reuse of radio channels. However, the spatial reuse of radio channels as performed by today’s WLANs may be reaching its limit. This is particularly true in places where WLAN deployments are very dense, such as offices, shopping malls and train stations. This is because, in these areas, the distance between APs is small, so that an AP is more likely to be blocked by the transmissions of one or several nearby APs operating on the same channel.
This will in turn take a hefty toll on the WLANs' performance. A solution to this issue can be found in the 2021 amendment to 802.11 known as 802.11ax \cite{802.11-2021}, which enables the dynamic configuration of two key parameters at each AP: \texttt{TX\_PWR} and \texttt{OBSS\_PD}. The former parameter specifies the power level (in dBm) at which the AP transmits its data. The latter defines the sensitivity threshold (in dBm): if the received energy is below this level, the AP considers the radio channel clear and thus available for transmission. Otherwise, the AP must defer its transmissions. While prior amendments to 802.11 held the \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters constant (typically 20 dBm and -82 dBm, respectively), 802.11ax has made them dynamic, with their values spanning from 1 to 21 dBm for the former and from -82 to -62 dBm for the latter. Adjusting the configurations of \texttt{TX\_PWR} and \texttt{OBSS\_PD} can help overcome the limitations of spatial reuse in dense environments by allowing APs that are close to each other to transmit on the same channel. Figure~\ref{fig:toy} depicts a simple example of two APs operating on the same radio channel and illustrates how different configurations of the \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters can lead to different performance. \begin{figure}[!h] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{img/introa.png} \caption{With the default configuration of \texttt{TX\_PWR} and \texttt{OBSS\_PD}, the two APs are within each other’s detection range, so that they cannot transmit simultaneously.} \label{fig:toya} \end{subfigure} \hspace{0.2cm} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{img/introb.png} \caption{The value of \texttt{TX\_PWR} is reduced on both APs so that they no longer belong to each other’s detection range.
Under this configuration, concurrent transmissions from the two APs may occur.} \label{fig:toyb} \end{subfigure} \caption{Adequately configuring the \texttt{TX\_PWR} parameter of APs can significantly improve the spatial reuse of radio channels in WLANs. Note that concurrent transmissions of the two APs could also be attained by increasing \texttt{OBSS\_PD} at each AP. While similar, reducing \texttt{TX\_PWR} and increasing \texttt{OBSS\_PD} may affect the WLANs' performance differently (see Table 2 of \cite{sr_mab_2} for more details).} \label{fig:toy} \end{figure} Despite the potential of 802.11ax to improve the spatial reuse of radio channels, finding an adequate configuration of \texttt{TX\_PWR} and \texttt{OBSS\_PD} for the APs in a WLAN is a complex problem. First, an adequate configuration is very topology-specific: knowing a suitable configuration for a given scenario is of no value for another scenario. Second, a distributed solution is preferable to a centralized one. Not only does this avoid searching an otherwise very high-dimensional space, but it also avoids assuming the existence of a centralized entity (e.g., a controller) deciding the configurations of all APs. This assumption is acceptable if all the interfering APs belong to the same WLAN, but unrealistic if they belong to concurrent WLANs. Third, forecasting the performance of WLANs with an analytical model that could subsequently help establish ``optimal'' configurations is difficult. The level of detail in such models is either too coarse, and thus inapplicable, or adequate but unscalable when scenarios involve multiple APs and STAs. Instead, APs can apply new configurations of their parameters, measure the effect of these changes on their performance, and exchange their experience with surrounding APs. This paves the way for the use of online and reinforcement learning techniques in a distributed manner.
In this paper, we present \texttt{INSPIRE}, a distributed solution performing local Bayesian optimizations based on GPs (Gaussian Processes) to improve the spatial reuse of WLANs by adequately configuring the \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters at each AP. \texttt{INSPIRE} makes no explicit assumptions about the topology of WLANs or the radio environments and thus can apply to any WLAN. Additionally, it can operate even when the APs to be configured belong to different concurrent WLANs. It is based on (i) an intuitive reward function that combines the performance of STAs and returns a score reflecting the overall performance of a given WLAN configuration, (ii) the use of GPs to explore promising new WLAN configurations, and (iii) a consensus algorithm to coordinate the APs’ efforts toward collectively improved performance for the WLANs. More precisely, the contributions of this paper are as follows: \begin{itemize} \item We demonstrate the ability of GPs to approximate the reward function, which reflects the performance of WLANs, and to explore efficient AP configurations; \item We establish the superiority of a divide-and-conquer approach to handle the complex problem of setting the \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters at each AP; \item We introduce \texttt{INSPIRE}, a distributed solution that lets the APs of concurrent WLANs automatically adapt their internal parameters' settings in their own interest as well as in the interest of obtaining a more efficient spatial reuse of radio channels; \item We evaluate the efficiency of \texttt{INSPIRE} on real-life-inspired case studies using a detailed network discrete-event simulator and compare its performance with several state-of-the-art solutions (centralized or distributed). \end{itemize} The remainder of this paper is organized as follows. The next section discusses the related work.
Section~\ref{sec:sol} describes the proposed strategy to address the issue of spatial reuse of a radio channel in WLANs. Its performance is evaluated in Section~\ref{sec:num_res}, and Section~\ref{sec:disc} deepens the understanding of the numerical results as well as of some of the choices made in our strategy. Section~\ref{sec:conc} concludes this paper. \section{Related work} \label{sec:soa} The release of the IEEE 802.11ax amendment \cite{802.11-2020} in late 2021 marks a new era for the spatial reuse of radio channels of WLANs: nodes can dynamically adjust their transmission power (\texttt{TX\_PWR}) and sensitivity threshold (\texttt{OBSS\_PD}) parameters. For a detailed explanation of how this new feature is implemented, we refer the interested reader to~\cite{Wilhem2021}, which also provides simple scenarios to illustrate its potential benefits. Years before IEEE released the 802.11ax amendment, the idea of dynamically updating \texttt{TX\_PWR} and \texttt{OBSS\_PD} had been explored by researchers. The pioneering work of \cite{Zhu2004} presents an analytical model that, based on the current radio channel conditions, dynamically configures \texttt{OBSS\_PD} on each node of a Wi-Fi-based mesh network. Concurrently, \cite{Kim2004} established that adapting \texttt{TX\_PWR} can lead to increased throughput and reduced energy consumption. More recently, in 2020, \cite{Qiu2020} cast the issues of positioning the APs of a WLAN and choosing their \texttt{TX\_PWR} as an optimization problem. The authors provide a solution to this problem that delivers a static configuration of \texttt{TX\_PWR} for a WLAN, but it accounts for neither the number of STAs nor the type of traffic in the WLAN. The difficulty of accurately modeling the dependency between the configuration parameters of a large WLAN and its performance is a strong hurdle to the development of spatial reuse strategies based on analytical models.
As a result, most of the proposed strategies are data-driven. Adaptive by construction, they constitute promising candidates in the search for configurations that improve the spatial reuse of a radio channel and the performance of WLANs. Machine learning (ML) techniques are natural candidates for addressing problems requiring a data-driven approach, and the spatial reuse problem is no exception. \cite{fsc} addresses the problem of configuring \texttt{TX\_PWR} and \texttt{OBSS\_PD} with a two-scale solution using artificial neural networks (ANN). In their strategy, STAs and APs first adjust their value of \texttt{OBSS\_PD} to minimize interference. Then, an ANN, trained offline through simulation, is used to increase the fairness between STAs in terms of attained throughput. However, given the vast diversity of WLAN topologies, the offline learning of the ANN appears to be a clear limitation to the generalization of this strategy. An online learning procedure is proposed by \cite{mab_mswim}, which uses reinforcement learning, and more precisely the Multi-Armed Bandit (MAB) framework, to find the optimal configuration of \texttt{TX\_PWR} and \texttt{OBSS\_PD} in a WLAN. The approach comprises two agents: one samples promising configurations through a multivariate normal distribution, and the other identifies the best configuration among those already sampled using Thompson sampling with Normal-Gamma priors. These two ML solutions \cite{fsc,mab_mswim} were tested on the network simulator ns-3 and led to significant WLAN improvements. However, in order to perform their optimization, they both assume the presence of a central controller that has access to, and control over, all the APs in the WLANs. By construction, these approaches are centralized, and hence cannot be applied to cases where concurrent WLANs managed by different owners interfere with one another.
Distributed approaches are undisputedly better suited than centralized ones to handle cases with a set of concurrent WLANs. \cite{dsc} introduces a distributed algorithm named Dynamic Sensitivity Control, which is run on every STA of a WLAN. In short, each STA dynamically tunes its value of \texttt{OBSS\_PD} to favor concurrent transmissions while still ensuring a high-quality signal reception. Similarly, \cite{lsr} proposes Link-aware Spatial Reuse (LSR), a distributed algorithm designed for the APs. In LSR, each AP chooses another AP, which is allowed to transmit concurrently, and then prescribes a value of \texttt{TX\_PWR} for the selected AP. These two algorithms rely on a single measurement metric reflecting the quality of the received signal, namely the Received Signal Strength, to choose the nodes' configurations. More recently, strategies using distributed MAB approaches have been proposed \cite{sr_mab, sr_mab_2}. Both use Thompson sampling with Gaussian priors to find the best pair of \texttt{TX\_PWR} and \texttt{OBSS\_PD} at each AP. In~\cite{sr_mab}, each AP seeks to maximize the throughput of its associated STAs. On the other hand, in~\cite{sr_mab_2}, the authors assume that every AP has access to the performance of all other APs in the WLAN; each AP then attempts to maximize a global reward that takes into account the performance of all the other nodes. Both strategies \cite{sr_mab, sr_mab_2} were evaluated in a self-made simulator with simple random scenarios. Table \ref{tab:soa} summarizes the main characteristics of the data-driven strategies discussed above. It shows that, out of the six considered strategies, two (i.e., \cite{dsc,fsc}) only focus on the configuration of the \texttt{OBSS\_PD} parameter (keeping the \texttt{TX\_PWR} parameter fixed). To help in the comparison of the different strategies, we introduce two concepts: ``pull area'' and ``push area''.
The pull area indicates the area from which each node is assumed to obtain information (this typically includes parameter configurations and performance measurements). Depending on the strategy being considered, the pull area can include just the node itself, the surrounding nodes, or the whole set of nodes in the WLANs. The push area designates the area that each AP can influence, typically through the prescription of parameter configurations. In the case of centralized strategies (e.g., \cite{fsc, mab_mswim}), the pull and push areas naturally cover all the WLANs. We distinguish partially distributed strategies (e.g., \cite{sr_mab_2}), wherein either the pull or push area extends to all the WLANs, from fully distributed strategies (e.g., \cite{dsc, lsr, sr_mab}), in which neither the pull nor the push area covers all the WLANs. We observe in Table~\ref{tab:soa} that only three out of the six state-of-the-art strategies can be considered fully distributed. The last four columns of Table~\ref{tab:soa} pertain to the performance evaluation used to validate each of these strategies. It appears that most strategies were evaluated without considering the dynamic selection of the Modulation and Coding Scheme (MCS), which sets the speed of the wireless links, or bidirectional (upstream and downstream) traffic. This can be seen as a strong limitation, since it overlooks some associated trade-offs. For instance, increasing the value of \texttt{TX\_PWR} certainly enables the communication to operate at a faster data rate (larger MCS), but at the same time, it increases the level of interference with surrounding APs. Additionally, most strategies were evaluated on relatively simple scenarios (with a few APs and a limited number of radio channels), often using a self-made network simulator. \begin{table}[!h] \caption{Comparison of the state-of-the-art data-driven strategies. The last column refers to the size of the scenarios involved in the performance evaluation of the strategy.
For instance, 216/18 means the evaluation comprises 216 APs distributed over 18 radio channels.} \label{tab:soa} \centering \rowcolors{2}{gray!13}{white} \begin{tabular}{|l c c c c c l l|} \hline {Proposed} & Tuning of & Pull & Push & Dynamic & {Traffic} & {Simulator} & {APs /} \\ {solutions} & {\texttt{TX\_PWR}} & area & area & MCS & Up/Down & & {channels} \\ \hline WCNC'15~\cite{dsc} & & STA & {STA} & & Up & Self-made & 100/3 \\ WCNC'21~\cite{lsr} & \checkmark & {AP} & {AP} & \checkmark & Down & {ns-3} & 6/1\\ Globecom'20~\cite{fsc} & & WLAN & WLAN & & {Up/Down} & {ns-3} & 3/1\\ ADHOC'19~\cite{sr_mab} & \checkmark & {AP} & {AP} & & Down & Self-made & 8/1\\ JNCA'19~\cite{sr_mab_2} & \checkmark & WLAN & {AP} & & Down & Self-made & 8/1 \\ MSWiM'21~\cite{mab_mswim} & \checkmark & WLAN & WLAN & & Down & {ns-3} & 10/1 \\ \texttt{INSPIRE} & \checkmark & \begin{tabular}{@{}c@{}} {Surrounding}\\ {APs}\end{tabular} & \begin{tabular}{@{}c@{}} {Surrounding}\\ {APs}\end{tabular} & \checkmark & {Up/Down} & {ns-3} & {216/18} \\ \hline \end{tabular} \end{table} In this paper, we propose a fully distributed strategy to address the problem of the spatial reuse of radio channels in WLANs. The proposed strategy can be applied to any arrangement of WLANs, and its novelty is mostly twofold. First, to the best of our knowledge, it is the first strategy making use of Gaussian Processes to explore promising WLAN configurations in the quest for the optimal one. Gaussian processes are well-recognized tools for dealing with the exploration vs. exploitation dilemma (see \cite{srinivas2009gaussian, chowdhury2017kernelized}), which is at the center of the spatial reuse problem. Second, unlike the existing fully distributed strategies, \texttt{INSPIRE} allows each AP to account for its surroundings thanks to pull and push areas broader than a single node.
Through the use of a simple consensus method, the APs of the WLANs manage to behave altruistically, selecting configurations for the ``greater good'' of the WLANs. We also introduce realistic scenarios, inspired by real-life WLANs, with dynamic MCS and bidirectional traffic, to evaluate and compare the efficiency of all the considered strategies. \section{Proposed solution} \label{sec:sol} \subsection{WLANs under study} Let $\mathcal{W}$ denote the set of concurrent WLANs under study, each comprising one or more APs. We let $K$ be the total number of APs in $\mathcal{W}$ that operate on the radio channel of interest. We denote by $s_k$ the set of STAs associated with AP $k$ and by $S$ the total number of STAs in the considered radio channel of $\mathcal{W}$. Thus, we have: $S = \sum_{k = 1}^K |s_k|$. Finally, we use $\mathcal{N}_k$ to designate the set of APs that are within the communication range of AP $k$ (when every AP is under the default configuration of the \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters). Note that AP $k$ itself belongs to $\mathcal{N}_k$. We refer to the APs in $\mathcal{N}_k$ as the surroundings of AP $k$. We make no assumptions on $\mathcal{W}$, including on the specific arrangement of its APs and STAs, other than the three detailed below. First, we assume that every AP $k$ is able to exchange control frames (possibly through its beacon frames) with its surrounding APs (i.e., the ones in $\mathcal{N}_k$). By the same token, we suppose that at least one AP has another AP in its communication range (\textit{i.e.}, $\exists k \in [1\mathrel{{.}\,{.}}\nobreak K], \mathcal{N}_k \setminus \{k\} \neq \emptyset$), otherwise the spatial reuse of the radio channel would already be at its apex. Second, we assume that the $K$ APs have their \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters configurable (as has been the case since the introduction of the 802.11ax amendment).
We use $x_k^t$ to denote the configuration of AP $k$ with regard to its two parameters \texttt{TX\_PWR} and \texttt{OBSS\_PD} at time $t$. Analogously, $x^t$ represents the configuration of the $K$ APs of $\mathcal{W}$ at time $t$. Thus, we have: $x_k^t \in C = [-82\mathrel{{.}\,{.}}\nobreak -62] \times [1\mathrel{{.}\,{.}}\nobreak 21]$ dBm and $x^t \in C^K$. Lastly, we assume that each AP in $\mathcal{W}$ can periodically run performance tests and obtain, in return, the mean throughput attained by each of its STAs over a short time interval $\Delta t$. More formally, we use the vector $T^t \in \mathbb{R}_+^{S}$ to denote the throughputs attained by the $S$ STAs of $\mathcal{W}$ given the WLAN configuration $x^t$ at time $t$. Throughout this paper, we sometimes refer to $T^t$ as $T(x^t)$ to explicitly show the dependency between the STAs' throughputs and the APs' configurations. In this work, we seek to discover an adequate configuration $x^*$ of the $K$ APs composing $\mathcal{W}$ that improves the collective experience of the $S$ STAs through a better reuse of their radio channel. We address this problem as a reinforcement learning task in which, at regular time intervals $t$, the APs collect measurements $T^t$ associated with their current configuration $x^t$, and need to decide their next configuration $x^{t+1}$. The obstacles on the way to that objective are mainly threefold. (i) We need to define a meaningful objective function that APs will attempt to optimize collectively; (ii) We are facing the well-known exploration vs. exploitation dilemma, since the search for an adequate configuration of the WLANs should be as seamless as possible (without disrupting the STAs). This leads us to cast the problem as a MAB problem where the arms are the WLANs' configurations.
Following the MAB terminology, we refer to the objective function as the reward function; (iii) We are looking for a strategy that can be applied in a distributed way, since it would in general be unrealistic to assume that (concurrent) APs have a fine knowledge beyond their surroundings. \subsection{Reward function} We need to define a reward function $f$ that appraises the ``goodness'' of a configuration $x$ with regard to the WLANs' performance. Because multiple criteria may be considered in the definition of $f$, there is no universal definition. However, assessing the quality of a configuration $x$ can be derived from the STAs' throughputs $T(x)$ obtained with APs configured with $x$. We distinguish three main types of reward functions in the literature: \begin{itemize} \item $f(x) = \sum_{T_i \in T(x)} T_i$, where $f$ is simply computed as the sum of all STAs' throughputs. This function, often referred to as the cumulated throughput of the WLANs, is highly exposed to the ``scapegoat'' objection: configurations favorably assessed by this function may yet result in severe unfairness among STAs. \item $f(x) = \min_{T_i \in T(x)} T_i$ represents an effective way of preventing the ``scapegoat'' problem. However, this function has a low resolution, since its computation overlooks all STAs' throughputs but the lowest. \item $f(x) = \prod_{T_i \in T(x)} T_i$ is called the proportional fairness (PF) and provides a convenient trade-off between fairness and cumulated throughput. However, PF exhibits a high variability, since $\frac{\partial f}{\partial T_i} = \prod_{T_j \in T(x), j \neq i} T_j$. \end{itemize} We choose PF for our reward function $f$, but we take its logarithm. This lowers its variability, which becomes $\frac{\partial f}{\partial T_i} = \frac{1}{T_i}$ (note that $T_i$ is typically much larger than 1). This also emphasizes the contribution of STAs with low throughputs in the computation of $f$.
Additionally, and for practical reasons, we normalize $f$ so that its return values remain in $[0, 1]$. To do so, we simply need to compute a normalization constant $\lambda$ that depends on the maximum theoretically attainable throughput of each STA. Denoting by $T^*$ the set of the maximum attainable throughputs for the $S$ STAs of $\mathcal{W}$, our reward function becomes: \begin{equation} \begin{split} f(x) &= \frac{1}{\lambda} \log \prod_{T_i \in T(x)} T_i\\ &= \frac{1}{\lambda} \sum_{T_i \in T(x)} \log T_i \end{split} \label{eq:reward} \end{equation} with $\lambda = \sum_{T_i^* \in T^*} \log T_i^*$. However, to compute Equation~\ref{eq:reward}, an AP must have a complete knowledge of the performance attained by the STAs of all APs or, at least, be able to communicate with all the APs in $\mathcal{W}$. This contradicts our assumption that APs only have a partial knowledge of $\mathcal{W}$, limited to their surrounding APs. To design a reward function compatible with the distributed case, we proceed as follows. Each AP $k$ applies Equation~\ref{eq:reward} restricted to the set of its associated STAs and obtains in return a ``selfish'' reward denoted by $f_k(x_k)$ and a normalization constant $\lambda_k$. Previous work \cite{sr_mab} has shown that considering such selfish rewards may have a positive but limited impact on the WLANs' performance. Therefore, we introduce a more altruistic reward, denoted by $R_k$, that accounts not only for the ``selfish'' reward of AP $k$ (i.e., $f_k$) but also for the rewards of the surrounding APs (i.e., the ones in $\mathcal{N}_k$). The ``altruistic'' local reward of AP $k$ is computed as: \begin{equation} R_k(x) = \frac{1}{\sum_{i \in \mathcal{N}_k} \lambda_i} \sum_{i \in \mathcal{N}_k} \lambda_i f_i(x_i) \label{eq:local_reward} \end{equation} where $(\lambda_i, f_i(x_i))$ for $i \in \mathcal{N}_k$ denote the normalization constant and the selfish reward of each of the surrounding APs of AP $k$.
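The global reward of Equation~\ref{eq:reward} and the local reward of Equation~\ref{eq:local_reward} can be sketched in a few lines (a Python illustration; the toy topology and throughput values are ours):

```python
import math

def reward(T, T_star):
    """Normalized log proportional fairness (Eq. reward)."""
    lam = sum(math.log(t) for t in T_star)
    return sum(math.log(t) for t in T) / lam

def local_reward(N_k, T, T_star, stas):
    """Altruistic local reward of AP k (Eq. local_reward).
    stas[i] lists the STA indices associated with AP i."""
    lam = {i: sum(math.log(T_star[j]) for j in stas[i]) for i in N_k}
    f = {i: sum(math.log(T[j]) for j in stas[i]) / lam[i] for i in N_k}
    return sum(lam[i] * f[i] for i in N_k) / sum(lam.values())

# Toy instance: STAs 0-1 on AP 0, STA 2 on AP 1, STAs 3-4 on AP 2.
stas = {0: [0, 1], 1: [2], 2: [3, 4]}
T = [20.0, 35.0, 50.0, 10.0, 80.0]   # measured throughputs (Mbps)
T_star = [100.0] * 5                 # maximum attainable throughputs
N_0 = [0, 1]                         # surroundings of AP 0

R_0 = local_reward(N_0, T, T_star, stas)
# Global reward restricted to the STAs of the APs in N_0:
restricted = reward([T[j] for i in N_0 for j in stas[i]],
                    [T_star[j] for i in N_0 for j in stas[i]])
```

On this toy instance, $R_0$ coincides with the global reward computed over the STAs of the APs in $\mathcal{N}_0$ alone, which is the equivalence noted next.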
Note that $R_k$ is equivalent to the global reward function of Equation~\ref{eq:reward} except that the set of considered APs is restricted to those in $\mathcal{N}_k$. \textbf{Proof.} The proof is straightforward since $\lambda_i = \sum_{j \in s_i} \log T_j^*$ and $f_i(x_i) = \frac{1}{\lambda_i} \sum_{j \in s_i} \log T_j(x_i)$. Therefore, $\sum_{i \in \mathcal{N}_k} \lambda_i = \sum_{i \in \mathcal{N}_k} \sum_{j \in s_i} \log T^*_j$, which is the normalization constant for all STAs associated with APs in $\mathcal{N}_k$. Similarly, $\sum_{i \in \mathcal{N}_k} \lambda_i f_i = \sum_{i \in \mathcal{N}_k} \sum_{j \in s_i} \log T_j$, which is the logarithm of the PF of all STAs associated with APs in $\mathcal{N}_k$. Therefore, Equation \ref{eq:local_reward} is equivalent to the global reward function of Equation~\ref{eq:reward} but applied only to the APs in $\mathcal{N}_k$. \subsection{Local reward maximization} \label{sec:reward_maxim} For the sake of clarity, and since all variables in this section are relative to an AP $k$, we often omit the subscript $k$ in the notations. Now that each AP $k$ has its own local reward function, we need a model of AP $k$'s knowledge of $R_k$ in order to find the argument of its maximum. We represent the beliefs of AP $k$ about $R_k$ by defining a prior distribution on the reward function space with a Gaussian Process (GP). \textbf{Gaussian process.} In our case, a GP can be defined as a collection of random variables indexed by configurations of the APs in $\mathcal{N}_k$: $\{Y_c; c \in C^{|\mathcal{N}_k|}\}$ such that every finite collection $Y_{c_1, \cdots, c_n} = \left(Y_{c_1}, \cdots, Y_{c_n}\right)$ is distributed according to a multivariate normal distribution. We assume the GP to have zero mean, so that it is entirely determined by its covariance function $\Sigma : C^{|\mathcal{N}_k|} \times C^{|\mathcal{N}_k|} \rightarrow \mathbb{R}^+$. As shown by \cite{gp}, GPs can be used as priors on a function space.
We use $X_t$ to denote the $t \times 2|\mathcal{N}_k|$ features matrix gathering the tested configurations $\left(x^1, \cdots, x^t \right)^T$ and $Y_t$ to denote the $t \times 1$ label vector gathering the corresponding local reward values $\left(Y(x^1), \cdots, Y(x^t)\right)^T$. Given $X_t$ and $Y_t$, we can infer the distribution of the reward value for an arbitrary configuration $x$, $Y(x)$, as follows: $Y(x)|X_t,Y_t \sim \mathcal{N}\left(\mu(x), \sigma^2(x)\right)$ with $\mu(x)$ and $\sigma^2(x)$ defined in Equations \ref{eq:mu} and \ref{eq:sig}, respectively. \begin{equation} \mu(x) = \Sigma(x, X_t)\Sigma(X_t, X_t)^{-1}Y_t \label{eq:mu} \end{equation} \begin{equation} \sigma^2(x) = \Sigma(x, x) - \Sigma(x, X_t)\Sigma(X_t, X_t)^{-1}\Sigma(X_t, x) \label{eq:sig} \end{equation} Since GPs can be used as a prior on a functional space, they are useful to solve regression problems as well as minimization or maximization tasks. In our case, the AP $k$ uses a GP to model $R_k$ and to assist the exploration of promising configurations of the APs in $\mathcal{N}_k$, maximizing $R_k$ in a Bayesian way. Choosing the covariance function $\Sigma$ is a critical step when designing a GP as it determines some key features of $\mathcal{GP}_k$ such as its isotropy and smoothness. Since the reward function, which quantifies the quality of spatial reuse in $\mathcal{N}_k$, is likely to exhibit threshold effects, we choose a covariance function that decreases rapidly as the distance between two considered configurations increases. Thus, the regularity constraint is not too restrictive on the modeled function. Because we have no incentive to prefer any particular direction over another, we let the covariance function $\Sigma(x, x')$ depend only on $||x - x'||$ to ensure the isotropy of $\mathcal{GP}_k$. 
This leads us to use a Matérn kernel \cite{matern} with parameter $\nu = \frac{3}{2}$, which is defined as \begin{equation} \Sigma(x, x') = s^2\left(1 + \frac{\sqrt{3}||x - x'||}{\rho}\right)e^{-\frac{\sqrt{3}||x - x'||}{\rho}} \label{eq:kernel} \end{equation} where $s^2$ and $\rho$ are two hyperparameters whose values are approximated during the learning process. In practice, we can approximate $s^2$ and $\rho$ by maximum likelihood estimation (MLE). Obtaining the likelihood of $Y_t$, denoted by $\mathcal{L}(s^2, \rho)$, is trivial since $Y_t$ follows a multivariate normal distribution. Therefore, $(s^2, \rho) = \argmax_{(x, y) \in \mathbb{R}^{+2}} \mathcal{L}(x, y)$ can be obtained by classical descent techniques applied to the likelihood gradient. As discussed before, each AP $k$ faces the exploitation vs. exploration dilemma in its attempt to find the optimal configuration. A common way in the MAB framework to appraise a given strategy $\pi$ is then to consider the cumulative regret $\Gamma$. In our problem, for an episode of $D$ steps, $\Gamma$ takes the form of Equation \ref{eq:cum_reg}: it is the cumulative sum of the differences between 1 (namely, the best reward that the AP can get, by definition of our reward function) and $R_k(\pi(t))$, the actual reward obtained at time $t$ under strategy $\pi$. \begin{equation} \Gamma(\pi) = D - \sum_{t = 1}^D R_k(\pi(t)) \label{eq:cum_reg} \end{equation} Minimizing the cumulative regret with GP models is usually done by defining a strategy $\pi$ from the maximization of an acquisition function $A$: $\pi(t) = \argmax_x A_t(x)$. However, this assumes that our search space $C^{|\mathcal{N}_k|}$ is continuous. Since each AP $k$ deals with discrete configurations of $\mathcal{N}_k$, we systematically round the recommendation of $\mathcal{GP}_k$ to the nearest valid WLAN configuration.
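Putting Equations~\ref{eq:mu}, \ref{eq:sig} and \ref{eq:kernel} together, the model each AP maintains can be sketched as follows (a Python/NumPy illustration; the data points are toy values, the jitter term is our addition for numerical stability, and the hyperparameters are fixed here instead of being fitted by MLE):

```python
import numpy as np

def matern32(A, B, s2=1.0, rho=1.0):
    """Matern covariance with nu = 3/2 (Eq. kernel), for row-wise points."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    a = np.sqrt(3.0) * d / rho
    return s2 * (1.0 + a) * np.exp(-a)

def gp_posterior(x, X_t, Y_t, s2=1.0, rho=1.0, jitter=1e-10):
    """Posterior mean and variance at configuration x (Eqs. mu and sig)."""
    K = matern32(X_t, X_t, s2, rho) + jitter * np.eye(len(X_t))
    k_x = matern32(x[None, :], X_t, s2, rho)          # Sigma(x, X_t)
    mu = (k_x @ np.linalg.solve(K, Y_t)).item()       # Eq. mu
    var = (matern32(x[None, :], x[None, :], s2, rho)
           - k_x @ np.linalg.solve(K, k_x.T)).item()  # Eq. sig
    return mu, var

# Three tested configurations (2 parameters each) and their local rewards.
X_t = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
Y_t = np.array([0.3, 0.6, 0.5])
mu, var = gp_posterior(np.array([0.5, 0.5]), X_t, Y_t)
```

In this noise-free setting, the posterior mean interpolates the observed rewards and the posterior variance vanishes at already-tested configurations, which is precisely what lets the acquisition step concentrate exploration on untested ones.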
Many acquisition functions exist, such as Knowledge Gradient (KG) \cite{kg}, GP-UCB \cite{gpucb} or Expected Improvement (EI) \cite{ei}. We choose EI over KG (whose computational cost can rapidly become prohibitive) and GP-UCB (which was found to be less efficient on our examples). The acquisition function of EI is expressed as $A_t(x) = \mathbb{E}\left[(\mu_{t+1}(x) - \max_{1 \leq i \leq t} Y(x_i))^+\right]$ given that $X_{t+1} = (X_t, x)$, $Y_{t+1} = (Y_t, Y(x))$ and $Y(x) \sim \mathcal{N}(\mu(x), \sigma^2(x))$. EI also has a closed form, shown in Equation \ref{eq:ei}. \begin{equation} EI(x) = (\mu(x) - R_{k,t}^*) \Phi(Z) + \sigma(x)\phi(Z) \label{eq:ei} \end{equation} with $R_{k,t}^* = \max_{1 \leq i \leq t} Y(x_i)$, $Z = \frac{\mu(x) - R_{k,t}^*}{\sigma(x)}$, $\Phi$ and $\phi$ being respectively the CDF and the PDF of a standard Gaussian distribution. The AP can maximize Equation \ref{eq:ei} by gradient descent on $-EI(x)$. For the sake of completeness, we provide a closed-form expression of $\nabla EI(x)$ with Equations \ref{eq:dei}, \ref{eq:dmu}, \ref{eq:dsig} and \ref{eq:dker}, directly followed by their proofs. First of all, it is easy to show that the closed form of $\nabla EI(x)$ is given by Equation \ref{eq:dei}. \begin{equation} \nabla EI = \nabla \mu \Phi(Z) + \nabla \sigma \phi(Z) \label{eq:dei} \end{equation} \textbf{Proof (Eq. \ref{eq:dei}).} The proof is straightforward since: \begin{equation} \nabla EI = \nabla \mu \Phi(Z) + (\mu - R^*_{k,t}) \phi(Z) \nabla Z + \nabla \sigma \phi(Z) + \sigma \phi'(Z) \nabla Z \end{equation} By noticing that $\mu - R_{k,t}^* = \sigma Z$ and $\phi'(Z) = -Z\phi(Z)$, we have: \begin{equation} \begin{split} \nabla EI &= \nabla \mu \Phi(Z) + \sigma Z \phi(Z) \nabla Z + \nabla \sigma \phi(Z) - \sigma Z \phi(Z) \nabla Z\\ &= \nabla \mu \Phi(Z) + \nabla \sigma \phi(Z) \end{split} \end{equation} To complete the closed form of $\nabla EI$ in Equation \ref{eq:dei}, we need explicit expressions for $\nabla \mu$ and $\nabla \sigma$.
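Before doing so, note that the scalar closed form of Equation \ref{eq:ei} is cheap to evaluate once $\mu(x)$ and $\sigma(x)$ are known. A minimal Python sketch (the function name is ours, and the convention for the degenerate case $\sigma(x) = 0$ is a standard one, not taken from the derivation above):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    """Closed-form EI: (mu - R*) Phi(Z) + sigma phi(Z), Z = (mu - R*) / sigma."""
    if sigma <= 0.0:
        # Degenerate (zero-variance) case: the improvement is deterministic.
        return max(mu - best, 0.0)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
```

A quick Monte-Carlo check confirms that this matches $\mathbb{E}\left[(Y(x) - R_{k,t}^*)^+\right]$ for $Y(x) \sim \mathcal{N}(\mu(x), \sigma^2(x))$.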
Given the expression of $\mu(x)$ from Equation \ref{eq:mu}, we immediately get the expression of $\nabla \mu$ in Equation \ref{eq:dmu}. \begin{equation} \nabla \mu = \frac{\partial \Sigma(x, X_t)}{\partial x} \Sigma(X_t, X_t)^{-1} Y_t \label{eq:dmu} \end{equation} Similarly, recalling the expression of $\sigma^2(x)$ from Equation \ref{eq:sig}, we can derive the expression of $\nabla \sigma$ in Equation \ref{eq:dsig} for an isotropic kernel function $\Sigma$. \begin{equation} \nabla \sigma = -\frac{1}{\sigma(x)} \left(\frac{\partial \Sigma(X_t, x)}{\partial x}\right)^T \Sigma(X_t, X_t)^{-1} \Sigma(X_t, x) \label{eq:dsig} \end{equation} \textbf{Proof (Eq. \ref{eq:dsig}).} By noticing that $\Sigma(x, x)$ does not depend on $x$ for an isotropic kernel, that $\Sigma(x, X_t) = \Sigma(X_t, x)^T$ and that $\Sigma(X_t, X_t)^{-1}$ is symmetric, we have: \begin{equation} \begin{split} \nabla \sigma^2(x) & = \left(\frac{\partial \Sigma(X_t, x)}{\partial x}\right)^T \frac{\partial \sigma^2}{\partial \Sigma(X_t, x)}\\ & = -\left(\frac{\partial \Sigma(X_t, x)}{\partial x}\right)^T \left(\Sigma(X_t, X_t)^{-1} \Sigma(X_t, x) + \Sigma(X_t, X_t)^{-T} \Sigma(X_t, x)\right)\\ & = - 2 \left(\frac{\partial \Sigma(X_t, x)}{\partial x}\right)^T \Sigma(X_t, X_t)^{-1} \Sigma(X_t, x) \end{split} \end{equation} The expression of $\nabla \sigma$ follows immediately: \begin{equation} \begin{split} \nabla \sigma(x) &= \frac{\partial \sqrt{\sigma^2}}{\partial \sigma^2} \nabla \sigma^2(x) \\ & = -\frac{1}{\sigma(x)} \left(\frac{\partial \Sigma(X_t, x)}{\partial x}\right)^T \Sigma(X_t, X_t)^{-1} \Sigma(X_t, x) \end{split} \end{equation} Finally, the expression of the Jacobian $\frac{\partial \Sigma(X_t, x)}{\partial x}$ is required to complete the closed form of $\nabla EI$. We give the expression of the $i$th row of this Jacobian $\left(\frac{\partial \Sigma(x_i, x)}{\partial x}\right)$ in Equation \ref{eq:dker}, assuming that $\Sigma$ is the Matérn kernel given in Equation \ref{eq:kernel}.
\begin{equation} \frac{\partial \Sigma(x_i, x)}{\partial x} = -\frac{3s^2}{\rho^2}e^{-\frac{\sqrt{3}}{\rho}||x - x_i||}(x - x_i) \label{eq:dker} \end{equation} \textbf{Proof (Eq. \ref{eq:dker}).} \begin{equation} \begin{split} \frac{\partial \Sigma(x_i, x)}{\partial x} & = \frac{\partial \Sigma(x_i, x)}{\partial ||x - x_i||} \frac{\partial ||x - x_i||}{\partial x}\\ & = -\frac{3s^2}{\rho^2} e^{-\frac{\sqrt{3}}{\rho}||x - x_i||}||x - x_i||\frac{\partial ||x - x_i||}{\partial x}\\ & = -\frac{3s^2}{\rho^2} e^{-\frac{\sqrt{3}}{\rho}||x - x_i||}||x - x_i||\frac{(x - x_i)}{||x - x_i||}\\ & = -\frac{3s^2}{\rho^2} e^{-\frac{\sqrt{3}}{\rho}||x - x_i||}(x - x_i) \end{split} \end{equation} With Equations \ref{eq:dei}, \ref{eq:dmu}, \ref{eq:dsig} and \ref{eq:dker}, each AP $k$ has a complete closed form of $\nabla EI$. By applying its strategy $\pi_k(t) = \argmax_{x \in C^{|\mathcal{N}_k|}} EI(x)$ and classical gradient descent techniques on $-EI$, AP $k$ provides promising configurations for its surrounding APs in $\mathcal{N}_k$. \subsection{Aggregation of local prescriptions} In the previous sections, we have described how each AP $k$ computes its local reward and relies on its model $\mathcal{GP}_k$ to explore promising configurations for the APs in $\mathcal{N}_k$. In general, maximizing local rewards is very likely to lead to a sub-optimal situation since, for non-linear optimization problems, individual interests are often not aligned with the global interest (e.g., the famous Tragedy of the Commons \cite{tragedy_commons}). Without more information on the relation between the configuration of the APs and the measured throughputs of STAs, it seems impossible to prove theoretically that our global reward function is maximized by the optimization of local reward functions. However, it is our experience that the local rewards defined with Equation \ref{eq:local_reward} force each AP to behave altruistically by reducing its impact on its surroundings.
As a matter of fact, as we will see in Sections \ref{sec:num_res} and \ref{sec:disc_benefits}, the use of \texttt{INSPIRE} leads to a significantly better spatial reuse of the radio channel. This suggests that the altruistic behaviors of independent APs make \texttt{INSPIRE} less exposed to a sub-optimal convergence. In any case, more coordination between APs is required. By construction, the collection $\mathcal{F} = \left(\mathcal{N}_k\right)_{1 \leq k \leq K}$ is a cover of the set of APs in $\mathcal{W}$ but not a partition. In fact, if $\mathcal{F}$ had only null intersections (i.e., $\forall j \neq k, \mathcal{N}_j \cap \mathcal{N}_k = \emptyset$), then the spatial reuse of the radio channel would already be at its apex and there would be no need for improvement. Figure \ref{fig:intersections} illustrates an example with 5 APs in which the collection $\mathcal{F} = (\mathcal{N}_1, \mathcal{N}_2, \mathcal{N}_3, \mathcal{N}_4, \mathcal{N}_5)$ exhibits multiple non-null intersections. As a result, some APs will receive multiple (different) prescriptions for the configuration of their \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters at their next iteration. For instance, AP~2 will receive prescriptions from APs~1, 3, and 4 in addition to its own prescription. Since APs can only test one configuration at a time, one of those prescriptions must be chosen, or preferably, a consensus between them must be reached. \begin{figure}[!h] \centering \includegraphics[width=3.2in]{rsc/Surroundings.png} \caption{A WLAN represented by a graph with APs depicted as labelled triangles and STAs as black dots. An edge exists between two APs when they are in the communication range of each other.
We use different colors to illustrate the surroundings of each AP in $\mathcal{F}$.} \label{fig:intersections} \end{figure} First, let us note that APs in $\mathcal{N}_k$ with more STAs weigh more in the computation of the local reward $R_k$ than APs in $\mathcal{N}_k$ with fewer STAs (see Equation~\ref{eq:local_reward}). To reflect this property in the consensus to be reached, it seems natural to also give more weight to the prescriptions issued by APs with more STAs. More precisely, we let the weight assigned to the prescription coming from an AP $i$ be proportional to the number of its associated STAs $|s_i|$. This leads us to select an aggregation function such as the weighted marginal median or the weighted average. We opt for the latter since both tend to perform equally well in our tests. Denoting by $P^i_k \in C$ the prescription of AP $i$ for AP $k$, each AP $k$ determines its next configuration (i.e., $x_k^{t+1}$) using Equation~\ref{eq:agg}. \begin{equation} x^{t+1}_k = \frac{1}{\sum_{i \in \mathcal{N}_k} |s_i|} \sum_{i \in \mathcal{N}_k} |s_i|P^i_k \label{eq:agg} \end{equation} \subsection{Algorithm and complexity} \begin{algorithm}[h!]
\caption{\texttt{INSPIRE} run at each AP $k$} \label{alg:solution} \hspace{-5.2cm}\textbf{Input}: subset $\mathcal{N}_k$ of APs \begin{algorithmic}[1] \STATE Init the Gaussian Process $\mathcal{GP}_k$ \WHILE{\textbf{true}} \STATE Find a prescription $P^k = \argmax_{x \in C^{|\mathcal{N}_k|}} EI^t_k(x)$ by gradient descent of (\ref{eq:dei}) \STATE Broadcast $P^k$ to APs in $\mathcal{N}_k$ along with the number of STAs $s_k$ \STATE Receive the prescriptions $P^j_k$ and $s_j$ from AP $j, j \neq k, j \in \mathcal{N}_k$ \STATE Compute the consensus $x^{t+1}_k$ with (\ref{eq:agg}) \STATE Test $x^{t+1}_k$ for $\Delta t$ seconds and compute its selfish reward $f_k$ with (\ref{eq:reward}) \STATE Broadcast $f_k$, $\lambda_k$ and $x^{t+1}_k$ to APs in $\mathcal{N}_k$ \STATE Receive $f_j$, $\lambda_j$ and the configurations $x^{t+1}_j, j \neq k$ from APs in $\mathcal{N}_k$ \STATE Compute the local reward $R_k$ with (\ref{eq:local_reward}) and the local configuration $x^{t+1}$ \STATE Add the pattern $\left(x^{t+1}, R_k\right)$ to $\mathcal{GP}_k$ \ENDWHILE \end{algorithmic} \end{algorithm} Algorithm \ref{alg:solution} summarizes the main steps of our proposed strategy \texttt{INSPIRE} run on each AP of the WLANs. We begin the evaluation of the computational costs of running \texttt{INSPIRE} with the most resource-intensive operation. Inverting the $t \times t$ matrix $\Sigma(X_t, X_t)$ (Line 11 in Algorithm \ref{alg:solution}) must be performed whenever $X_t$ changes. This operation, which is carried out with a Cholesky decomposition $\Sigma(X_t, X_t) = LL^T$, with $L$ a lower triangular matrix updated whenever a new pattern (i.e., a pair comprising a tested configuration $x$ and its associated label $Y(x)$) is added to $\mathcal{GP}_k$, has a complexity of $O\left(t^3\right)$. Gradient descent (Line 3 in Algorithm \ref{alg:solution}) and matrix-vector multiplications of size $t \times t$ by $t \times 1$ both have a complexity of $O\left(t^2\right)$.
These two operations are repeated for each gradient descent, whose number of steps is capped at $m$. Therefore, the computational complexity of \texttt{INSPIRE} is asymptotically $O\left(t^3 + mt^2\right)$. It is worth noting that the dimensionality of the problem (i.e., $\dim C^{|\mathcal{N}_k|} = |\mathcal{N}_k| \dim C$) does not appear in the expression of the asymptotic computational complexity of \texttt{INSPIRE}. This interesting property results from the use of a kernel function by GPs to compare WLAN configurations. This gives \texttt{INSPIRE} the ability to handle arbitrarily dense WLANs, or to optimize more parameters than just \texttt{TX\_PWR} and \texttt{OBSS\_PD}, without taking a hefty toll on its execution time. In fact, the real burden on the execution time of \texttt{INSPIRE} is $t$. This compels us to bound the size of $X_t$ and to find a balance between the amount of collected data on the WLANs' performance and configuration and a quick execution time. We leave approximation methods to reduce the computational complexity of \texttt{INSPIRE} for future work. For now, we recommend using windowing methods (such as a moving window) to bound the size of $X_t$ and hence the computational complexity of \texttt{INSPIRE}. We now study the communication costs incurred by \texttt{INSPIRE}. Lines 4 and 8 of Algorithm \ref{alg:solution} deal with the communication operations necessary for an AP $k$ to learn the selfish rewards and the corresponding normalization constants of the APs in $\mathcal{N}_k$. This incurs additional communication traffic whose throughput $B_k$ can be evaluated as: \begin{equation} B_k = \frac{2(|\mathcal{N}_k| - 1)(F + 4\dim C + 6)}{\Delta t} \label{eq:comcost} \end{equation} where $\dim C$ is the dimensionality of our configuration space, $F$ is the minimal size of a frame, $|\mathcal{N}_k|$ is the number of APs in $\mathcal{N}_k$ and $\Delta t$ is the duration of an iteration.
\textbf{Proof.} Line 4 triggers the transmission of $\dim C + 1$ floats for each AP in $\mathcal{N}_k$. Similarly, line 8 causes the transmission of $\dim C + 2$ floats. Hence, at each optimization step, and given that a float is 4 bytes long, an AP $k$ first sends to its $|\mathcal{N}_k| - 1$ neighbors a frame of size $F+4(\dim C + 1)$ before sending a second frame of size $F + 4(\dim C + 2)$, resulting in the transmission of $2(|\mathcal{N}_k| - 1)(F + 4\dim C + 6)$ bytes. Finally, we obtain the communication overhead for each AP by dividing this quantity by $\Delta t$, as expressed by Equation \ref{eq:comcost}. \section{Performance Evaluation} \subsection{Experimental settings} To evaluate the ability of \texttt{INSPIRE} to improve the spatial reuse of a radio channel through the configuration of the \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters, we consider two distinct scenarios. The first scenario is inspired by the WLAN deployment of Cisco in their offices in San Francisco. In~\cite{cisco_topo}, Cisco provides the location of 60 APs that together deliver wireless connectivity to their employees on a floor. To account for the WLANs' activity from other floors, we consider a three-floor building and we replicate on each floor the same arrangement of APs as in Cisco's offices. This leads to a total of 180 APs spread over three floors. Assuming 18 independent radio channels, we run a radio channel allocation algorithm to determine the radio channel used by each AP. For our first scenario, we consider the subgraph resulting from the channel allocation with the highest density. We use \textbf{T1} to refer to this topology (i.e., arrangement of APs and STAs), which is illustrated in Figure~\ref{fig:topoT1}. \textbf{T1} comprises a total of 10 APs, and we associate 5 STAs with each AP. The second scenario addresses the case of many single-AP WLANs deployed and operated independently in a relatively limited area.
This is typically the case in housing units where each apartment is equipped with its own AP so that the APs are often only a few meters away from a number of others. More specifically, we consider a nine-story building with 216 apartments of 25~m$^2$ each. We randomly position an AP within each apartment as well as 4 STAs per AP. Then, similarly to the first scenario, we apply a radio channel allocation algorithm given a total of 18 radio channels, to obtain the topology of interest denoted by \textbf{T2}. Note that \textbf{T2} consists of 14 APs and 56 STAs. Figure~\ref{fig:topoT2} depicts the topology \textbf{T2}. Although \textbf{T1} and \textbf{T2} both represent dense WLAN scenarios, they cover two different cases. \textbf{T1} exemplifies the case of a single WLAN designed from its inception to cover the open space of a company. Conversely, \textbf{T2} results from the uncoordinated combination of multiple independent WLANs that are thus likely to interfere with each other. Therefore, we expect \textbf{T2} to be a more difficult scenario than \textbf{T1}. \begin{figure} \centering \begin{subfigure}{0.49\textwidth} \includegraphics[width=1\linewidth]{rsc/MER_FLOORS_CH20.png} \caption{Topology \textbf{T1}.} \label{fig:topoT1} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=1\linewidth]{rsc/HLM.png} \caption{Topology \textbf{T2}.} \label{fig:topoT2} \end{subfigure} \caption{The two considered topologies. APs are shown as red triangles and they are connected by two-headed arrows if they lie in each other's communication range.
Associated STAs are shown as dots colored according to their throughputs: warm, yellowish colors indicate that the STA has enough throughput most of the time while, on the contrary, cool, blueish colors indicate that the STA mostly does not get enough throughput under the default configuration of 802.11: 20 dBm for \texttt{TX\_PWR} and -82 dBm for \texttt{OBSS\_PD}.} \label{fig:topos} \end{figure} For each scenario, we consider heavily loaded conditions. APs attempt to transmit frames to each of their associated STAs at a rate of 50 Mbps while the latter attempt to send their frames to the AP at a lower rate of 3.33 Mbps. These assumptions are in line with the downstream traffic largely exceeding the upstream traffic in WLANs. Given the speed of wireless links in 802.11ax, the buffers of the APs will always be full of frames waiting to be sent. More generally, considering APs in saturation undoubtedly represents the most difficult case when dealing with the spatial reuse of a radio channel. Therefore, if \texttt{INSPIRE} manages to significantly improve the WLANs' performance under these circumstances, then it can only do better under normal conditions.
To better appraise the quality of \texttt{INSPIRE}, we also consider a control strategy as well as several state-of-the-art solutions, which were discussed in Section~\ref{sec:soa} and are briefly summarized here: \begin{itemize} \item \texttt{DEFAULT}: Every AP keeps its default configuration for the \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters (i.e., $(-82, 20)$ dBm); \item \texttt{WCNC'15}: Each AP implements a simple distributed algorithm to dynamically update its \texttt{OBSS\_PD} parameter \cite{dsc}; \item \texttt{JNCA'19}: Each AP solves a MAB problem using Thompson sampling to dynamically update its \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters \cite{sr_mab_2}; \item \texttt{MSWiM'21}: Similar to \texttt{JNCA'19}, except that the sampling of new configurations is performed through a multivariate Gaussian mixture, and that the solution is centralized \cite{mab_mswim}. \end{itemize} We implemented \texttt{INSPIRE} (based on the open-source Gaussian process library LibGP \cite{libgp}) as well as the four strategies described above in the open-source network simulator ns-3 \cite{ns3}. ns-3 is a well-established realistic discrete-event simulator that implements most of the network protocols involved in WLAN communications from the Physical up to the Application layer. We report in Table~\ref{tab:ns3_parameters} the simulation parameters used in the rest of this section. Unlike previous works (e.g., \cite{mab_mswim, dsc, fsc, sr_mab, sr_mab_2}), with the exception of \cite{lsr}, our simulations incorporate the mechanism of rate adaptation that lets APs and STAs dynamically vary the speed of their wireless links (through the use of different Modulation and Coding Schemes (MCS)) in response to the quality of the received signal. This is particularly important for the sake of our study since changing the value of \texttt{TX\_PWR} necessarily affects the quality of the received signal and thus the MCS.
Since our simulated WLANs take place in buildings, we choose an appropriate path loss model by combining the models \texttt{ItuR1238} and \texttt{InternalWallsLoss}, both implemented by ns-3. With these propagation models, the signal is decreased by an additional attenuation coefficient each time it goes through a floor or a wall. The attenuation coefficients are respectively -4 dB (which is the default value in \texttt{ItuR1238}) and -8 dB. \begin{table}[!h] \caption{ns-3 parameters.} \label{tab:ns3_parameters} \centering \rowcolors{2}{gray!13}{white} \begin{tabular}{|l p{90mm}|} \hline \textbf{Parameter} & \textbf{Value}\\ \hline ns-3 version & 3.31\\ Number of repetitions & 22\\ Simulation duration & 30 s\\ Duration of an iteration ($\Delta t$) & 75 ms\\ Packet size & 1,464 bytes\\ Downlink traffic & 50.0 Mbps\\ Uplink traffic & 3.33 Mbps\\ Channel size & 20 MHz\\ Frequency band & 5 GHz\\ A-MPDU Aggregation & 4\\ Path loss & \texttt{HybridBuildingsPropagationLossModel} (\texttt{ItuR1238PropagationLossModel} + \texttt{InternalWallsLoss})\\ Wi-Fi Manager & \texttt{IdealWifiManager}\\ \hline \end{tabular} \end{table} We instrumented ns-3 to collect and compute a number of performance metrics. At the end of each iteration, the quality of the spatial reuse is assessed with Equation \ref{eq:reward}, although distributed strategies may internally use the local reward defined in Equation~\ref{eq:local_reward}. Then, we compute the classical metric used to analyze the efficiency of a strategy at dealing with a MAB problem: (i) The cumulative regret (with Equation \ref{eq:cum_reg} using the global reward in Equation~\ref{eq:reward}).
We also collect the following performance metrics to reflect the effect of each strategy on the behavior of the WLANs and of their STAs: (ii) The number of starving STAs, which we define as STAs experiencing a very low throughput (namely, less than 10\% of their attainable throughput), (iii) The cumulated throughput, which simply sums all STAs' throughputs, and (iv) The Jain's fairness index \cite{jain}, which quantifies how evenly the STAs are served by the APs. Each simulation lasts 30 seconds and we replicated them independently 22 times to obtain the first, second (i.e., median), and third quartiles of each metric. When the quartiles of a performance metric vary too much within a single simulation, we apply an exponential moving average (with $\alpha = 0.04$) to extract the underlying trends of the quartile sequences. The metrics are collected throughout the whole duration of the simulation. At the end of each iteration, we compute all the performance metrics and then we refer to the current strategy to decide what will be the next configuration of the WLANs. Since an iteration lasts $\Delta t$ = 75 ms and a simulation lasts 30 seconds, the quality of each solution is assessed over 400 iterations. \subsection{Numerical results} \label{sec:num_res} Figure~\ref{fig:t1_res} illustrates the performance metrics delivered by the ns-3 simulator for each strategy in the case of topology \textbf{T1}. The cumulative regret, represented in Figure \ref{fig:t1_cumreg}, indicates which strategy has performed the best at any time of the simulation. \texttt{INSPIRE} is found to be the most efficient strategy, reducing the cumulative regret by 70\% compared to \texttt{DEFAULT} and by over 50\% compared to \texttt{WCNC'15}, which happens to be the most efficient state-of-the-art strategy. We now look at the other performance metrics to better understand how much \texttt{INSPIRE} is able to improve the behavior of the WLAN and of its STAs.
Taking \texttt{DEFAULT} as a baseline, Figure \ref{fig:t1_starv} shows that \texttt{INSPIRE} reduces the number of STAs in starvation by 80\% while Figures \ref{fig:t1_fair} and \ref{fig:t1_cumthrough} demonstrate that our proposed solution manages to find a fairer sharing of resources (+133\%) and to increase the cumulated throughput (+400\%). \begin{figure}[!h] \centering \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\linewidth]{rsc/MER_FLOORS_CH20_GP_CumReg.png} \caption{Cumulative Regret} \label{fig:t1_cumreg} \end{subfigure} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{rsc/MER_FLOORS_CH20_GP_Starvations.png} \caption{Starvations} \label{fig:t1_starv} \end{subfigure} \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{rsc/MER_FLOORS_CH20_GP_Fairness.png} \caption{Fairness} \label{fig:t1_fair} \end{subfigure} \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{rsc/MER_FLOORS_CH20_GP_CumThroughput.png} \caption{Cumulated Throughput} \label{fig:t1_cumthrough} \end{subfigure} \caption{Performance metrics delivered on topology \textbf{T1} by each strategy.} \label{fig:t1_res} \end{figure} We now turn to the case of topology \textbf{T2}. First, we observe in Figure \ref{fig:t2_cumreg} that among the considered strategies, \texttt{INSPIRE} is the one that decreases the cumulative regret the most, with a decline of about 36\% compared to the \texttt{DEFAULT} configuration at the end of the simulation. The proposed solution also outperforms \texttt{MSWiM'21}, which is found to be the best state-of-the-art strategy on this topology, by a margin of 14\%. Looking at the performance of the WLANs and of their STAs, Figure~\ref{fig:t2_starv} shows that \texttt{INSPIRE} is able to reduce the number of STAs starving for throughput by 36\% when compared to the \texttt{DEFAULT} configuration.
Similarly, with \texttt{INSPIRE}, the measure of fairness among STAs increases by 28\% and the cumulated throughput of STAs nearly doubles (see Figures \ref{fig:t2_fair} and \ref{fig:t2_cumthrough}). Overall, through the study of topologies \textbf{T1} and \textbf{T2}, \texttt{INSPIRE} demonstrates its superiority over the other state-of-the-art strategies. The significant improvements brought by our proposed solution on all performance metrics are permanently obtained after only 100 iterations (7.5 seconds). In other words, in less than 10 seconds, \texttt{INSPIRE} manages to significantly improve the behavior of the WLANs and of the associated STAs thanks to a better spatial reuse of the radio channel. This efficiency in searching for and finding an adequate configuration of the \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters at each AP of the WLANs mostly results from the distributed, altruistic use of Gaussian processes, which we discuss further in the next section.
\begin{figure}[!h] \centering \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\linewidth]{rsc/HLM_GP_CumReg.png} \caption{Cumulative Regret} \label{fig:t2_cumreg} \end{subfigure} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{rsc/HLM_GP_Starvations.png} \caption{Starvations} \label{fig:t2_starv} \end{subfigure} \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{rsc/HLM_GP_Fairness.png} \caption{Fairness} \label{fig:t2_fair} \end{subfigure} \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{rsc/HLM_GP_CumThroughput.png} \caption{Cumulated Throughput} \label{fig:t2_cumthrough} \end{subfigure} \caption{Performance metrics delivered on topology \textbf{T2} by each strategy.} \label{fig:t2_res} \end{figure} \section{Discussion} \label{sec:disc} \subsection{Seemingly similar problems with vastly different complexity} The topologies \textbf{T1} and \textbf{T2} may look similar, but they are not, and \texttt{INSPIRE} performed differently on each of them. By the end of the optimization process (i.e., 400 steps), the performance metrics for \textbf{T1} were improved by at least 70\% from their initial values under the \texttt{DEFAULT} configuration, and only 7 STAs (representing 14\% of the STAs) were still starving for throughput. In the case of \textbf{T2}, the progress was smaller, with 14 STAs (representing 25\% of the STAs) remaining in starvation. This difference results from the location of the STAs relative to the APs. Looking at Figure \ref{fig:topos}, it appears that STAs in \textbf{T2} are further from their associated AP than the ones in \textbf{T1}. As a consequence, STAs are also closer to a concurrent AP. While STAs in \textbf{T1} are on average 10 times closer to their associated AP than to a concurrent AP, this ratio drops to an average value of 4 for STAs in \textbf{T2}. With STAs closer to concurrent APs, the spatial reuse problem becomes more difficult.
As a matter of fact, to reach its associated STA, the AP must transmit at a greater power, increasing its chances of causing interference to the surrounding APs. Similarly, STAs that are far away from their AP are significantly affected by the transmissions of concurrent APs. To verify that \textbf{T2} constitutes a more complex example than \textbf{T1}, we examine the shape of the reward function in both cases. Because of the high dimensionality of the arguments of the reward function and the lack of a closed-form expression, we resort to a slicing technique to provide a visualization of the reward function in Equation \ref{eq:reward}. We postpone to Appendix \ref{sec:slices} the details of this slicing. Figure \ref{fig:slices} illustrates the obtained random slices in the cases of \textbf{T1} and \textbf{T2}. Figure~\ref{fig:t1_slice} suggests a relatively smooth reward function in \textbf{T1}. On the other hand, the reward function in the case of \textbf{T2} is much more erratic, featuring a lot of local maxima as shown by Figure~\ref{fig:t2_slice}. Interestingly, Figure~\ref{fig:t1_slice} shows that \texttt{INSPIRE} succeeded in finding a configuration that is maximal in this (random) slice for the case of \textbf{T1}. We also notice that many configurations of equivalent efficiency exist, which also reduces the complexity of the search for an adequate configuration. Conversely, in the case of \textbf{T2}, \texttt{INSPIRE} does not find the best configuration since the slice of Figure~\ref{fig:t2_slice} shows that, at least, a 6\% better configuration exists. Nonetheless, \texttt{INSPIRE} was able to find an efficient configuration. \begin{figure}[!h] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{rsc/RandomBasis_MERAKI_Circle.png} \caption{Slice for \textbf{T1}, with the best solution found by \texttt{INSPIRE} at (0, 0) and two scaled random unit vectors.
The maximum of the slice is shown with a red circle.} \label{fig:t1_slice} \end{subfigure} \hspace{0.2cm} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{rsc/RandomBasis_HLM_Circle.png} \caption{Slice for \textbf{T2}, with the best solution found by \texttt{INSPIRE} at (0, 0) and two scaled random unit vectors. The maximum of the slice is shown with a red circle.} \label{fig:t2_slice} \end{subfigure} \caption{Random slices of the global reward function for the topologies \textbf{T1} and \textbf{T2}.} \label{fig:slices} \end{figure} \subsection{Benefits of distributed prescriptions} \label{sec:disc_benefits} Finally, to justify our choice of only letting APs exploit local information and prescribe network configurations to their surrounding APs, we consider alternative versions of \texttt{INSPIRE}: (i) \texttt{GPs w/o agg.} where each AP keeps using the local, altruistic reward of Equation \ref{eq:local_reward} but does not aggregate local prescriptions, and prescribes only for its own configuration and (ii) \texttt{Single GP}, a centralized version of \texttt{INSPIRE} where a single GP has complete knowledge of the WLANs and decides on the configuration of every AP. We compare these alternative strategies with \texttt{INSPIRE} and the \texttt{DEFAULT} strategy by considering their cumulative regret on topologies \textbf{T1} and \textbf{T2} in Figure \ref{fig:alternatives}. Figure \ref{fig:t1_compare} shows that, on \textbf{T1}, the alternative strategies have a cumulative regret 46\% lower than \texttt{DEFAULT}. However, \texttt{INSPIRE} manages to reduce its cumulative regret by an extra 25\%. Despite the greater complexity of the reward function in \textbf{T2}, this extra reduction factor persists, albeit at a value of 13\%, as shown in Figure \ref{fig:t2_compare}.
Given the significant gap between \texttt{INSPIRE} and \texttt{GPs w/o agg.}, it is clear that prescribing for surrounding APs and aggregating those prescriptions leads to a more altruistic behaviour, which in turn brings additional benefits at the scale of the WLANs. More surprisingly, \texttt{INSPIRE} outperforms its centralized counterpart \texttt{Single GP}. At first glance, this is counter-intuitive since \texttt{Single GP} has complete knowledge of, and control over, the APs of the WLANs. However, \texttt{Single GP} involves a single agent to optimize a function of high complexity, which has to deal with $K \dim C$ variables. Decentralization breaks down this task into simpler local optimization problems of lower dimension. With \texttt{INSPIRE}, each AP $k$ only deals with $|\mathcal{N}_k| \dim C$ variables, which is significantly less than $K \dim C$ for large WLANs. Overall, APs run by \texttt{INSPIRE} solve simpler optimization problems and behave altruistically by ensuring a consensus with surrounding APs. By doing so, they manage to improve the spatial reuse of the radio channel at the scale of the WLANs.
\begin{figure}[!h] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{rsc/MER_FLOORS_CH20_GP_Compare.png} \caption{Cumulative regret on topology \textbf{T1}.} \label{fig:t1_compare} \end{subfigure} \hspace{0.2cm} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{rsc/HLM_GP_Compare.png} \caption{Cumulative regret on topology \textbf{T2}.} \label{fig:t2_compare} \end{subfigure} \caption{Comparison of the cumulative regrets of the different versions of \texttt{INSPIRE} on \textbf{T1} and \textbf{T2}.} \label{fig:alternatives} \end{figure} \section{Conclusions} \label{sec:conc} In this work, we have presented \texttt{INSPIRE}, a reinforcement learning method to improve the spatial reuse of radio channels in WLANs by configuring two parameters of APs, the transmission power (\texttt{TX\_PWR}) and the sensitivity threshold (\texttt{OBSS\_PD}), which can be dynamically configured since the latest Wi-Fi amendments. To address the difficult problem of sharing the resource of a radio channel efficiently and fairly, \texttt{INSPIRE} works as a distributed solution where each AP solves a local Multi-Armed Bandit problem with the help of information and actions limited to its surrounding APs (i.e., those within its communication range). The development of the solution includes (i) an intuitive quantification (based on STAs' throughputs) of the ``goodness'' of a configuration of \texttt{TX\_PWR} and \texttt{OBSS\_PD} for concurrent APs in WLANs, both at local and global scales, (ii) the use of an acquisition function and Gaussian processes to find local configurations that maximize approximations of local reward functions, and (iii) an altruistic behavior facilitated by prescriptions to surrounding APs along with a consensus method which aggregates the prescriptions of surrounding APs for the ``greater good'' of the WLANs.
\texttt{INSPIRE} has been evaluated and compared with other state-of-the-art strategies addressing the same problem, using the open-source network simulator ns-3, which implements all the layers of the network stack. The different strategies were compared on two examples inspired by real-life deployments of dense WLANs in both professional and domestic environments. \texttt{INSPIRE} was found to outperform other state-of-the-art strategies by significantly reducing the number of STAs in starvation and increasing the cumulated throughput of the WLANs in only a few seconds. As future work, we plan to assess the quality of \texttt{INSPIRE} for a specific class of WLANs where STAs are mobile (e.g., customers in a shopping mall). Another natural follow-up would be to experiment with \texttt{INSPIRE} on real hardware in a testbed. \bibliographystyle{abbrv} \section{Introduction} Since their introduction in the late 1990s, WLANs (Wireless Local Area Networks) have rapidly overtaken wired networks to become the primary means of connecting devices to the Internet. According to Cisco \cite{cisco2019wp}, they will account for 57\% of the Internet traffic in 2022, compared to 22\% and 21\% for mobile and wired networks, respectively. The current WLAN architecture is defined by the IEEE standard 802.11 (commercially known as Wi-Fi). APs (Access Points) are the centerpiece of this setup, serving as relays for wireless devices; we refer to the latter as STAs (Stations) throughout this paper. Typically, each AP is equipped with a wired interface giving access to the LAN and then the Internet, as well as a wireless interface providing connectivity to nearby STAs through radio communications. Space on the radio spectrum is a scarce resource as it is often shared by multiple WLANs. The radio bands used by the IEEE 802.11 standard (currently 2.4 and 5 GHz, soon to be joined by 6 GHz) are divided into channels.
Different APs can then be assigned to different, orthogonal channels, enabling them to transmit at the same time without interfering with each other. Equally important, thanks to the limited range of radio waves, APs configured on the same radio channel can transmit concurrently provided that they are sufficiently far away from each other. This ability was central to the success of WLANs and is commonly known as the spatial reuse of radio channels. However, the spatial reuse of radio channels as performed by today’s WLANs may be reaching its limit. This is particularly true in places where WLAN deployments are very dense, such as offices, shopping malls and train stations. This is because, in these areas, the distance between APs is small, so that an AP is more likely to be blocked by the transmissions of one or several nearby APs operating on the same channel. This in turn takes a hefty toll on the WLANs' performance. A solution to this issue can be found in the 2021 amendment to 802.11 known as 802.11ax \cite{802.11-2021}, which enables the dynamic configuration of two key parameters at each AP: \texttt{TX\_PWR} and \texttt{OBSS\_PD}. The former parameter specifies the power level (in dBm) at which the AP transmits its data. The latter parameter defines the sensitivity threshold (in dBm): if the received energy is below this threshold, the AP considers the radio channel clear and thus available for transmission; otherwise, the AP must defer its transmissions. While prior amendments to 802.11 held the \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters constant (typically 20 dBm and -82 dBm, respectively), 802.11ax has made them dynamic, with values spanning from 1 to 21 dBm for the former and from -82 to -62 dBm for the latter.
Adjusting the configurations of \texttt{TX\_PWR} and \texttt{OBSS\_PD} can help overcome the limitations of spatial reuse in dense environments by allowing APs that are close to each other to transmit on the same channel. Figure~\ref{fig:toy} depicts a simple example of two APs operating on the same radio channel and illustrates how different configurations of the \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters can lead to different levels of performance. \begin{figure}[!h] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{img/introa.png} \caption{With the default configuration of \texttt{TX\_PWR} and \texttt{OBSS\_PD}, the two APs are within each other’s detection range so that they cannot transmit simultaneously.} \label{fig:toya} \end{subfigure} \hspace{0.2cm} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{img/introb.png} \caption{The value of \texttt{TX\_PWR} is reduced on both APs so that they do not belong to each other’s detection range. Under this configuration, concurrent transmissions from the two APs may occur at the same time.} \label{fig:toyb} \end{subfigure} \caption{Adequately configuring the \texttt{TX\_PWR} parameter of APs can significantly improve the spatial reuse of radio channels in WLANs. Note that concurrent transmissions of the two APs could also be attained by increasing \texttt{OBSS\_PD} at each AP. While similar, reducing \texttt{TX\_PWR} and increasing \texttt{OBSS\_PD} may affect the WLANs' performance differently (see Table 2 of \cite{sr_mab_2} for more details).} \label{fig:toy} \end{figure} Despite the potential of 802.11ax to improve the spatial reuse of radio channels, finding an adequate configuration of \texttt{TX\_PWR} and \texttt{OBSS\_PD} for the APs in a WLAN is a complex problem. First, an adequate configuration is very topology-specific. In other words, knowing a suitable configuration for a given scenario is of no value for another scenario.
Second, a distributed solution is preferable to a centralized one. Not only does this avoid a search in an otherwise very high-dimensional space, but it also removes the assumption of having a centralized entity (e.g., a controller) deciding the configurations of all APs. This assumption is acceptable if all the interfering APs belong to the same WLAN but unrealistic if they belong to concurrent WLANs. Third, forecasting the performance of WLANs with an analytical model that can subsequently help establish ``optimal’’ configurations is difficult. The level of detail in such models is either too coarse, and thus inapplicable, or adequate but unscalable when scenarios involve multiple APs and STAs. Instead, APs can apply new configurations of their parameters, measure the effect of these changes on their performance, and exchange their experience with surrounding APs. This paves the way for the use of online and reinforcement learning techniques in a distributed manner. In this paper, we present \texttt{INSPIRE}, a distributed solution performing local Bayesian optimizations based on GPs (Gaussian Processes) to improve the spatial reuse of WLANs by adequately configuring the \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters at each AP. \texttt{INSPIRE} makes no explicit assumptions about the topology of WLANs or the radio environments and thus can apply to any WLANs. Additionally, it can operate even when the APs to be configured belong to different concurrent WLANs. It is based on (i) an intuitive reward function that combines the performance of STAs and returns a score reflecting the overall performance of a given WLAN configuration, (ii) the use of GPs to explore promising new WLAN configurations, and (iii) a consensus algorithm to coordinate the APs’ efforts into reaching collectively improved performance for the WLANs.
More precisely, the contributions of this paper are as follows: \begin{itemize} \item We demonstrate the ability of GPs to approximate the reward function, which reflects the performance of WLANs, and to explore efficient AP configurations; \item We establish the superiority of a divide-and-conquer approach to handle the complex problem of setting the \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters at each AP; \item We introduce \texttt{INSPIRE}, a distributed solution that lets the APs of concurrent WLANs automatically adapt their internal parameter settings in their own interest as well as in the interest of obtaining a more efficient spatial reuse of radio channels; \item We evaluate the efficiency of \texttt{INSPIRE} on real-life inspired case studies using a detailed network discrete-event simulator and compare its performance with several state-of-the-art solutions (centralized or distributed). \end{itemize} The remainder of this paper is organized as follows. The next section discusses the related work. Section~\ref{sec:sol} describes the proposed strategy to address the issue of spatial reuse of a radio channel in WLANs. Its performance is evaluated in Section~\ref{sec:num_res}, and Section~\ref{sec:disc} deepens the understanding of the numerical results as well as of some of the choices made in our strategy. Section~\ref{sec:conc} concludes this paper. \section{Related work} \label{sec:soa} The release of the IEEE 802.11ax amendment \cite{802.11-2020} in late 2021 marks a new era for the spatial reuse of radio channels of WLANs: Nodes can dynamically adjust their transmission power (\texttt{TX\_PWR}) and sensitivity threshold (\texttt{OBSS\_PD}) parameters. For a detailed explanation of how this new feature is implemented, we refer the interested reader to~\cite{Wilhem2021}, which also provides simple scenarios to illustrate its potential benefits.
Years before IEEE released the 802.11ax amendment, the idea of dynamically updating \texttt{TX\_PWR} and \texttt{OBSS\_PD} had already been explored by some researchers. The pioneering work of \cite{Zhu2004} presents an analytical model that, based on the current radio channel conditions, dynamically configures \texttt{OBSS\_PD} on each node of a Wi-Fi-based mesh network. Concurrently, \cite{Kim2004} established that adapting \texttt{TX\_PWR} can lead to increased throughput and reduced energy consumption. More recently, in 2020, \cite{Qiu2020} cast the issues of positioning the APs of a WLAN and choosing their \texttt{TX\_PWR} as an optimization problem. The authors provide a solution to this problem that delivers a static configuration of \texttt{TX\_PWR} for a WLAN. However, their solution accounts for neither the number of STAs nor the type of traffic in the WLAN. The difficulty of accurately modeling the dependency between the configuration parameters of a large WLAN and its performance is a strong hurdle to the development of spatial reuse strategies based on analytical models. As a result, most of the proposed strategies are data-driven. Adaptive by construction, they seem to constitute promising candidates in the search for configurations that improve the spatial reuse of a radio channel and the performance of WLANs. Machine learning (ML) techniques are natural candidates for addressing problems requiring a data-driven approach, and the spatial reuse problem is no exception. \cite{fsc} addresses the problem of configuring \texttt{TX\_PWR} and \texttt{OBSS\_PD} with a two-scale solution using artificial neural networks (ANNs). In their strategy, STAs and APs first adjust their value of \texttt{OBSS\_PD} to minimize interference. Then, an ANN, which was trained offline through simulation, is used to increase the fairness between STAs in terms of attained throughput.
However, given the vast diversity of WLAN topologies, the offline learning of the ANN appears to be a clear limitation to the generalization of this strategy. An online learning procedure is proposed by \cite{mab_mswim}, which uses reinforcement learning, and more precisely the Multi-Armed Bandit (MAB) framework, to find the optimal configuration of \texttt{TX\_PWR} and \texttt{OBSS\_PD} in a WLAN. The approach comprises two agents, one sampling promising configurations through a multivariate normal distribution, and the other identifying the best configuration among those already sampled with Thompson sampling and Normal-Gamma priors. These two ML solutions \cite{fsc,mab_mswim} were tested on the network simulator ns-3 and led to significant WLAN improvements. However, in order to perform their optimization, they both assume the presence of a central controller that has access to and control over all the APs in the WLANs. By construction, these approaches are centralized, and hence cannot be applied to cases where concurrent WLANs managed by different owners interfere with each other. Distributed approaches are undisputedly better suited than centralized approaches to handling sets of concurrent WLANs. \cite{dsc} introduces a distributed algorithm named Dynamic Sensitive Control which is run on every STA of a WLAN. In short, each STA tries to dynamically reduce its value of \texttt{OBSS\_PD} to favor concurrent transmissions while keeping it high enough to ensure a high-quality signal reception. Similarly, \cite{lsr} proposes Link-aware Spatial Reuse (LSR), a distributed algorithm designed for the APs. In LSR, each AP chooses another AP, which is allowed to transmit concurrently, and then prescribes a value of \texttt{TX\_PWR} for the selected AP. These two algorithms rely on a single measurement metric reflecting the quality of the received signal, namely the Received Signal Strength, to choose the nodes' configurations.
More recently, strategies using distributed MAB approaches have been proposed \cite{sr_mab, sr_mab_2}. They both use Thompson sampling with Gaussian priors to find the best pair of \texttt{TX\_PWR} and \texttt{OBSS\_PD} at each AP. In~\cite{sr_mab}, each AP seeks to maximize the throughput of its associated STAs. On the other hand, in~\cite{sr_mab_2}, the authors assume that every AP has access to the performance of all other APs in the WLAN; then each AP attempts to maximize a global reward that takes into account the performance of all the other nodes. Both strategies \cite{sr_mab, sr_mab_2} were evaluated in a self-made simulator with simple random scenarios. Table \ref{tab:soa} summarizes the main characteristics of the data-driven strategies discussed above. It shows that, out of the six considered strategies, two (i.e., \cite{dsc,fsc}) only focus on the configuration of the \texttt{OBSS\_PD} parameter (keeping the \texttt{TX\_PWR} parameter fixed). To help compare the different strategies, we introduce two concepts: ``pull area'' and ``push area''. The pull area indicates the area from which each node is assumed to obtain information (this typically includes parameter configurations and performance measurements). Depending on the strategy being considered, the pull area can include just the node itself, the surrounding nodes, or the whole set of nodes in the WLANs. The push area designates the area that each AP can influence, typically through the prescription of parameter configurations. In the case of centralized strategies (e.g., \cite{fsc, mab_mswim}), the pull and push areas naturally cover all the WLANs. We distinguish partially distributed strategies (e.g., \cite{sr_mab_2}), wherein either the pull or the push area extends to all the WLANs, from fully distributed strategies (e.g., \cite{dsc, lsr, sr_mab}), in which both the pull and push areas cover less than the whole set of WLANs.
We observe in Table~\ref{tab:soa} that only three out of the six state-of-the-art strategies can be considered fully distributed. The last four columns of Table~\ref{tab:soa} pertain to the performance evaluation used to validate each of these strategies. It appears that most strategies were evaluated without considering the dynamic selection of the Modulation and Coding Scheme (MCS), which sets the speed of the wireless links, or bidirectional (upstream and downstream) traffic. This can be seen as a strong limitation since it overlooks some associated trade-offs. For instance, increasing the value of \texttt{TX\_PWR} certainly enables communications to operate at a faster data rate (larger MCS), but at the same time, it increases the level of interference with surrounding APs. Additionally, most strategies were evaluated on relatively simple scenarios (with a few APs and a limited number of radio channels), often using a self-made network simulator. \begin{table}[!h] \caption{Comparison of the state-of-the-art data-driven strategies. The last column refers to the size of the scenarios involved in the performance evaluation of the strategy.
For instance, 216/18 means the evaluation comprises 216 APs distributed over 18 radio channels.} \label{tab:soa} \centering \rowcolors{2}{gray!13}{white} \begin{tabular}{|l c c c c c l l|} \hline {Proposed} & Tuning of & Pull & Push & Dynamic & {Traffic} & {Simulator} & {APs /} \\ {solutions} & {\texttt{TX\_PWR}} & area & area & MCS & Up/Down & & {channels} \\ \hline WCNC'15~\cite{dsc} & & STA & {STA} & & Up & Self-made & 100/3 \\ WCNC'21~\cite{lsr} & \checkmark & {AP} & {AP} & \checkmark & Down & {ns-3} & 6/1\\ Globecom'20~\cite{fsc} & & WLAN & WLAN & & {Up/Down} & {ns-3} & 3/1\\ ADHOC'19~\cite{sr_mab} & \checkmark & {AP} & {AP} & & Down & Self-made & 8/1\\ JNCA'19~\cite{sr_mab_2} & \checkmark & WLAN & {AP} & & Down & Self-made & 8/1 \\ MSWiM'21~\cite{mab_mswim} & \checkmark & WLAN & WLAN & & Down & {ns-3} & 10/1 \\ \texttt{INSPIRE} & \checkmark & \begin{tabular}{@{}c@{}} {Surrounding}\\ {APs}\end{tabular} & \begin{tabular}{@{}c@{}} {Surrounding}\\ {APs}\end{tabular} & \checkmark & {Up/Down} & {ns-3} & {216/18} \\ \hline \end{tabular} \end{table} In this paper, we propose a fully distributed strategy to address the problem of the spatial reuse of radio channels in WLANs. The proposed strategy can be applied to any arrangement of WLANs and its novelty is mostly twofold. First, to the best of our knowledge, it is the first strategy making use of Gaussian Processes to explore promising WLAN configurations in the quest to discover the optimal one. Gaussian processes are recognized tools to deal with the exploration vs. exploitation dilemma (see \cite{srinivas2009gaussian, chowdhury2017kernelized}), which is at the center of the spatial reuse problem. Second, unlike the existing fully distributed strategies, \texttt{INSPIRE} allows each AP to account for its surroundings thanks to pull and push areas broader than a single node.
Through the use of a simple consensus method, the APs of the WLANs manage to behave altruistically, selecting configurations for the ``greater good'' of the WLANs. We also introduce realistic scenarios, inspired by real-life WLANs, with dynamic MCS and bidirectional traffic, to evaluate and compare the efficiency of all the considered strategies. \section{Proposed solution} \label{sec:sol} \subsection{WLANs under study} Let $\mathcal{W}$ denote the set of concurrent WLANs under study, each of which comprises one or more APs. We let $K$ be the total number of APs in $\mathcal{W}$ that operate on the radio channel of interest. We denote by $s_k$ the set of STAs associated with AP $k$ and by $S$ the total number of STAs in the considered radio channel of $\mathcal{W}$. Thus, we have: $S = \sum_{k = 1}^K |s_k|$. Finally, we use $\mathcal{N}_k$ to designate the set of APs that are within the communication range of AP $k$ (when every AP is under the default configuration of the \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters). Note that AP $k$ itself belongs to $\mathcal{N}_k$. We refer to the APs in $\mathcal{N}_k$ as the surroundings of AP $k$. We make no assumptions on $\mathcal{W}$, including on the specific arrangement of its APs and STAs, other than the three detailed below. First, we assume that every AP $k$ is able to exchange control frames (possibly through its beacon frames) with its surrounding APs (i.e., the ones in $\mathcal{N}_k$). By the same token, we suppose that at least one AP $k$ has another AP in its communication range (\textit{i.e.}, $\exists k \in [1\mathrel{{.}\,{.}}\nobreak K], \mathcal{N}_k \neq \{k\}$), otherwise the spatial reuse of the radio channel would already be at its apex. Second, we assume that the $K$ APs have their \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters configurable (as is the case since the introduction of the 802.11ax amendment).
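As a minimal illustration of these notations, the neighborhood sets $\mathcal{N}_k$ could be derived from AP positions and a fixed communication range. The positions, the 30 m range, and the helper name below are assumptions made for this sketch, not values taken from our scenarios.

```python
import math

# Toy AP positions (meters) and an assumed default-configuration
# communication range; both are illustrative, not from our scenarios.
ap_positions = {1: (0.0, 0.0), 2: (10.0, 0.0), 3: (100.0, 0.0)}
COMM_RANGE = 30.0

def neighborhood(k, positions, comm_range):
    """Return N_k: the APs within communication range of AP k (k included)."""
    xk, yk = positions[k]
    return {j for j, (xj, yj) in positions.items()
            if math.hypot(xj - xk, yj - yk) <= comm_range}

N = {k: neighborhood(k, ap_positions, COMM_RANGE) for k in ap_positions}
```

Here APs 1 and 2 are mutual neighbors, while $\mathcal{N}_3$ only contains AP 3 itself; only APs 1 and 2 would therefore exchange control frames in this toy layout.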
We use $x_k^t$ to denote the configuration of AP $k$ with regards to its two \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters at time $t$. Analogously, $x^t$ represents the configuration of the $K$ APs from $\mathcal{W}$ at time $t$. Thus, we have: $x_k^t \in C = [-82\mathrel{{.}\,{.}}\nobreak -62] \times [1\mathrel{{.}\,{.}}\nobreak 21]$ dBm and $x^t \in C^K$. Lastly, we assume that each AP in $\mathcal{W}$ can periodically run performance tests and obtain, in return, the mean throughput attained by each of its STAs over a short time interval $\Delta t$. More formally, we use the vector $T^t \in (\mathbb{R}^{+})^{S}$ to denote the throughputs attained by the $S$ STAs of $\mathcal{W}$ given the WLAN configuration $x^t$ at time $t$. Throughout this paper, we sometimes refer to $T^t$ as $T(x^t)$ to explicitly show the dependency between the STAs' throughputs and the APs' configurations. In this work, we seek to discover an adequate configuration $x^*$ of the $K$ APs composing $\mathcal{W}$ that improves the collective experience of the $S$ STAs through a better reuse of their radio channel. We address this problem as a reinforcement learning task in which, at regular time intervals $t$, the APs collect measurements $T^t$ associated with their current configuration $x^t$, and need to decide their next configuration $x^{t+1}$. The obstacles towards that objective are mostly threefold. (i) We need to define a meaningful objective function that APs will attempt to optimize collectively; (ii) We are facing the well-known exploration vs. exploitation dilemma since the search for an adequate configuration of the WLANs should be as seamless as possible (without disrupting the STAs). This leads us to cast the problem as a MAB problem where the arms are the WLANs' configurations.
Following the MAB terminology, we refer to the objective function as the reward function; (iii) We are looking for a strategy that can be applied in a distributed way since it would be in general unrealistic to assume that (concurrent) APs have a fine knowledge beyond their surroundings. \subsection{Reward function} We need to define a reward function $f$ that appraises the ``goodness'' of a configuration $x$ with regard to the WLANs' performance. Because multiple criteria may be considered in the definition of $f$, there is no universal definition. However, assessing the quality of a configuration $x$ can be derived from the STAs' throughputs $T(x)$ obtained with APs configured with $x$. We distinguish three main types of reward functions in the literature: \begin{itemize} \item $f(x) = \sum_{T_i \in T(x)} T_i$ where $f$ is simply computed as the sum of all STAs' throughputs. This function, often referred to as the cumulated throughput of the WLANs, is highly exposed to the ``scapegoat'' objection: configurations favorably assessed by this function may yet result in severe unfairness among STAs. \item $f(x) = \min_{T_i \in T(x)} T_i$ represents an effective way of preventing the ``scapegoat'' problem. However, this function has a low resolution since its computation overlooks all STAs' throughputs but the lowest. \item $f(x) = \prod_{T_i \in T(x)} T_i$ is called the proportional fairness (PF) and provides a convenient trade-off between fairness and cumulated throughput. However, PF exhibits a high variability since $\frac{\partial f}{\partial T_i} = \prod_{T_j \in T(x), j \neq i} T_j$. \end{itemize} We choose PF for our reward function $f$ but we take its logarithm. This lowers its variability, which becomes: $\frac{\partial f}{\partial T_i} = \frac{1}{T_i}$ (note that $T_i$ is typically much larger than 1). This also emphasizes the contribution of STAs with low throughputs in the computation of $f$.
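A small numerical sketch contrasts the three candidate rewards; the throughput values (in Mb/s) are assumptions chosen to exhibit the ``scapegoat'' effect: the cumulated throughput ranks a configuration that starves one STA above a fair one, while the min and log-PF rewards do not.

```python
import math

def f_sum(T):
    """Cumulated throughput of the WLANs."""
    return sum(T)

def f_min(T):
    """Worst-case reward: only the lowest throughput counts."""
    return min(T)

def f_logpf(T):
    """Logarithm of the proportional fairness (PF)."""
    return sum(math.log(t) for t in T)

# Illustrative throughputs (Mb/s): one configuration starves a STA
# (the "scapegoat"), the other shares the channel evenly.
T_unfair = [60.0, 58.0, 62.0, 0.5]
T_fair = [40.0, 40.0, 40.0, 40.0]
```

The sum reward prefers `T_unfair` (180.5 vs. 160 Mb/s) despite the starved STA, whereas both the min and log-PF rewards prefer `T_fair`, which is the behavior motivating our choice of log-PF.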
Additionally, and for practical reasons, we normalize $f$ so that its return values remain in $[0, 1]$. To do so, we simply need to compute a normalization constant $\lambda$ that depends on the maximum theoretically attainable throughput of each STA. Denoting by $T^*$ the set of the maximum attainable throughputs for the $S$ STAs of $\mathcal{W}$, our reward function becomes: \begin{equation} \begin{split} f(x) &= \frac{1}{\lambda} \log \prod_{T_i \in T(x)} T_i\\ &= \frac{1}{\lambda} \sum_{T_i \in T(x)} \log T_i \end{split} \label{eq:reward} \end{equation} with $\lambda = \sum_{T_i^* \in T^*} \log T_i^*$. However, to compute Equation~\ref{eq:reward}, an AP must have a complete knowledge of the performance attained by the STAs of all APs or, at least, be able to communicate with all the APs in $\mathcal{W}$. This is in contradiction with our assumption that APs only have a partial knowledge of $\mathcal{W}$, limited to their surrounding APs. To design a reward function compatible with the distributed case, we proceed as follows. Each AP $k$ applies Equation~\ref{eq:reward} but restricted to the set of its associated STAs and obtains in return a ``selfish'' reward denoted by $f_k(x_k)$ and a normalization constant $\lambda_k$. Previous work \cite{sr_mab} has shown that considering such selfish rewards may have a positive but limited impact on the WLANs' performance. Therefore, we introduce a more altruistic reward, denoted by $R_k$, that accounts not only for the ``selfish'' reward of AP $k$ (i.e., $f_k$) but also for the rewards of the surrounding APs (i.e., the ones in $\mathcal{N}_k$). The ``altruistic'' local reward of AP $k$ is computed as: \begin{equation} R_k(x) = \frac{1}{\sum_{i \in \mathcal{N}_k} \lambda_i} \sum_{i \in \mathcal{N}_k} \lambda_i f_i(x_i) \label{eq:local_reward} \end{equation} where $(\lambda_i, f_i(x_i))$ for $i \in \mathcal{N}_k$ represent the normalization constant and the selfish reward for each of the surrounding APs of AP $k$.
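Equations \ref{eq:reward} and \ref{eq:local_reward} translate directly into code. The throughputs and theoretical maxima below are assumed values; the final check illustrates that the $\lambda$-weighted average $R_k$ coincides with the global normalized log-PF restricted to the STAs of the APs in $\mathcal{N}_k$.

```python
import math

def selfish_reward(T, T_star):
    """Selfish reward f_k and normalization constant lambda_k of one AP."""
    lam = sum(math.log(t) for t in T_star)
    return sum(math.log(t) for t in T) / lam, lam

def altruistic_reward(neighborhood_data):
    """R_k of Equation (local reward): lambda-weighted mean of selfish rewards."""
    pairs = [selfish_reward(T, T_star) for T, T_star in neighborhood_data]
    return sum(lam * f for f, lam in pairs) / sum(lam for _, lam in pairs)

# Assumed data for the APs in N_k: measured throughputs T and maximum
# attainable throughputs T_star of their STAs (Mb/s, illustrative).
N_k_data = [([30.0, 20.0], [60.0, 60.0]),
            ([10.0, 5.0], [60.0, 60.0])]
R_k = altruistic_reward(N_k_data)

# Global normalized log-PF restricted to the STAs of the APs in N_k.
all_T = [t for T, _ in N_k_data for t in T]
all_T_star = [t for _, Ts in N_k_data for t in Ts]
restricted = sum(math.log(t) for t in all_T) / sum(math.log(t) for t in all_T_star)
```

Up to floating-point rounding, `R_k` equals `restricted`, and both lie in $[0, 1]$ whenever each throughput is above 1 Mb/s and below its maximum.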
Note that $R_k$ is equivalent to the global reward function of Equation~\ref{eq:reward} except that the set of considered APs is restricted to those in $\mathcal{N}_k$. \textbf{Proof.} The proof is straightforward since $\lambda_i = \sum_{j \in s_i} \log T_j^*$ and $f_i(x_i) = \frac{1}{\lambda_i} \sum_{j \in s_i} \log T_j(x_i)$. Therefore, $\sum_{i \in \mathcal{N}_k} \lambda_i = \sum_{i \in \mathcal{N}_k} \sum_{j \in s_i} \log T^*_j$, which is the normalization constant for all STAs associated with APs in $\mathcal{N}_k$. Similarly, $\sum_{i \in \mathcal{N}_k} \lambda_i f_i = \sum_{i \in \mathcal{N}_k} \sum_{j \in s_i} \log T_j$, which is the logarithm of the PF of all STAs associated with APs in $\mathcal{N}_k$. Therefore, Equation \ref{eq:local_reward} is equivalent to the global reward function of Equation~\ref{eq:reward} but applied only to the APs in $\mathcal{N}_k$. \subsection{Local reward maximization} \label{sec:reward_maxim} For the sake of clarity, and since all variables in this section are relative to an AP $k$, we often omit the subscript $k$ in the notations. Now that each AP $k$ has its own local reward function, we need to model the knowledge of AP $k$ about $R_k$ in order to find the argument of its maximum. We represent the beliefs of AP $k$ about $R_k$ by defining a prior distribution on the reward function space with a Gaussian Process (GP). \textbf{Gaussian process.} In our case, a GP can be defined as a collection of random variables indexed by configurations of APs in $\mathcal{N}_k$: $\{Y_c; c \in C^{|\mathcal{N}_k|}\}$ such that every finite collection $Y_{c_1, \cdots, c_n} = \left(Y_{c_1}, \cdots, Y_{c_n}\right)$ is distributed according to a multivariate normal distribution. We assume the GP to have zero mean so that it is entirely determined by its covariance function $\Sigma : C^{|\mathcal{N}_k|} \times C^{|\mathcal{N}_k|} \rightarrow \mathbb{R}^+$. As shown by \cite{gp}, GPs can be used as priors on a function space.
We use $X_t$ to denote the $t \times 2|\mathcal{N}_k|$ features matrix gathering the tested configurations $\left(x^1, \cdots, x^t \right)^T$ and $Y_t$ to denote the $t \times 1$ label vector gathering the corresponding local reward values $\left(Y(x^1), \cdots, Y(x^t)\right)^T$. Given $X_t$ and $Y_t$, we can infer the distribution of the reward value $Y(x)$ for an arbitrary configuration $x$ as follows: $Y(x)|X_t,Y_t \sim \mathcal{N}\left(\mu(x), \sigma^2(x)\right)$ with $\mu(x)$ and $\sigma^2(x)$ defined in Equations \ref{eq:mu} and \ref{eq:sig}, respectively. \begin{equation} \mu(x) = \Sigma(x, X_t)\Sigma(X_t, X_t)^{-1}Y_t \label{eq:mu} \end{equation} \begin{equation} \sigma^2(x) = \Sigma(x, x) - \Sigma(x, X_t)\Sigma(X_t, X_t)^{-1}\Sigma(X_t, x) \label{eq:sig} \end{equation} Since GPs can be used as a prior on a function space, they are useful for solving regression problems as well as minimization or maximization tasks. In our case, AP $k$ uses a GP, denoted by $\mathcal{GP}_k$, to model $R_k$ and to assist the exploration of promising configurations of the APs in $\mathcal{N}_k$, maximizing $R_k$ in a Bayesian way. Choosing the covariance function $\Sigma$ is a critical step when designing a GP as it determines some key features of $\mathcal{GP}_k$ such as its isotropy and smoothness. Since the reward function, which quantifies the quality of spatial reuse in $\mathcal{N}_k$, is likely to exhibit threshold effects, we choose a covariance function that decreases rapidly as the distance between the two considered configurations increases. Thus, the regularity constraint is not too restrictive on the modeled function. Because we have no incentive to prefer any particular direction over another, we let the covariance function $\Sigma(x, x')$ depend only on $||x - x'||$ to ensure the isotropy of $\mathcal{GP}_k$.
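Equations \ref{eq:mu} and \ref{eq:sig} can be sketched as follows. The three observed configurations and rewards are invented, a tiny jitter term is added to the diagonal for numerical stability, and the kernel is one possible isotropic choice (the Matérn $\nu = 3/2$ covariance discussed next).

```python
import numpy as np

def matern32(A, B, s2=1.0, rho=1.0):
    """Matern nu = 3/2 covariance between the row vectors of A and B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    q = np.sqrt(3.0) * d / rho
    return s2 * (1.0 + q) * np.exp(-q)

def posterior(x, X_t, Y_t, s2=1.0, rho=1.0):
    """Posterior mean and variance of Y(x) given (X_t, Y_t)."""
    K = matern32(X_t, X_t, s2, rho) + 1e-10 * np.eye(len(X_t))  # jitter
    k_x = matern32(X_t, x[None, :], s2, rho)                    # Sigma(X_t, x)
    mu = (k_x.T @ np.linalg.solve(K, Y_t)).item()
    var = (matern32(x[None, :], x[None, :], s2, rho)
           - k_x.T @ np.linalg.solve(K, k_x)).item()
    return mu, var

# Assumed observations: three (rescaled) configurations and their rewards.
X_t = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
Y_t = np.array([0.4, 0.7, 0.5])

mu_obs, var_obs = posterior(np.array([0.0, 0.0]), X_t, Y_t)    # at an observation
mu_far, var_far = posterior(np.array([10.0, 10.0]), X_t, Y_t)  # far from all data
```

At an observed configuration the posterior interpolates the reward with near-zero variance, while far from all observations it reverts to the prior (zero mean, variance $s^2$), which is precisely what drives exploration.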
This leads us to use a Matérn kernel \cite{matern} with parameter $\nu = \frac{3}{2}$, which is defined as \begin{equation} \Sigma(x, x') = s^2\left(1 + \frac{\sqrt{3}||x - x'||}{\rho}\right)e^{-\frac{\sqrt{3}||x - x'||}{\rho}} \label{eq:kernel} \end{equation} where $s^2$ and $\rho$ are two hyperparameters whose values are approximated during the learning process. In practice, we can approximate $s^2$ and $\rho$ by maximum likelihood estimation (MLE). Obtaining the likelihood of $Y_t$, denoted by $\mathcal{L}(s^2, \rho)$, is trivial since $Y_t$ follows a multivariate normal distribution. Therefore, $(s^2, \rho) = \argmax_{(x, y) \in \mathbb{R}^{+2}} \mathcal{L}(x, y)$ can be obtained by classical descent techniques applied to the likelihood gradient. As discussed before, each AP $k$ faces the exploitation vs. exploration dilemma in its attempt to find the optimal configuration. A common way in the MAB framework to appraise a given strategy $\pi$ is then to consider the cumulative regret $\Gamma$. In our problem, $\Gamma$ is given by Equation \ref{eq:cum_reg} for an episode of $D$ steps: it is the cumulative sum of the differences between 1 (namely, the best reward that an AP can get, by definition of our reward function) and $R_k(\pi(t))$, the actual reward obtained at time $t$ under the strategy $\pi$. \begin{equation} \Gamma(\pi) = D - \sum_{t = 1}^D R_k(\pi(t)) \label{eq:cum_reg} \end{equation} Minimizing the cumulative regret with GP models is usually done by defining a strategy $\pi$ from the maximization of an acquisition function $A$: $\pi(t) = \argmax_x A_t(x)$. However, this assumes that our search space $C^{|\mathcal{N}_k|}$ is continuous. Since each AP $k$ deals with discrete configurations of $\mathcal{N}_k$, we systematically round the recommendation of $\mathcal{GP}_k$ to the nearest valid WLAN configuration.
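Two mechanical pieces of this section, the cumulative regret of Equation \ref{eq:cum_reg} and the rounding of a continuous recommendation to the discrete space $C$, can be sketched as follows; the two reward traces are assumed for illustration.

```python
def cumulative_regret(rewards):
    """Gamma(pi) = D - sum of rewards, for per-step rewards in [0, 1]."""
    return len(rewards) - sum(rewards)

def round_configuration(obss_pd, tx_pwr):
    """Round a continuous recommendation to the nearest valid configuration
    in C = [-82 .. -62] x [1 .. 21] dBm (clamping out-of-range values)."""
    return (min(max(round(obss_pd), -82), -62),
            min(max(round(tx_pwr), 1), 21))

# Assumed reward traces over D = 5 steps: an exploring strategy whose
# reward keeps improving vs. a static default configuration.
exploring = [0.30, 0.55, 0.70, 0.85, 0.90]
static = [0.30] * 5
```

A strategy that always attains the maximal reward of 1 incurs zero regret, and the exploring trace accumulates less regret than the static one, which is the behavior the figures of Section \ref{sec:disc} quantify.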
Many acquisition functions exist, such as Knowledge Gradient (KG) \cite{kg}, GP-UCB \cite{gpucb} or the Expected Improvement (EI) \cite{ei}. We choose EI over KG (whose computational cost can rapidly become prohibitive) and GP-UCB (which was found to be less efficient on our examples). The acquisition function of EI is expressed as $A_t(x) = \mathbb{E}\left[(\mu_{t+1}(x) - \max_{1 \leq i \leq t} Y(x_i))^+\right]$ given that $X_{t+1} = (X_t, x)$, $Y_{t+1} = (Y_t, Y(x))$ and $Y(x) \sim \mathcal{N}(\mu(x), \sigma^2(x))$. EI also has a closed-form as shown in Equation \ref{eq:ei}. \begin{equation} EI(x) = (\mu(x) - R_{k,t}^*) \Phi(Z) + \sigma(x)\phi(Z) \label{eq:ei} \end{equation} with $R_{k,t}^* = \max_{1 \leq i \leq t} Y(x_i)$, $Z = \frac{\mu(x) - R_{k,t}^*}{\sigma(x)}$, $\Phi$ and $\phi$ being respectively the CDF and the PDF of a standard Gaussian distribution. The AP can maximize Equation \ref{eq:ei} by gradient descent on $-EI(x)$. For the sake of completeness, we provide a closed-form expression of $\nabla EI(x)$ with Equations \ref{eq:dei}, \ref{eq:dmu}, \ref{eq:dsig} and \ref{eq:dker} directly followed by their proofs. First of all, it is easy to show that the closed-form of $\nabla EI(x)$ is given by Equation \ref{eq:dei}. \begin{equation} \nabla EI = \nabla \mu \Phi(Z) + \nabla \sigma \phi(Z) \label{eq:dei} \end{equation} \textbf{Proof (Eq. \ref{eq:dei}).} The proof is straightforward since: \begin{equation} \nabla EI = \nabla \mu \Phi(Z) + (\mu - R^*_{k,t}) \phi(Z) \nabla Z + \nabla \sigma \phi(Z) + \sigma \phi'(Z) \nabla Z \end{equation} By noticing that $\mu - R_{k,t}^* = \sigma Z$ and $\phi'(Z) = -Z\phi(Z)$, we have: \begin{equation} \begin{split} \nabla EI &= \nabla \mu \Phi(Z) + \sigma Z \phi(Z) \nabla Z + \nabla \sigma \phi(Z) - \sigma Z \phi(Z) \nabla Z\\ &= \nabla \mu \Phi(Z) + \nabla \sigma \phi(Z) \end{split} \end{equation} To complete the closed-form of $\nabla EI$ in Equation \ref{eq:dei}, we need explicit expressions for $\nabla \mu$ and $\nabla \sigma$.
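Before detailing these gradients, we note that Equation \ref{eq:ei} itself is cheap to evaluate once the posterior mean and standard deviation are known. A minimal Python sketch (not our production implementation; the standard normal CDF is written via \texttt{math.erf}):

```python
import math

def expected_improvement(mu, sigma, best):
    """EI closed form: (mu - R*) Phi(Z) + sigma phi(Z), Z = (mu - R*)/sigma,
    where mu/sigma are the GP posterior at x and best is R*_{k,t}."""
    if sigma <= 0.0:          # degenerate posterior: no exploration value left
        return max(mu - best, 0.0)
    z = (mu - best) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (mu - best) * cdf + sigma * pdf

# EI rewards both a high predicted mean and a high posterior uncertainty
assert expected_improvement(0.9, 0.1, 0.8) > expected_improvement(0.8, 0.1, 0.8)
assert expected_improvement(0.8, 0.2, 0.8) > expected_improvement(0.8, 0.1, 0.8)
```

The two assertions illustrate the exploitation and exploration terms of EI, respectively.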
Given the expression of $\mu(x)$ from Equation \ref{eq:mu}, we immediately get the expression of $\nabla \mu$ in Equation \ref{eq:dmu}. \begin{equation} \nabla \mu = \frac{\partial \Sigma(x, X_t)}{\partial x} \Sigma(X_t, X_t)^{-1} Y_t \label{eq:dmu} \end{equation} Similarly, recalling the expression of $\sigma^2(x)$ from Equation \ref{eq:sig}, we can derive the expression of $\nabla \sigma$ in Equation \ref{eq:dsig} for an isotropic kernel function $\Sigma$. \begin{equation} \nabla \sigma = -\frac{1}{\sigma(x)} \Sigma(X_t, X_t)^{-1} \Sigma(X_t, x) \frac{\partial \Sigma(X_t, x)}{\partial x} \label{eq:dsig} \end{equation} \textbf{Proof (Eq. \ref{eq:dsig}).} By noticing that $\Sigma(x, x)$ does not depend on $x$, that $\Sigma(x, X_t) = \Sigma(X_t, x)^T$ and similarly that $\Sigma(X_t, X_t)^{-1}$ is symmetric with an isotropic kernel we have: \begin{equation} \begin{split} \nabla \sigma^2(x) & = \frac{\partial \sigma^2}{\partial \Sigma(X_t, x)} \frac{\partial \Sigma(X_t, x)}{\partial x}\\ & = -(\Sigma(X_t, X_t)^{-1} \Sigma(X_t, x) + \Sigma(X_t, X_t)^{-T} \Sigma(X_t, x)) \frac{\partial \Sigma(X_t, x)}{\partial x}\\ & = - 2 \Sigma(X_t, X_t)^{-1} \Sigma(X_t, x) \frac{\partial \Sigma(X_t, x)}{\partial x} \end{split} \end{equation} The expression of $\nabla \sigma$ follows immediately: \begin{equation} \begin{split} \nabla \sigma(x) &= \frac{\partial \sqrt{\sigma^2}}{\partial \sigma^2} \nabla \sigma^2(x) \\ & = -\frac{1}{\sigma(x)} \Sigma(X_t, X_t)^{-1} \Sigma(X_t, x) \frac{\partial \Sigma(X_t, x)}{\partial x} \end{split} \end{equation} Finally, the expression of the Jacobian $\frac{\partial \Sigma(X_t, x)}{\partial x}$ is required to complete the closed-form of $\nabla EI$. We give the expression of the $i$th line of this Jacobian $\left(\frac{\partial \Sigma(x_i, x)}{\partial x}\right)$ in Equation \ref{eq:dker}, assuming that $\Sigma$ is the Matérn kernel given in Equation \ref{eq:kernel}.
\begin{equation} \frac{\partial \Sigma(x_i, x)}{\partial x} = -\frac{3s^2}{\rho^2}e^{-\frac{\sqrt{3}}{\rho}||x - x_i||}(x - x_i) \label{eq:dker} \end{equation} \textbf{Proof (Eq. \ref{eq:dker}).} \begin{equation} \begin{split} \frac{\partial \Sigma(x_i, x)}{\partial x} & = \frac{\partial \Sigma(x_i, x)}{\partial ||x - x_i||} \frac{\partial ||x - x_i||}{\partial x}\\ & = -\frac{3s^2}{\rho^2} e^{-\frac{\sqrt{3}}{\rho}||x - x_i||}||x - x_i||\frac{\partial ||x - x_i||}{\partial x}\\ & = -\frac{3s^2}{\rho^2} e^{-\frac{\sqrt{3}}{\rho}||x - x_i||}||x - x_i||\frac{(x - x_i)}{||x - x_i||}\\ & = -\frac{3s^2}{\rho^2} e^{-\frac{\sqrt{3}}{\rho}||x - x_i||}(x - x_i) \end{split} \end{equation} With Equations \ref{eq:dei}, \ref{eq:dmu}, \ref{eq:dsig} and \ref{eq:dker}, each AP $k$ has a complete closed-form of $\nabla EI$. By applying its strategy $\pi_k(t) = \argmax_{x \in C^{|\mathcal{N}_k|}} EI(x)$ and classical gradient descent techniques on $-EI$, AP $k$ provides promising configurations for its surrounding APs in $\mathcal{N}_k$. \subsection{Aggregation of local prescriptions} In the previous sections, we have described how each AP $k$ computes its local reward and relies on its model $\mathcal{GP}_k$ to explore promising configurations for the APs in $\mathcal{N}_k$. In general, maximizing local rewards is very likely to lead to a sub-optimal situation since, for non-linear optimization problems, individual interests are often not aligned with the global interest (e.g., the famous Tragedy of the Commons \cite{tragedy_commons}). Without more information on the relation between the configuration of the APs and the measured throughputs of STAs, it seems impossible to prove theoretically that our global reward function is maximized by the optimization of local reward functions. However, it is our experience that the local rewards defined with Equation \ref{eq:local_reward} force each AP to behave altruistically by reducing its impact on its surroundings.
As a matter of fact, as we will see in Sections \ref{sec:num_res} and \ref{sec:disc_benefits}, the use of \texttt{INSPIRE} leads to a significantly better spatial reuse of the radio channel. This suggests that the altruistic behaviors of independent APs make \texttt{INSPIRE} less exposed to a sub-optimal convergence. In any case, more coordination between APs is required. By construction, the collection $\mathcal{F} = \left(\mathcal{N}_k\right)_{1 \leq k \leq K}$ is a cover of the set of APs in $\mathcal{W}$ but not a partition. In fact, if $\mathcal{F}$ had only null intersections (i.e., $\forall j \neq k, \mathcal{N}_j \cap \mathcal{N}_k = \emptyset$), then the spatial reuse of the radio channel would already be at its apex and there would be no need for improvement. Figure \ref{fig:intersections} illustrates an example with 5 APs in which the collection $\mathcal{F} = (\mathcal{N}_1, \mathcal{N}_2, \mathcal{N}_3, \mathcal{N}_4, \mathcal{N}_5)$ exhibits multiple non-null intersections. As a result, some APs will receive multiple (different) prescriptions for the configuration of their \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters at their next iteration. For instance, AP~2 will receive prescriptions from APs~1, 3, and 4 in addition to its own prescription. Since APs can only test one configuration at a time, one of those prescriptions must be chosen, or preferably, a consensus between them must be reached. \begin{figure}[!h] \centering \includegraphics[width=3.2in]{rsc/Surroundings.png} \caption{A WLAN represented by a graph with APs depicted as labelled triangles and STAs as black dots. An edge exists between two APs when they are in the communication range of each other.
We use different colors to illustrate the surroundings of each AP in $\mathcal{F}$.} \label{fig:intersections} \end{figure} First, let us note that APs in $\mathcal{N}_k$ with more STAs weigh more in the computation of the local reward $R_k$ than APs in $\mathcal{N}_k$ with fewer STAs (see Equation~\ref{eq:local_reward}). To reflect this property in the consensus to be reached, it seems natural to also give more weight to the prescriptions issued by APs with more STAs. More precisely, we let the weight assigned to the prescription coming from an AP $i$ be proportional to the number of its associated STAs $|s_i|$. This leads us to select an aggregation function such as the weighted marginal median or the weighted average. We opt for the latter since both tend to perform equally well in our tests. Denoting by $P^i_k \in C$ the prescription of AP $i$ for AP $k$, each AP $k$ determines its next configuration (i.e., $x_k^{t+1}$) using Equation~\ref{eq:agg}. \begin{equation} x^{t+1}_k = \frac{1}{\sum_{i \in \mathcal{N}_k} |s_i|} \sum_{i \in \mathcal{N}_k} |s_i|P^i_k \label{eq:agg} \end{equation} \subsection{Algorithm and complexity} \begin{algorithm}[h!]
\caption{\texttt{INSPIRE} run at each AP $k$} \label{alg:solution} \hspace{-5.2cm}\textbf{Input}: subset $\mathcal{N}_k$ of APs \begin{algorithmic}[1] \STATE Init the Gaussian Process $\mathcal{GP}_k$ \WHILE{\textbf{true}} \STATE Find a prescription $P^k = \argmax_{x \in C^{|\mathcal{N}_k|}} EI^t_k(x)$ by gradient descent of (\ref{eq:dei}) \STATE Broadcast $P^k$ to APs in $\mathcal{N}_k$ along with the number of STAs $s_k$ \STATE Receive the prescriptions $P^j_k$ and $s_j$ from AP $j, j \neq k, j \in \mathcal{N}_k$ \STATE Compute the consensus $x^{t+1}_k$ with (\ref{eq:agg}) \STATE Test $x^{t+1}_k$ for $\Delta t$ seconds and compute its selfish reward $f_k$ with (\ref{eq:reward}) \STATE Broadcast $f_k$, $\lambda_k$ and $x^{t+1}_k$ to APs in $\mathcal{N}_k$ \STATE Receive $f_j$, $\lambda_j$ and the configurations $x^{t+1}_j, j \neq k$ from APs in $\mathcal{N}_k$ \STATE Compute the local reward $R_k$ with (\ref{eq:local_reward}) and the local configuration $x^{t+1}$ \STATE Add the pattern $\left(x^{t+1}, R_k\right)$ to $\mathcal{GP}_k$ \ENDWHILE \end{algorithmic} \end{algorithm} Algorithm \ref{alg:solution} summarizes the main steps of our proposed strategy \texttt{INSPIRE} run on each AP of the WLANs. We begin the evaluation of the computational costs of running \texttt{INSPIRE} with the most resource-intensive operation. Inverting the $t \times t$ matrix $\Sigma(X_t, X_t)$ (Line 11 in Algorithm \ref{alg:solution}) must be performed whenever $X_t$ changes. This operation, which is carried out with Cholesky decomposition $\Sigma(X_t, X_t) = LL^T$, with $L$ a lower triangular matrix updated whenever a new pattern (\textit{i.e.}, a pair consisting of a tested configuration $x$ and its associated label $Y(x)$) is added to $\mathcal{GP}_k$, has a complexity of $O\left(t^3\right)$. Gradient descent (Line 3 in Algorithm \ref{alg:solution}) and matrix-vector multiplications of size $t \times t$ and $t \times 1$ both have a complexity of $O\left(t^2\right)$.
These two operations are repeated for each gradient descent whose number of steps is capped by $m$. Therefore, the computational complexity of \texttt{INSPIRE} is asymptotically of $O\left(t^3 + mt^2\right)$. It is worth noting that the dimensionality of the problem (i.e. $\dim C^{|\mathcal{N}_k|} = |\mathcal{N}_k| \dim C$) does not appear in the expression of the asymptotic computational complexity of \texttt{INSPIRE}. This interesting property results from the use of a kernel function by GPs to compare WLAN configurations. This gives \texttt{INSPIRE} the ability to handle arbitrarily dense WLANs, or to optimize more parameters than just \texttt{TX\_PWR} and \texttt{OBSS\_PD}, without taking a hefty toll on its execution time. In fact, the real burden on the execution time of \texttt{INSPIRE} is $t$. This compels us to bound the size of $X_t$ and to find a balance between the amount of data collected on the WLANs' performance and configuration and a quick execution time. We leave approximation methods to reduce the computational complexity of \texttt{INSPIRE} for future work. For now, we recommend using windowing methods (such as a moving window) to bound the size of $X_t$ and thus the computational complexity of \texttt{INSPIRE}. We now study the communication costs incurred by \texttt{INSPIRE}. Lines 4 and 8 of Algorithm \ref{alg:solution} deal with the communication operations necessary for an AP $k$ to learn the selfish rewards and the corresponding normalization constants of the APs in $\mathcal{N}_k$. This incurs additional communication traffic whose throughput $B_k$ can be evaluated as: \begin{equation} B_k = \frac{2(|\mathcal{N}_k| - 1)(F + 4\dim C + 6)}{\Delta t} \label{eq:comcost} \end{equation} where $\dim C$ is the dimensionality of our configuration space, $F$ is the minimal size of a frame, $|\mathcal{N}_k|$ is the number of APs in $\mathcal{N}_k$ and $\Delta t$ is the duration of an iteration.
\textbf{Proof.} Line 4 triggers the transmission of $\dim C + 1$ floats for each AP in $\mathcal{N}_k$. Similarly, line 8 causes the transmission of $\dim C + 2$ floats. Hence, at each optimization step, and given that a float is 4 bytes long, an AP $k$ first sends to its $|\mathcal{N}_k| - 1$ neighbors a frame of size $F+4(\dim C + 1)$ before sending a second frame of size $F + 4(\dim C + 2)$, resulting in the transmission of $2(|\mathcal{N}_k| - 1)(F + 4\dim C + 6)$ bytes. Finally, we obtain the communication overhead for each AP by dividing this quantity by $\Delta t$ as expressed by Equation \ref{eq:comcost}. \section{Performance Evaluation} \subsection{Experimental settings} To evaluate the ability of \texttt{INSPIRE} to improve the spatial reuse of a radio channel through the configuration of the \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters, we consider two distinct scenarios. The first scenario is inspired by the WLAN deployment of Cisco in their offices in San Francisco. In~\cite{cisco_topo}, Cisco provides the location of 60 APs that together deliver wireless connectivity to their employees on a floor. To account for the WLANs' activity from other floors, we consider a three-floor building and we replicate on each floor the same arrangement of APs as in Cisco's offices. This leads to a total of 180 APs spread over three floors. Assuming 18 independent radio channels, we run a radio channel allocation algorithm to determine the radio channel used by each AP. For our first scenario, we consider the subgraph resulting from the channel allocation with the highest density. We use \textbf{T1} to refer to this topology (i.e., arrangement of APs and STAs), which is illustrated in Figure~\ref{fig:topoT1}. \textbf{T1} exhibits a total of 10 APs and we associate 5 STAs with each AP. The second scenario addresses the case of many single-AP WLANs deployed and operated independently in a relatively limited area.
This is typically the case in housing units where each apartment is equipped with its own AP so that the APs are often only a few meters away from a number of others. More specifically, we consider a nine-story building with 216 apartments of 25 m$^2$ each. We randomly position an AP within each apartment as well as 4 STAs per AP. Then, similarly to the first scenario, we apply a radio channel allocation algorithm given a total of 18 radio channels, to obtain the topology of interest denoted by \textbf{T2}. Note that \textbf{T2} consists of 14 APs and 56 STAs. Figure~\ref{fig:topoT2} depicts the topology \textbf{T2}. Although \textbf{T1} and \textbf{T2} both represent dense scenarios of WLANs, they cover two different cases. \textbf{T1} exemplifies the case of a single WLAN designed from its inception to cover the open space of a company. Conversely, \textbf{T2} results from the uncoordinated combination of multiple independent WLANs that are thus likely to interfere with each other. Therefore, we expect \textbf{T2} to be a more difficult scenario than \textbf{T1}. \begin{figure} \centering \begin{subfigure}{0.49\textwidth} \includegraphics[width=1\linewidth]{rsc/MER_FLOORS_CH20.png} \caption{Topology \textbf{T1}.} \label{fig:topoT1} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=1\linewidth]{rsc/HLM.png} \caption{Topology \textbf{T2}.} \label{fig:topoT2} \end{subfigure} \caption{The two considered topologies. APs are shown as red triangles and they are connected by a two-headed arrow if they lie in each other's communication range.
Associated STAs are shown as dots colored according to their throughputs: warm, yellowish colors indicate that the STA has enough throughput most of the time while, on the contrary, cool, bluish colors indicate that the STA mostly lacks throughput under the default configuration of 802.11: 20 dBm for \texttt{TX\_PWR} and -82 dBm for \texttt{OBSS\_PD}.} \label{fig:topos} \end{figure} For each scenario, we consider heavily loaded conditions. APs attempt to transmit frames to each of their associated STAs at a rate of 50 Mbps while the latter attempt to send their frames to the AP at a lower rate of 3.33 Mbps. These assumptions are in line with the downstream traffic largely exceeding the upstream traffic in WLANs. Given the speed of wireless links in 802.11ax, the buffers of the APs will always be full of frames waiting to be sent. More generally, considering APs in saturation undoubtedly represents the most difficult case when dealing with the spatial reuse of a radio channel. Therefore, if \texttt{INSPIRE} manages to significantly improve the WLANs' performance under these circumstances, then it can only do better under normal conditions.
To better appraise the quality of \texttt{INSPIRE}, we also consider a control strategy as well as several state-of-the-art solutions, which were discussed in Section~\ref{sec:soa} and briefly summarized here: \begin{itemize} \item \texttt{DEFAULT}: Every AP keeps its default configuration for the \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters (i.e., 20 dBm and $-82$ dBm, respectively); \item \texttt{WCNC'15}: Each AP implements a simple distributed algorithm to dynamically update its \texttt{OBSS\_PD} parameter \cite{dsc}; \item \texttt{JNCA'19}: Each AP solves a MAB problem using Thompson sampling to dynamically update its \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters \cite{sr_mab_2}; \item \texttt{MSWiM'21}: Similar to \texttt{JNCA'19}, except that the sampling of new configurations is performed through a multivariate Gaussian mixture, and that the solution is centralized \cite{mab_mswim}. \end{itemize} We implemented \texttt{INSPIRE} (based on the open-source Gaussian process library LibGP \cite{libgp}) as well as the four strategies described above in the open-source network simulator ns-3 \cite{ns3}. ns-3 is a well-established realistic discrete-event simulator that implements most of the network protocols involved in WLAN communication from the Physical up to the Application layer. We report in Table~\ref{tab:ns3_parameters} the simulation parameters used in the rest of this section. Unlike previous works (e.g., \cite{mab_mswim, dsc, fsc, sr_mab, sr_mab_2}) with the exception of \cite{lsr}, our simulations incorporate the mechanism of rate adaptation that lets APs and STAs dynamically vary the speed of their wireless links (through the use of different Modulation Coding Schemes (MCS)) in response to the quality of the received signal. This is particularly important for the sake of our study since changing the value of \texttt{TX\_PWR} necessarily affects the quality of the received signal and thus the MCS.
Since our simulated WLANs take place in buildings, we choose an appropriate path loss model by combining the models \texttt{ItuR1238} and \texttt{InternalWallsLoss}, both implemented by ns-3. With these propagation models, the signal is decreased by an additional attenuation coefficient each time it goes through a floor or a wall. The attenuation coefficients are respectively $-4$ dB (which is the default value in \texttt{ItuR1238}) and $-8$ dB. \begin{table}[!h] \caption{ns-3 parameters.} \label{tab:ns3_parameters} \centering \rowcolors{2}{gray!13}{white} \begin{tabular}{|l p{90mm}|} \hline \textbf{Parameter} & \textbf{Value}\\ \hline ns-3 version & 3.31\\ Number of repetitions & 22\\ Simulation duration & 30 s\\ Duration of an iteration ($\Delta t$) & 75 ms\\ Packet size & 1,464 bytes\\ Downlink traffic & 50.0 Mbps\\ Uplink traffic & 3.33 Mbps\\ Channel size & 20 MHz\\ Frequency band & 5 GHz\\ A-MPDU Aggregation & 4\\ Path loss & \texttt{HybridBuildingsPropagationLossModel} (\texttt{ItuR1238PropagationLossModel} + \texttt{InternalWallsLoss})\\ Wi-Fi Manager & \texttt{IdealWifiManager}\\ \hline \end{tabular} \end{table} We instrumented ns-3 to collect and compute a number of performance metrics. At the end of each iteration, the quality of the spatial reuse is assessed with Equation \ref{eq:reward}, although distributed strategies may internally use the local reward defined in Equation~\ref{eq:local_reward}. Then, we compute the classical metric used to analyze the efficiency of a strategy on a MAB problem: (i) The cumulative regret (with Equation \ref{eq:cum_reg} using the global reward in Equation~\ref{eq:reward}).
We also collect the following performance metrics to reflect the effect of each strategy on the behavior of the WLANs and of their STAs: (ii) The number of starving STAs, which we define as STAs experiencing a very low throughput (namely, less than 10\% of their attainable throughput), (iii) The cumulated throughput, which simply sums all STAs' throughputs and (iv) The Jain's fairness index \cite{jain}, which quantifies how evenly the STAs are served by the APs. Each simulation lasts 30 seconds and is replicated independently 22 times to obtain the first, second (median), and third quartiles of each metric. When the quartiles of a performance metric vary too much within a single simulation, we apply an exponential moving average (with $\alpha = 0.04$) to extract the underlying trends of the quartile sequences. The metrics are collected throughout the whole duration of the simulation. At the end of each iteration, we compute all the performance metrics and then we refer to the current strategy to decide what will be the next configuration of the WLANs. Since an iteration lasts $\Delta t$ = 75 ms and a simulation lasts 30 seconds, the quality of each solution is assessed over 400 iterations. \subsection{Numerical results} \label{sec:num_res} Figure~\ref{fig:t1_res} illustrates the performance metrics delivered by the ns-3 simulator for each strategy in the case of topology \textbf{T1}. The cumulative regret, represented in Figure \ref{fig:t1_cumreg}, indicates which strategy has performed the best at any time of the simulation. \texttt{INSPIRE} is found to be the most efficient strategy, reducing the cumulative regret by 70\% compared to \texttt{DEFAULT} and by over 50\% compared to \texttt{WCNC'15}, which happens to be the most efficient state-of-the-art strategy. We now look at the other performance metrics to better understand how much \texttt{INSPIRE} is able to improve the behavior of the WLAN and of its STAs.
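As a reference for metric (iv), Jain's fairness index has a simple closed form, $\left(\sum_i x_i\right)^2 / \left(n \sum_i x_i^2\right)$ for $n$ STAs with throughputs $x_i$; a minimal Python sketch with hypothetical throughput values:

```python
def jain_index(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2). It equals 1 when
    all STAs get the same throughput and tends to 1/n under maximal
    unfairness (one STA takes everything)."""
    n = len(throughputs)
    s = sum(throughputs)
    return s * s / (n * sum(x * x for x in throughputs))

assert jain_index([10.0, 10.0, 10.0]) == 1.0       # perfectly fair sharing
assert jain_index([30.0, 0.0001, 0.0001]) < 0.4    # close to 1/3: one STA dominates
```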
Taking \texttt{DEFAULT} as a baseline, Figure \ref{fig:t1_starv} shows that \texttt{INSPIRE} reduces the number of STAs in starvation by 80\% while Figures \ref{fig:t1_fair} and \ref{fig:t1_cumthrough} demonstrate that our proposed solution manages to find a fairer sharing of resources (+133\%) and to increase the cumulated throughput (+400\%). \begin{figure}[!h] \centering \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\linewidth]{rsc/MER_FLOORS_CH20_GP_CumReg.png} \caption{Cumulative Regret} \label{fig:t1_cumreg} \end{subfigure} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{rsc/MER_FLOORS_CH20_GP_Starvations.png} \caption{Starvations} \label{fig:t1_starv} \end{subfigure} \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{rsc/MER_FLOORS_CH20_GP_Fairness.png} \caption{Fairness} \label{fig:t1_fair} \end{subfigure} \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{rsc/MER_FLOORS_CH20_GP_CumThroughput.png} \caption{Cumulated Throughput} \label{fig:t1_cumthrough} \end{subfigure} \caption{Performance metrics delivered on topology \textbf{T1} by each strategy.} \label{fig:t1_res} \end{figure} We now turn to the case of topology \textbf{T2}. First, we observe in Figure \ref{fig:t2_cumreg} that among the four considered strategies, \texttt{INSPIRE} is the one that decreases the cumulative regret the most, with a decline of about 36\% compared to the \texttt{DEFAULT} configuration at the end of the simulation. The proposed solution also outperforms \texttt{MSWiM'21}, which is found to be the best state-of-the-art strategy on this topology, by a margin of 14\%. Looking at the performance of WLANs and of their STAs, Figure~\ref{fig:t2_starv} shows that \texttt{INSPIRE} reduces the number of starving STAs by 36\% when compared to the \texttt{DEFAULT} configuration.
Similarly, with \texttt{INSPIRE}, the fairness among STAs increases by 28\% and the cumulated throughput of STAs nearly doubles (see Figures \ref{fig:t2_fair} and \ref{fig:t2_cumthrough}). Overall, through the study of topologies \textbf{T1} and \textbf{T2}, \texttt{INSPIRE} demonstrates its superiority over the other state-of-the-art strategies. The significant improvements brought by our proposed solution on all performance metrics are permanently obtained after only 100 iterations (7.5 seconds). In other words, in less than 10 seconds, \texttt{INSPIRE} manages to significantly improve the behavior of the WLANs and of the associated STAs thanks to a better spatial reuse of the radio channel. This efficiency in searching and finding an adequate configuration of the \texttt{TX\_PWR} and \texttt{OBSS\_PD} parameters at each AP of the WLANs mostly results from the distributed, altruistic use of Gaussian processes, which we further discuss in the next section.
\begin{figure}[!h] \centering \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\linewidth]{rsc/HLM_GP_CumReg.png} \caption{Cumulative Regret} \label{fig:t2_cumreg} \end{subfigure} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{rsc/HLM_GP_Starvations.png} \caption{Starvations} \label{fig:t2_starv} \end{subfigure} \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{rsc/HLM_GP_Fairness.png} \caption{Fairness} \label{fig:t2_fair} \end{subfigure} \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{rsc/HLM_GP_CumThroughput.png} \caption{Cumulated Throughput} \label{fig:t2_cumthrough} \end{subfigure} \caption{Performance metrics delivered on topology \textbf{T2} by each strategy.} \label{fig:t2_res} \end{figure} \section{Discussion} \label{sec:disc} \subsection{Seemingly similar problems with vastly different complexity} The topologies \textbf{T1} and \textbf{T2} may look similar, but they are not, and \texttt{INSPIRE} performed differently on each of them. By the end of the optimization process (i.e., 400 steps), the performance metrics for \textbf{T1} were improved by at least 70\% from their initial values under the \texttt{DEFAULT} configuration, and only 7 STAs (representing 14\% of the STAs) were still starving. In the case of \textbf{T2}, the progress was smaller, with 14 STAs (representing 25\% of the STAs) remaining in starvation. This difference results from the location of STAs relative to the APs. Looking at Figure \ref{fig:topos}, it appears that STAs in \textbf{T2} are further from their associated AP than the ones in \textbf{T1}. As a consequence, STAs are also closer to a concurrent AP. While STAs in \textbf{T1} are on average 10 times closer to their associated AP than to a concurrent AP, this ratio drops to an average value of 4 for STAs in \textbf{T2}. With STAs closer to concurrent APs, the spatial reuse problem becomes more difficult.
As a matter of fact, to reach its associated STA, the AP must transmit at a greater power, increasing its chances of causing interference to the surrounding APs. Similarly, STAs that are far away from their AP are significantly affected by the transmissions of concurrent APs. To verify that \textbf{T2} constitutes a more complex example than \textbf{T1}, we examine the shape of the reward function in both cases. Because of the high dimensionality of the arguments of the reward function and the lack of a closed-form expression, we resort to a slicing technique to provide a visualization of the reward function in Equation \ref{eq:reward}. We postpone to Appendix \ref{sec:slices} the details of this slicing. Figure \ref{fig:slices} illustrates the obtained random slices in the cases of \textbf{T1} and \textbf{T2}. Figure~\ref{fig:t1_slice} suggests a relatively smooth reward function in \textbf{T1}. On the other hand, the reward function in the case of \textbf{T2} is much more erratic, featuring a lot of local maxima as shown by Figure~\ref{fig:t2_slice}. Interestingly, Figure~\ref{fig:t1_slice} shows that \texttt{INSPIRE} succeeded in finding a configuration that is maximal in this (random) slice for the case of \textbf{T1}. We also notice that many configurations of equivalent efficiency exist, which also reduces the complexity of the search for an adequate configuration. Conversely, in the case of \textbf{T2}, \texttt{INSPIRE} does not find the best configuration since the slice of Figure~\ref{fig:t2_slice} shows that, at least, a 6\% better configuration exists. Nonetheless, \texttt{INSPIRE} was able to find an efficient configuration. \begin{figure}[!h] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{rsc/RandomBasis_MERAKI_Circle.png} \caption{Slice for \textbf{T1}, with the best solution found by \texttt{INSPIRE} at (0, 0) and two scaled random unit vectors.
The maximum of the slice is shown with a red circle.} \label{fig:t1_slice} \end{subfigure} \hspace{0.2cm} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{rsc/RandomBasis_HLM_Circle.png} \caption{Slice for \textbf{T2}, with the best solution found by \texttt{INSPIRE} at (0, 0) and two scaled random unit vectors. The maximum of the slice is shown with a red circle.} \label{fig:t2_slice} \end{subfigure} \caption{Random slices of the global reward function for the topologies \textbf{T1} and \textbf{T2}.} \label{fig:slices} \end{figure} \subsection{Benefits of distributed prescriptions} \label{sec:disc_benefits} Finally, to justify our choice of only letting APs exploit local information and prescribe network configurations to their surrounding APs, we consider alternative versions of \texttt{INSPIRE}: (i) \texttt{GPs w/o agg.} where each AP keeps using the local, altruistic reward of Equation \ref{eq:local_reward} but does not aggregate local prescriptions, and prescribes only for its own configuration and (ii) \texttt{Single GP}, a centralized version of \texttt{INSPIRE} where a single GP has a complete knowledge of the WLANs and decides on the configuration of every AP. We compare these alternative strategies with \texttt{INSPIRE} and the \texttt{DEFAULT} strategy by considering their cumulative regret on topologies \textbf{T1} and \textbf{T2} in Figure \ref{fig:alternatives}. Figure \ref{fig:t1_compare} shows that, on \textbf{T1}, the alternative strategies have a cumulative regret 46\% lower than \texttt{DEFAULT}. However, \texttt{INSPIRE} manages to reduce its cumulative regret by an extra 25\%. Despite the greater complexity of the function in \textbf{T2}, this extra reduction factor persists but at a value of 13\% as shown in Figure \ref{fig:t2_compare}.
Given the significant gap between \texttt{INSPIRE} and \texttt{GPs w/o agg.}, it is clear that prescribing for surrounding APs and aggregating those prescriptions leads to a more altruistic behavior, which in turn brings additional benefits at the scale of the WLANs. More surprisingly, \texttt{INSPIRE} outperforms its centralized counterpart \texttt{Single GP}. At first glance, this is counter-intuitive since \texttt{Single GP} has a complete knowledge and control over the APs of the WLANs. However, \texttt{Single GP} involves a single agent to optimize a function of high complexity, which has to deal with $K \dim C$ variables. Decentralization breaks down this task into simpler local optimization problems of lower dimension. With \texttt{INSPIRE}, each AP $k$ only deals with $|\mathcal{N}_k| \dim C$ variables, which is significantly less than $K \dim C$ for large WLANs. Overall, APs run by \texttt{INSPIRE} solve simpler optimization problems and behave altruistically by ensuring a consensus with surrounding APs. By doing so, they manage to improve the spatial reuse of the radio channel at the scale of the WLANs.
\begin{figure}[!h] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{rsc/MER_FLOORS_CH20_GP_Compare.png} \caption{Cumulative regret on topology \textbf{T1}.} \label{fig:t1_compare} \end{subfigure} \hspace{0.2cm} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{rsc/HLM_GP_Compare.png} \caption{Cumulative regret on topology \textbf{T2}.} \label{fig:t2_compare} \end{subfigure} \caption{Comparison of the cumulative regrets of the different versions of \texttt{INSPIRE} on \textbf{T1} and \textbf{T2}.} \label{fig:alternatives} \end{figure} \section{Conclusions} \label{sec:conc} In this work, we have presented \texttt{INSPIRE}, a reinforcement learning method to improve the spatial reuse of radio channels in WLANs by configuring two parameters of APs, the transmission power (\texttt{TX\_PWR}) and the sensitivity threshold (\texttt{OBSS\_PD}), which can be dynamically configured with the latest Wi-Fi amendments. To address the difficult problem of sharing a radio channel efficiently and fairly, \texttt{INSPIRE} works as a distributed solution where each AP solves a local Multi-Armed Bandit problem with the help of information and actions limited to its surrounding APs (viz. within its communication range). The development of the solution includes (i) an intuitive quantification (based on STA throughputs) of the ``goodness'' of a configuration of \texttt{TX\_PWR} and \texttt{OBSS\_PD} for concurrent APs in WLANs, both at local and global scales, (ii) the use of an acquisition function and Gaussian processes to find local configurations that maximize approximations of local reward functions, and (iii) an altruistic behavior facilitated by prescriptions to surrounding APs along with a consensus method which aggregates the prescriptions of surrounding APs for the ``greater good'' of the WLANs.
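The consensus step in (iii) can be sketched in miniature. The plurality-vote rule and the numeric prescriptions below are illustrative assumptions chosen for exposition, not the exact aggregation rule of \texttt{INSPIRE}.

```python
from collections import Counter

# Hypothetical prescriptions: for each AP, the (TX_PWR, OBSS_PD)
# configurations prescribed by its surrounding APs, itself included.
# Both the values and the plurality-vote rule are assumptions.
prescriptions = {
    "AP1": [(15, -72), (15, -72), (20, -82)],
    "AP2": [(20, -82), (15, -72), (20, -82)],
}

def aggregate(prescribed):
    """Consensus step: keep the most frequently prescribed configuration."""
    return Counter(prescribed).most_common(1)[0][0]

consensus = {ap: aggregate(p) for ap, p in prescriptions.items()}
print(consensus)   # AP1 -> (15, -72), AP2 -> (20, -82)
```

The point of the sketch is only the shape of the mechanism: each AP ends up with a single configuration that reflects the opinions of its neighbourhood rather than its own estimate alone.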
\texttt{INSPIRE} has been evaluated and compared with other state-of-the-art strategies addressing the same problem, using the open-source network simulator ns-3, which implements all the layers of the network stack. The different strategies were compared on two examples inspired by real-life deployments of dense WLANs in both professional and domestic environments. \texttt{INSPIRE} was found to outperform other state-of-the-art strategies by significantly reducing the number of STAs in starvation and increasing the cumulative throughput of the WLANs in only a few seconds. As future work, we plan to assess the quality of \texttt{INSPIRE} for a specific class of WLANs where STAs are mobile (e.g., customers in a shopping mall). Another natural follow-up would be to experiment with \texttt{INSPIRE} on real hardware in a testbed. \bibliographystyle{abbrv}
\subsection*{Acknowledgement} D.B. is grateful for the hospitality granted to him during a visit to San Diego State University and acknowledges support from ``CompStar'', a research networking programme of the European Science Foundation, from a grant of the Polish Ministry of Science and Higher Education supporting this programme and from the Russian Fund for Basic Research under grant No. 11-02-01538-a. The work of T.K. was supported in part by ``hadronphysics2'' within the European framework programme FP7. F.W. is supported by the National Science Foundation (USA) under Grant PHY-0854699.
\section{Introduction and statement of main results} The purpose of this paper is to establish new connections between two recent developments in ``wild'' algebraic topology and to provide a new topological perspective on cotorsion-free abelian groups. Specifically, we give a characterization of cotorsion-free abelian groups in terms of homomorphisms from fundamental groups of Peano continua. In the process, we calculate the first homology group of the Griffiths twin cone. Our results are stated in terms of normal subgroups $\pi({\mathcal U},x)$ of the fundamental group $\pi_1(X,x)$ with respect to open covers $\mathcal U$ of $X$ that first appeared in \cite[\S2.5]{Spanier} and have since come into renewed focus: $\pi({\mathcal U},x)$ is generated by all elements of the form $[\alpha\cdot \beta \cdot \alpha^-]$ with a path $\alpha:([0,1],0)\rightarrow (X,x)$, an open set $U\in {\mathcal U}$ such that $\alpha(1)\in U$, and a loop $\beta:([0,1],\{0,1\})\rightarrow (U,\alpha(1))$, where ~$\cdot$ denotes path concatenation and $\alpha^-(t)=\alpha(1-t)$ denotes the reverse of the path $\alpha$. These subgroups have been playing prominent roles in two different contexts: the generalization of slender groups and the generalization of covering spaces. \subsection{Noncommutatively slender groups} A torsion-free abelian group $A$ is said to be {\em slender} if for every homomorphism $h: \mathbb{Z}^\mathbb{N}\rightarrow A$ there is an $n\in \mathbb{N}$ such that $h((c_k)_{k\in \mathbb{N}})=0$ whenever $c_k=0$ for all $k<n$. The slender groups form a subclass of the cotorsion-free groups: an abelian group is slender if and only if it is cotorsion-free and contains no subgroup isomorphic to $\mathbb{Z}^\mathbb{N}$ \cite{Nunke}. Recall that an abelian group $A$ is called {\em cotorsion} provided that whenever $A$ is a subgroup of an abelian group $B$ with $B/A$ torsion-free, we have $B=A\oplus C$ for some subgroup $C$ of $B$. 
In turn, $A$ is called {\em cotorsion-free} if it does not contain a nonzero cotorsion subgroup. The concept of slenderness can be generalized to non-abelian groups by replacing $\mathbb{Z}^\mathbb{N}$ with the fundamental group $\pi_1(\mathbb{H},{\bf o})$ of the Hawaiian Earring $\mathbb{H}$, that is, the subspace of the Euclidean plane comprised of the union of the circles $C_k=\{(x,y)\in \mathbb{R}^2\mid x^2+(y-1/k)^2=1/k^2\}$ ($k\in \mathbb{N}$) accumulating at the origin ${\bf o}=(0,0)$. Accordingly, a group $G$ is called {\em noncommutatively slender} (or {\em n-slender} for short) if for every homomorphism $h:\pi_1(\mathbb{H},{\bf o})\rightarrow G$ there is an $n\in \mathbb{N}$ such that $h([\gamma])=1$ for all loops $\gamma$ in $\bigcup_{k=n}^\infty C_k$. For example, every free group is n-slender \cite{Griffiths,Higman,MorganMorrison}. An abelian group is n-slender if and only if it is slender~\cite{Eda1992}. In general, we have the following characterization \cite{Eda2005}: A group $G$ is n-slender if and only if for every Peano continuum $X$ and every homomorphism $h:\pi_1(X,x)\rightarrow G$, there is an open cover ${\mathcal U}$ of $X$ such that $h(\pi({\mathcal U},x))=1$. The fundamental group $\pi_1(Y,y)$ of a path-connected topological space $Y$ is n\nobreakdash-slender if and only if every homomorphism $h:\pi_1(\mathbb{H},{\bf o})\rightarrow \pi_1(Y,y)$ is induced by a continuous map $f:(\mathbb{H},{\bf o})\rightarrow (Y,y)$ with $h=f_{\#}$ \cite{Eda1992}. The interplay of n-slenderness with free $\sigma$-products, as further investigated in \cite{Eda1998}, forms the foundation for the classification of the homotopy types of one-dimensional spaces by the isomorphism types of their fundamental groups \cite{Eda2010}. \subsection{Generalized covering spaces} Let $X$ be a path-connected topological space, $x\in X$, and let $Cov(X)$ denote the collection of all open covers of $X$. 
Observe that ${\mathcal S}=\{\pi({\mathcal U},x)\mid {\mathcal U}\in Cov(X)\}$ is a collection of normal subgroups of $\pi_1(X,x)$ which is inversely directed by refinement: if $\mathcal V$ refines $\mathcal U$, then $\pi({\mathcal V},x)\leqslant \pi({\mathcal U},x)$. We define \[\pi^s(X,x) =\bigcap_{{\mathcal U}\in Cov(X)} \pi({\mathcal U},x)\] and call this normal subgroup of $\pi_1(X,x)$ the {\em Spanier group} of $X$. Provided $X$ is also locally path-connected, $X$ admits a universal covering space if and only if ${\mathcal S}$ contains a minimal element, i.e., $\pi^s(X,x)=\pi({\mathcal U},x)$ for some $\mathcal U$, and it admits a simply-connected covering space if and only if $\pi({\mathcal U},x)=1$ for some $\mathcal U$ \cite{Spanier}. In general, $\pi^s(X,x)$ lies in the kernel of the natural homomorphism $\pi_1(X,x)\rightarrow \check{\pi}_1(X,x)$ to the first \v{C}ech homotopy group \cite{FZ2007} and it has recently been shown to equal this kernel if $X$ is locally path-connected and paracompact Hausdorff (e.g. if $X$ is a Peano continuum) \cite{BrazasFabel}. This homomorphism is often injective. For example, it is injective for one-dimensional spaces \cite{EdaKawamura1998}, for subsets of surfaces \cite{FZ2005} and for certain trees of manifolds \cite{FG}. Hence, for such $X$, we have $\pi^s(X,x)=1$. The Hawaiian Earring is the prototypical example of a Peano continuum that does not admit a universal covering space: $\pi^s(\mathbb{H},{\bf o})=1$, while $\pi({\mathcal U},{\bf o})\not=1$ for all ${\mathcal U}\in Cov(\mathbb{H})$. 
It was shown in \cite{FZ2007} that every path-connected topological space $X$ admits a {\em generalized covering projection} $p:\widetilde{X}\rightarrow X$ corresponding to $\pi^s(X,x)$, i.e.: \begin{itemize} \item[(i)] $\widetilde{X}$ is path-connected and locally path-connected; \item[(ii)] $p_\#:\pi_1(\widetilde{X},\widetilde{x})\rightarrow \pi_1(X,x)$ is a monomorphism onto $\pi^s(X,x)$; and \item[(iii)] for every map $f:(Y,y)\rightarrow (X,x)$ from a path-connected and locally path-connected space $Y$ with $f_\#(\pi_1(Y,y))\leqslant p_\#(\pi_1(\widetilde{X},\widetilde{x}))$, there is a {\sl unique} lift $\widetilde{f}:(Y,y)\rightarrow (\widetilde{X},\widetilde{x})$ such that $f=p\circ \widetilde{f}$.\end{itemize} These properties uniquely characterize the concept, although they do not guarantee evenly covered neighborhoods or homeomorphic fibers. However, the automorphism group of this generalized covering projection is always naturally isomorphic to the quotient $\pi_1(X,x)/\pi^s(X,x)$ and it acts freely and transitively on every fiber. Moreover, $p$ is open if $X$ is locally path-connected \cite{FZ2007}. When $\pi^s(X,x)=1$, we speak of a {\em generalized universal covering}. In the case of a one-dimensional compact metric space, the resulting generalized universal covering space carries a combinatorial $\mathbb{R}$-tree structure that acts as a {\em generalized} Cayley graph for the fundamental group \cite{FZ2012}. This has given rise to a ``mechanical'' description of the fundamental group of the Menger universal curve in terms of an infinite version of the Towers of Hanoi puzzle \cite{FZ2013}. Generalized universal coverings are also useful in determining the asphericity of other locally non-trivial spaces \cite{F}.
\pagebreak \subsection{Main results} \begin{definition} We call a group $G$ {\em homomorphically Hausdorff} relative to a path-connected topological space $X$ if for every homomorphism $h:\pi_1(X,x)\rightarrow G$, we have $\bigcap_{{\mathcal U}\in Cov(X)} h(\pi({\mathcal U},x))=1$.\end{definition} \begin{remark} The terminology is motivated by considering $\pi_1(X,x)$ as a topological group with basis $\{g\pi({\mathcal U},x)\mid g\in \pi_1(X,x)$, ${\mathcal U}\in Cov(X)\}$, as is done in \cite[\S3.3]{BDLM}, where this topology is called the {\em lasso topology}. (See also \cite{VZ}.) Given a homomorphism $h:\pi_1(X,x)\rightarrow G$, the image $K=h(\pi_1(X,x))$ is a topological group with basis $\{k h(\pi({\mathcal U},x))\mid k\in K$, ${\mathcal U}\in Cov(X)\}$ and $h:\pi_1(X,x)\rightarrow K$ is a quotient map. Then $\bigcap_{{\mathcal U}\in Cov(X)} h(\pi({\mathcal U},x))=1$ if and only if $K$ is Hausdorff. \end{remark} \begin{definition} We call a group $G$ {\em Spanier-trivial} relative to a path-connected space $X$ if for every homomorphism $h:\pi_1(X,x)\rightarrow G$, we have $h(\pi^s(X,x))=1$.\end{definition} \begin{remark} There are some obvious relationships. Every group is Spanier-trivial relative to the Hawaiian Earring $\mathbb{H}$. If $G$ is homomorphically Hausdorff relative to $X$, then $G$ is Spanier-trivial relative to $X$. The fundamental group $\pi_1(X,x)$ is Spanier-trivial relative to $X$ if and only if $\pi^s(X,x)=1$, if and only if $\pi_1(X,x)$ is Hausdorff, in which case $X$ admits a generalized universal covering space and is {\em homotopically Hausdorff}, i.e., no fixed element $g\in \pi_1(X,x)\setminus\{1\}$ can be represented by arbitrarily small loops. 
\end{remark} The prototypical example of a Peano continuum that is not homotopically Hausdorff is the Griffiths twin cone $C(\mathbb{H}_o)\vee C(\mathbb{H}_e)$: it is defined as the wedge of the two cones $C(\mathbb{H}_o)$ and $C(\mathbb{H}_e)$, over the subsets $\mathbb{H}_o=\bigcup_{k\in \mathbb{N}} C_{2k-1}$ and $\mathbb{H}_e=\bigcup_{k\in \mathbb{N}} C_{2k}$ of the Hawaiian Earring $\mathbb{H}$, respectively, joined at the distinguished points of their bases \cite{Griffiths1954}. Since every loop in $C(\mathbb{H}_o)\vee C(\mathbb{H}_e)$ can be homotoped arbitrarily closely to the wedge point $\ast$ (see, e.g., \cite[Lemma~2.1]{Eda1991}), we have the following well-known fact (cf. \cite[\S2.5 Example 18]{Spanier}): $\pi^s(C(\mathbb{H}_o)\vee C(\mathbb{H}_e),\ast)=\pi_1(C(\mathbb{H}_o)\vee C(\mathbb{H}_e),\ast)\not=1$. In particular, if a group $G$ is Spanier-trivial relative to $C(\mathbb{H}_o)\vee C(\mathbb{H}_e)$, then there is no nontrivial homomorphism $h:\pi_1(C(\mathbb{H}_o)\vee C(\mathbb{H}_e),\ast)\rightarrow G$, so that $G$ is also homomorphically Hausdorff relative to $C(\mathbb{H}_o)\vee C(\mathbb{H}_e)$.\vspace{5pt} Here are our main results: \begin{theorem}\label{mainresult} For an abelian group $A$, the following are equivalent: \begin{itemize} \item[(1)] $A$ is cotorsion-free. \item[(2)] $A$ is homomorphically Hausdorff relative to every Peano continuum. \item[(3)] $A$ is homomorphically Hausdorff relative to the Hawaiian Earring $\mathbb{H}$. \item[(4)] $A$ is Spanier-trivial relative to the Griffiths twin cone $C(\mathbb{H}_o)\vee C(\mathbb{H}_e)$. \end{itemize} \end{theorem} The proof of Theorem~\ref{mainresult} will be presented in three separate sections: ``(1)$\Rightarrow$(2)'' in \S\ref{ncot} (Theorem~\ref{general}), ``(4)$\Rightarrow$(1)'' in \S\ref{Griffiths} (Corollary~\ref{ST}), and ``(3)$\Rightarrow$(1)'' in \S\ref{hom} (Corollary~\ref{HE}).
The main work of \S\ref{Griffiths} goes into proving the following: \begin{theorem}\label{GH} $H_1(C(\mathbb{H}_o)\vee C(\mathbb{H}_e))$ is isomorphic to \[\left(\bigoplus_{2^{\aleph_0}}\mathbb{Q}\right) \bigoplus \left(\prod_{p\in \mathbb{P}} A_p\right),\] where $\mathbb{P}$ is the set of all primes and $A_p$ is the $p$-adic completion of $\bigoplus_{2^{\aleph_0}}\mathbb{J}_p$. \end{theorem} \begin{remark} Note that Theorem~\ref{GH} is stated in the format of Kaplansky and that $A_p \cong (\mathbb{J}_p)^\mathbb{N}$ \cite[pp.167--169]{Fuchs}. Moreover, by a theorem due to Balcerzyk (see \cite[VII.42 Exercise 7]{Fuchs}), we have $\big(\bigoplus_{2^{\aleph_0}}\mathbb{Q}\big) \bigoplus \big(\prod_{p\in \mathbb{P}} A_p\big)\cong\mathbb{Z}^\mathbb{N}/\bigoplus_{\mathbb{N}}\mathbb{Z}$. \end{remark} Theorem~\ref{mainresult} leaves open some natural questions regarding non-abelian groups: \begin{question} (a) Is there a group-theoretic characterization for the class of all groups that are homomorphically Hausdorff relative to every Peano continuum? (b) For which classes of groups do we have ``(3)$\Rightarrow$(2)'' or ``(4)$\Rightarrow$(2)''? \end{question} We list some non-abelian examples in Section~\ref{hom}. \section{Some algebraic preliminaries} In this section, we briefly recall some algebraic preliminaries from \cite{Fuchs} concerning infinite abelian groups. The {\em $\mathbb{Z}$-adic completion} of an abelian group $A$ is defined to be the inverse limit $\displaystyle \widehat{A}=\lim_{\longleftarrow} (A/nA,\pi^m_n,n\in \mathbb{N})$ whose homomorphisms $\pi^m_n:A/mA\rightarrow A/nA$ are given by $\pi^m_n(a+mA)=a+nA$ for $n,m\in \mathbb{N}$ with $n|m$. Here, $nA=\{na\mid a\in A\}$. The kernel of the canonical map $A\rightarrow \widehat{A}$, given by $a\mapsto (a+nA)_{n\in \mathbb{N}}$, is called the {\em first Ulm subgroup} of $A$, and is denoted by $U(A)$. 
If we restrict $n$ to powers of a fixed prime $p$ in this inverse limit, we obtain the {\em $p$-adic completion} of $A$. For example, the $p$-adic integers $\mathbb{J}_p$ are defined to be the $p$-adic completion of $\mathbb{Z}$: $\displaystyle \mathbb{J}_p=\lim_{\longleftarrow} \mathbb{Z}/p^k\mathbb{Z}$. An abelian group $D$ is called {\em divisible} if for all $d\in D$ and all $n\in \mathbb{N}$ we have $n|d$, i.e., $d=nc$ for some $c\in D$. Every divisible group is a direct sum of groups each isomorphic to $\mathbb{Q}$ or the quasicyclic group of type $p^\infty$ for some prime $p$ (i.e., the subgroup of the complex multiplicative group $\mathbb{C}^\times$ consisting of all $p^n$-th roots of unity for all $n\geqslant 0$) \cite[(23.1)]{Fuchs}. An abelian group $D$ is divisible if and only if it has the following property: whenever $D$ is a subgroup of an abelian group $A$, then $A=D\oplus C$ for some subgroup $C$ of $A$. Every abelian group $A$ has a maximal divisible subgroup $D$ (which is contained in $U(A)$) and we call $A$ {\em reduced} if it does not contain a nonzero divisible subgroup \cite[(21.3)]{Fuchs}. It is an elementary exercise to show that the maximal divisible subgroup of a torsion-free abelian group is equal to its first Ulm subgroup $U(A)$, since in this case $U(A)$ itself is divisible. A subgroup $A$ of an abelian group $B$ is called {\em pure} if for every $a\in A$, we have $n|a$ in $A$ whenever $n|a$ in $B$. An abelian group $C$ is called {\em algebraically compact} if it has the following property: whenever $C$ is a pure subgroup of an abelian group $A$, then $A=C\oplus D$ for some subgroup $D$ of $A$. Clearly, every divisible abelian group is algebraically compact. Also, every finite abelian group is algebraically compact \cite[(38.1) and (3.1)]{Fuchs}. Every inverse limit of reduced algebraically compact abelian groups is reduced and algebraically compact \cite[(39.4)]{Fuchs}.
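As a concrete illustration (an aside, not needed for the proofs), an element of the $p$-adic completion can be modeled as a coherent sequence of residues. The sketch below, with the illustrative choice $p=3$, checks the coherence condition $a_{k+1}\equiv a_k \pmod{p^k}$ for the canonical image of $-1$.

```python
# An element of J_p modeled as a coherent sequence (a_k) with
# a_k in Z/p^k Z and a_{k+1} ≡ a_k (mod p^k).
# p = 3 and the element -1 are chosen purely for illustration.
p = 3

def canonical_image(n, length=6):
    """Truncated image of the ordinary integer n under Z -> J_p."""
    return [n % p**k for k in range(1, length + 1)]

seq = canonical_image(-1)
# -1 maps to (2, 8, 26, 80, ...): the p-adic integer all of whose
# digits equal p - 1.
for k in range(len(seq) - 1):
    # coherence: seq[k+1] reduces to seq[k] modulo p^(k+1)
    assert (seq[k + 1] - seq[k]) % p**(k + 1) == 0
```
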
In particular, $\widehat{\mathbb{Z}}$ and $\mathbb{J}_p$ are algebraically compact for any prime~$p$. An abelian group $C$ is divisible (respectively algebraically compact) if and only if it is {\em injective} (respectively {\em pure-injective}), i.e., every homomorphism $\tau:A\rightarrow C$ defined on a subgroup (respectively pure subgroup) $A$ of an abelian group $B$ extends to a homomorphism $\phi:B\rightarrow C$ \cite[(24.5) and (38.1)]{Fuchs}. An abelian group is cotorsion if and only if it is the homomorphic image of an algebraically compact abelian group \cite[(54.1)]{Fuchs}. A torsion-free abelian group is cotorsion if and only if it is algebraically compact \cite[(54.5)]{Fuchs}. Every nonzero torsion-free reduced algebraically compact abelian group contains a direct summand isomorphic to $\mathbb{J}_p$ \cite[(40.4)]{Fuchs}. We therefore have the following characterization \cite{GoebelWald}: \begin{theorem}[G\"obel-Wald] \label{goebel} An abelian group $A$ is cotorsion-free if and only if it is torsion-free and contains no subgroups isomorphic to $\mathbb{Q}$ or $\mathbb{J}_p$ for any prime $p$. \end{theorem} An abelian group $A$ is said to be {\em complete modulo the first Ulm subgroup}, or {\em complete mod-U}, if for every sequence $(a_n)_{n\in \mathbb{N}}$ in $A$ with $(n+1)!|(a_{n+1}-a_n)$ for all $n\in \mathbb{N}$, there is an element $a \in A$ such that $(n+1)!|(a-a_n)$ for all $n\in \mathbb{N}$. An abelian group $A$ is algebraically compact if and only if $A$ is complete mod-U and the maximal divisible subgroup of $A$ equals $U(A)$ \cite[Satz 2.2]{DugasGoebel}. Hence, a torsion-free abelian group $A$ is algebraically compact if and only if it is complete mod-U. \section{Cotorsion-free groups and Peano continua}\label{ncot} \begin{theorem}\label{general} Let $A$ be a cotorsion-free abelian group and $X$ a Peano continuum. Then $A$ is homomorphically Hausdorff relative to $X$. \end{theorem} We use tools from \cite{Eda2014}. 
For completeness, we give a self-contained proof. \begin{proof} Suppose, to the contrary, that there is a homomorphism $h:\pi_1(X,x)\rightarrow A$ and an element $0\neq a\in \bigcap_{{\mathcal U}\in Cov(X)} h(\pi({\mathcal U},x))$. Since $A$ is torsion-free, its first Ulm subgroup $U(A)$ equals the maximal divisible subgroup of $A$. However, $A$ is reduced, so that $U(A)=0$. Consider the $\mathbb{Z}$-adic completion $\widehat{\mathbb{Z}}$ of the integers $\mathbb{Z}$: \[\widehat{\mathbb{Z}}=\lim_{\longleftarrow}\left(\mathbb{Z}/2!\,\mathbb{Z}\leftarrow \mathbb{Z}/3!\,\mathbb{Z} \leftarrow \mathbb{Z}/4!\,\mathbb{Z}\leftarrow \cdots\right).\] An element $\widehat{u}\in \widehat{\mathbb{Z}}$ can be represented in the form \[\widehat{u}=(u_1+2!\,\mathbb{Z}, u_1+2!\,u_2+3!\,\mathbb{Z}, u_1+2!\,u_2+3!\,u_3+4!\,\mathbb{Z}, \dots),\] which we abbreviate by a formal sum $\sum_{i=1}^\infty i!\,u_i$. This representation is unique if we require that $u_i\in \{0,1,2, \dots, i\}$. While we do not add such infinite sums, we have \[\sum_{i=1}^\infty i!\,u_i=\sum_{i=1}^\infty i!\,v_i\] in $\widehat{\mathbb{Z}}$ if and only if \[(n+1)!\;\mid\; \sum_{i=1}^n i!\,u_i-\sum_{i=1}^n i!\,v_i\] in $\mathbb{Z}$ for all $n\in \mathbb{N}$. Below, we will show that for each sequence $(u_i)_{i\in\mathbb{N}}$, there is an $[\ell]\in \pi_1(X,x)$ such that \begin{equation}\label{goal} (n+1)!\;\mid\; h([\ell])-\sum_{i=1}^n i!\,u_ia\end{equation} in $A$ for all $n\in \mathbb{N}$. Since $\bigcap_{n\in \mathbb{N}} (n+1)!A=U(A)=0$, we obtain a well-defined homomorphism $\phi:\widehat{\mathbb{Z}}\rightarrow A$ by the formula \[ \phi\left(\sum_{i=1}^\infty i!\,u_i\right)=h([\ell]).\] Note that $\phi({\bf 1})=a\neq 0$, where ${\bf 1}=\sum_{i=1}^\infty i!\,u_i$ with $u_1=1$ and $u_i=0$ for all $i\geqslant 2$. Since $\widehat{\mathbb{Z}}$ is algebraically compact, $\phi(\widehat{\mathbb{Z}})$ is a nonzero cotorsion subgroup of $A$; a contradiction. We now show how to find the loop $\ell$. 
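The factorial-base representation $\sum_{i} i!\,u_i$ with $u_i\in\{0,1,\dots,i\}$ used above is easy to experiment with numerically; the following sketch (an illustrative aside, not part of the proof) converts a nonnegative integer to its factorial digits and back, confirming the uniqueness claim for finite truncations.

```python
import math

def to_factorial_digits(n, length):
    """Digits u_1, ..., u_length with 0 <= u_i <= i and
    n = sum_i i! * u_i, valid for 0 <= n < (length+1)!."""
    digits = []
    for i in range(1, length + 1):
        n, u = divmod(n, i + 1)
        digits.append(u)
    return digits

def from_factorial_digits(digits):
    """Evaluate the finite sum  u_1*1! + u_2*2! + ... ."""
    return sum(math.factorial(i) * u for i, u in enumerate(digits, start=1))

# Round-trip and digit-range check for all n below 5! = 120.
for n in range(120):
    digits = to_factorial_digits(n, 4)
    assert all(0 <= u <= i for i, u in enumerate(digits, start=1))
    assert from_factorial_digits(digits) == n
```
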
Since $X$ is a Peano continuum, there is a continuous surjection $f:[0,1]\rightarrow X$. Let $\mathcal V_1$ be a cover of $X$ by open path-connected subsets of $X$ such that diam$(U)<1$ for every $U\in \mathcal{V}_1$. Choose $k_1\in \mathbb{N}$ such that for every integer $s_1$ with $0\leqslant s_1<k_1$, there is a $U_{(s_1)}\in {\mathcal V}_1$ with $f([\frac{s_1}{k_1},\frac{s_1+1}{k_1}])\subseteq U_{(s_1)}$. Then ${\mathcal U}_1= \{U_{(s_1)}\mid 0\leqslant s_1<k_1\}$ covers $X$. Next, consider a collection ${\mathcal V}_2$ of open path-connected subsets of $X$ such that diam$(U)<1/2$ for all $U\in {\mathcal V}_2$ and such that for each integer $s_1$ with $0\leqslant s_1<k_1$, there is a subcollection ${\mathcal V}_2'$ of ${\mathcal V}_2$ with $f([\frac{s_1}{k_1},\frac{s_1+1}{k_1}])\subseteq \bigcup {\mathcal V}_2'\subseteq U_{(s_1)}$. Choose $k_2\in \mathbb{N}$ such that for all integers $s_1$ and $s_2$ with $0\leqslant s_i<k_i$, there is a $U_{(s_1,s_2)}\in {\mathcal V}_2$ with $f([\frac{s_1}{k_1}+\frac{s_2}{k_1k_2},\frac{s_1}{k_1}+\frac{s_2+1}{k_1k_2}])\subseteq U_{(s_1,s_2)}\subseteq U_{(s_1)}$. Then ${\mathcal U}_2=\{U_{(s_1,s_2)}\mid 0\leqslant s_i<k_i\}$ covers $X$. Inductively, we find a sequence of positive integers $k_n$ and open covers ${\mathcal U}_n=\{U_s\mid s\in S_n\}$ of $X$, where $S_n=\{(s_1,s_2,\cdots, s_n)\mid s_i\in \{0,1,2,\dots,k_i-1\}\}$, with the following properties: \begin{itemize} \item[(i)] For every $U\in {\mathcal U}_n$, we have that $U$ is path-connected and diam$(U)<1/n$; \item[(ii)] For every $s\in S_n$, we have $f([a_s,a_{s^+}])\subseteq U_s$, where $a_s=\sum_{i=1}^ns_i/\prod_{j=1}^i k_j$ and $s^+=(s_1, s_2, \cdots, s_n+1)$; \item[(iii)] For every $(s_1, s_2, \dots, s_n)\in S_n$, we have $U_{(s_1,s_2,\dots,s_n)}\subseteq U_{(s_1, s_2, \dots, s_{n-1})}$. \end{itemize} For each $n\in \mathbb{N}$, we have $u_na\in h(\pi({\mathcal U}_n,x))$. 
Hence, for each $s\in S_n$, there are continuous paths $\alpha_{s,i}:([0,1],0)\rightarrow (X,x)$, say $1\leqslant i \leqslant r_s$, and continuous loops $\beta_{s,i}:([0,1],\{0,1\})\rightarrow (U_s,\alpha_{s,i}(1))$ (possibly constant) such that $u_na=h(\prod_{s\in S_n} \prod_{i=1}^{r_s} [\alpha_{s,i} \cdot \beta_{s,i}\cdot \alpha_{s,i}^-])$. (Note that the order of the product does not matter since $A$ is abelian.) For each $s\in S_n$, choose paths $\gamma_{s,i}:[0,1]\rightarrow U_s$ with $\gamma_{s,i}(0)=f(a_s)$ and $\gamma_{s,i}(1)=\beta_{s,i}(0)$. Let $\ell_s$ be the loop $\gamma_{s,1}\cdot \beta_{s,1}\cdot \gamma_{s,1}^-\cdot \gamma_{s,2}\cdot \beta_{s,2}\cdot \gamma_{s,2}^-\cdot \;\cdots\; \cdot \gamma_{s,r_s}\cdot \beta_{s,r_s}\cdot\gamma_{s,r_s}^-$. Then $\ell_s$ is a loop in $U_s$ based at $f(a_s)$. Let $\psi:\pi_1(X,x)\rightarrow H_1(X)$ denote the Hurewicz homomorphism. Since $A$ is abelian, we have a homomorphism $\widetilde{h}:H_1(X)\rightarrow A$ with $h=\widetilde{h}\circ \psi$. Let $\lfloor \ell_s\rceil\in H_1(X)$ denote the element represented by the cycle $\ell_s$. Then $u_na=h(\prod_{s\in S_n} \prod_{i=1}^{r_s} [\alpha_{s,i} \cdot \beta_{s,i} \cdot \alpha_{s,i}^-])=\widetilde{h}(\sum_{s\in S_n} \lfloor \ell_s\rceil )$. We will now define a continuous path $g:[0,1]\rightarrow X$ from $g(0)=f(0)$ to $g(1)=f(1)$ so that the loop $\ell=g\cdot f^{-}$ has the following property: for each $n\in \mathbb{N}$, $\ell$ runs precisely $i!$ many times through each loop $\ell_s$ with $s\in S_i$ and $1\leqslant i \leqslant n$, and the sum of the remaining subpaths of $\ell$ is homologous to $n!$ copies of the same cycle. Specifically, put $T_n=\{(t_1, t_2, \dots, t_n)\mid 0\leqslant t_i < ik_i\}$ and $C_n=\{(c_1, c_2, \dots, c_n)\mid 0\leqslant c_i < i\}$. 
For each $t=(t_1, t_2, \dots, t_n)\in T_n$, let $s(t)=(s_1, s_2, \dots, s_n)\in S_n$ and $c(t)=(c_1, c_2, \dots,c_n)\in C_n$ be defined by the equation \[t_i=is_i+c_i\] and put \[b_t= \displaystyle \sum_{i=1}^n\frac{1+(3i-1)s_i+3c_i}{\prod_{j=1}^i (3j-1)k_j}.\] Put $\epsilon_n=1/\prod_{i=1}^n (3i-1)k_i$. If we order the elements of $T=\bigcup_{n=1}^\infty T_n$ lexicographically, then the assignment $T\rightarrow [0,1]$ given by $t\mapsto b_t$ is strictly increasing. Moreover, for each $n\in\mathbb{N}$ and each $t\in T_n$ with $c(t)=(c_1, c_2, \dots, c_n)$: \begin{itemize} \item[(i)] We have $(b_t-\epsilon_n,b_t)\cap \{b_{t'}\mid t'\in T\}=\emptyset$ and we define \[g|_{[b_t-\epsilon_n,b_t]}\equiv \ell_{s(t)}.\] \item[(ii)] If $c_n<n-1$, we have $b_{t^+}=b_t+3\epsilon_n$ and $[b_t+\epsilon_n, b_t+2\epsilon_n]\cap \{b_{t'}\mid t'\in T\}=\emptyset$, and we define \[g|_{[b_t+\epsilon_n, b_t+2\epsilon_n]}\equiv (f|_{[a_{s(t)}, a_{s(t)^+}]})^-.\] \end{itemize} Since the loop $\ell_s$ is based at $f(a_s)$, $g$ is well-defined. On one hand, if $x\in [0,1]$ is such that $g(x)$ is not defined, then $x\in \{1\}\cup\bigcap_{n\in \mathbb{N}}\bigcup_{t\in T_n} (b_t,b_t+\epsilon_n)$. On the other hand, for every $n\in \mathbb{N}$, every $t\in T_n$ and every $x\in [b_t,b_t+\epsilon_n]$ such that $g(x)$ is defined, we have $g(x)\in U_{s(t)}$. Hence, $g$ uniquely extends to a continuous function $g:[0,1]\rightarrow X$ with $g(0)=f(0)$ and $g(1)=f(1)$. Now fix $n\in \mathbb{N}$. We wish to decompose the homology cycle $\ell$ into appropriate 1-chains and rearrange them into smaller cycles. For each $t\in T_m$ with $1\leqslant m \leqslant n$, clearly $g|_{[b_t-\epsilon_m,b_t]}\equiv \ell_{s(t)}$ is a cycle itself.
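As a numerical sanity check (an aside, not part of the proof), the strict monotonicity of $t\mapsto b_t$ can be verified for small, hypothetical values of $k_1$ and $k_2$.

```python
from fractions import Fraction

# Check that t -> b_t is strictly increasing for the lexicographic
# order on T_1 ∪ T_2, with hypothetical values k_1 = k_2 = 2.
k = [2, 2]

def b(t):
    """b_t = sum_i (1 + (3i-1) s_i + 3 c_i) / prod_{j<=i} (3j-1) k_j,
    where t_i = i * s_i + c_i with 0 <= c_i < i."""
    val, denom = Fraction(0), 1
    for i, t_i in enumerate(t, start=1):
        s_i, c_i = divmod(t_i, i)
        denom *= (3 * i - 1) * k[i - 1]
        val += Fraction(1 + (3 * i - 1) * s_i + 3 * c_i, denom)
    return val

# T_n = tuples (t_1, ..., t_n) with 0 <= t_i < i * k_i.
T = [(t1,) for t1 in range(1 * k[0])]
T += [(t1, t2) for t1 in range(1 * k[0]) for t2 in range(2 * k[1])]
T.sort()          # lexicographic; a prefix precedes its extensions
vals = [b(t) for t in T]
assert all(p < q for p, q in zip(vals, vals[1:]))
```
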
This leaves us with the 1-chains $g|_{[b_t, b_t+\epsilon_n]}$ for $t\in T_n$, the 1-chains $g|_{[b_t+\epsilon_m,b_t+2\epsilon_m]}=(f|_{[a_{s(t)}, a_{s(t)^+}]})^-$ for $t\in \bigcup_{m=2}^n T_m$ where $c(t)=(c_1, c_2, \dots, c_m)$ and $c_m<m-1$, and the second half of the loop $\ell=g\cdot f^-$. If $t\in T_n$ and $c(t)=(c_1, c_2, \dots, c_n)$ with $c_n<n-1$, we see that $g|_{[b_t, b_t+2\epsilon_n]}\equiv g|_{[b_t, b_t+\epsilon_n]} \cdot (f|_{[a_{s(t)},a_{s(t)^+}]})^-$ is trivially a cycle. In general, however, we need to regroup these 1-chains, which we do next. For each $t\in T_n$, define a sequence $t^\ast$ as follows: First, express $t=(t_1, t_2, \dots, t_n)$ and $c(t)=(c_1, c_2, \dots, c_n)$. If $c_i=i-1$ for all $1\leqslant i \leqslant n$, we let $t^\ast=()$ be the empty sequence; otherwise, there is a unique $m\geqslant 2$ with $c_m<m-1$ and $c_i=i-1$ for all $m< i \leqslant n$, in which case we put $t^\ast=(t_1, t_2, \dots, t_m)$. (Note that $t^\ast=t$ corresponds to the case $c_n<n-1$, which gave us the trivial grouping above.) Observe that for $t,\tilde{t}\in T_n$ with $t^\ast=\tilde{t}^\ast$ and $t\neq \tilde{t}$, we have $s(t)\neq s(\tilde{t})$. More precisely, we have a bijection \begin{equation} \label{bij} \{t\in T_n\mid t^\ast=()\;\}\rightarrow S_n: t\mapsto s(t), \end{equation} and if we put $T'_m=\{t\in T_m\mid c_m<m-1$, where $c(t)=(c_1, c_2,\dots, c_m)\}$, then for every $u\in \bigcup_{m=2}^n T_m'$, we have a bijection \begin{equation} \label{bijm} \{t\in T_n\mid t^\ast=u\}\rightarrow \{s\in S_n\mid a_{s(u)}\leqslant a_s< a_{s(u)^+}\}: t\mapsto s(t).\end{equation} The correspondence (\ref{bij}) allows us to subdivide the second half of $\ell=g\cdot f^-$ and group each piece $(f|_{[a_s,a_{s^+}]})^-$ (for $s\in S_n$) with the 1-chain $g|_{[b_t, b_t+\epsilon_n]}$ where $t\in T_n$, $t^\ast=()$ and $s(t)=s$. 
Similarly, the correspondence (\ref{bijm}) allows us to subdivide every $g|_{[b_u+\epsilon_m, b_u+2\epsilon_m]}\equiv (f|_{[a_{s(u)}, a_{s(u)^+}]})^-$ for which $u\in \bigcup_{m=2}^n T_m'$ and group each piece $(f|_{[a_s,a_{s^+}]})^-$ (for $s\in S_n$, $a_{s(u)}\leqslant a_s< a_{s(u)^+}$) with the 1-chain $g|_{[b_t, b_t+\epsilon_n]}$ where $t\in T_n$, $t^\ast=u$ and $s(t)=s$. Now, for every $s\in S_n$, we have $|\{t\in T_n\mid s(t)=s\}|=n!$ and for all $t,\tilde{t}\in T_n$ with $s(t)=s(\tilde{t})$, we have $g|_{[b_t, b_t+\epsilon_n]}\equiv g|_{[b_{\tilde{t}}, b_{\tilde{t}}+\epsilon_n]}$. Hence, \[\lfloor g\cdot f^-\rceil=\sum_{i=1}^{n} i!\sum_{s\in S_i}\lfloor \ell_s\rceil +n!\sum_{s\in S_n}\lfloor \delta_s\rceil= \sum_{i=1}^{n-1} i!\sum_{s\in S_i}\lfloor \ell_s\rceil +n!\sum_{s\in S_n}\left(\lfloor \ell_s\rceil+\lfloor \delta_s\rceil\right)\] in $H_1(X)$, where $\delta_s\equiv g|_{[b_t, b_t+\epsilon_n]}\cdot (f|_{[a_s, a_{s^+}]})^-$ for $t\in T_n$ with $t^\ast=()$ and $s(t)=s$. Applying $\widetilde{h}$ yields (\ref{goal}). \end{proof} \section{The Griffiths twin cone}\label{Griffiths} In this section, we calculate the first integral homology group of the Griffiths twin cone. Our approach is based on \cite[\S4]{Eda1992}. However, there is a gap in the proof of \cite[Lemma~4.11]{Eda1992}, which is addressed in \cite{Eda2014}. Lemma~\ref{form} below generalizes the corresponding adjustment of \cite[Lemma~4.11]{Eda1992} from \cite{Eda2014} to the situation at hand. In keeping with a geometric perspective, the proofs in this section are framed in terms of the generalized universal covering of the Hawaiian Earring, treating infinitary words implicitly.
Two applications of van Kampen's Theorem (each time cutting off one simply-connected cone tip) yield $\pi_1(C(\mathbb{H}_o)\vee C(\mathbb{H}_e))\cong \pi_1(\mathbb{H})/N_0$, where $N_0$ is the normal subgroup of $\pi_1(\mathbb{H})$ generated by $\pi_1(\mathbb{H}_o)\ast \pi_1(\mathbb{H}_e)$ \cite[\S3]{Griffiths1954}; we take the origin ${\bf o}$ as the base point for $\mathbb{H}$. Hence, $H_1(C(\mathbb{H}_o)\vee C(\mathbb{H}_e)) \cong \pi_1(\mathbb{H})/N_1$, where $N_1=(\pi_1(\mathbb{H}_o)\ast \pi_1(\mathbb{H}_e))\pi_1(\mathbb{H})'$ and $\pi_1(\mathbb{H})'$ denotes the commutator subgroup of $\pi_1(\mathbb{H})$. Consider the generalized universal covering $p:\widetilde{\mathbb{H}}\rightarrow \mathbb{H}$. Then $\widetilde{\mathbb{H}}$ is an $\mathbb{R}$-tree \cite[Example~4.14]{FZ2007}, i.e., $\widetilde{\mathbb{H}}$ is a uniquely arcwise connected geodesic metric space. (Recall that an isometric embedding of a compact interval of the real line into a metric space is called a {\em geodesic}. A {\em geodesic metric space} is a metric space in which every pair of points is connected by a geodesic.) In particular, $\widetilde{\mathbb{H}}$ is simply-connected. Moreover, for every $\widetilde{x}\in p^{-1}({\bf o})\subseteq \widetilde{\mathbb{H}}$ and every $b\in \pi_1(\mathbb{H})$ there is a unique $\widetilde{y}\in p^{-1}({\bf o})$ such that the arc $[\widetilde{x},\widetilde{y}]$ in $\widetilde{\mathbb{H}}$ from $\widetilde{x}$ to $\widetilde{y}$, when projected by $p:\widetilde{\mathbb{H}}\rightarrow \mathbb{H}$, is a loop representing $b$. \begin{lemma}\label{nooverlap} Given arcs $[\widetilde{x}_1,\widetilde{y}_1], [\widetilde{x}_2,\widetilde{y}_2]\subseteq \widetilde{\mathbb{H}}$ whose projections represent the same element of $\pi_1(\mathbb{H})$, there is a homeomorphism $h: \widetilde{\mathbb{H}}\rightarrow \widetilde{\mathbb{H}}$ such that $p\circ h=p$, $h(\widetilde{x}_1)=\widetilde{x}_2$ and $h(\widetilde{y}_1)=\widetilde{y}_2$. 
If, moreover, $[\widetilde{x}_2,\widetilde{y}_2]\subseteq [\widetilde{x}_1,\widetilde{y}_1]$, then $[\widetilde{x}_2,\widetilde{y}_2]=[\widetilde{x}_1,\widetilde{y}_1]$. \end{lemma} \begin{proof} The first part follows from the lifting property of $p:\widetilde{\mathbb{H}}\rightarrow \mathbb{H}$. Considering $h^2$ if necessary, we may assume that $\widetilde{x}_1\leqslant h(\widetilde{x}_1)=\widetilde{x}_2\leqslant\widetilde{y}_2=h(\widetilde{y}_1)\leqslant\widetilde{y}_1$ on $[\widetilde{x}_1,\widetilde{y}_1]$. Then the nondecreasing sequence $\widetilde{x}_1, h(\widetilde{x}_1), h^2(\widetilde{x}_1),h^3(\widetilde{x}_1),\dots$ converges to some $\widetilde{x}\in [\widetilde{x}_1,\widetilde{y}_1]$ and each arc $[h^{i-1}(\widetilde{x}_1),h^{i}(\widetilde{x}_1)]$ projects to the same loop $\alpha$ in $\mathbb{H}$. The continuity of $p|_{[\widetilde{x}_1,\widetilde{y}_1]}$ implies that $\alpha$ is constant. Thus, $\widetilde{x}_1=\widetilde{x}_2$, so that $h=id$. Consequently, $\widetilde{y}_1=\widetilde{y}_2$. Finally, note that $h^2=id$ implies $h=id$, since the automorphism group of $p:\widetilde{\mathbb{H}}\rightarrow \mathbb{H}$ is isomorphic to $\pi_1(\mathbb{H})$, and hence isomorphic to a subgroup of the torsion-free group $\check{\pi}_1(\mathbb{H})$. \end{proof} Suppose we have two elements $a$ and $b$ in $\pi_1(\mathbb{H})$, represented by the projections of arcs $[\widetilde{x},\widetilde{y}]$ and $[\widetilde{y},\widetilde{z}]$ in $\widetilde{\mathbb{H}}$, respectively. Then the projection of the concatenation $[\widetilde{x},\widetilde{y}]\cup [\widetilde{y},\widetilde{z}]$ represents the product $ab$, but it is in general not an arc. 
Since $\widetilde{\mathbb{H}}$ is uniquely arcwise connected, there is a unique $\widetilde{t}\in [\widetilde{x},\widetilde{y}]\cap [\widetilde{y},\widetilde{z}]$ such that the arc $[\widetilde{x},\widetilde{z}]$, whose projection also represents $ab$, satisfies $[\widetilde{x},\widetilde{z}]=[\widetilde{x},\widetilde{t}]\cup [\widetilde{t},\widetilde{z}]$. Note that $\widetilde{t}\in p^{-1}({\bf o})$, since $p|_{\widetilde{\mathbb{H}}\setminus p^{-1}({\bf o})}:\widetilde{\mathbb{H}}\setminus p^{-1}({\bf o})\rightarrow \mathbb{H}\setminus\{{\bf o}\}$ is a local homeomorphism. Thus, we have: \begin{lemma}\label{tree} Let $g_1,g_2,\dots, g_s\in \pi_1(\mathbb{H})$ and let $f:[a,b]\rightarrow \widetilde{\mathbb{H}}$ be a path such that $f|_{[t_{i-1},t_i]}$ is a geodesic with $g_i=[p\circ f|_{[t_{i-1},t_i]}]$ for some subdivision $\{t_0, t_1, \dots, t_s\}$ of $[a,b]$. Then $Im(f)$ is homeomorphic to a finite simplicial tree whose vertices lie in $p^{-1}({\bf o})$. Standard edge cancellation yields a geodesic $\widehat{f}:[\widehat{a},\widehat{b}]\rightarrow \widetilde{\mathbb{H}}$ with $g_1g_2\cdots g_s=[p\circ \widehat{f}]$. \end{lemma} A somewhat more delicate algorithm is necessary if we are given a product $g_1g_2\cdots g_s\in (\pi_1(\mathbb{H}_o)\ast \pi_1(\mathbb{H}_e))\pi_1(\mathbb{H})'$, whose factors $g_i$ either lie in $\pi_1(\mathbb{H}_o)$ or in $\pi_1(\mathbb{H}_e)$, or else are paired by inverses, and we wish to end up with a similar pairing structure for the geodesic $\widehat{f}$. This is carried out in the proof of the following lemma, which is a generalization of the corresponding result for $\pi_1(\mathbb{H})'$ in \cite{Eda2014}: \begin{lemma}\label{form} Let $g\in(\pi_1(\mathbb{H}_o)\ast \pi_1(\mathbb{H}_e))\pi_1(\mathbb{H})'$.
Then every geodesic $f:[a,b]\rightarrow \widetilde{\mathbb{H}}$ with $g=[p\circ f]$ has the following property: $(\ast)$ There is a subdivision $\{t_0, t_1,\dots,t_s\}$ of $[a,b]$ defining loops $f_i=p\circ f|_{[t_{i-1},t_i]}$ in $\mathbb{H}$ based at $\bf o$, i.e., $g=[f_1][f_2]\cdots [f_s]$, along with a partition $F_o, F_e, C, \overline{C}$ of $\{1,2,\dots,s\}$ and a bijection $\varphi:C\rightarrow \overline{C}$ such that \begin{itemize} \item[(i)] for every $i\in F_o$, $f_i$ lies in $\mathbb{H}_o$; \item[(ii)] for every $i\in F_e$, $f_i$ lies in $\mathbb{H}_e$; \item[(iii)] for every $i\in C$, $f_{\varphi(i)}\equiv f_i^-$. \end{itemize} \end{lemma} \begin{proof} We may assume $g\not=1$. Let $\widetilde{x}\in p^{-1}({\bf o})$. Since $g\in(\pi_1(\mathbb{H}_o)\ast \pi_1(\mathbb{H}_e))\pi_1(\mathbb{H})'$, there exists a path $f:[a,b]\rightarrow \widetilde{\mathbb{H}}$ with $f(a)=\widetilde{x}$ and $g=[p\circ f]$, satisfying $(\ast)$, such that each $f|_{[t_{i-1},t_i]}$ is a geodesic. It suffices to show that the (unique) geodesic in $\widetilde{\mathbb{H}}$ from $f(a)$ to $f(b)$ also satisfies $(\ast)$. To this end, we let $r$ denote the number of indices $i\in \{1,2,\dots, s-1\}$ for which $f|_{[t_{i-1},t_{i+1}]}$ is not a geodesic. If $r=0$, then $f$ is a geodesic and we are done. Otherwise, we recursively subject $f$ to the transformation described in the following paragraph, which replaces $f$ with $f|_{[a,a_1]\cup[b_1,b]}$ for suitable $a_1<t_i<b_1$ with $f|_{[a_1,t_i]}\equiv f|_{[t_i,b_1]}^-$, while retaining property ($\ast$) on a refined subdivision (which includes $a_1$ and $b_1$), eventually reducing the pair $(r,s)$ in the lexicographical ordering. Let $i$ be the largest index for which $f|_{[t_{i-1},t_{i+1}]}$ is not a geodesic. Let $a_1\in [t_{i-1},t_i)$ and $b_1\in (t_i,b]$ be the unique points such that the arc $[f(t_{i-1}),f(b)]$ from $f(t_{i-1})$ to $f(b)$ in $\widetilde{\mathbb{H}}$ equals $[f(t_{i-1}),f(a_1)]\cup [f(b_1),f(b)]$.
In particular, $f(a_1)=f(b_1)\in p^{-1}({\bf o})$ and $f|_{[a_1,t_i]}\equiv f|_{[t_i,b_1]}^-$. Say, $b_1\in (t_{j-1},t_j]$. Define $f_{(i,1)}=p\circ f|_{[t_{i-1},a_1]}$, $f_{(i,2)}=p\circ f|_{[a_1,t_i]}$, $f_{(j,1)}=p\circ f|_{[t_{j-1},b_1]}$, and $f_{(j,2)}=p\circ f|_{[b_1,t_j]}$. Then $f_i=f_{(i,1)}f_{(i,2)}$, $f_j=f_{(j,1)}f_{(j,2)}$, and $f_{(i,2)}=(f_{i+1}f_{i+2}\cdots f_{j-1} f_{(j,1)})^-$. If $a_1=t_{i-1}$, then $f_{(i,1)}$ is degenerate and we treat it as empty. Likewise, if $b_1= t_j$, we treat $f_{(j,2)}$ as empty. If $i\in C$, then there is a (unique) subdivision $\eta=\{t_{\varphi(i)-1}<t^\ast_{i+1}<t^\ast_{i+2}<\cdots< t^\ast_j<t_{\varphi(i)}\}$ such that $f(t^\ast_k)=f(t_k)$ for all $i<k<j$ and $f(t^\ast_j)=f(b_1)$. (By Lemma~\ref{nooverlap}, either $\varphi(i)<i$ or $\varphi(i)\geqslant j$.) If $j\in \overline{C}$, then there is a (unique) $t^{\ast\ast}_j \in [t_{\varphi^{-1}(j)-1}, t_{\varphi^{-1}(j)})$ such that $f(t^{\ast\ast}_j)=f(b_1)$. Analogous statements hold if $i\in \overline{C}$ or $j\in C$. Provided $\varphi(i)\not= j$, we may thus reduce the domain of $f$ to $[a,a_1]\cup [b_1,b]$ and adjust the concatenation $f_1 f_2\cdots f_s$ to the new subdivision by replacing $f_i$ with $f_{(i,1)}$, replacing $f_{j}$ with $f_{(j,2)}$, then eliminating $f_{i+1}, f_{i+2},\dots,f_{j-1}$, then replacing $f_{\varphi(i)}$ with $\widehat{f}_{i+1}\widehat{f}_{i+2}\cdots \widehat{f}_{j-1}\widehat{f}_{(j,1)}\widehat{f}_{(i,1)}^-$ (where $\widehat{f}_m$ denotes the copy of $f_m$ read along the subdivision $\eta$, so that $\widehat{f}_m\equiv f_m$ for all $m$), and finally replacing $f_{\varphi^{-1}(j)}$ (which could by now be one of the new $\widehat{f}_{i+1}, \widehat{f}_{i+2}, \dots, \widehat{f}_{j-1}$) with $\widehat{f}_{(j,2)}^-\widehat{f}_{(j,1)}^-$. Observe that if $i\in F_o\cup F_e$, then $f_i, f_{i+1}, \dots, f_{j-1}$ and $f_{(j,1)}$ all lie in $\mathbb{H}_o$ or $\mathbb{H}_e$, so that there is no need to introduce $\widehat{f}_{i+1}\widehat{f}_{i+2}\cdots \widehat{f}_{j-1}\widehat{f}_{(j,1)}\widehat{f}_{(i,1)}^-$.
Likewise, if $j\in F_o\cup F_e$, then there is no need to introduce $\widehat{f}_{(j,2)}^-\widehat{f}_{(j,1)}^-$. If $\varphi(i)=j$, we instead consider the common subdivision $\eta\cup\{b_1\}$ of $[t_{j-1},t_j]$. While in this case $[t_{j-1},b_1]$ (the domain of $f_{(j,1)}$) might overlap with $[t^\ast_{j-1},t^\ast_j]$ (the domain of $\widehat{f}_{(j,1)}$), the former cannot properly contain the latter by Lemma~\ref{nooverlap}. Consequently, we can transfer the points $\eta\cap [t_{j-1},b_1]$ to $[b_1,t_j]$ along the correspondence of $f_{(j,1)}$ with $\widehat{f}_{(j,1)}$ (repeatedly if necessary) and find a $k\in \{i+1,i+2,\dots,j-1\}$ with which we may adjust $f_1 f_2\cdots f_s$ by replacing $f_i$ with $f_{(i,1)}$, then eliminating $f_{i+1}, f_{i+2},\dots,f_{j-1}$, then replacing $f_{\varphi(i)}=f_{j}$ with $\widehat{f}_{(k,2)} \widehat{f}_{k+1}\widehat{f}_{k+2}\cdots \widehat{f}_{j-1}\widehat{f}_{i+1}\widehat{f}_{i+2}\cdots \widehat{f}_{k-1}\widehat{f}_{(k,1)}\widehat{f}_{(i,1)}^-$, and finally (if applicable) replacing $f_{\varphi(k)}$ or $f_{\varphi^{-1}(k)}$ (or its new copy) by $\widehat{f}_{(k,2)}^-\widehat{f}_{(k,1)}^-$. Then $f|_{[a,a_1]\cup[b_1,b]}$ satisfies $(\ast)$ with respect to the new subdivision. While this transformation potentially increases $s$ by as much as 2, it may decrease~$r$. Specifically, if $f_{(i,1)}$ is not empty, then $r$ decreases by 1. So, let us assume that $f_{(i,1)}$ is empty. If $f_{(j,2)}$ is also empty, then $r$ either decreases (by 1 or 2) or remains constant, but $s$ definitely decreases. So, assume that $f_{(j,2)}$ is not empty. If $i=1$, then $r$ decreases from 1 to 0. So, assume that $i>1$. Now the outcome depends on whether $f|_{[t_{i-2},t_{i-1}]\cup [b_1,b]}$ is a geodesic or not. If it is, then $r$ decreases by 1. If it is not, then $r$ remains constant and $s$ does not increase. 
It now suffices to show that this final scenario cannot occur indefinitely, leaving both $r$ and $s$ unchanged, as we iterate the transformation for $f|_{[a,a_1]\cup[b_1,b]}$ and its subdivision. Suppose, to the contrary, that it does. We then have sequences $a<\cdots <a_{n+1}<a_n<\cdots <b_n<b_{n+1}<\cdots <b$\linebreak with subdivisions $\xi_n$ of $[a,a_n]\cup [b_n,b]$ into $s$ intervals, each obtained from the previous one by the above transformation, such that $f|_{[a,a_n]\cup [b_n,b]}$ satisfies~$(\ast)$ with pairings $\varphi_n:C_n\rightarrow \overline{C}_n$. In particular, $[a_{n+1},a_n]$ is a subinterval of $\xi_n$ and $f|_{[a_{n+1},a_n]}\equiv f|_{[b_n,b_{n+1}]}^-$. Put $a_\infty=\inf\{a_n\mid n\in \mathbb{N}\}$ and $b_\infty=\sup\{b_n\mid n\in \mathbb{N}\}$. We will call a subinterval of $\xi_n$ an {\em inside} interval if it is contained in ${\mathcal I}=[a_\infty,b_\infty]$, an {\em outside} interval if it is contained in ${\mathcal O}=[a,a_\infty]\cup [b_\infty,b]$, and an {\em overlapping} interval otherwise. Since $\xi_n\cap \left([a,a_{n+1}]\cup [b_{n+1},b]\right)\subseteq \xi_{n+1}$ and since the number of subintervals of $\xi_n$ is equal to $s$ for all $n$, the number of points in $\xi_n\cap {\mathcal O}$ is nondecreasing and bounded by~$s$. So, we may assume that this number is constant. In particular, an overlapping interval $[u,v]$ of $\xi_n$ cannot be paired by $\varphi_n$ with an outside interval $[u',v']$ of $\xi_n$. (Otherwise, they would be paired subintervals of $\xi_{n+1}, \xi_{n+2}, \cdots, \xi_m$ until $\xi_m\cap(u,v)\not=\emptyset$ for some minimal $m>n$. But then $\xi_m\cap(u',v')\not=\emptyset$, according to our transformation rule, implying that $|\xi_m\cap {\mathcal O}|>|\xi_{m-1}\cap {\mathcal O}|$; a contradiction.) It also follows that if a lower (respectively upper) overlapping interval persists, for all $n$, then its left (respectively right) endpoint is constant. 
Therefore, a persistent overlapping lower interval $[u,v_n]$ cannot be paired with an inside interval for infinitely many $n$. (Otherwise, there are $c,d\in [u,a_\infty]$ and $c_m,d_m\in [a_\infty,a_m]\cup [b_m,b_\infty]$ with $f(c_m)=f(c)\not=f(d)=f(d_m)$ for all $m$. However, since $f(a_m)=f(b_m)$ converges to $f(a_\infty)=f(b_\infty)$, so do $f(c_m)$ and $f(d_m)$.) The same is true for a persistent overlapping upper interval $[u_n,v]$. So, we may assume that overlapping intervals are not paired with inside intervals. (Note that an overlapping interval might cease to exist, in which case it will not reappear.) Similarly, we may assume that outside intervals are not paired with inside intervals. In summary, we now assume that inside, outside, and overlapping intervals, if paired, are paired with an interval of the same kind. We claim that for every point $c_n$ contained in an inside interval of $\xi_n$, there is a point $c_{n+1}$ contained in an inside interval of $\xi_{n+1}$ such that $f(c_n)=f(c_{n+1})$. In order to show this, we may assume that $c_n\in [a_{n+1},a_n]\cup [b_n,b_{n+1}]$. (Otherwise we can take $c_{n+1}=c_n$.) We will use the notation from our transformation rule for $C=C_n$. Since $f_{(i,1)}$ is empty, but $f_{(j,2)}$ is not, and since the number of subintervals of $\xi_n$ and $\xi_{n+1}$ are equal, we have $j\in C\cup \overline{C}$. Observe that the domain of $f_i$, i.e., $[a_{n+1},a_n]$, is an inside interval of $\xi_n$, while the domain of $f_j$ might be an upper overlapping interval, in which case the two would not be paired by $\varphi$. First suppose that $i\in C$. Then the desired point $c_{n+1}$ can be found in the domain of $f_{\varphi(i)}$. The same is true if $i\in \overline{C}$. Consequently, the only case that needs attention is when $i\in F_o\cup F_e$. Say, $i\in F_o$. If $j>i+1$, then both $f_i$ and $f_{i+1}$ lie in $\mathbb{H}_o$, so that combining their domains will reduce the pair $(r,s)$ to $(r,s-1)$ for $\xi_n$. 
So, we may assume that $j=i+1$. In this case, the domain of $f_{(j,1)}$ equals $[b_n,b_{n+1}]$ and $f_i\equiv f_{(j,1)}^-$. Since $|\xi_{n+1}\cap {\mathcal O}|=|\xi_{n}\cap {\mathcal O}|$, the domains of $f_{(j,1)}$ and $\widehat{f}_{(j,1)}$ must both be inside intervals of $\xi_{n+1}$, so that we can find $c_{n+1}$ in the domain of $\widehat{f}_{(j,1)}$. Hence, starting with any two points $c_n$ and $d_n$ in any inside interval of some $\xi_n$ such that $f(c_n)\not= f(d_n)$, we obtain sequences $(c_m)_{m\geqslant n}$ and $(d_m)_{m\geqslant n}$ such that $c_m$ and $d_m$ are contained in (possibly different) inside intervals of $\xi_m$ with $f(c_m)=f(c_n)$ and $f(d_m)=f(d_n)$. This leads to the same contradiction as before. \end{proof} \begin{proof}[Proof of Theorem~\ref{GH}] As in \cite{EdaKawamura2000}, it suffices to show that $H_1(C(\mathbb{H}_o)\vee C(\mathbb{H}_e))$ (i)~is torsion-free; (ii)~is algebraically compact; (iii)~contains a subgroup isomorphic to $\bigoplus_{2^{\aleph_0}}\mathbb{Q}$; and (iv)~contains a pure subgroup isomorphic to $\bigoplus_{2^{\aleph_0}}\mathbb{Z}$. The fact that $A=H_1(C(\mathbb{H}_o)\vee C(\mathbb{H}_e))$ is torsion-free follows from the formula $H_1(\mathbb{H}) \cong H_1(\mathbb{H}_o)\oplus H_1(\mathbb{H}_e)\oplus A$ \cite[Theorem~1.2]{Eda1991} and the fact that $H_1(\mathbb{H})$ is torsion-free \cite[Corollary~2.2]{EdaKawamura1998}. (Note that the torsion-freeness of $H_1(\mathbb{H})$ also follows from the fact that $\pi_1(\mathbb{H})$ is isomorphic to a subgroup of $\check{\pi}_1(\mathbb{H})$, all of whose finitely generated subgroups are free.) It follows from \cite[Corollary~9.2]{BZ} (or by adapting the proof of \cite[Theorem~4.14]{Eda1992}) that the maximal divisible subgroup of $A$ is isomorphic to $\bigoplus_{2^{\aleph_0}}\mathbb{Q}$. By \cite[Theorem~1.1]{Eda1991}, $A$ is complete mod-U and hence algebraically compact. Therefore, Lemma~\ref{sort} below completes the proof.
\end{proof} \begin{lemma}\label{sort} $H_1(C(\mathbb{H}_o)\vee C(\mathbb{H}_e))$ contains a pure subgroup isomorphic to $\bigoplus_{2^{\aleph_0}}\mathbb{Z}$. \end{lemma} \begin{proof} Consider the element $a\in \pi_1(\mathbb{H})$ represented by $\ell_1\ell_2\ell_3\cdots=\ell:[0,1]\rightarrow \mathbb{H}$, where $\ell_i=\ell|_{[(i-1)/i,i/(i+1)]}$ canonically winds once around the circle $C_i$ of $\mathbb{H}$ for each $i$. We first show that $aN_1$ generates a non-trivial pure subgroup of $\pi_1(\mathbb{H})/N_1$. To this end, suppose that $a^m=b^nc$ for some $b\in \pi_1(\mathbb{H})$, $c\in N_1$, $m\geqslant 1$ and $n\geqslant 0$. We wish to show that $n>0$ and $n|m$. Since $\pi_1(\mathbb{H})/N_1$ is torsion-free, it suffices to show this for $m=1$. (For the general case, express $d=\gcd(m,n)$ as $d=\alpha m + \beta n$ with $\alpha, \beta\in \mathbb{Z}$ and write $m=m_0d$, $n=n_0d$. Then $a^{m_0}=b^{n_0}c_0$ for some $c_0\in N_1$. Hence, $a=(b^\alpha a^\beta)^{n_0}c_0^\alpha$, implying $n_0=1$ and $n|m$.) Let $[\widetilde{x}_a,\widetilde{y}_a], [\widetilde{x}_{b},\widetilde{y}_{b}]$ and $[\widetilde{x}_c,\widetilde{y}_c]$ be arcs of $\widetilde{\mathbb{H}}$ whose projections represent $a, b$ and $c$, respectively. Let $\kappa_i^+(c)$, respectively $\kappa_i^-(c)$, denote the number of subarcs $[\widetilde{x},\widetilde{y}]$ of $[\widetilde{x}_c,\widetilde{y}_c]$ which project to $\ell_i\ell_{i+1}\ell_{i+2}\cdots$, respectively $(\ell_i\ell_{i+1}\ell_{i+2}\cdots)^-$. (Observe that two such arcs cannot overlap. Hence, as in the proof of Lemma~\ref{nooverlap}, this number is finite.) Analogously, we define $\kappa_i^\pm(b)$ and $\kappa_i^\pm(a)$. Applying Lemma~\ref{tree} to $c=b^{-n}a$, we obtain $\kappa_i^+(c)-\kappa_i^-(c)=1-n\left(\kappa_i^+(b)-\kappa_i^-(b)\right)$, for sufficiently large $i$. However, by Lemma~\ref{form}, $\kappa_i^+(c)-\kappa_i^-(c)=0$, for sufficiently large $i$. Hence $n=1$. We now vary this construction.
Choose a collection $\{I_{\alpha}\mid \alpha\in\{1,2\}^\mathbb{N}\}$ of infinite subsets of $\mathbb{N}$ such that $I_\alpha\cap I_\beta$ is finite for all $\alpha\neq \beta$; e.g., for $\alpha=(s_k)_{k\in\mathbb{N}}$, take $I_{\alpha}=\{\sum_{k=1}^n s_k2^{n-k}\mid n\in \mathbb{N}\}$. Let $a_\alpha\in \pi_1(\mathbb{H})$ be the element represented by the loop $\ell_{2i_1-1}\ell_{2i_1}\ell_{2i_2-1}\ell_{2i_2}\ell_{2i_3-1}\ell_{2i_3}\cdots$, where $i_1<i_2<i_3<\cdots$ is an enumeration of $I_\alpha$. Then the set $\{a_\alpha N_1\mid \alpha\in \{1,2\}^\mathbb{N}\}$ is linearly independent in $\pi_1(\mathbb{H})/N_1$. Indeed, suppose $a_{\alpha_1}^{m_1}a_{\alpha_2}^{m_2}\cdots a_{\alpha_k}^{m_k}N_1=N_1$ for some $\alpha_i\in \{1,2\}^\mathbb{N}$ and $m_i\in \mathbb{Z}$. Choose $M\in \mathbb{N}$ so that $(I_{\alpha_i}\setminus\{1,2,\cdots,M\})\cap (I_{\alpha_j}\setminus\{1,2,\cdots,M\})=\emptyset$ for all $i\neq j$. Put $J_i=I_{\alpha_i}\setminus\{1,2,\cdots,M\}$ and let $q_i:\mathbb{H}\rightarrow \bigcup_{t\in J_i} C_t\subseteq \mathbb{H}$ denote retraction with $q_i(x)={\bf o}$ for $x\not\in \bigcup_{t\in J_i} C_t$. Then $q_i$ induces a homomorphism $q_{i\#}:\pi_1(\mathbb{H})/N_1\rightarrow \pi_1(\mathbb{H})/N_1$, which we may apply to the equation $a_{\alpha_1}^{m_1}a_{\alpha_2}^{m_2}\cdots a_{\alpha_k}^{m_k}N_1=N_1$ to obtain $q_{i\#}(a_{\alpha_i})^{m_i}N_1=N_1$. We conclude that $m_i=0$ for all $i\in\{1,2,\cdots,k\}$. In order to show that $\left<a_\alpha N_1\mid \alpha\in \{1,2\}^\mathbb{N}\right>\cong \bigoplus_{2^{\aleph_0}}\mathbb{Z}$ is a pure subgroup of $\pi_1(\mathbb{H})/N_1$, suppose that $a_{\alpha_1}^{m_1}a_{\alpha_2}^{m_2}\cdots a_{\alpha_k}^{m_k}N_1=b^nN_1$ for some $\alpha_i\in\{1,2\}^\mathbb{N}$, $m_i\in \mathbb{Z}\setminus\{0\}$, $b\in \pi_1(\mathbb{H})$ and $n\in \mathbb{N}$. Then $q_{i\#}(a_{\alpha_i})^{m_i}N_1=q_{i\#}(b)^nN_1$ for all $i$. As in the proof of the purity of $aN_1$ above, we conclude that $n|m_i$ for all $i$. 
Hence, $n|a_{\alpha_1}^{m_1}a_{\alpha_2}^{m_2}\cdots a_{\alpha_k}^{m_k}N_1$ in $\pi_1(\mathbb{H})/N_1$. \end{proof} \begin{corollary}\label{ST} Let $A$ be an abelian group which is Spanier-trivial relative to the Griffiths twin cone $C(\mathbb{H}_o)\vee C(\mathbb{H}_e)$. Then $A$ is cotorsion-free. \end{corollary} \begin{proof} By Theorem~\ref{GH}, $\pi^s(C(\mathbb{H}_o)\vee C(\mathbb{H}_e),\ast)=\pi_1(C(\mathbb{H}_o)\vee C(\mathbb{H}_e),\ast)$ can be mapped homomorphically onto $\mathbb{Q}$, $\mathbb{J}_p$ and $\mathbb{Z}/p\mathbb{Z}$ for any prime $p$. It follows that $A$ cannot contain a subgroup isomorphic to any of these groups and is consequently cotorsion-free by Theorem~\ref{goebel}.\end{proof} \begin{remark} By \cite[Theorem~1.2]{KarimovRepovs}, the first integral homology group of the Harmonic Archipelago is also isomorphic to the group described in Theorem 4.2 above. Indeed, it can be calculated in much the same way as we computed the first integral homology group of the Griffiths twin cone. We leave the details to the reader. \end{remark} \section{The Hawaiian Earring and product properties}\label{hom} \begin{proposition}\label{onto} Suppose $A$ is either $\mathbb{Q}$, $\mathbb{Z}/p\mathbb{Z}$ or $\mathbb{J}_p$ for some prime $p$. Then there is a homomorphism $h: \pi_1(\mathbb{H},{\bf o})\rightarrow A$ with $h(\pi({\mathcal U},{\bf o}))=A$ for all ${\mathcal U}\in Cov(\mathbb{H})$. \end{proposition} \begin{proof} First suppose $A=\mathbb{J}_p$. Consider the surjective homomorphism \[\tau:\prod_{m\in \mathbb{N}}\mathbb{Z}\rightarrow \mathbb{J}_p=\lim_{\leftarrow}(\mathbb{Z}/p\mathbb{Z}\leftarrow \mathbb{Z}/p^2\mathbb{Z}\leftarrow \mathbb{Z}/p^3\mathbb{Z}\leftarrow \cdots)\] given by $\tau(a_1, a_2, a_3, \dots)=([a_1],[a_1+pa_2],[a_1+pa_2+p^2a_3],\dots)$. 
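The surjection $\tau$ defined above can be made concrete with a short computational sketch. This is our own illustration, not part of the paper: we truncate $\tau$ to its first $k$ coordinates, viewing an element of $\mathbb{J}_p$ as a coherent sequence of residues mod $p, p^2, \dots, p^k$, and exhibit a preimage for any such coherent sequence (which is what makes $\tau$ surjective).

```python
# Illustrative sketch (our own, with our own function names) of the map
# tau(a_1, a_2, ...) = ([a_1], [a_1 + p*a_2], [a_1 + p*a_2 + p^2*a_3], ...),
# truncated to the first k coordinates of J_p.

def tau(a, p, k):
    """First k coordinates of tau: the partial sums
    a_1 + p*a_2 + ... + p^(j-1)*a_j reduced mod p^j, for j = 1..k."""
    coords, s = [], 0
    for j in range(1, k + 1):
        s += p ** (j - 1) * a[j - 1]
        coords.append(s % p ** j)
    return coords

def preimage(coords, p):
    """Witness for surjectivity: given a coherent sequence of residues
    (each coordinate reduces to the previous one), greedily recover
    p-adic digits a with tau(a, p, len(coords)) == coords."""
    a, s = [], 0
    for j, c in enumerate(coords, start=1):
        a.append(((c - s) // p ** (j - 1)) % p)  # next p-adic digit
        s += p ** (j - 1) * a[-1]
    return a
```

For instance, with $p=5$ the tuple $(3,1,4,1,0,0,\dots)$ is sent to the coherent sequence $(3, 8, 108, 233, \dots)$, and `preimage` recovers the digits back.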
Since the direct sum $\bigoplus_{n\in\mathbb{N}} \prod_{m\in\mathbb{N}} \mathbb{Z}$ is a pure subgroup of the direct product $\prod_{n\in\mathbb{N}} \prod_{m\in\mathbb{N}} \mathbb{Z}$ and since $\mathbb{J}_p$ is pure-injective, we can extend $\bigoplus_{n\in \mathbb{N}}\tau: \bigoplus_{n\in\mathbb{N}} \prod_{m\in\mathbb{N}} \mathbb{Z} \rightarrow \mathbb{J}_p$ to a homomorphism $ \sigma: \prod_{n\in\mathbb{N}} \prod_{m\in\mathbb{N}} \mathbb{Z}\rightarrow \mathbb{J}_p$. Choose a bijection $\rho:\mathbb{N}\times \mathbb{N}\rightarrow \mathbb{N}$ such that for every $k\in \mathbb{N}$ there is an $n\in \mathbb{N}$ with $\rho^{-1}(\{1,2,\dots,k\})\cap \left(\{n\}\times \mathbb{N}\right)=\emptyset$, e.g., the diagonal enumeration $\rho(n,m)= n+(n+m-1)(n+m-2)/2$. Consider the isomorphism $\varphi:\prod_{k\in \mathbb{N}}\mathbb{Z}\rightarrow \prod_{n\in\mathbb{N}}\prod_{m\in\mathbb{N}} \mathbb{Z}$, defined by $\varphi((a_k)_{k\in\mathbb{N}})=(a_{\rho(n,m)})_{(n,m)\in\mathbb{N}\times\mathbb{N}}$. Put $\phi=\sigma\circ \varphi:\prod_{k\in \mathbb{N}}\mathbb{Z} \rightarrow \mathbb{J}_p$. Then $\phi(\prod_{k=n}^\infty \mathbb{Z})=\mathbb{J}_p$ for all $n\in \mathbb{N}$, since by the choice of $\rho$, the image $\varphi(\prod_{k=n}^\infty \mathbb{Z})$ contains the $n'$-th copy of $\prod_{m\in\mathbb{N}}\mathbb{Z}$ for some $n'\in \mathbb{N}$, on which $\sigma$ agrees with $\tau$. Define $r_k:\mathbb{H}\rightarrow C_k$ by $r_k(x)=x$ if $x\in C_k$ and $r_k(x)={\bf o}$ if $x\not\in C_k$. Let $\mu:\pi_1(\mathbb{H},{\bf o})\rightarrow \prod_{k\in \mathbb{N}} \mathbb{Z}$ be the composition of $(r_{k\#})_{k\in \mathbb{N}}:\pi_1(\mathbb{H},{\bf o})\rightarrow \prod_{k\in\mathbb{N}} \pi_1(C_k,{\bf o})$ and the canonical isomorphism $\prod_{k\in\mathbb{N}} \pi_1(C_k,{\bf o})\cong \prod_{k\in \mathbb{N}} \mathbb{Z}$. Put $h=\phi\circ \mu: \pi_1(\mathbb{H},{\bf o})\rightarrow \mathbb{J}_p$. Observe that $\mu(incl_\#(\pi_1(\bigcup_{k=n}^\infty C_k,{\bf o})))= \prod_{k=n}^\infty \mathbb{Z}$ for all $n\in \mathbb{N}$, because $\mu(\ell_1^{a_1}\ell_2^{a_2}\ell_3^{a_3}\cdots) =(a_1,a_2,a_3,\dots)$ for $a_i\in \mathbb{Z}$, where $\ell_i$ is as in the proof of Lemma~\ref{sort}, parametrized appropriately.
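The diagonal enumeration $\rho$ used above, together with its row-avoidance property, admits a quick computational check. This is our own illustration, not part of the proof; the helper `untouched_row` is a name we introduce.

```python
# Our illustrative check: rho(n, m) = n + (n+m-1)(n+m-2)/2 enumerates
# N x N diagonally (1-indexed), and for every k some row {n} x N is
# disjoint from rho^{-1}({1, ..., k}).

def rho(n, m):
    # The diagonal d = n + m - 1 occupies the consecutive values
    # d(d-1)/2 + 1, ..., d(d+1)/2, so rho is a bijection N x N -> N.
    return n + (n + m - 1) * (n + m - 2) // 2

def untouched_row(k):
    # Since rho(n, 1) >= n and rho(n, m) is increasing in m,
    # row n = k + 1 misses all of {1, ..., k}.
    return k + 1
```

For example, the first nine diagonals (the pairs with $n+m\leqslant 10$) are sent bijectively onto $\{1,\dots,45\}$, and row $46$ avoids all of them.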
Let $\mathcal U\in Cov(\mathbb{H})$. Choose $n\in \mathbb{N}$ with $incl_\#(\pi_1(\bigcup_{k=n}^\infty C_k,{\bf o}))\leqslant \pi({\mathcal U},{\bf o})$. Then $\mathbb{J}_p=\phi(\prod_{k=n}^\infty \mathbb{Z}) = \phi \circ \mu(incl_\#(\pi_1(\bigcup_{k=n}^\infty C_k, {\bf o})))\leqslant \phi \circ \mu(\pi({\mathcal U},{\bf o}))=h(\pi({\mathcal U},{\bf o}))$. This also covers the case $A=\mathbb{Z}/p\mathbb{Z}$, since there is an epimorphism $\mathbb{J}_p\rightarrow \mathbb{Z}/p\mathbb{Z}$. If $A=\mathbb{Q}$, we instead start with $\tau:\bigoplus_{k\in \mathbb{N}}\mathbb{Z} \rightarrow \mathbb{Q}$, given by $\tau({\bf e}_k)=1/k$, where ${\bf e}_k(n)=0$ for $n\not=k$ and ${\bf e}_k(k)=1$, and extend it to $\phi:\prod_{k\in \mathbb{N}}\mathbb{Z}\rightarrow \mathbb{Q}$, noting that $\mathbb{Q}$ is injective. Then $\phi(\prod_{k=n}^\infty\mathbb{Z})=\mathbb{Q}$ for all $n\in \mathbb{N}$ and we can proceed as before. \end{proof} \begin{corollary}\label{HE} Let $A$ be an abelian group which is homomorphically Hausdorff relative to the Hawaiian Earring $\mathbb{H}$. Then $A$ is cotorsion-free. \end{corollary} \begin{proof} Combine Proposition~\ref{onto} with Theorem~\ref{goebel}. \end{proof} \begin{remark} Implicitly contained in our argument is also a proof of the following characterization, appearing in \cite[\S7]{CC1}, extracted by Dugas, G\"obel, Wald et al.\@ from Nunke's characterization of slender groups \cite{Nunke}: An abelian group $A$ is cotorsion-free if and only if for every homomorphism $\phi:\mathbb{Z}^\mathbb{N}\rightarrow A$, we have $\bigcap_{n\in \mathbb{N}} \phi(\prod_{k=n}^\infty \mathbb{Z})=0$. 
\end{remark} While n-slenderness is preserved under restricted direct products (and free products) \cite[Theorem~3.6]{Eda1992}, the following holds for arbitrary direct products: \begin{proposition}\label{prod} A product $\prod_{i\in I} G_i$ is homomorphically Hausdorff or Spanier-trivial relative to $X$ if and only if each group $G_i$ has the corresponding property. \end{proposition} \begin{proof} First suppose that each $G_i$ is homomorphically Hausdorff relative to $X$. Let $h:\pi_1(X,x)\rightarrow \prod_{i\in I} G_i$ be a homomorphism and consider the projections $p_j: \prod_{i\in I} G_i\rightarrow G_j$. Then $p_j(\bigcap_{{\mathcal U}\in Cov(X)} h(\pi({\mathcal U},x)))\leqslant \bigcap_{{\mathcal U}\in Cov(X)} p_j\circ h(\pi({\mathcal U},x))=1$ for every $j\in I$, so that $\bigcap_{{\mathcal U}\in Cov(X)} h(\pi({\mathcal U},x))\leqslant \bigcap_{j\in I} ker(p_j)=1$. The argument is analogous if each $G_i$ is Spanier-trivial relative to $X$. The converse is obvious. \end{proof} Let us call a group $G$ {\em residually n-slender} if for every $g\in G\setminus\{1\}$ there is a homomorphism $h:G\rightarrow S$ to an n-slender group $S$ such that $h(g)\not=1$. (See \cite{CE}.) \begin{corollary}\label{residually} Every residually n-slender group is homomorphically Hausdorff relative to every Peano continuum. \end{corollary} \begin{proof} Since a group is residually n-slender if and only if it is isomorphic to a subgroup of a direct product of n-slender groups, this follows from Proposition~\ref{prod}. \end{proof} \begin{remark} Since every residually free group is, in particular, residually n-slender, Corollary~\ref{residually} applies to the following examples of fundamental groups: If $X$ is a compact one-dimensional metric space or a proper compact subset of a surface, then $\pi_1(X,\ast)$ is isomorphic to a subgroup of a direct product of free groups of finite rank \cite{CF,FZ2005} and thus residually free. 
Suppose $Y=\prod_{n\in \mathbb{N}} Y_n$ is a product of countably many one-dimensional Peano continua $Y_n$, each of which is not semilocally simply-connected at any point. Then $\pi_1(Y,\ast)\cong \prod_{n\in\mathbb{N}} \pi_1(Y_n,\ast)$ is a residually free group from whose isomorphism type one can recover the space $Y$ by a construction described in \cite{CE}. This construction, in turn, makes use of the fact that each $\pi_1(Y_n,\ast)$ is residually n-slender. Examples for $Y_n$ include the Menger curve, the Sierpi\'{n}ski curve, and the Sierpi\'{n}ski triangle. Lastly, if $Z$ is a well-balanced tree of surfaces, then $\pi_1(Z,\ast)$ is isomorphic to a subgroup of a direct product of fundamental groups of closed surfaces \cite{FG} each of which is residually free. (Note that, with the exception of the three non-orientable surfaces of smallest genus, surface groups are fully residually free. See, for example, \cite[Lemma~5.5.11]{Chiswell}.) Hence, $\pi_1(Z,\ast)$ is residually free. Examples for $Z$ include the Pontryagin sphere (a densely iterated connected sum of tori) and the Pontryagin surface $\Pi_2$ (a densely iterated connected sum of real projective planes). \end{remark} \begin{remark} It has recently been shown that certain amalgamated free products and certain HNN extensions of n-slender groups are n-slender, among them the Baumslag-Solitar groups \cite{Nakamura}. \end{remark} \noindent {\bf Acknowledgements.} This research was partially supported by the Grant-in-Aid for Scientific Research (C) of Japan (No. 20540097 and 23540110 to Katsuya Eda) and by a grant from the Simons Foundation (No. 245042 to Hanspeter Fischer).
\section{Introduction} The IceCube Neutrino Observatory \cite{Aartsen:2016nxy} is the world’s largest neutrino telescope, located at the South Pole. IceCube consists of more than 5000 large-area photomultipliers (PMTs) inside Digital Optical Modules (DOMs) attached to 86 cable-strings instrumenting roughly 1\,\si{\kilo\meter^3} of ice. These PMTs detect Cherenkov light emitted by charged particles produced in the interactions of neutrinos in the surrounding ice or nearby bedrock. IceCube was optimized to investigate high-energy neutrinos in the TeV to PeV energy range. The DeepCore extension \cite{Abbasi:2012} lowered the energy threshold to $\sim$10\,GeV with a more densely instrumented section in the center of the detector. This threshold will be further reduced by the upcoming IceCube Upgrade \cite{Ishihara:2019uL}. It will consist of seven cable-strings with up to 120 optical modules each, embedded near the bottom center of the existing IceCube Neutrino Observatory. Two new types of optical modules, the Multi-PMT Digital Optical Module (mDOM) \cite{mDOM} and the Dual optical sensors in an Ellipsoid Glass for Gen2 (D-Egg) \cite{DEgg}, are to be installed. The mDOM (see figure \ref{fig:froggy}) features a matrix of 24 3-inch PMTs \cite{mDOM_PMTs} as well as several calibration devices, such as fast LEDs \cite{Cal_LEDs}, CCD cameras \cite{Camera} and other on-board sensors. Since the PMTs are the primary detection units of the mDOM, they need to be tested for functionality before integration into modules. \begin{figure}[htb] \centering \includegraphics[width=.45\textwidth]{figures/froggy_cropped.png} \caption{The first completed mDOM. Each hemisphere houses twelve PMTs as well as various calibration devices. Picture by Matthias Schust, DESY.} \label{fig:froggy} \end{figure} \section{Requirements of the PMT Testing Facility} The main goal of the testing facility is to ensure the functionality of each PMT before it is installed in the sensor modules.
Additionally, the required high voltage, photo-detection efficiency, charge response linearity, darkpulse rate, timing resolution, and probabilities of pre-, late-, and afterpulses will be calibrated. These tests will be performed at a temperature of \SI{-20}{\celsius}, the typical ambient temperature of the deep South Pole ice \cite{Price7844}. In total, over 10,000 PMTs have to be tested for the completion of all mDOM modules. In order to keep up with production timelines, the throughput of the testing facility has to be a few hundred PMTs per week. This requires testing many PMTs at the same time as well as fast turnaround times between measurements. Looking ahead, the adaptability of the testing facility to other types of PMTs is important as preparations for the construction of IceCube Gen2 \cite{Gen2_DOM} proceed. \section{Mechanical Design and Implementation} PMT testing facilities have been implemented at two sites: RWTH Aachen University and TU Dortmund University. Both follow the conceptual design shown in figure \ref{fig:setup-schematic}. A dark, temperature-controlled room is needed for the tests. In Aachen, a commercial refrigeration container is used (see figure \ref{fig:container}). In Dortmund, a climate-controlled chamber is used (see figure \ref{fig:cooling room}). Special care was taken to ensure that the rooms are completely light- and air-tight. Light leaking into the setup would lead to an enhanced darkpulse rate, while air leaking in can cause high air humidity, icing of components, and condensation during temperature cycles. \begin{figure}[htbp] \centering \includegraphics[width=0.75\textwidth]{figures/schematic.png} \caption{Schematic of the test facility design. The photomultipliers are mounted in a rack inside a cooling unit. They are supplied with high voltage by active bases \cite{mDOM}. The readout and control of the bases and PMTs is done with mDOM mainboards \cite{mDOM}.
Processing, storage, and analysis of data is performed by a computer running a local database. The computer controls all other hardware devices at the facility, e.g. the light source system. All electronics but the PMTs are kept outside the cooling unit to avoid temperature- and humidity-dependent effects. Light is routed into the cooling unit through an optical fiber that ends in a PTFE integration sphere from \cite{POCAM}, acting as a diffuser and illuminating the PMTs.} \label{fig:setup-schematic} \end{figure} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/container.png} \caption{Shipping refrigeration container in the Aachen setup. It is equipped with additional insulation and cable entries. The container can cool down to \SI{-25}{\celsius}.} \label{fig:container} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/container_dortmund.jpg} \caption{Off-the-shelf food industry freezing container in the Dortmund setup. It is modified with two cable entries and a light-absorbing cloth on the inside.} \label{fig:cooling room} \end{subfigure} \caption{Solutions for climate-controlled testing environments in Aachen and Dortmund.} \label{fig:cooling-units} \end{figure} Inside the cooling rooms, a rack carrying eight slide-in bars (see figure \ref{fig:vogelstange}), each holding twelve PMTs, is installed. In total, 96 PMTs can be installed in each rack at once. Using this modular approach, PMTs can be mounted outside the testing environment and quickly installed into the rack. The PMTs, PMT holders, bars, as well as each rack position have unique barcode identifiers attached that enable a direct association of the PMT serial number with the readout channel. \begin{figure}[htbp] \centering \includegraphics[width=0.75\textwidth]{figures/vogelstange.png} \caption{A slide-in bar with twelve mounted PMTs.
The PMT mounts are 3D-printed and fastened on an aluminum extrusion profile. In red, the active bases \cite{mDOM} can be seen. Barcodes enable a direct association of PMT serial number and readout channel in the setup.} \label{fig:vogelstange} \end{figure} Each PMT connects to one of four mDOM mainboards \cite{mDOM} in an electronics rack outside the cooling room. Each mainboard is responsible for controlling and reading out 24 PMTs. This includes setting the high voltage for each PMT, recording waveforms, and reading scaler rates. The mainboards as well as all other hardware devices are controlled by a computer that also processes and stores data received from the mainboards. Each testing cycle is composed of several measurements and analyses. The configuration for the testing is stored in a locally hosted database. The measured data is also stored in the database after processing, i.e. after baseline correction and feature extraction on waveforms. These data can instantaneously be accessed by fully automated analysis procedures that produce results, which are stored in a collaboration-wide database. This ensures that all testing and calibration data is readily available and remains in permanent storage. \section{Light Source System} Testing the PMTs requires fast light sources of different wavelengths and intensities. Both facilities use the same design, but slightly different implementations (see figures \ref{fig:thorlabs-lightsource} and \ref{fig:lego-lightsource}). Outside the climate-controlled room, LEDs (\SI{375}{\nano\meter}, \SI{505}{\nano\meter}) are mounted in a selection wheel that is driven by a stepper motor. The LEDs are electrically driven by nanosecond-resolution pulsers \cite{mrongen}, which also provide a synchronous trigger signal to the mainboards. Light from the selected LED passes through an additional wheel with optional neutral-density filters and is coupled into an optical fiber.
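As a point of reference for the intensity range accessible with such neutral-density filters (this is the generic filter relation, not a specification of the filters actually installed), a filter of optical density $\mathrm{OD}$ attenuates the transmitted intensity as
\begin{equation}
I = I_0 \cdot 10^{-\mathrm{OD}},
\end{equation}
so that, e.g., an $\mathrm{OD}=2$ filter reduces the light level by a factor of 100, and stacking filters adds their optical densities. This allows the light yield at the PMTs to be stepped over several orders of magnitude, down to the single-photon level.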
The fiber is routed inside the test setup and ends in an integrating PTFE sphere adapted from \cite{POCAM}, acting as a diffuser. \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/lightsources.png} \caption{Light source system in the Aachen setup. LEDs driven by short electrical pulses \cite{mrongen} are coupled into an optical fiber. Single LEDs can be selected with a motor-driven wheel. Optionally, filters can be driven into the light's path.} \label{fig:thorlabs-lightsource} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/lightsource_dortmund.png} \caption{Light source system in the Dortmund setup. Single LEDs can be coupled into the fiber by driving the coupling lens in front of the LED. It is possible to install up to five different LEDs.\\} \label{fig:lego-lightsource} \end{subfigure} \caption{Light source solutions in Aachen and Dortmund.} \label{fig:lightsources} \end{figure} To test the stability of the light yield, repeated measurements of the number of observed photons with a PMT have been performed \cite{Marco}. Between each measurement, the selection apparatus was driven away from and back to the LED in question. Figure \ref{fig:lightsource-stability} shows that there are a few measurements with decreased brightness. In measurements sensitive to the photon yield, this can be corrected for by including known reference PMTs into the setup. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{figures/flat_field_lightsource_stability.PNG} \caption{Light source brightness study. The teal markers show repeated rate measurements of the same PMT (BA0) after driving the light source selection wheel before each measurement. Shown are the number of detected pulses in 50,000 recorded waveforms triggered on the light source output. 
The deviation of the number of detected pulses is small compared to the total number of pulses ($\approx 1.5\%$). Notably, there is a large deviation in one measurement. This is probably an effect of non-optimal coupling of light into the optical fiber. Such deviations can later on be corrected for using well-calibrated reference PMTs.} \label{fig:lightsource-stability} \end{figure} For measurements of the relative photo-detection efficiency, the relative amount of light at each position of the rack has to be known and corrected for. With perfectly isotropic light, we expect the light yield to fall off with the inverse square of the distance to the diffuser. Additionally, the effective area of the PMTs decreases with larger angles between the PMT's symmetry axis and the incident light. In total, the relative light yield can be described by equation \ref{eq:flat-field-correction}, where $C_\mathrm{rel.}$ is the relative light yield, $R_0$ is the distance from the diffuser to the center of the PMT rack, $x$ is the distance of the PMT position to the center of the rack, and $a$ and $c$ are free parameters: \begin{equation} C_{\mathrm{rel.}} = a \cdot \left({\frac{R_0}{\sqrt{R_0^2 + x^2}}}\right)^c. \label{eq:flat-field-correction} \end{equation} A calibration measurement of this dependency is shown in figure \ref{fig:flat-field-correction}. Four PMTs were moved around the rack to cover all positions in the rack. The exponent $c$ in the combined fit yields $2.64\pm0.07$. This parametrization will be used for the flat-field correction in photo-detection efficiency measurements. \begin{figure}[htb!] \centering \includegraphics[width=0.75\textwidth]{figures/flat_field_distance_correction.PNG} \caption{Flat-field correction calibration. Each set of colored markers shows measurements performed with a different PMT (BAx) in multiple positions in the PMT rack.
The number of detected pulses relative to the mean number of detected pulses of two reference PMTs in the center of the rack is shown as a function of the distance from the center of the grid. The number of detected pulses has been corrected for the individual relative photo-detection efficiencies of the PMTs. The red line shows a combined fit of equation \ref{eq:flat-field-correction} to the data.} \label{fig:flat-field-correction} \end{figure} \section{Results from Test Runs} For the construction of the first ten modules, several hundred PMTs have undergone testing with a reduced set of measurements and analyses. Fully automated measurements of Single-Photo-Electron (SPE) spectra and calibration of the target high voltage are implemented. Figure \ref{fig:example-spe-plot} shows such a recorded SPE spectrum. The recorded SPE spectrum can be well described by a simple fit function (see caption). From the fit results, the gain and other characteristics of the PMT are extracted. In total, only a single PMT out of the first 320 was rejected because of a faulty solder joint. \begin{figure}[htbp!] \centering \includegraphics[width=0.8\textwidth]{figures/spe_spectrum_96V_DM00168.png} \caption{Single-Photo-Electron spectrum of PMT DM00168 at a voltage of \SI{96}{\volt} between dynodes. The blue histogram shows the measured charges with an external trigger from the light source system. The colored lines show a fit to the histogram with three components: the orange line is a Gaussian describing the pedestal region, the red line is a combination of Gaussians for the 1\,PE and 2\,PE peaks, and the green line is an exponential with a cutoff at zero describing badly amplified photo-electrons. One can directly read off the gain of the PMT from the position of the SPE peak at a value of \SI{1.48}{\pico\coulomb}.} \label{fig:example-spe-plot} \end{figure} \clearpage \section{Summary and Outlook} The construction of mDOMs for the IceCube Upgrade requires fast testing of many PMTs.
A design for a facility testing up to 96 PMTs simultaneously has been implemented at two sites: RWTH Aachen University and TU Dortmund University. Custom light source systems have been constructed and tested at both sites. A modular mounting system of slide-in bars reduces the time between testing cycles. The testing procedures, data processing, analysis, and storage have been fully automated. The mounts can easily be adapted to fit PMTs of other sizes, such that the design can also be used for future IceCube Gen2 testing. In the near future, these testing facilities will be used to test every single PMT that will be integrated into the mDOM modules. \bibliographystyle{ICRC}
\section{Introduction} Dormant black holes (BHs) with masses in excess of $10^6\,\rm M_{\odot}$ are found to be ubiquitous in bright spheroids today \cite{kormendy95,richstone98}. This local population comprises the dead remnants of a bright past, when the same BHs were powering the most luminous quasars. The recent discovery of scaling relations between the BH mass and the properties of the stellar bulge \cite{ferrarese05}, likely set via AGN feedback, has prompted the study of BH growth and evolution in the general framework of galaxy evolution. According to the current $\Lambda$CDM paradigm for structure formation, galaxies interact and merge as their dark matter halos assemble in a hierarchical fashion \cite{springel06}, and BHs incorporated through mergers into larger and more massive systems evolve concordantly: major central gas inflows are triggered during the violence of a merger, which feed the BH and power AGN activity \cite{hopkins08}. {\it In this context, close BH pairs form as an inescapable outcome of galaxy evolution} \cite{kazantzidis05}. In our local universe, NGC 6240 and Mrk 463 provide compelling evidence of ongoing gas-rich mergers where two active nuclei are present, still at large separations of $\sim$ kpc. Whether these BHs will spiral inward, form a BH {\it binary}, and {\it coalesce} under the emission of gravitational waves is the matter of concern here. The {\it Laser Interferometer Space Antenna} ({\it LISA}) is expected to record these extraordinary events out to redshift $z\sim 20$, providing not only a firm test of General Relativity, but also a view, albeit indirect, of galaxy clustering, together with an extremely accurate measure of the BH mass and spin \cite{vecchio04,volonteri03,sesana05}.
Galaxy mergers cover cosmological volumes (hundreds of kpc on a side) whereas BH coalescences probe volumes from a few parsecs (when the BHs bind in nuclear discs) down to an astronomical unit and less. Thus, following a merger, how can BHs reach the gravitational wave inspiral regime? Our aim is to study the BH dynamics in gas-rich environments, and in particular the transit from the state P of {\it pairing}, when each BH moves individually inside the time-varying potential of the colliding galaxies, to state B, when the two BHs dynamically couple their motion to form a {\it binary}. After all transient inflows have subsided and the new galaxy has formed, the BH binary, surrounded by a massive circum-nuclear disc, enters phase H where it hardens to smaller separations under the action of gas-dynamical and gravitational torques, ideally down to $\sim 10^{-3}$ pc (for a typical BH mass of $10^6\,\rm M_{\odot}$) where the gravitational wave domain G starts. There are a number of key questions to address: (i) How does the transition from state P $\to$ B depend on the gas thermodynamics and level of dissipation? (ii) In the grand nuclear disc inside the remnant galaxy, how do orbits evolve? (iii) Do the BHs reach the gravitational wave driven domain? (iv) During the hardening through phase B $\to$ H, do the BHs collect substantial amounts of gas to form cold individual discs? \section{Pairing of Massive Black Holes in gas-rich mergers} There are two types of mergers: major mergers between galaxies of comparable mass (1:1 mass ratio), and minor mergers between galaxies with smaller mass ratios (1:10 typically). {\sl Major mergers} have been studied with N-Body/SPH simulations with unprecedented force resolution (down to $\sim 1$ pc, using splitting techniques and {\it GASOLINE} as integrator; see \cite{mayer07}) to describe the collision of two galaxies similar to the Milky Way.
Each galaxy comprises a central BH of $2.6\times 10^6\,\rm M_{\odot}$, a stellar bulge, a disc of stars and gas (with mass fraction of 10\% relative to the total disc mass), and an extended spherical dark matter halo (of $10^{12}\,\rm M_{\odot}$) with NFW density profile (see \cite{mayer07} for details). \begin{figure} \begin{center} \includegraphics[width=0.70\textwidth]{FIGURE1.ps} \end{center} \caption[]{ The different stages of the merger between two identical disc galaxies seen face--on. The color-coded density maps of the gas component are shown using a logarithmic scale, with brighter colors for higher densities. The four panels to the left show the large-scale evolution at different times (obtained with a force resolution of 100 pc). The boxes are 120 kpc on a side (top) and 60 kpc on a side (bottom) and the density ranges between $10^{-2}$ atoms cm$^{-3}$ and $10^{2}$ atoms cm$^{-3}$. During the interaction, tidal forces tear the galactic discs apart, generating spectacular tidal tails and plumes. The upper panel to the right shows a zoom in view of the two discs before they merge into a single rotating nuclear gaseous disc embedded in a series of large-scale ring-like structures (middle panel). The boxes are now 8 kpc on a side and the density ranges between $10^{-2}$ atoms cm$^{-3}$ and $10^{5}$ atoms cm$^{ -3}$. The two bottom panels, with a gray color scale, show the detail of the inner 160 pc of the middle panel (here the force resolution is 2 pc); the nuclear disc is shown edge-on (left) and face-on (right), and the two BHs are also shown in the face-on image.} \end{figure} The galaxies first experience two close fly-bys: in this early phase, the cuspy potentials of both galaxies are deep enough to allow for the survival of their baryonic cores that sink under the action of dynamical friction against the dark matter background, dragging together the two BHs. 
Strong spiral patterns appear in both the stellar and gaseous discs, and as the merger continues, non-axisymmetric torques redistribute angular momentum: as much as 60\% of the gas originally present in each disc of the parent galaxies is funneled inside the inner few hundred parsecs of the individual cores. This is illustrated in the upper right panel of Figure 1, where the enlarged color-coded density map of the gas is shown, 5.1 Gyr after the onset of the collision. {\it Each BH is surrounded by a rotating stellar and gaseous disc of mass $\sim 4 \times 10^8$ M$_{\odot}$ and size of a few hundred parsecs.} The two discs and BHs are just 6 kpc apart, and at the same time a starburst of $\sim 30\,\rm M_{\odot}\,$yr$^{-1}$ has set in within the central region of the ongoing merger. At this stage, the simulation is stopped and restarted with increased resolution (of $\sim 2$ pc). In order to simulate the environment of a starburst, where cool gas coexists with the warm phase heated by stellar feedback, the pressure is set equal to $P=(\gamma-1)\rho u$ with $\gamma=7/5$ (according to fits by \cite{spaans00}). The internal energy per particle $u$ evolves with time as a result of $P\,dV$ work and shock heating modeled via the standard Monaghan artificial viscosity term. With time, the two baryonic discs get closer and closer and eventually merge into {\it a single massive self-gravitating, rotationally supported nuclear disc}, now weighing $3\times 10^9\,\rm M_{\odot}$. This is illustrated again in Figure 1 (mid and bottom right panels). The gaseous disc, dominant in mass, is surrounded by a background of dark matter and stars distributed in a spheroid. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{FIGURE2.ps} \end{center} \caption[]{Orbital separation of the two BHs as a function of time during the last stage of the galaxy merger. The orbit of the pair is eccentric until the end of the simulation.
The two peaks at scales of tens of parsecs at around $t=5.1213$ Gyr mark the end of the phase during which the two holes are still embedded in two distinct gaseous cores. Until this point the orbit is the result of the relative motion of the cores combined with the motion of each BH relative to its surrounding core, explaining the presence of more than one orbital frequency. The inset shows the details of the last part of the orbital evolution, which takes place in the nuclear disc arising from the merger of the two cores. The binary stops shrinking when the separation approaches the force resolution limit (2 pc).} \label{fig:birth} \end{figure} The BHs have been dragged together toward the dynamical center of the merging galaxies, and move inside the grand disc spiraling inward under the action of gas-dynamical friction. In less than a million years after the merger, they eventually bind gravitationally to each other, as the mass of the gas enclosed within their separation drops below the mass of the binary. It is the gas that controls the orbital decay, not the stars. The transition between state P and B is now completed, as illustrated in Figure 2. Dynamical friction against the stellar background would bring the two BHs this close only on a longer timescale, $\sim 10^8$ yr \cite{mayer07}. This short sinking timescale comes from the combination of the fact that gas densities are much higher than stellar densities in the center, and that in the mildly supersonic regime the drag against a gaseous background is stronger than that against a stellar background with the same density \cite{ostriker99}. It is worth noting that the transition P $\to$ B is sensitive to the gas thermodynamics: BH coupling is delayed if the gas were to follow an adiabatic thermal evolution with $\gamma=5/3$ \cite{mayer07}. \begin{figure} \begin{center} \includegraphics[width=0.7\textwidth]{FIGURE3.eps} \end{center} \caption{BH separation as a function of time in four of our simulations.
Upper row: BH distance in 1:4 mergers (for galaxy models at $z=0$); the thin and thick lines refer to simulations without gas and with gas ($f_{\rm gas}=0.1$), respectively. Lower row: BH distance for the 1:10 mergers (for galaxy models at $z=3$); the thin and thick lines refer to simulations without gas and with gas ($f_{\rm gas}=0.3$), respectively. The insets show the color-coded density maps of stars (left) and gas (right), 4 kpc on a side. The large dot on the BH curve indicates the time at which the two snapshots are recorded. Colors code the range $10^{-2}-1$~M$_\odot$~pc$^{-3}$ for stars, and $10^{-3}-10^{-1}$~M$_\odot$~pc$^{-3}$ for the gas. These snapshots are representative of the average behavior of the satellites during the first two orbits. Note the formation of a strong bar for the 1:4 minor merger, which is absent for the 1:10 case, and the truncation of the gaseous disc in the 1:10 satellite caused by ram pressure stripping. \label{fig:minor}} \end{figure} Does BH pairing proceed similarly in {\sl minor mergers}, predicted to be common events in the high-redshift universe \cite{volonteri03} and of primary importance for {\it LISA} \cite{sesana05}? To answer this question we extended our numerical investigation to 1:4 mergers at $z=0$, and 1:10 mergers at $z=3$, assuming a roughly constant $M_{\rm BH}-M_{\rm bulge}$ relation between these cosmic epochs, and initial galaxy models that are replicas of the Milky Way suitably rescaled in mass and size (see \cite{callegari08} for details). The masses of the two BHs in the $z=3$ runs are thus $6\times10^5$ and $6\times10^4\,M_\odot$, and their expected inspiral and coalescence signal falls nicely in the {\it LISA} sensitivity window \cite{sesana05}. It is found that minor mergers differ profoundly from major mergers, as noticed early on by \cite{governato94}. The encounter is closer to an {\it accretion} process whereby the less massive galaxy is dramatically damaged during its sinking into the primary.
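As a rough consistency check of the claim that these BH masses fall in the {\it LISA} band (an order-of-magnitude estimate added here for illustration, not a result of the simulations), the gravitational wave frequency at the innermost stable circular orbit of a binary of total mass $M$ is
\begin{equation}
f_{\rm ISCO} = \frac{c^3}{6^{3/2}\pi G M} \simeq 4.4\times 10^{3} \left(\frac{\rm M_{\odot}}{M}\right)\,{\rm Hz},
\end{equation}
which for $M \simeq 6.6\times 10^5\,\rm M_{\odot}$ gives $f_{\rm ISCO} \simeq 7\times 10^{-3}$ Hz in the source frame, or $\simeq 2\times 10^{-3}$ Hz once redshifted from $z=3$, comfortably within the $\sim 10^{-4}-10^{-1}$ Hz sensitivity window of {\it LISA}.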
In our recent study \cite{callegari08}, pairing is found to be very sensitive to the details of the physical processes involved. In all cases without gas (i.e., in ``dry'' runs) the formation of a close BH pair is aborted: tidal shocks progressively lower the density in the satellite until it dissolves, leaving a wandering black hole in the remnant. This is illustrated in Figure 3 (thin lines), where the BH relative distance remains as large as 1-10 kpc. Only with the inclusion of a cold gaseous disc component and star formation does the outcome of the merger change significantly. Figure 3 depicts the stellar and gaseous components of the satellite to show their profound structural damage (the primary is not shown). \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{FIGURE4.ps} \end{center} \caption[]{BH separation as a function of time. One BH is set at the center of a rotationally supported disc, while the secondary BH moves initially on an eccentric ($e=0.7$) orbit, either co- or counter-rotating. Both BH masses are $10^6\,\rm M_{\odot}$. The gaseous disc of mass $M_{\rm disc}=10^8\,\rm M_{\odot}$ is described by a Mestel profile. The solid (blue) line refers to the co-rotating case, while the (red) dashed line refers to the counter-rotating case (Dotti et al. in preparation).} \label{fig:dotti-orbits} \end{figure} For mass ratios 1:4 at $z=0$, bar instabilities excited at pericentric passages funnel gas (present in a fraction $f_{\rm gas}$ of the total disc mass) to the center of the satellite, steepening its potential well and allowing its survival against tidal disruption down to the center of the merger remnant. As shown in Figure 3 (thick lines), the BHs pair down to $\sim100$~pc scales (the force resolution limit), creating conditions favorable to the formation of a BH binary.
The smaller satellites (with 1:10 mass ratio at $z=3$) are more strongly affected by both internal star formation and the gas-dynamical interaction between their interstellar medium and that of the primary galaxy. Torques in the early stages of the merger do not act to concentrate gas toward the center, due to the absence of a stellar bar and the stabilizing effect of turbulence. As a result, ram pressure strips all of the ISM of the satellite. Only the gas-rich satellites (those with $f_{\rm gas}=0.3$) undergo a central burst of star formation during the first orbits, which increases their central stellar density, allowing for their survival. In these models, pairing of the two BHs via dynamical friction occurs down to $\sim 100$ pc, a few Gyr after the disruption of the satellite (see Figure 3). \begin{figure} \begin{center} \includegraphics[width=0.40\textwidth]{FIGURE5.ps} \end{center} \caption[]{$z$-component of the orbital angular momentum of the secondary BH, normalized to its initial value $L_0$, for the counter-rotating case described also in Figure 4, showing the occurrence of the angular momentum flip induced by dynamical friction \cite{dotti09}.} \label{fig:dotti-flip} \end{figure} \section{Black hole binaries in massive nuclear discs} As shown in Section 2, massive circum-nuclear discs form in the aftermath of a major gas-rich merger. It is in these discs that the BHs complete their transition from P $\to$ B, and continue to spiral inward under the action of gas-dynamical torques from B $\to$ H \cite{escala05,dotti06,dotti07}. A still open issue is whether the BHs will reach the domain of gravitational wave inspiral within a Hubble time: for a $10^{6}\,\rm M_{\odot}$ BH binary on a circular orbit, the transition H $\to$ G occurs when the separation is around $10^{-3}$ pc. Can material and gravitational torques be effective in driving the BHs down to this tiny scale?
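To make the question quantitative (a standard order-of-magnitude estimate, not a result of the simulations described below), the coalescence timescale of a circular BH binary of masses $M_1$ and $M_2$ at separation $a$, driven by gravitational wave emission alone, is given by the Peters formula
\begin{equation}
t_{\rm GW} = \frac{5}{256}\,\frac{c^5 a^4}{G^3 M_1 M_2 (M_1+M_2)},
\end{equation}
which for an equal-mass binary with $M_1=M_2=10^6\,\rm M_{\odot}$ at $a=10^{-3}$ pc gives $t_{\rm GW} \sim 3\times 10^{8}$ yr, well below a Hubble time; this is why $\sim 10^{-3}$ pc marks the onset of the gravitational wave domain G.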
Here we describe our attempts to explore the transition from P $\to$ B $\to$ H, for BHs orbiting inside a circum-nuclear disc, using {\it GADGET} as N-Body/SPH code with a force resolution of only $\approx 0.1$ pc, and no splitting during the entire course of the evolution \cite{dotti09}. In our selected model, a BH (called primary) is set at the center of a massive differentially rotating gaseous disc in equilibrium with a stellar bulge (see \cite{dotti06,dotti07} for the details). A second BH (called secondary) of similar or equal mass is delivered at a large distance ($\sim 50$ pc) from the center, along a coplanar eccentric ($e=0.7$) orbit that can either be co- or counter-rotating relative to the background disc. The $10^6$ SPH particles making up the disc evolve as in \cite{mayer07}: accordingly, the gas thermodynamics is described by the index $\gamma=7/5$ that accounts for the presence of net cooling in a star-forming region. Shocks are less important here, as the equilibrium disc is only mildly perturbed by the BHs. Figure 4 shows the BH separation as a function of time for the co-rotating and counter-rotating cases. No stalling is observed in either case, as the relative BH distance decays rapidly (in $\lesssim 20$ Myr) down to the force resolution length-scale. In the final stages, the more rapid orbital decay is due to the torque exerted by the ellipsoidal deformation that forms in the gas when the heads of the density wakes overlap, and whose axis is misaligned relative to the BH binary axis \cite{escala05}. In the counter-rotating case, the angular momentum of the secondary BH (initially negative) grows very efficiently during the first Myr, when the BH is passing through the central, high-density region of the disc. The angular momentum continues to grow monotonically for the next $3-4$ Myr, then becomes positive, i.e. the BH starts to move on a co-rotating orbit with respect to the disc.
This is illustrated in Figure 5. It is dynamical friction that causes this orbital ``angular momentum flip''. In both cases the eccentricity of the orbit decreases to very small values (also in the counter-rotating case, after the orbit becomes co-rotating) due to the different response of the fluid to the gravitational pull of the BH at the different orbital phases. When at pericenter, the BH moves faster than the gas and is decelerated by the density wake excited behind it; when at apocenter, the BH moves more slowly than the gas and the wake leads ahead of it, causing a tangential acceleration. The composite effect is a decrease of $e$. In a suite of runs, the BHs have been modeled as ``sink particles'', i.e. they are allowed to accrete gas particles during their dynamical evolution \cite{dotti09}. We introduced an ``on the fly'' algorithm for accretion and determined the amount of gas that binds to the BHs. It is only when the BH binary circularizes that gas is accreted to such an extent that both BHs are surrounded by their own {\it accretion} disc, and these discs are expected to play a role in guiding the subsequent hardening phase down to the gravitational wave domain. \section{Open issues} Numerical simulations, carried out with unprecedented accuracy, have revealed that the transit of dual BHs from P $\to$ B $\to$ H, and finally from H $\to$ G, is a sensitive function of the merger type and of the amount of cold gas present in the interacting galaxies. While the transition from P (pairing) $\to$ B (binary formation) appears to be likely in gas-rich major mergers as well as in gas-rich minor mergers at high redshift, hardening down to the gravitational wave domain remains uncertain on scales below $\sim 0.1$ pc and not fully explored. It has been suggested that a circum-binary viscous disc inevitably forms around the BH binary on sub-parsec scales, which absorbs the angular momentum of the binary \cite{cuadra08}.
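In such a picture the hardening rate is ultimately limited by the rate at which the disc transports angular momentum outward. For a generic $\alpha$-disc (an illustrative scaling, with no attempt to model the specific discs of \cite{cuadra08}), the relevant timescale is the viscous time
\begin{equation}
t_{\nu} \sim \frac{r^2}{\nu}, \qquad \nu = \alpha c_{s} H,
\end{equation}
where $c_{s}$ is the sound speed and $H$ the disc scale-height at radius $r$, so that the binary separation is expected to shrink roughly on the viscous time evaluated at the inner edge of the circum-binary disc, at least as long as the local disc mass is comparable to the binary mass.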
This circum-binary disc would represent the last cold environment for BH hardening from H $\to$ G. Braking of the rapid BH motion requires energy loss and angular momentum transport through a mechanism that is reminiscent of planet migration in proto-stellar discs \cite{cuadra08,gould00}: while tidal torques from the BH binary carry away orbital angular momentum, viscous torques inside the disc sustain the radial motion of the gas toward the BHs, maintaining the binary in near contact with the disc. Equilibrium between these two torques would cause the slow drift of the BHs toward smaller and smaller separations, until gravitational waves guide the final inspiral. No calculation has yet reproduced the formation of a circum-binary disc from the earlier phase in a self-consistent manner, nor is it clear how fast the inspiral will be, and how large the growth of the eccentricity \cite{armitage02}. The BH binary likely enters phase G with a residual eccentricity still imprinted in the gravitational wave signal, despite the circularizing action of the gravitational wave back-reaction. {\it LISA} is expected to be launched by 2020. By that time, our theoretical understanding of binary hardening in a gas-rich environment will hopefully improve thanks to the progress expected in numerical simulations, and in our ability to model fragmentation, star formation and feedback inside galactic nuclei. \medskip We thank all collaborators who made this research possible: F. Governato, S. Kazantzidis, F. Haardt, P. Madau, B. Moore, L. Paredi, J. Wadsley, M. Ruszkowski, J. Stadel, T. Quinn, and M. Volonteri. \section*{References}
\section{Introduction} One of the recent remarkable discoveries is the connection between quantum $K$-theory and 3d TQFT. For a long time, quantum $K$-theory has been viewed as a variant of quantum cohomology, which comes from a 2d TQFT. The new connection puts quantum $K$-theory on a different path, toward the new territory of 3d physics, whose mathematics is much less understood. 3d physics has its own mirror symmetry phenomenon, which comes in two versions: one for $\mathcal{N}=4$ theories and one for $\mathcal{N}=2$ theories. 3d $\mathcal{N}=4$ theories are much better behaved in physics, but apply to more restrictive targets, such as Nakajima quiver varieties, due to the presence of more supersymmetries. There are many mathematical results on the enumerative-geometric aspect of 3d $\mathcal{N}=4$ mirror symmetry by Okounkov's group \cite{AOelliptic, KZ, RSVZ, RSVZ2, SZ}. Our main interest is in 3d $\mathcal{N}=2$ theories, which apply to a general K\"ahler manifold. Their mirror symmetry was poorly understood even in physics. In addition, for such theories there is a new feature called \emph{level structure}, introduced by Ruan--Zhang \cite{RZ}. Any duality for $\mathcal{N}=2$ theories should incorporate the level structure, which makes it quite difficult. Right now, we are still in an early stage of exploration. There are many conjectural examples of 3d $\mathcal{N}=4$ mirror pairs in physics. However, to the best of our knowledge of the physics literature \cite{DT, AHKT, ARW}, the only major class of 3d $\mathcal{N}=2$ mirror pairs consists of toric varieties, as follows. Consider the following short exact sequence \begin{align} \label{exact-intro} \xymatrix{ 0 \ar[r] & \mathbb{Z}^k \ar[r]^-\iota & \mathbb{Z}^n \ar[r]^-\beta & \mathbb{Z}^{n-k} \ar[r] & 0. } \end{align} {\bf 3d Toric Mirror Conjecture: } \emph{ \begin{itemize} \setlength{\parskip}{1ex} \item {\bf Model A:} Let $\iota=(\iota_{ij})$ for $j=1, \cdots, k$ and $i=1, \cdots, n$.
One side of the mirror symmetry is the toric quotient $[{\bf C}^n/({\bf C^*})^k]$ defined by the charge matrix $\iota$, with the so-called \emph{effective level} (see Definition \ref{effective-level}) $\frac{1}{2} \sum_{j,l=1}^k \iota_{ij} \iota_{il}$. We consider its equivariant theory with quantum parameters $\zeta^{b}$ for $b=1, \cdots, k$ and the equivariant parameters $m_{i}$ for $i=1, \cdots, n$. \item {\bf Model B:} Let $\beta=(\beta_{ji})$ for $j=1, \cdots, n-k$ and $i = 1, \cdots, n$. The other side of the mirror is the toric quotient $[{\bf C}^n/({\bf C^*})^{n-k}]$ defined by the charge matrix $\beta^T$, with the effective level $\frac{1}{2}\sum_{j,l=1}^{n-k} \beta_{ji} \beta_{li}$. We consider its equivariant theory with quantum parameters $\widehat{\zeta}^{p}$ for $p=1, \cdots, n-k$ and equivariant parameters $\widehat{m}_{i}$ for $i=1, \cdots, n$. \end{itemize} Model A is ``equivalent'' to Model B via the mirror map \begin{align} &\zeta^{b}-\frac{1}{2} \sum_{i=1}^{n} \iota_{ib} m_{i}=\sum_{i=1}^{n} \iota_{ib} \widehat{m}_{i} \label{mirror-map-1} \\ &-\sum_{i=1}^{n} \beta_{pi} m_{i}=\widehat{\zeta}^{p}+\frac{1}{2} \sum_{i=1}^{n} \beta_{pi} \widehat{m}_{i} . \label{mirror-map-2} \end{align} Note that the above statement is slightly different from that in \cite{AHKT}.} One can study the above ``equivalence'' in different contexts. The main goal of this article is to prove the conjecture for equivariant quantum $K$-theoretic $I$-functions. To a reader with a background in 2d mirror symmetry, the above conjecture may seem rather strange, since in 2d mirror symmetry, a toric variety is mirrored to a Landau--Ginzburg model, instead of another toric variety. Furthermore, there are many chambers of toric quotients for given toric data, which in general admit different cohomology/$K$-groups. A logical first step, which is the first approach we tried, seems to be an attempt to match different chambers of mirror pairs.
Unfortunately, for the simplest example of projective spaces, this turns out to be false! Our main conceptual breakthrough is the realization that we should consider the $I$-function for the \emph{entire toric stack} (see Definitions \ref{total-stack-I-intro} and \ref{total-stack-I}), rather than any of its particular GIT quotients, in the sense of summing up the contributions from \emph{all} chambers. Of course, a naive sum would result in double counting. At this point, we do not know how to do this in general. For the equivariant theory of a toric variety, the $I$-function (defined via the quasimap graph moduli space) localizes to the fixed point contributions. It is easy to see from examples that a fixed point may appear in multiple chambers. Let $\mathsf{K} := (\mathbb{C}^*)^k$, and $\mathsf{T} := (\mathbb{C}^*)^n$. Let $\mathfrak{X}$ be the quotient stack $ [ \mathbb{C}^n / \mathsf{K} ]$. Choose a character $\theta \in \operatorname{Lie}_{\mathbb{R}}(\mathsf{K}^\vee) $. We obtain a GIT quotient $ X_{\theta} $, and some of the $\mathsf{T}$-fixed points $\mathbf{p} \in \mathfrak{X} $ descend to fixed points $\mathbf{p} \in X_{\theta}$. Combinatorially, the $\mathsf{T}$-fixed points of $\mathfrak{X}$ can be identified with subsets $\mathbf{p} \subset \{1, \cdots, n\}$ of size $k$ such that the $k\times k$ submatrix of $\iota$ corresponding to $\mathbf{p}$ is of full rank. Let $I_{\mathbf{p}, \theta}$ be the fixed point contribution of $\mathbf{p}$ to the $I$-function of $X_{\theta}$. A key technical lemma is \begin{Lemma}[Corollary \ref{independence-I-bp}] $I_{\mathbf{p}, \theta}$ is independent of the chamber, and we denote it by $I({\mathbf{p}})$. \end{Lemma} The above lemma leads to the following definition.
\begin{Definition} \label{total-stack-I-intro} We introduce the $I$-function for the toric stack $\mathfrak{X}$ with the effective level as follows: $$ I^{\operatorname{eff}} (\mathfrak{X}) := \sum_{\mathbf{p} \in \mathfrak{X}^{\mathsf{T}} } I^{\operatorname{eff}}({\mathbf{p}}). $$ Let $\mathbf{p} \in \mathfrak{X} $ be a fixed point; the \emph{modified $I$-function with the effective level} is defined as $$ \widetilde I^{\operatorname{eff}} (\mathbf{p}) := e^{-\sum_{i\not\in \mathbf{p}} \frac{\ln z_i \ln U_i |_\mathbf{p} }{\ln q} } \cdot \prod_{i\not\in \mathbf{p}} \frac{1-U_i^{-1}|_{\mathbf{p}}}{ (qU_i |_\mathbf{p} )_\infty } \cdot I^{\operatorname{eff}} (\mathbf{p}). $$ Similarly, we define $$ \widetilde{I}^{\operatorname{eff}}(\mathfrak{X}) := \sum_{\mathbf{p} \in \mathfrak{X}^{\mathsf{T}}} \widetilde{I}^{\operatorname{eff}}(\mathbf{p}). $$ \end{Definition} The exponential prefactor $e^{-\sum_{i\not\in \mathbf{p}} \frac{\ln z_i \ln U_i |_\mathbf{p} }{\ln q} }$ here is a $q$-analogue of the exponential factor $e^{\sum_i \frac{t_i p_i}{z}}$ appearing in cohomological $I$-functions. The factors $\dfrac{1}{(q U_i |_\mathbf{p} )_\infty}$ are $q$-analogues of gamma functions. The $Q$-coefficients of $\widetilde I^{\operatorname{eff}} (\mathbf{p})$ no longer lie in $K_{\mathsf{T} \times \mathbb{C}_q^*} (\mathbf{p})_\mathrm{loc}$. We consider the dual exact sequence of (\ref{exact-intro}) as the \emph{mirror}: \begin{equation} \label{dual-exact-intro} \xymatrix{ 0 \ar[r] & (\mathbb{Z}^{n-k})^\vee \ar[r]^-{\iota^!} & (\mathbb{Z}^n)^\vee \ar[r]^-{\beta^!} & (\mathbb{Z}^k)^\vee \ar[r] & 0, } \end{equation} where $$ \iota^! := \beta^T, \qquad \beta^! := \iota^T. $$ This is often referred to as the Gale dual. \begin{Definition}[Definition \ref{defn-mirror}] Let $\mathsf{A} := \mathsf{T} / \mathsf{K} \cong (\mathbb{C}^*)^{n-k}$ be the quotient torus. \begin{itemize} \setlength{\parskip}{1ex} \item The \emph{mirror toric stack} to $\mathfrak{X}$ is defined as $$ \mathfrak{X}^!
:= [\mathbb{C}^n / \mathsf{A}^\vee], $$ the toric quotient stack associated with the short exact sequence (\ref{dual-exact-intro}), where the action by $\mathsf{A}^\vee$ is defined by $\iota^!$. \item There is a natural bijection between the fixed points of a mirror pair. Given a fixed point $\mathbf{p}$ of $\mathfrak{X}$, the \emph{mirror fixed point} is defined as the complement $$ \mathbf{p}^! = \{1, \cdots, n \} \backslash \mathbf{p} \quad \in \quad (\mathfrak{X}^!)^\mathsf{T} . $$ \end{itemize} \end{Definition} The main result of this article is a proof of the above mirror conjecture for modified $I$-functions with effective level structure for toric stacks. \begin{Theorem} [Theorem \ref{main-theorem}] Let $(\mathfrak{X}, \mathfrak{X}^!)$ be a mirror pair of toric stacks. Let $q^{z_i \partial_{z_i}}$ denote the $q$-difference operator that shifts $z_i \mapsto q z_i$, and similarly for $q^{a_i \partial_{a_i}}$. \begin{enumerate} [1)] \setlength{\parskip}{1ex} \item The modified $I$-function of $\mathfrak{X}$ with effective level structure satisfies the following two sets of $q$-difference equations with respect to the K\"ahler and equivariant parameters. \begin{itemize} \setlength{\parskip}{1ex} \item Let $\{ e_i \}_{i=1}^n$ be the standard basis of $\mathbb{Z}^n$, and consider any $\sum_{i=1}^n \mu_i e_i \in \ker \beta $ such that $\mu_i = \pm 1 $ or $0$. Denote by $S_\pm$ the subset of indices with $\mu_i= \pm1$. Then $$ \left[ \prod_{i\in S_+} ( z_i^{-1} (1- q^{- z_i \partial_{z_i}} ) ) - \prod_{i\in S_-} (z_i^{-1} (1- q^{- z_i \partial_{z_i}} ) ) \right] \widetilde I^{\operatorname{eff}} (\mathfrak{X}) = 0. $$ \item Let $\{ e^!_i \}_{i=1}^n$ be the standard basis of $\mathbb{Z}^n$ in the dual exact sequence, and consider any $\sum_{i=1}^n \mu^!_i e^!_i \in \ker \iota^T $ such that $\mu^!_i = \pm 1 $ or $0$. Denote by $R_\pm$ the subset of indices with $\mu^!_i= \pm 1$.
Then $$ \left[ \prod_{i\in R_+} (a_i^{-1} (1- q^{ a_i \partial_{a_i}} ) ) - \prod_{i\in R_-} ( a_i^{-1} (1- q^{ a_i \partial_{a_i}} ) ) \right] \left( e^{\sum_{i=1}^n \frac{\ln z_i \ln a_i}{\ln q} } \cdot \widetilde I^{\operatorname{eff}} (\mathfrak{X}) \right) = 0 . $$ \end{itemize} Moreover, the solution to the above difference equations is unique with a certain prescribed asymptotic initial condition (see Lemma \ref{asymptotic}). \item Under the mirror map $$ \tau (z_i^!) = a_i, \qquad \tau (a_i^!) = z_i, \qquad \tau (q) = q^{-1}, $$ the two sets of $q$-difference equations (\ref{eqn-for-kalher}) and (\ref{eqn-for-equiv}), for the modified $I$-functions of the mirror pair $(\mathfrak{X}, \mathfrak{X}^!)$ with effective level structure, coincide with each other. Therefore, combining this with the uniqueness result, we have $$ \widetilde I^{\operatorname{eff}}(\mathfrak{X}) = e^{\sum_{i=1}^n \frac{\ln z_i \ln a_i}{\ln q} } \cdot \tau ( \widetilde I^{\operatorname{eff}} (\mathfrak{X}^!) ) . $$ \end{enumerate} \end{Theorem} \begin{Remark} Our mirror map $\tau$ appears to differ from the mirror map (\ref{mirror-map-1}) and (\ref{mirror-map-2}) in the physics literature. In Section \ref{parameters}, we introduce two versions of the K\"ahler parameters, the \emph{effective} and the \emph{redundant}. Our mirror map $\tau$ is written in terms of the redundant parameters. To see how they are related, first rewrite (\ref{mirror-map-1}) and (\ref{mirror-map-2}) as \begin{align} \label{rewrite-mirror-map} \zeta^{b} =\sum_{i=1}^{n} \iota_{ib} \left( \widehat{m}_{i} +\frac{1}{2} m_{i} \right), \qquad - \sum_{i=1}^{n} \beta_{pi} \left( \frac{1}{2}\widehat{m}_{i}+ m_{i} \right) =\widehat{\zeta}^{p} . \end{align} $\zeta^b$ can then be interpreted as our effective K\"ahler parameter $Q_b$ (see Section \ref{parameters}).
From Remark \ref{effective-Kahler} we know that the relation between the effective and redundant K\"ahler parameters is as follows: \begin{align*} \ln Q_b = \sum_{i=1}^n \iota_{ib} \ln z_i, \qquad \ln Q^!_p = \sum_{i=1}^n \beta_{pi} \ln z^!_i . \end{align*} Applying the mirror map (\ref{mirror-map}), we obtain \begin{align*} \tau(\ln Q_b) = \sum_{i=1}^n \iota_{ib} \ln a^!_i, \quad \tau(\ln Q^!_p) = \sum_{i=1}^n \beta_{pi} \ln a_i , \end{align*} which is the same as (\ref{rewrite-mirror-map}) if we let \begin{align*} \widehat{m}_{i} +\frac{1}{2} m_{i}= \ln a^!_i, \quad \frac{1}{2}\widehat{m}_{i}+ m_{i} = \ln a_i^{-1} . \end{align*} \end{Remark} The paper is organized as follows. In Section \ref{sec-2}, we briefly outline the duality of combinatorial structures for mirror pairs. This section is an extension of previous results of the last author \cite{SZ}. The main technical results are in Section \ref{sec-3}, where we prove the key technical lemma and compute the modified $I$-function with effective level structure. The proof of the main theorem is in Section \ref{sec-4}. To give the reader a sense of the problem, we also show a hands-on approach to the case of projective space, using the $q$-binomial formula. The general case follows from the duality of $q$-difference equations and the analysis of their solution spaces. \subsection{Acknowledgement} The bulk of this work was done during our stay at the Institute for Advanced Study in Mathematics at Zhejiang University. We express our special thanks to the institute for the wonderful environment and support. The second author would like to thank Prof. Bohan Fang, Prof. Huijun Fan and Peking University for their helpful support during the visit. Thanks are also due to Prof. Shuai Guo and Ming Zhang for helpful discussions.
\vspace{1cm} \section{Toric stacks and GIT quotients from fixed points} \label{sec-2} Toric varieties \cite{Ful} have been studied for decades and provide important examples in algebraic geometry that can be explicitly described in terms of combinatorial data. To define a toric variety, one starts with a fan $\Sigma$ in a lattice $N \cong \mathbb{Z}^r$, whose rays are denoted by $\rho_1, \cdots, \rho_n$. The toric variety is then constructed as a quotient $(\mathbb{C}^n - Z ) / \mathsf{K}$, where $Z$ is the irrelevant locus determined by the fan, and $\mathsf{K} \cong (\mathbb{C}^*)^k$ (we denote $k: = n-r$) is a torus acting on $\mathbb{C}^n$, whose action is determined by the relations among the rays $\rho_i$. An alternative way to construct toric varieties is to consider them as (real) symplectic reductions, or equivalently, GIT quotients. By choosing an appropriate stability condition $\theta$, the irrelevant subvariety $Z$ turns out to be the unstable locus of the action, and the quotient described above is the GIT quotient $\mathbb{C}^n /\!/_\theta \mathsf{K}$. However, other choices of $\theta$ may also be interesting. The variation of GIT \cite{DH, Tha} implies that when $\theta$ crosses a wall and enters a different chamber, one obtains a different GIT quotient, which might not be the toric variety defined by the original fan $\Sigma$. Even if one happens to obtain the same variety, it is not canonically isomorphic to the original one, but related to it by a birational transformation. Therefore, if we are interested in a global understanding of \emph{all possible} GIT quotients, it is better to study the quotient stack $[\mathbb{C}^n / \mathsf{K}]$ directly. This is the viewpoint we take in the remainder of this paper. Let $\mathsf{T} := (\mathbb{C}^*)^n$ be the standard $n$-dimensional torus acting on $\mathbb{C}^n$. The action \footnote{which is, by construction, faithful.
} by $\mathsf{K}$ is then characterized by an injective homomorphism $\mathsf{K} \to \mathsf{T}$, or equivalently, an injective homomorphism of free $\mathbb{Z}$-modules $\iota: \mathbb{Z}^k \to \mathbb{Z}^n$. The map $\iota$ will be our starting datum. A more convenient and symmetric way is to consider the short exact sequence \begin{equation} \label{knd} \xymatrix{ 0 \ar[r] & \mathbb{Z}^k \ar[r]^-\iota & \mathbb{Z}^n \ar[r]^-\beta & \mathbb{Z}^{r} \ar[r] & 0. } \end{equation} \begin{Definition} A matrix $\iota$ is called \emph{totally unimodular} if the determinants of all its maximal square submatrices are either $\pm 1$ or $0$. \end{Definition} \begin{Remark} Let $\iota$ be the $n\times k$ matrix as above, which is of rank $k$. The definition of total unimodularity is equivalent to the following: there exists $P\in GL(k, \mathbb{Z})$, such that the determinants of all square submatrices (of any size) of $\iota P$ are either $\pm 1$ or $0$. In particular, all entries of $\iota P$ are $\pm 1$ or $0$. \end{Remark} In the rest of this paper, we will always assume that $\iota$ is totally unimodular. It then follows that $\beta$ is also totally unimodular. Consider the quotient stack $$ \mathfrak{X} := [\mathbb{C}^n / \mathsf{K}], $$ where the action of $\mathsf{K}$ on $\mathbb{C}^n$ is defined by $\iota$. The action of $\mathsf{T}$ on $\mathbb{C}^n$ descends to $\mathfrak{X}$, and to any of its GIT quotients $X$. The existence of this torus action enables us to study the geometry from the viewpoint of $\mathsf{T}$-equivariant theory. Moreover, this torus is large enough, in the sense that $X$ is a GKM manifold \cite{GKM}, whose $\mathsf{T}$-equivariant geometry can be completely recovered from its $\mathsf{T}$-fixed points and the 1-dimensional $\mathsf{T}$-orbits connecting them. For the quotient stack $\mathfrak{X}$, its $\mathsf{T}$-fixed points are characterized as follows.
\begin{Lemma} A $\mathsf{T}$-\emph{fixed point} of $\mathfrak{X}$ is given by a subset $\mathbf{p} \subset \{1, \cdots, n\}$ of size $k$, such that the $i$-th rows of the matrix $\iota$ with $i\in \mathbf{p}$ are linearly independent. \end{Lemma} \begin{proof} Geometrically, the locally closed subset $\{ x\in \mathbb{C}^n \mid x_i \neq 0, \ i\in \mathbf{p}; x_i = 0, \ i\not\in \mathbf{p} \} \cong (\mathbb{C}^*)^k$ in $\mathbb{C}^n$ defines a closed substack in $\mathfrak{X}$ which is isomorphic to $[(\mathbb{C}^*)^k / \mathsf{K}] \cong \operatorname{pt}$, and invariant under the $\mathsf{T}$-action, in the sense of \cite{Rom}. \end{proof} By abuse of notation, we will also denote this closed substack by $\mathbf{p}$, and write $\mathbf{p}\in \mathfrak{X}^\mathsf{T}$. \subsection{K\"ahler cone and GIT quotients} Let $\operatorname{Lie}_\mathbb{R} (\mathsf{K}^\vee) \cong \mathbb{R}^k$ be the (real) Lie algebra of the character group of $\mathsf{K}$, which can be identified with the space of stability conditions one can take when performing GIT quotients. Given any $\theta \in \operatorname{Lie}_\mathbb{R} (\mathsf{K}^\vee)$, GIT theory \cite{MFK} defines an open subscheme in $\mathfrak{X}$: $$ X_\theta := \mathbb{C}^n /\!/_\theta \mathsf{K}. $$ By results on the variation of GIT, the space of stability conditions $\operatorname{Lie}_\mathbb{R} (\mathsf{K}^\vee)$ admits a wall-and-chamber structure. In other words, there exist certain walls (i.e. codimension-one subsets) in $\operatorname{Lie}_\mathbb{R} (\mathsf{K}^\vee)$, the connected components of whose complement are called chambers, such that when we vary $\theta$ within a single chamber, the resulting space $X_\theta$ stays the same. In this paper we will only consider the case where $\theta$ is chosen \emph{generically}, i.e. avoiding all the walls. By the total unimodularity of $\iota$, if nonempty, the quotient $X_\theta$ obtained for generic $\theta$ is always a smooth toric variety of dimension $r$.
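Both criteria above are easy to test by brute force. The following Python sketch (the helper names are ours, not from any library) checks total unimodularity in the sense of the Definition above, via maximal square submatrices, and enumerates the $\mathsf{T}$-fixed points as size-$k$ row subsets with nonsingular submatrix; it is run on the charge matrix of Example \ref{Bl(P^2)} below.

```python
from itertools import combinations

def det(m):
    # cofactor expansion along the first row; fine for tiny integer matrices
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def is_totally_unimodular(iota):
    # Definition above: every maximal (k x k) square submatrix of the
    # n x k matrix iota (n >= k) has determinant -1, 0 or 1
    n, k = len(iota), len(iota[0])
    return all(det([iota[i] for i in rows]) in (-1, 0, 1)
               for rows in combinations(range(n), k))

def fixed_points(iota):
    # T-fixed points of [C^n / K]: size-k subsets p of {1, ..., n} whose
    # rows of iota are linearly independent, i.e. the submatrix is nonsingular
    n, k = len(iota), len(iota[0])
    return [p for p in combinations(range(1, n + 1), k)
            if det([iota[i - 1] for i in p]) != 0]

# charge matrix of Example Bl(P^2): rows (1,1), (0,1), (1,0), (0,1)
iota = [[1, 1], [0, 1], [1, 0], [0, 1]]
print(is_totally_unimodular(iota))  # True
print(fixed_points(iota))           # [(1, 2), (1, 3), (1, 4), (2, 3), (3, 4)]
```

The output reproduces the five fixed points listed in Example \ref{Bl(P^2)}; the pair $\{2,4\}$ is excluded because rows $2$ and $4$ of $\iota$ coincide.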
The action of the torus $\mathsf{T}$ descends naturally to the quotient. However, not all fixed points $\mathbf{p}$ of $\mathfrak{X}$ lie in the quotient $X_\theta$. Recall that for generic $\theta$, the GIT quotient can be defined as $X_\theta = \mathbb{C}^{n,s} / \mathsf{K}$, where $\mathbb{C}^{n,s} \subset \mathbb{C}^n$ is the stable locus, determined by the stability condition $\theta$. It might happen that the representatives of $\mathbf{p}$ in $\mathfrak{X}$ fall in the unstable locus, and hence $\mathbf{p}$ is excluded from the GIT quotient. \begin{Definition} \label{Kahler-cone} Let $\mathbf{p}\in \mathfrak{X}$ be a $\mathsf{T}$-fixed point. The \emph{K\"ahler cone} $\mathcal{K}(\mathbf{p})$ associated with $\mathbf{p}$ is defined as $$ \mathcal{K}(\mathbf{p}) := \{ \theta \in \operatorname{Lie}_\mathbb{R} (\mathsf{K}^\vee) \mid \mathbf{p} \in X_\theta \}. $$ We also define the \emph{effective cone} $\operatorname{Eff} (\mathbf{p}) := \mathcal{K}(\mathbf{p})^\vee$ as the dual of $\mathcal{K}(\mathbf{p})$, which lies in $\operatorname{Lie}_\mathbb{R} (\mathsf{K})$. \end{Definition} The following lemma provides a combinatorial description of $\mathcal{K}(\mathbf{p})$. \begin{Lemma} $\mathcal{K}(\mathbf{p})$ is the interior of the cone in $\mathbb{R}^k$ generated by the $i$-th rows of $\iota$ with $i\in \mathbf{p}$. \end{Lemma} \begin{proof} Let $\theta\in \operatorname{Lie}_\mathbb{R} (\mathsf{K}^\vee)$ be a stability condition. To tell whether $\mathbf{p}\in X_\theta$, or equivalently, whether a representative $x\in \mathbb{C}^n$ of $\mathbf{p}$ is stable, we apply the Hilbert--Mumford criterion. It states that $x \in \mathbb{C}^n$ is stable, if and only if for any 1-parameter subgroup $\lambda: \mathbb{C}^* \to \mathsf{K}$, either the limit $\lim_{t\to 0} \lambda (t) \cdot x$ does not exist, or $\langle \lambda, \theta \rangle > 0$.
A general 1-parameter subgroup $\lambda : \mathbb{C}^* \to \mathsf{K}$ is of the form $\lambda (t) = (t^{\lambda_1}, \cdots, t^{\lambda_k} )$, with $\lambda_1, \cdots, \lambda_k \in \mathbb{Z}$. It acts on $x\in \mathbb{C}^n$ as $$ \lambda (t) \cdot x = \left( t^{\sum_{j=1}^k \iota_{1j} \lambda_j } x_1 , \cdots, t^{ \sum_{j=1}^k \iota_{nj} \lambda_j } x_n \right) . $$ A representative of $\mathbf{p}$ can be taken as an $x\in \mathbb{C}^n$, such that $x_i \neq 0$ for $i\in \mathbf{p}$, and $x_i = 0$ for $i\not\in \mathbf{p}$. The limit $\lim_{t\to 0} \lambda(t) \cdot x$ exists if and only if $\sum_{j=1}^k \iota_{ij} \lambda_j \geq 0$, for all $i\in \mathbf{p}$. Stability of $x$ thus amounts to requiring $\langle \lambda, \theta \rangle >0$ for every such nonzero $\lambda$, which holds precisely when $\theta$ lies in the interior of the cone generated by the $i$-th rows of $\iota$ with $i\in \mathbf{p}$. Therefore the lemma holds. \end{proof} Consider all fixed points $\mathbf{p}\in \mathfrak{X}$. The closure of each K\"ahler cone $\mathcal{K}(\mathbf{p})$ is a rational polyhedral strictly convex cone in $\operatorname{Lie}_\mathbb{R} (\mathsf{K}^\vee)$. The codimension-one boundaries of such cones in $\operatorname{Lie}_\mathbb{R} (\mathsf{K}^\vee)$ then form the walls in the variation of GIT\footnote{The fan determined by such walls is the so-called \emph{secondary fan}. }. We have the following direct description of the chambers. \begin{Lemma} Let $X \subset \mathfrak{X}$ be a generic GIT quotient. Then the K\"ahler cone $\mathcal{K} (X)$ of $X$ is a chamber in the wall-and-chamber structure of the variation of GIT. More precisely, we have $$ \mathcal{K}(X) = \bigcap_{\mathbf{p}\in X} \mathcal{K}(\mathbf{p}), \qquad \operatorname{Eff}(X) = \bigcup_{\mathbf{p}\in X} \operatorname{Eff} (\mathbf{p}).
$$ \end{Lemma} \begin{Example} \label{Bl(P^2)} Consider the exact sequence \begin{equation}\xymatrix{ 0 \ar[r] & \mathbb{Z}^2 \ar[r]^-{\iota} & \mathbb{Z}^4 \ar[r]^-\beta & \mathbb{Z}^2 \ar[r] & 0, } \end{equation} where $$ \iota = \begin{pmatrix} 1 & 1 \\ 0 & 1 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \beta = \begin{pmatrix} 1 & 0 & -1 & -1 \\ 0 & 1 & 0 & -1 \end{pmatrix}. $$ Denote $C_1 = \mathbb{R}_+ \cdot (1,0) + \mathbb{R}_+ \cdot (1,1)$ \footnote{We denote $\mathbb{R}_+ := (0, +\infty)$, and $\mathbb{R}_- := (-\infty, 0)$. }, $C_2 = \mathbb{R}_+ \cdot (1,1) + \mathbb{R}_+ \cdot (0,1)$. There are 5 fixed points: \begin{itemize} \setlength{\parskip}{1ex} \item $\{1,2\}$, with K\"ahler cone $C_2$; \item $\{1, 3\}$, with K\"ahler cone $C_1$; \item $\{1, 4\}$, with K\"ahler cone $C_2$; \item $\{2,3\}$, with K\"ahler cone $\mathbb{R}_+^2$, whose closure is $\overline C_1 \cup \overline C_2$; \item $\{3,4\}$, with K\"ahler cone $\mathbb{R}_+^2$. \end{itemize} We can see that $C_1$ and $C_2$ are the two chambers in the variation of GIT. The GIT quotients for them are the following. \begin{itemize} \setlength{\parskip}{1ex} \item Chamber $C_1$, $X = \mathbb{P}^2$, containing fixed points $\{1,3\}$, $\{2,3\}$, $\{3,4\}$. \item Chamber $C_2$, $X = \operatorname{Bl}_{\operatorname{pt}} \mathbb{P}^2$, containing fixed points $\{1,2\}$, $\{1,4\}$, $\{2,3\}$, $\{3,4\}$. \end{itemize} See Figure 1 below. \end{Example} \begin{Example} \label{mirror-of-Bl(P^2)} Let's consider the dual exact sequence of that in Example \ref{Bl(P^2)}, i.e. $$ \xymatrix{ 0 \ar[r] & \mathbb{Z}^2 \ar[r]^-\iota & \mathbb{Z}^4 \ar[r]^-\beta & \mathbb{Z}^2 \ar[r] & 0, } $$ where $$ \iota = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ -1 & 0 \\ -1 & -1 \end{pmatrix}, \qquad \beta = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 \end{pmatrix}.
$$ Denote $C_1 = \mathbb{R}_+^2$, $C_2 = \mathbb{R}_- \times \mathbb{R}_+$, $C_3 = \mathbb{R}_+ \cdot (-1,0) + \mathbb{R}_+ \cdot (-1,-1)$, $C_4 = \mathbb{R}_+ \cdot (-1, -1) + \mathbb{R}_+ \cdot (1,0)$. There are also 5 fixed points: \begin{itemize} \setlength{\parskip}{1ex} \item $\{3,4\}$, with K\"ahler cone $C_3$; \item $\{2,4\}$, with K\"ahler cone $\mathbb{R}_+ \cdot (0,1) + \mathbb{R}_+ \cdot (-1,-1)$, whose closure is $\overline C_2 \cup \overline C_3$; \item $\{2,3\}$, with K\"ahler cone $C_2$; \item $\{1,4\}$, with K\"ahler cone $C_4$; \item $\{1,2\}$, with K\"ahler cone $C_1$. \end{itemize} The cones $C_i$ $(1\leq i\leq 4)$ are the 4 chambers in the variation of GIT. The GIT quotients are the following. \begin{itemize} \setlength{\parskip}{1ex} \item Chamber $C_1$, $X = \mathbb{C}^2$, containing fixed point $\{1,2\}$. \item Chamber $C_2$, $X = \operatorname{Bl}_{\operatorname{pt}} \mathbb{C}^2$, containing fixed points $\{2,3\}$, $\{2,4\}$. \item Chamber $C_3$, $X = \operatorname{Bl}_{\operatorname{pt}} \mathbb{C}^2$, containing fixed points $\{2,4\}$, $\{3,4\}$. \item Chamber $C_4$, $X = \mathbb{C}^2$, containing fixed point $\{1,4\}$. \end{itemize} See Figure 2 below.
\begin{align*} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.9cm,y=1.9cm] \draw [color=cqcqcq, xstep=0.5cm,ystep=0.5cm] (-4.594651950510948,-0.671428433701874) grid (4.875368844973067,2.3217392055655575); \clip(-4.594651950510948,-0.771428433701874) rectangle (4.875368844973067,2.3217392055655575); \fill[line width=1pt,color=yqyqyq,fill=yqyqyq,fill opacity=1] (-3,0) -- (-1,0) -- (-1,2) -- cycle; \fill[line width=1pt,color=cqcqcq,fill=cqcqcq,fill opacity=1] (-3,0) -- (-1,2) -- (-3,2) -- cycle; \fill[line width=1pt,color=aqaqaq,fill=aqaqaq,fill opacity=1] (1,1) -- (1,0) -- (2,1) -- cycle; \fill[line width=1pt,color=wqwqwq,fill=wqwqwq,fill opacity=1] (2,1) -- (3,1) -- (3,2) -- (2,2) -- cycle; \fill[line width=1pt,color=yqyqyq,fill=yqyqyq,fill opacity=1] (2,1) -- (2,2) -- (1,2) -- (1,1) -- cycle; \fill[line width=1pt,color=cqcqcq,fill=cqcqcq,fill opacity=1] (2,1) -- (1,0) -- (3,0) -- (3,1) -- cycle; \draw [->,line width=1.5pt] (-3,0) -- (-1,0); \draw [->,line width=1.5pt] (-3,0) -- (-3,2); \draw [->,line width=1.5pt] (-3,0) -- (-1,2); \draw [->,line width=1.5pt] (2,1) -- (3,1); \draw [->,line width=1.5pt] (2,1) -- (2,2); \draw [->,line width=1.5pt] (2,1) -- (1,0); \draw [->,line width=1.5pt] (2,1) -- (1,1); \draw (-2.407709574462179,-0.16015520639458675) node[anchor=north west] {Figure 1}; \draw (1.6089034260868553,-0.1506596673861966) node[anchor=north west] {Figure 2}; \begin{scriptsize} \draw[color=black] (-1.9709147800762319,-0.122469890772929768) node {$v_{1}$}; \draw[color=black] (-3.11037946108305,1.154976946267447) node {$v_{2}$}; \draw[color=black] (-1.4296690565979935,1.7152137477624652) node {$v_{3}$}; \draw[color=black] (2.871810114202746,0.7840572441164247) node {$v_1$}; \draw[color=black] (2.1691402275818743,1.8620977619806088) node {$v_2$}; \draw[color=black] (1.343028333851931,0.1813873574955546) node {$v_3$}; \draw[color=black] (1.1436220146757379,1.2209412583345685) node {$v_{4}$}; \draw[color=yqyqyq] (-1.6005887587490162,0.8131375419654022) node {$C_{1}$};
\draw[color=cqcqcq] (-2.274772028344717,1.4778252725527117) node {$C_{2}$}; \draw[color=aqaqaq] (1.3905060288938818,0.8131375419654022) node {$C_3$}; \draw[color=wqwqwq] (2.5584573269258706,1.6392494356953442) node {$C_1$}; \draw[color=yqyqyq] (1.5614257310449045,1.6392494356953442) node {$C_2$}; \draw[color=cqcqcq] (2.311573312707727,0.6422178398143797) node {$C_4$}; \end{scriptsize} \end{tikzpicture} \end{align*} \end{Example} \begin{Remark} In Section \ref{mirror}, we will see that Examples \ref{Bl(P^2)} and \ref{mirror-of-Bl(P^2)} are mirror to each other. We can directly see that there is a natural bijection between the fixed points. However, this symmetry only exists if we include all $\mathsf{T}$-fixed points of the stack $\mathfrak{X}$, i.e. of all possible GIT quotients. It might also happen that different GIT quotients share a few common fixed points. \end{Remark} \subsection{Attracting cone} In this subsection, following Maulik--Okounkov \cite{MO}, we would like to associate with each fixed point $\mathbf{p}\in \mathfrak{X}$ a cone in the space of equivariant parameters. Let $\mathbf{p} \in \mathfrak{X}$ be a $\mathsf{T}$-fixed point. First, let's choose $\theta \in \mathcal{K} (\mathbf{p})$, and consider the GIT quotient $X = X_\theta$. By definition of the K\"ahler cone, $\mathbf{p}$ is then a $\mathsf{T}$-fixed point in $X$. The torus $\mathsf{T}$ contains the kernel $\mathsf{K}$, which acts trivially. Hence the actual torus acting on $X$ is the quotient torus $\mathsf{A} = \mathsf{T} / \mathsf{K} \cong (\mathbb{C}^*)^r$. Its Lie algebra $\operatorname{Lie}_\mathbb{R} (\mathsf{A}) \cong \mathbb{R}^r$ can be identified with the space of cocharacters of $\mathsf{A}$. For any cocharacter $\sigma: \mathbb{C}^* \to \mathsf{A}$, the 1-parameter subgroup $\mathbb{C}^*_\sigma$ induced by it acts on $X$ and gives a Bialynicki--Birula stratification. Each stratum is a union of $\mathsf{A}$-orbits.
When $\sigma$ is \emph{generic} (which we will always assume), the fixed loci of $\mathbb{C}^*_\sigma$ will be the same as the fixed loci of $\mathsf{A}$ itself. Therefore, the BB strata are the ``attracting sets'', parameterized by the fixed points $$ \operatorname{Attr}_\mathbf{p}^\theta := \{ q \in X_\theta \mid \lim_{t\to 0} \sigma (t) \cdot q = \mathbf{p} \}. $$ There exists a unique $\mathbf{p}$, such that $\operatorname{Attr}_\mathbf{p}^\theta$ is the largest stratum, i.e. has the same dimension as $X_\theta$. In particular, this $\operatorname{Attr}_\mathbf{p}^\theta$ contains the points $q\in X_\theta$ whose representatives $x$ have all $x_i \neq 0$. In other words, this $\operatorname{Attr}_\mathbf{p}^\theta$ contains the largest open $\mathsf{T}$-orbit. We call such a $\mathbf{p}$ the \emph{minimal} fixed point. \begin{Definition} The \emph{attracting cone} $\mathcal{A} (\mathbf{p})$ is defined as $$ \mathcal{A}(\mathbf{p}) := \{ \sigma \in \operatorname{Lie}_\mathbb{R} (\mathsf{A}) \mid \mathbf{p} \text{ is minimal under } \mathbb{C}^*_\sigma \} . $$ \end{Definition} A priori, the definition depends on the choice of $\theta$. However, we can easily see that it actually does not. A cocharacter $\tilde\sigma \in \operatorname{Lie}_\mathbb{R} (\mathsf{T})$ is called a \emph{lift} of $\sigma$ if it is a preimage of $\sigma$ along the map $\beta: \operatorname{Lie}_\mathbb{R}(\mathsf{T}) \to \operatorname{Lie}_\mathbb{R} (\mathsf{A})$. \begin{Lemma} \label{attracting-cone} \begin{enumerate}[1)] \item There is a unique lift $\tilde\sigma = (\tilde\sigma_1, \cdots, \tilde\sigma_n)$ such that $\tilde\sigma_i = 0$, for all $i \in \mathbf{p}$. \item $\mathbf{p}$ is minimal, if and only if for the lift $\tilde\sigma$ in 1), we have $\tilde\sigma_i >0$, for all $i\not\in \mathbf{p}$. In particular, the minimality of $\mathbf{p}$ does not depend on the choice of the stability condition $\theta$.
\item $\mathcal{A}(\mathbf{p})$ is the interior of the cone in $\operatorname{Lie}_\mathbb{R} (\mathsf{A})$ generated by the $i$-th columns of $\beta$ with $i\not\in \mathbf{p}$. \end{enumerate} \end{Lemma} \begin{proof} For 1), all lifts $\tilde\sigma$ in $\operatorname{Lie}_\mathbb{R} (\mathsf{T})$ form a $k$-dimensional affine linear subspace $\beta^{-1} (\sigma)$. Moreover, since $\sigma$ is generic, $\beta^{-1} (\sigma)$ intersects the coordinate subspace $\{ \tilde\sigma_i = 0, \ i\in \mathbf{p} \}$ transversally. Therefore, they intersect at a unique point. For 2), we know that $\mathbf{p}$ is minimal if and only if for a generic point $q\in X_\theta$, $\lim_{t\to 0} \sigma(t) \cdot q = \mathbf{p}$. Choose the lift $\tilde\sigma$ as in 1), and let $x$ be a representative of a generic point, which implies that $x_i \neq 0$ for all $i$. Then $\tilde\sigma(t) \cdot x$ is a representative of $\sigma(t) \cdot q$, and we see that $$ ( \tilde\sigma(t) \cdot x )_i = \left\{ \begin{aligned} & x_i, \qquad && i\in \mathbf{p} \\ & t^{\tilde\sigma_i} x_i, \qquad && i\not\in \mathbf{p} \end{aligned} \right. $$ The limit as $t\to 0$ is $\mathbf{p}$ if and only if $\tilde\sigma_i >0$ for all $i\not\in \mathbf{p}$. For 3), by 2) we see that the cone $\mathcal{A} (\mathbf{p})$ is exactly the image under $\beta$ of the cone $\{0\} \times \mathbb{R}_+^r$, where the $\{0\}$ is for the indices in $\mathbf{p}$, and $\mathbb{R}_+^r$ is for the indices not in $\mathbf{p}$. The image is just the cone described in 3). \end{proof} \subsection{Equivariant $K$-theory, K\"ahler and equivariant parameters} \label{parameters} The $\mathsf{A}$-equivariant $K$-theory ring of $\mathfrak{X}$ is $$ K_\mathsf{A} (\mathfrak{X}) = K_{\mathsf{A}} ( [\mathbb{C}^n / \mathsf{K} ] ) \cong K_{\mathsf{K} \times \mathsf{A} } (\operatorname{pt}) . $$ However, the product $\mathsf{K} \times \mathsf{A}$ here is not canonical.
There is no natural basis for the equivariant parameters of $\mathsf{A}$; more precisely, for different fixed points $\mathbf{p}$, there are different choices of decompositions $\mathsf{K} \times \mathsf{A}$ and different coordinates on $\mathsf{A}$. A better way is to introduce the \emph{redundant parameters}. Recall the exact sequence $$ \xymatrix{ 1 \ar[r] & \mathsf{K} \ar[r] & \mathsf{T} \ar[r] & \mathsf{A} \ar[r] & 1 , } $$ where we view $$ \mathsf{T} = \operatorname{Spec} K_{\mathsf{T}} (\operatorname{pt}) = \operatorname{Spec} \mathbb{C} [a_1^{\pm 1}, \cdots, a_n^{\pm 1} ], \qquad \mathsf{A} = \operatorname{Spec} K_{\mathsf{A}} (\operatorname{pt}). $$ Here the $a_i$ are the standard coordinates on $\mathsf{T}$, i.e. the functions associated with the standard basis. We call them the \emph{redundant equivariant parameters}. We will then call functions on the quotient torus $\mathsf{A}$ the \emph{effective equivariant parameters}. A similar phenomenon occurs for the K\"ahler parameters, and the dual exact sequence $$ \xymatrix{ 1 \ar[r] & \mathsf{A}^\vee \ar[r] & \mathsf{T}^\vee \ar[r] & \mathsf{K}^\vee \ar[r] & 1. } $$ We write $$ \mathsf{T}^\vee = \operatorname{Spec} K_{\mathsf{T}^\vee} (\operatorname{pt}) = \operatorname{Spec} \mathbb{C} [z_1^{\pm 1}, \cdots, z_n^{\pm 1} ], $$ where the $z_i$'s are the \emph{redundant K\"ahler parameters}. Functions on $\mathsf{K}^\vee$ are called the \emph{effective K\"ahler parameters}. We denote the standard coordinates on $\mathsf{K}^\vee$ by $$ Q_j, \qquad 1\leq j\leq k. $$ The relationship between the $z_i$'s and the $Q_j$'s is $$ Q_j = \prod_{i=1}^n z_i^{\iota_{ij}}, \qquad 1\leq j\leq k. $$ \begin{Remark} The effective parameters are the ones that usually appear in the literature; they record the degrees of curves. The redundant ones, introduced by the third author in a previous work \cite{SZ}, are a set of globally chosen coordinates on the K\"ahler and equivariant tori, which makes the presentation of the $I$-functions much more convenient.
The mirror map also looks more concise in terms of the redundant parameters. \end{Remark} Now we apply the base change $\mathsf{T} \to \mathsf{A}$ to the $\mathsf{A}$-equivariant theory and consider the $\mathsf{T}$-equivariant $K$-theory of $\mathfrak{X}$. The ring admits a global presentation \begin{equation} \label{pres-u} K_{\mathsf{T}} (\mathfrak{X}) = \mathbb{C} [ a_1^{\pm 1} , \cdots, a_n^{\pm 1} , u_1^{\pm 1} , \cdots, u_n^{\pm 1} ] / \langle \prod_{j=1}^n \left( \frac{u_j}{a_j} \right)^{\beta_{ij}} = 1, \ 1 \leq i\leq r \rangle, \end{equation} where the $u_i$'s are the characters associated with the 1-dimensional representations given by the standard basis in $\mathbb{C}^n$. It is easy to see that the monomials $\prod_{j=1}^n a_j^{\beta_{ij}}$ appearing in the relations are functions on the quotient torus $\mathsf{A}$; hence the ring is indeed a base change from $\mathsf{A}$. The picture is the following Cartesian diagram $$ \xymatrix{ \operatorname{Spec} K_{\mathsf{T}} (\mathfrak{X}) \ar[d] \ar[r] & \operatorname{Spec} K_\mathsf{A} (\mathfrak{X}) \ar[d] \\ \mathsf{T} \ar[r] & \mathsf{A}. } $$ The $K$-theory spectrum $\operatorname{Spec} K_\mathsf{T} (\mathfrak{X})$ is a fibration of $k$-tori over $\mathsf{T}$, and the ring $K_\mathsf{T} (\mathfrak{X})$ is infinite-dimensional over $K_\mathsf{T} (\operatorname{pt})$. The same holds for $\mathsf{A}$. An alternative way to look at this is as follows. Let $s_j$ be the $\mathsf{T}$-character associated with the $j$-th standard basis vector in $\mathbb{C}^k$. We have the relation \begin{equation} \label{u-s} u_i = a_i \prod_{j=1}^k s_j^{\iota_{ij}}, \qquad 1\leq i\leq n, \end{equation} and hence we have an alternative presentation of the $K$-theory ring \begin{equation} \label{pres-s} K_{\mathsf{T}} (\mathfrak{X}) = \mathbb{C} [ a_1^{\pm 1} , \cdots, a_n^{\pm 1} , s_1^{\pm 1} , \cdots, s_k^{\pm 1} ]. \end{equation} We see that the $s_j$'s form a set of global coordinates on the above fibration.
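The compatibility between the relation (\ref{u-s}) and the relations in the presentation (\ref{pres-u}) amounts to the vanishing $\beta \circ \iota = 0$ at the level of exponent vectors. The following Python snippet is an illustrative sanity check of this bookkeeping; the matrices used here are a toy choice (the charge matrices of $\mathbb{P}^2$), not data fixed elsewhere in the text.

```python
# Toy charge matrices (illustrative: the P^2 sequence 0 -> Z -> Z^3 -> Z^2 -> 0).
iota = [[1], [1], [1]]            # n x k matrix defining the K-action
beta = [[1, 0, -1], [0, 1, -1]]   # r x n matrix cutting out the quotient torus A

n, k, r = len(iota), len(iota[0]), len(beta)

# Exactness in the middle: beta composed with iota vanishes.
beta_iota = [[sum(beta[i][l] * iota[l][j] for l in range(n)) for j in range(k)]
             for i in range(r)]

# Represent a monomial in (a_1..a_n, s_1..s_k) by its integer exponent vector,
# and encode the relation (u-s): u_i = a_i * prod_j s_j^{iota_ij}.
def u_exponent(i):
    exp = [0] * (n + k)
    exp[i] = 1                    # the factor a_i
    for j in range(k):
        exp[n + j] = iota[i][j]   # the factor s_j^{iota_ij}
    return exp

# The relations of (pres-u): prod_j (u_j / a_j)^{beta_ij} = 1 for each row i,
# i.e. the total exponent vector of the left-hand side must vanish.
relations = []
for i in range(r):
    total = [0] * (n + k)
    for j in range(n):
        uj = u_exponent(j)
        uj[j] -= 1                # divide by a_j
        for t in range(n + k):
            total[t] += beta[i][j] * uj[t]
    relations.append(total)
```

Since $\beta \iota = 0$, every relation reduces to the trivial monomial, which is exactly why substituting (\ref{u-s}) into (\ref{pres-u}) yields the free presentation (\ref{pres-s}).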
In the computations of later sections, we will always consider the $a_i$'s and $s_j$'s as independent parameters, and the $u_i$'s as functions of the $a_i$'s and $s_j$'s. For any (generic) GIT quotient $X \subset \mathfrak{X}$, there is a Kirwan surjection \cite{HL, BH} $K_\mathsf{T} (\mathfrak{X}) \twoheadrightarrow K_\mathsf{T} (X)$, under which $u_i$ and $s_i$ map to the corresponding tautological line bundles on $X$, equivariant and nonequivariant respectively. The ring $K_\mathsf{T} (X)$ then admits presentations similar to (\ref{pres-u}) and (\ref{pres-s}), with some extra relations described by the following lemma. \begin{Lemma} Let $\mathbf{p}$ be a fixed point of $\mathfrak{X}$. \begin{enumerate}[1)] \item $\operatorname{Spec} K_\mathsf{T} (\mathbf{p})$ is the section of the fibration $\operatorname{Spec} K_\mathsf{T} (\mathfrak{X}) \to \mathsf{T}$ defined by $u_i = 1$, for all $i\in \mathbf{p}$. \item For any generic GIT quotient $X$, the spectrum of its $K$-theory ring is $$ \operatorname{Spec} K_\mathsf{T} (X) = \bigcup_{\mathbf{p} \in X} \operatorname{Spec} K_\mathsf{T} (\mathbf{p}), $$ where the irreducible components intersect transversally. \end{enumerate} \end{Lemma} We see that the restriction of the line bundle $u_i$ to a fixed point $\mathbf{p}$ is $u_i |_\mathbf{p} = 1$ for $i\in \mathbf{p}$, and $$ \{ u_i |_\mathbf{p} \mid i\not\in \mathbf{p} \} $$ is the unique solution to the system of linear equations \begin{equation} \label{solution} \left\{ \begin{aligned} & \prod_{j=1}^n \left( \frac{u_j}{a_j} \right)^{\beta_{ij}} = 1 , \qquad && 1\leq i\leq r \\ & u_l = 1, \qquad && l\in \mathbf{p} . \end{aligned} \right. \end{equation} In terms of the $s_j$'s, the fixed point $\mathbf{p}$ is given by the equations \begin{equation} \prod_{j=1}^k s_j^{\iota_{lj}} = a_l^{-1}, \qquad l\in \mathbf{p}. \end{equation} In particular, the $u_i |_\mathbf{p}$'s are functions on $\mathsf{A}$, i.e.
expressed only in terms of effective equivariant parameters; the $s_j |_\mathbf{p}$'s, however, involve all the redundant equivariant parameters. \vspace{1cm} \section{$K$-theoretic $I$-function for toric stacks} \label{sec-3} Quantum $K$-theory was introduced by Givental \cite{Giv-WDVV} and Y.-P. Lee \cite{Lee} decades ago. Recently, Givental showed that $q$-hypergeometric solutions represent $K$-theoretic Gromov--Witten invariants in the toric case \cite{Giv-perm-V}. Ruan--Zhang \cite{RZ} introduced level structures, with the serendipitous discovery that some special toric spaces with certain level structures give rise to mock theta functions. Nevertheless, beyond the toric case, much less is known. Let $X := X_\theta$ be a GIT quotient $V/\!/_{\theta} G$, where $V$ is a vector space and $G$ is a connected complex reductive group. The theory of the moduli space of quasimaps to GIT quotients is established in \cite{CKM}, where Ciocan-Fontanine, Kim and Maulik define the cohomological big $I$-function. The first two authors prove the wall-crossing formula, which relates the big $I$-function and Givental's big $J$-function of $X$, in the subsequent paper \cite{CK}. The $K$-theoretic stable quasimap invariants are defined by Tseng--You in \cite{TY}. Let ${\mathcal{Q}}^{\epsilon}_{g,n}(X, d)$ be the moduli stack of $\epsilon$-stable quasimaps \cite{CKM} parametrizing quasimaps $f=(C,p_1, \cdots ,p_n,\mathcal{P},s)$, where $C$ is an $n$-pointed nodal curve of genus $g$, $\mathcal{P}$ is a principal $G$-bundle over $C$, $s$ is a section and $d \in \mathrm{Hom}(\mathrm{Pic}^G(V), \mathbb{Z})$. There are natural maps \begin{align*} \operatorname{ev}_{i}: {\mathcal{Q}}^{\epsilon}_{g,n}(X, d) \rightarrow X, \qquad i=1, \cdots, n, \end{align*} given by evaluation at the $i$-th marked point. There are line bundles \begin{align*} \mathbb{L}_{i} \rightarrow {\mathcal{Q}}^{\epsilon}_{g,n}(X, d) , \qquad i=1, \cdots, n, \end{align*} called universal cotangent line bundles.
The fiber of $\mathbb{L}_i$ over a point $(C,p_1, \cdots ,p_n,\mathcal{P},s)$ is the cotangent line to $C$ at the point $p_i$. The permutation-equivariant $K$-theoretic quasimap invariants with level structures \cite{RZ} are holomorphic Euler characteristics over ${\mathcal{Q}}^{\epsilon}_{g,n}(X,d)$ of the sheaves \begin{align} \left\langle \mathbf{t}(\mathbb{L}_1), \cdots, \mathbf{t}(\mathbb{L}_n) \right\rangle_{g, n, d}^{R,l,S_n,\epsilon} :=\pi_* \Big( \mathcal{O}_{g, n, d}^\mathrm{vir} \otimes \bigotimes_{i = 1}^{n} \mathbf{t}( \mathbb{L}_{i} ) \otimes \mathcal{D}^{R,l} \Big), \label{quasimap-invariants} \end{align} where $\mathcal{O}^\mathrm{vir}_{g,n,d}$ is the virtual structure sheaf \cite{Lee}, and $\mathbf{t}(q)$ is a Laurent polynomial in $q$ defined as follows: \begin{align*} \mathbf{t}(q)=\sum_{m \in \mathbb{Z}} t_{m} q^{m}, \qquad t_{m}=\sum_{\alpha} t_{m, \alpha} \phi_{\alpha}. \end{align*} Moreover, $\pi_*$ is the $K$-theoretic pushforward along the projection \begin{align*} \pi : \left[ {\mathcal{Q}}^{\epsilon}_{g,n}(X, d )/S_n \right] \rightarrow \operatorname{pt}, \end{align*} $\{\phi_\alpha \}$ is a basis of $K^0(X_\theta)\otimes \mathbb{Q}$, and the $t_{m,\alpha}$ are formal variables. The last term in (\ref{quasimap-invariants}) is the level $l$ determinant line bundle over $\mathcal{Q}^{\epsilon}_{g,n}(X_\theta, d)$, defined as \begin{align*} \mathcal{D}^{R,l} :=\left( \mathrm{det}R^{\bullet}\pi_*(\mathcal{P} \times _G R ) \right)^{-l} , \end{align*} where the bundle $\mathcal{P} \times _G R $ is the pullback of the vector bundle $[V \times R / G] \rightarrow [ V/G ] $ along the evaluation map to the quotient stack $[V/G]$. Similarly, we can define the moduli space of graph-space quasimaps $ \mathcal { QG }^{\epsilon} _ { 0 , n } ( X , d ) $, which parametrizes quasimaps with \emph{parametrized} domain component $\mathbb{P}^1$.
As a result, there is a natural $\mathbb{C}^*$-action, coming from the $\mathbb{C}^*$-action that scales the parametrized domain component. Denote by $F_{0, d}$ the special fixed locus in $ \mathcal { QG }^{\epsilon} _ { 0 , n } ( X , d )^{\mathbb{C}^*}$, i.e., the open substack consisting of quasimaps $f$ such that $\infty \in \mathbb{P}^1$ is not a base point, and denote by $q$ the $\mathbb{C} ^*$-character of the cotangent bundle at $0 :=[1,0] $ of $\mathbb{P}^1$ (we sometimes write $\mathbb{C}_q^*$ for the same $\mathbb{C}^*$ to emphasize the character). For details, see \cite{CKM}. \begin{Definition} {\cite{RZ}} The permutation-equivariant $K$-theoretic $\mathcal{J}^{R,l,\epsilon}$-function of $V /\!/ {G}$ with level $l$ is defined as \begin{align*} \mathcal { J } _ { S _ { \infty } } ^ { R , l , \epsilon } ( \mathbf { t } ( q ) , Q ) &:= \sum_{k \geq 0, \, d \in {\operatorname { Eff } ( V , G , \theta )} } Q^d (\operatorname{ev}_{\bullet})_{*} \left[ \operatorname{Res}_{F_{0, d}}( \mathcal{QG} _ { 0 , n } ^ { \epsilon } ( V /\!/ G , d )_{0})^{\mathrm{vir}} \otimes \mathcal { D } ^ { R , l } \otimes _ { i = 1 } ^ { n } \mathbf { t } ( \mathbb{L} _ { i } ) \right]^{S_n} \\ &:= 1 + \frac { \mathbf { t } ( q ) } { 1 - q } +\sum _ { a } \sum _ { d \neq 0 } Q ^d \chi \left( F_ { 0 , d } , \, \mathcal { O } _ { F_ { 0 , d} } ^ { \mathrm { vir } } \otimes \operatorname{ev}_{\bullet} ^ { * } ( \phi _ { a } ) \otimes \left( \frac { \operatorname { t r } _ { \mathbb { C } ^ { * } } \mathcal { D } ^ { R , l } } { \lambda_{-1}^{\mathbb{C}^*} N _ { F_ { 0 , d } } ^ { \vee } } \right) \right) \phi ^ { a } \\ & + \sum_a \sum _ { n \geq 1 \, \text{or} \, d(L_\theta) \geq 1 / \epsilon \atop ( n , d ) \neq ( 1,0 ) } Q ^d \left\langle \frac { \phi _ { a } } { ( 1 - q ) ( 1 - q \mathbb{L}_{n+1} ) } , \mathbf { t } ( \mathbb{L}_1 ) , \cdots , \mathbf { t } ( \mathbb{L}_n ) \right\rangle _ { 0 , n + 1 , d } ^ { R , l , \epsilon , S _ { n } } \phi ^ { a } , \end{align*} where
$\operatorname{ev}_\bullet$ is the evaluation map at the point $\infty\in \mathbb{P}^1$, $\{ \phi_\alpha \}$ is a basis of $K^0(X)$ and $\{ \phi^\alpha \}$ is the dual basis with respect to the twisted pairing $( \ \ , \ \ )^{R,l}$, i.e., \begin{align*} (u,v)^{R,l}:=\chi\left( X,u \otimes v \otimes \mathrm{det}^{-l}(V^{ss} \times_{G} R ) \right) . \end{align*} \end{Definition} \begin{Definition}{\cite{RZ}} When $\epsilon$ is small enough (denoted by $\epsilon=0^{+}$), we call $\mathcal{J}^{R,l,0^{+}}(0)$ the small $I$-function of level $l$, i.e., \begin{align*} {I}^{R,l}(q;Q):= \mathcal { J } _ { S _ { \infty } } ^ { R , l , 0^{+} } ( 0 , Q ) = 1 + \sum _ { d \geq 0 } Q^d (\operatorname{ev}_{\bullet})_{*} \left( \mathcal { O } _ { F_ { 0 , d } } ^ { \mathrm { vir } } \otimes \left( \frac { \operatorname { t r } _ { \mathbb { C } ^ { * } } \mathcal { D } ^ { R , l } } { \lambda_{-1}^{\mathbb{C}^*} N _ { F_ { 0 , d } } ^ { \vee } } \right) \right) \cdot \mathrm{det}^l(V^{ss} \times_{G} R ) . \end{align*} Note that here we take $(g,n) = (0,0)$. \end{Definition} In the rest of this section, we will compute the explicit formula for the contribution of fixed points to the $I$-function. \subsection{Quasimaps to fixed points and their $I$-functions} From now on, let $V = \mathbb{C}^n$, and let $G = \mathsf{K} \cong (\mathbb{C}^*)^k$ act on $V$ by the charge matrix $\iota$. Consider the GIT quotient $X_\theta = \mathbb{C}^n/\!/_\theta \mathsf{K} $ with respect to a character $ \theta \in \operatorname{Lie}_{\mathbb{R}}(\mathsf{K}^\vee) $. There is a natural $\mathsf{T}$-action on $X_\theta$, which induces an action on $\mathcal{Q}_{0,0}(X_\theta,d)$; then \begin{align*} \mathcal{Q}_{0,0}(X_\theta,d)^{\mathsf{T}} = \bigsqcup_{\mathbf{p} \in X_{\theta}^{\mathsf{T}}} \mathcal{Q}_{0,0}(\mathbf{p},d) , \end{align*} where $\mathcal{Q}_{0,0} (\mathbf{p}, d)$ is the moduli space of quasimaps from $\mathbb{P}^1$ to $\mathbf{p}$ of degree $d$.
Recall that $u_i$ and $s_j$, for $1\leq i\leq n$, $1\leq j\leq k$, are the characters associated with the standard bases of $\mathbb{C}^n$ and $\mathbb{C}^k$ respectively. Let $(-)|_{X_\theta}$ be the restriction map $K_\mathsf{T}(\mathfrak{X}) \to K_\mathsf{T} (X_\theta)$ induced by the open inclusion $X_\theta \hookrightarrow \mathfrak{X}$. Denote by $$ U_i := u_i |_{X_\theta}, \qquad L_j := s_j |_{X_\theta} $$ the tautological line bundles on $X_\theta$ associated with the standard characters. By abuse of notation, we often use the same letters $U_i$ and $L_j$ for the line bundles pulled back to $\mathbb{P}^1$, i.e. $f^*u_i$ and $f^* s_j$. An alternative description of a quasimap is the datum consisting of $k$ line bundles $\{ L_j \}_{j=1}^{k}$ (which are identified with the $f^* s_j$'s), and a certain section of the associated vector bundle $\bigoplus_{i=1}^n U_i$ (which is identified with $\bigoplus_{i=1}^n f^* u_i$). \begin{Definition} \label{total-stack-I} The \emph{modified $I$-function} is defined as $$ \widetilde I^{R,l} (q,Q):= e^{-\sum_{i=1}^n\frac{\ln z_i \ln U_i}{\ln q}} \cdot \frac{ {\lambda_{-1} (T^* X_{\theta})} }{(q \cdot T X_\theta )_\infty} \cdot I^{R,l}(q,Q), $$ where $(-)_\infty$ and $\lambda_{-1} (-)$ are the characteristic classes defined as $$ (E)_\infty := \prod_i (\mathcal{L}_i)_\infty, \quad \lambda_{-1} (E)=\prod_{i}(1- \mathcal{L}_i) $$ if a vector bundle $E$ splits into line bundles $E = \bigoplus_i \mathcal{L}_i$. Recall that the $z_i$'s are the redundant K\"ahler parameters. \end{Definition} \begin{Remark} The exponential prefactor $e^{-\sum_{i=1}^n\frac{\ln z_i \ln U_i}{\ln q}}$ is a $q$-analogue of the exponential factor $e^{\sum_i \frac{t_i H_i}{z}}$ appearing in the cohomological $I$-functions. The factor $\dfrac{1}{(q \cdot TX_\theta )_\infty}$ is a $q$-analogue of the Gamma class, and lies in a \emph{completion} of the $K$-group.
\end{Remark} The \emph{degree} of a quasimap is defined as $d = (d_1, \cdots, d_k)\in \mathbb{Z}^k$, where $d_j = \deg L_j $, for $1\leq j\leq k$. One can also describe it by $D = (D_1, \cdots, D_n) \in \mathbb{Z}^n$, where $D_i = \deg U_i$, for $1\leq i\leq n$. The relation between them, by (\ref{u-s}), is $$ D_i = \sum_{j=1}^k \iota_{ij} d_j, \qquad \text{or} \qquad D = \iota d. $$ A point in $\mathbb{P}^1$ is called a base point if it is not mapped to $\mathbf{p}$ under $f$. Let $\mathcal{Q}_{0,0} (\mathbf{p}, d)^\circ$ be the open substack consisting of quasimaps $f$ such that $\infty \in \mathbb{P}^1$ is not a base point. We have the following diagram $$ \xymatrix{ \mathcal{Q}_{0,0} (\mathbf{p}, d)^\circ \ar@{^{(}->}[r] & \mathcal{Q}_{0,0} (\mathbf{p}, d) \ar@{^{(}->}[r] & \mathcal{Q}_{0,0} (X_\theta, d), } $$ where the second inclusion is a closed embedding. There is a perfect obstruction theory on $\mathcal{Q}_{0,0} (X_\theta, d)^\circ $, (whose dual is) given by $$ R\pi_* f^* T X_\theta, $$ where $\pi: \mathbb{P}^1 \times \operatorname{Hom} (\mathbb{P}^1, X_\theta) \to \operatorname{Hom}(\mathbb{P}^1, X_\theta)$ is the universal curve, and $ f: \mathbb{P}^1 \times \operatorname{Hom} (\mathbb{P}^1, X_\theta) \to X_\theta $ is the universal morphism. Let $T_\mathrm{vir} (\mathbf{p})$ be the pull-back of $R\pi_* f^* T X_\theta$ to $\mathcal{Q}_{0,0} (\mathbf{p}, d)^\circ$, which is (the dual of) a perfect obstruction theory on $\mathcal{Q}_{0,0} (\mathbf{p}, d)^\circ$. Let $\mathcal{O}_\mathrm{vir} (\mathbf{p})$ be the virtual structure sheaf \cite{Lee} associated with the obstruction theory $T_\mathrm{vir} (\mathbf{p})$. There is an evaluation map $\operatorname{ev}_\infty: \mathcal{Q}_{0,0} (\mathbf{p}, d)^\circ \to X_\theta$, which is not proper.
From the following commutative diagram $$ \xymatrix{ \mathcal{Q}_{0,0} (\mathbf{p}, d)^\circ \ar[d]^-{\operatorname{ev}_\infty} \ar@{^{(}->}[r] & \mathcal{Q}_{0,0} (X_\theta, d)^\circ \ar[d]^-{\operatorname{ev}_\infty} \\ \mathbf{p} \ar@{^{(}->}[r] & X_\theta , } $$ and the Atiyah--Bott localization formula, we obtain \begin{align*} I^{R,l}(q,Q) = \sum_{d\in \operatorname{Eff} (X_\theta)} \sum_{\mathbf{p} \in X_{\theta}^{\mathsf{T}}} Q^d (\operatorname{ev}_\infty)_* \left( \mathcal{O}_\mathrm{vir} (\mathbf{p}) \otimes \operatorname { t r } _ { \mathbb { C } ^ { * } } \mathcal { D } ^ { R , l } \right)\cdot \frac{ \mathrm{det}^l(\mathbb{C}^{n,s} \times_{\mathsf{K}} R )}{\lambda^{\mathsf{T}}_{-1} (T_\mathbf{p}^* X_{\theta} )} . \end{align*} It is a formal power series lying in $$ K_{\mathsf{T} \times \mathbb{C}^*_q} (\mathbf{p})_\mathrm{loc} \llbracket Q^{\operatorname{Eff}(\mathbf{p})} \rrbracket , $$ where ``loc" means tensoring with the fraction field, the $Q_j$ with $1\leq j\leq k$ are the effective K\"ahler parameters, and $Q^d = \prod_{j=1}^k Q_j^{d_j}$. \begin{Remark} \label{effective-Kahler} By $D = \iota d$, the relation between effective and redundant parameters is $Q_j = \prod_{i=1}^n z_i^{\iota_{ij}}$. It then follows that $Q^d = z^D = \prod_{i=1}^n z_i^{D_i}$. Later we will use the redundant parameters $z_i$ more often. \end{Remark} We claim that the contribution from each fixed point $\mathbf{p}$ to the $I$-function is \begin{align} \label{I-of-fixed-point} I^{R,l}(\mathbf{p},\theta) :=\sum_{d\in \operatorname{Eff} (\mathbf{p})} Q^d (\operatorname{ev}_\infty)_* \left( \mathcal{O}_\mathrm{vir} (\mathbf{p}) \otimes \operatorname { t r } _ { \mathbb { C } ^ { * } } \mathcal { D } ^ { R , l } \right)\cdot \frac{ \mathrm{det}^l(\mathbb{C}^{n,s} \times_{\mathsf{K}} R )}{\lambda^{\mathsf{T}}_{-1}(T_\mathbf{p}^* X_{\theta} )} . \end{align} Note that the summation is over $\operatorname{Eff}(\mathbf{p})$, which is a subcone of $\operatorname{Eff}(X_\theta)$.
This is based on the following observation. \begin{Lemma} \label{key-lemma} Let $f$ be a quasimap from $\mathbb{P}^1$ to $\mathbf{p}$. Then \begin{enumerate}[1)] \item $D_i \geq 0$, for all $i\in \mathbf{p}$; \item the vector $d = (d_1, \cdots, d_k)$ lies in the effective cone $\operatorname{Eff}(\mathbf{p})$. \end{enumerate} \end{Lemma} \begin{proof} Since $f$ generically maps into $\mathbf{p}$, we know that for $i\in \mathbf{p}$, the section of the line bundle $U_i$ defined by $f$ is generically nonzero. Therefore $D_i \geq 0$ for all $i\in \mathbf{p}$. In other words, $\sum_{j=1}^k \iota_{ij} d_j \geq 0$ for all $i\in \mathbf{p}$, which means that $d$ lies in the dual cone of $\mathcal{K} (\mathbf{p})$, i.e. in $\operatorname{Eff}(\mathbf{p})$. \end{proof} \begin{Corollary} \label{independence-I-bp} The contribution $I^{R, l} (\mathbf{p}, \theta)$ of the fixed point $\mathbf{p}$ to the $I$-function is independent of the choice of the stability condition $\theta$. \end{Corollary} \begin{proof} Every step in the localization computation can be performed on the quotient stack $\mathfrak{X}$, instead of the GIT quotient $X_\theta$. More precisely, one has the embeddings $$ \xymatrix{ \mathcal{Q}_{0,0} (\mathbf{p}, d)^\circ \ar@{^{(}->}[r] & \mathcal{Q}_{0,0} (\mathbf{p}, d) \ar@{^{(}->}[r] & \mathcal{Q}_{0,0} (X_\theta, d) \ar@{^{(}->}[r] & \operatorname{Hom} (\mathbb{P}^1, \mathfrak{X}), } $$ where $\operatorname{Hom} (\mathbb{P}^1, \mathfrak{X})$ is the Artin stack parametrizing representable morphisms from $\mathbb{P}^1$ to $\mathfrak{X}$. The obstruction theory on $\mathcal{Q}_{0,0} (X_\theta, d)$ is the restriction of an obstruction theory $$ T_\mathrm{vir} := R\pi_* f^* T \mathfrak{X}, $$ where $T \mathfrak{X}$ is the tangent complex of the stack $\mathfrak{X}$, and $\pi$, $f$ are as above. One can then define the obstruction theory $T_\mathrm{vir} (\mathbf{p})$ alternatively, as the restriction of $T_\mathrm{vir}$ to $\mathcal{Q}_{0,0} (\mathbf{p}, d)^\circ$.
Replacing $T_\mathbf{p}^* X_\theta$ by the same space $T_\mathbf{p}^* \mathfrak{X}$, we see that every factor in the formula (\ref{I-of-fixed-point}) can be defined directly from the embedding $\mathbf{p} \hookrightarrow \mathfrak{X}$, without a choice of $X_\theta$. Recalling that the effective cone $\operatorname{Eff}(\mathbf{p})$ also does not depend on $\theta$, the corollary follows. \end{proof} \begin{Definition} According to Corollary \ref{independence-I-bp}, we denote from now on $$ I^{R, l} (\mathbf{p}) := I^{R, l} (\mathbf{p}, \theta), $$ for any $\theta$ such that $\mathbf{p} \in X_\theta$. The $I$-function with level structure for the toric stack is then defined as $$ I^{R,l} (\mathfrak{X}) := \sum_{\mathbf{p} \in \mathfrak{X}^{\mathsf{T}} } I^{R,l}({\mathbf{p}}) \quad \in \quad \bigoplus_{\mathbf{p}\in \mathfrak{X}^\mathsf{T}} K_{\mathsf{T} \times \mathbb{C}^*_q} (\mathbf{p})_\mathrm{loc} \llbracket Q^{\operatorname{Eff}(\mathbf{p})} \rrbracket . $$ Similarly, the modified $I$-function with level structure is defined as $$ \widetilde{I}^{R,l} (\mathfrak{X}) := \sum_{\mathbf{p} \in \mathfrak{X}^{\mathsf{T}} } \widetilde{I}^{R,l}({\mathbf{p}}). $$ \end{Definition} \subsection{Explicit formula for $I$-functions} Let $\mathbf{p}\in X_\theta \subset \mathfrak{X}$ be a $\mathsf{T}$-fixed point. Recall that the restriction of the line bundle $U_i$ to $\mathbf{p}$ is the character $u_i |_\mathbf{p}$, given as in (\ref{solution}). The virtual tangent bundle, restricted to a fixed quasimap $f$, is \begin{eqnarray*} T^\mathrm{vir} |_f &=& H^\bullet (\mathbb{P}^1, U_1 |_\mathbf{p} \oplus \cdots \oplus U_n |_\mathbf{p} ) - H^\bullet (\mathbb{P}^1, \mathcal{O}^{\oplus k} ) \\ &=& \sum_{i=1}^n H^\bullet (\mathbb{P}^1, U_i |_\mathbf{p} \otimes \mathcal{O} (D_i) ) - k \\ &=& \sum_{i\in \mathbf{p}} ( H^\bullet (\mathbb{P}^1, \mathcal{O} (D_i) ) -1) + \sum_{i\not\in \mathbf{p}} H^\bullet (\mathbb{P}^1, U_i |_\mathbf{p} \otimes \mathcal{O} (D_i) ) .
\end{eqnarray*} Recall that we have \begin{eqnarray*} H^\bullet (\mathbb{P}^1, \mathcal{O} (d)) &=& \left\{ \begin{aligned} & 1 + q^{-1} + \cdots + q^{-d} , && \qquad d\geq 0 \\ & - q - \cdots - q^{-d-1}, && \qquad d<0. \end{aligned} \right. \\ &=& \sum_{l=0}^\infty q^{-l} - \sum_{l=d+1}^\infty q^{-l} . \end{eqnarray*} Since \footnote{We write $(x)_\infty := (x; q)_\infty$. When we need symbols such as $(x; q^{-1})_\infty$ or $(x; q^2)_\infty$, we do not omit the $q^{-1}$ or $q^2$. } $$ \lambda_{-1} [U \cdot (H^\bullet (\mathbb{P}^1, \mathcal{O}(d)) -1) ]^\vee = \frac{\prod_{l=-\infty}^{d} (1-U^{-1} q^l)}{\prod_{l=-\infty}^{0} (1-U^{-1} q^l) } =: (q U^{-1})_d , $$ the $I$-function is \begin{eqnarray*} I (\mathbf{p}) &:=& \sum_{d \in \operatorname{Eff}(\mathbf{p})} \frac{Q^d}{\lambda_{-1} (T^\mathrm{vir} |_f )^\vee } \\ &=& \frac{1}{\prod_{i \notin \mathbf{p}}(1-U^{-1}_i|_{\mathbf{p}})} \cdot \sum_{d\in \operatorname{Eff} (\mathbf{p})} \frac{Q^d}{\prod_{i=1}^n (q U_i^{-1} |_\mathbf{p} )_{D_i} } . \end{eqnarray*} \begin{Remark} \label{sum-over-lattice} The summation over $d\in \operatorname{Eff} (\mathbf{p})$ here can actually be replaced with a summation over the entire lattice $d\in \mathbb{Z}^k$. The reason is that one always has $$ \frac{1}{(qU_i^{-1}|_{\mathbf{p}})_{D_i}}=\frac{1}{(q)_{D_i}} = 0, \qquad \text{if} \ D_i<0 \quad \text{and} \quad i \in \mathbf{p}. $$ \end{Remark} \begin{Remark} The $I$-function is invariant under the $GL(k, \mathbb{Z})$-action on $\mathbb{Z}^k$ or the $GL(d, \mathbb{Z})$-action on $\mathbb{Z}^d$, i.e. the matrices $\iota$ (resp. $\beta$) we start with can be replaced up to a right multiplication by a matrix in $GL(k, \mathbb{Z})$ (resp. left multiplication by a matrix in $GL(d, \mathbb{Z})$). Under such a change of basis, the redundant parameters $z_i$ are unchanged, while the effective parameters $Q_j$ will be changed accordingly.
\end{Remark} \subsection{Effective level and modified $I$-function} In Section \ref{sec-4}, we will use the following special representation in the mirror symmetry construction. Let \begin{align} R = \operatorname{Hom}(\mathbb{C},\mathbb{C}^n) \simeq \mathbb{C}^n, \end{align} and let the $\mathsf{K}=(\mathbb{C}^*)^k$-action be given by the matrix $\iota$ in the exact sequence (\ref{knd}). We choose the level $l = 1$. Then \begin{align*} \operatorname{tr}_{\mathbb{C}^*} \mathcal{D}^{\mathbb{C}^n,1} &= \operatorname{tr}_{\mathbb{C}^*} \left( \operatorname{det}^{-1}R^{\bullet} \left( \oplus_{i=1}^n U_i \otimes \mathcal{O}_{\mathbb{P}^1}(D_i) \right) \right) \\ &= \bigotimes_{i=1}^n \left(U_i^{-D_i+1}q^{D_i(D_i+1)/2} \right) , \end{align*} and \begin{align*} \operatorname{det}\left( \mathbb{C}^{n,s} \times_{\mathsf{K}} R \right) = \bigotimes_{i=1}^{n} U_i . \end{align*} Thus \begin{align} \label{I-function-with-special-level} I^{\mathbb{C}^n,1}(\mathbf{p}) &= \frac{1}{\prod_{i \notin \mathbf{p}}(1-U^{-1}_i|_{\mathbf{p}}) } \cdot \sum_{d\in \operatorname{Eff} (\mathbf{p})} \frac{\prod_{i=1}^{n}\left( U_i^{-1}|_{\mathbf{p}}q^{D_i(D_i+1)/2} \right) Q^d}{ \prod_{i=1}^n (q U_i^{-1} |_\mathbf{p} )_{D_i} } \\ &= \frac{1}{\prod_{i \notin \mathbf{p}}(1-U^{-1}_i|_{\mathbf{p}})} \cdot \sum_{d\in \operatorname{Eff} (\mathbf{p})} \frac{\prod_{i=1}^{n}\left( U_i^{-1}|_{\mathbf{p}}q^{D^2_i/2} \cdot (q^{1/2}z_i)^{D_i} \right) }{\prod_{i=1}^n (q U_i^{-1} |_\mathbf{p} ; q )_{D_i} } . \end{align} Here we use the relation between the effective parameters $Q_j$ and the redundant parameters $z_i$; see Remark \ref{effective-Kahler}.
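Two elementary facts used above can be checked numerically with exact rational arithmetic: the vanishing $1/(q)_{D_i} = 0$ for $D_i < 0$ from Remark \ref{sum-over-lattice}, and the exponent rearrangement $q^{D_i(D_i+1)/2} z_i^{D_i} = q^{D_i^2/2} (q^{1/2} z_i)^{D_i}$ in the last display. The following Python sketch is purely illustrative; the function names and the sample value of $q$ are ours.

```python
from fractions import Fraction as F

def inv_pochhammer(x, q, d):
    """Return 1/(q x)_d in the convention of the text: for d >= 0 this is
    1 / prod_{l=1}^{d} (1 - x q^l); for d < 0 it is prod_{l=d+1}^{0} (1 - x q^l)."""
    if d >= 0:
        prod = F(1)
        for l in range(1, d + 1):
            prod *= (1 - x * q**l)
        return 1 / prod
    prod = F(1)
    for l in range(d + 1, 1):   # l = d+1, ..., 0
        prod *= (1 - x * q**l)
    return prod

q = F(1, 3)

# At a fixed point, U_i|_p = 1 for i in p, so the summand carries 1/(q)_{D_i};
# for D_i < 0 the l = 0 factor (1 - q^0) = 0 kills the whole term.
vanishing = [inv_pochhammer(F(1), q, d) for d in range(-3, 0)]

# Exponent rearrangement used in passing to the second line of the display:
# q^{D(D+1)/2} z^D = q^{D^2/2} (q^{1/2} z)^D, as an identity of q-exponents.
exponents_match = all(F(D * (D + 1), 2) == F(D * D, 2) + F(D, 2)
                      for D in range(-5, 6))
```

This is exactly why the sum over the effective cone can be extended to the full lattice $\mathbb{Z}^k$ without changing the $I$-function.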
\begin{Remark} If we write out the expression \begin{align*} \frac{1}{2}\sum_{i=1}^n D_i^2 &=\frac{1}{2} \sum_{i=1}^n \Big(\sum_{a=1}^{k} \iota_{ia}\cdot d_a \Big)^2 \\ &= \frac{1}{2} \sum_{i=1}^n \sum_{a,b=1}^k \iota_{ia} \iota_{ib} \cdot d_a d_b , \end{align*} the terms $\frac{1}{2} \sum_{a,b=1}^k \iota_{ia}\iota_{ib} $ appear in the physics literature as effective Chern--Simons terms; we therefore define this special level as follows. \end{Remark} \begin{Definition} \label{effective-level} We call the numbers $\frac{1}{2}\sum_{a,b=1}^k \iota_{ia}\iota_{ib}$, $i=1,\cdots,n$, the \emph{effective levels}, and we call (\ref{I-function-with-special-level}) the $I$-function with \emph{effective levels}, denoted by $I^{\operatorname{eff}}(\mathbf{p})$. \end{Definition} From the above computations, the \emph{modified $I$-function with effective levels} $\widetilde{I}^{\operatorname{eff}}(\mathbf{p})$ is $$ \widetilde{I}^{\operatorname{eff}} (\mathbf{p}) := e^{-\sum_{i\not\in \mathbf{p}} \frac{\ln z_i \ln U_i |_\mathbf{p} }{\ln q} } \cdot \prod_{i\not\in \mathbf{p}} \frac{1-U_i^{-1}|_{\mathbf{p}}}{ (U_i |_\mathbf{p} )_\infty } \cdot I^{\operatorname{eff}} (\mathbf{p}). $$ Note that the $Q$-coefficients of $\widetilde I^{\operatorname{eff}} (\mathbf{p})$ no longer lie in $K_{\mathsf{T} \times \mathbb{C}_q^*} (\mathbf{p})_\mathrm{loc}$. \vspace{1cm} \section{3d $\mathcal{N} = 2$ mirror symmetry} \label{mirror} \label{sec-4} Recall that the toric stack $\mathfrak{X}$ is defined according to the following short exact sequence \begin{equation} \label{exact-sequnce-1} \xymatrix{ 0 \ar[r] & \mathbb{Z}^k \ar[r]^-\iota & \mathbb{Z}^n \ar[r]^-\beta & \mathbb{Z}^d \ar[r] & 0. } \end{equation} Inspired by \cite{DT, AHKT}, we consider the dual short exact sequence, i.e., the Gale dual to (\ref{exact-sequnce-1}): \begin{equation} \label{exact-sequnce-2} \xymatrix{ 0 \ar[r] & (\mathbb{Z}^d)^\vee \ar[r]^-{\iota^!} & (\mathbb{Z}^n)^\vee \ar[r]^-{\beta^!} & (\mathbb{Z}^k)^\vee \ar[r] & 0, } \end{equation} where $$ \iota^!
:= \beta^T, \qquad \beta^! := \iota^T. $$ \begin{Definition} \label{defn-mirror} Let $\mathsf{A} := \mathsf{T} / \mathsf{K} \cong (\mathbb{C}^*)^d$ be the quotient torus. \begin{itemize} \setlength{\parskip}{1ex} \item The \emph{mirror toric stack} to $\mathfrak{X}$ is defined as $$ \mathfrak{X}^! := [\mathbb{C}^n / \mathsf{A}^\vee], $$ the toric quotient stack associated with the short exact sequence (\ref{exact-sequnce-2}), where the action by $\mathsf{A}^\vee$ is defined by $\iota^!$. \item There is a natural bijection between fixed points of a mirror pair. Given a fixed point $\mathbf{p}$ of $\mathfrak{X}$, the \emph{mirror fixed point} is defined as the complement $$ \mathbf{p}^! = \{1, \cdots, n \} \backslash \mathbf{p} \quad \in \quad (\mathfrak{X}^!)^\mathsf{T} . $$ \end{itemize} \end{Definition} It is easy to check that the columns of $\beta$, i.e. the rows of $\iota^!$, corresponding to $\mathbf{p}^!$, are linearly independent. \begin{Lemma} The K\"ahler and attracting cones of $\mathbf{p}^!$ are $$ \mathcal{K} (\mathbf{p}^!) = \mathcal{A}(\mathbf{p}) , \qquad \mathcal{A} (\mathbf{p}^!) = \mathcal{K} (\mathbf{p}). $$ \end{Lemma} \begin{proof} This follows from the combinatorial descriptions (Lemma \ref{key-lemma} and Lemma \ref{attracting-cone} 3)) of the K\"ahler and attracting cones of fixed points. \end{proof} \begin{Definition} \label{mirror-map} The \emph{mirror map} is defined as an isomorphism of tori $\tau: \mathsf{T} \times \mathsf{T}^\vee \times \mathbb{C}_q^* \cong \mathsf{T}^\vee \times \mathsf{T} \times \mathbb{C}_q^*$, \begin{equation} \label{tau} \tau (z_i^!) = a_i, \qquad \tau (a_i^!) = z_i, \qquad \tau (q) = q^{-1}. \end{equation} \end{Definition} We would like to apply the mirror map $\tau$ to the modified $I$-functions $\widetilde I(\mathbf{p})$. However, the map $q\mapsto q^{-1}$ only makes sense for rational functions in $q$. 
For functions such as $(u_i^{-1} |_\mathbf{p})_\infty$, which converge for $|q|<1$, the operation $q\mapsto q^{-1}$ results in functions which converge for $|q|>1$. Therefore, it is necessary to understand the meaning of $\widetilde I(\mathbf{p})$ under the mirror map. We now regard the modified $I$-function with effective levels $ \widetilde I^{\operatorname{eff}} (\mathbf{p}) $ also as a formal power series with respect to the equivariant parameters. More precisely, we treat it as an element of the following larger space of functions: \begin{equation} \label{double-series} \widetilde I^{\operatorname{eff}} (\mathbf{p}) \quad \in \quad e^{-\sum_{i\not\in \mathbf{p}} \frac{\ln z_i \ln U_i |_\mathbf{p} }{\ln q} } \cdot \mathbb{C}(q) \llbracket a^{ \mathcal{A}(\mathbf{p})^\vee}, Q^{\operatorname{Eff}(\mathbf{p})} \rrbracket , \end{equation} where $a^{\mathcal{A}(\mathbf{p})^\vee}$ means that the power series consist of monomials of the form $\prod_{i=1}^n a_i^{l_i}$, which are themselves \emph{effective} equivariant parameters, such that the vector $(l_1, \cdots, l_n)$ lies in the image of $\mathcal{A}(\mathbf{p})^\vee$ under the map $\beta^T$. Moreover, according to the combinatorial description of $\mathcal{A}(\mathbf{p})$ in Lemma \ref{attracting-cone}, we see that $$ U_i |_\mathbf{p} \ \in \ \mathbb{C} \llbracket a^{\mathcal{A}(\mathbf{p})^\vee} \rrbracket, \qquad i\not\in \mathbf{p}. $$ So the embedding (\ref{double-series}) simply means to expand functions such as $1 / (U_i |_\mathbf{p})$ and $1 / (q^{-1} U_i |_\mathbf{p}; q^{-1})_{D_i}$, for $i\not\in \mathbf{p}$, as formal power series in the $U_i |_\mathbf{p}$'s, $i\not\in \mathbf{p}$. \begin{Example} \label{example-for-tau} We look at the simplest example to see what the above means. The function $1 / (x)_\infty$ can be expanded as a power series in $x$ by the $q$-binomial formula: $$ \frac{1}{(x)_\infty} = \sum_{d=0}^\infty \frac{x^d}{(q)_d} \quad \in \quad \mathbb{C}(q) \llbracket x \rrbracket.
$$ Assume that the mirror map $\tau$ acts trivially on $x$. Now by our definition, the operation $q\mapsto q^{-1}$ applies to the coefficients. So $$ \tau \Big( \frac{1}{(x)_\infty} \Big) = \sum_{d=0}^\infty \frac{x^d}{(q^{-1}; q^{-1})_d} \quad \in \quad \mathbb{C}(q) \llbracket x \rrbracket . $$ A second application of the $q$-binomial formula implies that $$ \tau \Big( \frac{1}{(x)_\infty} \Big) = (qx)_\infty. $$ \end{Example} Now the mirror map $q\mapsto q^{-1}$ makes sense for formal power series with coefficients rational in $q$, such as in (\ref{double-series}). Our main theorem is the following. \begin{Theorem}\label{main-theorem} Let $(\mathfrak{X}, \mathfrak{X}^!)$ be a mirror pair of toric stacks. Let $q^{z_i \partial_{z_i}}$ denote the $q$-difference operator that shifts $z_i \mapsto q z_i$, and similarly for $q^{a_i \partial_{a_i}}$. \begin{enumerate} [1)] \setlength{\parskip}{1ex} \item The modified $I$-function of $\mathfrak{X}$ with effective level structure satisfies the following two sets of $q$-difference equations, with respect to the K\"ahler and equivariant parameters. \begin{itemize} \setlength{\parskip}{1ex} \item Let $\{ e_i \}_{i=1}^n$ be the standard basis of $\mathbb{Z}^n$, and consider any $\sum_{i=1}^n \mu_i e_i \in \ker \beta $ such that $\mu_i = \pm 1 $ or $0$. Denote by $S_\pm$ the subset of indices with $\mu_i= \pm 1$. Then \begin{equation} \label{eqn-for-kalher} \left[ \prod_{i\in S_+} ( z_i^{-1} (1- q^{- z_i \partial_{z_i}} ) ) - \prod_{i\in S_-} (z_i^{-1} (1- q^{- z_i \partial_{z_i}} ) ) \right] \widetilde I^{\operatorname{eff}} (\mathfrak{X}) = 0. \end{equation} \item Let $\{ e^!_i \}_{i=1}^n$ be the standard basis of $\mathbb{Z}^n$ in the dual exact sequence, and consider any $\sum_{i=1}^n \mu^!_i e^!_i \in \ker \iota^T $ such that $\mu^!_i = \pm 1 $ or $0$. Denote by $R_\pm$ the subset of indices with $\mu^!_i=\pm 1$.
Then \begin{equation} \label{eqn-for-equiv} \left[ \prod_{i\in R_+} (a_i^{-1} (1- q^{ a_i \partial_{a_i}} ) ) - \prod_{i\in R_-} ( a_i^{-1} (1- q^{ a_i \partial_{a_i}} ) ) \right] \left( e^{\sum_{i=1}^n \frac{\ln z_i \ln a_i}{\ln q} } \cdot \widetilde I^{\operatorname{eff}} (\mathfrak{X}) \right) = 0 . \end{equation} \end{itemize} Moreover, the solution to the above difference equations is unique, with a certain prescribed asymptotic initial condition (see Lemma \ref{asymptotic}). \item Under the mirror map $$ \tau (z_i^!) = a_i, \qquad \tau (a_i^!) = z_i, \qquad \tau (q) = q^{-1}, $$ the two sets of $q$-difference equations (\ref{eqn-for-kalher}) and (\ref{eqn-for-equiv}), for the modified $I$-functions of the mirror pair $(\mathfrak{X}, \mathfrak{X}^!)$ with the effective level structure, coincide with each other. Therefore, combining this with the uniqueness result, we have $$ \widetilde I^{\operatorname{eff}}(\mathfrak{X}) = e^{\sum_{i=1}^n \frac{\ln z_i \ln a_i}{\ln q} } \cdot \tau ( \widetilde I^{\operatorname{eff}} (\mathfrak{X}^!) ) . $$ \end{enumerate} \end{Theorem} The rest of this section is devoted to proving the main theorem. Before the proof of the general case, we give a direct computation in the special case of projective spaces. \subsection{Special case: $\mathbb{P}^N$} Let's consider the following exact sequence \begin{equation} \xymatrix{ 1 \ar[r] & \mathbb{C}^* \ar[r]^-\iota & (\mathbb{C}^*)^{N+1} \ar[r]^-\beta & (\mathbb{C}^*)^{N} \ar[r] & 1, } \label{exact-seq-P^N} \end{equation} where $\iota$ and $\beta $ are given as follows \begin{align*} \iota = \left( \begin{array}{ccccc} 1 \\ 1 \\ \vdots \\ 1 \end{array} \right)_{(N+1) \times 1}, \qquad \beta = \left( \begin{array}{ccccc} 1& 0 & \cdots & 0 & -1 \\ 0 & 1 & \cdots & 0&-1 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\0 & 0 & \cdots &1 & -1 \end{array} \right)_{N \times (N+1)} . \end{align*} Then the GIT quotient with respect to $\iota $ is the projective space $\mathbb{P}^{N}$, and $\beta$ gives the mirror.
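As a quick numerical sanity check (purely illustrative, and not part of any argument in this section), the exactness of the sequence (\ref{exact-seq-P^N}) and of its dual can be verified for a small value of $N$:

```python
import numpy as np

N = 4  # any N >= 1 works here

# Charge matrices of the exact sequence for P^N
iota = np.ones((N + 1, 1), dtype=int)
beta = np.hstack([np.eye(N, dtype=int), -np.ones((N, 1), dtype=int)])

# Exactness at the middle term: beta vanishes on the image of iota,
# and the ranks of iota and beta add up to N + 1
assert np.all(beta @ iota == 0)
assert np.linalg.matrix_rank(iota) + np.linalg.matrix_rank(beta) == N + 1

# The dual sequence uses the transposed matrices, and is exact as well
iota_dual, beta_dual = beta.T, iota.T
assert np.all(beta_dual @ iota_dual == 0)
```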
There is only one GIT chamber for $\mathbb{C}^{N+1}/\!/_\theta \mathbb{C}^* $, whose GIT quotient is $\mathbb{P}^N$. The $(\mathbb{C}^*)^{N+1}$--fixed points are given by $\mathbf{p}_j := \{ j\} \subset \{1, \cdots, N+1 \} $. It's well known that its $I$-function is \cite{Giv-perm-II} \begin{align*} I_{\mathbb{P}^N}=\sum_{d \geq 0} \frac{Q^d}{\prod_{k=1}^{d}\prod_{i=1}^{N+1}(1-U_i^{-1}q^k)} . \end{align*} The restriction to one of the $(\mathbb{C}^*)^{N+1}$-fixed points is $$ I_{\mathbb{P}^N}|_{\mathbf{p}_j} =\sum_{d \geq 0} \frac{Q^d}{\prod_{i=1}^{N+1}(qU_i^{-1}|_{\mathbf{p}_j};q)_d } , $$ where \begin{align*} U_i^{-1}|_{\mathbf{p}_j} = \left\{ \begin{aligned} & 1, \qquad && i\in \mathbf{p}_j \\ & a_j/a_i . \qquad && i\not\in \mathbf{p}_j \end{aligned} \right. \end{align*} Then the restriction of the $I$-function with effective level structure to the fixed point $\mathbf{p}_j$ is \begin{align*} I^{\operatorname{eff}}_{\mathbb{P}^N}|_{\mathbf{p}_j}&=\sum_{d \geq 0} \frac{Q^d}{\prod_{i=1}^{N+1}(q^{-1}U_i|_{\mathbf{p}_j};q^{-1})_d } \\ &=\sum_{d \geq 0} \frac{(z_1\cdots z_{N+1})^d}{\prod_{i=1}^{N+1}(q^{-1}a_i/a_j;q^{-1})_d} . \end{align*} Let's consider the mirror of the projective space $\mathbb{P}^N$, i.e. the GIT quotient \begin{align*} \mathbb{C}^{N+1}/\!/_{\theta} (\mathbb{C}^*)^N \end{align*} with charge matrix coming from the dual exact sequence of (\ref{exact-seq-P^N}), i.e. \begin{equation} \xymatrix{ 1 \ar[r] & (\mathbb{C}^*)^N \ar[r]^-{\iota^{!}} & (\mathbb{C}^*)^{N+1} \ar[r]^-{\beta^{!}} & \mathbb{C}^* \ar[r] & 1, } \end{equation} where \begin{align*} \iota^{!} = \beta^T= \left( \begin{array}{ccccc} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ -1 & -1 & \cdots & -1 \end{array} \right)_{(N+1) \times N} . \end{align*} Let $v_j= (0, \cdots, 1,\cdots,0)$, $j=1,\cdots,N$ be the standard basis of $\mathbb{R}^N$, and let $v_{N+1}=(-1,-1,\cdots,-1)$.
Using the Hilbert--Mumford criterion, we find there are $N+1$ generic chambers $$ C_j := \mathbb{R}_{\geq 0} v_1 + \cdots +\mathbb{R}_{\geq 0}\widehat{v_j}+ \cdots + \mathbb{R}_{\geq 0} v_{N+1} . $$ For example, the following picture shows the chamber structure for the mirror of $\mathbb{P}^2$. \begin{align*} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.8cm,y=1.8cm] \clip(-1.419981614718684,-1.419981614718684) rectangle (1.781117965598392,1.781117965598392); \fill[line width=2pt,color=yqyqyq,fill=yqyqyq,fill opacity=1] (0,1) -- (0,0) -- (1,0) -- (1,1) -- cycle; \fill[line width=2pt,color=aqaqaq,fill=aqaqaq,fill opacity=1] (0,1) -- (-1,1) -- (-1,-1) -- (0,0) -- cycle; \fill[line width=2pt,color=cqcqcq,fill=cqcqcq,fill opacity=1] (-1,-1) -- (1,-1) -- (1,0) -- (0,0) -- cycle; \draw [->,line width=1.5pt] (0,0) -- (1,0); \draw [->,line width=1.5pt] (0,0) -- (0,1); \draw [->,line width=1.5pt] (0,0) -- (-1,-1); \begin{scriptsize} \draw [fill=uuuuuu] (0,0) circle (2pt); \draw[color=black] (0.8859117551108426,-0.15764689723853465) node {$v_1$}; \draw[color=black] (-0.18535952561240973,0.9245779349409153) node {$v_2$}; \draw[color=black] (-0.5577311139628201,-0.7371578312203483) node {$v_3$}; \draw[color=black] (0.4810218222390622,0.6293456922219092) node {$C_1$}; \draw[color=black] (-0.5480734238100464,0.3509838633725603) node {$C_2$}; \draw[color=black] (0.21953040725937067,-0.43349038156651337) node {$C_3$}; \end{scriptsize} \end{tikzpicture} \end{align*} The GIT quotient for each $C_j$ is the affine space $\mathbb{C}$, containing exactly one fixed point $$ \mathbf{p}^{!}_j = \{1,\ldots,N+1 \} \backslash \{ j \} .
$$ The restriction of the $I$-function with effective level structure is \begin{align*} I^{\operatorname{eff}}|_{\mathbf{p}_j^!} = \sum_{D \in \operatorname{Eff}(\mathbf{p}_j^!)} \frac{(z^!)^D}{\prod_{i=1}^{N+1}(q^{-1}U_i|_{\mathbf{p}_j^!};q^{-1})_{D_i}} , \end{align*} where \begin{align*} \operatorname{Eff}(\mathbf{p}_j^!) = \{ D_i \geq 0, \ i \neq j, \ D_j = -D_1 - \cdots - \widehat{D_j} - \cdots - D_{N+1} \leq 0 \} . \end{align*} So \begin{equation} \label{I_j} {I}^{\operatorname{eff}}|_{\mathbf{p}_j^!} =\sum_{\operatorname{Eff}(\mathbf{p}_j^!)} \frac{\prod_{i=1,i\neq j}^{N+1}(z^!_i/z^!_{j})^{D_i}}{((a^!_1 \cdots a^!_{N+1})q^{-1};q^{-1})_{D_j} \prod_{i=1,i\neq j}^{N+1}(q^{-1}; q^{-1})_{D_i} } . \end{equation} Recall the $q$-binomial formula: $$ \frac{(ax)_\infty}{(x)_\infty} = \sum_{m=0}^\infty \frac{(a)_m}{(q)_m} x^m, \qquad |q|<1, \ |x|<1. $$ Setting $a = 0$, or setting $a = b/x$ and letting $x \to 0$, we get \begin{align} \frac{1}{(x)_\infty} = \sum_{m=0}^\infty \frac{x^m}{(q)_m} , \qquad (b)_\infty = \sum_{m=0}^\infty \frac{q^{m(m-1)/2} (-b)^m }{(q)_m} = \sum_{m=0}^\infty \frac{(q^{-1} b)^m }{(q^{-1}; q^{-1} )_m} .
\label{q-binomial-formula} \end{align} Then under the mirror map, we have \begin{align} \label{I_j-mirror} &(I^{\operatorname{eff}}_{\mathbb{P}^N}|_{\mathbf{p}_{j}} )(z_i \mapsto a_i^!, a_i \mapsto z_i^!, q \mapsto q^{-1} ) \nonumber \\ &= \sum_{d \geq 0 }\frac{(a_1^!\cdots a_{N+1}^!)^d}{\prod_{i=1}^{N+1}(q z^{!}_{i} / z^{!}_{j};q )_d} \nonumber \\ &= \frac{1}{\prod_{i \neq j }^{N+1}(qz^!_i/z^!_j ;q )_\infty} \sum_{d \geq 0} \frac{(a_1^!\cdots a_{N+1}^!)^d}{(q;q)_d} \prod_{i=1, i \neq j}^{N+1} (q^{d+1}z^!_i/z^!_j;q)_\infty \nonumber \\ &=\frac{1}{\prod_{i \neq j }^{N+1}(qz^!_i/z^!_j ;q )_\infty} \sum_{d \geq 0} \frac{(a_1^!\cdots a_{N+1}^!)^d}{(q;q)_d} \prod_{i=1,i\neq j}^{N+1} \left( \sum_{D_i \geq 0} \frac{(q^{d}z^!_i/z^!_j )^{D_i}}{(q^{-1};q^{-1})_{D_i}} \right) \nonumber \\ &=\frac{1}{\prod_{i \neq j }^{N+1}(qz^!_i/z^!_j ;q )_\infty} \sum_{D \geq 0} \left(\frac{ \prod_{i=1,i\neq j}^{N+1}(z^!_i/z^!_j)^{D_i}}{ \prod_{i=1,i\neq j}^{N+1}(q^{-1};q^{-1})_{D_i}} \sum_{d \geq 0 } \frac{\left( q^{\sum_{i=1,i\neq j}^{N+1}D_i}a^!_1\cdots a^!_{N+1}\right)^d}{(q;q)_d} \right) \nonumber \\ &=\frac{1}{(a_1^!\cdots a_{N+1}^!;q)_\infty\prod_{i \neq j }^{N+1}(qz^!_i/z^!_j ;q )_\infty} \\ \nonumber &\times \sum_{D_1,\cdots \hat{j} \cdots D_{N+1} \geq 0}\frac{ \prod_{i=1,i\neq j}^{N+1}(z^!_i/z^!_j)^{D_i}}{ \prod_{i=1,i\neq j}^{N+1}(q^{-1};q^{-1})_{D_i}(q^{-1}a^!_1 \cdots a^!_{N+1};q^{-1})_{-D_1-\cdots \hat{j} \cdots-D_{N+1}} } . \end{align} Comparing the formula (\ref{I_j-mirror}) with (\ref{I_j}), we see that under the exchange of K\"ahler parameters with equivariant parameters, together with the change $q \mapsto q^{-1} $, the restrictions of the $I$-functions with effective level structure to the corresponding fixed points agree up to multiplication by a prefactor.
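The equality between the first and last lines of (\ref{I_j-mirror}) can also be checked numerically. The sketch below does this for $N=1$, $j=1$, writing $A = a_1^! a_2^!$ and $Z = z_2^!/z_1^!$ and truncating all series at finite order; the helper functions \texttt{qpoch} and \texttt{qpoch\_inf} and the parameter values are our own illustrative choices.

```python
import numpy as np

def qpoch(x, q, n):
    """Finite q-Pochhammer (x; q)_n; negative n via (x; q)_{-n} = 1/prod_{j=1}^{n}(1 - x q^{-j})."""
    if n >= 0:
        return float(np.prod([1 - x * q**j for j in range(n)]))
    return 1.0 / float(np.prod([1 - x * q**(-j) for j in range(1, -n + 1)]))

def qpoch_inf(x, q, terms=300):
    """Truncated infinite q-Pochhammer (x; q)_infinity, for |q| < 1."""
    return float(np.prod([1 - x * q**j for j in range(terms)]))

q, A, Z = 0.5, 0.2, 0.3   # generic values: A = a_1^! a_2^!, Z = z_2^!/z_1^!

# First line of (I_j-mirror): the mirror-mapped I-function for P^1, j = 1
lhs = sum(A**d / (qpoch(q, q, d) * qpoch(q * Z, q, d)) for d in range(40))

# Last line of (I_j-mirror): the prefactor times the sum (I_j)
rhs = sum(Z**D / (qpoch(1/q, 1/q, D) * qpoch(A/q, 1/q, -D)) for D in range(40))
rhs /= qpoch_inf(A, q) * qpoch_inf(q * Z, q)

assert abs(lhs - rhs) < 1e-10
```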
The modified $I$-function with effective level structure for $\mathbb{P}^N$ is as follows, \begin{align*} \widetilde{I}_{\mathbb{P}^N}^{\operatorname{eff}}(\mathbf{p}_j) &= e^{-\sum_{i\not\in {\mathbf{p}_j}} \frac{\ln z_i \ln U_i |_{\mathbf{p}_j} }{\ln q} } \cdot \prod_{i\not\in {\mathbf{p}_j}} \frac{1}{ (U_i |_{\mathbf{p}_j} )_\infty } \sum_{d \geq 0} \frac{(z_1\cdots z_{N+1})^d}{\prod_{i=1}^{N+1}(q^{-1}a_i/a_j;q^{-1})_d} . \end{align*} In the following, let's compute the prefactor under the mirror map. \begin{align*} \tau( \sum_{i=1, i \neq j}^{N+1} \ln z_i \ln a_i/a_j ) &= \sum_{i=1, i \neq j}^{N+1} \ln a^!_i \ln z^!_i/z^!_j \\ &= \sum_{i=1}^{N+1} \ln a_i^! \ln z_i^! - \ln z_j^! \ln \big( \prod_{i=1}^{N+1} a_i^! \big), \end{align*} and from Example \ref{example-for-tau}, we have \begin{align*} \tau \big( \prod_{i=1,i \neq j}^{N+1} \frac{1}{(a_i/a_j)_{\infty}} \big) = \prod_{i=1, i \neq j}^{N+1} (qz^!_i/z^!_j)_\infty . \end{align*} In summary, we obtain \begin{align*} \tau (\widetilde{I}_{\mathbb{P}^N}^{\operatorname{eff}}(\mathbf{p}_j)) = e^{\sum_{i=1}^{N+1}\frac{\ln z_i^! \ln a_i^!}{\ln q}} \widetilde{I}^{\operatorname{eff}}(\mathbf{p}_j^!) . \end{align*} In the following subsection, we consider the general case and compare the modified $I$-functions with effective level structure. \subsection{Proof of the main theorem} In this subsection we prove our main Theorem \ref{main-theorem}. We briefly summarize the structure of the proof. First, we use the explicit formula of the modified $I$-function to find the $q$-difference equations. We then show in Lemma \ref{asymptotic} that the solution to the $q$-difference equations is unique, which uniquely characterizes the modified $I$-functions. Finally, we compare the modified $I$-functions of a mirror pair, under the mirror map, and identify them.
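Before turning to the proof, the key relation (\ref{linear-relation}), which is derived in the course of the argument below, can be illustrated numerically in the simplest case $\mathfrak{X} = \mathbb{P}^1$ at the fixed point $\mathbf{p} = \{1\}$, where $U_2|_\mathbf{p} = a_2/a_1$. The sketch below is ours and purely illustrative: it uses generic positive parameter values, the principal branch of the logarithm, and truncated series.

```python
import numpy as np
from math import log, exp

def qpoch(x, q, n):
    # finite q-Pochhammer (x; q)_n, n >= 0
    return float(np.prod([1 - x * q**j for j in range(n)]))

def qpoch_inf(x, q, terms=200):
    # truncated infinite q-Pochhammer (x; q)_infinity, |q| < 1
    return float(np.prod([1 - x * q**j for j in range(terms)]))

def I_tilde(z1, z2, a1, a2, q, terms=30):
    """Modified I-function of P^1 at the fixed point p = {1}, so U_2|_p = a2/a1."""
    U2 = a2 / a1
    pref = exp(-log(z2) * log(U2) / log(q)) / qpoch_inf(U2, q)
    return pref * sum((z1 * z2)**d / (qpoch(1/q, 1/q, d) * qpoch(U2/q, 1/q, d))
                      for d in range(terms))

q, z1, z2, a1, a2 = 0.5, 0.2, 0.3, 0.7, 0.4

# The combination (q^{-z1 d/dz1} + z1 q^{a1 d/da1} - 1) annihilates I_tilde:
val = (I_tilde(z1/q, z2, a1, a2, q)           # shift z1 -> q^{-1} z1
       + z1 * I_tilde(z1, z2, q*a1, a2, q)    # shift a1 -> q a1
       - I_tilde(z1, z2, a1, a2, q))
assert abs(val) < 1e-9 * abs(I_tilde(z1, z2, a1, a2, q))
```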
\begin{proof}[Proof of Theorem \ref{main-theorem} 1): $q$-difference equations] By definition of the modified $I$-function with effective level structure, it suffices to prove this for the contribution from each fixed point $\widetilde{I}^{\operatorname{eff}}(\mathbf{p})$. Let $\mathbf{p} \in \mathfrak{X}^\mathsf{T}$ be a fixed point. Let's compute the actions of the $q$-difference operators on the $I$-function. Recall that by Remark \ref{sum-over-lattice}, $$ \widetilde I^{\operatorname{eff}} (\mathbf{p}) = e^{-\sum_{i\not\in \mathbf{p}} \frac{\ln z_i \ln U_i |_\mathbf{p} }{\ln q} } \cdot \prod_{i\not\in \mathbf{p}} \frac{1}{ (U_i |_\mathbf{p} )_\infty } \cdot \sum_{d\in \mathbb{Z}^k} \frac{z^D}{\prod_{i=1}^n (q^{-1} U_i |_\mathbf{p}; q^{-1} )_{D_i} } . $$ Therefore for all $1\leq i\leq n$, $$ q^{- z_i \partial_{z_i}} \widetilde I^{\operatorname{eff}} (\mathbf{p}) = e^{-\sum_{i\not\in \mathbf{p}} \frac{\ln z_i \ln U_i |_\mathbf{p} }{\ln q} } \cdot \prod_{i\not\in \mathbf{p}} \frac{1}{ (U_i |_\mathbf{p} )_\infty } \cdot \sum_{d\in \mathbb{Z}^k} \frac{z^D}{\prod_{i=1}^n (q^{-1} U_i |_\mathbf{p}; q^{-1} )_{D_i} } \cdot q^{-D_i} U_i |_\mathbf{p} . $$ To compute the action of the $q^{ a_i \partial_{a_i}}$'s, we need an explicit expression of $U_i |_\mathbf{p}$ in terms of the $a_i$'s. Let $P$ be the $k\times k$ submatrix of $\iota$ with rows in $\mathbf{p}$, which is of full rank, and let $Q$ be the $d\times k$ submatrix of $\iota$ with rows not in $\mathbf{p}$. Let $C:= Q P^{-1}$. The unique solution to the system (\ref{solution}) is then $$ U_i |_\mathbf{p} = \left\{ \begin{aligned} & 1 , && \qquad i\in \mathbf{p} \\ & a_i \prod_{j\in \mathbf{p}} a_j^{-C_{ij}}, && \qquad i\not\in \mathbf{p}. \end{aligned} \right.
$$ We see that for $i\not\in \mathbf{p}$, $$ q^{ a_i \partial_{a_i}} \widetilde I^{\operatorname{eff}} (\mathbf{p}) = e^{-\sum_{i\not\in \mathbf{p}} \frac{\ln z_i \ln U_i |_\mathbf{p} }{\ln q} } \cdot \prod_{i\not\in \mathbf{p}} \frac{1}{ (U_i |_\mathbf{p} )_\infty } \cdot \sum_{d\in \mathbb{Z}^k} \frac{z^D}{\prod_{i=1}^n (q^{-1} U_i |_\mathbf{p}; q^{-1} )_{D_i} } \cdot z_i^{-1} (1 - q^{-D_i} U_i |_\mathbf{p} ), $$ and for $j\in \mathbf{p}$, \begin{eqnarray*} q^{ a_j \partial_{a_j}} \widetilde I^{\operatorname{eff}} (\mathbf{p}) &=& e^{-\sum_{i\not\in \mathbf{p}} \frac{\ln z_i \ln (q^{-C_{ij}} U_i |_\mathbf{p}) }{\ln q} } \cdot \prod_{i\not\in \mathbf{p}} \frac{1}{ (q^{-C_{ij}} U_i |_\mathbf{p} )_\infty } \\ && \cdot \sum_{d\in \mathbb{Z}^k} \frac{z^D}{\prod_{i\in \mathbf{p}} (q^{-1}; q^{-1})_{D_i} \prod_{i\not\in\mathbf{p}} (q^{-1 - C_{ij} } U_i |_\mathbf{p}; q^{-1} )_{D_i} } \\ &=& e^{-\sum_{i\not\in \mathbf{p}} \frac{\ln z_i \ln U_i |_\mathbf{p} }{\ln q} } \cdot \prod_{i\not\in \mathbf{p}} \frac{1}{ (U_i |_\mathbf{p} )_\infty } \cdot \sum_{d\in \mathbb{Z}^k} \frac{z^D}{\prod_{i\in \mathbf{p}} (q^{-1}; q^{-1})_{D_i} \prod_{i\not\in\mathbf{p}} (q^{-1} U_i |_\mathbf{p}; q^{-1} )_{D_i + C_{ij}} } \\ && \cdot \prod_{i\not\in \mathbf{p}} \left( z_i^{C_{ij}} (U_i |_\mathbf{p})_{-C_{ij}} (q^{-1} U_i |_\mathbf{p} ; q^{-1} )_{C_{ij}} \right) \\ &=& e^{-\sum_{i\not\in \mathbf{p}} \frac{\ln z_i \ln U_i |_\mathbf{p} }{\ln q} } \cdot \prod_{i\not\in \mathbf{p}} \frac{1}{ (U_i |_\mathbf{p} )_\infty } \cdot \sum_{d\in \mathbb{Z}^k} \frac{\prod_{i\in \mathbf{p}} z_i^{D_i} \prod_{i\not\in \mathbf{p}} z_i^{D_i + C_{ij} } }{\prod_{i\in \mathbf{p}} (q^{-1}; q^{-1})_{D_i} \prod_{i\not\in\mathbf{p}} (q^{-1} U_i |_\mathbf{p}; q^{-1} )_{D_i + C_{ij}} } \\ &=& e^{-\sum_{i\not\in \mathbf{p}} \frac{\ln z_i \ln U_i |_\mathbf{p} }{\ln q} } \cdot \prod_{i\not\in \mathbf{p}} \frac{1}{ (U_i |_\mathbf{p} )_\infty } \\ && \cdot \sum_{d\in \mathbb{Z}^k} \frac{z_j^{D_j + 1} \prod_{i\in \mathbf{p} 
\backslash\{j\} } z_i^{D_i} \prod_{i\not\in \mathbf{p}} z_i^{D_i + C_{ij} } \cdot z_j^{-1} (1 - q^{-D_j - 1} )}{(q^{-1}; q^{-1} )_{D_j+1} \prod_{i\in \mathbf{p} \backslash \{j\} } (q^{-1}; q^{-1})_{D_i} \prod_{i\not\in\mathbf{p}} (q^{-1} U_i |_\mathbf{p}; q^{-1} )_{D_i + C_{ij}} } \\ &=& e^{-\sum_{i\not\in \mathbf{p}} \frac{\ln z_i \ln U_i |_\mathbf{p} }{\ln q} } \cdot \prod_{i\not\in \mathbf{p}} \frac{1}{ (U_i |_\mathbf{p} )_\infty } \cdot \sum_{d\in \mathbb{Z}^k} \frac{z^D}{\prod_{i=1}^n (q^{-1} U_i |_\mathbf{p}; q^{-1} )_{D_i} } \cdot z_j^{-1} (1 - q^{-D_j} ) , \end{eqnarray*} where we used the identity $(x)_d \cdot (q^{-1} x; q^{-1})_{-d} = 1$. Comparing the results above, we conclude that for any $1\leq i\leq n$, the modified $I$-function $\widetilde I^{\operatorname{eff}} (\mathbf{p})$ satisfies the following $q$-difference equations: \begin{align} \label{linear-relation} \left( q^{- z_i \partial_{z_i}} + z_i q^{ a_i \partial_{a_i}} -1 \right) \widetilde I^{\operatorname{eff}} (\mathbf{p}) = 0. \end{align} By (\ref{linear-relation}), (\ref{eqn-for-kalher}) is equivalent to the identity $$ \prod_{i\in S_+} q^{a_i \partial_{a_i}} \prod_{i\in S_-} q^{-a_i \partial_{a_i}} \widetilde I^{\operatorname{eff}} (\mathbf{p}) = \widetilde I^{\operatorname{eff}}(\mathbf{p}), $$ which follows from the fact that $\widetilde I^{\operatorname{eff}}(\mathbf{p})$ depends on the $a_i$'s only through $U_i |_\mathbf{p}$, and hence only depends on the \emph{effective} equivariant parameters. Indeed, the \emph{effective} equivariant parameters are as follows: \begin{align*} \Lambda_b=\prod_{j=1}^{n} a_j^{\beta_{bj}}, \qquad 1\leq b\leq d. \end{align*} Then \begin{align*} \prod_{i\in S_+} q^{a_i \partial_{a_i}} \prod_{i\in S_-} q^{-a_i \partial_{a_i}} \Lambda_b = q^{\sum_{i \in S_+ }\beta_{bi} - \sum_{i \in S_{-} }\beta_{bi}} \cdot \prod_{j=1}^n a_j^{\beta_{bj}} . \end{align*} Let $\{ e_i \}_{i=1}^n$ be the standard basis of $\mathbb{Z}^n$, and consider any $\sum_{i=1}^n \mu_i e_i \in \ker \beta $ such that $\mu_i = \pm 1 $ or $0$.
Denote by $S_\pm$ the subset of indices with $\mu_i= \pm 1$. We know that $\sum_{i \in S_+ }\beta_{bi} - \sum_{i \in S_{-} }\beta_{bi} = 0$, and hence $q^{\sum_{i \in S_+ }\beta_{bi} - \sum_{i \in S_{-} }\beta_{bi}}=1$. For (\ref{eqn-for-equiv}), note that $e^{\sum_{i=1}^n \frac{\ln z_i \ln a_i}{\ln q} } \cdot \widetilde I^{\operatorname{eff}} (\mathbf{p})$ satisfies an analogue of (\ref{linear-relation}): $$ \left( a_iq^{- z_i \partial_{z_i}} + q^{ a_i \partial_{a_i}} -1 \right) \left( e^{\sum_{i=1}^n \frac{\ln z_i \ln a_i}{\ln q} } \cdot \widetilde I^{\operatorname{eff}} (\mathbf{p}) \right) = 0. $$ Moreover, we have $$ \sum_{i=1}^n \ln z_i \ln a_i - \sum_{i\not\in\mathbf{p}} \ln z_i \ln U_i |_\mathbf{p} = - \sum_{j\not\in \mathbf{p}} \ln \Big( z_j^{-1} \prod_{i\in \mathbf{p}} z_i^{-C_{ij}} \Big) \ln a_j , $$ and hence $e^{\sum_{i=1}^n \frac{\ln z_i \ln a_i}{\ln q} } \cdot \widetilde I^{\operatorname{eff}} (\mathbf{p})$ depends only on the \emph{effective} K\"ahler parameters. Equation (\ref{eqn-for-equiv}) then follows by the same argument as (\ref{eqn-for-kalher}). \end{proof} We can also deduce the $q$-difference equations satisfied by the $I$-functions without modification. \begin{Corollary} Let $\{ e_i \}_{i=1}^n$ be the standard basis of $\mathbb{Z}^n$, and consider any $\sum_{i=1}^n \mu_i e_i \in \ker \beta $ such that $\mu_i = \pm 1 $ or $0$. Denote by $S_\pm$ the subset of indices with $\mu_i=\pm 1$. Then the $I$-function (without modification) $I^{\operatorname{eff}} (\mathbf{p})$ satisfies \begin{equation} \label{q-diff-z-2} \left[ \prod_{i\in S_+} ( z_i^{-1} (1- U_i |_\mathbf{p} \cdot q^{- z_i \partial_{z_i}} ) ) - \prod_{i\in S_-} (z_i^{-1} (1- U_i |_\mathbf{p} \cdot q^{- z_i \partial_{z_i}} ) ) \right] I^{\operatorname{eff}} (\mathbf{p}) = 0. \end{equation} \end{Corollary} To study the uniqueness of the $q$-difference equations, we need the following lemma. \begin{Lemma} \label{FG-uniqueness} Let $K$ be a field.
Suppose $f (x_1, \cdots, x_k) \in K(q) \llbracket x_1, \cdots, x_k \rrbracket$ satisfies the following system of $q$-difference equations $$ \left[ F_j (q^{x_1 \partial_{x_1} }, \cdots, q^{x_k \partial_{x_k} } ) - x_j G_j (q^{x_1 \partial_{x_1} }, \cdots, q^{x_k \partial_{x_k} } ) \right] f (x_1, \cdots, x_k) = 0, \qquad 1\leq j\leq k, $$ where the $F_j$'s and $G_j$'s are polynomials with coefficients in $K(q)$, such that $F_j (q^{n_1}, \cdots, q^{n_k}) \neq 0$ for any $n_1, \cdots, n_k \in \mathbb{Z}_{\geq 0}$, $n_j >0$. Then the solution $f (x_1, \cdots, x_k) \in K(q) \llbracket x_1, \cdots, x_k \rrbracket$ is uniquely determined by its constant term $f(0, \cdots, 0)$. \end{Lemma} \begin{proof} Let $f$ be $$ f(x_1, \cdots, x_k) = \sum_{n_1, \cdots, n_k\geq 0} f_{n_1, \cdots, n_k} x_1^{n_1} \cdots x_k^{n_k}, \qquad f_{n_1, \cdots, n_k} \in K(q). $$ For any $1\leq j\leq k$, the $j$-th $q$-difference equation implies the following recursion relations among the coefficients: $$ F_j (q^{n_1}, \cdots, q^{n_k} ) f_{n_1, \cdots, n_k} = G_j (q^{n_1}, \cdots, q^{n_j - 1}, \cdots, q^{n_k} ) f_{n_1, \cdots, n_j - 1, \cdots, n_k}, $$ for any $n_j\geq 1$, and other $n_i\geq 0$, $i\neq j$. By the nonvanishing assumption on $F_j (q^{n_1}, \cdots, q^{n_k} )$, the coefficient $f_{n_1, \cdots, n_k}$ is determined by $f_{n_1, \cdots, n_j - 1, \cdots, n_k}$. The entire formal power series $f$ is then uniquely determined by $f_{0, \cdots, 0}$. \end{proof} \begin{Lemma} \label{asymptotic} Let $\mathbf{p}\in \mathfrak{X}$ be a $\mathsf{T}$-fixed point. \begin{enumerate}[1)] \item The $I$-function $I^{\operatorname{eff}} (\mathbf{p})$ is uniquely characterized as the solution to the system of $q$-difference equations (\ref{q-diff-z-2}), taking values in $K_{\mathsf{T} \times \mathbb{C}_q^*} (\mathbf{p})_\mathrm{loc} \llbracket Q^{\operatorname{Eff}(\mathbf{p})} \rrbracket $, and satisfying the initial condition $I^{\operatorname{eff}}(\mathbf{p}) |_{Q=0} = 1$.
\item $\widetilde I^{\operatorname{eff}}(\mathbf{p})$ is uniquely characterized as the solution to the system of $q$-difference equations (\ref{eqn-for-kalher}), with the following prescribed asymptotic behavior: $$ \widetilde I^{\operatorname{eff}} (\mathbf{p}) \quad \in \quad e^{-\sum_{i\not\in \mathbf{p}} \frac{\ln z_i \ln U_i |_\mathbf{p} }{\ln q} } \cdot \prod_{i\not\in \mathbf{p}} \frac{1}{ (U_i |_\mathbf{p} )_\infty } \cdot \left( 1+ Q \cdot K_{\mathsf{T} \times \mathbb{C}^*_q} (\mathbf{p})_\mathrm{loc} \llbracket Q^{\operatorname{Eff}(\mathbf{p})} \rrbracket \right) , $$ where $Q \cdot K_{\mathsf{T} \times \mathbb{C}^*_q} (\mathbf{p})_\mathrm{loc} \llbracket Q^{\operatorname{Eff}(\mathbf{p})} \rrbracket$ denotes the maximal ideal in $K_{\mathsf{T} \times \mathbb{C}^*_q} (\mathbf{p})_\mathrm{loc} \llbracket Q^{\operatorname{Eff}(\mathbf{p})} \rrbracket$ generated by monomials $Q^\beta$, with $\beta \in \operatorname{Eff} (\mathbf{p}) \backslash \{0\}$. \end{enumerate} \end{Lemma} \begin{proof} It suffices to prove 1), from which 2) follows directly. Let $\mathbf{p}\in \mathfrak{X}$ be a fixed point. Up to a change of basis on $\mathbb{Z}^k$, we can assume that the matrix $\iota$ is of the form $\begin{pmatrix} I \\ C \end{pmatrix}$, where $I$ is the $k\times k$ submatrix with rows in $\mathbf{p}$. To avoid complicated notation, we assume that $\mathbf{p} = \{1, \cdots, k\}$, without much loss of generality. We have $Q_j = z_j \prod_{i=k+1}^n z_i^{C_{ij}}$, $1\leq j\leq k$. Each column of $\iota$ then gives a circuit, and the $q$-difference equations (\ref{q-diff-z-2}), written in terms of the $Q_j$'s, are \begin{eqnarray*} && \left[ (1 - q^{- Q_j \partial_{Q_j}} ) \prod_{i\not\in \mathbf{p}, \, C_{ij} = 1} (1- U_i |_\mathbf{p} \cdot q^{- \sum_{l\in \mathbf{p}} C_{il} Q_l \partial_{Q_l}} ) \right. \\ && \left.
- Q_j \prod_{i\not\in \mathbf{p}, \, C_{ij} = -1} (1- U_i |_\mathbf{p} \cdot q^{- \sum_{l\in \mathbf{p}} C_{il} Q_l \partial_{Q_l}} ) \right] I^{\operatorname{eff}} (\mathbf{p}) = 0 \end{eqnarray*} for any $1\leq j\leq k$. We used the fact that $U_j |_\mathbf{p} = 1$ since $j\in \mathbf{p}$; note also that $U_i |_\mathbf{p} \neq 1$ for all $i\not\in \mathbf{p}$. The system of $q$-difference equations then satisfies the assumption in Lemma \ref{FG-uniqueness}, and hence the lemma follows. \end{proof} Lemma \ref{asymptotic} proves the uniqueness part in 1) of Theorem \ref{main-theorem}. Now let's prove the second part of Theorem \ref{main-theorem}. \begin{proof}[Proof of Theorem \ref{main-theorem} 2)] From (\ref{eqn-for-equiv}), we know that $ \widetilde I^{\operatorname{eff}} (\mathbf{p}^!) $ satisfies \begin{align*} \left[ \prod_{i\in R_+} ((a_i^!)^{-1} (1- q^{ a^!_i \partial_{a^!_i}} ) ) - \prod_{i\in R_-} ( (a^!_i)^{-1} (1- q^{ a^!_i \partial_{a^!_i}} ) ) \right] \left( e^{\sum_{i=1}^n \frac{\ln z^!_i \ln a^!_i}{\ln q} } \cdot \widetilde I^{\operatorname{eff}} (\mathbf{p}^!) \right) = 0 . \end{align*} Applying the mirror map $\tau$ to both sides, we see that the two functions $$ \widetilde I^{\operatorname{eff}}(\mathbf{p}), \qquad e^{-\sum_{i=1}^n \frac{\ln z_i \ln a_i}{\ln q} } \cdot \tau ( \widetilde I^{\operatorname{eff}} (\mathbf{p}^!) ) $$ satisfy the same $q$-difference equations. We regard the two functions as formal power series in the K\"ahler parameters, with appropriate exponential prefactors. Therefore, by the uniqueness result, it suffices to check that $e^{-\sum_{i=1}^n \frac{\ln z_i \ln a_i}{\ln q} } \cdot \tau ( \widetilde I^{\operatorname{eff}} (\mathbf{p}^!) )$ admits the same asymptotic form as in 2) of Lemma \ref{asymptotic}. Recall that $$ \widetilde I^{\operatorname{eff}} (\mathbf{p}^!) = e^{-\sum_{i\not\in \mathbf{p}^!} \frac{\ln z_i^! \ln U_i |_{\mathbf{p}^!} }{\ln q} } \cdot \prod_{i\not\in \mathbf{p}^!} \frac{1}{ (U_i |_{\mathbf{p}^!} )_\infty } \cdot \sum_{d^!
\in \operatorname{Eff}(\mathbf{p}^!)} \frac{(z^!)^{D^!}}{\prod_{i\in \mathbf{p}^!} (q^{-1}; q^{-1})_{D_i^!} \prod_{i\not\in \mathbf{p}^!} (q^{-1} U_i |_{\mathbf{p}^!}; q^{-1} )_{D_i^!} } . $$ Its asymptotic behavior as $U_i |_{\mathbf{p}^!} \to 0$ is $$ e^{-\sum_{i\not\in \mathbf{p}^!} \frac{\ln z_i^! \ln U_i |_{\mathbf{p}^!} }{\ln q} } \cdot \sum_{D_i^! \geq 0, \, i\in \mathbf{p}^!} \frac{(z^!)^{D^!}}{\prod_{i\in \mathbf{p}^!} (q^{-1}; q^{-1})_{D_i^!} } , $$ where the summation over $D_i^!\geq 0$, $i\in \mathbf{p}^!$, comes from the combinatorial description of $\operatorname{Eff}(\mathbf{p}^!)$ in Lemma \ref{key-lemma}. Without loss of generality, we assume that the matrix $\iota$ is of the form $\begin{pmatrix} I \\ C \end{pmatrix}$, and the matrix $\beta$ is of the form $\begin{pmatrix} -C^T & I \end{pmatrix}$. Denote by $C_{ij}$ the entries of the submatrix $C$ of $\iota$ and by $C_{ij}^!$ the entries of the submatrix $-C^T$ of $\beta$; then we have $C_{ji}^! = - C_{ij}$, for $j\in \mathbf{p}$, $i\not\in \mathbf{p}$. So \begin{align*} U_j |_{\mathbf{p}^!} = a_j^! \prod_{i\in \mathbf{p}^!} (a_i^!)^{C_{ij}} \quad {\rm{and}} \quad (z^!)^{D^!} = \prod_{j\in \mathbf{p}^!} (z_j^!)^{D_j^!} \prod_{j \not\in \mathbf{p}^!} (z_j^!)^{\sum_{i\in \mathbf{p}^!} C_{ji}^! D_i^!} . \end{align*} Under the mirror map, we have \begin{eqnarray*} \tau \Big( \sum_{j\not\in \mathbf{p}^!} \ln z_j^! \ln U_j |_{\mathbf{p}^!} \Big) &=& \tau \Big( \sum_{j\not\in \mathbf{p}^!} \ln z_j^! \ln \Big( a_j^! \prod_{i\in \mathbf{p}^!} (a_i^!)^{C_{ij}} \Big) \Big) \\ &=& \sum_{j\in \mathbf{p}} \ln a_j \ln \Big( z_j \prod_{i\not\in \mathbf{p}} (z_i)^{C_{ij}} \Big) \\ &=& \sum_{j=1}^n \ln z_j \ln a_j + \sum_{i \not\in \mathbf{p}} \ln z_i \ln \Big( a_i^{-1} \prod_{j\in \mathbf{p}} a_j^{C_{ij}} \Big) \\ &=& \sum_{j=1}^n \ln z_j \ln a_j - \sum_{i \not\in \mathbf{p}} \ln z_i \ln U_i |_\mathbf{p}, \end{eqnarray*} and \begin{align*} \tau \left( \sum_{D_i^!
\geq 0, \, i\in \mathbf{p}^!} \frac{(z^!)^{D^!}}{\prod_{i\in \mathbf{p}^!} (q^{-1}; q^{-1})_{D_i^!} } \right) &= \sum_{D_i \geq 0, i \notin \mathbf{p}} \frac{\prod_{i \notin \mathbf{p}}a^{D_i}_i \prod_{j \in \mathbf{p}} a_j^{\sum_{i \notin \mathbf{p}}-C_{ij}D_i} }{\prod_{i \notin \mathbf{p}}(q;q)_{D_i}} \\ &=\sum_{D_i \geq 0, i \notin \mathbf{p}} \frac{\prod_{i \notin \mathbf{p}} \left( a_i \prod_{j \in \mathbf{p}} a_j^{-C_{ij}} \right)^{D_i} }{\prod_{i \notin \mathbf{p}}(q;q)_{D_i}} \\ &=\prod_{i \notin \mathbf{p}}\frac{1}{(U_i|_{\mathbf{p}})_\infty} \end{align*} where we use the $q$-binomial formula (\ref{q-binomial-formula}). Hence as $\tau(U_i|_{\mathbf{p}^!}) \to 0$, \begin{eqnarray*} \tau (\widetilde I^{\operatorname{eff}} (\mathbf{p}^!) ) &\sim& e^{\sum_{i=1}^n \frac{\ln z_i \ln a_i}{\ln q} } \cdot e^{-\sum_{i\not\in \mathbf{p}} \frac{\ln z_i \ln u_i |_\mathbf{p} }{\ln q} } \cdot\prod_{i\not\in \mathbf{p}} \frac{1}{(U_i |_\mathbf{p} )_\infty } . \end{eqnarray*} We see that $e^{-\sum_{i=1}^n \frac{\ln z_i \ln a_i}{\ln q} } \cdot \tau ( \widetilde I^{\operatorname{eff}} (\mathbf{p}^!) )$ admits the same prefactor as in Lemma \ref{asymptotic} when $Q\to 0$. The theorem is then proved. \end{proof} \begin{Remark} Here we only treat the modified $I$-functions as \emph{formal} power series in K\"ahler or equivariant parameters. In general they do not converge as (multi-valued) analytic functions. For K\"ahler parameters this is due to the possible divergence of the $I$-function $I^{\operatorname{eff}}(\mathbf{p})$ itself. For equivariant parameters, this can be seen already from the formula of $\widetilde I^{\operatorname{eff}}(\mathbf{p})$: in terms of $U_i |_\mathbf{p}$, it admits an infinite family of poles $U_i |_\mathbf{p} = q^{\mathbb{Z}}$. \end{Remark} \bibliographystyle{abbrv}
\section{Introduction} Correlation functions are fundamental objects for statistical analysis, and are thus ubiquitous in most kinds of scientific inquiries and their applications \cite{Jaynes,Devore}. In physics, correlation functions play an important role in research areas such as quantum optics and open systems \cite{Knight_book,Petruccione_book}, phase transitions and condensed matter physics \cite{Sachdev_book,Mezzadri_AMP}, and quantum field theory and nuclear and particle physics \cite{Mozo_QFT}. Another area in which correlation functions are omnipresent is quantum information science (QIS), an interdisciplinary field that extends the applicability of the classical theories of information, computation, and computational complexity \cite{Nielsen=000026Chuang,Preskill,Wilde}. Investigations of quantum correlations in physical systems have been one of the main catalysts for developments in QIS \cite{Leuchs_RMP,Horodecki_RMP,Jost_AMP,Modi_RMP,Lucas_RevD}. There are several guises of quantum correlations, and quantum discord stands among the most promising quantum resources for fueling the quantum advantage \cite{Lucas_PTRSA,Brito_AMP,Streltsov_book,Datta_DQC1,Winter_DSMerg,Pirandola_DCryp,Retamal_DSDisc,Piani_DDHid,Vedral_RSP,Adesso_metro,Piani_DEComm}. When computing or witnessing quantum discord, or other kinds of correlation or quantumness quantifiers, we are frequently faced with the need to calculate coherence vectors and correlation matrices \cite{Maziero_W1,Maziero_W2,Sarandy_2sc_exp,Adesso_OD,Maziero_W3,Bose_AMP,LoFranco1,LoFranco2,LoFranco3,LoFranco4,LoFranco5,LoFranco6,LoFranco7,LoFranco8,LoFranco9}. The main aim of this article is to provide formulas for these functions that are amenable to more efficient numerical calculation when compared with the direct implementation of their definitions.
In order to define coherence vectors and correlation matrices, let us consider a composite bipartite system with Hilbert space $\mathcal{H}_{ab}=\mathcal{H}_{a}\otimes\mathcal{H}_{b}$. Hereafter the corresponding dimensions are denoted by $d_{s}=\dim\mathcal{H}_{s}$ for $s=ab,a,b$. In addition, let $\Gamma_{j}^{s}$, with \begin{equation} \mathrm{Tr}(\Gamma_{j}^{s})=0\mbox{ and }\mathrm{Tr}(\Gamma_{j}^{s}\Gamma_{k}^{s})=2\delta_{jk}, \end{equation} be a basis for the special unitary group $SU(d_{s})$. Any density operator describing the state of the system $\mathcal{H}_{ab}$ can be written in the local basis $\Gamma_{j}^{a}\otimes\Gamma_{k}^{b}$ as follows: \begin{eqnarray} \rho & = & \frac{1}{d_{ab}}\left(\mathbb{I}_{a}\otimes\mathbb{I}_{b}+\sum_{j=1}^{d_{a}^{2}-1}a_{j}\Gamma_{j}^{a}\otimes\mathbb{I}_{b}+\mathbb{I}_{a}\otimes\sum_{k=1}^{d_{b}^{2}-1}b_{k}\Gamma_{k}^{b}\right.\nonumber \\ & & \hspace{1em}\hspace{1em}\hspace{1em}\left.+\sum_{j=1}^{d_{a}^{2}-1}\sum_{k=1}^{d_{b}^{2}-1}c_{j,k}\Gamma_{j}^{a}\otimes\Gamma_{k}^{b}\right),\label{eq:rho_Bloch} \end{eqnarray} where \begin{equation} j=1,\cdots,d_{a}^{2}-1\mbox{ and }k=1,\cdots,d_{b}^{2}-1, \end{equation} and $\mathbb{I}_{s}$ is the identity operator in $\mathcal{H}_{s}$. One can readily verify that the components of the coherence (or Bloch's) vectors $\mathbf{a}=(a_{1},\cdots,a_{d_{a}^{2}-1})$ and $\mathbf{b}=(b_{1},\cdots,b_{d_{b}^{2}-1})$ and of the correlation matrix $C=(c_{j,k})$ are given by: \begin{eqnarray} a_{j} & = & 2^{-1}d_{a}\mathrm{Tr}(\Gamma_{j}^{a}\otimes\mathbb{I}_{b}\rho),\\ b_{k} & = & 2^{-1}d_{b}\mathrm{Tr}(\mathbb{I}_{a}\otimes\Gamma_{k}^{b}\rho),\\ c_{j,k} & = & 2^{-2}d_{ab}\mathrm{Tr}(\Gamma_{j}^{a}\otimes\Gamma_{k}^{b}\rho). \end{eqnarray} It is worthwhile mentioning that the mean value of \emph{any observable} in $\mathcal{H}_{s}$, for $s=a,b,ab$, can be obtained using these quantities. 
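To make the definitions concrete, consider the simplest case $d_{a}=d_{b}=2$, for which the generators can be taken to be the Pauli matrices. The Python fragment below (our codes are written in Fortran; this sketch is only illustrative) evaluates $\mathbf{a}$, $\mathbf{b}$, and $C$ directly from the traces above for a Bell state.

```python
import numpy as np

# Pauli matrices ordered (sigma_z, sigma_x, sigma_y), matching the
# diagonal / symmetric / antisymmetric grouping of the generators used below
sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
paulis = [sz, sx, sy]
id2 = np.eye(2, dtype=complex)

def bloch_and_corr(rho):
    """Direct implementation of the definitions; for d_a = d_b = 2 the
    prefactors 2^{-1} d_a, 2^{-1} d_b and 2^{-2} d_ab are all equal to 1."""
    a = np.array([np.trace(np.kron(G, id2) @ rho).real for G in paulis])
    b = np.array([np.trace(np.kron(id2, G) @ rho).real for G in paulis])
    C = np.array([[np.trace(np.kron(Gj, Gk) @ rho).real for Gk in paulis]
                  for Gj in paulis])
    return a, b, C

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): the marginals are maximally
# mixed, so the local Bloch vectors vanish, while C = diag(1, 1, -1)
# in the (z, x, y) ordering used here
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
a, b, C = bloch_and_corr(np.outer(psi, psi.conj()))
assert np.allclose(a, 0) and np.allclose(b, 0)
assert np.allclose(C, np.diag([1.0, 1.0, -1.0]))
```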
In \href{https://github.com/jonasmaziero/LibForQ.git}{https://github.com/jonasmaziero/LibForQ.git}, we provide Fortran code to compute the coherence vectors, correlation matrices, and quantum discord quantifiers we deal with here. Besides these functions, there are other tools therein that may be of interest to the reader. The instructions on how to use the software are provided in the readme file. Related to the content of this section, the subroutine \texttt{bloch\_vector\_gellmann\_unopt($d_{s}$, $\rho_{s}$, $\mathbf{s}$)} returns the coherence vectors $\mathbf{a}$ or $\mathbf{b}$ and the subroutine \texttt{corrmat\_gellmann\_unopt($d_{a}$, $d_{b}$, $\rho$, $C$)} computes the correlation matrix $C$. Now, let us notice that if calculated directly from the equations above, for $d_{a},d_{b}\gg1$, the computational complexity (CC) to obtain the coherence vectors $\mathbf{a}$ and $\mathbf{b}$ or the correlation matrix $C$ is: \begin{equation} CC(\mathbf{a})=CC(\mathbf{b})=CC(C)\approx\mathcal{O}(d_{a}^{6}d_{b}^{6}). \end{equation} The remainder of this article is structured as follows. In Sec. \ref{coh_vec}, we obtain formulas for $\mathbf{a}$, $\mathbf{b}$, and $C$ that are amenable for more efficient numerical computations. In Sec. \ref{discord} we test these formulas by applying them in the calculation of Hilbert-Schmidt quantum discords. In Sec. \ref{conclusion} we make some final remarks about the usefulness and possible applications of the results reported here. \section{Computing coherence vectors and correlation matrices} \label{coh_vec} The partial trace function \cite{Maziero_PTr} can be used in order to obtain the reduced states $\rho_{a}=\mathrm{Tr}_{b}(\rho)$ and $\rho_{b}=\mathrm{Tr}_{a}(\rho)$ and to write the components of the Bloch vectors in the form: \begin{eqnarray} a_{j} & = & 2^{-1}d_{a}\mathrm{Tr}(\Gamma_{j}^{a}\rho_{a}),\\ b_{k} & = & 2^{-1}d_{b}\mathrm{Tr}(\Gamma_{k}^{b}\rho_{b}). 
\end{eqnarray} Thus, when computing the coherence vectors of the parties $a$ and $b$, we have to solve essentially the same problem; so let us consider it separately. That is to say, we regard a generic density operator written as \begin{equation} \rho_{s}=\frac{1}{d_{s}}\left(\mathbb{I}_{s}+\sum_{j=1}^{d_{s}^{2}-1}s_{j}\Gamma_{j}^{s}\right), \end{equation} where $s_{j}=2^{-1}d_{s}\mathrm{Tr}(\rho_{s}\Gamma_{j}^{s}).$ From now on, we assume that the matrix elements of the density operator $\rho$ under consideration are given in the standard computational basis. We want to compute the \emph{Bloch vector} \cite{Petruccione_param}: $\mathbf{s}=(s_{1},\cdots,s_{d_{s}^{2}-1}).$ In order to do that, a particular basis $\Gamma_{j}^{s}$ must be chosen. Here we pick the generalized Gell-Mann matrices, which are presented below in three groups \cite{Bertlmann}: \begin{eqnarray} & & \Gamma_{j}^{s(1)}=\sqrt{\frac{2}{j(j+1)}}\sum_{k=1}^{j+1}(-j)^{\delta_{k,j+1}}|k\rangle\langle k|,\label{eq:SU1}\\ & & \mbox{ }\mbox{ }\mbox{ }\mbox{ for }j=1,\cdots,d_{s}-1,\nonumber \\ & & \Gamma_{(k,l)}^{s(2)}=|k\rangle\langle l|+|l\rangle\langle k|,\mbox{ for }1\leq k<l\leq d_{s},\label{eq:SU2}\\ & & \Gamma_{(k,l)}^{s(3)}=-i(|k\rangle\langle l|-|l\rangle\langle k|),\mbox{ for }1\leq k<l\leq d_{s},\label{eq:SU3} \end{eqnarray} which are named the diagonal, symmetric, and antisymmetric group, respectively. The last two groups possess $d_{s}(d_{s}-1)/2$ generators each. Any one of these matrices can be obtained by calling the subroutine \texttt{gellmann($d_{s}$, $g$, $k$, $l$, $\Gamma_{(k,l)}^{s(g)}$)}. For the first group, $g=1$, we set $j=k$ and, in this case, $l$ can be set to any integer.
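A hedged Python sketch of this construction (the paper's own routine is the Fortran subroutine \texttt{gellmann}; the function below and its name are ours) builds the three groups in the order diagonal, symmetric, antisymmetric; for $d_{s}=3$ it reproduces the eight standard Gell-Mann matrices up to ordering:

```python
import numpy as np

def gellmann_basis(d):
    """Generalized Gell-Mann generators, grouped as in the text:
    first the d-1 diagonal ones, then the symmetric and the
    antisymmetric ones, d(d-1)/2 of each."""
    gens = []
    for j in range(1, d):                      # diagonal group
        G = np.zeros((d, d), dtype=complex)
        for k in range(j):
            G[k, k] = 1.0                      # coefficient 1 on the first j states
        G[j, j] = -j                           # coefficient -j on state j+1
        gens.append(np.sqrt(2.0/(j*(j+1)))*G)
    for k in range(d):                         # symmetric group, k < l
        for l in range(k+1, d):
            G = np.zeros((d, d), dtype=complex)
            G[k, l] = G[l, k] = 1.0
            gens.append(G)
    for k in range(d):                         # antisymmetric group, k < l
        for l in range(k+1, d):
            G = np.zeros((d, d), dtype=complex)
            G[k, l], G[l, k] = -1j, 1j
            gens.append(G)
    return gens

gens = gellmann_basis(3)   # the 8 Gell-Mann matrices, up to ordering
```

The generators returned this way are traceless and satisfy the normalization $\mathrm{Tr}(\Gamma_{j}\Gamma_{k})=2\delta_{jk}$ assumed throughout.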
It is straightforward to see that, for the generators above, the corresponding components of the Bloch vector can be expressed directly in terms of the matrix elements of the density operator $\rho_{s}$ as follows: \begin{eqnarray} & & s_{j}^{(1)}=\frac{d_{s}}{\sqrt{2j(j+1)}}\sum_{k=1}^{j+1}(-j)^{\delta_{k,j+1}}\langle k|\rho_{s}|k\rangle,\\ & & \mbox{ }\mbox{ }\mbox{ }\mbox{for }j=1,\cdots,d_{s}-1,\nonumber \\ & & s_{(k,l)}^{(2)}=d_{s}\mathrm{Re}\langle l|\rho_{s}|k\rangle,\mbox{ for }1\leq k<l\leq d_{s},\\ & & s_{(k,l)}^{(3)}=d_{s}\mathrm{Im}\langle l|\rho_{s}|k\rangle,\mbox{ for }1\leq k<l\leq d_{s}. \end{eqnarray} These expressions were implemented in the Fortran subroutine \texttt{bloch\_vector\_gellmann($d_{s}$, $\rho_{s}$, $\mathbf{s}$)}. With this subroutine, and the partial trace function \cite{Maziero_PTr}, we can compute the coherence vectors $\mathbf{a}$ and $\mathbf{b}$. We observe that after these simple algebraic manipulations the computational complexity of the Bloch vector turns out to be basically the CC of the partial trace function. Hence, from Ref. \cite{Maziero_PTr} we have that for $d_{a},d_{b}\gg1$, \begin{equation} CC(\mathbf{a})\approx\mathcal{O}(d_{a}^{2}d_{b})\mbox{ and }CC(\mathbf{b})\approx\mathcal{O}(d_{a}d_{b}^{2}). \end{equation} One detail we should keep in mind, when making use of the codes linked to this article, is the convention we apply for the indices of the components of $\mathbf{s}$. For the first group of generators, $\Gamma_{j}^{s(1)}$, naturally, $j=1,\cdots,d_{s}-1$. We continue with the second group of generators, $\Gamma_{j}^{s(2)}=\Gamma_{(k,l)}^{s(2)}$, by setting $j_{(k,l)=(1,2)}=d_{s}-1+1=d_{s}$, $j_{(k,l)=(1,3)}=d_{s}+1$, $\cdots$, $j_{(k,l)=(1,d_{s})}=2(d_{s}-1)$, $j_{(k,l)=(2,3)}=2(d_{s}-1)+1$, $\cdots$.
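The optimized formulas above read the Bloch vector off the entries of $\rho_{s}$ without ever assembling a generator. The following Python sketch does the same (the function and its name are ours; the article's implementation is the Fortran subroutine \texttt{bloch\_vector\_gellmann}), listing the diagonal components first and then the off-diagonal groups; for a single qubit this ordering corresponds to $(\sigma_{z},\sigma_{x},\sigma_{y})$:

```python
import numpy as np

def bloch_vector(rho):
    """Optimized Bloch vector, read directly from the entries of rho:
    diagonal components first, then symmetric, then antisymmetric."""
    d = rho.shape[0]
    s = []
    for j in range(1, d):                    # s_j^(1)
        acc = sum(rho[k, k] for k in range(j)) - j*rho[j, j]
        s.append(float(acc.real)*d/np.sqrt(2.0*j*(j+1)))
    for k in range(d):                       # s_(k,l)^(2) = d Re<l|rho|k>
        for l in range(k+1, d):
            s.append(d*rho[l, k].real)
    for k in range(d):                       # s_(k,l)^(3) = d Im<l|rho|k>
        for l in range(k+1, d):
            s.append(d*rho[l, k].imag)
    return np.array(s)

# Single-qubit check: rho = (I + r_x sx + r_y sy + r_z sz)/2 should give
# the components back in the order (r_z, r_x, r_y).
rx, ry, rz = 0.3, -0.2, 0.5
rho = 0.5*np.array([[1 + rz, rx - 1j*ry], [rx + 1j*ry, 1 - rz]])
s = bloch_vector(rho)
```

No Kronecker products or matrix multiplications are needed, which is the source of the complexity reduction quoted above.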
The same convention is used for the third group of generators, $\Gamma_{j}^{s(3)}=\Gamma_{(k,l)}^{s(3)}$, but here we begin with $j_{(k,l)=(1,2)}=d_{s}-1+2^{-1}d_{s}(d_{s}-1)+1=d_{s}+2^{-1}d_{s}(d_{s}-1).$ Next we address the computation of the \emph{correlation matrix} $C=(c_{j,k})$, which is a $(d_{a}^{2}-1)\mathrm{x}(d_{b}^{2}-1)$ matrix that we write in the form: \begin{equation} C=\begin{bmatrix}C^{(1,1)} & C^{(1,2)} & C^{(1,3)}\\ C^{(2,1)} & C^{(2,2)} & C^{(2,3)}\\ C^{(3,1)} & C^{(3,2)} & C^{(3,3)} \end{bmatrix},\label{eq:corrmat} \end{equation} with the sub-matrices given as shown below. For convenience, we define the auxiliary variables: \begin{equation} \iota\coloneqq\sqrt{\frac{2}{j(j+1)}}\mbox{, }\kappa\coloneqq\sqrt{\frac{2}{k(k+1)}}\mbox{, and }\varsigma\coloneqq\frac{d_{a}d_{b}}{4}. \end{equation} The matrix elements of $C^{(1,1)}$, whose dimension is $(d_{a}-1)\mathrm{x}(d_{b}-1)$, correspond to the diagonal generators for $a$ and diagonal generators for $b$: \begin{eqnarray} & & c_{j,k}^{(1,1)}=\varsigma\mathrm{Tr}(\Gamma_{j}^{a(1)}\otimes\Gamma_{k}^{b(1)}\rho)\nonumber \\ & = & \varsigma\iota\kappa\mathrm{Tr}\left((\sum_{m=1}^{j+1}(-j)^{\delta_{m,j+1}}|m\rangle\langle m|)\otimes(\sum_{p=1}^{k+1}(-k)^{\delta_{p,k+1}}|p\rangle\langle p|)\rho\right)\nonumber \\ & = & \varsigma\iota\kappa\sum_{m=1}^{j+1}\sum_{p=1}^{k+1}(-j)^{\delta_{m,j+1}}(-k)^{\delta_{p,k+1}}\mathrm{Tr}(|m\rangle\langle m|\otimes|p\rangle\langle p|\rho)\nonumber \\ & = & \varsigma\iota\kappa\sum_{m=1}^{j+1}\sum_{p=1}^{k+1}(-j)^{\delta_{m,j+1}}(-k)^{\delta_{p,k+1}}\langle mp|\rho|mp\rangle.\label{eq:c11} \end{eqnarray} The matrix elements of $C^{(1,2)}$, whose dimension is $(d_{a}-1)\mathrm{x}2^{-1}d_{b}(d_{b}-1)$, correspond to the diagonal generators for $a$ and symmetric generators for $b$: \begin{eqnarray} & & c_{j,k}^{(1,2)}=\varsigma\mathrm{Tr}(\Gamma_{j}^{a(1)}\otimes\Gamma_{k}^{b(2)}\rho)=\varsigma\mathrm{Tr}(\Gamma_{j}^{a(1)}\otimes\Gamma_{(p,q)}^{b(2)}\rho)\nonumber \\ & = &
\varsigma\mathrm{Tr}\left(\iota(\sum_{m=1}^{j+1}(-j)^{\delta_{m,j+1}}|m\rangle\langle m|)\otimes(|p\rangle\langle q|+|q\rangle\langle p|)\rho\right)\nonumber \\ & = & \varsigma\iota\sum_{m=1}^{j+1}(-j)^{\delta_{m,j+1}}\left(\mathrm{Tr}(|mp\rangle\langle mq|\rho)+\mathrm{Tr}(|mq\rangle\langle mp|\rho)\right)\nonumber \\ & = & \varsigma\iota\sum_{m=1}^{j+1}(-j)^{\delta_{m,j+1}}\left(\langle mq|\rho|mp\rangle+\langle mq|\rho|mp\rangle^{*}\right)\nonumber \\ & = & 2\varsigma\iota\sum_{m=1}^{j+1}(-j)^{\delta_{m,j+1}}\mathrm{Re}\langle mq|\rho|mp\rangle.\label{eq:c12} \end{eqnarray} The matrix elements of $C^{(1,3)}$, whose dimension is $(d_{a}-1)\mathrm{x}2^{-1}d_{b}(d_{b}-1)$, correspond to the diagonal generators for $a$ and antisymmetric generators for $b$: \begin{eqnarray} & & c_{j,k}^{(1,3)}=\varsigma\mathrm{Tr}(\Gamma_{j}^{a(1)}\otimes\Gamma_{k}^{b(3)}\rho)=\varsigma\mathrm{Tr}(\Gamma_{j}^{a(1)}\otimes\Gamma_{(p,q)}^{b(3)}\rho)\nonumber \\ & = & -i\varsigma\iota\mathrm{Tr}\left((\sum_{m=1}^{j+1}(-j)^{\delta_{m,j+1}}|m\rangle\langle m|)\otimes(|p\rangle\langle q|-|q\rangle\langle p|)\rho\right)\nonumber \\ & = & -i\varsigma\iota\sum_{m=1}^{j+1}(-j)^{\delta_{m,j+1}}\left(\mathrm{Tr}(|mp\rangle\langle mq|\rho)-\mathrm{Tr}(|mq\rangle\langle mp|\rho)\right)\nonumber \\ & = & -i\varsigma\iota\sum_{m=1}^{j+1}(-j)^{\delta_{m,j+1}}\left(\langle mq|\rho|mp\rangle-\langle mq|\rho|mp\rangle^{*}\right)\nonumber \\ & = & 2\varsigma\iota\sum_{m=1}^{j+1}(-j)^{\delta_{m,j+1}}\mathrm{Im}\langle mq|\rho|mp\rangle.\label{eq:c13} \end{eqnarray} The matrix elements of $C^{(2,1)}$, whose dimension is $2^{-1}d_{a}(d_{a}-1)\mathrm{x}(d_{b}-1)$, correspond to the symmetric generators for $a$ and diagonal generators for $b$: \begin{eqnarray} & & c_{j,k}^{(2,1)}=\varsigma\mathrm{Tr}(\Gamma_{j}^{a(2)}\otimes\Gamma_{k}^{b(1)}\rho)=\varsigma\mathrm{Tr}(\Gamma_{(m,n)}^{a(2)}\otimes\Gamma_{k}^{b(1)}\rho)\nonumber \\ & = & \varsigma\mathrm{Tr}\left((|m\rangle\langle n|+|n\rangle\langle 
m|)\otimes\kappa(\sum_{p=1}^{k+1}(-k)^{\delta_{p,k+1}}|p\rangle\langle p|)\rho\right)\nonumber \\ & = & \varsigma\kappa\sum_{p=1}^{k+1}(-k)^{\delta_{p,k+1}}\left(\mathrm{Tr}(|mp\rangle\langle np|\rho)+\mathrm{Tr}(|np\rangle\langle mp|\rho)\right)\nonumber \\ & = & \varsigma\kappa\sum_{p=1}^{k+1}(-k)^{\delta_{p,k+1}}\left(\langle np|\rho|mp\rangle+\langle np|\rho|mp\rangle^{*}\right)\nonumber \\ & = & 2\varsigma\kappa\sum_{p=1}^{k+1}(-k)^{\delta_{p,k+1}}\mathrm{Re}\langle np|\rho|mp\rangle.\label{eq:c21} \end{eqnarray} The matrix elements of $C^{(2,2)}$, whose dimension is $2^{-1}d_{a}(d_{a}-1)\mathrm{x}2^{-1}d_{b}(d_{b}-1)$, correspond to the symmetric generators for $a$ and symmetric generators for $b$: \begin{eqnarray} & & c_{j,k}^{(2,2)}=\varsigma\mathrm{Tr}(\Gamma_{j}^{a(2)}\otimes\Gamma_{k}^{b(2)}\rho)=\varsigma\mathrm{Tr}(\Gamma_{(m,n)}^{a(2)}\otimes\Gamma_{(p,q)}^{b(2)}\rho)\nonumber \\ & & =\varsigma\mathrm{Tr}(\left(|m\rangle\langle n|+|n\rangle\langle m|\right)\otimes\left(|p\rangle\langle q|+|q\rangle\langle p|\right)\rho)\nonumber \\ & & =\varsigma\left(\langle nq|\rho|mp\rangle+\langle mq|\rho|np\rangle+\langle np|\rho|mq\rangle+\langle mp|\rho|nq\rangle\right)\nonumber \\ & & =2\varsigma\left(\mathrm{Re}\langle nq|\rho|mp\rangle+\mathrm{Re}\langle np|\rho|mq\rangle\right).\label{eq:c22} \end{eqnarray} The matrix elements of $C^{(2,3)}$, whose dimension is $2^{-1}d_{a}(d_{a}-1)\mathrm{x}2^{-1}d_{b}(d_{b}-1)$, correspond to the symmetric generators for $a$ and antisymmetric generators for $b$: \begin{eqnarray} & & c_{j,k}^{(2,3)}=\varsigma\mathrm{Tr}(\Gamma_{j}^{a(2)}\otimes\Gamma_{k}^{b(3)}\rho)=\varsigma\mathrm{Tr}(\Gamma_{(m,n)}^{a(2)}\otimes\Gamma_{(p,q)}^{b(3)}\rho)\nonumber \\ & & =\varsigma\mathrm{Tr}(\left(|m\rangle\langle n|+|n\rangle\langle m|\right)\otimes(-i)\left(|p\rangle\langle q|-|q\rangle\langle p|\right)\rho)\nonumber \\ & & =-i\varsigma\left(\langle nq|\rho|mp\rangle+\langle mq|\rho|np\rangle-\langle np|\rho|mq\rangle-\langle 
mp|\rho|nq\rangle\right)\nonumber \\ & & =2\varsigma\left(\mathrm{Im}\langle nq|\rho|mp\rangle-\mathrm{Im}\langle np|\rho|mq\rangle\right).\label{eq:c23} \end{eqnarray} The matrix elements of $C^{(3,1)}$, whose dimension is $2^{-1}d_{a}(d_{a}-1)\mathrm{x}(d_{b}-1)$, correspond to the antisymmetric generators for $a$ and diagonal generators for $b$: \begin{eqnarray} & & c_{j,k}^{(3,1)}=\varsigma\mathrm{Tr}(\Gamma_{j}^{a(3)}\otimes\Gamma_{k}^{b(1)}\rho)=\varsigma\mathrm{Tr}(\Gamma_{(m,n)}^{a(3)}\otimes\Gamma_{k}^{b(1)}\rho)\nonumber \\ & & =\varsigma\mathrm{Tr}\left(-i(|m\rangle\langle n|-|n\rangle\langle m|)\otimes\kappa\sum_{p=1}^{k+1}(-k)^{\delta_{p,k+1}}|p\rangle\langle p|\rho\right)\nonumber \\ & & =-i\varsigma\kappa\sum_{p=1}^{k+1}(-k)^{\delta_{p,k+1}}\left(\langle np|\rho|mp\rangle-\langle mp|\rho|np\rangle\right)\nonumber \\ & & =2\varsigma\kappa\sum_{p=1}^{k+1}(-k)^{\delta_{p,k+1}}\mathrm{Im}\langle np|\rho|mp\rangle.\label{eq:c31} \end{eqnarray} The matrix elements of $C^{(3,2)}$, whose dimension is $2^{-1}d_{a}(d_{a}-1)\mathrm{x}2^{-1}d_{b}(d_{b}-1)$, correspond to the antisymmetric generators for $a$ and symmetric generators for $b$: \begin{eqnarray} & & c_{j,k}^{(3,2)}=\varsigma\mathrm{Tr}(\Gamma_{j}^{a(3)}\otimes\Gamma_{k}^{b(2)}\rho)=\varsigma\mathrm{Tr}(\Gamma_{(m,n)}^{a(3)}\otimes\Gamma_{(p,q)}^{b(2)}\rho)\nonumber \\ & & =\varsigma\mathrm{Tr}\left((-i)\left(|m\rangle\langle n|-|n\rangle\langle m|\right)\otimes\left(|p\rangle\langle q|+|q\rangle\langle p|\right)\rho\right)\nonumber \\ & & =-i\varsigma\left(\langle nq|\rho|mp\rangle-\langle mp|\rho|nq\rangle+\langle np|\rho|mq\rangle-\langle mq|\rho|np\rangle\right)\nonumber \\ & & =2\varsigma\left(\mathrm{Im}\langle nq|\rho|mp\rangle+\mathrm{Im}\langle np|\rho|mq\rangle\right).\label{eq:c32} \end{eqnarray} The matrix elements of $C^{(3,3)}$, whose dimension is $2^{-1}d_{a}(d_{a}-1)\mathrm{x}2^{-1}d_{b}(d_{b}-1)$, correspond to antisymmetric generators for $a$ and antisymmetric generators for $b$:
\begin{eqnarray} & & c_{j,k}^{(3,3)}=\varsigma\mathrm{Tr}(\Gamma_{j}^{a(3)}\otimes\Gamma_{k}^{b(3)}\rho)=\varsigma\mathrm{Tr}(\Gamma_{(m,n)}^{a(3)}\otimes\Gamma_{(p,q)}^{b(3)}\rho)\nonumber \\ & & =\varsigma\mathrm{Tr}\left(-i\left(|m\rangle\langle n|-|n\rangle\langle m|\right)\otimes(-i)\left(|p\rangle\langle q|-|q\rangle\langle p|\right)\rho\right)\nonumber \\ & & =-\varsigma\left(\langle nq|\rho|mp\rangle+\langle mp|\rho|nq\rangle-\langle np|\rho|mq\rangle-\langle mq|\rho|np\rangle\right)\nonumber \\ & & =2\varsigma\left(\mathrm{Re}\langle np|\rho|mq\rangle-\mathrm{Re}\langle nq|\rho|mp\rangle\right).\label{eq:c33} \end{eqnarray} We remark that when implementing these expressions numerically, in order to map the local basis to the global computational basis, we use, e.g., \begin{equation} |np\rangle\equiv|(n-1)d_{b}+p\rangle. \end{equation} The subroutine \texttt{corrmat\_gellmann($d_{a}$, $d_{b}$, $\rho$, $C$)} returns the correlation matrix $C=(c_{j,k})$, as written in Eq. (\ref{eq:corrmat}), associated with the bipartite density operator $\rho$ and computed using the Gell-Mann basis, as described in this section. The convention for the indices of the matrix elements $c_{j,k}$ is defined in the same way as for the coherence vectors. The computational complexity for $C$, computed via the optimized expressions obtained in this section, is, for $d_{a},d_{b}\gg1$, \begin{equation} CC(C)\approx\mathcal{O}(d_{a}^{2}d_{b}^{2}). \end{equation} By generating some random density matrices \cite{Maziero_FICT}, we checked that the expressions and the corresponding code for the unoptimized and optimized versions of $\mathbf{a}$, $\mathbf{b}$, and $C$ agree. Additional tests will be presented in the next section, where we calculate some quantum discord quantifiers. \section{Computing Hilbert-Schmidt quantum discords} \label{discord} The calculation of quantum discord (QD) functions usually involves hard optimization problems \cite{Huang_DiscCC,Sen_DiscCC}.
In the last few years, a large amount of effort has been dedicated to computing QD analytically, with success being obtained mostly for low dimensional quantum systems \cite{Luo_DiscAn1,Luo_DiscAn2,Alber_DiscAnal_XS,Maziero_DiscAn,Adesso_DAnal,Oh_DAnal,Joag_DhsAnal,Giovannetti_AnalDtr,Orzag_DAnal,Zahir_DAnal,Seddik_DAnal,Bordone_DAnal,Fan_DAnal,Fei_DAnal,Wang_DAnal,Zhang_DAnal,Sarandy_Dtr_Anal,Huang_2}. Although not meeting all the required properties for a \emph{bona fide} QD quantifier \cite{Modi_DCriteria}, the Hilbert-Schmidt discord (HSD) \cite{Brukner_HSD}, \begin{equation} D_{hs}^{a}(\rho)=\min_{\rho_{cq}}||\rho-\rho_{cq}||_{2}^{2}, \end{equation} is drawing much attention due to its amenability to analytical computation, when compared with most other QD measures. In the last equation, the minimization is performed over the classical-quantum states \begin{equation} \rho_{cq}=\sum_{j}p_{j}|a_{j}\rangle\langle a_{j}|\otimes\rho_{j}^{b}, \end{equation} with $p_{j}$ being a probability distribution, $|a_{j}\rangle$ an orthonormal basis for $\mathcal{H}_{a}$, $\rho_{j}^{b}$ generic density operators defined in $\mathcal{H}_{b}$, and $||O||_{2}\coloneqq\sqrt{\mathrm{Tr}(O^{\dagger}O)}$ the Hilbert-Schmidt norm of the linear operator $O$, with $O^{\dagger}$ being the transpose conjugate of $O$. In this article, as a basic test for the Fortran code provided to obtain coherence vectors and correlation matrices, we shall compute the following lower bound for the HSD \cite{Luo_HSD}: \begin{equation} D_{hs}^{a}(\rho)=\sum_{j=d_{a}}^{d_{a}^{2}-1}\lambda_{j}^{a}, \end{equation} where $\lambda_{j}^{a}$ are the eigenvalues, sorted in non-increasing order, of the $(d_{a}^{2}-1)\mathrm{x}(d_{a}^{2}-1)$ matrix: \begin{equation} \Xi_{a}=\frac{2}{d_{a}^{2}d_{b}}\left(\mathbf{a}\mathbf{a}^{t}+\frac{2}{d_{b}}CC^{t}\right). \end{equation} In the equation above $t$ stands for the transpose of a vector or matrix.
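Given $\mathbf{a}$ and $C$, evaluating this bound reduces to a small eigenvalue problem for $\Xi_{a}$. A hedged Python sketch (the function and its name are ours; the paper's routine is the Fortran function \texttt{discord\_hs}), checked on a two-qubit Bell state, for which $\mathbf{a}=0$, $C=\mathrm{diag}(1,-1,1)$, and the formula gives $D_{hs}^{a}=1/2$:

```python
import numpy as np

def hs_discord(a_vec, C, da, db):
    """Sum of the d_a^2 - d_a smallest eigenvalues of
    Xi_a = (2/(d_a^2 d_b)) (a a^t + (2/d_b) C C^t)."""
    a_vec = np.asarray(a_vec, dtype=float).reshape(-1, 1)
    Xi = (2.0/(da**2*db))*(a_vec @ a_vec.T + (2.0/db)*(C @ C.T))
    lam = np.sort(np.linalg.eigvalsh(Xi))[::-1]   # non-increasing order
    return float(lam[da-1:].sum())                # j = d_a, ..., d_a^2 - 1

# Bell state |Phi+>: a = 0 and C = diag(1, -1, 1), so Xi_a = I/4 and the
# two smallest eigenvalues sum to 1/2.
D = hs_discord(np.zeros(3), np.diag([1.0, -1.0, 1.0]), 2, 2)
```

Only the $(d_{a}^{2}-1)$-dimensional matrix $\Xi_{a}$ enters here, so the cost is dominated by assembling $\mathbf{a}$ and $C$.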
We observe that the other version of the HSD, $D_{hs}^{b}$, can be obtained from the equations above simply by exchanging $a$ and $b$ and using $C^{t}C$ instead of $CC^{t}$. It is interesting to note that, as proved in Ref. \cite{Akhtarshenas1}, a bipartite state $\rho$, with polarization vectors $\mathbf{a}$ and $\mathbf{b}$ and correlation matrix $C$, is classical-quantum if and only if there exists a $(d_{a}-1)$-dimensional projector $\Pi_{a}$ in the space $\mathbb{R}^{d_{a}^{2}-1}$ such that: \begin{equation} \Pi_{a}\mathbf{a}=\mathbf{a}\mbox{ and }\Pi_{a}C=C. \end{equation} Based on this fact, an ameliorated version of the Hilbert-Schmidt quantum discord (AHSD) was proposed \cite{Akhtarshenas1}: \begin{equation} D_{hsa}^{a}(\rho)\coloneqq\min_{\Pi_{a}}||\Upsilon_{a}-\Pi_{a}\Upsilon_{a}||_{2}^{2}, \end{equation} with the matrix $\Upsilon_{a}$ defined as \begin{equation} \Upsilon_{a}\coloneqq\sqrt{\frac{2}{d_{a}^{2}d_{b}}}\left(f(b)\mathbf{a}\hspace{1em}g(b)\sqrt{\frac{2}{d_{b}}}C\right), \end{equation} where $f$ and $g$ are arbitrary functions of $b\equiv||\mathbf{b}||_{2}$. Then, by setting $f(b)=g(b)=P(\rho_{b})$ and using the purity, \begin{eqnarray} P(\rho_{b}) & \coloneqq & \mathrm{Tr}(\rho_{b}^{2})=\sum_{j,k}|\rho_{j,k}^{b}|^{2}, \end{eqnarray} to address the problem of non-contractivity of the Hilbert-Schmidt distance, the following analytical formula was presented \cite{Akhtarshenas1}: \begin{equation} D_{hsa}^{a}(\rho)=\frac{1}{P(\rho_{b})}\sum_{j=d_{a}}^{d_{a}^{2}-1}\lambda_{j}^{a}=\frac{D_{hs}^{a}(\rho)}{P(\rho_{b})}.\label{eq:AHSD} \end{equation} Thus both discord quantifiers $D_{hs}^{a}$ and $D_{hsa}^{a}$ are ultimately obtained from the eigenvalues $\lambda_{j}^{a}$, and the computation of these eigenvalues requires knowledge of the coherence vector $\mathbf{a}$ (or $\mathbf{b}$) and of the correlation matrix $C$.
These QD measures were implemented in the Fortran functions \texttt{discord\_hs(ssys, $d_{a}$, $d_{b}$, $\rho$)} and \texttt{discord\_hsa(ssys, $d_{a}$, $d_{b}$, $\rho$)}, where \texttt{ssys = `s'}, with $s=a,b$, specifies which version of the quantum discord is to be computed. As an example, let us use the formulas provided in this article and the associated code to compute the HSD and AHSD of Werner states in $\mathcal{H}_{a}\otimes\mathcal{H}_{b}$ (with $d_{a}=d_{b}$): \begin{equation} \rho^{w}=\frac{d_{a}-w}{d_{a}(d_{a}^{2}-1)}\mathbb{I}_{a}\otimes\mathbb{I}_{b}+\frac{d_{a}w-1}{d_{a}(d_{a}^{2}-1)}F, \end{equation} where $w\in[-1,1]$ and $F=\sum_{j,k=1}^{d_{a}}|jk\rangle\langle kj|.$ The reduced states of $\rho^{w}$ are $\mathbb{I}_{s}/d_{s}$, whose purity is $P(\rho_{s})=1/d_{s}$. The results for the HSD and AHSD of $\rho^{w}$ are presented in Fig. \ref{werner}. \begin{figure}[h] \begin{centering} \includegraphics[scale=0.42]{werner} \par\end{centering} \caption{The points are the values of the ameliorated Hilbert-Schmidt quantum discord of Werner states computed numerically using Eq. (\ref{eq:AHSD}). The lines are the corresponding values of the AHSD plotted via the analytical formula: $D_{hsa}^{a}(\rho^{w})=d_{a}D_{hs}^{a}(\rho^{w})=(d_{a}w-1)^{2}/((d_{a}-1)(d_{a}+1)^{2})$. Due to the symmetry of $\rho^{w}$, here $D_{hsa}^{a}(\rho^{w})=D_{hsa}^{b}(\rho^{w}).$ The inset shows the difference between the times taken by the two methods to compute the AHSD for a fixed value of $d_{a}$. We see clearly that the optimized algorithm yields a substantial speedup over the brute-force calculation of the Bloch vectors and correlation matrix. } \label{werner} \end{figure} \section{Concluding Remarks} \label{conclusion} In this article, we addressed the problem of computing coherence vectors and correlation matrices.
We obtained formulas for these functions that admit a considerably more efficient numerical implementation than the direct use of their definitions. We provided Fortran code to calculate all the quantities regarded in this paper. As a test for our formulas and code, we computed Hilbert-Schmidt quantum discords of Werner states. It is important to observe that, although our focus here was on quantum information science, the tools provided can find application in other areas, such as the calculation of order parameters and correlation functions for the study of phase transitions in discrete quantum or classical systems. \section*{Conflict of Interests} The author declares that the funding mentioned in the Acknowledgments section does not lead to any conflict of interest. Additionally, the author declares that there is no conflict of interest regarding the publication of this manuscript. \begin{acknowledgments} This work was supported by the Brazilian funding agencies: Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico (CNPq), processes 441875/2014-9 and 303496/2014-2, Instituto Nacional de Ci\^encia e Tecnologia de Informa\c{c}\~ao Qu\^antica (INCT-IQ), process 2008/57856-6, and Coordena\c{c}\~ao de Desenvolvimento de Pessoal de N\'{i}vel Superior (CAPES), process 6531/2014-08. I gratefully acknowledge the hospitality of the Physics Institute and Laser Spectroscopy Group at the Universidad de la Rep\'{u}blica, Uruguay.\end{acknowledgments}
\section{Introduction} The time-dependent Joule heating problem is a coupled non-linear elliptic-parabolic system of the form \begin{equation}\label{JH} \dot u - \Delta u = \sigma(u)|\nabla \varphi|^2,\quad \nabla \cdot \sigma(u) \nabla \varphi = 0, \end{equation} where $u$ denotes the temperature and $\varphi$ the electric potential. It models the heat flow generated when an electric current is passed through a conductor. In applications the electric potential is typically only applied to smaller parts of the boundary, for instance through electric pads. To model such problems properly, mixed boundary conditions are needed, see, e.g.~\cite{Henneken06}. The Joule heating problem has been studied both in a theoretical context \cite{Cimatti92, Antontsev94, Yuan94, Meinlschmidt17}, focusing on the well-posedness of \eqref{JH}, and from a numerical point of view \cite{Elliott95, Akrivis05, Gao14, Li14}, focusing on convergence (with rate) of numerical solutions to \eqref{JH}. There are also several works on the stationary version of the problem, see, for instance \cite{Howison93, Holst10, Jensen13} and references therein. The main issue with the system \eqref{JH} is the low regularity of the source term $\sigma(u)|\nabla \varphi|^2$. In one and two dimensions this does not lead to a problem. However, in three dimensions this term is not in $H^{-1}$ and the problem does not fit into the classical variational framework for PDEs. In \cite{Antontsev94} this issue is resolved by rewriting the source term using the equation for $\varphi$ (see also \cite{Howison93} for the stationary case). With this formulation existence of a solution in $L_2(H^1)$ is proved. However, to derive convergence for finite element approximations additional regularity of the solution is usually required, see \cite{Elliott95, Akrivis05}. Typically, sufficient regularity in three dimensions cannot be proved, but needs to be assumed.
To the authors' knowledge, there is no numerical analysis of this problem under more realistic assumptions on the domain (Lipschitz in three spatial dimensions) and the boundary conditions (mixed). The purpose of this paper is to prove the strong convergence of finite element approximations of \eqref{JH} on Lipschitz domains in three spatial dimensions with mixed boundary conditions. A challenge is to avoid the need for a discrete maximum principle and the associated restrictive mesh conditions \cite{Holst10} because a direct energy argument only delivers $L^1$-control on the critical $|\nabla \varphi|^2$ term in \eqref{JH}. In our analysis this is achieved by introducing a variational formulation with a cut-off functional, extending \cite{Jensen13}. The analysis presented in this paper covers finite element methods of any order that are conforming in space and piecewise constant in time, satisfying a backward Euler scheme. The choice of approximation spaces only needs to ensure the stability of the $L^2$ projection in the $H^1$-norm, which holds for a large class of non-uniform meshes \cite{Bank14}. Having arrived at only mild mesh conditions, we find the Joule heating problem with mixed boundary conditions well suited for adaptive mesh refinement. Indeed, starting from the assumption of creased domains \cite{Mitrea07}, we prove uniqueness and additional regularity of the solution. This result combines the regularity for the Poisson equation on creased domains in \cite{Mitrea07} with the results for parabolic systems in \cite{Hieber08}. The additional regularity we obtain for $\varphi$, namely $\varphi \in L_{2q/(q-3)}(W^1_q)$ for some $q>3$, is in line with the sufficient condition for uniqueness established in~\cite{Antontsev94}. Importantly, we can show higher regularity, in some cases $C^\infty$, in the interior of the domain. 
To exploit the difference in regularity within the domain we equip the Joule heating problem with a goal functional to examine duality-based adaptive mesh refinement in the numerical experiments. The paper is outlined as follows: In Section 2 we formulate the problem of interest and introduce some notation. Section 3 is devoted to the analysis of semi-discrete methods and Section 4 to fully discrete methods. In Section 5 we prove additional regularity and uniqueness of the solution. In Section 6 we present some numerical examples that confirm the convergence results and investigate adaptive mesh refinements. \section{Variational formulations and weak solutions} In this section we introduce two variational formulations, one ``classical'', see \eqref{classicalweak} below, and one based on a cut-off functional, see \eqref{weak} below. We prove that these two are equivalent, that is, that they have the same set of solutions. The latter formulation is preferable when working with finite element discretizations of the problem, since we avoid using a discrete maximum principle, see Section~\ref{sec_semidiscrete} and Section~\ref{sec_discrete}. \subsection{Problem formulation and notation} Let $D_t$ denote the time derivative $\frac{\partial }{\partial t}$ and $\Omega\subseteq \mathbb{R}^3$ be a domain describing the body of a conductor. Let $u\colon\Omega \times [0,T] \rightarrow \mathbb{R}$ denote the temperature inside the conductor, $\varphi\colon \Omega\times [0,T] \rightarrow \mathbb{R}$ the electric potential, and $\sigma\colon \mathbb{R} \rightarrow \mathbb{R}_{+}$ the electric conductivity. Furthermore, we use $\Gamma^u_D$ and $\Gamma^u_N$ to denote the Dirichlet and Neumann boundary for $u$ and $\overline{\Gamma^u_D} \cup \overline{\Gamma^u_N} = \partial \Omega$. Analogously, we define $\Gamma^\varphi_D$ and $\Gamma^\varphi_N$ for $\varphi$.
With this notation, the time-dependent Joule heating problem is given by the following nonlinear elliptic-parabolic system \begin{subequations}\label{jouleheating} \begin{alignat}{2} D_t u - \Delta u &= \sigma(u) |\nabla \varphi|^2,& \quad &\text{in } \Omega \times (0,T),\label{joule1}\\ \nabla\cdot (\sigma(u)\nabla \varphi) &= 0,& &\text{in } \Omega \times (0,T),\label{joule2}\\ u &= g_u,& &\text{on } \Gamma^u_{D} \times (0,T),\label{bc1}\\ \varphi &= g_\varphi,& &\text{on } \Gamma^\varphi_{D} \times (0,T),\label{bc2}\\ n \cdot \nabla u &= 0,& &\text{on } \Gamma^u_{N} \times (0,T),\label{bc3}\\ n \cdot \nabla \varphi &= 0,& &\text{on } \Gamma^\varphi_{N} \times (0,T),\label{bc4}\\ u(\cdot,0) &= u_0,& &\text{in } \label{initial}\Omega. \end{alignat} \end{subequations} Let $W^k_p(\Omega)$ denote the classical range of Sobolev spaces and define \begin{align*} W^k_p(\Omega;\Gamma^u_D) := \{v\in W^k_p(\Omega): v|_{\Gamma^u_D}=0\}, \quad \text{for } k>1/p. \end{align*} The space $W^k_p(\Omega;\Gamma^\varphi_D)$ is defined analogously and $H^1$ is used to denote $W^1_2$. We also use $V^\ast$ for the dual space to $V$. Furthermore, we adopt the notation $L_p(0,T;V)$ for the Bochner space with norm \begin{align*} \|v\|_{L_p(0,T;V)} &= \Big(\int_0^T \|v\|_V^p \, \mathrm{dt} \Big)^{1/p}, \quad 1\leq p<\infty,\\ \|v\|_{L_\infty(0,T;V)} &= \esssup_{0\leq t \leq T} \|v\|_V, \end{align*} where $V$ is a Banach space equipped with the norm $\|\cdot\|_V$. The notation $v\in H^1(0,T;V)$ is used to denote $v,D_t v\in L_2(0,T;V)$. Finally, $C_b(\Omega)$ is the space of bounded continuous functions. \subsection{Classical variational formulation} To this end we make the following assumptions on the domain and the data. \begin{enumerate}[label=(A\arabic*)] \item $\Omega\subseteq \mathbb{R}^3$ is a bounded domain with Lipschitz boundary, $\meas(\Gamma^u_D)>0$, and $\meas(\Gamma^\varphi_D)>0$. 
\label{ass_omega} \item $g_u \in L_2(0,T;H^1(\Omega))\cap H^1(0,T;H^1(\Omega)^\ast)$ and there are points \\ $0 = t_0 < t_1 < \cdots < t_K = T$, $K \in \mathbb{N}$, such that \begin{align*} D_t g_u & \in C_b([t_i,t_{i+1});H^1(\Omega)^\ast),\\ g_\varphi & \in C_b([t_i,t_{i+1}); W^1_3(\Omega)\cap L_\infty(\Omega)), \end{align*} on each subinterval $[t_i,t_{i+1})$. \label{ass_g} \item $u_0 \in L_2(\Omega)$. \label{ass_intial} \item $\sigma \in C^1(\mathbb{R})$, Lipschitz continuous, and $0< \sigma_\circ \leq \sigma(x) \leq \sigma^\circ < \infty$, $\forall x \in \mathbb{R}$. \label{ass_sigma} \end{enumerate} A weak solution to the Joule heating problem \eqref{jouleheating} is a pair \\ $(u,\varphi)=(g_u + \tilde u, g_\varphi + \tilde \varphi)$ such that \begin{align*} (\tilde u, \tilde \varphi)\in L_2(0,T;H^1(\Omega;\Gamma^u_D)) \cap H^1(0,T;H^1(\Omega;\Gamma^u_D)^\ast) \times L_2(0,T;H^1(\Omega,\Gamma^\varphi_D)) \end{align*} and for a.e.~$t \in (0,T]$ \begin{subequations}\label{classicalweak} \begin{align} \langle D_t u, v \rangle + \langle \nabla u, \nabla v \rangle &= \langle \sigma(u)|\nabla \varphi|^2, v \rangle ,\label{classicalweaku}\\ \langle \sigma(u) \nabla \varphi, \nabla w \rangle &= 0, \label{classicalweakphi}\\ \langle u(0), z \rangle &= \langle u_0, z \rangle,\label{classicalweak0} \end{align} \end{subequations} for all $(v,w)\in W^1_\infty(\Omega; \Gamma^u_D) \times H^1(\Omega;\Gamma^\varphi_D)$ and $z \in L_2(\Omega)$, see, for instance, \cite{Cimatti92}. Note that $\langle \cdot, \cdot \rangle$ is used to denote both the inner product in $L_2$ and the duality bracket. The choice of spaces guarantees $\sigma(u)|\nabla \varphi|^2 \in L_1(0,T;L_1(\Omega))$ so that the right-hand side in \eqref{classicalweaku} is well-defined for all $v \in W^1_\infty(\Omega; \Gamma^u_D)$. 
Throughout the text we adopt the notational convention that for a function $\flat$ one understands $\tilde \flat = \flat - g_\varphi$ if $\flat$ is a Greek letter and $\tilde \flat = \flat - g_u$ if $\flat$ is a Latin letter. \begin{remark} In some works, e.g.~\cite{Roubicek13}, the notion of strong (instead of weak) solution is used when the equation is satisfied almost everywhere in time. \end{remark} The following lemma provides a maximum principle for $\varphi(x,t)$. \begin{lemma}\label{phi_reg} If $(u,\varphi)$ is a solution to \eqref{classicalweak}, then $g_\circ \leq \varphi(x,t) \leq g^\circ$ for a.e.~$(x,t)\in \bar\Omega\times[0,T]$, where \begin{align*} g^\circ := \max_{(x,t) \in \Gamma^\varphi_D \times [0,T]} g_\varphi(x,t), \quad g_\circ := \min_{(x,t) \in \Gamma^\varphi_D \times [0,T]} g_\varphi(x,t), \end{align*} \end{lemma} \begin{proof} Define $\chi = \max(0,\varphi-g^\circ) \in L_2(0,T;H^1(\Omega;\Gamma^\varphi_D))$ and choose $w=\chi(t)$ in \eqref{classicalweakphi} and integrate from $0$ to $T$. Then \begin{align*} 0 &= \int_0^T \langle \sigma(u) \nabla \varphi, \nabla \chi \rangle \,\mathrm{d} t = \int_0^T \langle \sigma(u) \nabla (\varphi-g^\circ), \nabla \chi \rangle \,\mathrm{d} t \\&= \int_0^T \int_{\supp(\chi)\cap \Omega}\sigma(u) \nabla \chi \cdot \nabla \chi \,\mathrm{d} x\,\mathrm{d} t = \int_0^T \langle \sigma(u) \nabla \chi, \nabla \chi \rangle \,\mathrm{d} t. \end{align*} Using $\sigma(u)\geq \sigma_\circ$ and the Poincar\'e-Friedrichs inequality we get $\int_0^T\|\chi\|^2\,\mathrm{d} t=0$ and we deduce $\varphi \leq g^\circ$. A similar argument using $g_\circ$ proves $\varphi \geq g_\circ$. This gives $\varphi \in L_\infty(0,T;L_\infty(\Omega))$. \end{proof} In one and two spatial dimensions the formulation \eqref{classicalweak} is suitable for proving existence of a solution, see, e.g.~\cite{Cimatti92, Elliott95}. 
However, because of the low regularity of the right-hand side in \eqref{classicalweaku} this strategy does not apply to the three-dimensional setting. To overcome this difficulty, it can be proved that, due to \eqref{joule2}, see, for instance, \cite{Antontsev94,Howison93}, \begin{align}\label{rewrite} \sigma(u) |\nabla \varphi|^2 = \nabla \cdot (\sigma(u)\varphi\nabla \varphi), \end{align} which follows formally from the product rule $\nabla \cdot (\sigma(u)\varphi\nabla \varphi) = \varphi\, \nabla \cdot (\sigma(u)\nabla \varphi) + \sigma(u)|\nabla \varphi|^2$. From Lemma~\ref{phi_reg} it then follows that $\nabla \cdot (\sigma(u)\varphi\nabla \varphi) \in L_2(0,T;H^1(\Omega,\Gamma^u_D)^\ast)$. With this right-hand side it is now possible to use Schauder's fixed point theorem to prove existence of a solution also in three dimensions. \begin{thm}\label{existence_uniqueness_classical} There exists a solution $(u,\varphi)$ to \eqref{classicalweak}. If $\nabla \varphi \in L_{\frac{2q}{q-3}}(0,T;L_q(\Omega))$ for some $q>3$, then the solution is unique. \end{thm} \begin{proof} This follows by adapting the fixed point argument in \cite[Theorem~2.2]{Antontsev94} to mixed boundary conditions. The proof uses identity \eqref{rewrite} and Schauder's fixed point theorem on the space $L_2(0,T;L_2(\Omega))$. More precisely, we consider the mapping $F:L_2(0,T;L_2(\Omega)) \rightarrow L_2(0,T;L_2(\Omega))$ where $y=F(s)$ is the solution to \begin{align*} \langle D_t y, v \rangle + \langle \nabla y, \nabla v \rangle &= \langle \nabla \cdot (\sigma(s)\psi\nabla \psi), v \rangle ,\\ \langle \sigma(s) \nabla \psi, \nabla w \rangle &= 0,\\ \langle y(0), z \rangle &= \langle u_0, z \rangle, \end{align*} for all $(v,w)\in H^1(\Omega; \Gamma^u_D) \times H^1(\Omega;\Gamma^\varphi_D)$ and $z \in L_2(\Omega)$. It is clear, via \eqref{rewrite} and the fact that $W^1_\infty(\Omega; \Gamma^u_D)$ is dense in $H^1(\Omega; \Gamma^u_D)$, that a fixed point to $F$ solves \eqref{classicalweak}. To prove that $F$ satisfies the conditions of Schauder's fixed point theorem on some ball $B_R$ we may now follow \cite[Theorem~2.2]{Antontsev94}.
The mixed boundary conditions only affect the definition of the space $V \subset H^1(\Omega)$, that is, the functions in $V$ in our case only vanish on $\Gamma_D \subset \partial \Omega$. The uniqueness follows from \cite[Theorem~4.1]{Antontsev94}, again after adapting the argument to mixed boundary conditions. \end{proof} \subsection{Variational formulation with cut-off} In this paper we are interested in proving convergence of finite element approximations. For this purpose, we propose a variational formulation based on a cut-off functional, which avoids the use of a discrete maximum principle. The cut-off functional was introduced for the stationary problem in \cite{Jensen13} and is defined as \begin{align*} \lceil f \rceil := \min\{\max\{f + g_\varphi, a \}, b\}-g_\varphi, \end{align*} for some fixed $a, b \in \mathbb{R}$ with $a \le g_\circ$ and $b \ge g^\circ$. Note that $\min$ and $\max$ are taken pointwise in space and time on $\Omega\times[0,T]$, so that $a - g_\varphi \leq \lceil f \rceil \leq b - g_\varphi$. To introduce the new weak formulation we define \begin{align*} X &:= \big(L_2(0,T;H^1(\Omega;\Gamma^u_D)) \cap H^1(0,T;H^1(\Omega;\Gamma^u_D)^\ast)\big)\times L_2(0,T;H^1(\Omega;\Gamma^\varphi_D)), \\ Y &:= H^1(\Omega;\Gamma^u_D) \times H^1(\Omega;\Gamma^\varphi_D).
\end{align*} Using these spaces, a weak solution to the system \eqref{jouleheating} is a pair $(u,\varphi)=(g_u + \tilde u, g_\varphi + \tilde \varphi)$ such that $(\tilde u, \tilde \varphi)\in X$ and for a.e.~$t\in (0,T]$ \begin{subequations}\label{weak} \begin{align} \langle D_t u, v \rangle + \langle \nabla u, \nabla v \rangle &= -\langle \sigma(u)\lceil \tilde \varphi \rceil \nabla \varphi, \nabla v \rangle + \langle \sigma(u) \nabla \varphi \cdot \nabla g_\varphi, v\rangle,\label{weaku} \\ \langle \sigma(u) \nabla \varphi, \nabla w \rangle &= 0, \label{weakphi}\\ \langle u(0), z \rangle &= \langle u_0, z \rangle, \label{weakintial} \end{align} \end{subequations} for all $(v,w)\in Y$ and $z \in L_2(\Omega)$. \begin{lemma}\label{weak_forms_eq} The set of solutions of \eqref{weak} is equal to the set of solutions of \eqref{classicalweak}. In particular, the right-hand side in \eqref{weaku} defines an element in $L_2(0,T; (H^1(\Omega; \Gamma^u_D))^\ast)$. \end{lemma} \begin{proof} The identity \begin{align*} \langle \sigma(u) |\nabla \varphi|^2, v \rangle = -\langle \sigma(u)\tilde \varphi\nabla \varphi,\nabla v \rangle + \langle \sigma(u) \nabla \varphi \cdot \nabla g_\varphi, v\rangle, \end{align*} for $v \in W^1_\infty(\Omega)$ and a.e.~$t\in [0,T]$ follows by choosing $w = (\varphi(t)-g_\varphi(t))v$ in \eqref{classicalweakphi}, see \cite[Lemma~1]{Howison93} for the stationary case. The definition of the cut-off functional and the maximum principle for $\varphi$ in Lemma~\ref{phi_reg} imply that $\lceil \tilde \varphi \rceil = \tilde \varphi$. The larger space of test functions does not affect the set of solutions since $W^1_\infty(\Omega;\Gamma^u_D)$ is dense in $H^1(\Omega;\Gamma^u_D)$.
Furthermore, the right-hand side in \eqref{weaku} satisfies the following bound \begin{align*} |-\langle \sigma(u)\lceil \tilde \varphi \rceil \nabla \varphi,& \nabla v \rangle + \langle \sigma(u) \nabla \varphi \cdot \nabla g_\varphi, v\rangle| \leq C(\sigma, g_\varphi) \|\nabla \varphi\|_{L_2(\Omega)} \|\nabla v\|_{L_2(\Omega)} \\&\quad+ \sigma^\circ \|\nabla \varphi\|_{L_2(\Omega)}\|\nabla g_\varphi\|_{L_3(\Omega)} \|v\|_{L_6(\Omega)}, \end{align*} where the Sobolev embedding in $\mathbb{R}^3$ gives $\|v\|_{L_6(\Omega)} \leq C\|v\|_{H^1(\Omega)}$. Hence \begin{align*} \int_0^T &\|\nabla \cdot (\sigma(u)\lceil \tilde \varphi \rceil \nabla \varphi) + \sigma(u) \nabla \varphi \cdot \nabla g_\varphi\|^2_{H^1(\Omega;\Gamma^u_D)^\ast} \,\mathrm{d} t \\&\leq C(\sigma,g_\varphi)\big(\|\nabla \varphi\|^2_{L_2(0,T;L_2(\Omega))} + \|\nabla \varphi\|^2_{L_2(0,T;L_2(\Omega))}\|\nabla g_\varphi\|^2_{L_\infty(0,T;L_3(\Omega))}\big), \end{align*} where $\|\nabla g_\varphi\|_{L_\infty(0,T;L_3(\Omega))}$ is bounded due to assumption \ref{ass_g}, so the right-hand side defines an element in $L_2(0,T;H^1(\Omega; \Gamma^u_D)^\ast)$. \end{proof} \section{Semidiscrete methods}\label{sec_semidiscrete} In this section we analyze spatially semidiscrete Galerkin methods. We prove existence and uniqueness of semidiscrete solutions and strong convergence to a weak solution satisfying \eqref{weak}. \subsection{Semidiscrete formulation} Let $\{V^u_m\}_{m \in \mathbb{N}}$ and $\{V^\varphi_m\}_{m \in \mathbb{N}}$ be hierarchical families of finite-dimensional subspaces, whose unions are dense in $H^1(\Omega; \Gamma^u_D)$ and $H^1(\Omega; \Gamma^\varphi_D)$, respectively, and define \begin{align*} X_m &:= \{ v \in C(0,T;V^u_m) : v|_{[t_i, t_{i+1})} \in C^1(t_i, t_{i+1};V^u_m) \; \forall \, i \} \times L_\infty(0,T;V^\varphi_m), \end{align*} where $\{t_i\}$ are the points of the decomposition in \ref{ass_g}. Typically, $V^u_m$ and $V^\varphi_m$ are finite element spaces corresponding to a family of meshes $\{\mathcal T_m\}_{m \in \mathbb{N}}$.
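For intuition (an illustration only, with made-up data), the following sketch computes the $L_2$-projection of a smooth function onto one-dimensional P1 finite element spaces on a sequence of uniform meshes and checks that the $H^1$-seminorm of the projection stays comparable to that of the function, the kind of uniform $H^1$-stability that is assumed for $V^u_m$ below.

```python
import numpy as np

def p1_matrices(n):
    """Mass and stiffness matrices for P1 elements on [0,1], n elements."""
    h = 1.0 / n
    M = np.zeros((n + 1, n + 1))
    K = np.zeros((n + 1, n + 1))
    for e in range(n):
        idx = np.array([e, e + 1])
        M[np.ix_(idx, idx)] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
        K[np.ix_(idx, idx)] += 1.0 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return M, K

def l2_project(v, n, quad=200):
    """Coefficients of the L2-projection of v onto the P1 space."""
    h = 1.0 / n
    M, K = p1_matrices(n)
    f = np.zeros(n + 1)
    for e in range(n):
        # midpoint-rule quadrature of int v * hat_i over the element
        xs = e * h + (np.arange(quad) + 0.5) * h / quad
        vals = v(xs)
        f[e] += np.sum(vals * ((e + 1) * h - xs) / h) * h / quad
        f[e + 1] += np.sum(vals * (xs - e * h) / h) * h / quad
    return np.linalg.solve(M, f), K

ratios = []
for n in (8, 16, 32, 64):
    c, K = l2_project(lambda x: np.sin(np.pi * x), n)
    h1_proj = np.sqrt(c @ K @ c)          # H1-seminorm of the projection
    ratios.append(h1_proj / (np.pi / np.sqrt(2.0)))   # |sin(pi x)|_{H1} = pi/sqrt(2)
```

On these uniform meshes the ratio remains bounded (close to one), consistent with $H^1$-stability of the $L_2$-projection uniformly in the mesh.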
For instance, one may choose Lagrangian finite elements or conforming $hp$-finite elements, see also the numerical examples in Section~\ref{sec:examples}. We make the following additional assumption on $V^u_m$: \begin{enumerate}[label=(A\arabic*)] \setcounter{enumi}{4} \item Let $V^u_m$ be such that the $L_2$-projection $P_m$ onto $V^u_m$ is stable, uniformly in $m$, in the $H^1$-norm. \label{ass_proj} \end{enumerate} In the case when $V^u_m$ is a finite element space, we refer to \cite{Bank14} and references therein, where the $H^1$-stability of the $L_2$-projection is proved for a large class of (non-uniform) meshes in three spatial dimensions. In the subsequent sections we let $C_m$ denote a generic constant that depends on the discretization $m$, for instance, through the mesh size $h_m$. A semidiscrete Galerkin solution is a pair $(u_m,\varphi_m)=(g_u + \tilde u_m, g_\varphi + \tilde \varphi_m)$ such that $(\tilde u_m, \tilde \varphi_m)\in X_m$ and for a.e.~$t \in (0,T]$ \begin{subequations}\label{semi_weak} \begin{align} \langle D_t u_m, v \rangle + \langle \nabla u_m, \nabla v \rangle &= -\langle \sigma(u_m)\lceil \tilde \varphi_m \rceil \nabla \varphi_m, \nabla v \rangle \label{semi_weak_u}\\&\quad+ \langle \sigma(u_m) \nabla \varphi_m \cdot \nabla g_\varphi, v\rangle, \notag\\ \langle \sigma(u_m) \nabla \varphi_m, \nabla w \rangle &= 0, \label{semi_weak_phi} \\ \langle u_m(0), z \rangle &= \langle u_0, z \rangle, \label{semi_weak_initial} \end{align} \end{subequations} for all $(v,w) \in V^u_m \times V^\varphi_m$ and $z \in V^u_m$. \begin{remark}\label{no_disc_max} Recall the bound of the cut-off functional: $a - g_\varphi \leq \lceil \tilde \varphi_m \rceil \leq b- g_\varphi$. This uniform boundedness of $\lceil \tilde \varphi_m \rceil$ in $m$ will allow us to consider the limit of $\langle \sigma(u_m)\lceil \tilde \varphi_m \rceil \nabla \varphi_m, \nabla v \rangle$ as $m \to \infty$ without appealing to a discrete maximum principle.
\end{remark} \begin{lemma}\label{semi_bounds} A solution to \eqref{semi_weak} fulfils the following bounds \begin{align} \|\nabla \tilde \varphi_m\|_{L_\infty(0,T;L_2(\Omega))} &\leq C(\sigma,g_\varphi),\label{semi_bound_phi}\\ \|\tilde u_m(T)\|^2_{L_2(\Omega)} + \int_0^T \|\nabla \tilde u_m\|^2_{L_2(\Omega)} \,\mathrm{d} t &\leq C(u_0,\sigma,g_u,D_t g_u, g_\varphi),\label{semi_bound_u}\\ \int_0^T\|D_t \tilde u_m(t)\|^2_{H^1(\Omega;\Gamma^u_D)^\ast} \,\mathrm{d} t &\leq C(u_0,\sigma,g_u,D_t g_u, g_\varphi).\label{semi_bound_u_t} \end{align} \end{lemma} \begin{proof} By choosing $w=\tilde \varphi_m(t)$ in \eqref{semi_weak_phi} we can prove \begin{align*} \|\nabla \tilde \varphi_m(t)\|^2_{L_2(\Omega)} \leq C(\sigma)\|\nabla g_\varphi(t)\|^2_{L_2(\Omega)}, \end{align*} and \eqref{semi_bound_phi} follows by using \ref{ass_g} and \ref{ass_sigma}. By choosing $v=\tilde u_m(t)$ in \eqref{semi_weak_u} and integrating from $0$ to $T$ we have \begin{align*} & \int_0^T \langle D_t \tilde u_m, \tilde u_m \rangle \,\mathrm{d} t +\int_0^T \| \nabla \tilde u_m \|^2_{L_2(\Omega)} \,\mathrm{d} t\\ = &\, -\int_0^T \langle D_t g_u, \tilde u_m \rangle \,\mathrm{d} t - \int_0^T \langle \nabla g_u, \nabla \tilde u_m \rangle \,\mathrm{d} t - \int_0^T \langle \sigma(u_m)\lceil\tilde \varphi_m\rceil \nabla \varphi_m, \nabla\tilde u_m \rangle \,\mathrm{d} t\\ & \, + \int_0^T \langle \sigma(u_m)\nabla \varphi_m\cdot \nabla g_\varphi, \tilde u_m \rangle \,\mathrm{d} t \\ = & \; I + II + III + IV.
\end{align*} Using the Cauchy-Schwarz, Poincar\'{e}, and Young (weighted) inequalities we get \begin{align} \label{semi_est_1} I + II \leq \frac{1}{4} \int_0^T \|\nabla \tilde u_m\|^2_{L_2(\Omega)} \,\mathrm{d} t + C\int_0^T\|D_t g_u\|^2_{(H^1(\Omega;\Gamma^u_D))^\ast} + \|\nabla g_u\|^2_{L_2(\Omega)} \,\mathrm{d} t, \end{align} and \begin{align}\label{semi_est_2} III + IV &\le \frac{1}{4}\int_0^T \|\nabla \tilde u_m\|^2_{L_2(\Omega)}\,\mathrm{d} t + C(\sigma,g_\varphi)\Big(\|g_\varphi\|^2_{L_\infty(0,T;W^1_3(\Omega))} \\&\quad + \int_0^T\|\nabla \varphi_m\|^2_{L_2(\Omega)} \,\mathrm{d} t \Big), \notag \end{align} where we used Sobolev embeddings as in the proof of Lemma~\ref{weak_forms_eq}. We can now use \eqref{semi_bound_phi} to bound the last term on the right-hand side. Finally, using \eqref{semi_weak_initial} we have \begin{align*} 2\int_0^T \langle D_t{\tilde u}_m, \tilde u_m \rangle \,\mathrm{d} t &= \int_0^T D_t \|\tilde u_m\|^2_{L_2(\Omega)} \,\mathrm{d} t = \|\tilde u_m(T)\|^2_{L_2(\Omega)} - \|\tilde u_m(0)\|^2_{L_2(\Omega)} \\& \geq \|\tilde u_m(T)\|^2_{L_2(\Omega)} - 2\|u_0\|^2_{L_2(\Omega)} - 2\|g_u(0)\|^2_{L_2(\Omega)}, \end{align*} and \eqref{semi_bound_u} follows. Observe that $D_t \tilde u_m$ belongs to $V^u_m$. With $P_m$ denoting the $L_2$-projection onto $V^u_m$ we get \begin{align*} \|D_t \tilde u_m(t)\|_{H^1(\Omega; \Gamma^u_D)^\ast} &= \sup_{v \in H^1(\Omega; \Gamma^u_D) \atop v \neq 0} \frac{\langle D_t \tilde u_m(t), v \rangle}{\|v\|_{H^1(\Omega)}} = \sup_{v \in H^1(\Omega; \Gamma^u_D) \atop v \neq 0} \frac{\langle D_t \tilde u_m(t), P_m v \rangle}{\|v\|_{H^1(\Omega)}}\\ &\leq \sup_{v \in H^1(\Omega; \Gamma^u_D)\atop P_m v \neq 0} C\frac{\langle D_t \tilde u_m(t), P_m v \rangle}{\|P_m v\|_{H^1(\Omega)}} = \sup_{v \in V^u_m \atop v \neq 0} C\frac{\langle D_t \tilde u_m(t), v \rangle}{\|v\|_{H^1(\Omega)}}, \end{align*} where $C$ is the $H^1$-norm of the $L_2$-projection, which is independent of $m$ due to \ref{ass_proj}.
Now we can use bounds similar to \eqref{semi_est_1} and \eqref{semi_est_2} to prove \eqref{semi_bound_u_t}. \end{proof} \begin{lemma}\label{semi_existence_uniqueness} There exists a unique solution $(\tilde u_m,\tilde \varphi_m)\in X_m$ to \eqref{semi_weak}. \end{lemma} \begin{proof} For each $\tilde u_m$ there is a unique solution $\tilde \varphi_m = S(\tilde u_m)$ of \eqref{semi_weak_phi}, which defines a mapping $S:L_2(0,T;V^u_m)\rightarrow L_2(0,T;V^\varphi_m)$. First, we assume that $g_\varphi(x,\cdot)$ and $D_t g_u(x,\cdot)$ are continuous in time and prove existence and uniqueness on $[0,T]$. Let $\{\lambda_i\}_{i=1}^M$ be a basis for $V^u_m$. Then $\tilde u_m = \sum_{j=1}^M \alpha_j(t) \lambda_j$ for some $\alpha(t) = (\alpha_j(t)) \in \mathbb{R}^M$. By substituting into \eqref{semi_weak_u} we arrive at the following system of ODEs \begin{align}\label{ODE_alpha} M D_t \alpha(t) + K \alpha(t) = F(\alpha(t),t) + G(t), \end{align} where $M$ and $K$ denote the mass and stiffness matrices, respectively, and \begin{align*} F_i(\alpha(t),t) &= -\langle \sigma(u_m(t))\lceil \tilde \varphi_m(t) \rceil \nabla \varphi_m(t), \nabla \lambda_i \rangle + \langle \sigma(u_m(t)) \nabla \varphi_m(t) \cdot \nabla g_\varphi(t), \lambda_i\rangle, \\ G_i(t) &= -\langle D_t g_u(t), \lambda_i \rangle - \langle \nabla g_u(t), \nabla \lambda_i \rangle, \end{align*} where $\tilde \varphi_m = S(\tilde u_m)$. The initial data is given by \eqref{semi_weak_initial} and corresponds to the equation $M \alpha(0) = b$, where $b_i = \langle u_0-g_u(0), \lambda_i \rangle$. Let $\tilde u^1_m = \sum_{j=1}^M \alpha^1_j\lambda_j$ and $\tilde u^2_m = \sum_{j=1}^M \alpha^2_j\lambda_j$, and $\tilde \varphi^1_m := S(\tilde u^1_m)$ and $\tilde \varphi^2_m := S(\tilde u^2_m)$. Note that \begin{align*} \|\nabla(\tilde u^1_m &- \tilde u^2_m)\|_{L_2(\Omega)} \leq C_m \, \|\alpha^1-\alpha^2\|_{\mathbb{R}^M}, \end{align*} due to the equivalence of norms on the finite-dimensional space $V^u_m$.
For $\tilde \varphi^1_m- \tilde \varphi^2_m$ we use Strang's first lemma \cite[Chapter III]{Braess07} and the Lipschitz continuity of $\sigma$ to get \begin{align*} \sigma_\circ \|\nabla(\varphi^1_m-\varphi^2_m)\|_{L_2(\Omega)} &\leq \|(\sigma(u^1_m)- \sigma(u^2_m))\nabla \varphi^1_m\|_{L_2(\Omega)} \\&\leq C\|\tilde u^1_m- \tilde u^2_m\|_{L_2(\Omega)}\|\nabla \varphi^1_m\|_{L_\infty(\Omega)} \leq C_m\|\alpha^1-\alpha^2\|_{\mathbb{R}^M}, \end{align*} which means that the mapping $S$ is Lipschitz continuous. Here we have used that the boundary data is identical for the two instances, that is, $u^1_m-u^2_m = \tilde u^1_m-\tilde u^2_m$ and $\varphi^1_m-\varphi^2_m=\tilde \varphi^1_m-\tilde \varphi^2_m$. Now, for $F_i$ we have, for $i=1,...,M$, \begin{align*} |F_i(\alpha^1(t),t) - F_i(\alpha^2(t),t)| &\leq |\langle \sigma(u^1_m)\lceil \tilde \varphi^1_m \rceil \nabla \varphi^1_m - \sigma(u^2_m)\lceil \tilde \varphi^2_m \rceil \nabla \varphi^2_m, \nabla \lambda_i \rangle| \\&\quad+ |\langle (\sigma(u^1_m) \nabla \varphi^1_m -\sigma(u^2_m) \nabla \varphi^2_m) \cdot \nabla g_\varphi, \lambda_i\rangle|. \end{align*} Note that $\sigma$ and the cut-off functional $\lceil \cdot \rceil$ are Lipschitz continuous and bounded. Furthermore, the image of $S(\cdot)$ is bounded owing to Lemma~\ref{semi_bounds}. Using this, together with the fact that the product of bounded Lipschitz continuous functions is Lipschitz continuous, we prove the Lipschitz continuity of $F$ in its first argument, uniformly in $t$: \begin{align*} \|F(\alpha^1(t),t) - F(\alpha^2(t),t)\|_{\mathbb{R}^M} \leq C_m\|\alpha^1(t)-\alpha^2(t)\|_{\mathbb{R}^M}, \end{align*} where $C_m$ is a generic constant that does not depend on $t$. Picard-Lindel\"{o}f's theorem gives existence and uniqueness on some maximal interval $(\beta_1,\beta_2)$.
If $(\beta_1,\beta_2)$ is a strict subset of $(0,T)$, then by \cite[Theorem~7.6]{Amann90} either \begin{align*} \lim_{t \rightarrow \beta_1^{+}} \|\alpha(t)\|_{\mathbb{R}^M} = \infty, \qquad \text{or} \qquad \lim_{t \rightarrow \beta_2^{-}} \|\alpha(t)\|_{\mathbb{R}^M} = \infty, \end{align*} which contradicts Lemma~\ref{semi_bounds}. Hence there exists a unique solution in $C^1(0,T;V^u_m)\times L_\infty(0,T;V^\varphi_m)$. Finally, we consider the case when $D_t g_u(x,\cdot)$ and $g_\varphi(x,\cdot)$ have at most finitely many discontinuities as specified in \ref{ass_g}. We may then use Picard-Lindel\"{o}f's theorem on the sub-interval $[t_{k},t_{k+1})$ with the initial data $\alpha(t_{k}) = \lim_{t\rightarrow t_{k}^{-}} \alpha(t)$. The existence and uniqueness on $[0,T]$ now follows by induction over $k$. \end{proof} \subsection{Convergence of semidiscrete solutions} The following lemma will be used several times in the convergence analysis in the subsequent text. Recall that $\tilde \flat = \flat - g_\varphi$ if $\flat$ is a Greek letter and $\tilde \flat = \flat - g_u$ if $\flat$ is a Latin letter. \begin{lemma} \label{lem_second_equation} Consider a sequence $\{ \tilde y_m \}_m$ which converges pointwise a.e.~to $\tilde y$ and a sequence $\{\tilde \psi_m \}_m$ which converges weakly in $L_2(0,T;H^1(\Omega;\Gamma^\varphi_D))$ to $\tilde \psi$, and the corresponding sequences $\{ y_m \}_m$ and $\{\psi_m \}_m$ that converge to $y$ and $\psi$, respectively. Suppose that, for all $m \in \mathbb N$, \begin{align} \label{second_equation} \int_0^T \langle \sigma(y_m) \nabla \psi_m, \nabla \tilde \psi \rangle \,\mathrm{d} t = \int_0^T \langle \sigma(y_m) \nabla \psi_m, \nabla \tilde \psi_m \rangle \,\mathrm{d} t = 0. \end{align} Then $\tilde \psi_m \to \tilde \psi$ strongly in $L_2(0,T;H^1(\Omega;\Gamma^\varphi_D))$ as $m \to \infty$.
Furthermore, subsequences of \[\sigma(y_m) \nabla \psi_m \quad \text{and} \quad \sigma(y_m)\lceil\tilde \psi_m\rceil\nabla \psi_m\] converge strongly in $L_2(0,T;L_2(\Omega; \mathbb{R}^3))$ to $\sigma(y) \nabla \psi$ and $\sigma(y)\lceil\tilde \psi\rceil\nabla \psi$, respectively. \end{lemma} \begin{proof} The dominated convergence theorem implies that $\sigma(y_{m})\nabla \tilde \psi \rightarrow \sigma(y)\nabla \tilde \psi$ strongly in $L_2(0,T;L_2(\Omega; \mathbb{R}^3)) \simeq L_2( (0,T) \times \Omega; \mathbb{R}^3)$, as $|\sigma(y_{m}) \, \partial_i \tilde \psi| \le \sigma^\circ |\partial_i \tilde \psi|$ pointwise for all $i, m$. Because in $L_2(0,T;L_2(\Omega))$ the scalar product of a bounded and weakly convergent sequence and a strongly convergent sequence converges to the scalar product of the limits, \begin{align} \label{conv_result_phi2} 0 = & \int_0^T\langle\sigma(y_{m}) \nabla \psi_{m}, \nabla \tilde \psi \rangle \,\mathrm{d} t = \int_0^T\langle \nabla \psi_{m}, \sigma(y_{m}) \nabla \tilde \psi \rangle \,\mathrm{d} t\\ \to & \int_0^T\langle \sigma(y) \nabla \psi, \nabla \tilde \psi \rangle \,\mathrm{d} t, \quad \text{as} \quad m \to \infty. \notag \end{align} In particular, $\int_0^T\langle \sigma(y) \nabla \psi, \nabla \tilde \psi \rangle \,\mathrm{d} t = 0$. To prove strong convergence of $\tilde \psi_{m}$ we write \begin{align*} 0 &\leq \int_0^T \langle \sigma(y_{m}) \nabla (\tilde \psi - \tilde \psi_{m}), \nabla (\tilde \psi - \tilde \psi_{m}) \rangle \,\mathrm{d} t \\&= \int_0^T \big(\langle \sigma(y_{m}) \nabla \tilde \psi , \nabla \tilde \psi \rangle - 2\langle \sigma(y_{m}) \nabla \tilde \psi_{m}, \nabla \tilde \psi \rangle + \langle \sigma(y_{m}) \nabla \tilde \psi_{m}, \nabla \tilde \psi_{m} \rangle\big) \,\mathrm{d} t\\ &=: I + II + III.
\end{align*} Using the strong convergence of $\sigma(y_{m})\nabla \tilde \psi$ we get \begin{align*} I \rightarrow \int_0^T \langle \sigma(y) \nabla \tilde \psi , \nabla \tilde \psi \rangle \,\mathrm{d} t = - \int_0^T \langle \sigma(y) \nabla g_\varphi , \nabla \tilde \psi \rangle \,\mathrm{d} t, \end{align*} and due to \eqref{conv_result_phi2} we have \begin{align*} II \rightarrow -2\int_0^T \langle \sigma(y) \nabla \tilde\psi , \nabla\tilde\psi \rangle \,\mathrm{d} t = 2 \int_0^T \langle \sigma(y) \nabla g_\varphi , \nabla\tilde\psi \rangle \,\mathrm{d} t. \end{align*} Now, due to \eqref{second_equation} the third term gives \begin{align*} III = -\int_0^T \langle \sigma(y_{m}) \nabla g_\varphi , \nabla \tilde\psi_m \rangle \,\mathrm{d} t \rightarrow -\int_0^T \langle \sigma(y) \nabla g_\varphi , \nabla \tilde\psi \rangle \,\mathrm{d} t, \end{align*} since, due to the dominated convergence theorem, $\sigma(y_{m}) \nabla g_\varphi \to \sigma(y) \nabla g_\varphi$ strongly in $L_2(0,T;L_2(\Omega))$ and $\nabla \tilde \psi_m$ converges weakly. Thus, we conclude that $I+II+III \rightarrow 0$ and, by applying a Poincar\'{e}-Friedrichs inequality, that $\tilde \psi_{m}$ converges strongly in $L_2(0,T;H^1(\Omega;\Gamma^\varphi_D))$ and, by passing to a subsequence, also pointwise a.e. Finally, by the dominated convergence theorem in the form of \cite[p.~270]{Royden88}, $\sigma(y_{m_k}) \nabla \psi_{m_k}$ and $\sigma(y_{m_k})\lceil\tilde \psi_{m_k}\rceil\nabla \psi_{m_k}$ converge strongly in $L_2(0,T;L_2(\Omega; \mathbb{R}^3))$, where $m_k$ denotes a subsequence. \end{proof} \begin{thm}\label{semi_conv_galerkin} A subsequence of solutions $(\tilde u_{m_k},\tilde \varphi_{m_k})\in X_{m_k}$ of \eqref{semi_weak} converges strongly in $X$ to a solution $(\tilde u, \tilde \varphi)$ of \eqref{weak}.
\end{thm} \begin{proof} Owing to Lemma~\ref{semi_bounds} and the reflexivity of $X$, there exists a subsequence $m_k$ and $(\tilde y, \tilde \psi) \in X$ such that \begin{align*} (\tilde u_{m_k}, \tilde \varphi_{m_k}) \rightharpoonup (\tilde y, \tilde \psi) \quad \text{in } X, \text{ as $k\rightarrow \infty$}. \end{align*} Because the initial conditions are $L_2$-projected onto $V_m^u$, it follows that $\tilde u_{m_k}(0) \to \tilde y(0) = \tilde u(0)$ in $L_2(\Omega)$. The compactness of the embedding (Aubin-Lions lemma) $$L_2(0,T;H^1(\Omega;\Gamma^u_D))\cap H^1(0,T;H^1(\Omega;\Gamma^u_D)^\ast) \hookrightarrow L_2(0,T;L_2(\Omega)),$$ implies that there exists a subsequence, still denoted $m_k$, such that $\tilde u_{m_k}\rightarrow \tilde y$ strongly in $L_2(0,T;L_2(\Omega))$ and pointwise a.e. Owing to Lemma \ref{lem_second_equation} we can pass to subsequences, without change of notation, so that $\varphi_{m_k}$, $\nabla \varphi_{m_k}$, $\sigma(u_{m_k}) \nabla \varphi_{m_k}$ and $\sigma(u_{m_k})\lceil\tilde \varphi_{m_k}\rceil\nabla \varphi_{m_k}$ converge strongly in~$L_2(0,T;L_2(\Omega))$. Now choose $v(t)\in L_2(0,T;V^u_{m_k})$ in \eqref{semi_weak_u}. Integrating from $0$ to $T$ gives \begin{align*} \int_0^T\langle D_t u_{m_k}, v \rangle + \langle \nabla u_{m_k}, \nabla v \rangle \,\mathrm{d} t&= \int_0^T\big(-\langle \sigma(u_{m_k})\lceil \tilde \varphi_{m_k} \rceil \nabla \varphi_{m_k}, \nabla v \rangle \\&\quad+ \langle \sigma(u_{m_k}) \nabla \varphi_{m_k} \cdot \nabla g_\varphi, v\rangle\big) \,\mathrm{d} t.
\notag \end{align*} Fixing $v$ we may now let $k \to \infty$ to get \begin{align*} \int_0^T\langle D_t y, v \rangle + \langle \nabla y, \nabla v \rangle \,\mathrm{d} t&= \int_0^T-\langle \sigma(y)\lceil \tilde \psi \rceil \nabla \psi, \nabla v \rangle + \langle \sigma(y) \nabla \psi \cdot \nabla g_\varphi, v\rangle\,\mathrm{d} t, \notag \end{align*} which holds for all $v \in L_2(0,T;H^1(\Omega;\Gamma^u_D))$, since $\cup_{k\in \mathbb{N}} L_2(0,T;V^u_{m_k})$ is dense in this space and $u_{m_k},\varphi_{m_k}$ are bounded according to Lemma~\ref{semi_bounds}, see \cite[Theorem~3,~p.~121]{Yosida95}. In the spirit of Lemma~\ref{lem_second_equation} we may also prove that $\tilde \psi$ satisfies \eqref{weakphi}. This, together with the convergence of the initial conditions, implies that the limit $(\tilde y, \tilde \psi)$ is a solution to~\eqref{weak}. To prove that $\{ \tilde u_{m_k} \}_k$ converges strongly in $L_2(0,T;H^1(\Omega;\Gamma^u_D))$, we write \begin{align*} &\int_0^T\langle \nabla(\tilde y-\tilde u_{m_k}), \nabla(\tilde y-\tilde u_{m_k}) \rangle \,\mathrm{d} t \\&\quad= \int_0^T \big(\langle \nabla\tilde y, \nabla\tilde y \rangle -2\langle \nabla\tilde y, \nabla\tilde u_{m_k} \rangle +\langle \nabla\tilde u_{m_k}, \nabla\tilde u_{m_k}\rangle\big) \,\mathrm{d} t =: I + II + III. \end{align*} Then $II \rightarrow -2\int_0^T \langle \nabla\tilde y, \nabla\tilde y \rangle \,\mathrm{d} t$ since $\tilde u_{m_k}$ converges weakly in $L_2(0,T;H^1(\Omega;\Gamma^u_D))$.
For the third term we use \eqref{semi_weak_u} to get \begin{align*} III &= \int_0^T \big(\langle D_t \tilde u_{m_k}, \tilde u_{m_k} \rangle - \langle D_t g_u, \tilde u_{m_k} \rangle - \langle \nabla g_u, \nabla \tilde u_{m_k} \rangle \\&\quad- \langle \sigma(u_{m_k})\lceil\tilde \varphi_{m_k}\rceil \nabla \varphi_{m_k}, \nabla\tilde u_{m_k} \rangle + \langle \sigma(u_{m_k})\nabla \varphi_{m_k}\cdot \nabla g_\varphi, \tilde u_{m_k} \rangle \big) \,\mathrm{d} t\\ &\to \int_0^T \big(\langle D_t \tilde y, \tilde y \rangle - \langle D_t g_u, \tilde y \rangle - \langle \nabla g_u, \nabla \tilde y \rangle \\&\quad- \langle \sigma(y)\lceil\tilde \psi\rceil \nabla \psi, \nabla \tilde y \rangle + \langle \sigma(y)\nabla \psi \cdot \nabla g_\varphi, \tilde y\rangle \big) \,\mathrm{d} t \end{align*} as $k \to \infty$, recalling that $D_t \tilde u_{m_k}$ converges weakly, $u_{m_k}$ strongly, and the statement of Lemma \ref{lem_second_equation}. Now, \eqref{weaku} with $v=\tilde y$ shows that this limit equals $\int_0^T \langle \nabla\tilde y, \nabla\tilde y \rangle \,\mathrm{d} t$, so that $I+II+III \rightarrow 0$. Finally, to show strong convergence of the time derivative, we note that \begin{align*} \| D_t \tilde y - & D_t \tilde{u}_{m_k} \|_{H^1(\Omega; \Gamma_D^u)^\ast} \\& \leq \sup_{v \in H^1(\Omega;\Gamma^u_D) \atop v \neq 0} \frac{\langle D_t \tilde y - P_{m_k}D_t \tilde y, v \rangle}{\| v\|_{H^1(\Omega)}} + \sup_{v \in H^1(\Omega;\Gamma^u_D) \atop v \neq 0} \frac{\langle P_{m_k}D_t \tilde y - D_t \tilde{u}_{m_k}, v \rangle}{\| v\|_{H^1(\Omega)}} \\&=: a_{m_k} + b_{m_k}, \end{align*} where $P_{m_k}$ is the $L_2$-projection onto $V^u_{m_k}$. It follows, due to the density of $L_2$ in~$(H^1)^*$, that $\int_0^T a^2_{m_k} \,\mathrm{d} t \to 0$ as $k \to \infty$.
Now, we use the self-adjointness of the $L_2$-projection, \eqref{weaku}, and \eqref{semi_weak_u} to get $\int_0^Tb_{m_k}^2\,\mathrm{d} t\to 0$ as $k \to \infty$, since $\nabla u_{m_k}$, $\sigma(u_{m_k})\lceil \tilde \varphi_{m_k} \rceil \nabla \varphi_{m_k}$, and $\sigma(u_{m_k})\nabla \varphi_{m_k}\cdot \nabla g_\varphi$ converge strongly in $L_2(0,T;L_2(\Omega))$. We conclude that $\| D_t \tilde y - D_t \tilde{u}_{m_k} \|_{L_2(0,T;H^1(\Omega; \Gamma_D^u)^\ast)} \to 0.$ \end{proof} \begin{corollary}\label{cor_semi_conv} If the solution $(u,\varphi)$ to \eqref{weak} is unique, then the full sequence of Galerkin solutions $(u_m,\varphi_m)\in X_m$ converges. \end{corollary} \begin{proof} Due to Lemma~\ref{semi_bounds} the sequence is bounded in $X$. From the proof of Theorem~\ref{semi_conv_galerkin} we deduce that any accumulation point of the sequence is a solution to \eqref{weak} and that an accumulation point exists. If the solution to \eqref{weak} is unique there can only be one accumulation point and, hence, the full sequence $\{(u_m,\varphi_m)\}_m$ must converge. \end{proof} \section{Fully discrete methods}\label{sec_discrete} In this section we analyze fully discrete methods based on a backward Euler scheme in time and hierarchical families of finite-dimensional subspaces, as introduced in Section~\ref{sec_semidiscrete}, in space. We prove existence and uniqueness of fully discrete solutions and strong convergence to a weak solution satisfying \eqref{weak}. \subsection{Fully discrete formulation} Let $\{J_l\}_{l\in \mathbb{N} }$ be a family of nested partitions of the time interval $J=[0,T]$, which are subordinate to the decomposition in \ref{ass_g}. For each partition $0 = t_0 < t_1 < ... < t_N = T$ we denote the subintervals by $I_n:=(t_{n-1},t_n]$ and write $f^n:=f(t_n)$. We consider a uniform time discretization in the analysis, that is, we assume $t_n-t_{n-1}=\tau_l$ with $\tau_l=2^{-l}T$.
It simplifies some of the analysis, but it is also a requirement for the compactness argument in \cite{Walkington10}. Fix $m$ and let $V^u_m$ and $V^\varphi_m$ be as in Section~\ref{sec_semidiscrete} and define the discrete space \begin{align*} X_{m,l} = &\{v(x,t): \forall n \ \exists w \in V^u_m: v(t,\cdot) = w, \ t\in I_n \} \\&\times \{v(x,t): \forall n \ \exists w \in V^\varphi_m: v(t,\cdot) = w, \ t\in I_n\}. \end{align*} This means that functions in $X_{m,l}$ are piecewise constant in time and on each interval equal to a function from $V^u_m \times V^\varphi_m$. Note that $X_{m,l}\not\subseteq X_m$ since $X_{m,l}$ discontinuous in time. However, we have $X_{m,l} \subseteq L_2(0,T; V^u_m) \times L_2(0,T; V^\varphi_m)$. We use the backward Euler scheme to define a fully discrete solution. Find a pair $(u_{m,l},\varphi_{m,l})=(g_u + \tilde u_{m,l}, g_\varphi + \tilde \varphi_{m,l})$ such that $(\tilde u_{m,l}, \tilde \varphi_{m,l}) \in X_{m, l}$ and for $n=1,...,N$, \begin{subequations}\label{full_weak} \begin{align} \left\langle\frac{ u^n_{m,l}-u^{n-1}_{m,l}}{\tau_l}, v \right\rangle + \langle \nabla u^n_{m,l}, \nabla v \rangle &= -\langle \sigma(u^n_{m,l})\lceil \tilde \varphi^n_{m,l} \rceil \nabla \varphi^n_{m,l}, \nabla v \rangle \label{fully_weak_u}\\&\quad + \langle \sigma(u^n_{m,l}) \nabla \varphi^n_{m,l} \cdot \nabla g^n_\varphi, v\rangle, \notag\\ \langle \sigma(u^n_{m,l}) \nabla \varphi^n_{m,l}, \nabla w \rangle &= 0, \label{fully_weak_phi}\\ \langle u^0_{m,l}, z \rangle &= \langle u_0, z \rangle, \label{fully_weak_initial} \end{align} \end{subequations} for all $(v,w)\in V^u_m\times V^\varphi_m$ and $z \in V^u_m$, where $u^n_{m,l} = u_{m,l}(t_n)$ and $\varphi^n_{m,l} = \varphi_{m,l}(t_n)$. 
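For illustration only, the following sketch realizes the structure of the scheme \eqref{full_weak} in a one-dimensional finite-difference setting: at each time level the potential is solved from the current temperature iterate and a linear backward Euler step is taken for the temperature, iterated as a fixed point in the spirit of the existence proof below. The strong-form Joule source $\sigma(u)|\varphi'|^2$ is used in place of the cut-off weak form, and the conductivity, boundary data, and grid are all made up.

```python
import numpy as np

def sigma(u):
    # made-up conductivity: bounded, Lipschitz, with sigma >= 1 > 0
    return 1.0 + 1.0 / (1.0 + u ** 2)

def solve_phi(u, g0, g1):
    """Potential equation (sigma(u) phi')' = 0 with phi(0)=g0, phi(1)=g1."""
    s = sigma(0.5 * (u[:-1] + u[1:]))        # conductivity at cell midpoints
    n = u.size
    A = np.zeros((n - 2, n - 2))
    b = np.zeros(n - 2)
    for i in range(n - 2):
        A[i, i] = s[i] + s[i + 1]
        if i > 0:
            A[i, i - 1] = -s[i]
        if i < n - 3:
            A[i, i + 1] = -s[i + 1]
    b[0] += s[0] * g0
    b[-1] += s[-1] * g1
    return np.concatenate(([g0], np.linalg.solve(A, b), [g1]))

def backward_euler_step(u_old, tau, h, g0, g1, n_fp=25):
    """One backward Euler step, fixed-point iterating the (phi, u) coupling."""
    u = u_old.copy()
    n = u.size
    A = (1.0 + 2.0 * tau / h ** 2) * np.eye(n - 2)   # I + tau * (-Laplacian)
    for i in range(n - 3):
        A[i, i + 1] = A[i + 1, i] = -tau / h ** 2
    for _ in range(n_fp):
        phi = solve_phi(u, g0, g1)
        joule = sigma(u[1:-1]) * ((phi[2:] - phi[:-2]) / (2.0 * h)) ** 2
        u_new = np.zeros_like(u)                      # homogeneous Dirichlet for u
        u_new[1:-1] = np.linalg.solve(A, u_old[1:-1] + tau * joule)
        if np.max(np.abs(u_new - u)) < 1e-13:
            return u_new
        u = u_new
    return u

N, steps = 40, 20
h, tau = 1.0 / N, 0.1 / 20
u = np.zeros(N + 1)                                   # initial temperature u_0 = 0
for _ in range(steps):
    u = backward_euler_step(u, tau, h, g0=0.0, g1=1.0)
```

Since the linear systems are M-matrices and the Joule source is nonnegative, the temperature iterates stay nonnegative; for small $\tau_l$ the fixed-point map is a contraction, matching the uniqueness statement proved below.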
\begin{lemma}\label{fully_bounds} A solution $(u_{m,l}, \varphi_{m,l})$ to \eqref{full_weak} fulfils the following bounds \begin{align} \|\nabla \tilde \varphi^n_{m,l} \|^2_{L_2(\Omega)} &\leq C(\sigma, g_\varphi),\label{fully_bounds_1}\\ \|\tilde u^n_{m,l}\|^2_{L_2(\Omega)} + \int_0^{t_n} \|\nabla \tilde u_{m,l}\|^2_{L_2(\Omega)} \,\mathrm{d} t &\leq C(u_0,\sigma,g_u,D_t g_u, g_\varphi),\label{fully_bounds_2}\\ \sum_{n=1}^N \|\tilde u^n_{m,l}-\tilde u^{n-1}_{m,l}\|^2_{L_2(\Omega)} &\leq C(u_0,\sigma,g_u,D_t g_u,g_\varphi),\label{fully_bounds_3} \end{align} for $n=0,...,N$. \end{lemma} \begin{proof} Choosing $w=\tilde \varphi^n_{m,l}$ in \eqref{fully_weak_phi} we have $\|\nabla \tilde \varphi^n_{m,l}\|^2_{L_2(\Omega)} \leq C(\sigma)\|\nabla g^n_\varphi\|^2_{L_2(\Omega)}$. To prove \eqref{fully_bounds_2}, we note that \begin{align*} \langle \tilde u^n_{m,l}- \tilde u^{n-1}_{m,l}, \tilde u^n_{m,l} \rangle = \frac{1}{2}\|\tilde u^n_{m,l}\|^2_{L_2(\Omega)}-\frac{1}{2}\|\tilde u^{n-1}_{m,l}\|^2_{L_2(\Omega)} + \frac{1}{2}\|\tilde u^n_{m,l}-\tilde u^{n-1}_{m,l}\|^2_{L_2(\Omega)}, \end{align*} and by choosing $v = \tau_l\tilde u^n_{m,l}$ in \eqref{fully_weak_u} and summing from $n=1$ to $N$ \begin{align}\label{bounds_u_fully} &\frac{1}{2}\sum_{n=1}^N\big(\|\tilde u^n_{m,l}\|^2_{L_2(\Omega)}-\|\tilde u^{n-1}_{m,l}\|^2_{L_2(\Omega)} + \|\tilde u^n_{m,l}-\tilde u^{n-1}_{m,l}\|^2_{L_2(\Omega)}\big) \\&\quad+ \frac{1}{2}\int_0^T\|\nabla \tilde u_{m,l}\|^2_{L_2(\Omega)} \,\mathrm{d} t \notag \\&\quad=- \sum_{n=1}^N\langle g^n_u-g^{n-1}_u, \tilde u^n_{m,l} \rangle - \int_0^T \langle \nabla g_u, \nabla \tilde u_{m,l} \rangle \,\mathrm{d} t \notag\\&\qquad - \int_0^T \langle \sigma(u_{m,l})\lceil\tilde \varphi_{m,l}\rceil \nabla \varphi_{m,l}, \nabla\tilde u_{m,l} \rangle \,\mathrm{d} t \notag \\&\qquad+ \int_0^T \langle \sigma(u_{m,l})\nabla \varphi_{m,l}\cdot \nabla g_\varphi, \tilde u_{m,l} \rangle \,\mathrm{d} t =: I + II + III + IV.
\notag \end{align} For the first term we get \begin{align*} \sum_{n=1}^N \int_{t_{n-1}}^{t_n} \!\! \langle D_t g_u, \tilde u ^n_{m,l} \rangle \,\mathrm{d} t &\leq C \sum_{n=1}^N \int_{t_{n-1}}^{t_n} \!\! \|D_tg_u\|^2_{H^1(\Omega;\Gamma^u_D)^\ast} \,\mathrm{d} t+ \frac{1}{8}\int_0^T \!\! \|\nabla \tilde u_{m,l}\|^2_{L_2(\Omega)} \,\mathrm{d} t. \end{align*} The remaining terms $II$--$IV$ can be estimated as in the proof of Lemma~\ref{semi_bounds}. Using the telescoping effect of the first two terms in the sum in \eqref{bounds_u_fully} completes the proof. \end{proof} \begin{lemma}\label{fully_existence_uniqueness} There exists a solution $(\tilde u_{m,l}, \tilde \varphi_{m,l})\in X_{m,l}$ to \eqref{full_weak}. Furthermore, for any fixed $m$, there is an $L \in \mathbb{N}$, such that the solution $(\tilde u_{m,l}, \tilde \varphi_{m,l})$ is unique for any $l>L$. \end{lemma} \begin{proof} To prove this we use the ODE setting introduced in the proof of Lemma~\ref{semi_existence_uniqueness}. Let $\tilde u^n_{m,l} = \sum_{j=1}^M \alpha^n_{l,j}\lambda_j$ for $n=1,...,N$. Then \eqref{full_weak} corresponds to finding $\alpha^n_l \in \mathbb{R}^M$ such that \begin{align}\label{ODE_alpha_BE} \frac{M\alpha^n_l - M\alpha^{n-1}_l}{\tau_l} + K\alpha^n_l = F(\alpha^n_l,t_n) + G(t_n), \end{align} for $n=1,...,N$, with $M\alpha^0_l = b$, where $b_i = \langle u_0-g_u(0), \lambda_i \rangle$, $F$ as in \eqref{ODE_alpha}, and $G$ slightly modified as \begin{align*} G_i(t_n) := -\left\langle \frac{g^n_u-g^{n-1}_u}{\tau_l}, \lambda_i \right\rangle - \langle \nabla g^n_u, \nabla \lambda_i \rangle. \end{align*} Note that \eqref{ODE_alpha_BE} is the backward Euler discretization of the ODE \eqref{ODE_alpha}.
To apply Brouwer's fixed point theorem we define the mapping $f: \mathbb{R}^M \rightarrow \mathbb{R}^M$ such that $\beta = f(\gamma)$ is the solution to the system \begin{align*} \frac{M\beta - M\alpha^{n-1}_l}{\tau_l} + K\beta = F(\gamma,t_n) + G(t_n), \end{align*} which is equivalent to \begin{align}\label{T_mapping} \beta = (M + \tau_lK)^{-1}(M\alpha^{n-1}_l + \tau_lF(\gamma,t_n) + \tau_lG(t_n)). \end{align} Let $\tilde y$ be the function corresponding to the vector $\gamma$, that is, $\tilde y^n = \sum_{j=1}^M \gamma^n_{j}\lambda_j$. From the definitions of $F_i$ and $G_i$ in the proof of Lemma~\ref{semi_existence_uniqueness}, and with $\tilde \psi=S(\tilde y)$, it follows that \begin{align*} |F_i(\gamma,t_n)|&\leq C(\sigma,g_\varphi)\|\nabla \psi\|_{L_2(\Omega)}\|\nabla \lambda_i\|_{L_2(\Omega)} \\&\quad+ \sigma^\circ \|\nabla \psi\|_{L_2(\Omega)}\|\nabla g_\varphi(t_n)\|_{L_3(\Omega)}\|\lambda_i\|_{L_6(\Omega)} \leq C_m(\sigma,g_\varphi),\\ |G_i(t_n)|&\leq C_m(D_tg_u,g_u). \end{align*} Here the boundedness of $\|\nabla \psi\|_{L_2(\Omega)}$ follows from Lemma~\ref{fully_bounds} with $\tilde u^n_{m,l}= \tilde y$. Hence, letting $B_R\subset \mathbb{R}^M$ denote the ball with radius $R>0$, it is clear that $f\colon B_R\rightarrow B_R$ if $R$ is sufficiently large. Now define $\beta_1=f(\gamma_1), \beta_2=f(\gamma_2) \in \mathbb{R}^M$. From \eqref{T_mapping} we have \begin{align*} \beta_1-\beta_2 = \tau_l(M + \tau_lK)^{-1}(F(\gamma_1,t_n) - F(\gamma_2,t_n)), \end{align*} and using the Lipschitz continuity of $F(\cdot,t)$, see the proof of Lemma~\ref{semi_existence_uniqueness}, we get \begin{align*} \|\beta_1-\beta_2\|_{\mathbb{R}^M} \leq C_L\tau_l\|(M + \tau_lK)^{-1}\|_{F}\|\gamma_1 - \gamma_2\|_{\mathbb{R}^M}, \end{align*} where $\|\cdot\|_F$ denotes the matrix Frobenius norm, which is finite since $M+\tau_l K$ is invertible. This proves that $f$ is continuous and the existence of a solution follows from Brouwer's fixed point theorem.
Furthermore, it is clear that if $\tau_l$ is sufficiently small, or equivalently $l$ sufficiently large, then $f$ is a contraction on $\mathbb{R}^M$ and Banach's fixed point theorem gives uniqueness. \end{proof} \subsection{Convergence of fully discrete solutions} To prove convergence of the fully discrete method we introduce the continuous, piecewise affine interpolant $\tilde U_{m,l}(t)$ of $\tilde u_{m,l}$, \begin{align}\label{U_def} \tilde U_{m,l}(t) := \tilde u^{n-1}_{m,l} \frac{t_n-t}{t_n-t_{n-1}} + \tilde u^n_{m,l} \frac{t-t_{n-1}}{t_n-t_{n-1}}, \quad t \in I_n. \end{align} Note that \begin{align}\label{DtU_equality} D_t \tilde U_{m,l}(t) = \frac{\tilde u^n_{m,l} - \tilde u^{n-1}_{m,l}}{\tau_l}, \quad t\in I_n. \end{align} Using \eqref{fully_weak_u}, we have for $t\in I_n$ and $v\in V^u_m$ \begin{align*} \langle D_t \tilde U_{m,l}(t),v\rangle &\leq \big(\|u^n_{m,l}\|_{H^1(\Omega)} + C(\sigma,g_\varphi)\|\nabla \varphi^n_{m,l}\|_{L_2(\Omega)} \\&\quad+ \sigma^\circ\|\nabla \varphi^n_{m,l}\|_{L_2(\Omega)} \|\nabla g^n_\varphi\|_{L_3(\Omega)} + \|\partial_t g^n_u\|_{H^1(\Omega;\Gamma^u_D)^\ast}\big)\|v\|_{H^1(\Omega)}, \end{align*} where $\partial_t g^n_u = (g_u^n-g_u^{n-1})/\tau_l$. Note that $\partial_t g^n_u = \tau_l^{-1}\int_{I_n} D_t g_u \,\mathrm{d} t$. This, together with Lemma~\ref{fully_bounds} and \ref{ass_g}, gives \begin{align*} \|D_t \tilde U_{m,l}\|_{L_2(0,T;(V^u_m)^\ast)} \leq C(u_0,\sigma,g_u,D_t g_u, g_\varphi). \end{align*} In addition, using \ref{ass_proj} as in the proof of Lemma~\ref{semi_bounds}, we get \begin{align}\label{fully_Dt_bound} \|D_t \tilde U_{m,l}\|_{L_2(0,T;H^1(\Omega; \Gamma^u_D)^\ast)} \leq C(u_0,\sigma,g_u,D_t g_u, g_\varphi).
\end{align} In the analysis we also use the following reformulation of \eqref{fully_weak_u} \begin{align}\label{fully_u_walkington} \langle u^n_{m,l} - u^{n-1}_{m,l}, v \rangle = \langle F_{m,l}, v \rangle, \end{align} with \begin{align*} \langle F_{m,l}, v \rangle &:= - \tau_{l}\langle \nabla u^n_{m,l}, \nabla v \rangle - \tau_{l}\langle \sigma(u^n_{m,l})\lceil \tilde \varphi^n_{m,l} \rceil \nabla \varphi^n_{m,l}, \nabla v \rangle \\&\quad+ \tau_{l}\langle \sigma(u^n_{m,l}) \nabla \varphi^n_{m,l} \cdot \nabla g^n_\varphi, v\rangle, \quad \forall v \in V^u_m. \end{align*} \begin{thm}\label{fully_conv_galerkin} A subsequence of solutions $(\tilde u_{m_k, l_k}, \tilde \varphi_{m_k,l_k}) \in X_{m_k,l_k}$ of \eqref{full_weak} converges strongly in $L_2(0,T; H^1(\Omega;\Gamma^u_D))\times L_2(0,T; H^1(\Omega;\Gamma^\varphi_D))$ to a solution $(\tilde u, \tilde \varphi)$ of \eqref{weak}. \end{thm} \begin{proof} From Lemma~\ref{fully_bounds}, \eqref{fully_Dt_bound}, and the reflexivity of the spaces $$L_2(0,T; H^1(\Omega;\Gamma^u_D)),\quad L_2(0,T; H^1(\Omega;\Gamma^\varphi_D)),\quad L_2(0,T;H^1(\Omega; \Gamma^u_D)^\ast)$$ there exists a subsequence such that \begin{align*} (\tilde u_{m_k,l_k}, \tilde \varphi_{m_k,l_k}) &\rightharpoonup (\tilde y, \tilde \psi) &&\quad \text{in } L_2(0,T; H^1(\Omega;\Gamma^u_D))\times L_2(0,T; H^1(\Omega;\Gamma^\varphi_D)), \\ D_t \tilde U_{m_k,l_k} &\rightharpoonup D_t\tilde U &&\quad \text{in } L_2(0,T;H^1(\Omega; \Gamma^u_D)^\ast), \end{align*} for some $$(\tilde y, \tilde \psi) \in L_2(0,T; H^1(\Omega;\Gamma^u_D))\times L_2(0,T; H^1(\Omega;\Gamma^\varphi_D)), \ D_t\tilde U \in L_2(0,T;H^1(\Omega; \Gamma^u_D)^\ast).$$ The convergence of the initial conditions follows as in the semi-discrete case and we conclude $u^0_{m_k,l_k} \rightarrow y(0) = u(0)$. Next, we prove that $D_t \tilde y = D_t \tilde U$. First, we note that $\tilde U_{m_k,l_k}-\tilde u_{m_k,l_k} \rightharpoonup 0$ weakly in $L_2(0,T;L_2(\Omega))$.
To see this, pick $\chi_{[\bar \tau a,\bar \tau b]} \otimes v : (t,x) \mapsto \chi_{[\bar \tau a,\bar \tau b]}(t) \, v(x)$, for $v\in L_2(\Omega)$, $a<b$, and some $\bar\tau>0$. Now, due to \cite[Theorem 8.9]{Roubicek13}, for $\tau_{l_k}\leq \bar\tau$ \begin{align*} &\int_0^T \langle \tilde U_{m_k,l_k}-\tilde u_{m_k,l_k}, \chi_{[\bar \tau a,\bar \tau b]} \otimes v \rangle \,\mathrm{d} t = \frac{\tau_{l_k}}{2}\langle \tilde u_{m_k,l_k}^{\bar{\tau}b/\tau_{l_k}} - \tilde u_{m_k,l_k}^{\bar{\tau}a/\tau_{l_k}},v\rangle \leq C\tau_{l_k} \rightarrow 0, \end{align*} where we used \eqref{U_def} and the bounds in Lemma~\ref{fully_bounds}. Since $\tilde{U}_{m_k,l_k}-\tilde u_{m_k,l_k}$ is bounded in $L_2(0,T;L_2(\Omega))$ and functions of the form $\chi_{[\bar \tau a,\bar \tau b]} \otimes v$ are dense in $L_2(0,T;L_2(\Omega))$, this implies $\tilde U_{m_k,l_k}-\tilde u_{m_k,l_k} \rightharpoonup 0$ in $L_2(0,T;L_2(\Omega))$, see \cite[Theorem~3~p.~121]{Yosida95}. Thus we get, for $v\in C^1(0,T;H^1(\Omega;\Gamma^u_D))$ with $v(0)=v(T)=0$, \begin{align*} \int_0^T \langle D_t \tilde U, v \rangle \,\mathrm{d} t \leftarrow \int_0^T \langle D_t \tilde U_{m_k,l_k}, v \rangle \,\mathrm{d} t = -\int_0^T \langle \tilde U_{m_k,l_k}, D_t v \rangle \,\mathrm{d} t \rightarrow -\int_0^T \langle \tilde y, D_t v \rangle \,\mathrm{d} t, \end{align*} and we conclude $D_t \tilde y = D_t\tilde U$, due to \eqref{fully_Dt_bound} and since the set of such $v$ is dense in $L_2(0,T;H^1(\Omega;\Gamma^u_D))$, see \cite[Theorem~3~p.~121]{Yosida95}. This implies that $(\tilde y, \tilde \psi) \in X$. In \cite[Theorem 3.1]{Walkington10} it is proved that if $\{u_{m_k,l_k}\}_k$ and $\{F_{m_k,l_k}\}_k$ in \eqref{fully_u_walkington} are bounded in $L_2(0,T;H^1(\Omega;\Gamma^u_D))$ and $L_2(0,T; (V^u_{m_k})^\ast)$, respectively, uniformly in $k$, then $\{u_{m_k,l_k}\}$ is precompact in $L_2(0,T; L_2(\Omega))$.
Here, the boundedness of $\{u_{m_k,l_k}\}$ and $\{F_{m_k,l_k}\}$ follows from Lemma~\ref{fully_bounds}, \ref{ass_g}, and the bounds on $\sigma$ and $\lceil \cdot \rceil$. Hence, there exists a subsequence, still denoted $(m_k,l_k)$, such that $\tilde u_{m_k,l_k} \rightarrow \tilde y$ strongly in $L_2(0,T;L_2(\Omega))$ and pointwise a.e. Owing to Lemma~\ref{lem_second_equation} we have for some subsequence, still denoted $(m_k,l_k)$, that $\varphi_{m_k,l_k}, \nabla \varphi_{m_k,l_k}, \sigma(u_{m_k,l_k})\nabla \varphi_{m_k,l_k}$, and $\sigma(u_{m_k,l_k})\lceil \tilde \varphi_{m_k,l_k} \rceil \nabla \varphi_{m_k,l_k}$ converge strongly in $L_2(0,T;L_2(\Omega))$. Using \eqref{fully_weak_u} we get for $v \in L_2(0,T;V^u_{m_k})$ \begin{align*} &\int_0^T \langle D_t \tilde U_{m_k,l_k} + \partial_t g_u, v \rangle + \langle \nabla u_{m_k,l_k}, \nabla v \rangle \,\mathrm{d} t \\&\quad= \int_0^T \big(- \langle \sigma(u_{m_k,l_k})\lceil \tilde \varphi_{m_k,l_k} \rceil \nabla \varphi_{m_k,l_k}, \nabla v \rangle + \langle \sigma(u_{m_k,l_k})\nabla \varphi_{m_k,l_k}\cdot \nabla g_\varphi, v \rangle\big) \,\mathrm{d} t, \end{align*} where $\partial_t g^n_u = (g^n_u-g^{n-1}_u)/\tau_{l_k}$. Keeping $v$ fixed we get as $k \to \infty$ \begin{align*} \int_0^T \langle D_t \tilde y + D_t g_u, v \rangle + \langle \nabla y, \nabla v \rangle \,\mathrm{d} t &= \int_0^T \big(- \langle \sigma(y)\lceil \tilde \psi \rceil \nabla \psi, \nabla v \rangle \\&\quad+ \langle \sigma(y)\nabla \psi\cdot \nabla g_\varphi, v \rangle\big) \,\mathrm{d} t, \end{align*} with $y = \tilde y + g_u$ and $\psi = \tilde \psi + g_\varphi$, where we used the weak convergence of $\tilde U_{m_k,l_k}$ and $\nabla u_{m_k,l_k}$, the strong convergence of $\sigma(u_{m_k,l_k})\nabla \varphi_{m_k,l_k}$ and $\sigma(u_{m_k,l_k})\lceil \tilde \varphi_{m_k,l_k} \rceil \nabla \varphi_{m_k,l_k}$, and $\partial_t g_u \to D_t g_u$.
Now, due to the density of $\{v(x,t): v|_{I_n} \in V^u_m\}$ and the boundedness of $u_{m_k,l_k}, \varphi_{m_k,l_k}$, this holds for all $v \in L_2(0,T;H^1(\Omega;\Gamma^u_D))$ \cite[Theorem~3~p.~121]{Yosida95}. Furthermore, in the spirit of Lemma~\ref{lem_second_equation} we may prove that $\tilde \psi$ solves \eqref{weakphi}. Hence, $(\tilde y, \tilde \psi)$ is a solution to \eqref{weak}. To prove that $\tilde u_{m_k,l_k} \rightarrow \tilde y$ strongly in $L_2(0,T; H^1(\Omega;\Gamma^u_D))$ we may mimic the argument in the proof of Theorem~\ref{semi_conv_galerkin} (recall \eqref{DtU_equality}). \end{proof} \begin{corollary}\label{cor_fully_discrete} If the solution $(u,\varphi)$ to \eqref{weak} is unique, then the whole sequence of fully discrete approximations $(\tilde u_{m,l},\tilde \varphi_{m,l}) \in X_{m,l}$ converges. \end{corollary} \begin{proof} If the solution to \eqref{weak} is unique there can only be one accumulation point, cf.~Corollary~\ref{cor_semi_conv}. \end{proof} \begin{remark} There are many other time discretizations available, see, e.g., \cite[Chapter~16]{Thomee2006} for discretizations of a nonlinear parabolic problem. For instance, one could use linearized schemes to avoid solving a nonlinear problem in each time step. In this case the existence and uniqueness result follows easily. Convergence may be deduced by comparing the linearized solution to the backward Euler solution. However, to avoid overloading the paper, we shall not analyze this here. \end{remark} \section{Regularity and uniqueness}\label{sec_regularity} In this section we prove additional regularity and uniqueness of a solution to the weak problem \eqref{weak}. For this purpose we use Theorem~\ref{thm_main_reg} below, which is based on \cite{Hieber08, Meinlschmidt17}, see, in particular, \cite[Theorem 3.1]{Hieber08}. The theory in \cite{Mitrea07} gives a setting where assumption \ref{cond_OP} below is satisfied.
Our aim is to combine these results to obtain additional regularity for the Joule heating problem with mixed boundary conditions on creased domains. For similar settings, see \cite[Section 3]{Meinlschmidt17}, where regularity for the Joule heating problem with pure Robin boundary conditions for the temperature and mixed boundary conditions for the potential is studied. In addition, we also prove higher regularity of the solution in the interior of the domain. We emphasize that the differences in regularity within the domain make the problem well suited for $h$- and $hp$-adaptive finite elements. In \cite{Hieber08} the following type of system is studied \begin{subequations}\label{eq_hieber} \begin{alignat}{2} D_t u - \Delta u &= R(u),& \quad & \text{in }\Omega \times (0,T),\\ u &= g_u,& & \text{on }\Gamma^u_D \times (0,T) \\ n \cdot \nabla u &= 0,& & \text{on }\Gamma^u_N \times (0,T),\\ u(\cdot, 0) &= u_0, & & \text{in }\Omega. \end{alignat} \end{subequations} If we define $R(u) = \sigma(u)|\nabla S(u)|^2$ such that $\varphi = S(u)$ solves \begin{subequations}\label{S_hieber} \begin{alignat}{2} -\nabla \cdot (\sigma(u)\nabla\varphi) &= 0, & \quad & \text{in }\Omega \times (0,T),\label{S_hieber_phi}\\ \varphi &= g_\varphi,& & \text{on }\Gamma^\varphi_D \times (0,T) \\ n \cdot \nabla \varphi &= 0,& & \text{on }\Gamma^\varphi_N \times (0,T), \end{alignat} \end{subequations} then \eqref{eq_hieber} is equivalent to \eqref{jouleheating}. For $p>\frac{3}{2}$ and a fixed $r > \frac{4p}{2p-3}$ we consider the following assumptions. \begin{enumerate}[label=(B\arabic*)] \item $\Omega$ is a bounded domain with Lipschitz boundary (in the sense of Gr{\"o}ger, see \cite{Hieber08}), $\meas(\Gamma^u_D)> 0$, and $\meas(\Gamma^\varphi_D)>0$.\label{ass_omega_reg} \item $g_u \in C([0,T]; W^1_{2p}(\Omega))\cap W^1_r(0,T;L_p(\Omega))$, $\Delta g_u(t) = 0$, $t\in(0,T)$, and $g_\varphi \in L_r(0,T; W^1_{2p}(\Omega))$.
\label{ass_g_reg} \item $u_0-g_u(0) \in (L_p(\Omega),D(\Delta_p))_{1-\frac{1}{r},r}$. \label{ass_intial_reg} \item $\sigma \in C^1(\mathbb{R})$ is Lipschitz continuous and $0< \sigma_\circ \leq \sigma(x) \leq \sigma^\circ < \infty$, $\forall x \in \mathbb{R}$. \label{ass_sigma_reg} \item $\Delta$ is a topological isomorphism from $W^1_{2p}(\Omega;\Gamma^u_D)$ onto $W^{-1}_{2p}(\Omega;\Gamma^u_D)$ and from $W^1_{2p}(\Omega;\Gamma^\varphi_D)$ onto $W^{-1}_{2p}(\Omega;\Gamma^\varphi_D)$. \label{cond_OP} \end{enumerate} Here $D(A)$ denotes the domain of the operator $A$, that is, \begin{align*} D(A) = \{v \in H^1(\Omega;\Gamma_D): \exists f \in L_2(\Omega), \ -a(v, w) = \langle f, w \rangle, \ \forall w \in H^1(\Omega;\Gamma_D)\}, \end{align*} for $\Gamma_D \subseteq \partial \Omega$. The semigroup generated by $A$ extends to a $C_0$-semigroup on $L_p(\Omega)$, $1< p < \infty$, and we denote its generator by $A_p$, see \cite{Hieber08} and references therein. In particular, we use $\Delta_p$ for the Laplacian that maps into $L_p(\Omega)$. Furthermore, for Banach spaces $V$ and $W$ forming an interpolation couple, $(V,W)_{\alpha,\beta}$ denotes the real interpolation space. The following theorem relies on \cite[Theorem~3.1]{Hieber08}. \begin{thm}\label{thm_main_reg} Let $p>\frac{3}{2}$ and $r>\frac{4p}{2p-3}$. Under the assumptions \ref{ass_omega_reg}-\ref{cond_OP}, there exists a unique solution to \eqref{jouleheating} satisfying \begin{align}\label{u_reg} \tilde u \in W^1_r(0,T_\ast; L_p(\Omega)) \cap L_r(0,T_\ast;W^1_{2p}(\Omega;\Gamma^u_D)), \quad \tilde \varphi \in L_r(0,T_\ast;W^1_{2p}(\Omega;\Gamma^\varphi_D)), \end{align} for some $0<T_\ast\leq T$. \end{thm} \begin{proof} Consider the following additional conditions. \begin{enumerate}[label=(B\arabic*)]\setcounter{enumi}{5} \item The function $R\colon W^1_{2p}(\Omega) \rightarrow L_p(\Omega)$ is continuous.
\label{cond_Ra} \item $R(0) \in L_r(0,T; L_p(\Omega))$ and for $\beta>0$ there exists $g_\beta \in L_r(0,T)$ such that \begin{align*} \|R(u_1)-R(u_2)\|_{L_p(\Omega)} \leq g_\beta(t)\|u_1-u_2\|_{W^1_{2p}(\Omega)}, \quad t\in(0,T), \end{align*} provided $\max(\|u_1\|_{W^1_{2p}(\Omega)}, \|u_2\|_{W^1_{2p}(\Omega)})\leq \beta$.\label{cond_Rb} \end{enumerate} In \cite[Theorem~3.1]{Hieber08} it is proved that if the conditions \ref{ass_omega_reg}-\ref{cond_OP} together with \ref{cond_Ra}-\ref{cond_Rb} on the operator $R$ are satisfied, then there is a unique solution to \eqref{eq_hieber} satisfying $u \in W^1_r(0,T_\ast; L_p(\Omega)) \cap L_r(0,T_\ast; D(\Delta_p))$ for some $0<T_\ast\leq T$. Our aim is now to prove that if $R(u)=\sigma(u)|\nabla S(u)|^2$ with $S:W^1_{2p}(\Omega) \rightarrow W^1_{2p}(\Omega)$ defined as in \eqref{S_hieber}, then $R$ satisfies \ref{cond_Ra}-\ref{cond_Rb}, so that we may conclude that $(u,\varphi)$ solves \eqref{jouleheating} and \eqref{u_reg} is fulfilled. From \cite[Corollary 3.24]{Meinlschmidt17} it follows that the operator $(-\nabla \cdot \phi \nabla)^{-1}$ is a linear homeomorphism from $W^{-1}_{2p}(\Omega;\Gamma^\varphi_D)$ to $W^{1}_{2p}(\Omega;\Gamma^\varphi_D)$ if $\phi \in \mathfrak{C}$, with $\mathfrak{C}$ a compact set in $C(\bar\Omega)$, is uniformly continuous on $\bar \Omega$ and admits a positive lower bound, and \ref{cond_OP} holds. It also holds that $(-\nabla\cdot \phi \nabla)^{-1}$ is Lipschitz with respect to $\phi$. In our setting we have, due to Morrey's inequality, $W^1_{2p}(\Omega) \subseteq C^{0,\alpha}(\bar \Omega)$ for some $\alpha>0$, and $C^{0,\alpha}(\bar \Omega)$ embeds compactly into $C(\bar \Omega)$. This implies that $\mathfrak C = \{\sigma(u): u\in W^1_{2p}(\Omega)\}$ is compact and $\sigma(u)$ is uniformly continuous, since $\sigma$ is Lipschitz continuous. Furthermore, $\sigma(\cdot) \geq \sigma_\circ > 0$.
Hence, given $\phi=\sigma(u)$ we deduce that there exists a unique solution $\tilde \varphi(t) \in W^1_{2p}(\Omega;\Gamma^\varphi_D)$ to \eqref{S_hieber}. This proves that the mappings $S$ and $R$ are well defined in the given spaces. Given $u$ fixed, let $F:W^{-1}_{2p}(\Omega;\Gamma^\varphi_D)\rightarrow W^{1}_{2p}(\Omega;\Gamma^\varphi_D)$ be such that $\psi = F(g)$ is the solution to \begin{align*} \nabla\cdot (\sigma(u) \nabla \psi) = g, \quad \psi|_{\Gamma^\varphi_D} = 0. \end{align*} By letting $G = -\nabla \cdot (\sigma(u)\nabla g_\varphi)$ and using $\varphi = \tilde \varphi + g_\varphi$ it follows from \eqref{S_hieber_phi} that $\tilde \varphi = F(G)$. Since the operator $F$ is bounded we get \begin{align}\label{reg_time} \|\tilde \varphi\|_{W^1_{2p}(\Omega;\Gamma^\varphi_D)} &\leq C\|G\|_{W^{-1}_{2p}(\Omega;\Gamma^\varphi_D)} = C \sup_{w \in W^1_{(2p)'}(\Omega;\Gamma^\varphi_D)\setminus \{0\}} \frac{\langle -\nabla \cdot (\sigma(u)\nabla g_\varphi),w\rangle}{\|w\|_{W^1_{(2p)'}(\Omega)}} \\&= C \sup_{w \in W^1_{(2p)'}(\Omega;\Gamma^\varphi_D)\setminus \{0\}} \frac{\langle \sigma(u)\nabla g_\varphi,\nabla w\rangle}{\|w\|_{W^1_{(2p)'}(\Omega)}} \leq C\|g_\varphi\|_{W^1_{2p}(\Omega)}, \notag \end{align} where $(2p)'$ is the H\"{o}lder conjugate exponent to $2p$. Thus, $\varphi\in L_r(0,T;W^1_{2p}(\Omega))$, since $g_\varphi \in L_r(0,T;W^1_{2p}(\Omega))$, which implies $R(0) \in L_r(0,T;L_p(\Omega))$ in \ref{cond_Rb}. For $\varphi_1 = S(u_1)$ and $\varphi_2 = S(u_2)$ we get \begin{align*} &\|R(u_1)-R(u_2)\|_{L_p(\Omega)} \\&\quad\leq \|(\sigma(u_1)-\sigma(u_2))|\nabla \varphi_1|^2\|_{L_p(\Omega)} + \|\sigma(u_2)(|\nabla \varphi_1|^2-|\nabla \varphi_2|^2)\|_{L_p(\Omega)} \\ &\quad\leq C(\|u_1-u_2\|_{L_\infty(\Omega)}\|\nabla \varphi_1\|^2_{L_{2p}(\Omega)} + \|\nabla(\varphi_1 + \varphi_2)\|_{L_{2p}(\Omega)}\|\nabla(\varphi_1 - \varphi_2)\|_{L_{2p}(\Omega)}).
\end{align*} Using Sobolev's inequality we get $\|u_1-u_2\|_{L_\infty(\Omega)} \leq C\|u_1-u_2\|_{W^1_{2p}(\Omega)}$. Due to the Lipschitz property of $(-\nabla\cdot \sigma(u) \nabla)^{-1}$ we get \begin{align*} \|\tilde \varphi_1 - \tilde \varphi_2\|_{W^1_{2p}(\Omega)} \leq C\|\sigma(u_1) - \sigma(u_2)\|_{L_\infty} \leq C\|u_1-u_2\|_{L_\infty} \leq C\|u_1-u_2\|_{W^1_{2p}(\Omega)}, \end{align*} which proves \ref{cond_Rb} and the continuity in \ref{cond_Ra}. Hence there exists a unique solution $u \in W^1_r(0,T_\ast; L_p(\Omega)) \cap L_r(0,T_\ast; D(\Delta_p))$ for some $T_\ast > 0$. Finally, by definition, $D(\Delta_p)$ denotes the domain such that the Laplacian maps into $L_p(\Omega)$. Let $p'$ and $(2p)'$ be the H\"{o}lder conjugates to $p$ and $2p$, respectively. Then \begin{align*} L_p(\Omega)=(L_{p'}(\Omega))^\ast, \quad W^{-1}_{2p}(\Omega; \Gamma^u_D) = (W^1_{(2p)'}(\Omega; \Gamma^u_D))^\ast. \end{align*} By Sobolev's inequality we have \begin{align*} W^1_{(2p)'}(\Omega) \subseteq L_{6p/(4p-3)}(\Omega), \end{align*} and since $p'=p/(p-1)<6p/(4p-3)$ when $p>3/2$ we conclude $W^1_{(2p)'}(\Omega; \Gamma^u_D) \subseteq L_{p'}(\Omega)$, or equivalently $L_p(\Omega) \subseteq W^{-1}_{2p}(\Omega; \Gamma^u_D)$. Now, since we know that $\Delta$ is an isomorphism between $W^1_{2p}(\Omega;\Gamma^u_D)$ and $W^{-1}_{2p}(\Omega;\Gamma^u_D)$ and $L_p(\Omega) \subseteq W^{-1}_{2p}(\Omega;\Gamma^u_D)$, we deduce $D(\Delta_p) \subseteq W^1_{2p}(\Omega;\Gamma^u_D)$ and \eqref{u_reg} follows. \end{proof} Note that the result is only local in time, that is, the additional regularity and uniqueness are only guaranteed up to some $T_\ast\leq T$. We provide an example of a geometric setting for which \ref{cond_OP} is satisfied. Assume instead of \ref{ass_omega_reg} the following. \begin{enumerate}[label=(B\arabic*')] \item $\Omega$ is a creased domain with respect to the boundary conditions for $u$ and $\varphi$.
In addition $\meas(\Gamma^u_D)>0$ and $\meas(\Gamma^\varphi_D)>0$.\label{ass_creased} \end{enumerate} For the full definition of creased domains we refer to \cite[Definition~2.3]{Mitrea07}. We note, however, that in our setting it implies that $\Omega$ is a Lipschitz domain with $\Gamma^u_D$ and $\Gamma^\varphi_D$ open and non-empty and that $\partial \Gamma^u_D$ and $\partial \Gamma^\varphi_D$ are not re-entrant. This means that the angles between the Dirichlet and Neumann parts of the boundary are strictly less than $\pi$. \begin{corollary} Let $p>\frac{3}{2}$ and $r>\frac{4p}{2p-3}$. Under the assumptions \ref{ass_creased} and \ref{ass_g_reg}-\ref{ass_sigma_reg}, there exists a unique solution to \eqref{jouleheating} satisfying \begin{align*} \tilde u \in W^1_r(0,T_\ast; L_p(\Omega)) \cap L_r(0,T_\ast;W^1_{2p}(\Omega;\Gamma^u_D)), \quad \tilde \varphi \in L_r(0,T_\ast;W^1_{2p}(\Omega;\Gamma^\varphi_D)), \end{align*} for some $0<T_\ast\leq T$. \end{corollary} \begin{proof} To see that condition \ref{cond_OP} is fulfilled we use the result on equations of Poisson type in \cite{Mitrea07}. If $\Omega$ is a creased domain, $\Gamma_D \subseteq \partial \Omega$, and $g \in B^s_{q,q}(\Gamma_D)$, then there exists $\epsilon \in (0,\frac{1}{2})$ such that Poisson's equation is well-posed in the spaces \begin{align}\label{Mitrea_eq} v \in W^{s+\frac{1}{q}}_q(\Omega; \Gamma_D), \ -\Delta v = f \in \big(W^{2-s-\frac{1}{q}}_{q'}(\Omega; \Gamma_D)\big)^\ast, \ v|_{\Gamma_D} = g, \end{align} for $\frac{1}{q'}=1-\frac{1}{q}$ and $(s,\frac{1}{q}) \in \mathcal{H}_\epsilon$, where $\mathcal{H}_\epsilon$ is the polygon with vertices at \begin{align*} (0,0), \quad (\epsilon, 0), \quad (1,\frac{1}{2}-\epsilon), \quad (1,1), \quad (1-\epsilon,1), \quad (0,\frac{1}{2}+\epsilon).
\end{align*} Choosing $s=\frac{8+\epsilon}{12}$ and $q = \frac{12}{4-\epsilon}$ we have for $p=\frac{6}{4-\epsilon}>\frac{3}{2}$ \begin{align*} W^{s+\frac{1}{q}}_q = W^1_{\frac{12}{4-\epsilon}} = W^1_{2p} \ \text{ and } \ (W^{2-s-\frac{1}{q}}_{q'})^\ast= (W^1_{\frac{12}{8+\epsilon}})^\ast = W^{-1}_{2p}, \end{align*} and since $W^1_{2p}(\Omega)|_{\Gamma_D} = B^s_{q,q}(\Gamma_D)$ assumption \ref{ass_g_reg} gives $g_u(t) \in B^s_{q,q}(\Gamma^u_D)$ and $g_\varphi(t) \in B^s_{q,q}(\Gamma^\varphi_D)$. We conclude that \ref{cond_OP} holds. \end{proof} \begin{remark} There are other geometric settings where condition \ref{cond_OP} is fulfilled, see, for instance, \cite{Hieber08,Meinlschmidt17,Disser15}. \end{remark} \begin{remark} In \cite[Section 4]{Antontsev94} it is established that $\nabla \varphi \in L_{2q/(q-3)}(0,T;L_q(\Omega))$, for $q>3$, is sufficient for a unique solution, see also Theorem~\ref{existence_uniqueness_classical}. This agrees with the regularity we get in Theorem~\ref{thm_main_reg}. \end{remark} The next theorem shows that higher regularity is achieved in the interior of the domain, cf.~the stationary case \cite[Theorem~4.2]{Jensen13}. Here we use the notation $D(\Delta_{p,k})$ for the domain such that $\Delta$ maps into $W^k_p(\Omega)$. Note that $D(\Delta_{p}) = D(\Delta_{p,0})$. \begin{thm}\label{thm_interior_reg} Let $0<T_0<T_\ast$ and let $\Omega_0$ be a relatively compact domain in $\Omega$: $\Omega_0 \Subset \Omega$. Let $(u,\varphi)$ be the solution to \eqref{jouleheating} such that \begin{align}\label{u_phi_reg} \tilde u \in W^1_r(0,T_\ast; L_p(\Omega)) \cap L_r(0,T_\ast;W^1_{2p}(\Omega;\Gamma^u_D)), \quad \tilde \varphi \in L_r(0,T_\ast;W^1_{2p}(\Omega;\Gamma^\varphi_D)), \end{align} for some $p>\frac{3}{2}$ and $r>\frac{4p}{2p-3}$. Then $\tilde u,\tilde \varphi \in L_r(T_0,T_\ast; W^2_s(\Omega_0))$ for all $s\in(1,\infty)$. If $\sigma \in C^\infty(\mathbb{R})$, then $\tilde u, \tilde \varphi \in L_r(T_0,T_\ast;C^\infty(\Omega_0))$.
\end{thm} \begin{proof} Let $\Omega_\infty$ and $\{\Omega_i\}_{i=1}^\infty$ be smooth domains such that $\Omega_{i-1} \Subset \Omega_{i}$ and $\Omega_i \subset \Omega_\infty \Subset \Omega$, for $i=0,1,\dots$. Assume, without loss of generality, that the boundary data $g_u$ and $g_\varphi$ have smooth extensions to $\Omega_\infty$ such that $g_u, D_t g_u, g_\varphi \in L_r(0,T;C^\infty(\Omega_\infty))$. We also assume, again without loss of generality, that $p=6/(4-\epsilon)$ for some $\epsilon>0$. Let $\bar\zeta_i \in C^\infty(\Omega_i,[0,1])$, such that $\bar\zeta_i|_{\partial \Omega_i}=0$ and $\bar\zeta_i|_{\Omega_{i-1}}=1$. Furthermore, let $\{T_i\}_{i=1}^\infty$ and $T_0$ be positive numbers such that $T_i<T_{i-1}$, and $0< T_i < T_0 < T_\ast$ for all $i$. Define $\eta_i(t) \in C^\infty([T_i,T_\ast],[0,1])$ such that $\eta_i(T_i)=0$ and $\eta_i|_{[T_{i-1}, T_\ast]}=1$. Let $\zeta_i := \eta_i\bar\zeta_i$; then $(\zeta_i\tilde u,\zeta_i \tilde \varphi)$ satisfies the following system in $\Omega_i \times (T_i,T_\ast)$. \begin{subequations}\label{smooth_system} \begin{align} D_t (\zeta_i\tilde u) - \Delta (\zeta_i\tilde u) &= \zeta_i\sigma(u) |\nabla \varphi|^2 -\zeta_i(D_t g_u - \Delta g_u) + \tilde uD_t\zeta_i -2\nabla \zeta_i \cdot \nabla \tilde u \\&\quad- \tilde u \Delta \zeta_i,\notag \\ \Delta (\zeta_i\tilde \varphi) &= -\zeta_i\frac{\sigma'(u)}{\sigma(u)}\nabla u \cdot \nabla \varphi - \zeta_i\Delta g_\varphi + 2\nabla \zeta_i \cdot \nabla \tilde \varphi + \tilde \varphi \Delta \zeta_i, \end{align} \end{subequations} with homogeneous Dirichlet conditions and zero initial data. Note that we have used \begin{align*} \nabla \cdot(\sigma(u)\nabla \varphi) = 0 \Leftrightarrow \Delta \varphi= -\frac{\sigma'(u)}{\sigma(u)}\nabla u \cdot \nabla \varphi, \end{align*} in the second equation. Because of the assumed regularity in \eqref{u_phi_reg} and the smoothness of $\zeta_i,g_u,$ and $g_\varphi$, the right-hand sides in \eqref{smooth_system} are in $L_r(T_i,T_\ast;L_p(\Omega_i))$.
There exists a unique solution in $W^2_{p}(\Omega_i)$ to Poisson's equation with homogeneous Dirichlet boundary conditions if the domain is smooth and the right-hand side is in $L_p(\Omega_i)$, $1<p<\infty$, see e.g.~\cite[Theorem~9.15]{Gilbarg01}. We conclude that, for a fixed $t$, $\zeta_i(t) \tilde \varphi(t) \in W^2_p(\Omega_i)$. We may now use elliptic regularity in $L_p$, see \cite[Lemma 9.17]{Gilbarg01}, to deduce \begin{align*} \|\zeta_i \tilde \varphi\|_{W^2_{p}(\Omega_i)} \leq C\|\zeta_i\frac{\sigma'(u)}{\sigma(u)}\nabla u \cdot \nabla \varphi + \zeta_i\Delta g_\varphi - 2\nabla \zeta_i \cdot \nabla \tilde \varphi - \tilde \varphi \Delta \zeta_i\|_{L_p(\Omega_i)}. \end{align*} The regularity in time of the right-hand side implies $\zeta_i\tilde \varphi \in L_r(T_i,T_\ast;W^2_p(\Omega_{i}))$. Thus $\tilde \varphi \in L_r(T_{i-1},T_\ast;W^2_p(\Omega_{i-1}))$, since $\zeta_i = 1$ on $[T_{i-1},T_\ast]\times \Omega_{i-1}$. For the parabolic equation we use the theory for maximal $L_p$-regularity with homogeneous Dirichlet boundary conditions on smooth domains, see, e.g., \cite[Theorem 3.1]{Hieber97}. If the right-hand side is in $L_r(0,T;L_p(\Omega))$ and the initial data is zero, then the solution belongs to $L_r(0,T;D(\Delta_p))\cap W^1_r(0,T;L_p(\Omega))$. From the results on Poisson's equation we deduce $D(\Delta_p)\subset W^2_p(\Omega_i)$ and thus $\tilde u\in L_r(T_{i-1},T_\ast;W^2_{p}(\Omega_{i-1}))\cap W^1_r(T_{i-1},T_\ast;L_p(\Omega_{i-1}))$. From the Sobolev inequality we have $W^2_{p}(\Omega_{i-1}) \subseteq W^1_{3p/(3-p)}(\Omega_{i-1})$. Using that $2p=12/(4-\epsilon)$ for some $\epsilon>0$ we get $3p/(3-p) = 12/(4-2\epsilon)$. Hence, we can replace $\epsilon$ by $2\epsilon$, pass from $i$ to $i-1$, and repeat the argument. Note that if $12/(4-2\epsilon)$ becomes negative, then $p>3$ and the Sobolev embedding yields $W^1_\infty(\Omega_{i-1})$, so the right-hand side is in $L_\infty(\Omega_{i-1})$. By induction $\tilde u, \tilde \varphi \in L_r(T_0,T_\ast;W^2_s(\Omega_0))$ for any $s\in(1,\infty)$.
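The bootstrap can be made concrete by tabulating the exponents (a small illustration of ours, not part of the proof): the $W^1$-integrability exponent of the right-hand side starts at $2p = 12/(4-\epsilon)$ and each pass through the argument doubles $\epsilon$, so any finite exponent is reached after finitely many passes; a non-positive denominator is read as $L_\infty$.

```python
# Exponent bootstrap from the proof: 2p = 12/(4 - eps), and each pass through
# the argument replaces eps by 2*eps; a non-positive denominator means that
# the Sobolev embedding already gives L_infinity.
def exponent_sequence(eps, passes):
    out = []
    for j in range(passes):
        denom = 4.0 - eps * 2**j
        out.append(12.0 / denom if denom > 0 else float("inf"))
    return out
```

For instance, $\epsilon = 1$ gives the exponents $4, 6, \infty$, so three passes already place the right-hand side in $L_\infty$.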
Now assume $\sigma \in C^\infty(\mathbb{R})$. A solution to Poisson's equation on a smooth domain is in $W^{k+2}_p(\Omega_i)$ if the right-hand side is in $W^k_p(\Omega_i)$, see e.g.~\cite[Theorem~9.19]{Gilbarg01}. By applying Leibniz's rule it is clear that there is an $s'$ such that the right-hand sides in \eqref{smooth_system} belong to $L_r(T_i,T_\ast;W^{k}_s(\Omega_i))$ if $\tilde \varphi, \tilde u \in L_r(T_i,T_\ast;W^{k+1}_{s'}(\Omega_i))$. Hence, we may perform induction over $k$ and pass from $i$ to $i-1$ to obtain $\tilde u, \tilde \varphi \in L_r(T_0,T_\ast; W^{k+2}_s(\Omega_0))$ for any $k\geq 1$ and $s>1$. This implies $\tilde u, \tilde \varphi \in L_r(T_0,T_\ast; C^\infty(\Omega_0))$. \end{proof} \section{Numerical Examples}\label{sec:examples} In this section we consider four different examples. The first two are designed to test the convergence rates in different settings. In the first example we choose the domain and the data such that the exact solution is known. To achieve this we add a function $f(x,t)$ to the right-hand side in \eqref{weaku} and consider non-zero Neumann data for $\varphi$, see Subsection~\ref{sec:example1} below. For the second example we consider a setting that does not fulfil the creased domain conditions. For this problem we expect low regularity and reduced convergence rates. Finally, in the last two examples we test a goal-oriented adaptivity method. In all cases we consider a continuous, piecewise affine finite element discretization. We let $\{\mathcal T_{m}\}_m$ denote a family of uniform triangulations of the domain such that $h_{m+1} = 2^{-1}h_m$, $h_0 \in \mathbb{R}$, where $h_m$ is the maximal mesh size on $\mathcal T_m$.
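Since $h_{m+1} = 2^{-1}h_m$, observed convergence rates can be read off from the error ratios on consecutive meshes; this is how the rates quoted below are to be understood. A short helper (ours, with hypothetical error values):

```python
# Estimated order of convergence (EOC) between consecutive uniform
# refinements: rate_k = log2(e_{k-1}/e_k), since h is halved in each step.
import math

def eoc(errors):
    return [math.log2(errors[k - 1] / errors[k]) for k in range(1, len(errors))]
```

An error sequence that halves with each refinement, e.g. `eoc([0.4, 0.2, 0.1])`, yields rates of one, the linear rate expected for sufficiently regular solutions.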
With this notation we may define \begin{align*} V^u_m &:= \{v \in H^1(\Omega;\Gamma^u_D)\cap C^0(\bar \Omega): v|_K \ \text{is a polynomial of degree} \leq 1, \forall K \in \mathcal T_m \},\\ V^\varphi_m &:= \{v \in H^1(\Omega;\Gamma^\varphi_D)\cap C^0(\bar \Omega): v|_K \ \text{is a polynomial of degree} \leq 1, \forall K \in \mathcal T_m \}. \end{align*} For the time discretization, we let $\tau_l = 2^{-l}T$ and the fully discrete space $X_{m,l}$ is defined as in Section~\ref{sec_discrete}. In the first two experiments we keep the time step proportional to the mesh size in each refinement. That is, we consider spaces of the form $X_{k,k}$, for $k=1,2,3,\dots$. This means that if the solution has sufficient regularity, then we expect at most a linear convergence rate in the norm of $L_2(0,T; H^1(\Omega))$, see also \cite{Elliott95,Stillfjord17,Gao14}. All computations are performed using the FEniCS software \cite{FenicsBook}. \subsection{Example 1}\label{sec:example1} We let $T=0.1$, $\Omega$ be the unit cube, $\Gamma^u_D = \partial \Omega$, and $\Gamma^\varphi_D = \partial \Omega \setminus \{x_3=0 \text{ or } x_3=1\}$. To construct an example where the exact solution is known, we consider non-zero Neumann data $g_N$ for $\varphi$ and an additional function $f$ in the right-hand side of \eqref{weaku}. We get \begin{align*} \langle D_t u, v \rangle + \langle \nabla u, \nabla v \rangle &= -\langle \sigma(u)\lceil \tilde \varphi \rceil \nabla \varphi, \nabla v \rangle + \langle \sigma(u) \nabla \varphi \cdot \nabla g_\varphi, v\rangle + \langle f, v \rangle, \\ \langle \sigma(u) \nabla \varphi, \nabla w \rangle &= \langle g_N, w \rangle_{\Gamma^\varphi_N}, \end{align*} where $\langle \cdot, \cdot \rangle_{\Gamma^\varphi_N}$ denotes integration over the boundary part $\Gamma^\varphi_N$.
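The manufactured solution given below can be checked independently of the finite element code: with that data $D_t u = 1$ and $\sigma\nabla\varphi\cdot\nabla g_\varphi = 1$, so in strong form the check reduces to $-\Delta u = f$. A pointwise sanity check (ours, not the FEniCS computation) with central differences, which are exact here up to rounding because $u$ is quadratic in each coordinate:

```python
# Sanity check (ours) that the manufactured solution of Example 1 satisfies
# -Delta u = f: u is quadratic in each coordinate, so central second
# differences reproduce the Laplacian exactly up to rounding.
def u(x1, x2, x3, t):
    return x1*(1 - x1)*x2*(1 - x2)*x3*(1 - x3) + t

def f(x1, x2, x3):
    return 2*(x1*x2*(1 - x1)*(1 - x2)
              + x1*x3*(1 - x1)*(1 - x3)
              + x2*x3*(1 - x2)*(1 - x3))

def laplacian_u(x1, x2, x3, t, h=1e-3):
    return ((u(x1 + h, x2, x3, t) - 2*u(x1, x2, x3, t) + u(x1 - h, x2, x3, t))
            + (u(x1, x2 + h, x3, t) - 2*u(x1, x2, x3, t) + u(x1, x2 - h, x3, t))
            + (u(x1, x2, x3 + h, t) - 2*u(x1, x2, x3, t) + u(x1, x2, x3 - h, t))) / h**2
```

At interior sample points $-\Delta u$ and $f$ agree to rounding precision.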
Letting $g_u = t$, $g_\varphi = x_2$, $g_N = -1 + 2x_2$, $\sigma = 1$, and \begin{align*} f &= 2(x_1x_2(1-x_1)(1-x_2) + x_1x_3(1-x_1)(1-x_3) + x_2x_3(1-x_2)(1-x_3)),\\ u_0 &= x_1(1-x_1)x_2(1-x_2)x_3(1-x_3), \end{align*} the exact solution is given by $u = x_1(1-x_1)x_2(1-x_2)x_3(1-x_3) + t$ and $\varphi = x_2$. Note that $\varphi = \tilde \varphi + g_\varphi$ and thus $\tilde \varphi = 0$. In our setting, the approximations $\tilde \varphi_{m,l}$ are all close to zero and, hence, we omit the error plot for $\varphi$ below. We compute the finite element approximation on meshes with tetrahedra of maximal diameter $h=2^{-k}\sqrt{3}$ and time step size $\tau=2^{-k}T=2^{-k}\cdot 0.1$ for $k=1,\dots,6$. With this refinement, the finest approximation ($k=6$) is computed on a mesh with 274625 nodes. The error in the $L_2(0,T;H^1(\Omega))$-norm is approximated using Simpson's rule in time on each interval $I_n$ and the FEniCS function \verb|errornorm| in space. The relative error is depicted in Figure~\ref{fig:cube}. The convergence rate is approximately linear, which is expected for sufficiently regular problems. \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{cube.pdf} \caption{Relative errors for the temperature $u$ (blue $\circ$) of Example 1 plotted against the mesh size $h$. The dashed line is $Ch$.}\label{fig:cube} \end{figure} \subsection{Example 2}\label{sec:example2} We let $T=0.1$ and $\Omega$ be the Fichera cube depicted in Figure~\ref{fig:fichera_domain} (left). We consider non-creased boundary conditions by imposing Dirichlet conditions on the striped areas, $\Gamma_0$ and $\Gamma_1$ in the figure (left), and homogeneous Neumann conditions on the remaining parts. On $\Gamma_0$ we set $g_u = 0$ and $g_\varphi = 10$ and on $\Gamma_1$ we set $g_u=g_\varphi=0$. Furthermore, we let $\sigma(u) = 2^{-1}(\pi - \arctan(u))$ and $u_0 = 0$.
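This conductivity is admissible in the sense of \ref{ass_sigma_reg}: since $|\arctan u| < \pi/2$ and $|\sigma'(u)| = \tfrac{1}{2}(1+u^2)^{-1} \leq \tfrac{1}{2}$, we have $\pi/4 < \sigma < 3\pi/4$ and Lipschitz constant $\tfrac{1}{2}$. A quick numerical confirmation (ours, not part of the experiment):

```python
# Check (ours) that sigma(u) = (pi - arctan u)/2 from Example 2 satisfies
# pi/4 < sigma < 3*pi/4 and is Lipschitz continuous with constant 1/2.
import math

def sigma(u):
    return 0.5 * (math.pi - math.atan(u))
```

The bounds follow from $\arctan(\pm\infty) = \pm\pi/2$, and the Lipschitz constant from $|\arctan a - \arctan b| \leq |a-b|$.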
We compute the finite element approximation on meshes with tetrahedra of maximal diameter $h=2^{-(k-1)}\sqrt{2}$ and time step size $\tau=2^{-k}T=2^{-k}\cdot 0.1$ for $k=1,\dots,5$. Since the exact solution is not known, the approximations are compared to a reference solution computed for $k=6$, corresponding to a mesh with 471233 nodes. The relative error in the $L_2(0,T;H^1(\Omega))$-norm is plotted in Figure~\ref{fig:fichera}. We have convergence, but not with order one. This is due to the low regularity in the vicinity of the edges where the Dirichlet and Neumann boundaries meet at an angle greater than $\pi$, that is, where the creased domain condition fails. \begin{figure}[h] \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{fichera_domain.pdf} \caption{Non-creased} \end{subfigure} ~ \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{fichera_domain_creased.pdf} \caption{Creased} \end{subfigure} \caption{Two settings of the Fichera cube with centre at the origin. Dirichlet boundary conditions are imposed on the striped areas ($\Gamma_0$ and $\Gamma_1$).}\label{fig:fichera_domain} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{fichera.pdf} \caption{Relative errors for the temperature $u$ (blue $\circ$) and the potential $\varphi$ (red $\ast$) of Example 2 plotted against the mesh size $h$. The dashed line is $Ch$.}\label{fig:fichera} \end{figure} \subsection{Example 3}\label{sec:example3} We continue in the setting of the Fichera cube as in Example~2, but with a choice of boundary conditions that fulfils the creased domain condition. We choose $\Gamma_0$ and $\Gamma_1$ as in Figure~\ref{fig:fichera_domain} (right), with $g_u = 0$ and $g_\varphi(x,t) = 2x_2(x_2+1)+5$ on both $\Gamma_0$ and $\Gamma_1$.
The aim is to utilize the observation that the solution has higher regularity in the interior of the domain, see Theorem~\ref{thm_interior_reg}, and that the problem thus is suitable for $h$-adaptive finite elements. In this example we use a goal-oriented approach for the mesh refinement, which is supported for stationary problems in the FEniCS software, see \cite{Rognes2013}. We summarize the goal-oriented procedure here, and refer to \cite{Rognes2013} and references therein for details. Consider a nonlinear variational problem; find $u \in V$ such that \begin{align}\label{goal_stationary} F(u,v) = 0, \quad \forall v \in \hat V, \end{align} and the corresponding finite element problem; find $u_h \in V_h$ such that \begin{align}\label{goal_stationary_fem} F(u_h,v) = 0, \quad \forall v \in \hat V_h, \end{align} for some triangulation $\mathcal T_h$ and appropriate finite element space $V_h \subset V$, $\hat V_h \subset \hat V$. Let $\mathcal{M}: V \to \mathbb{R}$ denote a linear goal functional and define the dual problem; find $z \in V^\ast$ such that \begin{align*} \overline{F'}^\ast(z,v) = \mathcal{M}(v), \quad \forall v \in \hat V^\ast, \end{align*} where $\hat V^\ast = V_0 = \{v-w: v,w \in V\}$ and $V^\ast = \hat V$. The bilinear form $\overline{F'}^\ast$ denotes the following average of the Fr\'{e}chet derivative $F'$ of $F$, \begin{align*} \overline{F'}(\cdot,\cdot) = \int^1_0 F'(su + (1-s)u_h;\cdot,\cdot) \,\mathrm{d} s, \end{align*} and by the chain rule we have $\overline{F'}(u-u_h,\cdot) = F(u,\cdot) - F(u_h, \cdot)$. Using the definition of the dual problem we may now express the error in the goal functional as \begin{align*} \mathcal{M}(u)-\mathcal{M}(u_h) &= \mathcal{M}(u-u_h) = \overline{F'}^\ast(z,u-u_h) = \overline{F'}(u-u_h,z) \\&= F(u,z) - F(u_h, z) = - F(u_h, z) =:r(z), \end{align*} where $r(z)$ denotes the residual. 
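The key identity behind this error representation, $\overline{F'}(u-u_h,\cdot) = F(u,\cdot) - F(u_h,\cdot)$, can be illustrated in a scalar toy setting (the cubic nonlinearity below is purely illustrative and not taken from the paper):

```python
def averaged_derivative(Fprime, u, uh, n=100_000):
    # Midpoint-rule approximation of the averaged derivative
    # int_0^1 F'(s*u + (1-s)*uh) ds, acting on the scalar increment u - uh.
    total = 0.0
    for k in range(n):
        s = (k + 0.5) / n
        total += Fprime(s * u + (1 - s) * uh)
    return total / n * (u - uh)

# Toy nonlinearity F(v) = v^3 with F'(v) = 3 v^2: the averaged derivative
# applied to u - uh reproduces F(u) - F(uh) up to quadrature error.
F = lambda v: v ** 3
Fp = lambda v: 3 * v ** 2
lhs = averaged_derivative(Fp, 2.0, 0.5)
rhs = F(2.0) - F(0.5)
```

This is exactly the chain-rule computation used above, specialized to one dimension.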
The residual can be decomposed into local contributions from each cell $T \in \mathcal T_h$ \begin{align*} r(v) = \sum_{T \in \mathcal T_h} r_T(v) = \sum_{T \in \mathcal T_h}\Bigg(\int_TR_Tv \,\mathrm{d} x+ \int_{\partial T} R_{\partial T} v\,\mathrm{d} s \Bigg), \end{align*} where $R_T$ and $R_{\partial T}$ are the cell and facet residuals. In \cite[Theorem 4.1]{Rognes2013} it is proved that the error indicators $R_T, R_{\partial T}$ can be determined by solving a set of local problems on each cell $T$. The procedure of computing the error indicators $R_T$ and $R_{\partial T}$ and refining the mesh accordingly is performed in FEniCS by using \verb|solve| together with the goal functional and a given tolerance. In our case, the fully discrete problem \eqref{full_weak} is a stationary problem of the form \eqref{goal_stationary} with \begin{align*} F((u^n_{m,l},\varphi^n_{m,l}),(v,w)) &:= \langle\frac{ u^n_{m,l}-u^{n-1}_{m,l}}{\tau_l}, v \rangle + \langle \nabla u^n_{m,l}, \nabla v \rangle \\&\quad+\langle \sigma(u^n_{m,l})\lceil \tilde \varphi^n_{m,l} \rceil \nabla \varphi^n_{m,l}, \nabla v \rangle \\&\quad - \langle \sigma(u^n_{m,l}) \nabla \varphi^n_{m,l} \cdot \nabla g^n_\varphi, v\rangle + \langle \sigma(u^n_{m,l}) \nabla \varphi^n_{m,l}, \nabla w \rangle. \end{align*} In each time step the error indicators are computed and the mesh refined. Note that the refined mesh is reused in the next time step and additionally refined if needed. In this example we choose $\mathcal{M}(u) = \int_{\Omega} u \,\mathrm{d} x$. The initial data remains the same as in Example 2. We choose a fixed (small) time step $\tau = 2^{-6} T$ in this experiment, since the spatial error is the main concern here.
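The refinement step can be summarized by a marking strategy: cells carrying the largest indicators are refined until a fixed fraction of the total estimated error is covered. A minimal sketch of a generic D\"orfler-type marking (an illustrative stand-in for the strategy FEniCS applies internally, not a transcription of it):

```python
def dorfler_marking(indicators, theta=0.5):
    # Sort cells by decreasing error indicator and mark a minimal set
    # whose combined squared indicators exceed theta times the total.
    order = sorted(range(len(indicators)), key=lambda i: -indicators[i])
    total = sum(e * e for e in indicators)
    marked, acc = [], 0.0
    for i in order:
        marked.append(i)
        acc += indicators[i] ** 2
        if acc >= theta * total:
            break
    return marked
```

With strongly localized indicators, as near the re-entrant corner of the Fichera cube, only a small set of cells gets marked, which is what makes the adaptive runs below so much cheaper than uniform refinement.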
The relative error in the goal functional for $h=2^{-4}\sqrt{2}$ compared to the reference solution, here denoted $u_\textrm{ref}$ and computed on a mesh with 471233 nodes, is \begin{align*} \frac{\displaystyle \max_{0 \leq n \leq N} \left| \mathcal{M}(u^n_h) - \mathcal{M}(u^n_\textrm{ref}) \right|}{\displaystyle \max_{0 \leq n \leq N}\left| \mathcal{M}(u^n_\textrm{ref}) \right|} \approx 0.0254. \end{align*} We note that our uniform mesh of size $h=2^{-4}\sqrt{2}$ corresponds to $7985$ nodes. Using the goal-oriented adaptivity, denoted $u_\textrm{ad}$ below, we get \begin{align*} \frac{\displaystyle \max_{0 \leq n \leq N} \left| \mathcal{M}(u^n_\textrm{ad}) - \mathcal{M}(u^n_\textrm{ref}) \right|}{\displaystyle \max_{0 \leq n \leq N}\left| \mathcal{M}(u^n_\textrm{ref}) \right|} \approx 0.0282, \end{align*} already for $1628$ nodes. This example indicates that the problem is suitable for $h$-adaptive finite elements and motivates a further analysis of a posteriori methods for the Joule heating problem, which will be considered in future work. \subsection{Example 4}\label{sec:example4} In this example, we use the non-creased Fichera cube as in Example~2, see Figure~\ref{fig:fichera_domain} (left). The aim is to investigate the use of goal-oriented adaptivity for non-creased domains. We emphasize that, in this setting, Theorem~\ref{thm_interior_reg} is not directly applicable. As in Example~3 we choose $\mathcal{M}(u) = \int_{\Omega} u \,\mathrm{d} x$. The initial and boundary data remain the same as in Example 2 and the time step is $\tau = 2^{-6} T$. The error in the goal functional for $h=2^{-5}\sqrt{2}$ compared to the reference solution is \begin{align*} \frac{\displaystyle \max_{0 \leq n \leq N} \left| \mathcal{M}(u^n_h) - \mathcal{M}(u^n_\textrm{ref}) \right|}{\displaystyle \max_{0 \leq n \leq N}\left| \mathcal{M}(u^n_\textrm{ref}) \right|} \approx 0.0271. \end{align*} Here $h=2^{-5}\sqrt{2}$ corresponds to $60513$ nodes.
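The relative goal-functional errors quoted in Examples 3 and 4 are discrete max-in-time quotients; as a small illustrative helper (the argument names are hypothetical):

```python
def relative_goal_error(M_h, M_ref):
    # max_n |M(u^n_h) - M(u^n_ref)| / max_n |M(u^n_ref)|, for sequences of
    # goal-functional values at the time levels n = 0, ..., N.
    num = max(abs(a - b) for a, b in zip(M_h, M_ref))
    den = max(abs(b) for b in M_ref)
    return num / den
```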
For the goal oriented adaptivity we get \begin{align*} \frac{\displaystyle \max_{0 \leq n \leq N} \left| \mathcal{M}(u^n_\textrm{ad}) - \mathcal{M}(u^n_\textrm{ref}) \right|}{\displaystyle \max_{0 \leq n \leq N}\left| \mathcal{M}(u^n_\textrm{ref}) \right|} \approx 0.0254, \end{align*} for $6560$ nodes. This example indicates that the goal oriented adaptivity is applicable also in non-creased domain settings. However, it is still an open problem to show that the solution to such a problem enjoys the appropriate regularity to be suitable for $h$-adaptivity. \subsection*{Acknowledgement} The authors acknowledge the hospitality of the Hausdorff Research Institute for Mathematics in Bonn, where parts of this paper were written. \bibliographystyle{abbrv}
\section{Introduction} \noindent There has been an immense amount of interest in information theory recently. This is primarily due to the fact that quantum correlations, which are an important ingredient in the theory of information, play a crucial role in various branches of physics, namely, condensed matter physics and statistical mechanics, as well as quantum theories of gravity. It has been realized that the fundamental laws of physics can be given an information theoretic interpretation \cite{info1, info2}. In classical information theory, information is quantified by a measure called Shannon entropy. The counterpart of this concept in quantum information theory is entanglement entropy (EE). EE is a fundamental quantity in quantum information theory as it provides a measure for quantum correlation in a bipartite quantum system. In the last few years EE has been successfully used as a probe of quantum phases of matter \cite{Calabrese:2004eu}-\cite{Wen:2006}. Other areas where information theory has provided important insights are the thermodynamic derivation of Einstein's equations of general relativity \cite{jacob} and the resolution of the black hole information loss paradox \cite{Hawking}-\cite{Maldacena:2001kr}. It is thus realized that information theory, and especially EE, would play a vital role in understanding the geometry of spacetime. Obtaining the EE of $1+1$-dimensional conformal field theories has been an important problem in theoretical physics. The computation was first done in \cite{Calabrese:2004eu}-\cite{Calabrese:2009qy} by a method known as the replica trick. Interestingly, the EE of a $1+1$-dimensional conformal field theory (CFT) exhibits a universal logarithmic behaviour \cite{Calabrese:2004eu}. Recently, the gauge/gravity correspondence has played a key role in computing the EE of a boundary CFT holographically from its bulk gravitational dual.
The insight comes from the fact that the holographic principle \cite{thooft}-\cite{Gubser:1998bc} states that the number of degrees of freedom in a region of space is equal to the number of degrees of freedom on the boundary that surrounds the space. This principle, first proposed in the context of black hole entropy, became one of the most cherished ideas in modern theoretical physics with the advent of the $AdS/CFT$ correspondence \cite{Maldacena:1997re, Aharony:1999ti}. This correspondence is the most successful realization of the holographic principle in theoretical physics. It relates the gravitational theory in $AdS$ space to the CFT that lives on the boundary of $AdS$ space. It has evolved as a powerful theoretical input in condensed matter physics, nuclear physics and QCD \cite{Hartnoll:2009sz}-\cite{CasalderreySolana:2011us}. \noindent The prescription of computing the holographic entanglement entropy (HEE) was first proposed in \cite{Ryu:2006bv},\cite{Ryu:2006ef}. According to the prescription, the HEE for a subsystem $A$ in a $(d-1)$-dimensional boundary field theory is given by \begin{equation} S_A =\frac{Area (\Gamma_A)}{4 G_{(d)}} \nonumber \end{equation} where $\Gamma_A $ is the minimal-area surface extending into the bulk (on a fixed time slice) whose boundary coincides with the edges of the subsystem living at the boundary and $G_{(d)}$ is the $d$-dimensional Newton's gravitational constant. This formula is very similar to the black hole entropy formula suggested by Bekenstein and Hawking \cite{hawk1}-\cite{hawk3} \begin{eqnarray} S_{BH} = \frac{Area(\sigma)}{4G_{d}} \nonumber \end{eqnarray} where $\sigma$ denotes the horizon of the black hole. This striking similarity between HEE and black hole entropy inspired many to suggest that EE is the origin of black hole entropy \cite{Bombelli:1986rw}-\cite{Eisert:2008ur}.
Studies of the HEE of $AdS$ black holes which are dual to a field theory at finite temperature have also been carried out \cite{Cadoni:2010kla}-\cite{Fischler:2012ca}. In \cite{Chaturvedi:2016kbk}, the HEE has been computed for the charged $AdS$ black hole to observe the effect of temperature and charge on the EE of a strip like subregion in the boundary field theory dual to the charged $AdS$ black hole in the bulk. An important question in this regard is whether the HEE satisfies a relation analogous to the first law of thermodynamics, which is indeed satisfied by thermal entropy. In \cite{Bhattacharya:2012mi}, the difference in HEE between a thermally excited $AdS$ spacetime and pure $AdS$ spacetime has been computed. This led the authors to conclude that the change in HEE ($\Delta S_{E}$) is proportional to the change in energy ($\Delta E$). The proportionality constant was identified as the inverse of entanglement temperature ($T_{ent}$). The same question was addressed in \cite{Allahbakhshi:2013rda} and a similar relation between $\Delta S_{E}$ and $\Delta E$ was obtained. Entanglement thermodynamics has been explicitly studied in different backgrounds including non-conformal and non-relativistic backgrounds \cite{Mansoori:2015sit}-\cite{skarar}. Entanglement thermodynamics for the charged black hole in the $AdS_4$ background has also been studied in \cite{Chaturvedi:2016kbk}. In this investigation, a relation like the first law of entanglement thermodynamics in the low temperature limit was obtained. In this paper, we extend the analysis to a charged black hole in $d$ dimensions. We explicitly calculate the EE for the charged ($AdS$-RN) black hole in $d$ dimensions in different temperature and charge limits. A relation like the first law of entanglement thermodynamics is also obtained in the low temperature limit.
The expressions for the EE in different limits have explicit dependence on the dimension of spacetime $d$. The analysis helps us to understand the implications of the dimension of spacetime on information theoretic quantities. The paper is organized as follows. We first review the $AdS$-RN black hole in arbitrary spacetime dimension in section 2. From the definition of black hole temperature, we then find the extremality condition for the $AdS$ charged black hole. In section 3, we investigate the HEE in the small charge and large charge limits for the extremal black hole. We then compute the HEE in the low temperature and high temperature regimes for the small charged non-extremal black hole as well as for the large charged non-extremal black hole. The form of the HEE expression for the small charged extremal black hole is the same as that of the HEE in the low temperature regime for the small charged non-extremal black hole. It is to be noted that the state dual to the extremal black hole in the bulk is considered to be the ground state. Similarly, the state dual to the non-extremal black hole is considered as the excited state. We then discuss the first law of entanglement thermodynamics in the small charge limit in section 4. We conclude in section 5. \section{Charged $AdS$ black hole} In this section, we start by writing down the charged $AdS$ black hole metric with planar horizon in $d$ dimensions. This reads \begin{eqnarray}{\label{system}} \label{ee2} ds^{2} &=& - \frac{r^2}{R^2} f(r) dt^2 + \frac{R^2 dr^2}{r^2 f(r)} + \frac{r^2}{R^2} \left( dx_1^2 + dx_2^2 + \cdots +dx^2_{d-2}\right) \nonumber \\ f(r) &=& 1- \frac{M}{r^{d-1}} +\frac{Q^2}{r^{2(d-2)}} \label{ee2b} \end{eqnarray} where $R$ is the radius of $AdS$ spacetime, $M$ and $Q$ are the mass and charge of the black hole respectively. We shall set $R=1$ for the rest of this paper. The horizon of the black hole is given by $f(r)\rvert_{r=r_h}=0$.
This yields \begin{eqnarray}{\label{mass}} M= r_h^{d-1} \left( 1+ \frac{Q^2}{r_h^{2(d-2)}}\right) \end{eqnarray} which relates the mass $M$ of the $AdS$-RN black hole to its charge $Q$. The Hawking temperature for this black hole is given by \begin{equation}{\label{bhtemp}} T_{H} = \frac{r^2 f^{\prime}(r)}{4\pi}\Big\rvert_{r_{h}} = \frac{(d-1)r_h}{4\pi} \left[1- \left(\frac{d-3}{d-1}\right)\frac{Q^2}{r_h^{2(d-2)}} \right]~. \end{equation} The lapse function $f(r)$ can now be expressed in terms of only the charge $Q$ and the radius of the horizon $r_h$ as \begin{eqnarray}{\label{metric}} f(r) = 1 - \left(\frac{r_h}{r}\right)^{d-1} + Q^2 \left(\frac{1}{r^{2(d-2)}}-\frac{1}{r^{d-1}r_h^{d-3}} \right)~. \end{eqnarray} \section{Computation of HEE} With the basic setup in hand we are now ready to calculate the HEE of the $AdS$-RN black hole. We consider an entangling region at the boundary in the form of a straight belt of width $l$ given by \begin{equation} -\frac{l}{2} \le x_1 \le \frac{l}{2}; \quad 0 \le x_2, x_3,\cdots ,x_{d-2} \le L. \end{equation} According to the proposal in \cite{Ryu:2006bv, Ryu:2006ef} we have to find the minimal codimension two hypersurface in the bulk whose boundary coincides with the two ends of the interval $-\frac{l}{2} \le x_1 \le \frac{l}{2}$. Then the entanglement entropy is given by the minimal area divided by $4 G_{(d)}$, where $G_{(d)}$ is Newton's gravitational constant in $d$ dimensions. The area of the hypersurface for the system (\ref{system}) is given by \begin{equation} \mathcal{A} = L^{d-3} \int_{-\frac{l}{2}}^\frac{l}{2} dx_1 \sqrt{r^{2(d-2)} + \frac{(r')^2}{f(r)} r^{2(d-4)}} \end{equation} where the surface is parametrized by $r=r(x_1)$.
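Before turning to the minimization, the temperature formula can be checked numerically; in particular, the extremality condition quoted below, $Q^2 = \frac{d-1}{d-3}\, r_h^{2(d-2)}$, should make $T_H$ vanish. A quick sketch (not part of the derivation):

```python
from math import pi

def hawking_T(rh, Q, d):
    # T_H = (d-1) r_h / (4 pi) * [1 - ((d-3)/(d-1)) Q^2 / r_h^(2(d-2))],
    # the Hawking temperature of the planar AdS-RN black hole with R = 1.
    return (d - 1) * rh / (4 * pi) * (
        1 - (d - 3) / (d - 1) * Q ** 2 / rh ** (2 * (d - 2))
    )

def extremal_Q(rh, d):
    # Charge saturating the extremality bound: Q^2 = (d-1)/(d-3) r_h^(2(d-2)).
    return ((d - 1) / (d - 3) * rh ** (2 * (d - 2))) ** 0.5
```

For any $d > 3$ and any horizon radius, the extremal charge returns $T_H = 0$, while sub-extremal charges give a positive temperature.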
Using the standard procedure of minimization we get \begin{equation}{\label{area}} \mathcal{A} = 2L^{d-3} \int_{r_t}^{\infty} \frac{r^{d-4} dr}{\sqrt{f(r)\left\{1- \left(\frac{r_t}{r}\right)^{2d-4}\right\}}} \end{equation} with the minimal surface characterized by \begin{equation} \frac{d r}{d x_1} =\sqrt{f(r)\; r^4 \left\{ \left(\frac{r^2}{r_t^2} \right)^{(d-2)} -1 \right\}}. \end{equation} where $r_t$ is the turning point of the extremal surface satisfying $r' \rvert_{r=r_t}=0$. Integration of the above equation gives the length of the system to be \begin{eqnarray}{\label{el}} \frac{l}{2} = \int_{r_t}^{\infty} \frac{r_t^{d-2} dr}{r^d\sqrt{f(r)\left\{1- \left(\frac{r_t}{r}\right)^{2d-4}\right\}}} \label{ee4} \end{eqnarray} \noindent It is not difficult to see that the integral (\ref{area}) is divergent as we reach the boundary. Therefore we have to introduce an infrared (IR) cutoff at $r=r_b$, where $r_b$ is very large. This IR cutoff is related holographically to its field theory counterpart, the ultraviolet (UV) cutoff $a$, by the relation $r_b = 1/a$. In field theory this UV cutoff is nothing but the lattice spacing. The finite part of the entanglement entropy can be used to study the high and low charge (or temperature) behavior of the field theory which is dual to the $AdS$-RN black hole. To compute the integrals \eqref{area} and \eqref{el}, we change the integration variable from $r$ to $u= \frac{r_t}{r}$, in terms of which the lapse function becomes \begin{eqnarray}{\label{lapsu}} f(u)= 1- \left(\frac{r_h}{r_t}\right)^{d-1} u^{d-1} - \frac{Q^2}{r_h^{d-3}}\left(\frac{u}{r_t}\right)^{d-1} + Q^2 \left(\frac{u}{r_t}\right)^{2(d-2)} ~.
\end{eqnarray} The length of the subsystem along $x_1$ and the area of the subsystem now read \begin{eqnarray} \label{elu} l &=& \frac{2}{r_t} \int_0^1 du \frac{u^{d-2} }{\sqrt{ 1-u^{2(d-2)}}} \left( 1- \left(\frac{r_h}{r_t}\right)^{d-1} u^{d-1} - \frac{Q^2}{r_h^{d-3}}\left(\frac{u}{r_t}\right)^{d-1} + Q^2 \left(\frac{u}{r_t}\right)^{2(d-2)} \right)^{-1/2} \\ \mathcal{A} &=& 2 (L r_t)^{d-3} \int_0^1 du \frac{u^{-(d-2)}}{\sqrt{1-u^{2(d-2)}}}\left( 1- \left(\frac{r_h}{r_t}\right)^{d-1} u^{d-1} - \frac{Q^2}{r_h^{d-3}}\left(\frac{u}{r_t}\right)^{d-1} + Q^2 \left(\frac{u}{r_t}\right)^{2(d-2)} \right)^{-1/2}. \nonumber \label{areau} \\ \end{eqnarray} In the following sections we compute the HEE for both the extremal black hole and the black hole at finite temperature. The extremal black hole is characterized by its zero Hawking temperature ($T_{H} = 0$). An important relation that arises in this context is the relation between the charge $Q$ of the black hole and its horizon radius $r_h$ \begin{eqnarray}{\label{extremality}} r_h^{2(d-2)} \geq \left(\frac{d-3}{d-1}\right) Q^2 ~ . \end{eqnarray} In the above expression, the equality sign holds for the extremal black hole and the inequality holds for the non-extremal black hole. It is interesting to note that the $AdS$/CFT correspondence tells us that the field theory counterpart of the $AdS$-RN black hole is in its ground state for the extremal black hole and in its excited state for the non-extremal black hole. \subsection{Extremal black hole} Let us first calculate the HEE for the case of the extremal black hole. The extremality condition says that the charge of the black hole has to be \begin{eqnarray} Q^2 = \left( \frac{d-1}{d-3}\right) r_h^{2(d-2)} ~.
\label{ee14} \end{eqnarray} Using the above condition in eq.(\ref{lapsu}), we can rewrite the lapse function in terms of $u$ as \begin{eqnarray}{\label{lapseue}} f(u) = 1- \frac{2(d-2)}{d-3} \left(\frac{r_h u}{r_t}\right)^{d-1} + \frac{d-1}{d-3} \left(\frac{r_h u}{r_t}\right)^{2(d-2)} ~. \end{eqnarray} We can also express the integrals \eqref{elu},\eqref{areau} as \begin{eqnarray} \label{elue} l &=& \frac{2}{r_t} \int_0^1 du \frac{u^{d-2} }{\sqrt{ 1-u^{2(d-2)}}} \left(1- \frac{2(d-2)}{d-3} \left(\frac{r_h u}{r_t}\right)^{d-1} + \frac{d-1}{d-3} \left(\frac{r_h u}{r_t}\right)^{2(d-2)} \right)^{-1/2}\\ \mathcal{A} &=& 2 (L r_t)^{d-3} \int_0^1 du \frac{u^{-(d-2)}}{\sqrt{1-u^{2(d-2)}}} \left(1- \frac{2(d-2)}{d-3} \left(\frac{r_h u}{r_t}\right)^{d-1} + \frac{d-1}{d-3} \left(\frac{r_h u}{r_t}\right)^{2(d-2)} \right)^{-1/2}. \label{areaue} \nonumber \\ \end{eqnarray} It is not hard to see that these integrals cannot be evaluated analytically. In order to evaluate them analytically we have to take certain limits. In the following subsections we consider two extreme limits, the small charge limit and the large charge limit, and calculate the HEE in each. \subsubsection{Small charge limit} We can see from eq.(\ref{ee14}) that if $Q$ is small, then $ r_h $ will be small. To be specific, we have $ l r_h = l \left( \frac{d-3}{d-1} \right)^{\frac{1}{2(d-2)}} Q^{\frac{1}{d-2}} \ll 1$ in the small charge limit. As the horizon radius $r_h$ is very small, the turning point $r_t$ is far away from it. Therefore $\left(\frac{r_h}{r_t}\right)$ is a very small quantity. So we can neglect higher order terms in $\left(\frac{r_h}{r_t}\right)$. Using this approximation we can now Taylor expand to write \begin{eqnarray}{\label{taylor}} \frac{1}{\sqrt{f(u)}} \approx 1+ \frac{d-2}{d-3} \left(\frac{r_h u}{r_t}\right)^{d-1} ~.
\end{eqnarray} Using the above equation, we can now simplify eq.(\ref{elue}) as \begin{eqnarray} l &\approx& \frac{2}{r_t} \int_0^1 \frac{u^{d-2} du}{\sqrt{ 1-u^{2(d-2)}}} \left( 1+ \frac{d-2}{d-3} \left(\frac{r_h u}{r_t}\right)^{d-1} \right) \nonumber \\ &=& \frac{2}{r_t} \left[ \int_0^1 \frac{u^{d-2} du}{\sqrt{ 1-u^{2(d-2)}}} + \frac{d-2}{d-3} \left(\frac{r_h}{r_t}\right)^{d-1} \int_0^1 \frac{u^{2d-3} du}{\sqrt{ 1-u^{2(d-2)}}} \right]~. \label{ee18} \end{eqnarray} Therefore the turning point $r_t$ reads \begin{eqnarray} r_t = \frac{2}{l} \left[ \sqrt{\pi} \frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{1}{2(d-2)})} + \frac{\sqrt{\pi}}{2(d-3)}\left(\frac{r_h}{r_t}\right)^{d-1} \frac{\Gamma(\frac{d-1}{d-2})}{\Gamma(\frac{3d-4}{2(d-2)})} \right] ~. \label{ee19} \end{eqnarray} The form of the above expression suggests that we cannot solve for $r_t$ exactly. Hence, using a perturbative approach, we obtain \begin{eqnarray} r_t = \frac{2}{l} \left[ \sqrt{\pi} \frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{1}{2(d-2)})} + \frac{\sqrt{\pi}}{2(d-3)}\left(\frac{l r_h}{2}\right)^{d-1} \left( \frac{\Gamma(\frac{1}{2(d-2)})}{\sqrt{\pi}\Gamma(\frac{d-1}{2(d-2)})}\right)^{d-1} \frac{\Gamma(\frac{d-1}{d-2})}{\Gamma(\frac{3d-4}{2(d-2)})} \right]~. \label{ee20} \end{eqnarray} Now using the same approximation (\ref{taylor}), the area of the extremal surface reads \begin{eqnarray} \mathcal{A} = 2(L r_t)^{d-3} \left[ \int_0^1 \frac{u^{-(d-2)}du}{\sqrt{1-u^{2(d-2)}}} + \frac{d-2}{d-3} \left(\frac{r_h}{r_t}\right)^{d-1} \int_0^1 \frac{u~ du}{\sqrt{ 1-u^{2(d-2)}}} \right]~. \label{ee21} \end{eqnarray} It is observed that the first integral in $\mathcal{A}$ is divergent as $u\rightarrow 0$.
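As an aside, the Beta-function evaluations used here can be verified numerically; the sketch below (not part of the derivation) checks the pure-$AdS$ integral fixing $r_t$, with the substitution $u^{d-2}=\sin\theta$ removing the endpoint singularity:

```python
from math import gamma, pi, sin, sqrt

def belt_integral(d, n=50_000):
    # int_0^1 u^(d-2) / sqrt(1 - u^(2(d-2))) du.  With u^(d-2) = sin(theta)
    # this becomes (1/(d-2)) int_0^(pi/2) sin(theta)^(1/(d-2)) dtheta,
    # which is regular and handled well by the midpoint rule.
    total = 0.0
    for k in range(n):
        th = (k + 0.5) / n * (pi / 2)
        total += sin(th) ** (1.0 / (d - 2))
    return total / n * (pi / 2) / (d - 2)

def belt_closed_form(d):
    # sqrt(pi) Gamma((d-1)/(2(d-2))) / Gamma(1/(2(d-2))): the coefficient
    # appearing in the leading term of the turning-point relation.
    a = 2.0 * (d - 2)
    return sqrt(pi) * gamma((d - 1) / a) / gamma(1.0 / a)
```

For $d=4$ both expressions give $\sqrt{\pi}\,\Gamma(3/4)/\Gamma(1/4) \approx 0.599$, in agreement with the $AdS_5$ literature.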
To regularize the integral we introduce the UV cut-off $\frac{r_t}{r_b}$ and add a counterterm ($\frac{-2(Lr_b)^{d-3}}{d-3}$) in order to get a finite value of the extremal area which reads \begin{eqnarray} \mathcal{A}^{finite} &=& 2(L r_t)^{d-3} \left[ \int_{\frac{r_t}{r_b}}^1 \frac{u^{-(d-2)}du}{\sqrt{1-u^{2(d-2)}}} + \frac{d-2}{d-3} \left(\frac{r_h}{r_t}\right)^{d-1} \int_0^1 \frac{u~ du}{\sqrt{ 1-u^{2(d-2)}}} \right] - \frac{2(Lr_b)^{d-3}}{d-3} \nonumber \\ &=& 2(L r_t)^{d-3} \left[ \frac{\sqrt{\pi}}{2(d-2)} \frac{\Gamma(\frac{3-d}{2(d-2)})}{\Gamma(\frac{1}{2(d-2)})} + \frac{\sqrt{\pi}}{2(d-3)}\left(\frac{r_h}{r_t}\right)^{d-1} \frac{\Gamma(\frac{1}{d-2})}{\Gamma(\frac{d}{2(d-2)})} \right] ~. \label{ee22} \end{eqnarray} Substituting the turning point $r_t$ from eq.(\ref{ee20}) into eq.(\ref{ee22}) and keeping terms up to $\mathcal{O} ((lr_h)^{d-1})$ and then simplifying, we obtain \begin{eqnarray} \mathcal{A}^{finite} &=& \left(\frac{L}{l}\right)^{d-3} \left[ -\frac{(2\sqrt{\pi})^{d-2}}{d-3} \left(\frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{1}{2(d-2)})}\right)^{d-2} + \frac{d-2}{d (d-3)} \frac{(lr_h)^{d-1}}{4\sqrt{\pi}} \nonumber \right.\\ && \hspace{40mm} \left. \times \left( \frac{\Gamma(\frac{1}{2(d-2)})}{\Gamma(\frac{d-1}{2(d-2)})}\right)^2 \frac{\Gamma(\frac{1}{d-2})}{\Gamma(\frac{d}{2(d-2)})} \right] ~. \label{ee23} \end{eqnarray} From the above equation, the finite holographic entanglement entropy ($\frac{\mathcal{A}^{finite}}{4G_{(d)}}$) reads \begin{eqnarray} S^{finite}_{A} = S^{AdS}_{A} + S^{ext}_{A} \label{ee24} \end{eqnarray} where $S^{AdS}_{A}$ is the entanglement entropy for pure $AdS$ spacetime and the extra piece comes from the extremality of the black hole.
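The subtraction of the divergence can also be checked numerically. Writing the regularized $n$-independent integral as $\int_0^1 u^{-(d-2)}\big[(1-u^{2(d-2)})^{-1/2}-1\big]du - \frac{1}{d-3}$ (the power-law piece integrated exactly), it should reproduce $-\frac{\sqrt{\pi}}{d-3}\frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{1}{2(d-2)})}$, the coefficient entering $S^{AdS}_A$. A sketch (illustrative check only):

```python
from math import gamma, pi, sqrt

def regularized_area_integral(d, n=200_000):
    # lim_{eps->0} [ int_eps^1 u^-(d-2)/sqrt(1-u^(2(d-2))) du
    #                - eps^-(d-3)/(d-3) ]
    # = int_0^1 u^-(d-2) [ (1-u^(2(d-2)))^(-1/2) - 1 ] du - 1/(d-3).
    # The substitution u = 1 - v^2 tames the square-root endpoint at u = 1.
    p = 2 * (d - 2)
    total = 0.0
    for k in range(n):
        v = (k + 0.5) / n
        u = 1.0 - v * v
        total += 2.0 * v * u ** (-(d - 2)) * ((1.0 - u ** p) ** -0.5 - 1.0)
    return total / n - 1.0 / (d - 3)

def regularized_closed_form(d):
    # -sqrt(pi)/(d-3) * Gamma((d-1)/(2(d-2))) / Gamma(1/(2(d-2)))
    a = 2.0 * (d - 2)
    return -sqrt(pi) / (d - 3) * gamma((d - 1) / a) / gamma(1.0 / a)
```

For $d=4$ both sides evaluate to $\approx -0.599$, consistent with the pure-$AdS$ entanglement entropy being negative relative to the subtracted divergence.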
The expressions for $S^{AdS}_{A}$ and $S^{ext}_{A}$ read \begin{eqnarray} S_A^{AdS} = - \frac{(2\sqrt{\pi})^{d-2}}{4G_{(d)} (d-3)} \left(\frac{L}{l}\right)^{d-3} \left(\frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{1}{2(d-2)})}\right)^{d-2} \label{ee25} \end{eqnarray} \begin{eqnarray} S_A^{ext} = \frac{L^{d-3}l^2 r_h^{d-1}}{16 \sqrt{\pi} G_{(d)}} \frac{d-2}{d (d-3)} \left(\frac{\Gamma(\frac{1}{2(d-2)})}{\Gamma(\frac{d-1}{2(d-2)})}\right)^2 \frac{\Gamma(\frac{1}{d-2})}{\Gamma(\frac{d}{2(d-2)})} ~. \label{ee26} \end{eqnarray} The relation between the mass of the extremal black hole and its horizon radius is given by \begin{eqnarray} r_h^{d-1} = \frac{d-3}{2(d-2)} M^{ext} ~. \label{ee27} \end{eqnarray} Substituting this in eq.\eqref{ee26}, we obtain \begin{eqnarray} S_A^{ext} = k L^{d-3} l^2 M^{ext} \label{ee28} \end{eqnarray} where \begin{eqnarray} k = \frac{1}{32\;d\; G_{(d)}\sqrt{\pi}} \left( \frac{\Gamma(\frac{1}{2(d-2)})}{\Gamma(\frac{d-1}{2(d-2)})}\right)^2 \frac{\Gamma(\frac{1}{d-2})}{\Gamma(\frac{d}{2(d-2)})} ~. \label{ee29} \end{eqnarray} It is reassuring to note that the above expression reduces to the result in \cite{Chaturvedi:2016kbk} in the $d=4$ limit. \subsubsection{Large charge limit}{\label{sel}} In this subsection we are going to compute the HEE of an extremal $AdS$-RN black hole whose charge $Q$ is large. By the extremality condition (\ref{extremality}), this means that the horizon radius $r_h$ is also large. This in turn implies $r_h l\gg 1$. As the horizon radius is very large, we can assume that the horizon is very close to the turning point of the extremal surface ($r_h \sim r_t$). Now looking at the area integral (\ref{areaue}), we find that the dominant contribution to the finite part of the integral comes from the $u\rightarrow 1$ limit. On the other hand, defining $u_0 =\frac{r_t}{r_h}$, we see that $u_0 \sim 1$. Hence most of the contribution to the finite part of the area integral comes from the near horizon limit.
We should then Taylor expand the lapse function (\ref{lapseue}) around $u_0$ to evaluate the area integral. For this Taylor expansion to be valid one must show that $u-u_0$ is small enough. Since the dominant contribution comes from the region $u \rightarrow 1$ and $u_0 \sim 1$, $u$ is very close to $u_0$ wherever the integrand contributes appreciably. Hence we can now Taylor expand eq.(\ref{lapseue}) and neglect higher order terms to obtain \begin{eqnarray} f(u) &=& f(u_0) + f^{\prime}(u_0) (u-u_0) +\frac{f^{\prime\prime}(u_0)}{2!} (u-u_0)^2 + \mathcal{O}((u-u_0)^3) \nonumber \\ & = & (d-1)(d-2) \left(1- \frac{u}{u_0}\right)^2 + \mathcal{O}((u-u_0)^3) \nonumber \\ &\simeq& (d-1)(d-2) \left(1- \frac{r_h u}{r_t}\right)^2 ~. \label{ee30} \end{eqnarray} Using this approximate form of $f(u)$, the length of the entangling region becomes \begin{eqnarray} l &=& \frac{2}{r_t \sqrt{(d-1)(d-2)}} \int_0^1 \frac{u^{d-2} du}{\sqrt{1-u^{2(d-2)}}} \frac{1}{(1- \frac{r_h}{r_t} u)} ~. \end{eqnarray} To simplify this integral we make a binomial expansion and the length integral now takes the form \begin{eqnarray} \frac{lr_t}{2}&=& \frac{1}{ \sqrt{(d-1)(d-2)}} \sum_{n=0}^{\infty} \left(\frac{r_h}{r_t}\right)^n \int_0^1 \frac{u^{n+d-2} du}{\sqrt{1-u^{2(d-2)}}} \nonumber \\ &=& \frac{1}{2(d-2)^{3/2}} \sqrt{\frac{\pi}{d-1}} \sum_{n=0}^{\infty} \frac{\Gamma(\frac{n+d-1}{2(d-2)})}{\Gamma(\frac{n+2d-3}{2(d-2)})} \left(\frac{r_h}{r_t}\right)^n ~. \label{ee31} \end{eqnarray} This series diverges in the limit $r_t \rightarrow r_h$. Using gamma function properties and Stirling's formula, one can check that for large values of $n$ the summand goes as $\frac{\sqrt{2(d-2)}}{\sqrt{n}} \left(\frac{r_h}{r_t}\right)^n $.
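The double zero of the extremal lapse at the horizon, which underlies the quadratic approximation above, is straightforward to confirm numerically (a sketch with $u_0 = r_t/r_h$ set to 1 for the check):

```python
def lapse_extremal(u, d, u0=1.0):
    # f(u) = 1 - 2(d-2)/(d-3) (u/u0)^(d-1) + (d-1)/(d-3) (u/u0)^(2(d-2)),
    # the extremal lapse written in terms of u0 = r_t/r_h.
    x = u / u0
    return (1.0 - 2.0 * (d - 2) / (d - 3) * x ** (d - 1)
            + (d - 1) / (d - 3) * x ** (2 * (d - 2)))

def lapse_quadratic(u, d, u0=1.0):
    # Near-horizon approximation (d-1)(d-2)(1 - u/u0)^2, following from
    # f(u0) = f'(u0) = 0 and f''(u0) = 2(d-1)(d-2)/u0^2.
    return (d - 1) * (d - 2) * (1.0 - u / u0) ** 2
```

Evaluating both functions slightly away from $u=u_0$ shows the quadratic form tracking the exact lapse to the expected cubic accuracy.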
To get a finite value we isolate the divergent terms to obtain \begin{eqnarray} lr_t &=& \frac{1}{(d-2)^{3/2}} \sqrt{\frac{\pi}{d-1}} \frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{2d-3}{2(d-2)})} + \sqrt{\frac{\pi}{(d-1)(d-2)}} \sum_{n=1}^{\infty} \left( \frac{\Gamma(\frac{n+d-1}{2(d-2)})}{(d-2)\Gamma(\frac{n+2d-3}{2(d-2)})} - \sqrt{\frac{2}{(d-2)n}} \right) \left(\frac{r_h}{r_t}\right)^n \nonumber \\ && + \frac{1}{(d-2)} \sqrt{\frac{2\pi}{d-1}} Li_{\frac{1}{2}}\left(\frac{r_h}{r_t}\right) \label{ee32} \end{eqnarray} where \begin{eqnarray} Li_{\frac{1}{2}} (\frac{r_h}{r_t}) = \sum_{n=1}^{\infty} \frac{1}{\sqrt{n}} \left(\frac{r_h}{r_t}\right)^n \label{ee33} \end{eqnarray} is the polylogarithm function. As the horizon radius $r_h$ is very close to the turning point $r_t$ of the extremal surface, we can assume that $r_t = r_h (1+ \epsilon)$ where $\epsilon$ is a very small positive number \cite{Hubeny:2012ry}. Substituting it in eq.(\ref{ee32}), we get \begin{eqnarray} lr_h = k_1 + \sqrt{\frac{2}{d-1}} \left(\frac{\pi}{d-2}\right)\frac{1}{\sqrt{\epsilon}} + \mathcal{O}(\epsilon) \label{ee34} \end{eqnarray} where \begin{eqnarray} k_1 &=& \sqrt{\frac{\pi}{d-1}} \frac{1}{(d-2)^{3/2}} \frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{2d-3}{2(d-2)})} + \sqrt{\frac{2\pi}{d-1}} \frac{1}{(d-2)} \zeta (\frac{1}{2})\nonumber \\ && + \sqrt{\frac{\pi}{(d-1)(d-2)}} \sum_{n=1}^{\infty} \left( \frac{\Gamma(\frac{n+d-1}{2(d-2)})}{(d-2)\Gamma(\frac{n+2d-3}{2(d-2)})} -\sqrt{\frac{2}{(d-2)n}} \right). \end{eqnarray} We can also invert eq.(\ref{ee34}) to obtain \begin{eqnarray} \epsilon \approx \frac{2\pi^2}{(d-1)(d-2)^2} \frac{1}{(lr_h - k_1)^2} ~. \end{eqnarray} The $r_h$ appearing in the above equation can be replaced by eq.(\ref{ee14}) to get $\epsilon$ in terms of the black hole charge $Q$.
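The large-$n$ behaviour used in this subtraction, $\Gamma(x)/\Gamma(x+\frac{1}{2})\sim x^{-1/2}$ applied to the summand, can be checked directly (an illustrative check, not part of the derivation):

```python
from math import gamma, sqrt

def summand_ratio(n, d):
    # Gamma((n+d-1)/(2(d-2))) / Gamma((n+2d-3)/(2(d-2))): the arguments
    # differ by exactly 1/2, so the ratio decays like x^(-1/2).
    a = 2 * (d - 2)
    return gamma((n + d - 1) / a) / gamma((n + 2 * d - 3) / a)

def summand_asymptotic(n, d):
    # Large-n estimate sqrt(2(d-2)/n), the term subtracted inside the
    # sum to isolate the polylogarithmic divergence.
    return sqrt(2 * (d - 2) / n)
```

Already at moderate $n$ the ratio agrees with the asymptotic to better than a percent, confirming that the subtracted series converges.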
\noindent Now the extremal surface area (\ref{areaue}) reads \begin{equation} \mathcal{A} = \frac{2 (L r_t)^{d-3}}{\sqrt{(d-1)(d-2)}} \int_0^1 du \frac{u^{-(d-2)}}{\sqrt{1-u^{2(d-2)}}}\frac{1}{(1- \frac{r_h}{r_t} u)} ~. \end{equation} Again by using the binomial expansion we get \begin{eqnarray}{\label{areasum1}} \mathcal{A} = \frac{2(Lr_t)^{d-3}}{\sqrt{(d-1)(d-2)}} \sum_{n=0}^{\infty} \left(\frac{r_h}{r_t}\right)^n \int^{1}_{0} \frac{u^{n-d+2}du}{\sqrt{1-u^{2(d-2)}}} ~. \end{eqnarray} The above sum is divergent for the terms corresponding to $n<(d-2)$. Let us regularize it from $n=0$ to $n=d-3$. To regularize the divergent terms we have to introduce the IR cut-off $r_b$ in the integrals. Let us start with the integral corresponding to $n=0$: \begin{eqnarray} \mathcal{A}_0^{finite} &=& \frac{2(Lr_t)^{d-3}}{\sqrt{(d-1)(d-2)}} \int_{\frac{r_t}{r_b}}^{1} du \frac{1}{u^{d-2}\sqrt{1-u^{2(d-2)}}} - \frac{2(L r_b)^{d-3}}{\sqrt{(d-1)(d-2)}} \nonumber \\ &=& -\frac{2\sqrt{\pi} (Lr_t)^{d-3}}{(d-3)\sqrt{(d-1)(d-2)}} \frac{\Gamma \left( \frac{d-1}{2(d-2)} \right)}{\Gamma \left(\frac{1}{2(d-2)} \right)} ~. \end{eqnarray} The term corresponding to $n=1$ is given by \begin{eqnarray} \mathcal{A}_1^{finite} &=& \frac{2(Lr_t)^{d-3}}{\sqrt{(d-1)(d-2)}} \left(\frac{r_h}{r_t}\right) \int^{1}_{0} du \frac{u^{-(d-3)}}{\sqrt{1-u^{2(d-2)}}} \nonumber \\ &=& \frac{2 r_h L^{d-3} r_t^{d-4}}{\sqrt{(d-1)(d-2)}} \left[\int_{\frac{r_t}{r_b}}^1 du \frac{1}{u^{d-3}} + \sum_{k=1}^{\infty}\frac{\Gamma (k+\frac{1}{2})}{\sqrt{\pi} \Gamma (k+1)} \int_0^1 du u^{3-d +2k(d-2)}\right]. \end{eqnarray} In the above equation, we have separated the first term as it is divergent and we have regularized it. Since $r_t$ and $r_b$ are both large ($r_t \sim r_b$), the first term does not contribute.
The finite value of the integral is \begin{equation} \mathcal{A}_1^{finite} =\frac{2 r_h L^{d-3} r_t^{d-4}}{\sqrt{(d-1)(d-2)}} \left[\frac{1}{d-4} + \frac{\sqrt{\pi}}{2(d-2)}\frac{\Gamma (\frac{4-d}{2(d-2)})}{\Gamma (\frac{2}{2(d-2)})} \right] ~. \end{equation} In general, the expressions for the regularized terms are \begin{equation} \mathcal{A}_m^{finite} = \frac{2(Lr_t)^{d-3}}{\sqrt{(d-1)(d-2)}} \left(\frac{r_h}{r_t}\right)^m \left[\frac{1}{d-m-3}+ \frac{\sqrt{\pi}}{2(d-2)}\frac{\Gamma (\frac{m-d+3}{2(d-2)})}{\Gamma (\frac{m+1}{2(d-2)})} \right] \end{equation} for $m=1,2,\dots,(d-4)$. Let us now look at the $n=d-3$ term: \begin{eqnarray} \mathcal{A}_{d-3} &=& \frac{2 (Lr_h)^{d-3}}{\sqrt{(d-1)(d-2)}} \int_0^1 du \frac{1}{u \sqrt{1-u^{2(d-2)}}} \nonumber \\ &=& \frac{2 (Lr_h)^{d-3}}{\sqrt{(d-1)(d-2)}} \left[\int_{\frac{r_t}{r_b}}^{1} du\; \frac{1}{u}+ \sum_{k=1}^{\infty}\frac{\Gamma (k+\frac{1}{2})}{\sqrt{\pi} \Gamma (k+1)} \int_0^1 du\; u^{2k(d-2)-1}\right] \nonumber \\ &=& \frac{2 (Lr_h)^{d-3}}{\sqrt{(d-1)(d-2)}}\left[ -\log (\frac{r_t}{r_b}) +\frac{\log 4}{2(d-2)}\right]\nonumber\\ & \approx & \frac{2 (Lr_h)^{d-3}}{\sqrt{(d-1)(d-2)}} \frac{\log 4}{2(d-2)} ~. \end{eqnarray} The remaining terms in eq.\eqref{areasum1} corresponding to $n \geq (d-2)$ are given as \begin{eqnarray} \mathcal{A}_{n\geq (d-2)} &=& \frac{2(Lr_t)^{d-3}}{\sqrt{(d-1)(d-2)}} \sum_{n=(d-2)}^{\infty} \left(\frac{r_h}{r_t}\right)^n \int_{0}^{1} \frac{u^{n-d+2}du}{\sqrt{1-u^{2(d-2)}}} \nonumber \\ &=& \frac{2(Lr_t)^{d-3}}{\sqrt{(d-1)(d-2)}} \sum_{n=(d-2)}^{\infty} \frac{\sqrt{\pi}}{2(d-2)} \frac{\Gamma(\frac{n-d+3}{2(d-2)})}{\Gamma(\frac{n+1}{2(d-2)})} \left(\frac{r_h}{r_t}\right)^n ~. \end{eqnarray} Therefore the above contribution diverges as $r_t$ approaches $r_h$, since for large $n$ the factor inside the summation goes as $\sim \frac{1}{\sqrt{n}} \left(\frac{r_h}{r_t}\right)^n$.
To remove the divergence we use the identity $\Gamma (n+1) =n \Gamma (n)$ to rewrite it as \begin{equation} \mathcal{A}_{n\geq (d-2)}=\frac{(Lr_t)^{d-3}\sqrt{\pi}}{\sqrt{(d-1)(d-2)}} \sum_{n=(d-2)}^{\infty} \left\{ \frac{1}{(d-2)}+ \frac{1}{(n-d+3)} \right\} \frac{\Gamma(\frac{n+d-1}{2(d-2)})}{\Gamma(\frac{n+2d-3}{2(d-2)})} \left(\frac{r_h}{r_t}\right)^n ~. \end{equation} Now, for large values of $n$, the second term goes as $\frac{\sqrt{2(d-2)}}{n\sqrt{n}} \left(\frac{r_h}{r_t}\right)^n$. Therefore the series is convergent. Using eq.(\ref{ee31}) in the above equation we obtain \begin{eqnarray}{\label{3.44}} \mathcal{A}_{n\geq (d-2)}^{finite} &=& \frac{(Lr_t)^{d-3}\sqrt{\pi}}{\sqrt{(d-1)(d-2)}} \left[ \frac{\sqrt{(d-2)(d-1)}}{\sqrt{\pi}} lr_t -\sum_{m=0}^{d-3} \frac{\Gamma(\frac{m+d-1}{2(d-2)})}{(d-2)\Gamma(\frac{m+2d-3}{2(d-2)})} \left(\frac{r_h}{r_t}\right)^m \nonumber \right. \\ && \left. + \sum_{n=d-2}^{\infty} \ \frac{1}{(n-d+3)}\frac{\Gamma(\frac{n+d-1}{2(d-2)})}{\Gamma(\frac{n+2d-3}{2(d-2)})} \left(\frac{r_h}{r_t}\right)^n \right] ~. \end{eqnarray} The leading contribution in $\mathcal{A}_{n\geq (d-2)}^{finite}$ comes from the limit $r_t =r_h$. The second series in eq.(\ref{3.44}) is convergent at leading order. If we want to find the subleading order contributions, we have to set $r_t=r_h(1+\epsilon)$ and expand it binomially. Now the second series in eq.(\ref{3.44}) may not be convergent at the subleading order. Therefore we isolate the divergent terms of the series up to $\mathcal{O}(\epsilon)$ and rewrite eq.(\ref{3.44}) as \begin{eqnarray} \mathcal{A}_{n\geq (d-2)}^{finite} &=& \frac{(Lr_t)^{d-3}\sqrt{\pi}}{\sqrt{(d-1)(d-2)}} \left[ \frac{\sqrt{(d-2)(d-1)}}{\sqrt{\pi}} lr_t -\sum_{m=0}^{d-3} \frac{\Gamma(\frac{m+d-1}{2(d-2)})}{(d-2)\Gamma(\frac{m+2d-3}{2(d-2)})} \left(\frac{r_h}{r_t}\right)^m \nonumber \right. \\ && \left.
+ \sum_{n=d-2}^{\infty} \left( \frac{1}{n-d+3}\frac{\Gamma(\frac{n+d-1}{2(d-2)})}{\Gamma(\frac{n+2d-3}{2(d-2)})} - \frac{\sqrt{2(d-2)}}{n\sqrt{n}}\right) \left(\frac{r_h}{r_t}\right)^n + \sum_{n=d-2}^{\infty} \frac{\sqrt{2(d-2)}}{n\sqrt{n}} \left(\frac{r_h}{r_t}\right)^n \right]. \nonumber \\ \end{eqnarray} Now we can write \begin{eqnarray} \sum_{n=d-2}^{\infty} \frac{\sqrt{2(d-2)}}{n\sqrt{n}} \left(\frac{r_h}{r_t}\right)^n = \sqrt{2(d-2)} \left[ Li_{\frac{3}{2}} \left[\frac{r_h}{r_t}\right] - \sum_{m=1}^{d-3} \frac{1}{m\sqrt{m}} \left(\frac{r_h}{r_t}\right)^m \right]. \end{eqnarray} We can now put all the results together in eq.(\ref{areasum1}) to write the total area of the extremal surface as \begin{eqnarray} \mathcal{A}^{finite} &=& \frac{(Lr_t)^{d-3}}{\sqrt{(d-1)(d-2)}} \left[ \sqrt{(d-2)(d-1)} lr_t - \frac{2(d-2)\sqrt{\pi}}{d-3} \frac{\Gamma (\frac{d-1}{2(d-2)})}{\Gamma (\frac{1}{2(d-2)})} \right.\nonumber \\ && +\sum_{n=1}^{d-4}\left(\frac{1}{d-n-3}+ \frac{\sqrt{\pi}}{2(d-2)}\frac{\Gamma (\frac{n-d+3}{2(d-2)})}{\Gamma (\frac{n+1}{2(d-2)})}- \frac{\sqrt{\pi}}{2(d-2)} \frac{\Gamma (\frac{n+d-1}{2(d-2)})}{\Gamma (\frac{n+2d-3}{2(d-2)})} \right)\left(\frac{r_h}{r_t}\right)^n \nonumber\\ &&+ \left(\frac{\log 4}{(d-2)} -\frac{\sqrt{\pi}}{(d-2)\Gamma (3/2)}\right) \left(\frac{r_h}{r_t}\right)^{d-3} \nonumber \\ &&+ \sqrt{\pi}\sum_{n=d-2}^{\infty} \left( \frac{1}{n-d+3}\frac{\Gamma(\frac{n+d-1}{2(d-2)})}{\Gamma(\frac{n+2d-3}{2(d-2)})}- \frac{\sqrt{2(d-2)}}{n\sqrt{n}}\right)\left(\frac{r_h}{r_t}\right)^n \nonumber\\ && \left. + \sqrt{2\pi(d-2)} \left( Li_{\frac{3}{2}} \left[\frac{r_h}{r_t}\right] - \sum_{n=1}^{d-3} \frac{1}{n\sqrt{n}} \left(\frac{r_h}{r_t}\right)^n \right)\right] ~. \end{eqnarray} Now we substitute $r_t = r_h(1+\epsilon)$ in the above expression.
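At $r_t=r_h$ the polylogarithm above reduces to $Li_{\frac{3}{2}}(1)=\zeta(3/2)$, which is the origin of the $\zeta\left(\frac{3}{2}\right)$ appearing in the constant $K_1$ below. As an illustrative numerical aside (not part of the derivation), this constant can be reproduced from a tail-corrected partial sum:

```python
# Illustrative check: Li_{3/2}(1) = zeta(3/2) = sum_{n>=1} n^{-3/2}.
# A direct partial sum plus the integral tail estimate
#   sum_{n>N} n^{-3/2} ~ int_{N+1/2}^oo x^{-3/2} dx = 2/sqrt(N+1/2)
# reproduces the known value zeta(3/2) = 2.612375348685488...
import math

N = 10**6
partial = sum(n**-1.5 for n in range(1, N + 1))
zeta_32 = partial + 2.0 / math.sqrt(N + 0.5)
print(zeta_32)
```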
After simplification we finally obtain \begin{eqnarray} \mathcal{A}^{finite} = L^{d-3} l r_h^{d-2} + (Lr_h)^{d-3} \left( K_{1} + K_{2} \sqrt{\epsilon} +K_3 \epsilon\right) + \mathcal{O}(\epsilon^{3/2}) \end{eqnarray} where \begin{eqnarray} K_{1} &=& \frac{1}{\sqrt{(d-1)(d-2)}} \left[ -2\sqrt{\pi}\frac{d-2}{d-3} \frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{1}{2(d-2)})}+ \frac{\log 4}{(d-2)} -\frac{\sqrt{\pi}}{(d-2)\Gamma (3/2)} \right. \nonumber \\ && \left. +\sqrt{2\pi (d-2)}\left[ \zeta\left(\frac{3}{2}\right) - \sum_{n=1}^{d-3}\frac{1}{n\sqrt{n}} \right] +\sqrt{\pi} \sum_{n=d-2}^{\infty} \left[\frac{1}{n-d+3} \frac{\Gamma(\frac{n+d-1}{2(d-2)})}{\Gamma(\frac{n+2d-3}{2(d-2)})} -\frac{\sqrt{2(d-2)}}{n\sqrt{n}} \right] \right. \nonumber \\ && \left. + \sum_{n=1}^{d-4} \left(\frac{1}{d-n-3} +\frac{d-2}{n+1}\frac{\Gamma (\frac{n-d+3}{2(d-2)})}{\Gamma (\frac{n+1}{2(d-2)})}\right) \right] \nonumber \\ K_{2} &=& -\frac{2\sqrt{2}\pi}{\sqrt{d-1}} \nonumber \\ K_3 &=& \frac{2}{\sqrt{(d-1)(d-2)}} \left[1+\frac{d}{d-1}\frac{\Gamma(\frac{-1}{2(d-2)})}{\Gamma(\frac{d-3}{2(d-2)})} + \sqrt{\frac{\pi (d-2)}{2}}\left( (d-1)\zeta(\tfrac{3}{2})-\zeta(\tfrac{1}{2})\right)\right] ~. \end{eqnarray} Therefore the renormalized holographic entanglement entropy in the large charge regime for the extremal black hole is given by \begin{eqnarray} S_{A}^{finite} = L^{d-3}l S_{BH}^{ext} + \frac{(Lr_h)^{d-3}}{4G_N^d} \left( K_{1} + K_{2} \sqrt{\epsilon}+ K_3 \epsilon\right)+\mathcal{O}(\epsilon^{3/2}) \end{eqnarray} where $S_{BH}^{ext} = \frac{r_h^{d-2}}{4G_N^d}~$. \subsection{Non-extremal black hole} \subsubsection{ Small Charge limit} \textbf{Low temperature limit} \\ In this subsection we shall compute the HEE for a subsystem in the boundary theory whose bulk dual is a non-extremal black hole with small charge $Q$. We further assume that the Hawking temperature $T_H$ of the black hole is also small. The non-extremality condition implies $Q^2 \neq \frac{d-1}{d-3} r_h^{2(d-2)}$.
As the temperature of the black hole $T_H$ is small, eq.(\ref{bhtemp}) implies that $r_h$ has to be small. Again, eq.(\ref{extremality}) in the small charge limit suggests the same criterion. Thus in the small charge and low temperature limit, the horizon radius $r_h$ of the black hole is very small, so we can make the assumption $\frac{r_h}{r_t} \ll 1$. Now we replace the charge $Q$ by the quantity $\alpha= \frac{Q}{r_h^{d-2}}$ in eq.(\ref{lapsu}), so that the lapse function becomes \begin{equation} f(u) = 1 - \left\{ 1+ \alpha^2\left(1 - \left(\frac{r_h u}{r_t}\right)^{d-3}\right) \right\} \left(\frac{r_h u}{r_t}\right)^{d-1} ~. \end{equation} To find the values of the expressions \eqref{elu} and \eqref{areau}, we have to use an approximated expression for $\frac{1}{\sqrt{f(u)}}$. As we have already made the assumption of low temperature, eq.(\ref{bhtemp}) implies that $\frac{Q}{r_h^{d-2}}=\alpha \sim 1$. We can now Taylor expand $\frac{1}{\sqrt{f(u)}}$ around $\frac{r_h}{r_t} \sim 0$ and neglect higher order terms to obtain \begin{equation} \frac{1}{\sqrt{f(u)}}= 1+ \frac{1+\alpha^2}{2} \left(\frac{r_h u}{r_t}\right)^{d-1} + \mathcal{O}(\left(\frac{r_h u}{r_t}\right)^{2(d-2)}) ~. \end{equation} Using this in eq.(\ref{elu}) we obtain the length of the subsystem to be \begin{eqnarray} l &=& \frac{2}{r_t} \left[ \int_{0}^{1} \frac{u^{d-2}du}{\sqrt{1-u^{2(d-2)}}} + \frac{1+\alpha^2}{2} \left(\frac{r_h}{r_t}\right)^{d-1} \int_{0}^{1} \frac{u^{2d-3}du}{\sqrt{1-u^{2(d-2)}}} \right] \nonumber \\ \Rightarrow r_t &=& \frac{2}{l} \left[\sqrt{\pi} \frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{1}{2(d-2)})} + \frac{1+\alpha^2}{2} \left(\frac{r_h}{r_t}\right)^{d-1} \frac{\sqrt{\pi}}{2(d-2)^2} \frac{\Gamma(\frac{1}{d-2})}{\Gamma(\frac{3d-4}{2(d-2)})} \right] ~.
\label{ee51} \end{eqnarray} To get the solution of the turning point $r_t$ of the extremal surface in terms of the length ($l$) of the subsystem, we use a perturbative technique to obtain \begin{eqnarray} r_t = \frac{2\sqrt{\pi}}{l} \left[\frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{1}{2(d-2)})} + \frac{1+\alpha^2}{(d-2)^2} \frac{(lr_h)^{d-1}}{2^{d+1}} \left(\frac{\Gamma(\frac{1}{2(d-2)})}{\sqrt{\pi}\Gamma(\frac{d-1}{2(d-2)})}\right)^{d-1} \frac{\Gamma(\frac{1}{d-2})}{\Gamma(\frac{3d-4}{2(d-2)})} \right] ~. \label{ee53} \end{eqnarray} Now the extremal surface area reads \begin{eqnarray} \mathcal{A} = 2 (Lr_t)^{d-3} \left[ \int_{0}^{1} \frac{u^{-d+2}du}{\sqrt{1-u^{2(d-2)}}} + \frac{1+\alpha^2}{2} \left(\frac{r_h}{r_t}\right)^{d-1} \int_{0}^{1} \frac{u du}{\sqrt{1-u^{2(d-2)}}} \right] ~. \end{eqnarray} The first integral of the above expression is divergent. This can be regularized by introducing a UV cut-off $\frac{1}{r_b}$. So, the finite part of the extremal surface area is \begin{eqnarray} \mathcal{A}^{finite} &=& 2 (Lr_t)^{d-3} \left[ \int^{1}_{\frac{r_t}{r_b}} \frac{u^{-d+2}du}{\sqrt{1-u^{2(d-2)}}} + \frac{1+\alpha^2}{2} \left(\frac{r_h}{r_t}\right)^{d-1} \frac{\sqrt{\pi}}{2(d-2)} \frac{\Gamma(\frac{1}{d-2})}{\Gamma(\frac{d}{2(d-2)})} \right] - \frac{2(Lr_b)^{d-3}}{d-3} \nonumber \\ &=& \frac{(Lr_t)^{d-3}\sqrt{\pi}}{d-2} \left[ \frac{\Gamma(\frac{3-d}{2(d-2)})}{\Gamma(\frac{1}{2(d-2)})} + \frac{1+\alpha^2}{2} \left(\frac{r_h}{r_t}\right)^{d-1} \frac{\Gamma(\frac{1}{d-2})}{\Gamma(\frac{d}{2(d-2)})} \right] ~. \end{eqnarray} Now we shall substitute eq.(\ref{ee53}) in the above equation to express the extremal surface area in terms of the subsystem length $l$. This yields \begin{eqnarray} \mathcal{A}^{finite} &=& \left(\frac{L}{l}\right)^{d-3} \left[ -\frac{(2\sqrt{\pi})^{d-2}}{d-3} \left(\frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{1}{2(d-2)})}\right)^{d-2} + \frac{1+\alpha^2}{8\sqrt{\pi}d} (lr_h)^{d-1} \right. \nonumber\\ && \hspace{40mm} \left.
\times \left( \frac{\Gamma(\frac{1}{2(d-2)})}{\Gamma(\frac{d-1}{2(d-2)})}\right)^2 \frac{\Gamma(\frac{1}{d-2})}{\Gamma(\frac{d}{2(d-2)})} \right] ~. \end{eqnarray} From the definition of the entanglement entropy we therefore obtain \begin{eqnarray} S_{A}^{finite} = \frac{\mathcal{A}^{finite}}{4 G_N^d} = S_{A}^{AdS} + S_{A}^{non-ext} \end{eqnarray} where \begin{eqnarray} S_A^{AdS} &=& - \frac{(2\sqrt{\pi})^{d-2}}{4 G_N^d (d-3)} \left(\frac{L}{l}\right)^{d-3} \left( \frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{1}{2(d-2)})}\right)^{d-2} \\ S_{A}^{non-ext} &=& \frac{(1+\alpha^2)L^{d-3}\;l^2\;r_h^{d-1}}{32 \;d \sqrt{\pi} G_{(d)}} \left( \frac{\Gamma(\frac{1}{2(d-2)})}{\Gamma(\frac{d-1}{2(d-2)})}\right)^2 \frac{\Gamma(\frac{1}{d-2})}{\Gamma(\frac{d}{2(d-2)})} ~. \label{ee56} \end{eqnarray} These expressions reproduce the results of \cite{Chaturvedi:2016kbk} in the $d=4$ limit. We can now use the mass of the non-extremal black hole, $M^{non-ext} = r_h^{d-1} (1+\alpha^2)$, to express $ S_{A}^{non-ext}$ in a more convenient way as \begin{equation} S_{A}^{non-ext} = k L^{d-3} l^2M^{non-ext} \end{equation} where \begin{equation} k = \frac{1}{32\;d\; G_{(d)}\sqrt{\pi}} \left( \frac{\Gamma(\frac{1}{2(d-2)})}{\Gamma(\frac{d-1}{2(d-2)})}\right)^2 \frac{\Gamma(\frac{1}{d-2})}{\Gamma(\frac{d}{2(d-2)})} \end{equation} which is the same as eq.(\ref{ee29}).\\ \noindent \textbf{High temperature limit} \\ In this subsection we will investigate the behaviour of the HEE for a subsystem in the boundary whose bulk dual is a non-extremal black hole with small charge but a high Hawking temperature. In the high temperature limit the expression for the temperature of the black hole \eqref{bhtemp} and the non-extremality condition \eqref{extremality} together suggest that the horizon radius $r_h$ has to be large. Therefore one may easily see that $\frac{Q^2}{r_h^{2(d-2)}} \ll 1 $.
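Before proceeding, the small-charge expansion of $\frac{1}{\sqrt{f(u)}}$ used above can be sanity checked numerically (illustrative only; $d=5$, $\alpha^2=1$ and $\frac{r_h}{r_t}=0.1$ are sample values). Writing the RN-AdS lapse in the standard form $f = 1-(1+\alpha^2)w^{d-1}+\alpha^2 w^{2(d-2)}$ with $w=\frac{r_h u}{r_t}$, the error of the first-order expansion should indeed be of order $w^{2(d-2)}$:

```python
# Illustrative check: 1/sqrt(f) ~ 1 + (1+alpha^2)/2 * w^{d-1} with an error
# of order w^{2(d-2)}, for the RN-AdS lapse in the small charge regime.
d = 5
alpha2 = 1.0          # sample alpha^2 = Q^2 / r_h^{2(d-2)} ~ 1 at low T
x = 0.1               # sample ratio r_h/r_t << 1

ratios = []
for u in (0.3, 0.7, 1.0):
    w = x * u
    f = 1.0 - (1.0 + alpha2) * w**(d - 1) + alpha2 * w**(2 * (d - 2))
    exact = f**-0.5
    approx = 1.0 + 0.5 * (1.0 + alpha2) * w**(d - 1)
    # error measured in units of the claimed order w^{2(d-2)}
    ratios.append(abs(exact - approx) / w**(2 * (d - 2)))
print(ratios)
```

The ratios stay of order one (close to $\alpha^2/2$), confirming that the neglected terms start at $\mathcal{O}(w^{2(d-2)})$.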
We now define a new quantity $\delta^2 = \frac{(d-3)Q^2}{(d-1)r_h^{2(d-2)}} \ll 1$ to express the lapse function as \begin{eqnarray} f(u) &=& 1- \left(\frac{r_h u}{r_t}\right)^{d-1} -\frac{d-1}{d-3} \left(\frac{r_h u}{r_t}\right)^{d-1} \delta^2 \left(1- \left(\frac{r_h u}{r_t}\right)^{d-3} \right) \\ \Rightarrow \frac{1}{\sqrt{f(u)}} &\approx & \frac{1}{\sqrt{1- \left(\frac{r_h u}{r_t}\right)^{d-1}}} \left[1+ \frac{(d-1)\delta^2}{2(d-3)} \left(\frac{r_h u}{r_t}\right)^{d-1} \frac{1-\left(\frac{r_h u}{r_t}\right)^{d-3}}{1- \left(\frac{r_h u}{r_t}\right)^{d-1}} \right] ~. \end{eqnarray} In the last line we have made a Taylor expansion of the function around $\delta =0$ and neglected the higher order terms. We can use this expression of the lapse function in eq.(\ref{elu}) to find the subsystem length to be \begin{eqnarray}{\label{3.65}} l = \frac{2}{r_t} &&\left[\int_0^1 \frac{u^{d-2}du}{\sqrt{1-u^{2(d-2)}}} \frac{1}{\sqrt{1-\left(\frac{r_h u}{r_t}\right)^{d-1}}} + \frac{(d-1)\delta^2}{2(d-3)} \left(\frac{r_h}{r_t}\right)^{d-1} \right.\nonumber\\ && \left. \times \int_0^1 \frac{u^{2d-3}du}{\sqrt{1- u^{2(d-2)}}} \frac{\left(1-\left(\frac{r_h u}{r_t}\right)^{d-3} \right)}{\left(1-\left(\frac{r_h u}{r_t}\right)^{d-1} \right)^{3/2}} \right]. \nonumber \\ \end{eqnarray} It is not possible to evaluate both the above integrals analytically in their existing form. We use the following identities \begin{eqnarray}{\label{binomial}} \frac{1}{\sqrt{1-y}} = \sum_{n=0}^\infty \frac{\Gamma (n+1/2)}{\sqrt{\pi}\Gamma (n+1)}y^n~; \quad \frac{1}{(1-y)^{\frac{3}{2}}}=\sum_{n=0}^\infty \frac{2 \Gamma(n+3/2)}{\sqrt{\pi}\Gamma (n+1)}y^n \end{eqnarray} to express the above integrals in a convenient form which makes an analytical evaluation possible.
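The two binomial identities \eqref{binomial} are easily verified numerically; an illustrative truncated-sum check (sample value $y=0.3$):

```python
# Illustrative check of the binomial identities (binomial):
#   1/sqrt(1-y)   = sum_n Gamma(n+1/2)/(sqrt(pi) Gamma(n+1)) y^n
#   1/(1-y)^(3/2) = sum_n 2 Gamma(n+3/2)/(sqrt(pi) Gamma(n+1)) y^n
import math

y = 0.3
N = 100   # truncation; the neglected tail is O(y^N)

s_half = sum(math.gamma(n + 0.5) / (math.sqrt(math.pi) * math.gamma(n + 1))
             * y**n for n in range(N))
s_three_half = sum(2 * math.gamma(n + 1.5) / (math.sqrt(math.pi) * math.gamma(n + 1))
                   * y**n for n in range(N))

print(s_half, (1 - y)**-0.5)
print(s_three_half, (1 - y)**-1.5)
```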
We rewrite eq.(\ref{3.65}) as \begin{eqnarray}{\label{lrt}} \frac{lr_t}{2} &=& \sum_{n=0}^{\infty} \frac{\Gamma(n+\frac{1}{2})}{\sqrt{\pi}\Gamma(n+1)} \left(\frac{r_h}{r_t}\right)^{(d-1)n} \int_0^1 \frac{u^{(d-1)n+d-2}du}{\sqrt{1-u^{2(d-2)}}} \nonumber \\ &&+ \frac{(d-1)\delta^2}{2(d-3)} \left(\frac{r_h}{r_t}\right)^{d-1} \sum_{n=0}^{\infty} \frac{2\Gamma(n+\frac{3}{2})}{\sqrt{\pi}\Gamma(n+1)} \left(\frac{r_h}{r_t}\right)^{(d-1)n}\int_0^1 \frac{u^{(d-1)n+2d-3}du}{\sqrt{1-u^{2(d-2)}}} \left(1-\left(\frac{r_h u}{r_t}\right)^{d-3} \right) \nonumber \\ \Rightarrow lr_t &= &\sum_{n=0}^{\infty} \frac{\Gamma(n+\frac{1}{2})}{(d-2)\Gamma(n+1)} \frac{\Gamma\left(\frac{(d-1)n+d-1}{2(d-2)}\right)}{\Gamma\left(\frac{(d-1)n+2d-3}{2(d-2)}\right)} \left(\frac{r_h}{r_t}\right)^{(d-1)n} + \frac{(d-1)\delta^2}{d-3} \sum_{n=0}^{\infty} \frac{\Gamma(n+\frac{3}{2})}{(d-2)\Gamma(n+1)} \nonumber \\ && \times \left(\frac{r_h}{r_t}\right)^{(d-1)n+d-1} \left[ \frac{\Gamma\left(\frac{(d-1)n+2d-2}{2(d-2)}\right)}{\Gamma\left(\frac{(d-1)n+3d-4}{2(d-2)}\right)} - \left(\frac{r_h}{r_t}\right)^{d-3} \frac{\Gamma\left(\frac{(d-1)n+3d-5}{2(d-2)}\right)}{\Gamma\left(\frac{(d-1)n+4d-7}{2(d-2)}\right)} \right] ~. \end{eqnarray} Let us now look at the form of the divergences of the different terms of the above expression. For large values of $n$, the first term behaves as $\sim \frac{1}{n}\left(\frac{r_h}{r_t}\right)^{n}$ and the second and third terms behave in the same way, as $ \sim \left(\frac{r_h}{r_t}\right)^{n}$. Therefore the divergences of the second and third terms cancel each other.
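The term-by-term integrations above all rest on the Beta-function formula $\int_0^1 \frac{u^{p}\,du}{\sqrt{1-u^{2b}}} = \frac{\sqrt{\pi}}{2b}\frac{\Gamma(\frac{p+1}{2b})}{\Gamma(\frac{p+1}{2b}+\frac12)}$ with $b=d-2$. As an illustrative sanity check (not part of the derivation), the formula can be tested against exponents where the integral is elementary:

```python
# Illustrative check of the Beta-function formula used throughout:
#   int_0^1 u^p / sqrt(1-u^{2b}) du = sqrt(pi)/(2b) Gamma(z)/Gamma(z+1/2),
#   z = (p+1)/(2b),  b = d-2.
# Test against three elementary cases:
#   p = b-1  : arcsin(u^b)/b        -> pi/(2b)
#   p = 2b-1 : substitution w=u^{2b} -> 1/b
#   p = 3b-1 : (1/2b) B(3/2, 1/2)    -> pi/(4b)
import math

d = 5
b = d - 2

def F(p):
    z = (p + 1) / (2.0 * b)
    return math.sqrt(math.pi) / (2 * b) * math.gamma(z) / math.gamma(z + 0.5)

checks = [
    (F(b - 1), math.pi / (2 * b)),
    (F(2 * b - 1), 1.0 / b),
    (F(3 * b - 1), math.pi / (4 * b)),
]
print(checks)
```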
We isolate the divergent terms to get \begin{eqnarray} lr_t &=& \frac{\sqrt{\pi}}{d-2} \frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{2d-3}{2(d-2)})} + \sum_{n=1}^{\infty} \left[ \frac{\Gamma(n+\frac{1}{2})}{(d-2)\Gamma(n+1)}\frac{\Gamma(\frac{(d-1)n+d-1}{2(d-2)})}{\Gamma(\frac{(d-1)n+2d-3}{2(d-2)})} -\sqrt{\frac{2}{(d-2)(d-1)}} \frac{1}{n} \right] \left(\frac{r_h}{r_t}\right)^{(d-1)n} \nonumber \\ &+& \frac{(d-1)\delta^2}{(d-3)(d-2)} \sum_{n=0}^{\infty} \left[ \frac{\Gamma(n+\frac{3}{2})}{\Gamma(n+1)}\frac{\Gamma(\frac{(d-1)n+2d-2}{2(d-2)})}{\Gamma(\frac{(d-1)n+3d-4}{2(d-2)})} -\sqrt{\frac{2(d-2)}{(d-1)}} \right] \left(\frac{r_h}{r_t}\right)^{(d-1)n+d-1} \nonumber \\ &-& \frac{(d-1)\delta^2}{(d-3)(d-2)} \sum_{n=0}^{\infty} \left[ \frac{\Gamma(n+\frac{3}{2})}{\Gamma(n+1)}\frac{\Gamma(\frac{(d-1)n+3d-5}{2(d-2)})}{\Gamma(\frac{(d-1)n+4d-7}{2(d-2)})} -\sqrt{\frac{2(d-2)}{(d-1)}} \right] \left(\frac{r_h}{r_t}\right)^{(d-1)n+2d-4} \nonumber \\ &+& \frac{\delta^2}{d-3} \sqrt{\frac{2(d-1)}{d-2}} \frac{\left(1-\left(\frac{r_h}{r_t}\right)^{d-3}\right)}{\left(1-\left(\frac{r_h}{r_t}\right)^{d-1}\right)} \left(\frac{r_h}{r_t}\right)^{d-1} - \sqrt{\frac{2 }{(d-1)(d-2)}} \log\left(1- \left(\frac{r_h}{r_t}\right)^{d-1}\right) ~. \end{eqnarray} Now in the high temperature limit, $r_h$ takes a large value, so that $r_h \sim r_t$. Hence we can write $r_t = (1+\epsilon) r_h$ where $\epsilon$ is a very small positive number.
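The subtraction constant $\sqrt{\frac{2}{(d-2)(d-1)}}\frac{1}{n}$ used in the first bracket follows from Stirling asymptotics of the gamma functions; an illustrative numerical check (sample $d=5$) that the bracketed combination is indeed convergent:

```python
# Illustrative check of the large-n subtraction constant: the summand
#   t_n = Gamma(n+1/2)/((d-2) Gamma(n+1))
#         * Gamma(((d-1)n+d-1)/(2(d-2))) / Gamma(((d-1)n+2d-3)/(2(d-2)))
# approaches sqrt(2/((d-2)(d-1))) / n at large n (log-gamma avoids overflow).
import math

d = 5
const = math.sqrt(2.0 / ((d - 2) * (d - 1)))

def t(n):
    lg = math.lgamma
    a = ((d - 1) * n + d - 1) / (2.0 * (d - 2))
    b = ((d - 1) * n + 2.0 * d - 3) / (2.0 * (d - 2))
    return math.exp(lg(n + 0.5) - lg(n + 1) + lg(a) - lg(b)) / (d - 2)

big = 10**6
print(big * t(big), const)
```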
With this the above equation reads \begin{eqnarray} lr_h = -\sqrt{\frac{2}{(d-1)(d-2)}} \log((d-1)\epsilon)+ C_1 + \delta^2 C_2 + \mathcal{O}(\epsilon) \label{eel1} \end{eqnarray} where \begin{eqnarray} C_1 &=& \frac{\sqrt{\pi}}{d-2} \frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{2d-3}{2(d-2)})} + \sum_{n=1}^{\infty} \left[ \frac{\Gamma(n+\frac{1}{2})}{(d-2)\Gamma(n+1)}\frac{\Gamma(\frac{(d-1)n+d-1}{2(d-2)})}{\Gamma(\frac{(d-1)n+2d-3}{2(d-2)})} -\sqrt{\frac{2}{(d-2)(d-1)}} \frac{1}{n} \right] \nonumber \\ C_2 &=& \sqrt{\frac{2}{(d-2)(d-1)}} + \frac{(d-1)}{(d-3)(d-2)} \sum_{n=0}^{\infty} \left[ \frac{\Gamma(n+\frac{3}{2})}{\Gamma(n+1)}\frac{\Gamma(\frac{(d-1)n+2d-2}{2(d-2)})}{\Gamma(\frac{(d-1)n+3d-4}{2(d-2)})} -\sqrt{\frac{2(d-2)}{(d-1)}} \right] \nonumber \\ && - \frac{(d-1)}{(d-3)(d-2)} \sum_{n=0}^{\infty} \left[ \frac{\Gamma(n+\frac{3}{2})}{\Gamma(n+1)}\frac{\Gamma(\frac{(d-1)n+3d-5}{2(d-2)})}{\Gamma(\frac{(d-1)n+4d-7}{2(d-2)})} -\sqrt{\frac{2(d-2)}{(d-1)}} \right] ~. \end{eqnarray} From the definition of $\delta$, we had earlier obtained \begin{eqnarray} T= \frac{(d-1)r_h}{4\pi} (1-\delta^2). \end{eqnarray} Substituting the value of $r_h$ from the above equation in eq.(\ref{eel1}) and simplifying, we obtain \begin{eqnarray} \epsilon \approx \epsilon_{ent} e^{-\sqrt{\frac{d-2}{2(d-1)}} 4\pi Tl(1+\delta^2)} \end{eqnarray} where $\epsilon_{ent} = \frac{1}{d-1} e^{\sqrt{\frac{(d-1)(d-2)}{2}}\left(C_1 + C_2\delta^2\right)}$. Now the surface area reads \begin{eqnarray} \mathcal{A} &=& 2 (Lr_t)^{d-3} \left[ \int_0^1 du \frac{u^{-d+2}}{\sqrt{1-u^{2(d-2)}}}\frac{1}{\sqrt{1- \left(\frac{r_h u}{r_t}\right)^{d-1}}} \right. \nonumber\\ &&\hspace{20mm}\left.
+\; \frac{(d-1)\delta^2}{2(d-3)} \left(\frac{r_h}{r_t}\right)^{d-1} \int_0^1 du \frac{u\left(1- \left(\frac{r_h u}{r_t}\right)^{d-3}\right) }{\sqrt{1-u^{2(d-2)}}\left(1-\left(\frac{r_h u}{r_t}\right)^{d-1}\right)^{3/2}}\right] \nonumber \\ &=& 2 (Lr_t)^{d-3} \left[ \int_0^1 du \frac{u^{-d+2}}{\sqrt{1-u^{2(d-2)}}} \left(1+ \sum_{n=1}^{\infty} \frac{\Gamma(n+\frac{1}{2})}{\sqrt{\pi}\Gamma(n+1)} \left(\frac{r_h u}{r_t}\right)^{(d-1)n} \right) + \frac{(d-1)\delta^2}{2(d-3)} \left(\frac{r_h}{r_t}\right)^{d-1} \right. \nonumber \\ && \left. ~~~~~~\times \int_0^1 du \frac{u\left(1- \left(\frac{r_h u}{r_t}\right)^{d-3}\right) }{\sqrt{1-u^{2(d-2)}}}\left( \sum_{n=0}^{\infty} \frac{2\Gamma(n+\frac{3}{2})}{\sqrt{\pi}\Gamma(n+1)} \left(\frac{r_h u}{r_t}\right)^{(d-1)n}\right)\right] ~. \nonumber\\ \end{eqnarray} In the last line of the above equation we have used the relation \eqref{binomial}, which allows the integrals to be evaluated analytically. The first integral corresponds to the area integral for pure AdS spacetime. Only this integral is divergent, as $u \rightarrow 0$. To remove this divergence, we introduce a UV cut-off $\frac{1}{r_b}$ and add a counterterm to obtain a finite value of the surface area \begin{eqnarray} \mathcal{A}^{finite} &=& 2 (Lr_t)^{d-3} \left[ \int_{\frac{r_t}{r_b}}^1 du \frac{u^{-d+2}}{\sqrt{1-u^{2(d-2)}}} + \sum_{n=1}^{\infty} \frac{\Gamma(n+\frac{1}{2})}{\sqrt{\pi}\Gamma(n+1)} \left(\frac{r_h}{r_t}\right)^{(d-1)n} \int_{0}^1 du \frac{u^{(d-1)n-d+2}}{\sqrt{1-u^{2(d-2)}}} \right. \nonumber \\ && \left.
+ \frac{(d-1)\delta^2}{2(d-3)} \sum_{n=0}^{\infty} \frac{2\Gamma(n+\frac{3}{2})}{\sqrt{\pi}\Gamma(n+1)} \left(\frac{r_h}{r_t}\right)^{(d-1)n+d-1} \int_0^1 du \frac{u^{(d-1)n+1}}{\sqrt{1-u^{2(d-2)}}} \left(1- \left(\frac{r_h u}{r_t}\right)^{d-3}\right) \right] \nonumber\\ &&-\frac{2(Lr_b)^{d-3}}{d-3} \nonumber\\ &=& (Lr_t)^{d-3} \left[ \frac{\sqrt{\pi}}{d-2}\frac{\Gamma(\frac{3-d}{2(d-2)})}{\Gamma(\frac{1}{2(d-2)})} + \sum_{n=1}^{\infty} \frac{1}{d-2} \frac{\Gamma(n+1/2)\Gamma \left(\frac{(d-1)n-d+3}{2(d-2)} \right)}{\Gamma(n+1)\Gamma \left(\frac{(d-1)n+1}{2(d-2)}\right)}\left(\frac{r_h}{r_t}\right)^{(d-1)n}\right. \nonumber\\ && \left.+ \frac{\delta^2 (d-1)}{(d-2)(d-3)} \sum_{n=0}^{\infty} \frac{\Gamma(n+3/2)\Gamma \left(\frac{2+n(d-1)}{2(d-2)}\right)}{\Gamma(n+1)\Gamma \left(\frac{d+n(d-1)}{2(d-2)}\right)}\left(\frac{r_h}{r_t}\right)^{(n+1)(d-1)} \right.\nonumber\\ && \left. -\frac{\delta^2 (d-1)}{(d-2)(d-3)} \sum_{n=0}^{\infty} \frac{\Gamma(n+3/2)\Gamma \left(\frac{(d-1)(n+1)}{2(d-2)}\right)}{\Gamma(n+1)\Gamma \left(\frac{n(d-1)+2d-3}{2(d-2)}\right)} \left(\frac{r_h}{r_t}\right)^{n(d-1)+2(d-2)} \right] \nonumber\\ \end{eqnarray} \begin{eqnarray} &=& (Lr_t)^{d-3} \left[ \frac{\sqrt{\pi}}{d-2}\frac{\Gamma(\frac{3-d}{2(d-2)})}{\Gamma(\frac{1}{2(d-2)})} \right. \nonumber\\ && \left.+\sum_{n=1}^{\infty} \frac{1}{d-2}\left(1+\frac{d-2}{n+1+(n-1)(d-2)}\right) \frac{\Gamma(n+1/2)\Gamma \left(\frac{(n+1)(d-1)}{2(d-2)} \right)}{\Gamma(n+1)\Gamma \left(\frac{n+1+(n+2)(d-2)}{2(d-2)}\right)}\left(\frac{r_h}{r_t}\right)^{(d-1)n}\right. \nonumber\\ && \left.+ \frac{\delta^2 (d-1)}{(d-2)(d-3)} \sum_{n=0}^{\infty}\left(1+\frac{d-2}{2+n(d-1)}\right) \frac{\Gamma(n+3/2)\Gamma \left(\frac{(n+2)(d-1)}{2(d-2)}\right)}{\Gamma(n+1)\Gamma \left(\frac{n+2+(n+3)(d-2)}{2(d-2)}\right)}\left(\frac{r_h}{r_t}\right)^{(n+1)(d-1)} \right.\nonumber\\ && \left.
-\frac{\delta^2 (d-1)}{(d-2)(d-3)} \sum_{n=0}^{\infty}\left(1+\frac{d-2}{n(d-1)+d-1}\right) \frac{\Gamma(n+3/2)\Gamma \left(\frac{n+1+(n+3)(d-2)}{2(d-2)}\right)}{\Gamma(n+1)\Gamma \left(\frac{n+1+(n+4)(d-2)}{2(d-2)}\right)} \left(\frac{r_h}{r_t}\right)^{n(d-1)+2(d-2)} \right] ~. \nonumber\\ \end{eqnarray} Now we can use eq.(\ref{lrt}) to recast the surface area as \begin{eqnarray}{\label{3.75}} \mathcal{A}^{finite}&=& (Lr_t)^{d-3} \left[ \frac{\sqrt{\pi}}{3-d}\frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{2d-3}{2(d-2)})} + lr_t + \sum_{n=1}^{\infty} \frac{\Gamma(n+\frac{1}{2})\Gamma(\frac{(d-1)n+d-1}{2(d-2)}) \left(\frac{r_h}{r_t}\right)^{(d-1)n}}{\left( (d-1)n-(d-3)\right)\Gamma(n+1)\Gamma(\frac{(d-1)n+2d-3}{2(d-2)})} +\frac{(d-1)\delta^2}{d-3} \right. \nonumber \\ && \left. \times \sum_{n=0}^{\infty} \frac{\Gamma(n+\frac{3}{2})}{\Gamma(n+1)} \left\{ \frac{\Gamma(\frac{(d-1)n+2(d-1)}{2(d-2)})\left(\frac{r_h}{r_t}\right)^{(d-1)n+d-1}}{\left( (d-1)n+2\right)\Gamma(\frac{(d-1)n+3d-4}{2(d-2)})} -\frac{\Gamma(\frac{(d-1)n+3d-5}{2(d-2)})\left(\frac{r_h}{r_t}\right)^{(d-1)n+2d-4}}{( (d-1)n+d-1)\Gamma(\frac{(d-1)n+4d-7}{2(d-2)})} \right\} \right] ~. \nonumber \\ \end{eqnarray} The first summation term goes as $\sim \frac{1}{n^2}\left(\frac{r_h}{r_t}\right)^n$, whereas the second and third summation terms go as $\sim \frac{1}{n}\left(\frac{r_h}{r_t}\right)^n$ for large $n$. The leading contribution to the area $\mathcal{A}^{finite}$ comes from $r_h=r_t$. Those summation terms are not divergent in this limit. But since, as shown in \cite{Hubeny:2012ry}, the turning point of the extremal surface cannot penetrate the horizon radius $r_h$, we have to substitute $r_t=r_h(1+\epsilon)$ in eq.(\ref{3.75}) and expand binomially to get terms that are higher order in $\epsilon$. Then it is easy to see that the summation terms are not convergent at order $\mathcal{O}(\epsilon)$ and beyond.
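The quoted $\frac{1}{n^2}$ fall-off of the first summation in eq.(\ref{3.75}) can be made quantitative; an illustrative numerical check (sample $d=5$) of the asymptotic coefficient that later reappears as the subtraction term in $K_1$:

```python
# Illustrative check: the coefficient in the first summation of eq.(3.75),
#   c_n = Gamma(n+1/2) Gamma(((d-1)n+d-1)/(2(d-2)))
#         / ( ((d-1)n-(d-3)) Gamma(n+1) Gamma(((d-1)n+2d-3)/(2(d-2))) ),
# falls off like sqrt(2(d-2)) / ((d-1)^{3/2} n^2) at large n, so the series
# stays finite at r_t = r_h.
import math

d = 5
const = math.sqrt(2.0 * (d - 2)) / (d - 1)**1.5

def c(n):
    lg = math.lgamma
    a = ((d - 1) * n + d - 1) / (2.0 * (d - 2))
    b = ((d - 1) * n + 2.0 * d - 3) / (2.0 * (d - 2))
    return math.exp(lg(n + 0.5) - lg(n + 1) + lg(a) - lg(b)) \
        / ((d - 1) * n - (d - 3))

big = 10**6
print(big * big * c(big), const)
```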
As in this case $r_h$ is very large, $\epsilon$ is likely to be a tiny quantity. So we consider contributions to the surface area only up to order $\mathcal{O}(\epsilon)$. We now use $r_t=r_h(1+\epsilon)$ and separate the divergent terms from the summations to express eq.(\ref{3.75}) as \begin{eqnarray} \mathcal{A}^{finite}&=& L^{d-3} l r_h^{d-2} + L^{d-3}r_h^{d-3}(K_1+\delta^2 K_2)\nonumber \\ &&+L^{d-3}r_h^{d-3}(K_3 \epsilon +\delta^2(K_4 \epsilon +K_5 \epsilon \log \epsilon)) \end{eqnarray} where \begin{eqnarray} K_1 &=& \frac{\sqrt{\pi}}{3-d} \frac{\Gamma \left(\frac{d-1}{2(d-2)}\right)}{\Gamma \left(\frac{2d-3}{2(d-2)}\right)}+ \frac{\sqrt{2(d-2)}}{(d-1)^{\frac{3}{2}}}\zeta(2) \nonumber\\ && +\sum_{n=1}^{\infty} \left(\frac{1}{((d-1)n+3-d)}\frac{\Gamma(n+1/2)\Gamma \left(\frac{(n+1)(d-1)}{2(d-2)}\right)}{\Gamma(n+1)\Gamma\left(\frac{(d-1)n+2d-3}{2(d-2)}\right)} -\frac{1}{n^2}\frac{\sqrt{2(d-2)}}{(d-1)^{\frac{3}{2}}}\right) \nonumber \\ K_2 &=& \left(\frac{d-1}{d-3}\right)\left[\left(\frac{\Gamma \left(\frac{d-1}{d-2}\right)}{2\Gamma \left(\frac{3d-4}{2(d-2)}\right)} -\frac{\Gamma\left(\frac{3d-5}{2(d-2)}\right)}{(d-1)\Gamma \left(\frac{4d-7}{2(d-2)} \right)}\right)\Gamma(3/2) \right. \nonumber \\ && \left. + \sum_{n=1}^{\infty} \left(\frac{1}{2+n(d-1)}\frac{\Gamma \left(n+\frac{3}{2}\right)\Gamma \left(\frac{(d-1)(n+2)}{2(d-2)}\right)}{\Gamma(n+1)\Gamma \left(\frac{(d-1)n+3d-4}{2(d-2)}\right)}- \frac{1}{n}\frac{\sqrt{2(d-2)}}{(d-1)^{\frac{3}{2}}}\right) \right. \nonumber \\ && \left. + \sum_{n=1}^{\infty} \left(\frac{1}{(n+1)(d-1)}\frac{\Gamma \left(n+\frac{3}{2}\right)\Gamma \left(\frac{(d-1)n+3d-5}{2(d-2)}\right)}{\Gamma(n+1)\Gamma \left(\frac{(d-1)n+4d-7}{2(d-2)}\right)}- \frac{1}{n}\frac{\sqrt{2(d-2)}}{(d-1)^{\frac{3}{2}}}\right) \right]
\nonumber \\ K_3 &=& \sqrt{\frac{2(d-2)}{d-1}} \left(\log (d-1) -1\right) \nonumber \\ K_4 &=& \left(\frac{d-1}{d-3}\right) \left[\frac{2(d-2)\Gamma \left(\frac{3d-5}{2(d-2)}\right)}{(d-1)\Gamma \left(\frac{4d-7}{2(d-2)}\right)} -\frac{(d-1)\Gamma\left(\frac{d-1}{d-2}\right)}{2 \Gamma\left(\frac{3d-4}{2(d-2)}\right)}\right]\Gamma\left(\frac{3}{2}\right) -\sqrt{\frac{2(d-2)}{d-1}}\log(d-1) \nonumber\\ K_5 &=& -\sqrt{\frac{2(d-2)}{d-1}} ~. \end{eqnarray} \noindent Therefore the renormalized HEE in the small charge regime is given by \begin{eqnarray} S_{A}^{finite} &=& L^{d-3}l S_{BH} + \frac{(Lr_h)^{d-3}}{4G_N^d}(K_1 +\delta^2 K_2) \nonumber \\ && +\frac{L^{d-3}r_h^{d-3}}{4G_N^d}(K_3 \epsilon +\delta^2(K_4 \epsilon +K_5 \epsilon \log \epsilon)) \end{eqnarray} where $S_{BH} = \frac{r_h^{d-2}}{4G_N^d}~$. \subsubsection{Large charge limit} In this section we compute the HEE of a non-extremal $AdS$-RN black hole in the large charge limit. The non-extremality condition (\ref{extremality}) sets the horizon radius ($r_h$) at a large value and we have $r_h l\gg 1$. Hence all the assumptions made in section (\ref{sel}) are also applicable in this case. We can therefore Taylor expand eq.(\ref{lapseue}) around $u_0=\frac{r_t}{r_h}$ and neglect higher order terms to obtain the lapse function as \begin{eqnarray} f(u) &=& f(u_0) + f^{\prime}(u_0) (u-u_0) + \mathcal{O}((u-u_0)^2) \nonumber \\ &\approx& \left[(d-1)-\frac{(d-3)Q^2}{r_h^{2(d-2)}} \right]\left(1- \frac{u}{u_0}\right) ~. \end{eqnarray} Let us denote the prefactor in the above equation by $\sigma$, so that $\sigma =(d-1)-\frac{(d-3)Q^2}{r_h^{2(d-2)}}$. This factor also appeared in the expression for the black hole temperature in eq.(\ref{bhtemp}). In the low temperature limit we have $\sigma \rightarrow 0$ and in the high temperature limit we have $\sigma \rightarrow (d-1)$.
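The linearization above is easily verified; an illustrative numerical sketch (sample values $d=5$, $q^2 \equiv Q^2/r_h^{2(d-2)}=0.5$), with the non-extremal lapse written in the standard form $f(w)=1-(1+q^2)w^{d-1}+q^2 w^{2(d-2)}$, $w=u/u_0$:

```python
# Illustrative check of the near-horizon linearization f(w) ~ sigma (1-w),
# sigma = (d-1) - (d-3) q^2, for the non-extremal RN-AdS lapse.
d = 5
q2 = 0.5                           # sample charge, q2 < (d-1)/(d-3) (non-extremal)
sigma = (d - 1) - (d - 3) * q2

def f(w):
    return 1.0 - (1.0 + q2) * w**(d - 1) + q2 * w**(2 * (d - 2))

# slope at the horizon: f'(1) = -(1+q2)(d-1) + 2(d-2) q2 = -sigma
fp = -(1.0 + q2) * (d - 1) + 2 * (d - 2) * q2

# the linearized lapse is accurate close to w = 1
w = 0.999
ratio = f(w) / (sigma * (1.0 - w))
print(-fp, sigma, ratio)
```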
Now the length of the subsystem reads \begin{eqnarray}{\label{3.80}} l &=& \frac{2}{r_t\sqrt{\sigma}} \int_0^1 \frac{u^{d-2}du}{\sqrt{1-u^{2(d-2)}}} \frac{1}{\sqrt{1- \frac{r_h u}{r_t}}} \nonumber \\ \Rightarrow lr_t &=& \frac{2}{\sqrt{\sigma}} \sum_{n=0}^{\infty} \frac{\Gamma(n+\frac{1}{2})}{\sqrt{\pi}\Gamma(n+1)} \left(\frac{r_h}{r_t}\right)^n \int_0^1 \frac{u^{n+d-2} du}{\sqrt{1-u^{2(d-2)}}} \nonumber \\ &=& \frac{1}{(d-2)\sqrt{\sigma}} \sum_{n=0}^{\infty} \frac{\Gamma(n+\frac{1}{2})}{\Gamma(n+1)} \frac{\Gamma(\frac{n+d-1}{2(d-2)})}{\Gamma(\frac{n+2d-3}{2(d-2)})} \left(\frac{r_h}{r_t}\right)^n \end{eqnarray} where we have used eq.(\ref{binomial}). Using gamma function properties and the Stirling formula, one can see that for large values of $n$ the summand goes as $\sqrt{\frac{2}{d-2}}\,\frac{1}{n} \left(\frac{r_h}{r_t}\right)^n $. Therefore the series is divergent as $r_t \rightarrow r_h$. To get a finite value we isolate the divergent terms to get \begin{eqnarray} lr_t &=& \frac{\sqrt{\pi}}{(d-2)\sqrt{\sigma}} \frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{2d-3}{2(d-2)})} + \frac{1}{\sqrt{\sigma}} \sum_{n=1}^{\infty} \left( \frac{\Gamma(n+\frac{1}{2})}{(d-2)\Gamma(n+1)} \frac{\Gamma(\frac{n+d-1}{2(d-2)})}{\Gamma(\frac{n+2d-3}{2(d-2)})} - \sqrt{\frac{2}{d-2}}\frac{1}{n} \right) \left(\frac{r_h}{r_t}\right)^n \nonumber \\ &&- \sqrt{\frac{2}{(d-2)\sigma}} \log\left(1- \frac{r_h}{r_t}\right) ~.
\end{eqnarray} Now we substitute $r_t=r_h (1+\epsilon)$ and expand in $\epsilon$ to finally obtain \begin{eqnarray} \sqrt{\sigma }lr_h = - \sqrt{\frac{2}{d-2}} \log(\epsilon) + D_1 + \mathcal{O}(\epsilon) \label{eel4} \end{eqnarray} where \begin{eqnarray} D_1 = \frac{\sqrt{\pi}}{(d-2)} \frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{2d-3}{2(d-2)})} + \sum_{n=1}^{\infty} \left\{ \frac{\Gamma(n+\frac{1}{2})}{(d-2)\Gamma(n+1)} \frac{\Gamma(\frac{n+d-1}{2(d-2)})}{\Gamma(\frac{n+2d-3}{2(d-2)})} - \sqrt{\frac{2}{d-2}}\frac{1}{n} \right\} ~. \end{eqnarray} From the above equation, one can find \begin{eqnarray} \epsilon = \epsilon_{ent} e^{-\sqrt{\frac{(d-2)\sigma}{2}}\,lr_h} \end{eqnarray} where $\epsilon_{ent} = e^{\sqrt{\frac{d-2}{2}}D_1}$. Now we shall compute the extremal surface area which reads \begin{equation} \mathcal{A} = \frac{2(Lr_t)^{d-3}}{\sqrt{\sigma}} \int_0^1 du\frac{1}{u^{d-2}\sqrt{1-u^{2(d-2)}}}\frac{1}{\sqrt{1-\frac{r_h}{r_t}u}} ~. \end{equation} Since we want an analytical expression of the area we use eq.(\ref{binomial}) to rewrite the area integral as \begin{eqnarray} \mathcal{A} = \frac{2(Lr_t)^{d-3}}{\sqrt{\sigma}} \sum_{n=0}^{\infty} \frac{\Gamma(n+\frac{1}{2})}{\sqrt{\pi}\Gamma(n+1)} \left(\frac{r_h}{r_t}\right)^n \int_{0}^{1} \frac{u^{n-d+2}du}{\sqrt{1-u^{2(d-2)}}} ~. \end{eqnarray} The integrals with $n < (d-2)$ are divergent; the divergence occurs as $u \rightarrow 0$. So we can put a UV cut-off $r_t/r_b$ in the integrals to regularize the terms from $n=0$ to $n=d-3$.
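The subtraction constant $\sqrt{\frac{2}{d-2}}\frac{1}{n}$ that defines $D_1$ above again follows from Stirling asymptotics; an illustrative numerical check (sample $d=5$):

```python
# Illustrative check of the large-n behaviour quoted above:
#   Gamma(n+1/2)/((d-2) Gamma(n+1)) * Gamma((n+d-1)/(2(d-2)))
#       / Gamma((n+2d-3)/(2(d-2)))  ->  sqrt(2/(d-2)) / n,
# which fixes the subtraction term in the series for l r_t (and hence D_1).
import math

d = 5
const = math.sqrt(2.0 / (d - 2))

def term(n):
    lg = math.lgamma
    a = (n + d - 1) / (2.0 * (d - 2))
    b = (n + 2.0 * d - 3) / (2.0 * (d - 2))
    return math.exp(lg(n + 0.5) - lg(n + 1) + lg(a) - lg(b)) / (d - 2)

big = 10**6
print(big * term(big), const)
```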
We use the same procedure as used in section (\ref{sel}) to obtain the finite part of the area integrals as \begin{eqnarray} \mathcal{A}^{finite}_0 &=&-\frac{2\sqrt{\pi} (Lr_t)^{d-3}}{(d-3)\sqrt{\sigma}} \frac{\Gamma \left( \frac{d-1}{2(d-2)} \right)}{\Gamma \left(\frac{1}{2(d-2)} \right)} \nonumber\\ \mathcal{A}^{finite}_m &=& \frac{2(Lr_t)^{d-3}}{\sqrt{\sigma}}\frac{\Gamma(m+\frac{1}{2})}{\sqrt{\pi}\Gamma(m+1)} \left(\frac{r_h}{r_t}\right)^m \left[\frac{1}{d-m-3}+ \frac{\sqrt{\pi}}{2(d-2)}\frac{\Gamma (\frac{m-d+3}{2(d-2)})}{\Gamma (\frac{m+1}{2(d-2)})} \right];\mbox{(m=1,2,...,$d-4$)}\nonumber\\ \mathcal{A}^{finite}_{d-3}&=& \frac{2(Lr_t)^{d-3}\Gamma \left(\frac{2d-5}{2}\right)}{\sqrt{\pi \sigma}\,\Gamma(d-2)} \left(\frac{r_h}{r_t}\right)^{d-3}\frac{\log 4}{2(d-2)} ~. \end{eqnarray} For $n \geq (d-2)$, we get \begin{eqnarray} \mathcal{A}_{n\geq (d-2)} &=& \frac{2(Lr_t)^{d-3}}{\sqrt{\sigma}} \sum_{n=(d-2)}^{\infty} \frac{\Gamma(n+\frac{1}{2})}{\sqrt{\pi}\Gamma(n+1)} \left(\frac{r_h}{r_t}\right)^n \int_{0}^{1} \frac{u^{n-d+2}du}{\sqrt{1-u^{2(d-2)}}} \nonumber \\ &=& \frac{2(Lr_t)^{d-3}}{\sqrt{\sigma}} \sum_{n=(d-2)}^{\infty} \frac{1}{2(d-2)}\frac{\Gamma(n+\frac{1}{2})}{\Gamma(n+1)} \frac{\Gamma(\frac{n-d+3}{2(d-2)})}{\Gamma(\frac{n+1}{2(d-2)})} \left(\frac{r_h}{r_t}\right)^n \nonumber \\ &=& \frac{(Lr_t)^{d-3}}{\sqrt{\sigma}} \sum_{n=(d-2)}^{\infty} \left( \frac{1}{d-2}+ \frac{1}{n-d+3} \right) \frac{\Gamma(n+\frac{1}{2})}{\Gamma(n+1)} \frac{\Gamma(\frac{n+d-1}{2(d-2)})}{\Gamma(\frac{n+2d-3}{2(d-2)})} \left(\frac{r_h}{r_t}\right)^n . \end{eqnarray} Using eq.(\ref{3.80}) we can recast the above expression in terms of the subsystem length as below \begin{eqnarray}{\label{3.89}} \mathcal{A}_{n\geq (d-2)}^{finite} &=& \frac{(Lr_t)^{d-3}}{\sqrt{\sigma}} \left[ \sqrt{\sigma}\; l r_t -\sum_{m=0}^{d-3} \frac{1}{d-2} \frac{\Gamma(m+\frac{1}{2})}{\Gamma(m+1)} \frac{\Gamma(\frac{m+d-1}{2(d-2)})}{\Gamma(\frac{m+2d-3}{2(d-2)})} \left(\frac{r_h}{r_t}\right)^m \nonumber \right.
\\ && \left. + \sum_{n=d-2}^{\infty} \frac{1}{n-d+3} \frac{\Gamma(n+\frac{1}{2})}{\Gamma(n+1)} \frac{\Gamma(\frac{n+d-1}{2(d-2)})}{\Gamma(\frac{n+2d-3}{2(d-2)})} \left(\frac{r_h}{r_t}\right)^n \right]~. \nonumber \\ \end{eqnarray} For large values of $n$, the third term goes as $\frac{\sqrt{2(d-2)}}{n^2} \left(\frac{r_h}{r_t}\right)^n$. Hence this term is finite as $r_t \rightarrow r_h$ and gives a finite leading order contribution. But with $r_t=r_h(1+\epsilon)$, one can see that the third term in eq.(\ref{3.89}) is divergent at first order in $\epsilon$. So we separate the divergent terms to rewrite eq.(\ref{3.89}) as \begin{eqnarray} \mathcal{A}_{n\geq (d-2)}^{finite} &=& \frac{(Lr_t)^{d-3}}{\sqrt{\sigma}} \left[ \sqrt{\sigma} lr_t -\sum_{m=0}^{d-3} \frac{1}{d-2} \frac{\Gamma(m+\frac{1}{2})}{\Gamma(m+1)} \frac{\Gamma(\frac{m+d-1}{2(d-2)})}{\Gamma(\frac{m+2d-3}{2(d-2)})} \left(\frac{r_h}{r_t}\right)^m \nonumber \right. \\ && \left. + \sum_{n=d-2}^{\infty} \left\{ \frac{1}{n-d+3} \frac{\Gamma(n+\frac{1}{2})}{\Gamma(n+1)} \frac{\Gamma(\frac{n+d-1}{2(d-2)})}{\Gamma(\frac{n+2d-3}{2(d-2)})} - \frac{\sqrt{2(d-2)}}{n^2}\right\} \left(\frac{r_h}{r_t}\right)^n \right. \nonumber \\ && \left. + \sum_{n=d-2}^{\infty} \frac{\sqrt{2(d-2)}}{n^2} \left(\frac{r_h}{r_t}\right)^n \right]~. \end{eqnarray} Now we can write \begin{eqnarray} \sum_{n=d-2}^{\infty} \frac{\sqrt{2(d-2)}}{n^2} \left(\frac{r_h}{r_t}\right)^n = \sqrt{2(d-2)} \left[ Li_{2} \left[\frac{r_h}{r_t}\right] - \sum_{m=1}^{d-3} \frac{1}{m^2} \left(\frac{r_h}{r_t}\right)^m \right]. \end{eqnarray} Using this, the total surface area reads \begin{eqnarray} \mathcal{A}^{finite} &=& \frac{(Lr_t)^{d-3}}{\sqrt{\sigma}} \left[ \sqrt{\sigma} lr_t -\frac{2\sqrt{\pi}(d-2)}{d-3}\frac{\Gamma \left(\frac{d-1}{2(d-2)}\right)}{\Gamma \left(\frac{1}{2(d-2)}\right)} \right.\nonumber \\ &&+ \left.
\sum_{n=1}^{d-4} \frac{\Gamma(n+\frac{1}{2})}{\sqrt{\pi}\Gamma(n+1)} \left(\frac{\sqrt{\pi}}{n+1} \frac{\Gamma(\frac{n-d+3}{2(d-2)})}{\Gamma(\frac{n+1}{2(d-2)})} + \frac{2}{[d-(n+3)]}\right) \left(\frac{r_h}{r_t}\right)^n \right. \nonumber \\ &&+ \left.\frac{\Gamma \left(\frac{2d-5}{2}\right)}{(d-2)\sqrt{\pi}\Gamma(d-2)}\left(\log 4-\frac{\sqrt{\pi}}{\Gamma(3/2)}\right)\left(\frac{r_h}{r_t}\right)^{d-3}\right. \nonumber \\ && \left. +\sqrt{2(d-2)} \left( Li_{2} \left[\frac{r_h}{r_t}\right] - \sum_{n=1}^{d-3} \frac{1}{n^2} \left(\frac{r_h}{r_t}\right)^n \right)\right. \nonumber\\ && \left. + \sum_{n=d-2}^{\infty} \left( \frac{1}{n-d+3}\frac{\Gamma(n+\frac{1}{2})}{\Gamma(n+1)}\frac{\Gamma(\frac{n+d-1}{2(d-2)})}{\Gamma(\frac{n+2d-3}{2(d-2)})} - \frac{\sqrt{2(d-2)}}{n^2}\right) \left(\frac{r_h}{r_t}\right)^n \right] ~. \nonumber \\ \label{ee80} \end{eqnarray} Now we substitute $r_t = r_h(1+\epsilon)$ in eq.(\ref{ee80}) to obtain the sub-leading term up to order $\epsilon$. After simplification we finally obtain the finite part of the area of the extremal surface to be \begin{eqnarray} \mathcal{A}^{finite} = L^{d-3} l r_h^{d-2} + \frac{(Lr_h)^{d-3}}{\sqrt{\sigma}} \left\{ K_1^{'} +K_2^{'} \epsilon+ \mathcal{O}(\epsilon^2) \right\} \end{eqnarray} where \begin{eqnarray} K_1^{'} &=& -2\sqrt{\pi}\frac{d-2}{d-3}\frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{1}{2(d-2)})} + \sqrt{2(d-2)}\left(\zeta(2) -\sum_{n=1}^{d-3}\frac{1}{n^2}\right) \nonumber \\ && + \sum_{n=1}^{d-4} \frac{\Gamma\left(n+\frac{1}{2}\right)}{\sqrt{\pi}\Gamma(n+1)} \left[ \frac{\sqrt{\pi}}{n+1} \frac{\Gamma(\frac{n-d+3}{2(d-2)})}{\Gamma(\frac{n+1}{2(d-2)})} + \frac{2}{d-(n+3)}\right] \nonumber\\ && + \frac{\Gamma \left(\frac{2d-5}{2}\right)}{\sqrt{\pi}(d-2)\Gamma(d-2)}\left(\log 4 -\frac{\sqrt{\pi}}{\Gamma(3/2)}\right) \nonumber \\ && + \sum_{n=d-2}^{\infty} \left[\frac{1}{n-d+3} \frac{\Gamma(n+\frac{1}{2})}{\Gamma(n+1)} \frac{\Gamma(\frac{n+d-1}{2(d-2)})}{\Gamma(\frac{n+2d-3}{2(d-2)})} -\frac{\sqrt{2(d-2)}}{n^2} \right]
\nonumber \\ K_2^{'}&=&\frac{\Gamma \left(\frac{2d-7}{2}\right)}{\sqrt{\pi}\Gamma (d-3)}\left(2+\frac{\sqrt{\pi}\Gamma\left(\frac{-1}{2(d-2)}\right)}{(d-3)\Gamma \left(\frac{d-3}{2(d-2)}\right)}\right) -\sqrt{2(d-2)} ~. \end{eqnarray} Therefore, the renormalized holographic entanglement entropy (in the large charge regime for the non-extremal black hole) is given by \begin{eqnarray} S_{A}^{finite} = L^{d-3}l S_{BH}^{ext} + \frac{(Lr_h)^{d-3}}{4G_N\sqrt{\sigma}} \left\{ K_1^{'} +K_2^{'} \epsilon+ \mathcal{O}(\epsilon^2) \right\} \end{eqnarray} where $S_{BH}^{ext} = \frac{r_h^{d-2}}{4G_N^d}~$. \section{Entanglement thermodynamics} In this section, we investigate entanglement thermodynamics in the small charge limit. To carry out this study, note that, by the $AdS$/CFT duality, an extremal RN black hole is dual to the ground state (zero temperature state) of the boundary field theory, while a non-extremal black hole is dual to an excited state (finite temperature state) of the boundary field theory. The difference between the entanglement entropies of these two states leads to the first law of entanglement thermodynamics. Hence we obtain \begin{eqnarray} \Delta S_A = \frac{\Delta E_A}{T_{ent}} \label{eet1} \end{eqnarray} where \begin{eqnarray} \Delta S_A &=& S_A- S_A^{ext} = k L^{d-3} l^2 (M - M^{ext}) \\ \Delta E_A &=& \int_{A} dx_1 dx_2...dx_{d-3} T_{tt}^{temp\neq 0} -\int_{A} dx_{1} dx_2...dx_{d-3} T_{tt}^{temp= 0} \nonumber \\ &=& \frac{d-2}{16\pi G_N^d} L^{d-3} l (M-M^{ext}) ~. \end{eqnarray} Substituting these two relations into eq.(\ref{eet1}), the entanglement temperature reads \begin{eqnarray} T_{ent} = \frac{2(d-2)^2}{\sqrt{\pi}} \left(\frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{1}{2(d-2)})}\right)^2 \left[\frac{1}{\frac{\Gamma(\frac{1}{d-2})}{\Gamma(\frac{d}{2(d-2)})}-\frac{\Gamma(\frac{d-1}{2(d-2)})}{\Gamma(\frac{1}{2(d-2)})}}\right] ~.
\end{eqnarray} \section{Conclusion} We have explicitly investigated the entanglement thermodynamics of $d$-dimensional charged black holes by studying the holographic entanglement entropy in different cases. We computed the holographic entanglement entropy in the extremal and non-extremal cases in two different regimes, namely, the small charge limit and the large charge limit. For the non-extremal black hole, there are two limiting cases, namely, the low temperature limit and the high temperature limit, and we have calculated the holographic entanglement entropy in both. It is observed that the holographic entanglement entropy of the small-charge extremal black hole coincides with that of the small-charge non-extremal black hole in the low temperature regime. We have then established the first law of entanglement thermodynamics for the boundary field theory in the low temperature regime, in arbitrary dimensions, in the small charge limit, and from this we have calculated the entanglement temperature for the system. \section*{Acknowledgment} DG would like to thank DST-INSPIRE for financial support. S.~Gangopadhyay acknowledges the support by DST SERB under Start Up Research Grant (Young Scientist), File No. YSS/2014/000180. He also acknowledges the support of IUCAA, Pune for the Visiting Associateship programme.
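The repeated reduction of the area integrals above rests on the Beta-function identity $\int_0^1 u^{n-d+2}\,(1-u^{2(d-2)})^{-1/2}\,du = \frac{\sqrt{\pi}}{2(d-2)}\,\Gamma\big(\tfrac{n-d+3}{2(d-2)}\big)\big/\Gamma\big(\tfrac{n+1}{2(d-2)}\big)$, which can be spot-checked numerically. The sketch below is purely illustrative (not part of the original computation); it substitutes $1-u^{2(d-2)}=w^2$ so that the integrand becomes $\frac{1}{d-2}(1-w^2)^{x-1}$ with $x=\frac{n-d+3}{2(d-2)}$, bounded on $[0,1]$ whenever $x\geq 1$, and then applies a midpoint rule:

```python
import math

def area_integral(n, d, steps=200_000):
    # Midpoint-rule estimate of the integral of u^(n-d+2) / sqrt(1 - u^(2(d-2)))
    # over [0, 1].  After substituting 1 - u^(2(d-2)) = w^2, the integrand is
    # (1/(d-2)) * (1 - w^2)^(x - 1), with x = (n-d+3)/(2(d-2)).
    x = (n - d + 3) / (2 * (d - 2))
    h = 1.0 / steps
    s = sum((1.0 - ((k + 0.5) * h) ** 2) ** (x - 1.0) for k in range(steps))
    return s * h / (d - 2)

def closed_form(n, d):
    # Right-hand side of the identity, using (n+1)/(2(d-2)) = x + 1/2.
    x = (n - d + 3) / (2 * (d - 2))
    return math.sqrt(math.pi) / (2 * (d - 2)) * math.gamma(x) / math.gamma(x + 0.5)

# Representative case d = 5, n = 10 (x = 4/3 >= 1, so the substituted
# integrand is bounded); the two evaluations agree to high accuracy.
print(area_integral(10, 5), closed_form(10, 5))
```

For $x<1$ the substituted integrand is still singular at $w=1$ and an endpoint-adapted quadrature would be needed; the Gamma-function form of course holds for all $x>0$.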
\section{INTRODUCTION} Explaining model predictions is highly desirable for reliable applications of machine learning. This is especially important in risk-sensitive settings like medicine and credit scoring \citep{medicineshap, medicinexai, medicine-interp, credit-scoring}, where an incorrect model prediction could prove very costly. Explainability is becoming increasingly relevant because of regulations like the General Data Protection Regulation \citep{gdpr}, which may require being able to explain model predictions before deploying a model in the real world. This is less of a challenge for models like linear models and decision trees, which tend to be easier to interpret. However, the same is not true for more complex models like neural networks, where explaining predictions may not be straightforward \citep{why-should-i-trust}. Explainable AI is an area of machine learning which aims to provide methodologies for interpreting model predictions. Various techniques for explaining models have been proposed, with each approach satisfying different properties \citep{xai-review}. In this paper, we focus on Shapley values \citep{shap1, shap2, shap-og}, a popular approach for quantifying feature relevance, which is model-agnostic, i.e., independent of the model implementation. Additionally, this is a local explanation method, i.e., it can be used to explain individual model predictions. Shapley values are based on ideas from cooperative game theory \citep{game-theory} and come with various desirable theoretical properties \citep{kernelshap} which make them very attractive in practice. At a high level, Shapley values treat features as `players' in a game, where the total payout is the model prediction at a given point. To quantify feature importance, this method distributes the total payout among the players in a `fair' manner using a \emph{value} function.
Different types of Shapley value functions have been proposed which differ in the way they distribute payout among players \citep{kernelshap, expondatamanifold}. These can be broadly divided into two categories: (i) \emph{on-manifold} value functions, which only depend on the model behaviour on the input data distribution, and (ii) \emph{off-manifold} value functions which also depend on the model behaviour outside the input data distribution. Off-manifold Shapley values are not robust to changes in model behaviour outside the data distribution. This means that the explanations obtained using these methods may be highly influenced if the model behaviour outside the data distribution changes, even if it remains fixed on the data distribution \citep{expondatamanifold, foolingshap, on-manifold-off-manifold}. Such changes to the model can change the Shapley values drastically, resulting in misleading explanations, and can even be used to hide model biases. On the other hand, while the on-manifold Shapley values are robust to such model perturbations, the explanations obtained are highly sensitive to changes in the feature distribution. Additionally, these methods do not capture the \emph{causal} contribution of features as they attribute importance based on feature correlations. For example, we show that on-manifold Shapley values can be `fooled' into attributing similar importance to two positively correlated features, even if the model depends on only one of them. In this paper, we bridge this gap between \emph{on-manifold} and \emph{off-manifold} Shapley values by proposing ManifoldShap (illustrated in Figure \ref{fig:manifoldshap}), a Shapley value function, which remains robust to changes in model behaviour outside the data distribution, while estimating the \emph{causal} contribution of features. We show that ManifoldShap is significantly less sensitive to changes in the feature distribution than other on-manifold value functions. 
We extend the formal notion of robustness in \citet{on-manifold-off-manifold} by providing an alternative definition which may be more desirable in many cases. We additionally show that our proposed method satisfies both notions of robustness, while other methods do not. Moreover, ManifoldShap satisfies a number of other desirable properties which we verify theoretically and empirically on real-world datasets. \begin{figure}[t] \centering \includegraphics[height=1.5in]{./images/main_fig.png} \caption{The datapoints at which the model is evaluated when computing Shapley values for test point $\textbf{x}$, along with the data manifold. Off-manifold methods evaluate the model outside the data manifold, whereas our proposal, ManifoldShap, restricts model evaluations to the data manifold.} \label{fig:manifoldshap} \vspace{-0.2in} \end{figure} \section{SHAPLEY VALUES} In this section, we introduce Shapley values for model explainability. For any given model $f:\mathcal{X} \rightarrow \mathcal{Y}$, our goal is to obtain localised model explanations at a given point $\textbf{x} \in \mathcal{X}$. We assume that $\mathcal{X} \subseteq \mathbb{R}^d$ and $\mathcal{Y} \subseteq \mathbb{R}$. Shapley values \citep{shap1, shap2, shap-og} provide a natural tool for obtaining such explanations. For a specific input $\textbf{x}$, Shapley values define a way of distributing the difference between $f(\textbf{x})$ and a baseline, which we denote as $b_0$, among the $d$ input features. This can naturally be interpreted as the contribution of each feature towards the difference $f(\textbf{x}) - b_0$, and is commonly referred to as feature attribution. One possible choice of baseline explored in the literature is the model evaluated at an auxiliary input $\textbf{x}'$, i.e., $b_0 = f(\textbf{x}')$. Alternatively, many methods use the average model output $\mathbb{E}[f(\textbf{X})]$ as the baseline, i.e., $b_0 = \mathbb{E}[f(\textbf{X})]$.
This can be used to explain \emph{why} the output at a point $\textbf{x}$ deviates from the average output. The average output provides a more intuitive and interpretable baseline compared to the choice of an auxiliary input $\textbf{x}'$, which can be arbitrary. In this work, we therefore restrict our attention to the latter category. As an example, consider a model which predicts an individual's salary, with input features corresponding to the individual's information. If feature $i \in [d]$ represents the age of the individual, the attribution for feature $i$, which we will denote as $\atr{i}{f}$, tells us the contribution of the individual's age to the salary prediction for $\textbf{x}$, relative to the average salary prediction, i.e., $f(\textbf{x}) - \mathbb{E}[f(\textbf{X})]$. To compute the contribution of feature $i$ at $\textbf{x}$, Shapley values consider a value function $v: 2^{[d]} \rightarrow \mathbb{R}$, where $v$ may implicitly depend on $\textbf{x}$. Given a subset $S\subseteq[d]\setminus\{i\}$, we can intuitively interpret the difference $v(S\cup \{i\}) - v(S)$ as the contribution of feature $i$ w.r.t. the set $S$. The Shapley value for feature $i$ is then defined as a weighted sum over all possible subsets $S$: \[ \atr{i}{f} \coloneqq \sum_{S \subseteq [d] \setminus \{i\}} \frac{|S|!(d-|S|-1)!}{d!} (v(S\cup \{i\}) - v(S)). \] The quantity $\phi_i$ can be intuitively considered as the average contribution of feature $i$ to the prediction at $\textbf{x}$. In order for the explanations obtained to be interpretable and intuitive, the value function $v$ must be chosen such that it satisfies a number of desirable properties. We present some of the most important such properties here: \begin{enumerate} \item \textit{Sensitivity:} If $f$ does not depend on $x_i$, then $v(S\cup \{i\}) = v(S)$, and hence $\atr{i}{f}=0$. \item \textit{Symmetry:} If $f$ is symmetric in components $i$ and $j$ and $x_i = x_j$, then $v(S\cup \{i\}) = v(S\cup \{j\})$ and hence $\phi_i = \phi_j$. \item \textit{Efficiency:} If $\atr{i}{f}$ denotes the attribution of feature $i$ to $f(\textbf{x}) - \mathbb{E}[f(\textbf{X})]$, then $v([d])-v(\emptyset) = f(\textbf{x}) - \mathbb{E}[f(\textbf{X})]$ and hence, $ \sum_i \phi_i = f(\textbf{x}) - \mathbb{E}[f(\textbf{X})]. $ \end{enumerate} Next, we present various commonly used value functions, which can be classified into \emph{off-manifold} and \emph{on-manifold} value functions. \subsection{Off-Manifold Value Functions} This class of value functions does not restrict function evaluations to the data distribution; consequently, computing Shapley values involves evaluating the model on out-of-distribution inputs, where the model has not been trained (see Figure \ref{fig:manifoldshap}). The most commonly used off-manifold value function is Marginal Shapley (MS), also called RBShap \citep{kernelshap}: \paragraph{Marginal Shapley (MS).} \[ v^{\textup{MS}}_{\textbf{x}, f}(S) \coloneqq \mathbb{E}[f(\textbf{x}_S, \textbf{X}_{\bar{S}})].
\] Specifically, Marginal Shapley takes the expectation of $f(\textbf{x}_S, \textbf{X}_{\bar{S}})$ over the marginal density of $\textbf{X}_{\bar{S}}$. In addition to this, there has been some recent work proposing a causal perspective when computing Shapley values \citep{lshap, causalshap, jung2022}. Specifically, these works observe that manually fixing the values of features $\textbf{X}_S$ to $\textbf{x}_S$ when computing Shapley values corresponds to \emph{intervening} on the feature values. In Pearl's do-calculus \citep{pearl, pearl2012the}, this is expressed as $do(\textbf{X}_S = \textbf{x}_S)$. This leads to the definition of the Interventional Shapley (IS) value function: \paragraph{Interventional Shapley (IS).} \begin{align} v^{\textup{IS}}_{\textbf{x}, f}(S) \coloneqq \mathbb{E}[f(\textbf{X}) \mid do(\textbf{X}_S = \textbf{x}_S)]. \label{causalshap} \end{align} A detailed discussion of how Interventional Shapley differs from other \textit{non-causal} value functions is deferred to Section \ref{subsec:limitations-on-man}. How to compute $v^{\textup{IS}}_{\textbf{x}, f}(S)$ depends on the causal structure of the features. \citet{lshap} only consider the causal relations between the function inputs and outputs, rather than between the real-world features and the true output $Y$. This corresponds to the set-up in Figure \ref{fig:dag}, where the true feature values $\tilde{X}_i$ are formally distinguished from the features $X_i$ input into the function $f$, with $X_i$ being a direct causal descendant of $\tilde{X}_i$ and no interactions among the $X_i$. In this set-up, intervening on $\textbf{X}_S$ yields the following interventional distribution: \begin{align*} p(\textbf{X}_{\bar{S}} \mid do(\textbf{X}_S = \textbf{x}_S) ) = p(\textbf{X}_{\bar{S}}).
\end{align*} In this case, the value function $v^{\textup{IS}}_{\textbf{x}, f}(S)$ can straightforwardly be computed as \begin{align*} v^{\textup{IS}}_{\textbf{x}, f}(S) \hspace{-0.1cm}=\hspace{-0.1cm} \mathbb{E}[f(\textbf{X}) \mid do(\textbf{X}_S = \textbf{x}_S)] \hspace{-0.1cm} = \hspace{-0.1cm} \mathbb{E}_{\textbf{X}_{\bar{S}} \sim p(\textbf{X}_{\bar{S}})}[f(\textbf{x}_S, \textbf{X}_{\bar{S}})]. \end{align*} \begin{figure}[ht] \centering \includegraphics[height=1.2in]{./images/dag_janzing.png} \caption{Causal structure considered in \citet{lshap}. The true features are $\tilde{X}_i$ while the features input into the model are $X_i$.} \label{fig:dag} \end{figure} This is equivalent to Marginal Shapley. Therefore, Marginal Shapley can be considered a special case of Interventional Shapley. In contrast, \citet{causalshap} seeks to estimate the causal contributions of the real-world features towards the true output $Y$, and therefore does not distinguish between the true features and the features input into the model. The resulting IS value function also takes into account the causal relations among the true features themselves. \subsection{On-Manifold Value Functions} These value functions only rely on function values on the data distribution when computing Shapley values. As a result, any change in the function outside the data distribution does not change the explanations obtained. One of the first on-manifold value functions proposed was Conditional Expectation Shapley (CES) \citep{kernelshap}: \paragraph{Conditional Expectation Shapley (CES).} \[ v_{\textbf{x}, f}^{\textup{CES}}(S) \coloneqq \mathbb{E}[f(\textbf{X}) \mid \textbf{X}_S = \textbf{x}_S ]. \] Unlike Marginal Shapley, CES takes the expectation of $f(\textbf{x}_S, \textbf{X}_{\bar{S}})$ over the conditional density of $\textbf{X}_{\bar{S}}$ given $\textbf{X}_S = \textbf{x}_S$ (and not the marginal density of $\textbf{X}_{\bar{S}}$).
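The difference between the two expectations is easy to see on a small discrete example. The sketch below is purely illustrative (the joint distribution and model are hypothetical, not taken from our experiments): it evaluates both value functions exactly for two correlated binary features, with a model that depends only on the first, and enumerates the weighted-sum definition of the Shapley value directly:

```python
import itertools
import math

# Illustrative joint distribution: X1 ~ Bernoulli(0.5), and X2 copies X1
# with probability 0.9, so the two features are strongly correlated.
joint = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
d = 2
x = (1, 1)                 # point being explained
f = lambda z: z[0]         # the model depends on x1 only

def v_marginal(S):
    # v^MS(S): average f(x_S, X_Sbar) over the *marginal* of X_Sbar
    return sum(p * f(tuple(x[i] if i in S else z[i] for i in range(d)))
               for z, p in joint.items())

def v_conditional(S):
    # v^CES(S): E[f(X) | X_S = x_S], i.e. average over the *conditional*
    num = sum(p * f(z) for z, p in joint.items()
              if all(z[i] == x[i] for i in S))
    den = sum(p for z, p in joint.items()
              if all(z[i] == x[i] for i in S))
    return num / den

def shapley(v, i):
    # direct enumeration of the weighted-sum definition of phi_i
    others = [j for j in range(d) if j != i]
    phi = 0.0
    for r in range(len(others) + 1):
        for S in itertools.combinations(others, r):
            w = (math.factorial(len(S)) * math.factorial(d - len(S) - 1)
                 / math.factorial(d))
            phi += w * (v(set(S) | {i}) - v(set(S)))
    return phi

print(shapley(v_marginal, 1))     # 0.0 : the inert feature gets no credit
print(shapley(v_conditional, 1))  # 0.2 : the correlated inert feature is credited
```

Here the marginal value function (which, under the causal structure of Figure \ref{fig:dag}, coincides with the interventional one) assigns the inert second feature zero attribution, while the conditional expectation transfers part of the credit to it through the correlation.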
This has undesired implications for the obtained Shapley values, which we discuss in detail in Section \ref{subsec:limitations-on-man}. Apart from this, \citet{on-manifold-off-manifold} recently proposed Joint Baseline Shapley (JBShap), a value function which aims to make Shapley values robust to model changes in regions of low data density. This value function explicitly takes the density $p(\textbf{x})$ into consideration when calculating explanations: \paragraph{Joint Baseline Shapley (JBShap).} \[ v_{\textbf{x}, f, p}^{\textup{J}}(S) \coloneqq f(\textbf{x}_S, \textbf{x}'_{\bar{S}})p(\textbf{x}_S, \textbf{x}'_{\bar{S}}), \] where $\textbf{x}'$ is an auxiliary baseline. The authors also propose an extension of JBShap, called \emph{Random Joint Baseline Shapley} (RJBShap), where the value function averages over all possible baseline values: \paragraph{Random Joint Baseline Shapley (RJBShap).} \[ v_{\textbf{x}, f, p}^{\textup{RJ}}(S) \coloneqq \mathbb{E}_{p_b(\textbf{X}_{\bar{S}})}[f(\textbf{x}_S, \textbf{X}_{\bar{S}})p(\textbf{x}_S, \textbf{X}_{\bar{S}})]. \] Here, $p_b(\textbf{X}_{\bar{S}})$ is some prior distribution over the features $\textbf{x}'_{\bar{S}}$. A natural choice of prior is the marginal density $p(\textbf{X}_{\bar{S}})$, which we use to compute RJBShap later. Having listed the most relevant on- and off-manifold value functions, we discuss their limitations in the following sections; this motivates our proposal of an alternative value function which aims to circumvent them. \subsection{Limitations of off-manifold value functions} As \citet{foolingshap, expondatamanifold} point out, the dependence of Shapley explanations on the off-manifold behaviour of the model can be problematic. For example, computing Interventional Shapley at $\textbf{x}$ requires evaluating the model at points $(\textbf{x}_S, \textbf{X}_{\bar{S}})$ for $S\subseteq[d]$, where $\textbf{X}_{\bar{S}} \sim p(\textbf{X}_{\bar{S}} \mid do(\textbf{X}_S = \textbf{x}_S))$.
Such points may lie outside the distribution of the training data, where the model was not trained. Consider a model which is identical to the ground truth function on the data distribution. The train/test errors of the model will be 0, suggesting that it captures the ground truth function perfectly. However, if the model differs from the ground truth outside the data distribution, the model's Shapley values may be drastically different from the ground truth Shapley values, resulting in highly misleading explanations. This limitation of off-manifold Shapley values can be exploited to `fool' Shapley values into hiding model biases. In \citet{foolingshap}, the authors consider models which are highly biased on the data manifold (i.e., solely rely on sensitive features, like racial background, for predictions). They show that these models can be perturbed outside the data manifold in such a way that the resulting Shapley values give no attribution to the sensitive features, despite the models relying solely on these sensitive features on the data manifold. Therefore, off-manifold Shapley values are highly vulnerable to off-manifold manipulations. \subsection{Limitations of on-manifold value functions}\label{subsec:limitations-on-man} While the on-manifold value functions do not consider model behaviour outside the data distribution, the existing methods can lead to unintuitive or misleading Shapley explanations, as they do not consider the \textit{causal} contributions of features and are highly sensitive to feature correlations. Specifically, as \citet{lshap} point out, when computing feature contributions at $\textbf{x}$, the value function for a subset $S$, $v(S)$, must capture the effect of fixing the feature values $\textbf{X}_S$ to $\textbf{x}_S$. This is \emph{not} given by $\mathbb{E}[f(\textbf{X}) \mid \textbf{X}_S = \textbf{x}_S]$ as in CES, because observing $\textbf{X}_S = \textbf{x}_S$ also changes the distribution of $\textbf{X}_{\bar{S}}$.
Instead, the impact of setting $\textbf{X}_S$ to $\textbf{x}_S$ is captured by $\mathbb{E}[f(\textbf{X}) \mid do(\textbf{X}_S = \textbf{x}_S)]$, which in general differs from the conditional expectation. Interventional Shapley is therefore designed to capture the \emph{causal} effect of fixing feature values. Since CES considers the conditional expectation $\mathbb{E}[f(\textbf{X}) \mid \textbf{X}_S = \textbf{x}_S]$ when computing Shapley values, the resulting Shapley values are highly influenced by feature correlations. As a result, two highly correlated features may receive similar feature attributions even if the model under consideration depends on only one of them. We make this concrete with an example in Appendix \ref{sec:int-vs-ces}. We also demonstrate empirically in Section \ref{sec:exps} and Appendix \ref{sec:exps-app} that CES can be highly sensitive to feature correlations, and consequently can lead to wrong explanations. Additionally, computing CES is computationally challenging when the feature space is continuous. While \citet{expondatamanifold} propose training a surrogate model $g$ with masked inputs to estimate the conditional expectation (see Appendix \ref{subsec:CES-comp}), training $g$ is even more difficult than training the model $f$. Aside from this, the JBShap and RJBShap value functions proposed by \citet{on-manifold-off-manifold} explain the feature contributions for the function $\tilde{f}_p(\textbf{x}) \coloneqq f(\textbf{x})p(\textbf{x})$, rather than $f(\textbf{x})$ itself. Specifically, RJBShap explains the contribution of individual features towards the difference $\tilde{f}_p(\textbf{x}) - \mathbb{E}_{p_{b}(\textbf{X})}[\tilde{f}_p(\textbf{X})]$. This means that the resulting Shapley values do not explain the underlying function $f$ itself.
We make this more concrete with an example with $\mathcal{X} \subseteq \mathbb{R}^2$: \begin{align} \textbf{X} \sim \mathcal{N}(\textbf{0}, I_2), \quad f(\textbf{x}) = \exp{\left(x^2_1/2\right)}. \label{eq:rjbshap-example} \end{align} For this example, $\tilde{f}_p(\textbf{x})$ only depends on $x_2$ and consequently, the RJBShap values for feature 1, $\phi_1 = 0$, for all $\textbf{x} \in \mathcal{X}$, even though the function $f(\textbf{x})$ \emph{only} depends on $x_1$. RJBShap can therefore lead to \emph{highly} misleading explanations. We confirm this empirically in Appendix \ref{subsec:rjbshap}. % Additionally, the notion of off-manifold robustness satisfied by JBShap and RJBShap value functions can be restrictive. We expand upon this in Section \ref{subsec-robustness}, where we propose an alternative definition of robustness which is less restrictive, and is not satisfied by JBShap and RJBShap. \done{Retrieve the following commented out bit} % \done{More stress on the computational aspect} \begin{comment} \paragraph{Conditional Expectation Shapley.} A commonly used value function in practice is the Conditional Expectation Shapley (CES) \citep{kernelshap}: \[ v_{\textbf{x}, f, p}^C(S) = \mathbb{E}[f(\textbf{X}) \mid \textbf{X}_S = \textbf{x}_S ]. \] In other words, CES takes the expectation of $f(\textbf{x}_s, \textbf{X}_{\bar{S}})$ over the conditional density of $\textbf{X}_{\bar{S}}$ given $\textbf{X}_S = \textbf{x}_S$. \paragraph{Random Baseline Shapley.} Another commonly used value function in this category is Random Baseline Shapley (RBShap) \citep{kernelshap}, also commonly referred to as \emph{Marginal Shapley}. \[ v_{\textbf{x}, f, p}^{RB}(S) = \mathbb{E}[f(\textbf{x}_S, \textbf{X}_{\bar{S}})]. \] Unlike CES, RBShap takes the expectation of $f(\textbf{x}_s, \textbf{X}_{\bar{S}})$ over the marginal density of $\textbf{X}_{\bar{S}}$ (and not the conditional density of $\textbf{X}_{\bar{S}}$ given $\textbf{X}_S = \textbf{x}_S$). 
\blue{Axioms of Shapley values, w.r.t., value functions} In this paper, we will restrict our attention to a causal interpretation of Shapley values, where the main aim is to estimate the \emph{causal} contribution of the features. We discuss the motivation for this causal approach in greater detail in the next section. \subsection{Causal Shapley Values} There has been some recent work proposing a causal perspective when computing Shapley values \citep{lshap, causalshap}. Specifically, these works observe that manually fixing the values of features $\textbf{X}_S$ to $\textbf{x}_S$ when computing Shapley values, corresponds to \emph{intervening} on the feature values. In Pearl's do calculus \citep{pearl}, this is expressed as $do(\textbf{X}_S = \textbf{x}_S)$. This leads to the definition of causal Shapley value functions: \begin{align} v^{\textup{causal}}_{\textbf{x}, f, p}(S) = \mathbb{E}[f(\textbf{X}) \mid do(\textbf{X}_S = \textbf{x}_S)]. \label{causalshap} \end{align} How to compute $v^{\textup{causal}}_{\textbf{x}, f, p}(S)$ depends on the causal structure of the features. \citep{lshap} consider a set-up where the true feature values $\tilde{X}_i$ are distinguished from the features $X_i$ input into the function, $f$, with $X_i$ being a direct causal descendant of $\tilde{X}_i$ \blue{add figure}. In this set-up, intervening on $\textbf{X}_S$ yields the following interventional distribution: \begin{align*} p((\textbf{x}_S, \textbf{X}_{\bar{S}}) \mid do(\textbf{X}_S = \textbf{x}_S) ) = p(\textbf{X}_{\bar{S}}). \end{align*} Therefore, the value function, $v^{\textup{causal}}_{\textbf{x}, f, p}(S)$ can straightforwardly be computed as \begin{align*} v^{\textup{causal}}_{\textbf{x}, f, p}(S) \hspace{-0.1cm}=\hspace{-0.1cm} \mathbb{E}[f(\textbf{X}) \mid do(\textbf{X}_S = \textbf{x}_S)] \hspace{-0.1cm} = \hspace{-0.1cm} \mathbb{E}_{\textbf{X}_{\bar{S}} \sim p(\textbf{X}_{\bar{S}})}[f(\textbf{x}_S, \textbf{X}_{\bar{S}})]. \end{align*} This is equivalent to RBShap. 
In \citep{causalshap}, the authors do not distinguish between the true values of the features, and the features input into the model. As a result, the resulting causal Shapley value function, $v^{\textup{causal}}_{\textbf{x}, f, p}(S)$, also takes into account the causal relations among the true features themselves. In this setting, assuming there is no unmeasured confounding and that the causal structure is known, we can compute the interventional distribution as follows: \begin{align*} p(\textbf{X}_{\bar{S}} \mid do(\textbf{X}_S = \textbf{x}_S)) = \prod_{j \in \bar{S}} p(X_j \mid \textbf{X}_{pa(j) \cap \bar{S}}, \textbf{x}_{pa(j) \cap S}), \end{align*} where $pa(j) \cap T$ denotes the parents of $j$ that are also part of the subset $T$. The conditionals $p(X_j \mid \textbf{X}_{pa(j) \cap \bar{S}}, \textbf{x}_{pa(j) \cap S})$ can be estimated from the available observational data. \blue{add more details/refs} Note that being able to compute the interventional distributions does not need the knowledge of the complete causal graph. In the case when we only know partial causal ordering between the features, \citep{causalshap} provide a way of computing the interventional distributions. In this paper, we assume that interventional expectations $\mathbb{E}[\cdot \mid do(\textbf{X}_S = \textbf{x}_S)]$ can be computed using the available observational data, for any subset of features $S$. \blue{comment about how this is straightforward in the setting of \citep{lshap}.} \subsubsection{Comparison with Conditional Expectation Shapley Values} In general, \eqref{causalshap} is not equal to the conditional expectation $\mathbb{E}[f(\textbf{X}) \mid \textbf{X}_S = \textbf{x}_S]$. In particular, as \citep{lshap} point out, CES values are influenced by feature correlations. 
This can lead to cases where the function $f(\textbf{x})$ is independent of a feature $x_i$, but if $x_i$ is highly correlated with another feature $x_j$ which affects the function prediction $f(\textbf{x})$, then $x_i$ receives a non-zero attribution with CES. This dependence of CES on the feature correlations means that CES can be `fooled' into attributing importance to unimportant features, if these unimportant features are correlated with important features. We will highlight this problem explicitly in the following example. \paragraph{Example.} Assume that $\mathcal{X} = \{0,1\}^2$, and that the features $X_1, X_2$ follow the causal structure in \citep{lshap}. Consider the case where $f(x_1, x_2)=x_1$ and $X_1, X_2$ are binary variables, with \begin{align*} X_1 =& \begin{cases} 0 & \textup{w.p. 0.5} \\ 1 & \textup{otherwise} \end{cases} \hspace{0.4cm} \textup{and,} \\ X_2 \mid X_1 =& \begin{cases} x_1 & \textup{w.p. $p$ (for some $p>0$),} \\ 1-x_1 & \textup{otherwise.} \end{cases} \end{align*} In this case, $\mathbb{E}[f(X_1, X_2)\mid do(X_2 = x_2)] = \mathbb{E}[f(X_1, x_2)] = 0.5$ and $\mathbb{E}[f(X_1, X_2)\mid do(X_1 = x_1, X_2=x_2)] = \mathbb{E}[f(x_1, x_2)] = x_1$, for any $x_2$. It straightforwardly follows that, in this case, $\phi_2 = 0$. This also makes intuitive sense, as the function $f(x_1, x_2)$ is not dependant on $x_2$. However, if we use conditional expectations to drop features instead, we get that \[ \phi_2 = p\mathds{1}(x_1 = 1) + (1-p)\mathds{1}(x_1 = 0) - 1/2, \] which is non-zero when $p\neq 1/2$. This example illustrates that CES value function can lead to misleading Shapley values, especially when the features are highly correlated. The causal Shapley value function, on the other hand, incorporates the causal effect of \emph{fixing} a set of features $S$, and therefore, yields Shapley values which are unaffected by correlations within the data. 
\subsection{The off-manifold nature of Causal Shapley Values} While Causal Shapley values correctly capture the notion of \emph{fixing} feature values when dropping features, they also have a limitation. In general, for any $\textbf{x}_S$, the datapoint $(\textbf{x}_S, \textbf{X}_{\bar{S}})$ where $\textbf{X}_{\bar{S}} \sim p(\textbf{X}_{\bar{S}} \mid do(\textbf{X}_S = \textbf{x}_S))$ may not lie on the support of the data distribution on which the model was trained. In fact, the model may not even be defined on datapoints from this interventional distribution. As a result, computing Causal Shapley values involves evaluating the model outside its domain of validity, on inputs it was never trained for. We refer to such datapoints as \emph{off-manifold} points, and henceforth refer to Shapley values which rely on off-manifold model evaluations as \emph{off-manifold} Shapley values. We illustrate this problem using a simple example. \paragraph{Example.} Consider the set-up of \citet{lshap}, with $\mathcal{X} \subseteq \mathbb{R}^2$, and $f$ the trained model. Moreover, let $X_1 \sim \mathcal{N}(0, 1)$ and $X_2 \mid X_1 \sim \delta_{X_1}$. In this example, the training data consists only of datapoints $(x_1, x_2)$ with $x_1 = x_2$. However, $p(x_2 \mid do(X_1 = x_1)) = p(x_2) = \mathcal{N}(0, 1)$ and therefore, when computing the Causal Shapley value function $v^{\textup{causal}}_{\textbf{x}, f, p}(\{1\})$, we will need to evaluate the model $f$ at $(x_1, X_2)$ where $X_2 \sim \mathcal{N}(0,1)$. On these datapoints, $X_2 \neq x_1$ almost surely, and therefore these are off-manifold datapoints which the model has never been trained on. The limitations of off-manifold Shapley values have been widely discussed in the literature \citep{on-manifold-off-manifold, expondatamanifold, foolingshap}.
Specifically, \citet{expondatamanifold} point out that off-manifold Shapley values can be highly influenced by model behaviour outside the data manifold, which may be irrelevant to the explanations sought by a user. The Shapley values could therefore be misleading. This limitation of off-manifold Shapley values can be exploited to `fool' Shapley values. In \citep{foolingshap}, the authors consider models which are highly biased on the data manifold (i.e., models that rely solely on sensitive features, such as racial background, for their predictions). They show that these models can be perturbed outside the data manifold in such a way that the resulting Shapley values give no attribution to the sensitive features, despite the models relying solely on these sensitive features on the data manifold. Therefore, off-manifold Shapley values are highly vulnerable to off-manifold manipulations. \paragraph{Contributions.} In this paper we propose ManifoldShap, a Shapley value function which is robust to off-manifold manipulation because it restricts function evaluations to the data manifold. We show theoretically and empirically that ManifoldShap is robust to off-manifold manipulations. In addition, it adheres to the causal interpretation of \emph{fixing} feature values when dropping features. As a result, ManifoldShap is less sensitive to correlations among the features than CES. ManifoldShap can be seen as a \emph{compromise} between CES and Causal Shapley values -- it remains robust to off-manifold manipulations, while respecting the notion of \emph{intervening} on the fixed features. We discuss this in detail below. \end{comment} % \section{MANIFOLD RESTRICTED SHAPLEY VALUES} In this paper, we argue that a model should be characterised mainly by its behaviour on the data manifold.
While \emph{intervening} on features provides the correct notion of fixing features, we must restrict our attention to the data manifold when estimating Shapley values. This allows us to avoid the issues of non-identifiability outside the data manifold, thereby making the Shapley estimates robust against adversarial attacks as in \citet{foolingshap}. To obtain Shapley values which are robust to off-manifold manipulations, we therefore restrict function evaluations to the data manifold. Before we proceed, we introduce our value function in terms of general sets $\mathcal{Z} \subseteq \mathcal{X}$. \begin{definition}[ManifoldShap] Let $\mathcal{Z} \subseteq \mathcal{X}$ be an open set with $\textbf{x} \in \mathcal{Z}$, and $\mathbb{P}(\textbf{X}\in\mathcal{Z} \mid do(\textbf{X}_S = \textbf{x}_S)) > 0$ for all $S\subseteq [d]$. Then, we define the ManifoldShap value function on $\mathcal{Z}$ as: \begin{align} \valgeneric{f}{\mathcal{Z}}(S) \coloneqq \valfunset{S}{f}{\mathcal{Z}}. \label{val_fun} \end{align} \end{definition} In Appendix \ref{proofs} we prove that, under mild regularity assumptions, the value function \eqref{val_fun} is well-defined. \textbf{Remark.} The notation $\mathbb{E}[\cdot \mid do(\textbf{X}_S =\textbf{x}_S), \textbf{X}\in \mathcal{Z}]$ denotes the expectation w.r.t. the density $p_{\mathcal{Z}, \textbf{x}_S}(\cdot)$ where % \begin{align} p_{\mathcal{Z}, \textbf{x}_S}(\textbf{y}) \coloneqq \frac{p(\textbf{y}\mid do(\textbf{X}_S =\textbf{x}_S))\mathds{1}(\textbf{y}\in\mathcal{Z})}{\mathbb{P}(\textbf{X}\in\mathcal{Z} \mid do(\textbf{X}_S = \textbf{x}_S))}. \label{man-density} \end{align} The condition $\mathbb{P}(\textbf{X}\in\mathcal{Z} \mid do(\textbf{X}_S = \textbf{x}_S)) > 0$ ensures that $p_{\mathcal{Z}, \textbf{x}_S}(\textbf{x})$ (and hence $\valgeneric{f}{\mathcal{Z}}(S)$) is well-defined.
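The restricted density $p_{\mathcal{Z}, \textbf{x}_S}$ suggests a simple self-normalised Monte Carlo estimator: draw from the interventional distribution and keep only draws that land in $\mathcal{Z}$. The sketch below is ours and makes simplifying assumptions (two independent standard-normal features, so intervening just fixes the coordinates in $S$, and a hypothetical box-shaped region $\mathcal{Z}$):

```python
import numpy as np

def manifold_shap_value(f, sample_do, in_Z, x, S, n=100_000):
    """Self-normalised Monte Carlo estimate of
    v_Z(S) = E[f(X) | do(X_S = x_S), X in Z]."""
    X = sample_do(x, S, n)          # draws from p(X | do(X_S = x_S))
    mask = in_Z(X)                  # indicator of the region Z
    if not mask.any():
        raise ValueError("P(X in Z | do(X_S = x_S)) estimated as 0.")
    return f(X[mask]).mean()

# Toy setting (our assumption): independent N(0,1) features, so the
# interventional distribution simply fixes the coordinates in S.
rng = np.random.default_rng(0)

def sample_do(x, S, n):
    X = rng.standard_normal((n, len(x)))
    idx = sorted(S)
    X[:, idx] = np.asarray(x)[idx]
    return X

f = lambda X: X[:, 0]                              # model uses feature 0 only
in_Z = lambda X: np.all(np.abs(X) < 3.0, axis=1)   # hypothetical region Z

x = np.array([1.0, 0.5])
v_empty = manifold_shap_value(f, sample_do, in_Z, x, S=set())
v_0 = manifold_shap_value(f, sample_do, in_Z, x, S={0})
```

Here `v_0` equals $x_1 = 1$ exactly, since the model output is fixed once feature 0 is intervened on, while `v_empty` is close to 0, the mean of the truncated marginal.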
By conditioning on the event $\textbf{X} \in \mathcal{Z}$, the ManifoldShap value function restricts the function evaluations to the set $\mathcal{Z}$. In practice, $\mathcal{Z}$ can be chosen to be the data manifold, or any other region of interest where model behaviour is relevant to the explanations sought. In this way, ManifoldShap disregards the model behaviour outside the region of interest when computing Shapley values. A detailed discussion of how to choose the sets $\mathcal{Z}$ is deferred to the next section. Our formulation of \emph{ManifoldShap} is general, as it does not assume a specific causal structure on the features. In our methodology, we assume that the expectation $\mathbb{E}[f(\textbf{X}) \mid do(\textbf{X}_S = \textbf{x}_S)]$ can be computed using observational data. This is a standard assumption needed to compute Interventional Shapley values, and holds true under the causal structure in Figure \ref{fig:dag}. Under this assumption, we can compute the value function using the following result. \begin{lemma}\label{manifoldShap} The value function $\valgeneric{f}{\mathcal{Z}}$ can be written as \begin{align*} \valgeneric{f}{\mathcal{Z}}(S) = \frac{\mathbb{E}[f(\textbf{X}) \mathds{1}(\textbf{X} \in \mathcal{Z}) \mid do(\textbf{X}_S = \textbf{x}_S)]}{\mathbb{P}(\textbf{X} \in \mathcal{Z} \mid do(\textbf{X}_S = \textbf{x}_S))}. \end{align*} \end{lemma} In practice, all we need is a manifold classifier, trained to estimate the value of the indicator, i.e. $\hat{g}(\textbf{x}) \approx \mathds{1}(\textbf{x} \in \mathcal{Z})$. The value function \eqref{val_fun} can then be estimated using: \begin{align} \valgeneric{f}{\mathcal{Z}}(S) &\approx \frac{\mathbb{E}[f(\textbf{X}) \hat{g}(\textbf{X}) \mid do(\textbf{X}_S = \textbf{x}_S)]}{\mathbb{E}[\hat{g}(\textbf{X}) \mid do(\textbf{X}_S = \textbf{x}_S)]}.
\label{val_fun_approx} \end{align} We also provide alternative methods for estimating ManifoldShap, using rejection sampling and regression techniques, in Appendix \ref{subsec:manshap-alternative-methods}. \paragraph{Choosing the sets $\mathcal{Z}$.} Next, we discuss general-purpose methods for choosing sets $\mathcal{Z}$ that can serve as practical estimates of the data manifold in most cases. One can obtain $\mathcal{Z}$ by training an out-of-distribution classifier directly. \citet{foolingshap} do so by perturbing each datapoint on randomly chosen features, and subsequently using these perturbed datapoints to train the classifier. In general, users may wish to choose different regions of interest $\mathcal{Z}$ on an ad hoc basis when computing Shapley values. In what follows, we outline a few specific choices of $\mathcal{Z}$, each of which satisfies a different notion of robustness to off-manifold manipulations. We discuss this at greater length in Section \ref{subsec-robustness}. \begin{definition}[Density manifold]\label{den-manifold} Given an $\epsilon > 0$, we define the \emph{$\epsilon$-density manifold} (\emph{$\epsilon$-DM}) of the data distribution, denoted as $\mathcal{D}_\epsilon$, as: $ \mathcal{D}_\epsilon \coloneqq \{\textbf{x}\in \mathbb{R}^d : p(\textbf{x}) > \epsilon \}. $ Here, $p(\textbf{x})$ denotes the joint density of the data. \end{definition} The $\epsilon$-DM comprises all regions where the data density exceeds $\epsilon$. Using $\mathcal{Z} = \mathcal{D}_\epsilon$ in our value function therefore restricts function evaluations to regions of high density. An alternative way to choose $\mathcal{Z}$ is via the probability mass captured by $\mathcal{Z}$, i.e., for a given level $\alpha$, we may pick sets $\mathcal{Z}=\mathcal{P}_\alpha$ such that $\mathbb{P}(\textbf{X}\in \mathcal{P}_\alpha)\geq \alpha$.
One such set can be defined as: \begin{definition}[Mass manifold]\label{mass-manifold} Given an $\alpha > 0$, we define the \emph{$\alpha$-mass manifold} ($\alpha$-MM) of the data distribution, denoted as $\mathcal{P}_\alpha$, as $\mathcal{P}_\alpha \coloneqq \mathcal{D}_{\epsilon^{(\alpha)}}$, where $ \epsilon^{(\alpha)} \coloneqq \sup \{\epsilon \geq 0: \mathbb{P}(\textbf{X}\in\mathcal{D}_\epsilon)\geq \alpha \}. $ \end{definition} We show in Proposition \ref{optimality} (Appendix \ref{proofs}) that the Lebesgue measure of $\mathcal{P}_\alpha$ is the smallest among all sets $\mathcal{Z}$ with $\mathbb{P}(\textbf{X} \in \mathcal{Z}) \geq \alpha$, although $\mathcal{P}_\alpha$ is not necessarily the unique such set. One can use techniques like kernel density estimation and VAEs to approximate the manifolds described in this section (more details in Appendix \ref{subsec:computingman}). \subsection{Robustness to off-manifold manipulation}\label{subsec-robustness} We say that a Shapley value function is robust to off-manifold manipulation if changing the model $f$ outside the data manifold does not lead to `large' changes in its Shapley values. In this section, we formalise this idea of robustness and show that ManifoldShap satisfies this notion, while the existing value functions do not. First, we present the definition of robustness used in \citet{on-manifold-off-manifold} to formalise the notion of off-manifold manipulations. \begin{definition}[T-robustness \citep{on-manifold-off-manifold}]\label{trobustness} Given two models $f_1(\textbf{x}), f_2(\textbf{x})$ and any probability density $p(\textbf{x})$, we say that a value function $v_{\textbf{x}, f}$ is strong T-robust if it satisfies the following condition: if $\max_{\textbf{x}} | f_1(\textbf{x}) - f_2(\textbf{x}) | p(\textbf{x}) \leq \delta$, then $| v_{\textbf{x}, f_1}(S) - v_{\textbf{x}, f_2}(S)| \leq T \delta$ for any $S \subseteq [d]$.
\end{definition} As per \citet{on-manifold-off-manifold}, ``The premise $\max_{\textbf{x}} | f_1(\textbf{x}) - f_2(\textbf{x}) | p(\textbf{x}) \leq \delta$ bounds the maximum perturbation on low density regions." Additionally, \citet{on-manifold-off-manifold} show that the JBShap and RJBShap value functions satisfy strong T-robustness to off-manifold manipulation, while other value functions like MS and CES do not. Likewise, since MS is a special case of IS, the latter also does not satisfy strong T-robustness. On the other hand, ManifoldShap restricted to the $\epsilon$-density manifold, $\mathcal{D}_\epsilon$, satisfies this notion of robustness. \begin{proposition}\label{t-robust} The value function $\valgeneric{f}{\mathcal{D}_\epsilon}(S) = \valfunset{S}{f}{\mathcal{D}_\epsilon}$ is strong $T$-robust for $T = 1/\epsilon$. \end{proposition} Proposition \ref{t-robust} shows that as $\epsilon$ decreases, the robustness parameter $T$ increases and ManifoldShap becomes less robust. \paragraph{Alternative definition of Robustness.} Definition \ref{trobustness} considers a very specific notion of model perturbation. In particular, the perturbation in the model $f(\textbf{x})$ must not exceed $\delta/p(\textbf{x})$ for all $\textbf{x} \in \mathbb{R}^d$ and some $\delta>0$. This does not encapsulate the case where the function perturbation remains bounded on a region of interest $\mathcal{Z}$, but may increase arbitrarily outside $\mathcal{Z}$. For example, the function $f(\textbf{x})$ may remain fixed on a set $\mathcal{Z}$ with $\mathbb{P}(\textbf{X} \in \mathcal{Z}) > 0.99$. Robustness of Shapley values should dictate that changing the function outside $\mathcal{Z}$ does not lead to arbitrarily different Shapley values. We later show that Def. \ref{trobustness} does not provide such robustness guarantees.
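The phenomenon described above can be checked numerically. In the hypothetical one-dimensional sketch below (our own toy construction, standard normal feature), two models agree on a set $\mathcal{Z}$ of probability mass $0.99$ yet differ by an arbitrary amount $\delta$ outside it; a value function that averages over the full distribution shifts by roughly $\delta \cdot \mathbb{P}(\textbf{X} \notin \mathcal{Z})$, while the $\mathcal{Z}$-restricted expectation is unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal(200_000)              # one feature, p = N(0, 1)

z_cut = np.quantile(np.abs(X), 0.99)          # Z = {|x| <= z_cut}, mass ~ 0.99
in_Z = np.abs(X) <= z_cut

delta = 100.0                                 # arbitrary off-manifold bump
f1 = lambda x: x                              # original model
f2 = lambda x: x + delta * (np.abs(x) > z_cut)  # agrees with f1 on Z

# Full-distribution averages drift by ~ delta * P(X not in Z),
# which grows without bound in delta ...
v1_full, v2_full = f1(X).mean(), f2(X).mean()

# ... while the Z-restricted expectations coincide by construction.
v1_Z, v2_Z = f1(X[in_Z]).mean(), f2(X[in_Z]).mean()
```

With $\delta = 100$ the unrestricted averages already differ by about $1$, and the gap scales linearly in $\delta$, so no finite $T$ in the sense of Definition \ref{subspacerobustness} can hold for the unrestricted value function.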
To encapsulate this, we provide an alternative definition of robustness, which allows us to take into account model manipulation on sets with small probability mass. First, we define the notion of robustness on a general feature subspace $\mathcal{Z}'\subseteq \mathcal{X}$: \begin{definition}[Subspace T-robustness]\label{subspacerobustness} Let $\mathcal{Z}'\subseteq\mathcal{X}$ be such that $\mathbb{P}(\textbf{X}\in\mathcal{Z}')>0$. We say that a value function $v_{\textbf{x}, f}$ is strong T-robust on subspace $\mathcal{Z}'$ if, for any two models $f_1, f_2$, it satisfies the following condition: if $\sup_{\textbf{x}\in\mathcal{Z}'} | f_1(\textbf{x}) - f_2(\textbf{x}) | \leq \delta$, then $| v_{\textbf{x}, f_1}(S) - v_{\textbf{x}, f_2}(S)| \leq T \delta$ for any $S \subseteq [d]$. \end{definition} A value function satisfying strong T-robustness on $\mathcal{Z}$ does not produce drastically different Shapley values when the model perturbation is bounded on the set $\mathcal{Z}$ by some value $\delta > 0$. The above definition allows us to directly consider robustness of value functions on sets chosen by probability mass, such as $\mathcal{P}_\alpha$. Moreover, by restricting function evaluations to a set $\mathcal{Z}$, ManifoldShap is naturally set up to provide a subspace T-robustness guarantee. We formalise this as follows: \begin{proposition}\label{manshap-subspace-robustness} The value function $\valgeneric{f}{\mathcal{Z}}$ is strong T-robust, with $T = 1$, on any set $\mathcal{Z}'$ satisfying $\mathcal{Z}\subseteq \mathcal{Z}'$. \end{proposition} In contrast, we show that all other value functions under consideration do not satisfy this notion of robustness: \begin{proposition}\label{subspace-robustness-causalshap} For any set $\mathcal{Z}'$ with $\mathbb{P}(\textbf{X}\in \mathcal{Z}') < 1$, the IS value function $v^{\textup{IS}}_{\textbf{x}, f}(S)$, the CES value function $v_{\textbf{x}, f}^{\textup{CES}}(S)$, the MS value function $v_{\textbf{x}, f}^{\textup{MS}}(S)$, the JBShap value function $v_{\textbf{x}, f}^{\textup{J}}(S)$ and the RJBShap value function $v_{\textbf{x}, f}^{\textup{RJ}}(S)$ are all \emph{not} strong T-robust on subspace $\mathcal{Z}'$ for $|T| < \infty$. \end{proposition} Consider the family of value functions which \emph{drop} features in $\bar{S}$ through randomisation, i.e., $v_{f, p_S}(S) = \mathbb{E}_{\textbf{X} \sim p_S}[f(\textbf{X})]$. We note that IS, MS, CES and ManifoldShap all fall into this family. For example, when $p_S = p(\textbf{X} \mid do(\textbf{X}_S = \textbf{x}_S))$ we obtain IS, and when $p_S = p(\textbf{X} \mid \textbf{X}_S = \textbf{x}_S)$ we obtain CES. We show in Appendix \ref{subsec:tv-distance} that the choice of $p_S$ in ManifoldShap (i.e. $p_{\mathcal{Z}, \textbf{x}_S}$ in Eq. \eqref{man-density}) minimises the Total Variation distance to the interventional distribution $p(\textbf{X} \mid do(\textbf{X}_S = \textbf{x}_S))$ subject to the condition that $v_{f, p_S}(S)$ is strong T-robust on $\mathcal{Z}$. This ensures that ManifoldShap values provide a reasonable estimate of the \emph{causal} contribution of features. \subsection{Comparison with existing methods}\label{sec:comparison} \textbf{Causal Accuracy. }Recall that CES attributes feature importance based on feature correlations. Consequently, two highly correlated features may be attributed similar importance even if the model depends on only one of them, i.e., the sensitivity property is violated. ManifoldShap, on the other hand, seeks to estimate the \emph{causal} contribution of features towards the prediction $f(\textbf{x})$, as it uses the \emph{interventional} measure restricted to the manifold $\mathcal{Z}$ to drop features. The experiments in Appendix \ref{sec:exps-app} confirm this, as the ManifoldShap results are significantly less sensitive to feature correlations than CES. Our example in Eq. \eqref{eq:rjbshap-example} shows how the explicit dependence of RJBShap on the density can lead to extremely inaccurate Shapley explanations. In Appendix \ref{subsec:rjbshap}, we show that, because of its causal nature, ManifoldShap provides significantly more accurate and intuitive explanations. Additionally, unlike RJBShap, ManifoldShap depends on the estimated density only via the indicator $\mathds{1}(p(\textbf{x})\geq \epsilon)$. Therefore, as we show in Appendix \ref{subsec:sensitivity-density-error}, ManifoldShap is significantly more robust to density estimation errors than RJBShap. Aside from this, \cite{ghalebikesabi2021on} propose Neighbourhood SHAP, a value function aimed at explaining the localised behaviour of the model near the datapoint $\textbf{x}$ where explanations are sought.
While the authors empirically show the robustness of their methodology against off-manifold perturbations, they do not consider the causal perspective, and therefore their main object of interest is not the causal contribution of features. \textbf{Robustness. }As outlined in Section \ref{subsec-robustness}, ManifoldShap is robust to model changes outside the manifold and is therefore not vulnerable to adversarial attacks as in \citet{foolingshap}. In light of this, we argue that ManifoldShap provides a compromise between conditional and interventional Shapley values: it attempts to estimate causal contributions of features, while providing robustness guarantees. \textbf{Trade-off between Accuracy and Robustness. }Restricting function evaluations to the manifold $\mathcal{Z}$, as in ManifoldShap, means that the resulting Shapley values are dependent on the manifold itself, and may not purely reflect the causal contribution of features, as they are no longer pure Interventional Shapley values. This results in a trade-off between robustness to off-manifold manipulation and the `causal accuracy' of the Shapley values. ManifoldShap gives us flexibility over this trade-off through the size of the manifold $\mathcal{Z}$. When $\mathcal{Z} = \mathcal{D}_\epsilon$, the size of the manifold is modulated through the $\epsilon$ parameter. As $\epsilon \rightarrow 0$, the size of the manifold increases and ManifoldShap values tend towards IS values. However, as mentioned above, this comes at the cost of reduced robustness, as the Shapley evaluations include an increasing number of datapoints `far' from the training data. On the other hand, increasing $\epsilon$ increases the robustness of the Shapley values, while reducing their causal accuracy, as the resulting Shapley values discard a significant number of datapoints which lie outside $\mathcal{D}_\epsilon$. \textbf{Computational Considerations. 
}Computing CES may be computationally expensive and may require different supervised or unsupervised learning techniques \citep{expondatamanifold, kernelshap, on-manifold-off-manifold}. In contrast, while ManifoldShap requires estimating a manifold classifier, estimating $\valgeneric{f}{\mathcal{Z}}(S)$ does not incur any computational cost over and above computing the interventional expectations. Lemma \ref{manifoldShap} illustrates this by expressing the ManifoldShap value function as a ratio of interventional expectations. This is even more straightforward when the causal structure is as in Figure \ref{fig:dag}, where the interventional expectation is equivalent to the marginal expectation. Additionally, to avoid the exponential time complexity of computing the value function for all $S\subseteq [d]$, we propose a sampling-based estimator in Appendix \ref{subsec:rejection_sampling} which makes computation of ManifoldShap feasible for high-dimensional feature spaces (see Appendix \ref{subsec:feature_dims}). \section{ROBUSTNESS IN OTHER EXPLANATION METHODS} Shapley values are not the only explanation method affected by \textit{off-manifold} model behaviour. This problem has also been explored for other explanation methods like LIME \citep{foolingshap, improvinglime, resistingood} and gradient-based methods \citep{foolingnn, fairwashing}. For example, \citet{foolingnn} illustrate this problem in gradient-based interpretability methods for neural networks, showing that these explanations are not stable when the model is manipulated without hurting its accuracy. Numerous solutions have also been proposed, such as \citet{resistingood}, which addresses this problem for explanation methods like RISE, OCCLUSION and LIME by quantifying a similarity metric for perturbed data.
This metric is then integrated into the explanation methods. Likewise, \citet{improvinglime} proposes to make LIME robust to off-manifold manipulation by using a GAN to sample more realistic synthetic data which are then used to generate LIME explanations. Aside from this, \citet{fairwashing} proposes an alternative robust gradient-based explanation method. However, unlike Shapley values, gradient-based methods rely on model properties (e.g., differentiability), and are not model-agnostic. % \section{EXPERIMENTAL RESULTS}\label{sec:exps} In this section, we conduct experiments on synthetic and real-world datasets to demonstrate the utility of ManifoldShap and compare it with existing methods. Instead of training the models, we compute Shapley values for the underlying true functions directly. Additional experiments investigating the sensitivity of the different Shapley methods to changing feature correlations, manifold size and feature dimensions are included in Appendix \ref{sec:exps-app}. The code to reproduce our experiments can be found at \href{https://github.com/amazon-science/manifold-restricted-shapley}{\color{blue}{github.com/amazon-science/manifold-restricted-shapley}}. \subsection{Synthetic data experiments} \begin{figure*} \centering \begin{subfigure}{0.5\textwidth} \centering \includegraphics[height=0.9in]{images/pert_exps_diff_dag/pert_rho_0_2.png} \subcaption{$\delta=0$} \label{fig:delta_0} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \includegraphics[height=0.9in]{images/pert_exps_diff_dag/pert_rho_5.png} \subcaption{$\delta=5$} \label{fig:delta_5} \end{subfigure} \caption{Synthetic data experiments for $\delta=0, 5$. The barplots on the left of each subfigure show the most important features for different Shapley value functions. 
The boxplots show the approximation errors of the Shapley values for different value functions.}\label{fig:pert-exps-dag} % \end{figure*} Here we investigate the effect of model perturbation in low-density regions on Shapley values. % \paragraph{Data generating mechanism.} \begin{wrapfigure}{l}{0pt} \begin{tikzpicture} \tikzset{ > = stealth, every node/.append style = { draw = black, shape = circle, inner sep = 0.5pt, minimum size=0.75cm }, every path/.append style = { arrows = ->, } } \tikz{ \node (x) at (0,0) {$X_2$}; \node (y) at (2,0) {$Y$}; \node (z) at (1,1) {$X_1$}; \path (x) edge (y); \path (z) edge (x); \path (z) edge (y); } \end{tikzpicture} \end{wrapfigure} In this experiment, $\mathcal{Y} \subseteq \mathbb{R}$ and $\mathcal{X} \subseteq \mathbb{R}^2$ follow the Causal DAG shown on the left. Specifically, the Structural Causal Model (SCM) \citep{pearl} for the ground truth data-generating mechanism is: \begin{align*} X_1 &= \epsilon_1, \hspace{0.3cm} X_2 = \rho X_1 + \sqrt{1 - \rho^2} \epsilon_2, \hspace{0.3cm} Y = X_1. \end{align*} Here, $\epsilon_i \overset{\textup{i.i.d.}}{\sim}\mathcal{N}(0, 1)$ and $\rho = 0.85$ is the correlation between $X_1, X_2$. Next, we define the perturbed models. \begin{figure}[h!] \centering \begin{subfigure}[t]{0.24\textwidth} \centering \includegraphics[height=1.3in]{./images/pert_exps_diff_dag/classifier_heatmap1_multivariate_gaussian_rho_0_85_pert_0.png} \subcaption{Heatmap of $g_\delta$ for $\delta = 0$.} \end{subfigure}% \begin{subfigure}[t]{0.24\textwidth} \centering \includegraphics[height=1.3in]{./images/pert_exps_diff_dag/classifier_heatmap1_multivariate_gaussian_rho_0_85_pert_5.png} \subcaption{Heatmap of $g_\delta$ for $\delta = 5$.} \end{subfigure} \caption{Heatmaps for ground truth and perturbed models $g_\delta$. 
Each model has a test mean squared error of 0.}\label{fig:heatmaps} \end{figure} \paragraph{Perturbed models.} We define the following family of perturbed models $g_\delta:\mathcal{X} \rightarrow \mathbb{R}$, parameterised by $\delta \in \mathbb{R}$: \begin{align*} g_\delta(\textbf{X}) \coloneqq Y + \delta X_2 \mathds{1}(\textbf{X} \not \in \mathcal{P}_\alpha). \end{align*} Here, we use VAEs to estimate $\mathcal{P}_\alpha$ (see Appendix \ref{subsec:computingman}) and choose $\alpha=1-10^{-3}$. By construction, the models $g_\delta$ agree with the ground truth on the $\alpha$-mass manifold, i.e. $g_\delta(\textbf{X}) = Y$ when $\textbf{X} \in \mathcal{P}_\alpha$, but differ from the ground truth for $\textbf{X} \not\in \mathcal{P}_\alpha$. Figure \ref{fig:heatmaps} shows the model heatmaps for $\delta = 0, 5$ along with the original data. It is impossible to distinguish between these models on the data manifold, as both have a test mean squared error of 0. \paragraph{Results.} Recall that the ground truth model does not depend on $X_2$, so the ground truth Shapley value for feature 2 is $\phi_2 = 0$. As a result, for any prediction, feature 1 has a greater absolute Shapley value than feature 2, i.e. $|\phi_1| \geq |\phi_2|$. We compute Shapley values for $g_\delta$ using different value functions on 500 datapoints $\{\textbf{x}^{(i)}\}_{i=1}^{500}$, sampled from the SCM defined above. We compute CES using the ground truth conditional distributions of $X_i \mid X_j$ for $i\neq j$, which can be obtained analytically in this setting. Figure \ref{fig:pert-exps-dag} shows the results, with the bar plots on the left of Figures \ref{fig:delta_0} and \ref{fig:delta_5} showing the most important features as per the different value functions for $\delta=0, 5$. For $\delta=0$, Figure \ref{fig:delta_0} confirms that the IS values of the ground truth model attribute the greatest feature importance to feature 1 for all datapoints. 
This is expected, as the ground truth model does not depend on $x_2$. For ManifoldShap, we observe that for 4\% of the datapoints, feature 2 is attributed greater importance. This highlights that the robustness of ManifoldShap comes at the cost of reduced causal accuracy of the Shapley values. Furthermore, the CES value function attributes the greatest importance to feature 2 for more than 30\% of the datapoints. This is because CES provides similar Shapley values for positively correlated features. We observe similar behaviour for RJBShap, which attributes the greatest importance to feature 2 for about 20\% of datapoints. This happens because RJBShap provides feature contributions for $\tilde{f}_p(\textbf{x})=f(\textbf{x})p(\textbf{x})$ rather than $f(\textbf{x})$, and can therefore be misleading. When $\delta=5$, Figure \ref{fig:delta_5} shows that, for more than 50\% of datapoints, IS attributes greater importance to feature 2 than feature 1 in the perturbed model. This shows that IS is sensitive to off-manifold perturbation. For ManifoldShap, on the other hand, feature 2 is attributed greater importance for only about $10\%$ of the datapoints, fewer than for any other baseline. We have also plotted the difference between the estimated Shapley values and the ground truth IS values for each value function. For a fair comparison between different value functions, we scale the Shapley values so that $\sum_{i\in \{1,2\}} |\phi_i | = 1$. As $\delta$ increases from 0 to 5, the errors in the Shapley values increase for IS, while the errors in ManifoldShap remain more concentrated around 0 than for any other baseline. The results show that ManifoldShap values, unlike IS, remain robust to off-manifold manipulations, while providing explanations which remain closer to the ground truth IS values overall. CES and RJBShap, on the other hand, can result in misleading explanations. 
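For concreteness, the experimental setup above can be sketched as follows. The analytic Gaussian Mahalanobis cutoff used here to stand in for the VAE-estimated $\mathcal{P}_\alpha$ is our own simplification, not the estimator used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(2)
rho, n = 0.85, 50_000

# SCM: X1 = e1,  X2 = rho*X1 + sqrt(1 - rho^2)*e2,  Y = X1.
e1, e2 = rng.standard_normal(n), rng.standard_normal(n)
X1 = e1
X2 = rho * X1 + np.sqrt(1 - rho**2) * e2
Y = X1

def in_manifold(x1, x2):
    # Stand-in for P_alpha: Mahalanobis ball of the true Gaussian.
    # The chi^2_2 quantile at alpha = 1 - 1e-3 is -2*log(1e-3) ~ 13.82.
    m = (x1**2 - 2 * rho * x1 * x2 + x2**2) / (1 - rho**2)
    return m <= 13.82

def g(x1, x2, delta):
    # Perturbed model: matches Y on the manifold, shifted by delta*x2 outside.
    return x1 + delta * x2 * (~in_manifold(x1, x2))

on = in_manifold(X1, X2)
mse_on = np.mean((g(X1[on], X2[on], delta=5.0) - Y[on]) ** 2)  # 0 by construction
frac_off = (~on).mean()                                        # ~1e-3 of the data
```

On the manifold the perturbed model is exactly the ground truth, which is why no amount of held-out test data can distinguish $g_\delta$ from $g_0$; only the choice of value function determines whether the off-manifold perturbation leaks into the explanations.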
\begin{figure*}[ht] \centering \begin{subfigure}{0.5\textwidth} \centering \includegraphics[height=0.98in]{images/compas_exp/results_compas_combined_1.png} \subcaption{COMPAS dataset results} \label{fig:compas} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \includegraphics[height=0.98in]{images/cc_exp/results_cc_combined_1.png} \subcaption{Communities and Crime dataset results} \label{fig:cc} \end{subfigure} \caption{Experiments on COMPAS and CC datasets. The barplots on the left of each subfigure show the most important features for different Shapley value functions. The boxplots show the approximation errors of the Shapley values for different value functions.}\label{fig:real-world} \end{figure*} \subsection{Real world datasets} In this subsection, we evaluate the effect of adversarial off-manifold manipulation of models on Shapley values using real-world datasets. Specifically, using the same setup as in \citet{foolingshap}, we show that existing methodologies may fail to identify highly problematic model biases, whereas ManifoldShap can mitigate this problem due to its robustness properties. We consider the causal structure in Figure \ref{fig:dag}, where the true features $\tilde{X}_i$ are distinguished from input features $X_i$, and therefore IS is equivalent to MS here. \paragraph{Datasets.} The COMPAS dataset, collected by ProPublica \citep{machinebias}, includes information for 6172 defendants from Broward County, Florida. This information comprises 52 features, including defendants' criminal history and demographic attributes. The sensitive attribute in this dataset is defendants' race. Each defendant is classified based on whether they are at high risk of recidivism. The second dataset, Communities and Crime (CC), is a UCI dataset \citep{Dua:2019} which includes crime data in communities across the US, where each community constitutes a datapoint comprising 128 features.
The sensitive attribute in CC is the percentage of Caucasian population. From here onwards, we use `race' to refer to the sensitive attribute for both datasets. Each community is labelled according to whether its proportion of violent crime is above the median. \paragraph{Biased classifier.} Following the strategy of \citet{foolingshap}, we construct the binary classifier $f$ to be dependent only on the sensitive feature for both datasets. Additional details are given in Appendix \ref{subsec:experimental_detals_app}. \paragraph{Manifold estimation.} As in \citet{foolingshap}, we determine the manifold $\mathcal{Z}$ by training an OOD classifier. In particular, we follow the strategy in \citet{foolingshap} by perturbing each datapoint on randomly chosen features, and subsequently using these newly generated perturbations to train an OOD classifier. \paragraph{Out of manifold perturbation.} To perturb the model outside the manifold $\mathcal{Z}$, we construct 2 synthetic features (referred to as `unrelated columns'), following \citet{foolingshap}. For datapoints that lie outside $\mathcal{Z}$, only the `unrelated columns' are used to classify the datapoints. However, unlike in \citet{foolingshap}, these `unrelated columns' are positively correlated with race. This is done to highlight a shortcoming of CES: even though CES is an on-manifold value function, the positive correlation between the unrelated columns and race `fools' CES into attributing non-zero credit to the synthetic features. \paragraph{Results.} We compute the Shapley values for the perturbed models on 500 datapoints from a randomly chosen held-out dataset. We use the supervised approach to estimate CES as outlined in Appendix \ref{subsec:CES-comp}.
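The manifold-estimation step above can be sketched as follows. This is an assumed toy version of the strategy (two correlated synthetic features and a hand-rolled logistic regression), not the actual datasets or classifier used in the experiments: each datapoint is perturbed on a randomly chosen feature, and a binary classifier is then trained to separate original from perturbed points.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy data: two strongly correlated features standing in for a
# real dataset's on-manifold points.
n = 1000
x1 = rng.normal(size=n)
X_in = np.column_stack([x1, x1 + 0.1 * rng.normal(size=n)])

# Perturb each datapoint on a randomly chosen feature: replace that
# coordinate with an independent wide draw, breaking the correlation.
X_out = X_in.copy()
j = rng.integers(0, 2, size=n)
X_out[np.arange(n), j] = 3.0 * rng.normal(size=n)

# Train a small logistic-regression OOD classifier (on-manifold = 1),
# with |x1 - x2| as an extra feature plus a bias column.
X = np.vstack([X_in, X_out])
y = np.concatenate([np.ones(n), np.zeros(n)])
feats = np.column_stack([X, np.abs(X[:, 0] - X[:, 1]), np.ones(2 * n)])

w = np.zeros(feats.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-feats @ w))
    w -= 0.1 * feats.T @ (p - y) / len(y)   # gradient step on log-loss

def in_manifold(points):
    f = np.column_stack([points, np.abs(points[:, 0] - points[:, 1]),
                         np.ones(len(points))])
    return 1.0 / (1.0 + np.exp(-f @ w)) > 0.5

assert in_manifold(np.array([[0.5, 0.5]]))[0]        # correlated: on-manifold
assert not in_manifold(np.array([[0.0, 4.0]]))[0]    # decorrelated: flagged OOD
```

The resulting decision function plays the role of the indicator for $\mathcal{Z}$: points that the classifier recognises as original data are treated as on-manifold, and model evaluations are restricted accordingly.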
The barplots in Figures \ref{fig:compas} and \ref{fig:cc} show the percentage of data points in the COMPAS and CC datasets, respectively, for which each feature shows up as the top feature as per different value functions. For RJBShap, CES, and IS, more data points in both datasets have their top feature among the `unrelated columns' than have race as the top feature. For IS, this happens as a result of the OOD perturbation of the model, and shows that when using IS, biases in the model can be hidden by perturbing the model out of manifold. For RJBShap, this could be explained by the fact that it explicitly depends on the joint density $p(\textbf{x})$ of the data. Since the `unrelated columns' are positively correlated with race, the dependence of the density $p(\textbf{x})$ on these features and on race is similar. As a result, the `unrelated columns' get non-zero attributions in RJBShap. This positive correlation between race and the `unrelated columns' also causes CES to attribute similar importance to the `unrelated columns' as to race, even though CES only evaluates the function on the manifold. This can be especially misleading when the data contains multiple correlated features which are not used by the model. For ManifoldShap, on the other hand, the majority of the datapoints have race as the top feature, and none of them have a top feature among the `unrelated columns'. Figure \ref{fig:real-world} also shows the difference between the estimated Shapley values and the ground truth IS values of the biased model. We have again rescaled the Shapley values so that $\sum_{i \in [d]}|\phi_i| = 1$ for a fair comparison between different value functions.
We can see that for the feature race, the errors of ManifoldShap are more concentrated around 0 than for any other baseline considered. For the `unrelated columns', the ManifoldShap values are $\hat{\phi}_i=\phi_i=0$, i.e., ManifoldShap satisfies the sensitivity property in this case. This shows that ManifoldShap is significantly more robust to adversarial manipulation of the function outside the manifold, as well as robust to the attribution of credit based on correlations among features. In this way, ManifoldShap retains the `causal interpretation' of Shapley values, while restricting the function evaluation to the data manifold. \section{DISCUSSION AND LIMITATIONS} In this paper, we propose ManifoldShap, a Shapley value function which provides a compromise between existing on- and off-manifold value functions, providing explanations which are robust to off-manifold perturbations of the model while estimating the causal contribution of features. However, ManifoldShap also has its limitations. While our work does not make any assumptions on the set $\mathcal{Z}$, the properties of ManifoldShap are inherently linked to the choice of $\mathcal{Z}$. ManifoldShap is only robust to perturbations of the model outside $\mathcal{Z}$, and perturbations inside $\mathcal{Z}$ could lead to significant changes in the computed Shapley values. It is therefore important to choose a set $\mathcal{Z}$ that is a good representative of the true data manifold, as otherwise the Shapley values may not be robust to off-manifold perturbations. Additionally, as pointed out in Section \ref{sec:comparison}, restricting model evaluations to the set $\mathcal{Z}$ can reduce the causal accuracy of Shapley values. This becomes especially evident when the data manifold $\mathcal{Z}$ is \textit{sparse} or low-dimensional relative to the space $\mathcal{X}$. We highlight this empirically in Appendix \ref{subsec:corr}.
Likewise, as we show in Appendix \ref{sec:properties}, the sensitivity and symmetry properties of ManifoldShap are also dependent on the properties of $\mathcal{Z}$. It is therefore worth exploring methodologies for choosing $\mathcal{Z}$ which provide the ideal trade-off between desirable properties like causal accuracy and robustness of explanations. We believe these limitations suggest interesting research questions that we leave for future work. \subsubsection*{Acknowledgements} We would like to thank Dominik Janzing for his valuable suggestions and insightful discussions. We are also grateful to Kailash Budhathoki and Philipp Faller for providing feedback on an earlier version of the manuscript.
\section{INTRODUCTION} In 3-dimensional space, indistinguishable particles obey Fermi-Dirac statistics (fermions) or Bose-Einstein statistics (bosons). For both fermions and bosons, upon the exchange of two indistinguishable particles, the system wave function gains a $\pi$ or $2\pi$ phase change. However, when restricted to 2-dimensional space, particles can obey fractional statistics\cite{PhysRevLett.48.1144}. This means that when two indistinguishable particles in 2-dimensional space are exchanged, the system wave function gains a statistical phase change, ranging continuously from $0$ to $2\pi$. Such quasiparticles are called anyons. Anyons can be grouped into abelian and nonabelian anyons. Abelian anyons are particles that realize 1-dimensional representations of braid groups. In nature, abelian anyons are believed to exist and be responsible for the fractional quantum Hall (FQH) effect\cite{PhysRevLett.48.1559,PhysRevLett.50.1395,PhysRevLett.53.722}. Nonabelian anyons are particles that realize multi-dimensional representations of braid groups. They are critical in topological quantum computing, for example, in the Kitaev fault-tolerant quantum computation models \cite{AYu20032,Alexei20062}. Recently, interest in anyons has been enhanced by the developing field of quantum computing because of their potential for implementing fault-tolerant quantum computing architectures. Several theoretical schemes have been proposed to directly observe the fractional statistics associated with the anyon braiding motion\cite{PhysRevLett.94.166802,PhysRevLett.96.016802,PhysRevLett.96.016803,naturep,PhysRevLett.91.090402,nphys287,Zhang20112007,nphys943,PRLDuan,prsa464}. These schemes fall mainly into two approaches: the first is proposed to be realized in FQH systems, and the second makes use of the Kitaev models.
In FQH systems, it is difficult to directly observe anyonic fractional statistics, and to introduce or resolve individual anyons\cite{frometoa}, when compared with the schemes using the Kitaev spin lattice models. Experimental demonstrations in photon systems using the Kitaev spin lattice models have been realized\cite{1367-2630-11-8-083010,PhysRevLett.102.030502}. However, the anyons there are not protected from local noise and there is no explicit particle interpretation of the excitations\cite{nphys943}, because the background Hamiltonian vanishes in such photon systems. In contrast, the background Hamiltonian can be simulated in nuclear magnetic resonance (NMR) systems. In our work, an NMR quantum information processor is used to demonstrate the anyon braiding scheme proposed by Han et al.\cite{PRLDuan} in the smallest Kitaev system, utilizing 6 qubits. The six-body ground state preparation, anyon excitations, and anyonic braiding operations are realized using a 7-qubit molecule in liquid-state NMR. By comparing the two final states, one of which is obtained after the anyon creation, braiding, and fusion processes while the other does not undergo such processes, the phase difference, which is mapped into a frequency change of NMR spectrum peaks in our experiment, can be observed. \section{The Kitaev $k\times k$ square lattice model} The first Kitaev spin lattice model\cite{AYu20032} is a $k\times k$ square lattice on the torus, with a qubit on each of the bonds (Figure \ref{anyonscheme}); here we define the bonds as the minimal lines forming the lattice. The total number of qubits is $2k^{2}$. The spin lattice contains vertices and faces. A vertex $v$ is the intersection of four bonds. A face $f$ is the surface whose boundary is defined by four bonds.
We can then define a Hamiltonian as \begin{align} H_{K}=-\sum_{v}A_{v}-\sum_{f}B_{f}\label{hamiltonian}, \end{align} where \begin{align} A_{v}=\Pi_{j\in vertex(v)}\sigma^{x}_{j},\quad B_{f}=\Pi_{j\in face(f)}\sigma^{z}_{j}. \end{align} $A_{v}$ ($B_{f}$) represents the 4-body interaction among the qubits which live on the vertex $v$ (the face $f$). For all the vertices and faces, the ground state $|\Psi _{ground} \rangle$ of the Hamiltonian $H_{K}$ satisfies \begin{align} A_{v}|\Psi _{ground} \rangle=|\Psi _{ground} \rangle,\quad B_{f}|\Psi _{ground}\rangle=|\Psi _{ground} \rangle. \end{align} The ground states are 4-fold degenerate. They form a protected subspace $G$, \begin{equation} G=\{|\xi\rangle \in N:\ A_{v}|\xi\rangle = |\xi\rangle,\ B_{f}|\xi\rangle = |\xi\rangle\ \mbox{for all}\ v\ \mbox{and}\ f \}, \end{equation} where $N$ is the Hilbert space of the $2k^{2}$ qubits. This is the definition of the toric code, which is a special kind of stabilizer code\cite{AYu20032}; $A_{v}$ and $B_{f}$ are its stabilizer operators. Because of the periodic boundary conditions, for each qubit $j$, $\sigma^{x}_{j}$ and $\sigma^{z}_{j}$ each appear in exactly two of the ${A_{v}}$'s and ${B_{f}}$'s, respectively. It follows easily that \begin{equation} \Pi_{v}A_{v}=1,\quad \Pi_{f}B_{f}=1.\label{relationship} \end{equation} If a state fails to satisfy several (say, $n$) of the constraints $A_{v}|\Psi \rangle=|\Psi \rangle$ and $B_{f}|\Psi\rangle=|\Psi \rangle$, it is an excited state with $n$ elementary excitations (quasiparticles). The relationships in Eq. \ref{relationship} imply that the quasiparticles appear in pairs. \begin{figure}[!ht] \centering \includegraphics[width=1.7in,height=1.5in]{kitaevmodel.eps} \caption{Illustration of the first Kitaev model. The first Kitaev spin lattice model is a $k\times k$ square lattice on the torus. There is a qubit on each bond.
The operators $A_{v}$ and $B_{f}$ act on the qubits of the vertex $v$ and face $f$, respectively.} \label{anyonscheme} \end{figure} \begin{figure}[!ht] \includegraphics[width=3in,height=1.6in]{anyoncreationandbraidingn.eps} \caption{Illustration of anyon creation and braiding operations. (a) The creation of two $\it{m}$ ($\it{e}$) anyons by the operation $X_{\it{mm}}$ ($Z_{\it{ee}}$). $X_{\it{mm}}$ denotes a $\sigma_{x}$ operation on qubit $\it{mm}$, and $Z_{\it{ee}}$ denotes a $\sigma_{z}$ operation on qubit $\it{ee}$. The two $\it{m}$ anyons are localized on face $f1$ and face $f2$. The two $\it{e}$ anyons are localized on vertex $v1$ and vertex $v2$. (b) The braiding motion of an $\it{m}$ anyon around an $\it{e}$ anyon by the operations $X_{l1}X_{l2}X_{l3}X_{l4}$. Qubits $l1$, $l2$, $l3$, and $l4$ are the qubits along the braiding path.} \label{anyonbraiding} \end{figure} If a $\sigma_{x}$ operation is applied to a qubit of a ground state, for example qubit $\it{mm}$, the state wave function becomes $|\Psi\rangle_{\it{mm}}=X_{\it{mm}}|\Psi _{ground} \rangle$. It satisfies $B_{f1}|\Psi\rangle_{\it{mm}} =-|\Psi\rangle_{\it{mm}}$ and $B_{f2}|\Psi\rangle_{\it{mm}} =-|\Psi\rangle_{\it{mm}}$, where $f1$ and $f2$ are the two faces next to qubit $\it{mm}$. That means two quasiparticles have been created at those particular locations. The two quasiparticles can be considered as two ``defects" localized on faces $f1$ and $f2$. They are called $\it{m}$ particles. If instead a $\sigma_{z}$ operation is applied to a qubit of a ground state, for example qubit $\it{ee}$, the state wave function becomes $|\Psi\rangle_{\it{ee}}=Z_{\it{ee}}|\Psi _{ground} \rangle$. It satisfies $A_{v1}|\Psi\rangle_{\it{ee}} =-|\Psi\rangle_{\it{ee}}$ and $A_{v2}|\Psi\rangle_{\it{ee}} =-|\Psi\rangle_{\it{ee}}$, so again a pair of quasiparticles is created.
Here, $v1$ and $v2$ are the two neighboring vertices connected by the bond on which qubit $\it{ee}$ lives. These two quasiparticles can also be considered as ``defects" localized on vertices $v1$ and $v2$; they are called $\it{e}$ particles. The states with quasiparticles (excitations) are excited states (Figure \ref{anyonbraiding}(a)). Since two $\it{m}$ ($\it{e}$) particles at the same site annihilate, an $\it{m}$ ($\it{e}$) particle can be moved by applying $\sigma_{x}$ ($\sigma_{z}$) operations along a path (Figure \ref{anyonbraiding}(b)). A braiding operation moves an $\it{m}$ ($\it{e}$) particle around an $\it{e}$ ($\it{m}$) particle along a closed path, which is equivalent to two successive particle exchanges\cite{prsa464}. For fermions and bosons, states do not change after two successive particle exchanges. For the $\it{m}$ and $\it{e}$ particles, it has been shown that after a braiding operation the global state gains a $\frac{\pi}{2}\times 2$ phase change\cite{AYu20032}, which differs from fermions and bosons. Therefore, the $\it{m}$ and $\it{e}$ particles are anyons that obey fractional statistics. \section{The six-qubit Kitaev spin lattice model and the experimental scheme} The minimum number of qubits needed to implement the smallest version of the periodic Kitaev model for anyon braiding operations is eight. However, by abandoning the periodic boundary conditions, the spin lattice model can be extended from a square lattice to any planar graph\cite{planar}, and an anyonic model with six qubits can be found in which we can demonstrate braiding statistics\cite{PRLDuan}. The graphic structure of the six-qubit model is shown in Figure \ref{graphicmodel}(a).
The Hamiltonian of the system is \begin{align} H_{6}&=-A_{1}-A_2-B_1-B_2-B_3-B_4, \end{align} where \begin{align} A_1&=\sigma^x_1\sigma^x_2\sigma^x_3,\nonumber\\ A_2&=\sigma^x_3\sigma^x_4\sigma^x_5\sigma^x_6,\nonumber\\ B_1&=\sigma^z_1\sigma^z_3\sigma^z_4,\nonumber\\ B_2&=\sigma^z_2\sigma^z_3\sigma^z_5,\nonumber\\ B_3&=\sigma^z_4\sigma^z_6,\nonumber\\ B_4&=\sigma^z_5\sigma^z_6.\nonumber \end{align} The ground state of the six-qubit Kitaev spin lattice is \begin{align}|\Psi _{ground}\rangle=\frac{1}{2} (|000000\rangle + |111000\rangle + |110111\rangle + |001111\rangle).\end{align} Because the boundary conditions have changed, the ground state is no longer degenerate. The ground state can be created from the six-qubit graph state shown in Figure \ref{graphicmodel}(b)\cite{PRLDuan}. A graph state is a type of multi-qubit state represented by a graph with vertex set $V$ and edge set $E$, and is defined as \begin{align}|G\rangle=\Pi_{(i,j)\in E}U_{i,j}|+\rangle^{\bigotimes V},\end{align} where the operator $U_{i,j}$ is the controlled-$\sigma_z$ operation between qubits $i$ and $j$, and $|+\rangle=\frac{1}{\sqrt{2}}(|0\rangle +|1\rangle)$. The six-qubit graph state corresponding to the graph in Figure \ref{graphicmodel}(b) is \begin{align}|G_{6}\rangle=U_{1,2}U_{1,3}U_{3,6}U_{4,6}U_{5,6}|++++++\rangle.\end{align} The ground state of the six-qubit Kitaev spin lattice is then \begin{align}|\Psi _{ground}\rangle=O|G_{6}\rangle,\end{align} where $O=IHHHHI$, with $I$ the identity operator and $H$ the Hadamard operator. Thus $|\Psi _{ground}\rangle$ can be prepared by first preparing the six-qubit graph state $|G_{6}\rangle$ and then implementing the $O$ operation.
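Since the six-qubit Hilbert space is only 64-dimensional, this preparation can be checked directly on a computer. The following sketch (using numpy; the qubit ordering and gate conventions are our own assumptions) builds $|G_{6}\rangle$, applies $O$, and verifies both the four-term expansion of $|\Psi _{ground}\rangle$ and the stabilizer conditions:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)

def op(paulis):
    """Tensor product over qubits 1..6 (qubit 1 most significant)."""
    return reduce(np.kron, [paulis.get(q, I2) for q in range(1, 7)])

def cz(i, j):
    """Controlled-sigma_z between qubits i and j (a diagonal gate)."""
    d = np.ones(64)
    for b in range(64):
        if (b >> (6 - i)) & 1 and (b >> (6 - j)) & 1:
            d[b] = -1.0
    return np.diag(d)

def basis(bits):
    v = np.zeros(64)
    v[int(bits, 2)] = 1.0
    return v

# |G_6> = U_12 U_13 U_36 U_46 U_56 |++++++>
plus6 = reduce(np.kron, [np.array([1., 1.]) / np.sqrt(2)] * 6)
G6 = cz(1, 2) @ cz(1, 3) @ cz(3, 6) @ cz(4, 6) @ cz(5, 6) @ plus6

# O = I H H H H I, i.e. Hadamards on qubits 2-5
psi = op({2: H, 3: H, 4: H, 5: H}) @ G6

target = 0.5 * (basis('000000') + basis('111000')
                + basis('110111') + basis('001111'))
assert np.allclose(psi, target)

# psi is a +1 eigenstate of all six stabilizers A_1, A_2, B_1..B_4
stabilizers = [op({1: X, 2: X, 3: X}), op({3: X, 4: X, 5: X, 6: X}),
               op({1: Z, 3: Z, 4: Z}), op({2: Z, 3: Z, 5: Z}),
               op({4: Z, 6: Z}), op({5: Z, 6: Z})]
for s in stabilizers:
    assert np.allclose(s @ psi, psi)
```

Because a stabilizer state is determined uniquely (up to a global phase) by its stabilizer group, passing these checks confirms that $O|G_{6}\rangle$ is exactly the stated ground state.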
For the ground state of the six-qubit system in Figure \ref{graphicmodel}(a), if a $\sigma_x$ operation is applied to qubit 4, a pair of $\it{m}$ particles is created on its two neighboring faces, and if a $\sigma_z$ operation is applied to qubit 3, a pair of $\it{e}$ particles is created on its two neighboring vertices. By applying successive $\sigma_x$ operations on qubits 6, 5, 3, and 4, one $\it{m}$ particle moves in a loop around one $\it{e}$ particle. After such a braiding operation, the global wave function acquires a phase factor of $-1$. By observing this phase change, one can verify the fractional statistics of anyons. \begin{figure}[!ht] \centering \includegraphics[width=2in,height=1.5in]{graphicmodeln.eps} \caption{(a) The six-qubit Kitaev model and its braiding loop. A pair of $\it{m}$ ($\it{e}$) anyons is created by the operation $X_{4}$ ($Z_{3}$), and the braiding operation is realized by $X_{6}X_{5}X_{3}X_{4}$. (b) The graph state that is equivalent, under local unitary operations, to the ground state of the Hamiltonian in (a). The corresponding qubits in the NMR spin system are also labeled at each vertex.} \label{graphicmodel} \end{figure} Han et al.\cite{PRLDuan} give the basic circuit for ground state preparation, anyon creation, anyon braiding, and anyon fusion. It should be noted that if one applies the braiding operation to a state with a pair of $\it{e}$ particles and a pair of $\it{m}$ particles (Figure \ref{anyonbraiding}(b)), the phase change after braiding is a global phase, which cannot be observed directly in experiments. In the scheme proposed by Han et al.\cite{PRLDuan}, the anyon creation step is therefore realized by $\sigma_x$ and $\sqrt{\sigma_z}=e^{i\frac{\pi}{4}}e^{-i\frac{\pi}{4}\sigma_{z}}$ instead of $\sigma_x$ and $\sigma_z$. The $\sigma_x$ operation creates a pair of $\it{m}$ particles. The $\sqrt{\sigma_z}$ operation creates a superposition between the states with and without a pair of $\it{e}$ particles.
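The $-1$ phase from this braiding loop can be verified numerically before turning to the interference scheme. In the sketch below (our own check, with the ground state entered directly from its four-term expansion), a pair of $\it{e}$ particles is created with $Z_3$, a pair of $\it{m}$ particles with $X_4$, and the loop $X_6X_5X_3X_4$ is applied:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def op(paulis):
    # Tensor product over qubits 1..6 (qubit 1 most significant)
    return reduce(np.kron, [paulis.get(q, I2) for q in range(1, 7)])

def basis(bits):
    v = np.zeros(64)
    v[int(bits, 2)] = 1.0
    return v

ground = 0.5 * (basis('000000') + basis('111000')
                + basis('110111') + basis('001111'))

# Create one e pair (Z on qubit 3) and one m pair (X on qubit 4)
excited = op({4: X}) @ op({3: Z}) @ ground

# Move the m particle around the e particle: X_6 X_5 X_3 X_4
braid = op({6: X}) @ op({5: X}) @ op({3: X}) @ op({4: X})

# The braiding loop multiplies the state by the statistical phase -1
assert np.allclose(braid @ excited, -excited)
```

The loop operator is exactly the vertex stabilizer $A_2=\sigma^x_3\sigma^x_4\sigma^x_5\sigma^x_6$, which anticommutes with the $Z_3$ used to create the $\it{e}$ pair while acting as $+1$ on the ground state; this is precisely where the $-1$ comes from.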
Therefore, after the anyon creation step, the state of the system is \begin{align}|\Psi\rangle=\frac{1}{\sqrt{2}}(|\psi_{1}\rangle+|\psi_{2}\rangle),\end{align} where $|\psi_{1}\rangle$ is a state with a pair of $\it{m}$ particles only, and $|\psi_{2}\rangle$ is a state with a pair of $\it{m}$ particles together with a pair of $\it{e}$ particles. $|\psi_{1}\rangle$ does not change under the braiding procedure, because no $\it{e}$ particles are present, whereas $|\psi_{2}\rangle$ acquires a phase factor of $-1$. Therefore the total wave function becomes \begin{align}|\Psi^{'}\rangle=\frac{1}{\sqrt{2}}(|\psi_{1}\rangle-|\psi_{2}\rangle).\end{align} In this way, the phase change caused by the braiding operation becomes a relative phase in front of $|\psi _{2} \rangle$ and is observable in experiments. The fusion operation is realized by applying $\sqrt{\sigma_z}$ and $\sigma_x$. In that case, if there is indeed a statistical phase change of $\frac{\pi}{2}\times 2$ acquired after braiding, the state after fusion is $|\Psi_{ground}\rangle$, meaning that the $\it{e}$ particle pair and the $\it{m}$ particle pair have both been fused. In our experimental scheme, $\sqrt{\sigma_z}^{-1}$ and $\sigma_x$ operations are performed as the fusion step (Figure \ref{curcuits}). With the statistical phase change $\frac{\pi}{2}\times 2$ introduced by a braiding operation (two successive exchanges between anyons), the state after the fusion step should be $|\Psi_{excited}\rangle=\sigma_z |\Psi_{ground}\rangle$; without it, the state after the fusion step should be $|\Psi_{ground}\rangle$. Therefore, by observing the difference between the state after the fusion step and the ground state, we can demonstrate the fractional statistics of anyons. \section{Experimental implementation} In the experiment, $^{13}C$-labeled trans-crotonic acid dissolved in d6-acetone was used. The system contains 7 qubits.
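The full creation-braiding-fusion sequence just described can likewise be checked numerically (a sketch under our own gate conventions): with $\sqrt{\sigma_z}^{-1}$ and $\sigma_x$ as the fusion step, the run without braiding returns to $|\Psi_{ground}\rangle$, while the run with braiding ends in $i\,|\Psi_{excited}\rangle$, i.e. the excited state $\sigma_z^{(3)}|\Psi_{ground}\rangle$ up to a global phase.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
S = np.diag([1., 1j])        # sqrt(sigma_z) = e^{i pi/4} e^{-i pi/4 sigma_z}

def op(paulis):
    # Tensor product over qubits 1..6 (qubit 1 most significant)
    return reduce(np.kron, [paulis.get(q, I2) for q in range(1, 7)])

def basis(bits):
    v = np.zeros(64, dtype=complex)
    v[int(bits, 2)] = 1.0
    return v

ground = 0.5 * (basis('000000') + basis('111000')
                + basis('110111') + basis('001111'))

create = op({4: X}) @ op({3: S})            # m pair + superposed e pair
braid = op({6: X}) @ op({5: X}) @ op({3: X}) @ op({4: X})
fuse = op({3: S.conj().T}) @ op({4: X})     # sqrt(sigma_z)^{-1} and sigma_x

# Without braiding, the anyons simply fuse back to the ground state;
# with braiding, the statistical phase leaves the system in the excited
# state i |Psi_excited> = i Z_3 |Psi_ground>.
assert np.allclose(fuse @ create @ ground, ground)
assert np.allclose(fuse @ braid @ create @ ground, 1j * (op({3: Z}) @ ground))
```

This reproduces the interference argument above: only the $|\psi_{2}\rangle$ branch, which carries the $\it{e}$ pair, picks up the braiding phase, so the fusion step maps the braided and unbraided runs to orthogonal final states.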
The molecule and its parameters are described in Figure \ref{moleculestructure}. To implement the anyon braiding, we first prepared the molecule in the labeled pseudopure state with a deviation matrix of the form $\rho_i=Z_{H1}{\bf{0}}_{C1}{\bf{0}}_{M}{\bf{0}}_{C2}{\bf{0}}_{C4}{\bf{0}}_{H2}{\bf{0}}_{C3}$, where ${\bf{0}}=|0\rangle\langle 0|$ and $Z$ is the Pauli matrix $\sigma_z$. We chose H1 as the label qubit, and C1, M, C2, C4, H2, C3 as qubits 1, 2, 3, 4, 5 and 6, to match Figure \ref{graphicmodel}(b), using the neighboring couplings to shorten the gate operations in the implementation. \begin{figure}[!ht] \centering \includegraphics[width=3in,height=2.5in]{moleculestructure.eps} \caption{Characteristics of the molecule of trans-crotonic acid\cite{nature,molecule}. The chemical shifts (diagonal elements) and J-coupling constants (off-diagonal elements) are given in $Hz$. The spin-lattice and spin-spin relaxation times T1 and T2 are listed at the bottom. The chemical shifts are given with respect to reference frequencies of 700.13 $MHz$ (hydrogens) and 176.05 $MHz$ (carbons). The three hydrogen nuclei in the methyl group form a spin-$3/2$ group M. After using a gradient-based subspace selection, this group acts in its spin-$1/2$ subspace\cite{nature}. Therefore, M can be used as a single qubit. We thus have 7 qubits, including the two protons and four carbons.} \label{moleculestructure} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=3.5in,height=1.6in]{curcuitsn.eps} \caption{The quantum network for the experiment with anyonic manipulation. $H$ represents the Hadamard operation. $Z=\sigma_z$, $X=\sigma_x$. $Y90$ is the read pulse, $Y90=e^{-i\frac{\pi}{4}\sigma_{y}}$.} \label{curcuits} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=2.8in,height=1.6in]{curcuitin.eps} \caption{The quantum network for the experiment without anyonic manipulation.
It serves as a comparison with the experiment in Figure \ref{curcuits}.} \label{curcuiti} \end{figure} Figure \ref{curcuits} and Figure \ref{curcuiti} show the circuits for the experiments with and without anyonic manipulation, respectively; the two experiments were carried out for comparison. There is a part indicated as ``Measurement" from which one can observe the phase change from the spectrum of the label qubit. The wave functions of the labeled states (a), (b), (c), (d), (e) in Figure \ref{curcuits}, and (f), (g) in Figure \ref{curcuiti} are as follows: \begin{align} |\Psi _a \rangle&=\frac{1}{2} (|0_{C1}0_{M}0_{C2}0_{C4}0_{H2}0_{C3}\rangle + |1_{C1}1_{M}1_{C2}0_{C4}0_{H2}0_{C3}\rangle \nonumber\\&+ |1_{C1}1_{M}0_{C2}1_{C4}1_{H2}1_{C3}\rangle + |0_{C1}0_{M}1_{C2}1_{C4}1_{H2}1_{C3}\rangle)\nonumber\\&=|\Psi _{ground} \rangle\nonumber \end{align} \begin{align} |\Psi _b \rangle&=\frac{1}{2} (|0_{C1}0_{M}0_{C2}1_{C4}0_{H2}0_{C3}\rangle + i|1_{C1}1_{M}1_{C2}1_{C4}0_{H2}0_{C3}\rangle \nonumber\\&+ |1_{C1}1_{M}0_{C2}0_{C4}1_{H2}1_{C3}\rangle + i|0_{C1}0_{M}1_{C2}0_{C4}1_{H2}1_{C3}\rangle)\nonumber\\&=X_{C4}(\frac{1}{\sqrt{2}}(e^{i\frac{\pi}{4}}|\Psi _{ground} \rangle+e^{-i\frac{\pi}{4}}|\Psi _{excited} \rangle))\nonumber\\&=\frac{1}{\sqrt{2}}(|\psi_{1}\rangle+|\psi_{2}\rangle)\nonumber \end{align} \begin{align} |\Psi _c \rangle&=\frac{1}{2} (|0_{C1}0_{M}1_{C2}0_{C4}1_{H2}1_{C3}\rangle + i|1_{C1}1_{M}0_{C2}0_{C4}1_{H2}1_{C3}\rangle \nonumber\\&+ |1_{C1}1_{M}1_{C2}1_{C4}0_{H2}0_{C3}\rangle + i|0_{C1}0_{M}0_{C2}1_{C4}0_{H2}0_{C3}\rangle)\nonumber\\&=X_{C4}(\frac{1}{\sqrt{2}}(e^{i\frac{\pi}{4}}|\Psi _{ground} \rangle-e^{-i\frac{\pi}{4}}|\Psi _{excited} \rangle))\nonumber\\&=\frac{1}{\sqrt{2}}(|\psi_{1}\rangle-|\psi_{2}\rangle)\nonumber \end{align} \begin{align} |\Psi _d \rangle&=\frac{i}{2} (|0_{C1}0_{M}0_{C2}0_{C4}0_{H2}0_{C3}\rangle - |1_{C1}1_{M}1_{C2}0_{C4}0_{H2}0_{C3}\rangle \nonumber\\&+
|1_{C1}1_{M}0_{C2}1_{C4}1_{H2}1_{C3}\rangle - |0_{C1}0_{M}1_{C2}1_{C4}1_{H2}1_{C3}\rangle)\nonumber\\&=iZ_{C2}|\Psi _{ground}\rangle=i|\Psi _{excited} \rangle \nonumber \end{align} \begin{align} |\Psi _e \rangle&=\nonumber\\&\frac{i\sqrt{2}}{2} (|0_{C1}0_{M}1_{C2}0_{C4}0_{H2}0_{C3}\rangle + |1_{C1}1_{M}1_{C2}1_{C4}1_{H2}1_{C3}\rangle)\nonumber \end{align} \begin{align} |\Psi _f \rangle&=\frac{1}{2} (|0_{C1}0_{M}0_{C2}0_{C4}0_{H2}0_{C3}\rangle + |1_{C1}1_{M}1_{C2}0_{C4}0_{H2}0_{C3}\rangle \nonumber\\&+ |1_{C1}1_{M}0_{C2}1_{C4}1_{H2}1_{C3}\rangle + |0_{C1}0_{M}1_{C2}1_{C4}1_{H2}1_{C3}\rangle)\nonumber\\&=|\Psi _{ground} \rangle\nonumber \end{align} \begin{align} |\Psi _g \rangle&=\nonumber\\&\frac{\sqrt{2}}{2} (|0_{C1}0_{M}0_{C2}0_{C4}0_{H2}0_{C3}\rangle + |1_{C1}1_{M}0_{C2}1_{C4}1_{H2}1_{C3}\rangle) \end{align} In the experiment with anyonic manipulation, $|\Psi _a \rangle$ is obtained after the ground state preparation. It is the ground state of the six-qubit Kitaev model. After anyon creation, the wave function of the state is $|\Psi _b \rangle$. After anyon braiding, $|\Psi _b \rangle$ is transformed to $|\Psi _c \rangle$. $|\Psi _b \rangle$ and $|\Psi _c \rangle$ are superpositions of $|\psi_{1}\rangle$ and $|\psi_{2}\rangle$. $|\psi_{1}\rangle$ is a state with a pair of $\it{m}$ particles. $|\psi_{2}\rangle$ is a state with a pair of $\it{e}$ particles and a pair of $\it{m}$ particles. Due to the $\it{e}$ anyons present in $|\psi_{2}\rangle$, it acquires a phase change when the $\it{m}$ particle braids around the $\it{e}$ particle. This causes the difference between $|\Psi _b \rangle$ and $|\Psi _c \rangle$. After anyon annihilation, the state becomes the excited state $|\Psi _d \rangle$. In order to observe the phase change directly in relatively simple spectra, the state $|\Psi _d \rangle$ is transformed to $|\Psi _e \rangle$. For the state $|\Psi _e \rangle$, by observing the label qubit, a 2-peak spectrum can be obtained.
The left peak in the spectrum corresponds to $|111111\rangle$ while the right peak corresponds to $|001000\rangle$. In the experiment without anyonic manipulation, the ground state preparation and the measurement procedures are the same as in the experiment with anyonic manipulation. The state $|\Psi _f \rangle$ equals $|\Psi _a \rangle$. The state $|\Psi _g \rangle$ is almost the same as $|\Psi _e \rangle$, except that the qubit C2 is $|0\rangle$ in $|\Psi _g \rangle$ (the global phase is ignored). For the state $|\Psi _g \rangle$, by observing the label qubit, a 2-peak spectrum can be obtained. The left peak in the spectrum corresponds to $|110111\rangle$ while the right peak corresponds to $|000000\rangle$. It is straightforward to conclude that the difference is caused by the anyon braiding illustrated in Figure \ref{curcuits}. Because $\it{m}$ and $\it{e}$ anyons do not obey integral statistics, after two successive exchanges they acquire a phase factor, which is mapped into a frequency change of peaks in our experiment. At the end of the measurement part, the states of qubits H1 and C2 are exchanged via a swap gate. Therefore, C2 becomes the label qubit. The final results are obtained by applying a $\pi/2$ read pulse to C2. It should be mentioned that the J-coupling constant between C2 and H2, $|J_{C2,H2}|=0.66Hz$, is the smallest of the couplings between C2 and the other nuclei. $|J_{C2,H2}|$ can be resolved in our experiments, which means all 64 peaks of the C2 spectrum are resolved (Figure \ref{fitting}). $|J_{C2,H1}|=155.42Hz$ is the largest J-coupling constant of C2. The braiding-induced change in the peak frequencies should equal $|J_{C2,H1}|$, which can be easily and explicitly observed in the experiments.
The gates used in the ground state preparation were realized by combining single-qubit rotations and evolutions of the J-coupling constants between the neighboring qubits, while all the anyonic manipulations were realized by single-qubit rotations\cite{PhysRevA.52.3457,apl76}. The single-qubit rotation pulses were generated using the GRAPE algorithm\cite{GRAPE} for H1 and H2, and were standard Isech-shaped r.f. pulses for M and C1-C4. The J-coupling evolutions were realized by implementing refocusing pulses. We combined all the pulses using a custom-built software compiler, which numerically optimizes refocusing pulses and minimizes the errors due to undesired J-coupling evolutions\cite{natcomm,PhysRevA.78.012328}. The durations of the pulse sequences shown in Figure \ref{curcuiti} and Figure \ref{curcuits} were 195.1 ms and 250.2 ms, respectively. \begin{widetext} \begin{center} \begin{figure}[!ht] \includegraphics[width=5in,height=2.5in]{fittingnew.eps} \caption{(a) The superposed spectra of the theoretical C2 thermal state spectrum (red line) and the experimental pseudopure state spectrum (black line). There are 64 peaks in the thermal state spectrum, each corresponding to a computational basis state. (b) The experimental spectrum corresponding to the experiment in Figure \ref{curcuiti}. It has two dominant peaks, i and j. (c) The experimental spectrum corresponding to the experiment in Figure \ref{curcuits}. It has two dominant peaks, s and t. The amplitude of peak o in the experimental pseudopure state spectrum in (a) is taken as reference to normalize the experimental signals shown in (b) and (c). On the right are the zoomed-in spectra for the peaks o, i, j, t, and s. The states to which the experimental peaks correspond are labeled on top of each zoomed-in spectrum. There is a $J_{C2,H1}$ distance between peaks i and s, and between peaks j and t.
p, q, u, and v are the small peaks that have the same frequencies as those of peaks s, t, i, and j, respectively.} \label{fitting} \end{figure} \end{center} \end{widetext} \section{Experimental results} The final results for the experiments are shown in Figure \ref{fitting}. Figure \ref{fitting}(a) shows the superposed spectra of the simulated C2 thermal state spectrum and the experimental pseudopure state spectrum. It shows the experimentally realized $|000000\rangle$ peak (peak o). It should be mentioned that there is an antiphase peak (peak w) in Figure \ref{fitting}(a). This antiphase peak was caused by the label qubit H1, which was in the $Z$ state. Figure \ref{fitting}(b) displays the spectrum of the state $\frac{i\sqrt{2}}{2} (|0_{C1}0_{M}0_{H1}0_{C4}0_{H2}0_{C3}\rangle + |1_{C1}1_{M}0_{H1}1_{C4}1_{H2}1_{C3}\rangle)$, and Figure \ref{fitting}(c) displays the spectrum of the state \\$\frac{i\sqrt{2}}{2} (|0_{C1}0_{M}1_{H1}0_{C4}0_{H2}0_{C3}\rangle + |1_{C1}1_{M}1_{H1}1_{C4}1_{H2}1_{C3}\rangle)$, both obtained by observing C2, whose deviation density matrix was $X=\sigma_{x}$. Peaks i ($-18.3$~Hz), j ($-137.2$~Hz), s ($137.1$~Hz) and t ($18.2$~Hz) correspond to the states $|110111\rangle$, $|000000\rangle$, $|111111\rangle$ and $|001000\rangle$, respectively. The summed intensity of the two dominant peaks, both in Figure \ref{fitting}(b) and Figure \ref{fitting}(c), is about 0.7, normalized using the intensity of peak o. There is a $155.4~\mathrm{Hz}=|J_{C2,H1}|$ shift between peaks i and s, and between peaks j and t (Figure \ref{fitting}). This frequency difference between the peaks in the two spectra was caused by the anyonic manipulation, demonstrating that after the braiding operation the state with $\it{e}$ and $\it{m}$ anyons acquired a phase change $\delta=2(\frac{\pi}{2} + \eta)$. Here $2\eta$ is the deviation of the experimental phase change from $\pi$.
We suppose the experimentally realized labeled state (a) in Figure \ref{curcuits} and labeled state (f) in Figure \ref{curcuiti} were \begin{align} |{\Psi _f}' \rangle=|{\Psi _a}' \rangle=\alpha |\Psi _{ground} \rangle +\beta |\Psi _{excited} \rangle + \gamma |\Psi _{error} \rangle. \end{align} The above expression reflects that the ground-state preparation was not perfect. $\beta |\Psi _{excited} \rangle$ was responsible for peaks p and q. $\gamma |\Psi _{error} \rangle$ was responsible for the peaks other than i, j, p and q (Figure \ref{fitting}(b)). The experimentally realized labeled state (g) in Figure \ref{curcuiti} was \begin{align} |{\Psi _g}' \rangle=\frac{\sqrt{2}}{2} [\alpha(&|0_{C1}0_{M}0_{C2}0_{C4}0_{H2}0_{C3}\rangle \nonumber\\ + &|1_{C1}1_{M}0_{C2}1_{C4}1_{H2}1_{C3}\rangle)\nonumber\\ +\beta(&|0_{C1}0_{M}1_{C2}0_{C4}0_{H2}0_{C3}\rangle \nonumber\\ + &|1_{C1}1_{M}1_{C2}1_{C4}1_{H2}1_{C3}\rangle)]+\gamma |{\Psi _{error}}' \rangle. \end{align} $|\frac{\beta}{\alpha}|$ can be determined from the peak intensities (denoted as $\Gamma$) of the experimental spectrum shown in Figure \ref{fitting}(b). \begin{align}|\frac{\beta}{\alpha}|=\sqrt{\frac{\Gamma _{p}+\Gamma _{q}}{\Gamma _{i}+\Gamma _{j}}}=0.18\pm 0.09.
\label{grounddata} \end{align} The experimentally realized labeled states (d) and (e) in Figure \ref{curcuits} were \begin{align} |{\Psi _d}' \rangle&=ie^{i\eta}[(-\alpha \sin{\eta} -\beta \cos{\eta}) |\Psi _{ground} \rangle\nonumber\\ &+(\alpha \cos{\eta}-\beta \sin{\eta})|\Psi _{excited} \rangle ]+ \gamma |{\phi _{error}} \rangle\nonumber\\ &=ie^{i\eta}[\alpha '|\Psi _{ground} \rangle+\beta '|\Psi _{excited} \rangle ]+ \gamma |{\phi _{error}}\rangle.\label{stated} \end{align} \begin{align} |{\Psi _e}' \rangle=\frac{\sqrt{2}ie^{i\eta}}{2} &[\alpha ' (|0_{C1}0_{M}0_{C2}0_{C4}0_{H2}0_{C3}\rangle \nonumber\\ &+ |1_{C1}1_{M}0_{C2}1_{C4}1_{H2}1_{C3}\rangle)\nonumber\\ &+\beta ' (|0_{C1}0_{M}1_{C2}0_{C4}0_{H2}0_{C3}\rangle \nonumber\\ &+ |1_{C1}1_{M}1_{C2}1_{C4}1_{H2}1_{C3}\rangle)]+\gamma |{\phi _{error}}' \rangle. \end{align} Here $\eta=\frac{\delta}{2}-\frac{\pi}{2}$, $\alpha '=-\alpha \sin{\eta} -\beta \cos{\eta}$, $\beta '=\alpha \cos{\eta}-\beta \sin{\eta}$. $\alpha '|\Psi _{ground} \rangle$ was responsible for peaks u and v. $\gamma |{\phi _{error}} \rangle$ was transformed from $\gamma |\Psi _{error} \rangle$ via the anyonic manipulation. $\gamma |{\phi _{error}} \rangle$ was responsible for the peaks other than s, t, u and v (Figure \ref{fitting}(c)). $|\frac{\alpha '}{\beta '}|$ can be determined from the experimental spectrum shown in Figure \ref{fitting}(c). \begin{align} |\frac{\alpha '}{\beta '}|=\sqrt{\frac{\Gamma _{u}+\Gamma _{v}}{\Gamma _{s}+\Gamma _{t}}}=0.24\pm 0.06. \label{exciteddata} \end{align} Combining Equation \ref{grounddata} and Equation \ref{exciteddata}, we obtain \begin{align} \tan{\eta}=\frac{|\frac{\alpha '}{\beta '}|-|\frac{\beta}{\alpha}|}{1+|\frac{\beta}{\alpha}|\cdot|\frac{\alpha '}{\beta '}|}=0.06 \pm 0.03. \end{align} $\eta =0.06\pm 0.03=(0.02\pm 0.01)\pi$ and the phase change $\delta=2(\frac{\pi}{2} + \eta)=(1.04\pm 0.02)\pi$.
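The extraction of $\eta$ and $\delta$ from the two measured amplitude ratios can be reproduced numerically; the sketch below simply re-evaluates the formulas above with the quoted central values.

```python
import math

# Peak intensity ratios read off the C2 spectra (central values from the text)
beta_over_alpha = 0.18     # sqrt((Γp+Γq)/(Γi+Γj)), spectrum without braiding
alphap_over_betap = 0.24   # sqrt((Γu+Γv)/(Γs+Γt)), spectrum with braiding

# tan(eta) via the tangent-difference combination used in the text
tan_eta = (alphap_over_betap - beta_over_alpha) / (
    1 + beta_over_alpha * alphap_over_betap)
eta = math.atan(tan_eta)
delta = 2 * (math.pi / 2 + eta)  # total braiding phase

print(round(tan_eta, 2), round(eta / math.pi, 2), round(delta / math.pi, 2))
# → 0.06 0.02 1.04
```

This recovers $\tan\eta\approx 0.06$, $\eta\approx 0.02\pi$ and $\delta\approx 1.04\pi$, consistent with the quoted uncertainties.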
This agrees with the prediction of fractional statistics.\\ \section{Discussion} The signal loss mainly came from spin-spin relaxation and pulse imperfections. By comparing the signal intensities of simulations with and without T2 effects, we estimate that T2 effects contributed about 15\% of the signal loss. Comparing the experimental results (peak intensity $\sim$0.7) with the results of the simulation with T2 effects (peak intensity $\sim$1), we estimate that imperfections in the implementation of the rf pulses caused an additional signal loss of approximately 15\%. The values of $\eta$ in the simulations with and without T2 effects are $0.07$ and $0.06$, respectively, which match the experimental data ($\eta =0.06\pm 0.03$) well. The difference between the two $\eta$ values in the simulations is small, which means that decoherence did not contribute much to the deviation of $\delta$ from $\pi$. The deviation was mainly caused by imperfections of the refocusing protocols and of the implementation of the rf pulses. \section{Conclusion} In summary, we demonstrated anyonic fractional statistics using a 7-qubit NMR system. This is the first demonstration of topological quantum computing using nuclear spins. One advantage of our experimental scheme is that the same technique can be used to simulate the 9-qubit Kitaev spin model proposed by Han et al.\cite{PRLDuan} on an NMR system with more spins, allowing a demonstration that anyonic operations are robust to different braiding paths and thus taking an important step towards showing the fault-tolerance properties of the Kitaev model.\\ \section*{ACKNOWLEDGMENTS} We thank J.-F. Zhang for helpful discussions, and Industry Canada for support at the Institute for Quantum Computing. R.L. acknowledges support from CIFAR and NSERC. G.L. acknowledges support from the National Natural Science Foundation of China (Grant No.
10874098), and the National Basic Research Program of China (2009CB929402).
\section{\label{2}Introduction} In small molecules, excited electronic states may have altered atomic coordinations, which leads to Franck-Condon multi-phonon sidebands in electronic spectra\cite{Herzberg50}. In solids, electronic excited states are often delocalised, eliminating such effects\cite{Rashba82}. However, if excited states are self-localised\cite{Rashba, Sog93} then Franck-Condon effects should reappear in the form of a shift and Gaussian broadening of the pure electronic transition\cite{Cho75}. The electronically excited cluster may relax to its vibrational ground state after excitation so that the fluorescence energy is lower than the energy required for absorption. This is sketched in figure \ref{fc}. In this paper we explore the possibility of a spin Franck-Condon effect and describe what would be observed in this case. We consider a spin cluster such that after excitation the lowest-energy spin configuration differs from that in the ground state, so that a spin relaxation may occur. We calculate the energies and wavefunctions of the cluster for different spin configurations. This is used to evaluate the energies and the strengths of the allowed transitions from the ground state and from the relaxed excited states. In the manganites it is known that the exchange interaction is antiferromagnetic between two Mn$^{3+}$ ions but becomes ferromagnetic\cite{Wollan55} if one of the atoms is ionised to Mn$^{4+}$. This gives the possibility for the exchange to depend on the state of excitation of the Mn ions. Another way in which a scenario similar to the one that we describe can occur is if the spin that we consider is actually a pseudo-spin\cite{Ahmed06} corresponding to the orbital order in manganites\cite{Brink02}. In this case ionising the Mn$^{3+}$ ion will eliminate the orbital order on that site and could also lead to a realignment of the orbital moments on the neighbouring ions\cite{Allen99}.
In this paper we consider a toy model that shows this basic physics and can be adapted to treat the real problems mentioned above. We summarise the procedure for calculating Franck-Condon spectra and then describe how the spin calculation proceeds along the same path. First we calculate the vibrational energies and wavefunctions of the cluster in the electronic ground state and then repeat the calculation for the excited cluster. This enables us to see when the excited state is offset relative to the ground state, as shown in figure \ref{fc}. In this case the excited cluster may make radiationless transitions to the vibrational ground state of the electronically excited cluster. The Franck-Condon principle states that the transition is a vertical line as shown in the figure. More accurately, the intensity of an electronic transition that is accompanied by changes in the vibrational states is proportional to the square of the overlap of the two vibrational wavefunctions. In the spin case we use a model Hamiltonian to calculate the energies and wavefunctions of the spin cluster in the (electronic) ground state and when an electron is excited. We assume that the excited electron is delocalised so that the magnetic cluster is left with a vacancy. This enables us to identify the parameter ranges for which reorientation may occur in the excited state. The electronic transition will occur without a spin flip of the neighbouring ions, which is the spin analogue of the Franck-Condon approximation. The intensity of each transition is then found from the overlap of the two spin states. In Section II we define the spin cluster model and calculate the energies and the eigenstates for the electronic ground state and when an electron is excited. The excitation spectra are calculated in Section III and the paper concludes in Section IV.
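For the conventional vibrational case just summarised, the Franck-Condon overlap factors of two displaced harmonic oscillators take a simple closed form: a Poisson distribution $|\langle 0|n\rangle|^2 = e^{-S}S^n/n!$ in the Huang-Rhys coupling factor $S$ (a standard result; the value of $S$ below is illustrative). A minimal numerical check:

```python
import math

# Franck-Condon factors of a displaced harmonic oscillator:
# |<0|n>|^2 = exp(-S) * S**n / n!, with Huang-Rhys factor S (illustrative value)
S = 2.0
fc = [math.exp(-S) * S**n / math.factorial(n) for n in range(40)]

total = sum(fc)                                 # sum rule: should be ~1
mean_n = sum(n * f for n, f in enumerate(fc))   # mean phonon number equals S
```

The sum rule (total overlap probability equal to one) and the mean phonon number equal to $S$ are the quantities the spin calculation below mimics, with spin overlaps in place of vibrational overlaps.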
\begin{figure} \begin{center} \includegraphics[scale=0.5] {fc.eps} \end{center} \caption{\label{fc}Energy diagram of an electronic transition with phonon coupling along the configurational coordinate $r_i$, a normal mode of the lattice. The upwards arrows represent absorption without phonons. The downwards arrows represent the symmetric process in emission.} \end{figure} \section{\label{method}Methodology} \subsection{ The effective Hamiltonian } We consider a cluster of five spins as shown in figure \ref{gs}a. In the ground state it has the maximal total spin $S_T=5/2$. We assume that after optical excitation an electron from the central site has been excited and has left the cluster, as shown in figure 2b, which is a state with $S_T=2$. It will be seen that this toy model is very rich and allows us to explore this effect in detail. \begin{figure} \begin{center} \includegraphics[scale=0.6] {spincluster.eps} \end{center} \caption{\label{gs}(a) The ground state of the ferromagnetic spin cluster with total spin $S_T=5/2$, (b) the cluster in a state with total spin $S_T=2$ just after the central spin is removed by optical excitation.} \end{figure} The cluster de-excites when a band electron makes a transition back on to the central site. We shall consider both possibilities, namely that the spin of the electron that rejoins the cluster is parallel or antiparallel to the spin of the whole cluster. We assume that there is a ferromagnetic Heisenberg interaction, $J'$, between the central spin and the four neighbours, an antiferromagnetic Heisenberg interaction, $J$, between the neighbours, and that each of the four neighbours experiences a mean-field interaction, $H_{mf}$, with the rest of the lattice. The effective Hamiltonian describing the low-energy electronic spin states is given by, \begin{equation} {\mathcal{H}} = H_0+H_1+H_2, \end{equation} where $H_0$ represents the crystal mean-field interaction.
Then, \begin{equation} H_0=-H_{mf}(S_{1}^{z}+S_{2}^{z}+S_{3}^{z}+S_{4}^{z}), \end{equation} \begin{eqnarray} H_1&=&-J'{\bf S_o}.({\bf S_1}+{\bf S_2}+{\bf S_3}+{\bf S_4}) \end{eqnarray} and \begin{equation} H_2=J({\bf S_1}.{\bf S_2}+{\bf S_2}.{\bf S_3}+{\bf S_3}.{\bf S_4}+{\bf S_4}.{\bf S_1}), \end{equation} where ${\bf S_o}$ is the spin operator of the spin in the centre of the cluster and ${\bf S_1}, {\bf S_2}, {\bf S_3}$ and ${\bf S_4}$ are the spin operators of the nearest neighbours of the central spin. After the central spin, $S_o$, is removed from the cluster as shown in figure \ref{gs}b, the term $H_1$ does not contribute. The effective Hamiltonian representing the excited cluster under the crystal mean field can be given as follows, \begin{eqnarray} {\mathcal{H}}&=&-H_{mf}(S_{1}^{z}+S_{2}^{z}+S_{3}^{z}+S_{4}^{z})+ J(S_{1}^{z}S_{2}^{z}+S_{2}^{z}S_{3}^{z}+S_{3}^{z}S_{4}^{z}+S_{4}^{z}S_{1}^{z}) \nonumber \\ &&+(J/2)(S_{1}^{+}S_{2}^{-}+S_{1}^{-}S_{2}^{+}+S_{2}^{+}S_{3}^{-}+S_{2}^{-}S_{3}^{+} \nonumber \\ &&+S_{3}^{+}S_{4}^{-}+S_{3}^{-}S_{4}^{+}+S_{4}^{+}S_{1}^{-}+S_{4}^{-}S_{1}^{+}), \end{eqnarray} where $S_{i}^{z}$ are the $z$-components of the spin operators and $S_{i}^{\pm}$ are the corresponding raising and lowering operators. The parameters are chosen so that the ground state of the five-spin cluster is ferromagnetic. The energy of the state with total spin $S_T=5/2$ is $E_{5/2}=-4H_{mf}-4J'+4J$. The energies and eigenstates of the states $S_T=3/2$, $S_T=1/2$ and $S_T=-1/2$ are given in table \ref{3/2}. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.4] {E_vs_J.eps} \end{center} \caption{\label{E_J}The eigenenergies $E/H_{mf}$ of the four-spin cluster versus the AFM exchange $J/H_{mf}$. $E_2$ is the ground state in the range $0<J/H_{mf}<0.33$, $E_1^1$ is the ground state in the range $0.33<J/H_{mf}<0.72$, and $E_0^1$ is the ground state for $J/H_{mf}>0.72$.} \end{figure} Similar calculations are done for the four-spin cluster.
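The spectrum of the four-spin ring can also be obtained by direct numerical diagonalization. The sketch below builds the ring Hamiltonian from Kronecker products using standard spin-1/2 operators; note that this normalization need not match the one behind the energies quoted in the text. For $J=1$, $H_{mf}=0$ it recovers the well-known singlet ground-state energy $-2J$ of the four-site Heisenberg ring.

```python
import numpy as np

# Standard spin-1/2 operators (Pauli matrices divided by two)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def op(single, site, n=4):
    """Embed a single-site operator at `site` in an n-spin tensor product."""
    mats = [I2] * n
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def ring_hamiltonian(J, Hmf, n=4):
    """Ring of n spins with AFM nearest-neighbour exchange J and mean field Hmf."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        j = (i + 1) % n
        for s in (sx, sy, sz):
            H += J * op(s, i) @ op(s, j)
        H -= Hmf * op(sz, i)
    return H

evals = np.linalg.eigvalsh(ring_hamiltonian(J=1.0, Hmf=0.0))
print(round(evals[0], 6))  # → -2.0  (singlet ground state of the 4-site ring)
```

Sweeping $J/H_{mf}$ in this way reproduces the level crossings of the ground-state total spin discussed below, although the crossing points depend on the operator normalization adopted.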
Immediately after excitation it will be in the state $S_T=2$, as shown in figure \ref{gs}b. The energies of the four-spin cluster are shown in figure \ref{E_J} as a function of $J/H_{mf}$. There is one state of $S_T=2$, four states of $S_T=1$ (two are degenerate) and six states of $S_T=0$ (three are degenerate). Since the cluster relaxes to the lowest state, we are most interested in the energy and configuration of the lowest state. It is seen that the spin of the ground state of this four-spin cluster changes as a function of $J/H_{mf}$. For small values of $J$ such that $0<J/H_{mf}<0.33$ the spin of the ground state remains $S_T=2$; then there is an intermediate region $0.33<J/H_{mf}<0.72$ where the ground state has $S_T=1$ (state $E_1^1$), and finally for large values of $J/H_{mf}$ it becomes $S_T=0$ (state $E_0^1$). \begin{table}[h!] \caption{The eigenenergies of the states with total spin $S_T=3/2$, $S_T=1/2$ and $S_T=-1/2$.} \label{3/2} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}ll} \hline $S_T$ &Eigenenergy \\ \hline $S_T=3/2$ & $E_{3/2}^{1,2}=\frac{1}{4}(-6H_{mf}+J'$ \\ &$-\sqrt{16J^2+24JJ'+2JJ'^2})$ \\ &$E_{3/2}^{3,4}=\frac{1}{2}(-3H_{mf}-4J')$ \\ &$E_{3/2}^5=\frac{1}{2}(-3H_{mf}+2J-J')$ \\ \hline $S_T=1/2$ &$E_{1/2}^{1,2,3}=0$ \\ &$E_{1/2}^4=0$ \\ &$E_{1/2}^{5,6}=\frac{1}{2}(-2H_{mf}+2J'-\sqrt{2}\sqrt{2H_{mf}^2-4H_{mf}J'+3J'^2})$ \\ &$E_{1/2}^{7,8}=\frac{1}{2}(-2H_{mf}+2J'+\sqrt{2}\sqrt{2H_{mf}^2-4H_{mf}J'+3J'^2})$ \\ &$E_{1/2}^{9,10}=\frac{1}{2}(-2H_{mf}-3J+2J'-\sqrt{4H_{mf}^2-20H_{mf}J'+25J^2-8H_{mf}J'+20JJ'+6J'^2})$ \\ \hline $S_T=-1/2$ &$E_{-1/2}^{1,2,3}=0$ \\ &$E_{-1/2}^4=0$ \\ &$E_{-1/2}^{5,6}=\frac{1}{2}(2H_{mf}+2J'-\sqrt{2}\sqrt{2H_{mf}^2+4H_{mf}J'+3J'^2})$ \\ &$E_{-1/2}^{7,8}=\frac{1}{2}(2H_{mf}+2J'+\sqrt{2}\sqrt{2H_{mf}^2+4H_{mf}J'+3J'^2})$ \\ &$E_{-1/2}^{9,10}=\frac{1}{2}(2H_{mf}-3J+2J'-\sqrt{4H_{mf}^2+20H_{mf}J'+25J^2+8H_{mf}J'+20JJ'+6J'^2})$ \\ \hline \hline \end{tabular*} \end{table} We have three different scenarios, I, II and III
corresponding to the de-excitation from a cluster of total spin $S_T=2$, 1 and 0, respectively. As a band electron combines with the four-spin cluster it will produce a state that is the direct product of the state of the four-spin cluster and the spin of the band electron. It is this state whose overlap with the ground-state wavefunctions must be evaluated in order to obtain the strength of the transition. In scenario I, where $0<J/H_{mf}<0.33$, the four-spin cluster remains in the state $S_T=2$. The cluster can de-excite by absorbing an electron of either spin; in the first case it will go back to the ground state of the five-spin cluster with $S_T=5/2$, and in the second to an excited state of the ground cluster with $S_T=3/2$. This is shown in figure \ref{levels}a. In scenario II, which is valid for $0.33<J/H_{mf}<0.72$, the four-spin cluster first relaxes to the lowest $S_T=1$ state, $E_1^1$, before combining with a band electron of either spin to de-excite to a state with $S_T=3/2$ or $S_T=1/2$. This is shown in figure \ref{levels}b. Finally, in the case where $J/H_{mf}>0.72$, the four-spin cluster first relaxes to the lowest $S_T=0$ state, $E_0^1$, before combining with a band electron of either spin to de-excite to a state with $S_T=1/2$ or $S_T=-1/2$. This is shown in figure \ref{levels}c. In the following section we calculate the energies and probabilities corresponding to each of these processes. We note that for the four-spin cluster we only need to calculate the wavefunctions for the lowest eigenstate for each value of the total spin. We also need the energies of the five-spin cluster for those states that will be reached by de-excitation from the four-spin cluster. \section{\label{results}Results and Discussion} \subsection{Spin de-excitation} {\bf Scenario I}(a): There are two possibilities for this scenario.
First of all, if the band electron combines with the cluster as spin-up, as seen in figure \ref{gs}a, the cluster loses its energy as luminescence and de-excites directly from the energy level-B, $E_2=-4H_{mf}+4J$, with $S_T=2$ to the energy level-A with $S_T=5/2$, as shown in figure \ref{levels}a. Namely, the cluster does not experience any relaxation by losing phonons before or after it optically de-excites, which is represented in figure \ref{levels}a by the dashed down-arrow a. The eigenenergies of these levels and their wavefunctions are listed in table \ref{210}. (b): In this scenario the cluster starts in the same level $S_T=2$, but the band electron combines with the cluster as spin-down on its central site. The cluster de-excites optically not to the ground state but to an excited state at the energy level-G with $S_T=3/2$. This energy level has five states, as shown in table \ref{3/2}. We calculate the overlap of the states $\psi_i$ $(i=1,\dots,5)$ of the five-spin cluster with total spin $S_T=3/2$ with the state formed by the direct product of $\vert\psi_2\rangle$ and an electron of spin down on the central site, $\vert\downarrow\psi_2\rangle$, and the overlap probability $P_i=\vert\langle\psi_i\vert\downarrow\psi_2\rangle\vert^2$. \begin{table}[h!]
\caption{The eigenenergies of the states with $S_T=2$, $S_T=1$ and $S_T=0$ and the wavefunctions for the lowest energy state.} \label{210} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lll} \hline $S_T$ &Eigenenergy & Wave function \\ \hline $S_T=2$ &$E_2=-4H_{mf}+4J$ &$\psi_2=|\uparrow\uparrow\uparrow\uparrow\rangle$ \\ \hline $S_T=1$ &$E_1^1=-H_{mf}-J$ &$\psi_1^1=\frac{1}{2}[|\uparrow\uparrow\uparrow\downarrow\rangle+|\uparrow\uparrow\downarrow\uparrow\rangle+|\uparrow\downarrow\uparrow\uparrow\rangle+|\downarrow\uparrow\uparrow\uparrow\rangle]$ \\ &$E_1^2=-H_{mf}+J$ & \\ &$E_1^3=E_1^4=-H_{mf}$ & \\ \hline $S_T=0$ &$E_0^1=-2.4J$ &$\psi_0^1=\frac{1}{\sqrt{6}}[|\uparrow\downarrow\uparrow\downarrow\rangle+|\downarrow\uparrow \downarrow\uparrow\rangle+|\uparrow\uparrow\downarrow\downarrow\rangle+|\downarrow\downarrow\uparrow\uparrow\rangle+|\uparrow\downarrow\downarrow\uparrow\rangle+|\downarrow\uparrow\uparrow\downarrow\rangle]$\\ &$E_0^2=-J/2$ & \\ &$E_0^3=E_0^4=E_0^5=0$ & \\ &$E_0^6=0.85J$ & \\ \hline \hline \end{tabular*} \end{table} It is found that the cluster de-excites from the energy level-B to the state with energy $E_{3/2}^1=E_{3/2}^2$ with unit probability. The probabilities of de-excitation into the remaining states of the spin cluster corresponding to the total spin $S_T=3/2$ are zero. {\bf Scenario II}: The cluster experiences relaxation by losing thermal energy before it de-excites optically. Namely, before the band electron combines with the cluster, the cluster relaxes thermally from the energy level-B with $S_T=2$ to the energy level-C with $S_T=1$, as shown in figure \ref{levels}b. The new energy level has four states, whose eigenenergies are listed in table \ref{210}. We assume that the cluster relaxes to its lowest energy state $E_1^1$. (a) In this scenario the band electron combines with the cluster as spin-up.
The cluster de-excites optically from the starting state $\vert\uparrow\psi_1^1\rangle$, the direct product of $\psi_1^1$ with a spin-up electron, to the multiplet of energy level-G with $S_T=3/2$; see figure \ref{levels}b. Table \ref{3/2} shows the eigenenergies of level-G with $S_T=3/2$. It is found that the cluster de-excites from the energy level-C to the degenerate level with energy $E_{3/2}=E_{3/2}^1=E_{3/2}^2$ with unit probability. Naturally, we find zero probability for de-excitation into any of the other states of the five-spin cluster with total spin $S_T=3/2$. Finally, the cluster relaxes thermally from the energy level-G with $S_T=3/2$ to the ground-state energy level-A with $S_T=5/2$. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.8]{f-c1.eps}\\ \vspace{1cm}\includegraphics[scale=0.8]{f-c2.eps}\\ \vspace{1cm}\includegraphics[scale=0.8]{f-c3.eps}\\ \end{center} \caption{\label{levels}Relaxation of the cluster according to (a) scenario I, (b) scenario II, (c) scenario III. The multiplicities of the levels are given in brackets.} \end{figure} (b) In this case the cluster absorbs a spin-down electron. The cluster de-excites optically from the energy level-C, with the starting state $\vert\downarrow\psi_1^1\rangle$, to an energy level-F higher than the energy level-G. This new level (level-F) has total spin $S_T=1/2$ and has ten states. The eigenenergies of these ten states of the energy level-F with total spin $S_T=1/2$ have been calculated and are shown in table \ref{3/2}. The states from $E_{1/2}^4$ to $E_{1/2}^{10}$ have zero overlap probability with the starting state, while the states $E_{1/2}^1$, $E_{1/2}^2$ and $E_{1/2}^{3}$ carry the full overlap probability with the starting state.
Namely, the cluster de-excites optically from the energy level-C to any of the states $E_{1/2}^1$ to $E_{1/2}^3$ of the energy level-F; this is represented by the dashed down-arrow b in figure \ref{levels}b. The cluster then relaxes thermally to the ground-state energy level-A with $S_T=5/2$ through the energy level-G with $S_T=3/2$. {\bf Scenario III}: (a) After the cluster relaxes from the energy level-B, by losing thermal energy, to level-C with $S_T=1$, it relaxes again to an energy level called D with $S_T=0$. The energy level-D has six states according to the effective Hamiltonian. Table \ref{210} shows the eigenenergies of these states. If the band electron combines with the cluster as spin-up, the cluster de-excites optically from this new energy level-D with $S_T=0$, with the starting state $\vert\uparrow\psi_0^1\rangle$, to the energy level-F with $S_T=1/2$. The energy level-F has ten states. We have calculated the overlaps of these ten states with the starting state $\vert\uparrow\psi_0^1\rangle$; the results are shown in table \ref{3/2}. Our calculations show that the states from $E_{1/2}^4$ to $E_{1/2}^{10}$ have zero overlap with the starting state. The states $E_{1/2}^1$, $E_{1/2}^2$ and $E_{1/2}^3$ carry the full overlap probability with the starting state. Namely, the cluster de-excites optically from the energy level-D with $S_T=0$ to the states $E_{1/2}^1$ to $E_{1/2}^3$ of the energy level-F with $S_T=1/2$; see the dashed down-arrow a in figure \ref{levels}c. Finally the cluster relaxes thermally from the energy level-F to the ground-state energy level-A with $S_T=5/2$ through the energy level-G. (b): Now, if the band electron combines with the cluster as spin-down, the cluster de-excites optically from the energy level-D with $S_T=0$ to a new energy level called E with $S_T=-1/2$, with the new starting state $\vert\downarrow\psi_0^1\rangle$, as seen in figure \ref{levels}c.
Our calculations show that when the band electron combines with the cluster as spin-down while the cluster is in the energy level-D with $S_T=0$, the cluster de-excites optically from the starting state $\vert\downarrow\psi_0^1\rangle$ to the states $E_{-1/2}^1$ to $E_{-1/2}^3$ of the ten states of the energy level-E with $S_T=-1/2$, which carry the full overlap probability with the starting state. This is represented by the dashed down-arrow b in figure \ref{levels}c. The states from $E_{-1/2}^4$ to $E_{-1/2}^{10}$ have zero probability of overlap with the starting state. Finally, the cluster relaxes thermally from level-E through levels F and G to reach the ground-state energy level-A with $S_T=5/2$, as seen in figure \ref{levels}c. \subsection{\label{E_l}The luminescence energy calculation} We calculate the luminescence energy for each scenario as follows.\\ Scenario I: (a) If the band electron combines with the cluster as spin-up, the cluster de-excites from energy level-B directly to the ground-state energy level-A. The luminescence energy $E_l$ for this case is \begin{eqnarray} E_l=E_o&=&E_2-E_{5/2} \nonumber \\ &=&(-4H_{mf}+4J)-(-4H_{mf}-4J'+4J)=4J'. \end{eqnarray} (b) But if the band electron combines with the cluster as spin-down, the cluster first de-excites optically from level-B to the states of energy level-G overlapping with $\vert\downarrow\psi_2\rangle$, and then relaxes by losing phonons to energy level-A; this relaxation has energy difference $\Delta_1=E_{3/2}^{1,2}-E_{5/2}$. The optical energy in this case is, \begin{eqnarray} E_l&=&E_o-\Delta_1 \nonumber \\ &=&-\frac{19}{2}H_{mf}+8J+\frac{17}{4}J'+\frac{1}{4}\sqrt{16J^2+24JJ'+2JJ'^2}. \end{eqnarray} Scenario II: The cluster relaxes by losing phonons to the level-C with $S_T=1$. The energy difference here is $\delta_1=E_2-E_1^1=5H_{mf}-5.5J$, where $E_1^1$ is the lowest eigenstate in level-C, to which the cluster relaxes from level-B.
The cluster then de-excites optically to level-G if the band electron combines with the cluster as spin-up, and to level-F if the band electron combines with the cluster as spin-down. The energy difference between level-B and level-C is $\delta_1$, and that between level-F and level-A is $\Delta_2=E_{1/2}^{1,2,3}-E_{5/2}$. Then, in the first case, \begin{eqnarray} E_l&=&E_o-\delta_1-\Delta_1 \nonumber \\ &=&-\frac{29}{2}H_{mf}+\frac{51}{5}J+\frac{17}{4}J'+\frac{1}{4}\sqrt{16J^2+24JJ'+2JJ'^2}, \end{eqnarray} while in the second case, \begin{equation} E_l=E_o-\delta_1-\Delta_2. \end{equation} Scenario III: In this scenario the cluster relaxes again, from level-C to level-D, by losing phonons. The energy difference between level-B and level-D is $\delta_2=E_2 -E_0^1=6.4J-4H_{mf}$, where $E_0^1$ is the lowest eigenstate in level-D, to which the cluster relaxes. The cluster de-excites optically from this level to level-F if the band electron combines with the cluster as spin-up, and to level-E if the band electron combines with the cluster as spin-down. The cluster then relaxes again by losing phonons until it reaches the ground-state energy level-A. The energy difference between level-F and level-A is $\Delta_2$, and that between level-E and level-A is $\Delta_3=E_{-1/2}^{1,2,3}-E_{5/2}$. The optical energy for scenario III is, in the first case, \begin{equation} E_l=E_o-\delta_2-\Delta_2, \end{equation} and in the second case, \begin{equation} E_l=E_o-\delta_2-\Delta_3. \end{equation} Since $\Delta_1$, $\Delta_2$ and $\Delta_3$ are lengthy expressions, we do not write them out here as functions of the exchange parameters. \section{\label{conclusion}Conclusion} A new physical effect, namely a spin Franck-Condon effect, has been proposed. A simple model of a spin cluster has been defined that shows rich physics.
The energy states of this small spin cluster problem have been investigated and the optical excitation and fluorescence calculated. It was found that in this case the selection rules imposed a very strict limitation on the number of states that could be observed. The physical reason is that the lowest energy state for a given spin of the four-spin cluster is always even with respect to permutations of the four sites. Only one state of the five-spin cluster respects this symmetry, so only one transition is allowed. In all cases we found that the cluster decayed to a unique state in the ground-state manifold, even when, as for a ground state of total spin $S_T=1/2$, there are as many as four energies and ten states corresponding to $S_T=\pm1/2$. Real physical situations will be more complicated. A big simplification here was that the ground state was ferromagnetically aligned and hence its wavefunction was known and non-degenerate. More realistic models would be antiferromagnetic. Also, this was a model built from $S=1/2$ spins, which is again a simplification. An extension to the study of the $e_g$ orbitals of Mn$^{3+}$ in LaMnO$_3$ could be made to extend the work of Allen {\it et al}\cite{Allen99}. It would involve states that were rotated\cite{Deisenhofer03} by $2\pi/3$ and would again be complex. \begin{acknowledgements} This work is funded by the Egyptian High Education Ministry (EHEM). \end{acknowledgements}
\section{Introduction} \label{sec:introduction} The timely assessment of earthquake-induced building damage after an earthquake event is of utmost importance for the effective planning of rescue and remediation actions. Automatic damage assessment based on the analysis of 3D point clouds (e.g., from photogrammetry or laser scanning) can provide fast and objective information on the damage situation within a few hours \citep{ duarte_2020, vetrivel_2018}. As building damage can be of different types and degrees, a detailed assessment of multiple damage grades is required. This enables efficient use and distribution of resources, and supports the evaluation of the structural stability of buildings and of repair measures. The assessment of different damage grades, beyond binary damage detection, is a challenging task. There is a large variety in possible damage characteristics, and the transferability of methods developed for one study site to other geographic regions is limited, as is the transferability to data from other sources, especially for machine learning classifiers \citep{kerle_2020, nex_2019}. An approach for detailed assessment of structural building damage through 3D point cloud classification that is transferable both geographically and with respect to the source and characteristics of the point clouds used for training and evaluation would strongly support damage assessment for earthquake response in urban areas. The use of simulated point clouds for training a classifier might enable sufficient accuracies for damage classification. In the case of an earthquake event, this can save valuable time, as pre-trained classifiers can directly be applied to assess damage in event-specific real-world datasets without time-consuming manual labelling and further training using event-specific data.
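Schematically, the idea of training once on simulated data and applying the pre-trained classifier directly to newly acquired data can be sketched as follows. This is a minimal illustration only: the feature set, class labels and data are stand-ins, not the actual processing chain of the method presented below.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in per-building features derived from simulated point clouds
# (e.g. height change, planarity, roughness); three damage grades 0/1/2.
X_sim = rng.normal(size=(300, 3)) + np.repeat(np.arange(3), 100)[:, None]
y_sim = np.repeat([0, 1, 2], 100)

# Train once on simulated data ...
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_sim, y_sim)

# ... then predict damage grades for features from a newly acquired,
# event-specific point cloud, with no event-specific labelling step.
X_new = rng.normal(size=(5, 3)) + 2
grades = clf.predict(X_new)
```

The key design point, mirrored in this sketch, is that no labels from the new event are needed at prediction time; the simulated training set is meant to cover the expected range of damage patterns in advance.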
In this paper, we therefore present a method of classifying damage grades in a supervised machine learning approach using a random forest classifier that is trained on simulated point cloud data. We assess how the use of generic simulated training data, instead of region-specific building and damage structures, influences the accuracy of identifying damage grades. The presented method contributes to a timely assessment of multi-class structural building damage in an earthquake event. The method can make use of simulated training data covering the full range of expected damage patterns to classify building damage in newly acquired point clouds using UAV-borne laser scanning or photogrammetry. If a high degree of transferability is achieved both geographically and with respect to the source of input point clouds used for training and application, the timely damage information provides great support for local rescue teams, as no additional building labelling is required after the occurrence of a specific earthquake event. \subsection{UAV-based point clouds for damage assessment} New possibilities for structural building damage assessment have opened up in recent years with the increasing availability of UAV-borne remote sensing strategies. UAV-borne laser scanning~(ULS) or UAV-borne photogrammetry-based dense image matching~(DIM) provide 3D point clouds of urban quarters and entire cities within reasonable time frames (several hours). In disaster situations, the deployment of UAVs and even the coordination of fleets is a crucial aspect to support local rescue teams with timely damage information \citep{meyer_2022}. Acquired point cloud data provides full-3D information of the captured scene and complements traditional image-based approaches \citep{stilla_2023, lin_2021, munawar_2021, galareta_2015}. Whereas images, i.e.
photographs, are suited to identify heavy roof damage and full building destruction, a more detailed assessment of damage grades requires a full-3D representation of the building geometry including, e.g., façades \citep{kharroubi_2022, Kohns_et_al_2021a, xu_2021}. Besides this geometric limitation, another challenge of image-based analysis is the strong variability of radiometric properties of objects over time or in different geographic regions \citep{qin_3d_2016}. Where topographic data is available, e.g. through 3D reconstruction using dense image matching, raster-based analysis is limited by the strong vertical component of building elements, which cannot be adequately represented from a 2D top-down view \citep{stilla_2023}. To obtain 3D point clouds, UAV-borne acquisition strategies allow a dense 3D reconstruction of the scene with point spacings down to a few centimetres. This allows change detection on the scale of individual building parts. Such changes are important for identifying damage patterns that are typical for higher damage grades (heavy damage, extreme damage, destruction), which are the target classes of our method (cf.~section~\ref{sec:methods}). The finer-scale geometry of small cracks or spallings typically cannot be resolved in UAV-borne point clouds, and their detection would require complementary higher-resolution acquisition. This can be achieved, e.g., using terrestrial laser scanning or close-range photogrammetry for selected buildings or blocks \citep{xu_2021}, but is not within the scope of our approach of building damage assessment. \subsection{Approaches for multi-class damage classification in 3D point clouds} Structural building damage assessment based on UAV-borne point clouds has so far mainly relied on data acquired after an earthquake event, i.e.~on mono-temporal approaches. These data and approaches lack a priori information in terms of pre-event data on the building structure.
This lack of information can be compensated for with assumptions on the pre-event building shape, for example, to detect missing elements in the post-event point cloud \citep{vetrivel_2015}. This leads to misclassification where these assumptions do not hold true, and thereby limits the applicability and usability of mono-temporal approaches \citep{vetrivel_2016}. Multi-temporal point cloud-based approaches so far have been constrained by the lack of pre-event datasets of earthquake-affected regions, which limited their practical applicability for real earthquake events. With the increasing availability of 3D city models and country-wide acquisitions of medium- to high-resolution 3D point clouds (e.g., through airborne laser scanning), the development of point cloud-based methods for damage assessment through change detection between pre- and post-event point clouds has increased in recent years \citep{deGelis_2021, xu_2021}. Current multi-temporal approaches directly compare a pre-event dataset and a post-event dataset and thereby extract different types of change features, e.g.\ geometric and radiometric ones. Change can also be extracted with machine learning approaches, such as random forests. Such supervised approaches classify building damage based on a set of geometric or histogram-based features (Roynard et al. 2016). In general, multi-temporal approaches can be grouped into the categories of post- and pre-classification. Post-classification approaches first apply a semantic segmentation of the dataset into different object types, and then analyse the target objects with respect to various kinds of changes between two epochs \citep{siddiqui_2017, awrangjeb_2015, xu_2015}. Pre-classification approaches first extract spatial areas of change in the point cloud and then classify these areas according to the extracted change \citep{xu_2015}. There are also studies that combine classification and change detection in one step \citep{tran_2018}.
For binary classification tasks, deep learning approaches today represent the state of the art \citep{deGelis_2021, Ma_2019}. As such, they distinguish between damaged and non-damaged buildings, or between two damage grades with very different damage characteristics \citep{kalantar_2020, nex_2019}. Existing methods of supervised machine learning classification are still challenged, though, by the variety of damage characteristics (from crack widths of a few millimetres and spalling for low damage grades up to partial failure modes and complete collapse for high damage grades), and by the transfer of trained algorithms to unseen data and other geographic regions \citep{huang_2019, vetrivel_2018}. \subsection{Training data generation with virtual laser scanning~(VLS)} A prerequisite for machine learning-based damage classification is the availability of sufficient amounts of labelled training data covering the full range of damage patterns expected in an earthquake event \citep{deGelis_2021}. A lack of suitable training data for the classification task at hand can lead to poor classifier performance when transferred to an unseen dataset or geographic region \citep{munawar_2021, vetrivel_2018}. Training data demands of state-of-the-art machine learning approaches are difficult to meet when multiple damage grades are to be classified \citep{alzubaidi_2021}. In practical use, the resulting inter-class confusion might lead to missed damaged buildings (i.e., damaged buildings classified as undamaged), which are consequently not in the focus of local response teams. If no or insufficient labelled real-world data is available, training and evaluation of machine learning classifiers can benefit from simulated data \citep{deGelis_2021}. Whereas there are no tools available for the simulation of photogrammetric point clouds, multiple tools for the simulation of laser scanning point clouds exist \citep{winiwarter_2022, gastellu_2021, north_2010}.
Virtual laser scanning (VLS) is a powerful tool that provides simulated point clouds with known properties from labelled 3D input scenes \citep{hildebrand_2022}. Thereby, real-world scenarios of laser scanning acquisitions are recreated virtually \citep{winiwarter_2022}. Large amounts of realistic training data can hence be generated by VLS, complementing or replacing real-world training data where the acquisition and labelling of sufficient amounts of real-world data is not feasible. By using VLS, training data can be generated automatically and can cover the full spectrum of relevant damage patterns. Even if it yields lower classification accuracies, training purely on simulated data might provide adequate performance in time-sensitive situations, such as an earthquake event. Pre-trained classifiers can then be directly applied to assess damage in event-specific real-world datasets without further training. Adequate modelling of damage patterns in the input scenes is thereby crucial for the accurate representation of damage in the simulated point clouds. This modelling process can, for example, be supported by domain knowledge in earthquake engineering. A so-called damage catalogue, as developed by \cite{Kohns_et_al_2021a}, categorises typical geometric damage patterns for five different damage grades (slight, moderate, heavy, extreme, destruction) based on the European Macroseismic Scale~98 (EMS-98), following Grünthal (1998). It covers events in different geographic regions within Europe by considering observations of previous earthquake events and the influences of material, building design, and structure on potential damage characteristics. Moreover, the damage catalogue by \cite{Kohns_et_al_2021a} has specifically been designed as a decision basis for UAV-based assessment of structural building damage and therefore focuses on damage patterns of building elements that are recognisable from outside.
Such domain knowledge allows us to adequately represent damage patterns for the target damage grades in the simulated point clouds and, ultimately, to discriminate multiple damage grades in classification tasks. Real-world point clouds of an earthquake-affected area are more commonly derived from UAV-borne photogrammetry-based DIM than from UAV-borne laser scanning, due to the lower costs and wider availability of the instruments. However, photogrammetric point cloud simulation tools are not available for generating training data. Methods for damage assessment using simulated point clouds as training data therefore have to deal with different point cloud sources being used for training and application of the model. This is possible if damage assessment is based on object-specific rather than data-specific features, i.e.\ features robust to different input point clouds. \subsection{Objective} \label{sec:objective} In this research, we automatically classify multi-class structural building damage from multi-temporal real-world photogrammetric point clouds using a random forest model trained on virtual laser scanning data (Figure~\ref{fig:graphical_abstract}). We develop our method to consider the following aspects: \begin{enumerate} \item Damage is assessed per building by deriving the change of geometric features between pre-event and post-event point clouds. We are thereby independent of modelling pre-event building shapes and can derive change through the comparison of multi-temporal point clouds. \item Domain knowledge from earthquake engineering is integrated in the process of training data generation from virtual scenes. Using a descriptive damage catalogue \citep{Kohns_et_al_2021a}, we include knowledge on possible damage patterns for different damage grades and consider regional and building-specific variability. We thereby ensure that our training data covers the full spectrum of damage patterns expected in the real-world dataset.
\item By using virtual laser scanning~(VLS) for the generation of simulated point clouds, labelled building-specific training data with realistic point cloud characteristics is obtained automatically. \item Through the use of object-specific change features, our machine learning model is trained on simulated UAV-borne laser scanning point clouds to classify damage in real-world UAV-borne point clouds derived from photogrammetry-based dense image matching (DIM). The aim is to achieve transferability with respect to the source of input point clouds used for training and application of the model. \end{enumerate} \begin{figure}[] \centering \includegraphics[scale=1.2]{figs/fig1.png} \caption{Overview of the approach to classify building damage in real-world photogrammetric point clouds using a machine learning model trained on simulated laser scanning point clouds.} \label{fig:graphical_abstract} \end{figure} \newpage With our developed approach, we complement existing methods for multi-class structural building damage assessment, especially for applications where timely damage information is required, and suitable and sufficient pre- and post-event real-world training data are not available. To further increase the timeliness of response, we investigate how the use of non-region-specific training data influences the classification accuracy, thereby assessing transferability between geographic regions. \section{Study Site and Data} \label{sec:study_site_data} In this paper, we use UAV-borne DIM point clouds of the city of L'Aquila, Italy, to classify structural building damage with our method. L'Aquila was hit by an earthquake on Monday, April 6, 2009 at 3:32 a.m.~local time in the central Italy region of Abruzzo, with a moment magnitude Mw~=~6.3 \citep{eer_2009, geer_2009}. The affected area is located within the central section of the Apennines.
This mountain chain results from the convergence between the African tectonic plate, to which it belongs, and the European tectonic plate, and the subsequent collision of the two continental margins \citep{geer_2009}. The epicenter was close to L'Aquila, a city with an approximate population of 73,000 inhabitants. As the earthquake occurred when most people were sleeping, over 300 people died and 1,500 were injured \citep{eer_2009}. This event was the strongest of a sequence that started a few months earlier. Fifty-six of the approximately 300 digital strong-motion stations of the Italian Strong Motion Network recorded the main shock, of which five were located within a 10~km radius around the epicenter. The latter all recorded horizontal peak ground accelerations above 0.35~g, and some showed permanent displacements of up to 15~cm. The ground motion had a short duration of 10~s or less \citep{eer_2009}. Up to 15,000~buildings were destroyed or damaged, 70,000~to~80,000 people were temporarily evacuated, and more than 24,000 were left homeless \citep{eer_2009}. Damage occurred in the city of L'Aquila, but more widespread extreme damage was seen in smaller towns such as Onna, Paganica, and Castelnuovo, where more than 50\% of the historic centers were damaged beyond repair. Collapsed and damaged structures in L'Aquila included both masonry buildings and reinforced concrete structures. Many relatively modern reinforced concrete frame structures with masonry infills were not damaged. Where damage occurred, it generally involved minor to relatively severe cracking or collapse of the masonry infill walls. Where heavy damage occurred in older reinforced concrete buildings, it most likely resulted from a lack of ductility and the brittleness of exterior infill walls and interior partition walls \citep{geer_2009}.
Old unreinforced masonry buildings made of mortar and multi-wythe rubble-stone or clay bricks were significantly damaged, ranging from wall cracking to extreme damage and collapse. The damage indicated strong effects of site conditions. High damage levels were seen in villages built at least partly on relatively young sediments, but only slight damage was visible in neighbouring villages on solid rock materials, for buildings with similar material quality and characteristics \citep{eer_2009}. DIM point clouds of the city of L'Aquila were generated based on oblique and nadir RGB images captured before (2008-08-30) and after (2009-04-29) the earthquake event. Point clouds were generated from these images through dense image matching using Agisoft Metashape~(version 1.8.2). The resulting point clouds contain around 72~million and 78~million points with an average point spacing of 0.1~m. We improved the alignment of the pre- and post-event point clouds using an iterative closest point algorithm \citep{besl_mckay_1992} applied to stable areas (streets and stable walls) in the point cloud. We assess the accuracy of the alignment between the point clouds by calculating the standard deviation of M3C2 distances \citep{lague_accurate_2013} in stable areas, following \cite{zahs_2022}. As a result of the nadir perspective of the UAV images acquired to generate this point cloud, horizontal building elements (roofs) were sampled with a higher density (point spacing:~0.09~m; STD:~0.03~m) than vertical elements (point spacing:~0.12~m; STD:~0.04~m). As additional data, we use simulated point clouds from virtual laser scanning as training data for our random forest classifier. The generation of the simulated dataset is described in the methods (section~\ref{sec:methods}).
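The core of the iterative closest point alignment used above \citep{besl_mckay_1992} can be illustrated with a minimal sketch: alternate between nearest-neighbour matching and a closed-form SVD-based estimate of the rigid transform. This is not the production pipeline (which operates on millions of points with kd-tree acceleration and outlier rejection, e.g.\ in a library implementation); all function names are illustrative.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Closed-form least-squares rigid transform (R, t) mapping the
    matched points src onto dst (SVD/Kabsch solution)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, n_iter=30):
    """Minimal point-to-point ICP with brute-force nearest neighbours,
    feasible only for small stable-area subsets."""
    cur = src.copy()
    for _ in range(n_iter):
        # nearest dst point for every current point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]
        R, t = best_fit_transform(cur, matches)
        cur = cur @ R.T + t
    return cur
```

Given a small residual misalignment, such a loop converges to the rigid transform that best matches the stable areas of the two epochs.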
\begin{figure}[] \centering \includegraphics[]{figs/fig2.png} \caption{Pre-event and post-event building point clouds and corresponding UAV images representing the four target damage grades (a) no damage, (b) heavy damage, (c) extreme damage, and (d)~destruction.} \label{fig:damages_laquila} \end{figure} \clearpage \section{Methods} \label{sec:methods} In our study, we classify multiple grades of structural building damage in a real-world UAV-borne DIM point cloud using a machine learning model trained on virtual UAV-borne laser scanning point clouds (Fig.~\ref{fig:Figure1}). Following domain knowledge in earthquake engineering (cf. section~\ref{sec:methods_scenes}), we consider four damage grades in our approach (Fig.~\ref{fig:Figure2}): no damage, heavy damage, extreme damage, and destruction. The classes of slight and moderate damage are not considered, as the geometric representation of their typical damage patterns (e.g., crack widths of a few millimetres) in the point clouds requires a higher spatial data resolution, i.e.\ more detailed data than typically available from UAV acquisitions.
Our approach consists of five main steps: \begin{itemize} \item Simulation of pre- and post-event point clouds through virtual UAV-borne laser scanning of virtual scenes \item Coarse identification of changed and unchanged building parts using a k-means clustering approach \item Extraction of robust object-specific change features \item Training of a random forest machine learning model with simulated point clouds and object-specific change features \item Random forest classification of multi-class building damage in real-world pre- and post-event DIM point clouds \end{itemize} \begin{figure}[] \centering \includegraphics[]{figs/fig3.png} \caption{Full workflow of the approach including (1)~the generation of simulated and real-world training data, (2)~coarse identification of changed and unchanged building parts, (3)~computation of object-specific geometric change features, (4)~training of multi-target random forest classifiers, (5)~classification of multi-class building damage in a real-world UAV-borne photogrammetric dataset, and (6)~evaluation of classifier performances.} \label{fig:Figure1} \end{figure} \begin{figure}[] \centering \includegraphics[]{figs/fig4.png} \caption{Example 3D building model representing the four target damage grades (a)~no damage, (b)~heavy damage, (c)~extreme damage, and (d)~destruction considered in our classification.} \label{fig:Figure2} \end{figure} We evaluate the performance of classifiers trained on (1)~simulated generic ULS point clouds, (2)~simulated region-specific ULS point clouds, (3)~simulated generic ULS point clouds and real-world region-specific DIM point clouds, and (4)~real-world region-specific DIM point clouds with respect to their capability to accurately classify building damage in a real-world DIM dataset.
Evaluation is based on a reference dataset derived from the real-world DIM point clouds of the L'Aquila earthquake in 2009 (cf.~section~\ref{sec:study_site_data}) with the help of an earthquake engineering expert. A set of evaluation metrics (overall accuracy, precision, recall, F1 score) is used to assess the performance for each target damage grade separately as a binary case, as well as the overall capacity of the classifiers to correctly separate buildings of any damage grade from undamaged buildings. \subsection{Generation of real-world training and evaluation data} \label{sec:methods-DIM} The DIM dataset of L'Aquila is used both to generate real-world training data and to evaluate the performance of all classifiers with respect to their ability to assess structural building damage in real-world DIM point clouds. We split the real-world dataset into a training dataset and an evaluation dataset, and use the training dataset exclusively for the generation of real-world training data. Areas of individual buildings in the dataset are manually segmented and labelled by an expert in earthquake engineering, using the damage catalogue developed in \cite{Kohns_et_al_2021a} to identify damage patterns typical for the target damage grades. The training dataset consists of 112 labelled building models per damage grade. The evaluation dataset consists of 125~labelled buildings in total (35~no damage, 19~heavy damage, 32~extreme damage, 35~destruction). The uneven distribution of target damage grades in the evaluation dataset results from the uneven number of buildings per damage grade that could be confidently assessed in the manual expert-based labelling. For each damage grade considered in this study, Figure~\ref{fig:damages_laquila} shows exemplary building point clouds extracted from the real-world dataset of L'Aquila for the purpose of training the machine learning model.
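The per-grade evaluation metrics named above, computed in a one-vs-rest fashion, can be sketched as follows (a minimal illustration with integer class labels; the actual evaluation could equally use a library such as scikit-learn):

```python
import numpy as np

def binary_metrics(y_true, y_pred, positive):
    """Precision, recall and F1 for one damage grade treated as the
    positive class against all other grades (one-vs-rest)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def overall_accuracy(y_true, y_pred):
    """Share of buildings assigned their correct damage grade."""
    return np.mean(np.asarray(y_true) == np.asarray(y_pred))
```

Reporting precision and recall per damage grade makes inter-class confusion visible, in particular damaged buildings classified as undamaged.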
Damage patterns in the training samples cover all patterns typical for each of the damage grades and include (1)~total collapse (destruction), (2)~partial collapse of stories, fa\c{c}ades, and roofs (extreme damage), (3)~large holes distributed across all building elements (heavy damage), and (4)~no damage. Buildings used for training within the entire real-world dataset are excluded from the evaluation dataset. Some buildings could not be included in the evaluation dataset due to difficulties or uncertainties in the recognition of damage patterns by the expert. This is mostly related to occlusion of large parts of the building by vegetation or other scene objects. \subsection{Generation of simulated training data} \label{sec:methods-VLS} The generation of simulated training data consists of two steps: \begin{enumerate} \item Preparation of virtual scenes of 3D building models with various damage patterns, labelled with the damage grade \item Simulation of point clouds through virtual laser scanning of these virtual scenes \end{enumerate} \subsubsection{Preparation of virtual scenes} \label{sec:methods_scenes} The 3D building models used to assemble the virtual scenes are taken from open-source online repositories for 3D models \citep{free3d_2023}. Further buildings are generated manually in the free and open-source 3D creation suite Blender \citep[version 2.93.0]{blender_2018}. The set of originally 28 different building models is augmented by applying modifications to building size or to parts of the buildings. This results in a total number of 448 buildings, composed of 112 buildings per damage grade.
To investigate the importance of region-specific training data, two types of virtual scenes in pre-event and post-event states are generated: \begin{enumerate} \item Region-specific scene: This scene mimics the characteristics of the real-world scene of this study (L'Aquila) with respect to the building types and construction materials typical for this region (cf.~section~\ref{sec:study_site_data}). It also exhibits the main characteristics of the urban structure in the assembly of the individual 3D building models in the scene. Damage patterns implemented in the post-event point cloud are typical damage patterns for this geographic location (cf.~section~\ref{sec:study_site_data}). \item Generic scene: This scene contains a broader range of building types (single-family houses up to large apartment buildings), construction materials (masonry and reinforced concrete), and built structures (both loose and narrow development), all of which typically occur in small to medium-sized European cities. Consequently, the post-event state of this scene contains a greater variety of damage patterns. \end{enumerate} Each type of scene consists of 112 individual undamaged buildings. The three post-event damaged scenes (heavy damage, extreme damage, destruction) are generated based on their corresponding pre-event scenes. Therein, we introduce damage representative of the respective damage grade to each building in duplicates of each pre-event scene. This provides the data basis for a direct comparison of pre-event and post-event scenes to extract change features and to classify structural building damage in a later step. Structural building damage is modelled into the buildings based on the damage catalogue developed by \cite{Kohns_et_al_2021a}. It defines distinct geometric properties typical for our target damage grades, which we use to discriminate them clearly in the classification.
For each damage grade considered in our study (no damage, heavy damage, extreme damage, destruction), we manually introduce the typical damage patterns into the 3D building models of the virtual post-event scenes. We ensure the realism of the introduced damage patterns and the correct labelling of each building with respect to its damage grade in a visual evaluation by the earthquake engineering expert. Following the damage catalogue \citep{kohns_2022, Kohns_et_al_2021a}, we focus on the two predominant construction materials in Europe, i.e.\ masonry and reinforced concrete. We do not consider further building materials, such as wood or steel, for the following reasons: Wood is flexible and able to dissipate large amounts of energy through its timber joints. Steel has a ductile material behaviour, and only the connection points, which are difficult to assess from outside, are relevant in case of an earthquake event. In contrast to wood and steel, masonry and reinforced concrete show distinct differences between damage grades. The related damage patterns are visible from the outside, which renders UAV-based damage assessment of entire quarters possible. \subsubsection{Simulated point cloud generation using virtual laser scanning~(VLS)} \label{sec:methods-vls} We perform virtual laser scanning of the generated scenes using the open-source tool HELIOS++ \citep{winiwarter_2022}. HELIOS++ is a general-purpose ray-tracing-based simulation framework with support for multiple platforms, sensors, and scene types that can be flexibly combined in a modular manner. Acquisition parameters (Tab.~\ref{tab:datasets_overview}) for our simulations are selected in accordance with the point cloud characteristics of the real-world dataset (cf.~section~\ref{sec:methods-DIM}). Our goal is to achieve point densities similar to the real-world data in all simulated point clouds.
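As a back-of-envelope check that the chosen acquisition parameters (Tab.~\ref{tab:datasets_overview}) reproduce the roughly 0.1~m point spacing of the real-world dataset, the nominal along-track and across-track spacings can be estimated as follows. This assumes a simple linear scan pattern evaluated at nadir; the simulator's actual sampling differs in detail.

```python
import math

# Acquisition parameters from the simulation setup
scan_rate = 89       # scan lines per second
pulse_rate = 300e3   # pulse repetition rate [Hz]
fov_deg = 120        # field of view [deg]
altitude = 100       # flight altitude above ground [m]
speed = 8            # flight speed [m/s]

# Spacing between consecutive scan lines along the flight direction
along_track = speed / scan_rate                      # ~0.09 m

# Spacing between pulses within one scan line, evaluated at nadir
pulses_per_line = pulse_rate / scan_rate             # ~3370 pulses per line
ang_step = math.radians(fov_deg) / pulses_per_line   # angular step [rad]
across_track = altitude * ang_step                   # ~0.06 m at nadir

print(f"{along_track:.3f} m along track, {across_track:.3f} m across track")
```

Both values are on the order of the 0.1~m average point spacing of the real-world DIM point clouds, which is the intent of the parameter choice.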
The influence of different acquisition parameters between pre- and post-event acquisitions on the geometric representation of a building, and consequently on the classification performance, is assessed, for example, by \citet{deGelis_2021} and is not in the focus of our study. \begin{table}[] \small \centering \caption{Acquisition parameters used for the simulation of UAV-borne laser scanning in HELIOS++.} \vspace{12pt} \begin{tabular}{cccccc} \textbf{\begin{tabular}[c]{@{}c@{}}Scan rate \\ {[}lines/s{]}\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}} Pulse \\ repetition rate \\ {[}kHz{]}\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Strip overlap \\ {[}\%{]}\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Field of \\view \\ {[}deg.{]}\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Flight altitude \\ {[}m AGL{]}\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Flight speed \\ {[}m/s{]}\end{tabular}} \\ \hline 89 & 300 & 60 & 120 & 100 & 8 \end{tabular} \label{tab:datasets_overview} \end{table} As input for each simulation, we specify one virtual scene. The damage label and unique ID annotated to each building are passed through the simulation process and stored with the output VLS point cloud. As output of the simulation, we obtain pre- and post-event region-specific and generic ULS point clouds with per-building damage grades as class labels in the post-event point clouds. We segment individual building point clouds based on the building ID of the virtual scene stored with each point in the output point cloud of the simulation. \subsection{Object-specific change feature selection} \label{sec:methods-features} We assess structural building damage through change analysis of geometric features between point clouds of pre- and post-event epochs.
As we evaluate the use of different sources of input point clouds for training (VLS) and for classification (real-world DIM), we investigate the transfer of change features to different sources of input point clouds in an experimental study of real-world ULS and real-world DIM point clouds. We use the experiment to identify object-specific change features that are robust to variable properties of input point clouds from different sources (laser scanning and photogrammetry). We consider a geometric change feature (e.g., change in curvature between two epochs) as robust if the relative difference of the change of this feature between two epochs is low between laser scanning and DIM point clouds. The absolute value of a geometric feature in one epoch can differ to a larger extent between a ULS and a DIM point cloud, as different close-range remote sensing techniques provide data with different inherent properties and represent objects in different ways \citep{mandlburger_2017}. Object-specific change features are identified in point clouds of building damage acquired via ULS and DIM, which allows us to directly assess the robustness of geometric features to real-world ULS and DIM point clouds as input. We use real-world point clouds acquired of a building at three epochs during a demolition process. Pre-event ULS and DIM point clouds were captured before the demolition, and post-event~1 and post-event~2 ULS and DIM point clouds were captured at two stages of the demolition process (Figure~\ref{fig:inf288}). To make the results of this experiment applicable to the real-world and simulated datasets used in this study, we subsample the demolition point clouds to match the point density of the L'Aquila dataset and the simulated point clouds (0.1~m point spacing, STD:~0.04~m). Similar to the L'Aquila dataset, point densities are higher on the roofs (96~pts./m$^2$) than on the fa\c{c}ades (66~pts./m$^2$). Geometric features are computed per point within a certain local neighbourhood radius.
The change of a feature between pre- and post-event epochs is then obtained by computing the difference between the feature value of a point in the pre-event epoch and the feature value of its closest point in the post-event epoch, using a kd-tree-based nearest neighbour search. We investigate the following hand-crafted features, which are commonly used in classical machine learning approaches for building damage assessment \citep{deGelis_2021, tran_2018}: linearity, planarity, omnivariance, surface variation, curvature, point density, number of neighbours, surface density, volume density, sphericity, verticality, eigenentropy, anisotropy, eigenvalues, sum of eigenvalues, roughness, z rank (relative position of the feature point within its vertical neighbourhood), z range (highest minus lowest z value), normal vector, and echo ratio \citep{hoefle_2009}. These features are computed for local neighbourhood radii in a range of 1.0~m to 4.0~m at a step width of 0.5~m, according to the point density of the dataset. We finally select the neighbourhood size for which the relative change of the features is most similar between laser scanning and photogrammetry-based analyses. Therein, we consider only those geometric features as input for damage classification which show up to 10\% difference in relative change between ULS and DIM point clouds of two epochs. The threshold of $\leq$~10\% is selected as the number of features is most stable between 10\% and 15\% difference (Figure~\ref{fig:pareto}).
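To illustrate the bitemporal feature comparison, the following sketch computes one common covariance-based feature set (linearity, planarity, sphericity) from the local neighbourhood and differences the planarity against the nearest point of the other epoch. The brute-force neighbour search stands in for the kd-tree used at realistic point counts; all names are illustrative.

```python
import numpy as np

def eigen_features(points, center, radius):
    """Eigenvalue-based geometric features of the spherical
    neighbourhood of `center` within `radius`."""
    nn = points[np.linalg.norm(points - center, axis=1) <= radius]
    if len(nn) < 3:
        return {"linearity": 0.0, "planarity": 0.0, "sphericity": 0.0}
    ev = np.linalg.eigvalsh(np.cov(nn.T))[::-1]   # l1 >= l2 >= l3
    l1, l2, l3 = np.maximum(ev, 1e-12)            # guard against round-off
    return {"linearity": (l1 - l2) / l1,
            "planarity": (l2 - l3) / l1,
            "sphericity": l3 / l1}

def planarity_change(pre, post, radius):
    """Per-point change of planarity between epochs: value at a
    pre-event point minus the value at its nearest post-event point."""
    d2 = ((pre[:, None, :] - post[None, :, :]) ** 2).sum(-1)
    nearest = post[d2.argmin(axis=1)]             # closest post-event points
    return np.array([
        eigen_features(pre, p, radius)["planarity"]
        - eigen_features(post, q, radius)["planarity"]
        for p, q in zip(pre, nearest)
    ])
```

An intact planar wall scores high in planarity while a collapsed, rubble-like surface does not, so a large positive planarity change flags damaged building parts.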
\begin{figure}[] \centering \includegraphics{figs/fig5.png} \caption{(a)-(c) Real-world UAV-borne photogrammetric point clouds and (d)-(f) real-world UAV-borne laser scanning point clouds acquired of a building before demolition (a,~d) and at two different demolition stages (b,~c,~e,~f).} \label{fig:inf288} \end{figure} \begin{figure}[] \centering \includegraphics{figs/fig6.png} \caption{Number of geometric change features depending on the threshold used as the allowed difference in mean geometric change between UAV-borne laser scanning (ULS) and UAV-borne photogrammetry-based dense image matching (DIM) point clouds of two epochs in the experimental study.} \label{fig:pareto} \end{figure} \subsection{Coarse extraction of changed building parts} \label{sec:clustering} Before the actual classification of damage grades, we coarsely filter out non-damaged parts using k-means clustering on each building point cloud. This is done because even for heavy or extreme damage, larger parts of the building point cloud can remain unchanged. Hence, geometric change following typical damage patterns occurs only in local parts of a building. Descriptive statistical values (e.g.,~mean or median change of a geometric feature) per building are then not suitable to represent the actual degree of damage, as shown by example for a building with heavy damage in Figure~\ref{fig:clustering_building}. A supervised machine learning model will consequently fail to correctly classify such building damage. \begin{figure}[] \centering \includegraphics{figs/fig7.png} \caption{(a)-(b) 3D building model and derived point cloud of a building in a pre-event and post-event state. (c)-(d) Histograms of change in roughness and curvature between the pre-event and post-event state. Small building parts exhibiting change (1~and~2) result in a multi-modal distribution of change values, which cannot be adequately described by descriptive statistical values derived for the whole building.} \label{fig:clustering_building} \end{figure} With our target damage grades, we can easily separate changed and unchanged building points in feature space using change in curvature (to identify holes and large cracks) and change in height (to identify collapsed roofs and storeys). We use clustering to separate all building points into two clusters (changed~/~unchanged), of which only the points of the changed cluster are used for the classification of damage grades. Buildings with no changed points are directly considered undamaged. To reflect the share of damaged area of a building after filtering, we include the share of damaged points as an additional feature in the classification, derived as the percentage of clustered changed points relative to all points of a building. \subsection{Classification of building damage grades} \label{sec:methods-clf} To classify structural building damage, we apply supervised classification using the object-specific geometric change features of the changed points per building. We use a random forest classifier, which is suitable for our application as its decision trees are largely uncorrelated due to the high variation among trees, and it does not require large amounts of training data \citep{breiman_random_2001}. To investigate the influence of using region-specific and real-world training data for the classification of structural building damage in real-world point clouds, we train multiple random forest classifiers with different input training data (cf. sections~\ref{sec:methods-DIM} and \ref{sec:methods-VLS}): 1)~Simulated generic VLS data (VLS~generic), 2)~Simulated region-specific data (VLS~region-specific), 3)~Simulated generic VLS data and real-world region-specific DIM data (VLS~generic~+~real-world DIM), and
4)~Real-world region-specific DIM data (real-world~DIM). Each classifier is trained and tested using the 448 labelled damaged and undamaged buildings of the respective dataset, with an equal number of 112 buildings per damage grade. In the selection of the buildings used for training, we ensure that the full range of damage patterns typical for the respective damage grade is included in the training dataset. For each classifier, the building objects of the entire training dataset are randomly split into 70\% training data and 30\% testing data to evaluate the accuracy of the trained classifier. The random forest classifier is trained with a set of 100 trees and a maximum depth of 5, which provides adequate capacity for this classification task. All four trained classifiers are finally applied to the real-world DIM dataset. \subsection{Evaluation of classifier performances} \label{sec:methods-clf_eval} The performance of the classifiers with respect to their capability of accurately classifying structural building damage in a real-world DIM point cloud is assessed using the labelled evaluation dataset. The major focus of the evaluation is on (1)~the transferability with respect to multi-source input point clouds used for training and application of the model, and (2)~the geographic transferability of models trained on generic simulated data for the classification of a real-world dataset. We evaluate the performance for each target damage grade separately as a binary case, as the correct discrimination of multiple damage grades is of great relevance to our study. Further, we evaluate the overall capacity of the classifiers to correctly separate buildings of any degree of damage from undamaged buildings. We use overall accuracy, precision, recall, and F1 score as classification metrics to assess the classifiers' performances.
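As a minimal illustration of the pipeline described above (not the authors' implementation; the per-building feature matrix and all variable names are placeholders), the coarse k-means filtering and the random forest setup could be sketched with scikit-learn as follows:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def changed_mask(change_height, change_curvature):
    """Coarse k-means filtering: split all points of a building into a
    'changed' and an 'unchanged' cluster in the 2D space of change in
    height and change in curvature, and return a mask of changed points."""
    X = np.column_stack([change_height, change_curvature])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    # the cluster with the larger mean absolute change is taken as 'changed'
    magnitude = [np.abs(X[labels == k]).mean() for k in (0, 1)]
    return labels == int(np.argmax(magnitude))

# Hypothetical per-building training set: rows hold aggregated change
# features of the changed points (incl. the share of changed points);
# labels encode the damage grade (0 = no damage ... 3 = destruction).
rng = np.random.default_rng(0)
y = np.repeat(np.arange(4), 112)                       # 112 buildings per grade
X = rng.normal(size=(448, 11)) + y[:, None]            # synthetic stand-in data

# 70 % training / 30 % testing split, as in the text
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, max_depth=5,
                             random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

In practice, four such classifiers would be trained, one per input training dataset, and each applied to the real-world DIM evaluation data.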
\clearpage \section{Results} \label{sec:results} \subsection{Generation of simulated training data} \label{sec:simulated_training_data} The two types of virtual scenes (generic and region-specific) are shown in Figure~\ref{fig:virtual_scenes} in pre-event and post-event states, along with exemplary damage patterns of various 3D building models with different damage grades. All buildings of the pre-event scene are modified to represent damaged buildings of the respective damage grades in the post-event states. \begin{figure}[] \centering \includegraphics{figs/fig8.png} \caption{Examples of 3D building models of a generic scene in (a)~the pre-event state (no damage) and (b)--(d) post-event states of the target damage grades (b)~heavy damage, (c)~extreme damage, and (d)~destruction.} \label{fig:virtual_scenes} \end{figure} Point clouds of the region-specific and generic virtual scenes are shown in Figure~\ref{fig:simulated_vs_realworld} as zoom-ins to individual buildings. As a result of the downward-looking perspective of the UAV-borne acquisition, buildings feature higher point densities on their horizontal elements (roofs; mean point spacing: 0.07~m, STD:~0.03~m) than on their vertical elements (fa\c{c}ades; mean point spacing: 0.11~m, STD:~0.04~m). Figure~\ref{fig:simulated_vs_realworld} also compares real-world and simulated point clouds of buildings with similar damage patterns to demonstrate similarities and differences in the geometric representation of damage. It is clearly visible that the spatial sampling of the buildings differs due to the different acquisition strategies, resulting in different point cloud characteristics. The geometric representation of damage patterns is, however, not considerably affected by these differences, as for higher damage grades the change in geometry occurs on a larger spatial scale than the differences in sampling, owing to the overall dense sampling of a building.
We can therefore expect that the extracted geometric change between two epochs is of the same order of magnitude in both simulated laser scanning and real-world DIM point clouds. \begin{figure}[] \centering \includegraphics{figs/fig9.png} \caption{Real-world photogrammetric and simulated virtual laser scanning point clouds for the target damage grades (a) no damage, (b) heavy damage, (c) extreme damage, and (d) destruction.} \label{fig:simulated_vs_realworld} \end{figure} \subsection{Object-specific change feature selection} The difference in relative change of geometric features between real-world ULS and DIM point clouds is visualised in Figure~\ref{fig:change_features_search_rad}. Features with low differences are considered robust object-specific features, which are suitable for classifying building damage in both ULS and DIM point clouds (cf. section~\ref{sec:methods-features}). In this study we consider features with less than~10\% difference between ULS and DIM point clouds, as this has been shown to provide a good compromise between the similarity of geometric change between ULS and DIM point clouds and the number of change features available for damage assessment. The finally selected robust features are: planarity, surface variation, point density, number of neighbours, surface density, volume density, roughness, z rank, z range, and the normal vector. The remaining change features (cf.~section~\ref{sec:methods-features}) show strong ($\geq$~15\%) differences between ULS and DIM point clouds and are therefore not considered suitable for our approach. \begin{figure}[] \centering \includegraphics{figs/fig10.png} \caption{Robust object-specific change features which show less than~10\% difference between ULS and DIM point clouds of two epochs in our experimental investigation. Red boxes indicate the local neighbourhood size for which differences are lowest. These features provide a good compromise between the similarity of geometric change between both sources of input point clouds and the number of change features available for damage assessment, and are therefore considered for the random forest-based damage classification.} \label{fig:change_features_search_rad} \end{figure} \begin{figure}[] \centering \includegraphics[scale=0.8]{figs/fig11.png} \caption{(a)~Pre-event and (b)~post-event UAV-borne laser scanning (ULS) point clouds of the experimental building and (c)~changed (red) and unchanged (blue) building parts classified by k-means clustering. Changed building parts are the input for the extraction of object-specific change features for the random forest classifier to assess different damage grades.} \label{fig:clustering} \end{figure} \subsection{Coarse extraction of changed building parts} \label{sec:results_clustering} Results of the k-means clustering to separate changed from unchanged building points are shown by example in Figure~\ref{fig:clustering}. This confirms that clustering based on change in height and curvature performs well and is therefore appropriate for filtering changed building points prior to classification. \subsection{Evaluation of classifier performances} \label{sec:results_clf_eval} The results of all metrics used to quantitatively evaluate the trained models are given in Table~\ref{tab:evaluation_clf} and Figure~\ref{fig:clf_combined}. When applied to the real-world evaluation dataset, the VLS~generic classifier (Figure~\ref{fig:clf_combined}(a)) yields the highest classification accuracy for no damage (overall accuracy:~95.1\%; F1 score:~91.7\%), followed by destruction (overall accuracy:~93.6\%; F1 score:~88.9\%), extreme damage (overall accuracy:~92.0\%; F1 score:~83.3\%), and heavy damage (overall accuracy:~92.8\%; F1 score:~78.1\%).
The VLS region-specific classifier (Figure~\ref{fig:clf_combined}(b)) performs best for the assessment of destruction (overall accuracy:~92.0\%; F1 score:~86.5\%), followed by extreme damage (overall accuracy:~92.0\%; F1 score:~83.3\%), heavy damage (overall accuracy:~92.0\%; F1 score:~76.9\%), and no damage (overall accuracy:~92.0\%; F1 score:~76.2\%). The VLS~generic + real-world DIM classifier (Figure~\ref{fig:clf_combined}(c)) achieves its best performance in the classification of no damage (overall accuracy:~94.4\%; F1 score:~90.1\%), followed by destruction (overall accuracy:~92.0\%; F1 score:~86.1\%), extreme damage (overall accuracy:~92.0\%; F1 score:~83.9\%), and heavy damage (overall accuracy:~91.2\%; F1 score:~73.1\%). The real-world DIM classifier (Figure~\ref{fig:clf_combined}(d)) yields the best classification results for no damage (overall accuracy:~96.8\%; F1 score:~94.6\%), followed by destruction (overall accuracy:~93.6\%; F1 score:~89.2\%), heavy damage (overall accuracy:~93.6\%; F1 score:~79.0\%), and extreme damage (overall accuracy:~92.0\%; F1 score:~83.9\%). \begin{table}[] \small \centering \caption{Accuracy measures for the trained random forest classifiers (VLS~generic, VLS~region-specific, VLS~generic + real-world~DIM, and real-world DIM) for multi-class damage classification of 125~buildings in the real-world~DIM dataset of L'Aquila, Italy. The performance is evaluated for each target damage grade separately as a binary case.
Moreover, the overall capacity of the classifiers to correctly separate damaged buildings (any degree of damage) from undamaged buildings is evaluated.}
\begin{tabular}{lccccc}
 & \textbf{\begin{tabular}[c]{@{}c@{}}All damage\\ grades\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}No \\ damage\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Heavy \\ damage\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Extreme\\ damage\end{tabular}} & \textbf{Destruction} \\ \hline
\textbf{VLS generic} & & & & & \\
Overall accuracy & \textbf{95.12} & \textbf{95.12} & \textbf{92.80} & \textbf{92.00} & \textbf{93.60} \\
Precision & 100 & 84.62 & 72.73 & 78.13 & 91.43 \\
Recall & 84.62 & 100 & 84.21 & 89.29 & 86.49 \\
F1 score & \textbf{91.67} & \textbf{91.67} & \textbf{78.05} & \textbf{83.33} & \textbf{88.89} \\ \hline
\textbf{VLS region-specific} & & & & & \\
Overall accuracy & 94.40 & 92.00 & 92.00 & 92.00 & 92.00 \\
Precision & 82.05 & 84.21 & 69.57 & 78.13 & 82.05 \\
Recall & 100 & 69.57 & 84.21 & 89.29 & 91.43 \\
F1 score & 90.14 & 76.19 & 76.91 & 83.33 & 86.49 \\ \hline
\textbf{VLS generic + real-world DIM} & & & & & \\
Overall accuracy & 94.40 & 94.40 & 91.20 & 92.00 & 92.00 \\
Precision & 82.05 & 100 & 68.18 & 81.25 & 83.78 \\
Recall & 100 & 82.05 & 78.95 & 86.67 & 88.57 \\
F1 score & 90.14 & 90.14 & 73.17 & 83.87 & 86.11 \\ \hline
\textbf{Real-world DIM} & & & & & \\
Overall accuracy & \textbf{96.80} & \textbf{96.80} & \textbf{93.60} & \textbf{92.00} & \textbf{93.60} \\
Precision & 100 & 89.74 & 78.95 & 81.25 & 84.62 \\
Recall & 89.74 & 100 & 78.95 & 86.67 & 94.29 \\
F1 score & \textbf{94.59} & \textbf{94.59} & \textbf{78.95} & \textbf{83.87} & \textbf{89.19}
\end{tabular}
\label{tab:evaluation_clf}
\end{table}
Inter-class confusions for all classifiers mainly occur between neighbouring damage grades. For example, buildings with extreme damage are misclassified as heavy damage or destruction.
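The per-damage-grade binary evaluation and the confusion analysis underlying the table can be sketched as follows (a hypothetical helper assuming integer-encoded grade labels; not the authors' evaluation code):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

GRADES = ["no damage", "heavy damage", "extreme damage", "destruction"]

def per_grade_binary_report(y_true, y_pred):
    """Evaluate each target damage grade separately as a binary
    'grade vs. rest' case and return the four metrics per grade,
    together with the full multi-class confusion matrix."""
    report = {}
    for g, name in enumerate(GRADES):
        t, p = y_true == g, y_pred == g
        report[name] = {
            "overall accuracy": accuracy_score(t, p),
            "precision": precision_score(t, p, zero_division=0),
            "recall": recall_score(t, p, zero_division=0),
            "F1 score": f1_score(t, p, zero_division=0),
        }
    return report, confusion_matrix(y_true, y_pred, labels=range(4))
```

Separating damaged from undamaged buildings (the "all damage grades" column) corresponds to the same binary evaluation with the no-damage class as the negative case.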
The only confusions between non-neighbouring classes occur, for all classifiers, between buildings with no damage and heavy damage, which are markedly different degrees of damage. For one of these misclassified buildings, a visual inspection of the point clouds reveals occlusion effects in the area of damage, which result from the narrower built structure in the region-specific simulated scene. For the other buildings, close inspection does not show occlusion effects; however, the class probabilities of no damage and heavy damage are similarly high in these cases, which suggests that the damage patterns of these buildings are not significantly different from no damage. Using generic simulated training data yields good classification results for all target classes, with overall classification accuracies between 92.0\% and 95.1\% and F1 scores ranging between 78.1\% and 91.7\%. Using region-specific simulated instead of generic simulated training data does not strongly reduce inter-class confusion or increase the completeness of detected damaged buildings (+3\%), nor does the use of real-world region-specific training data (+6\% increase in completeness of detected damaged buildings). This implies that our model trained purely on generic simulated data has a high transferability to unseen regions and that the benefit of adding site-specific real-world training data is low for the classification task at hand. The performance of the VLS generic classifier in detecting damage, i.e.\ the binary separation of damaged from undamaged buildings, is strong, with a recall of~84.6\%. Using real-world region-specific training data achieves only a slightly higher recall~(89.7\%). Hence, simulated or real-world region-specific training data does not considerably increase classification accuracies, neither with respect to the classification of multiple damage grades nor with respect to the detection of damaged buildings.
We attribute this to the fact that, for the damage grades considered in our study, damage patterns do not vary considerably across different building types and built structures. Change features learned from the generic simulated training dataset hence generalise well enough to be used for damage assessment in datasets with different site characteristics. These results support our hypothesis that transferring a supervised machine learning model trained purely on simulated, non-region-specific training data to an unseen real-world dataset with different site characteristics is a valid approach for the use case of timely damage assessment in earthquake response. Although the degree of damage does not increase linearly but rather exponentially with the damage grade, inter-class confusion for all classifiers mostly occurs between neighbouring damage grades and not between damage grades far apart. We therefore assume that the damage catalogue, as a descriptive framework, does not relate exclusively to geometric change in the point cloud. Consequently, the model might learn certain geometric representations of damage grades differently from how expert analysts would categorise them. Moreover, except for one building, all damaged buildings are correctly detected as damaged. This is an important result, as applications can rely on the method to identify damaged buildings with high completeness. The exact degree of damage may be less essential in this first step, where it may already be useful to derive a tendency towards a rather high or low degree of damage.
\begin{figure}[] \centering \includegraphics[scale=0.9]{figs/fig13.png} \caption{Confusion matrices for the trained random forest classifiers (a) VLS generic, (b) VLS region-specific, (c) VLS generic + real-world DIM, and (d) real-world DIM for multi-class damage classification of 125 buildings in the real-world photogrammetric dataset of L'Aquila, Italy.} \label{fig:clf_combined} \end{figure} \section{Discussion} \label{sec:discussion} \subsection{Transferability of the method with respect to the source of input point clouds} Our approach achieves transferability across input point clouds from different sources (laser scanning and photogrammetry) for training and application of the random forest classifier. More specifically, we are able to transfer the classifier trained on simulated UAV-borne laser scanning point clouds to classify damage in real-world UAV-borne DIM point clouds. We achieve this through the identification of object-specific geometric change features, which prove to be representative of the investigated damage grades both when using ULS and when using DIM point clouds. Handling multi-source point clouds is especially relevant when training models with simulated data, due to the lack of available tools to simulate photogrammetric point clouds. Therefore, training with simulated data has to be based on simulated laser scanning point clouds, whereas the application of the model might be based on DIM point clouds. Our approach could be extended to consider multi-modal data also with respect to the pre-event and post-event input point clouds used to derive change between two epochs, as performed in \cite{dai_2020}. In practice, point cloud data of built areas is often available from different sensor systems and acquisition strategies, such as ours, or may be generated from existing 3D city models.
Consequently, the confident assessment of structural building damage requires methods and strategies which can extract representative change features from heterogeneous pre-event and post-event point clouds \citep{zhang_et_al_2021}. \subsection{Geographic transferability of the method} A further important strength of our approach is the geographic transferability of a trained classifier to datasets from unseen regions. We found that the random forest classifier trained on generic (i.e., not region-specific) simulated building point clouds achieves high classification accuracies in the real-world dataset, comparable to those achieved in studies training on region-specific inputs, such as \cite{deGelis_2021}. Using simulated region-specific building point clouds with damage characteristics more similar to the real-world building point clouds does not strongly improve the classification results (increase in overall accuracy: $<$~2\%; increase in F1 score: $<$~2\%), nor does training purely on real-world region-specific data (increase in overall accuracy: $<$~2\%; increase in F1 score: $<$~3\%). Model transferability is still a major challenge for the assessment of binary or multi-class building damage. For example, \cite{vetrivel_2018} found that the geographic transferability of their supervised model based on a CNN and 3D point cloud features is limited even when scene characteristics vary only slightly. In our study, we achieve model transferability through the integration of domain knowledge in the process of training data generation. Using the concept of a damage catalogue developed in \cite{Kohns_et_al_2021a}, we are able to identify damage patterns which characterise the target damage grades across different geographic regions, and to model them into the virtual 3D building models.
Therefore, our model is trained on geometric change features which generalise adequately to discriminate the target damage grades in point clouds of different geographic regions. This contribution of our method is of utmost relevance for practical use. In contrast to a computational estimation of building damage based on pre-calculated fragility functions \citep{kohns_2022b}, it allows assessing damage directly for the specific buildings in the affected area and applying the pre-trained model for damage classification in a real earthquake event as soon as post-event UAV data is available. \subsection{Automatic generation of large 3D training datasets} Generating simulated training data in this study comprises the manual modelling of various damage patterns into 3D building models and the annotation of buildings with damage labels. As such training data can be generated, and machine learning models trained, before an actual earthquake event occurs, the trained models can be directly applied to classify damage in real-world datasets acquired after an earthquake. This saves valuable time before rescue and remediation actions can set in \citep{Kohns_et_al_2021a}. Future approaches of UAV-based damage assessment in earthquake response might integrate databases of existing damaged and undamaged 3D building models. Such databases could integrate both building models generated from real-world city models \citep[e.g.,][]{Zhihang_2018} and synthetically generated building models from other sources, including virtual laser scanning. In the field of forestry, for example, the Python software package pytreedb \citep{pytreedb} provides an open object-based library for laser scanning point clouds of trees with a simple interface to store and share the tree objects. A similar approach could be used to store and query labelled 3D building models in an open database.
The database could even be connected automatically with the laser scanning simulation module to assemble a multitude of different scenes in which building objects can be flexibly interchanged. Simulated building point clouds might then be stored with the respective building object in the database. Such an approach would considerably increase the degree of automation in the process of training data generation. It would also provide great flexibility in assembling input scenes for laser scanning simulation. The approach could thereby be especially valuable for classification methods with high demands for training data, such as deep learning \citep{deGelis_2021}. While in our approach we classify damage at the building level, it generally offers flexibility with respect to the spatial unit used for the extraction of changed building parts and the subsequent damage classification. Instead of a whole building, the approach might in the future also be tested to assess damage for building parts which represent a coherent unit of interest for damage classification in a specific use case. \subsection{Automatic assessment of lower damage grades in UAV-borne point clouds} In our method, we use a geometric approach for the assessment of structural building damage through change in pre- and post-event 3D~UAV-borne point clouds. We therein focus on the assessment of higher damage grades (heavy damage, extreme damage, destruction). These are usually geometrically resolved in UAV-borne point clouds acquired over large areas during an earthquake event. When interpreting classification results, it should be considered that buildings classified as non-damaged by our approach might be slightly or moderately damaged, but cannot be recognised as such from the point cloud data due to more subtle damage patterns.
While high damage grades are most relevant to support the coordination of immediate rescue actions on site in the very first hours after an earthquake event, a timely assessment of lower damage grades is also of great relevance, especially because these buildings are prone to severe damage in aftershock events. Other UAV-borne acquisition strategies are needed to meet the requirements for the assessment of lower damage grades. Most importantly, the spatial resolution of the derived point clouds would need to geometrically represent typical lower-grade damage patterns, such as cracks of millimetre width. This can be achieved by lower flight altitudes and flight speeds to focus on selected individual buildings at a higher level of detail. Such acquisition strategies favour high spatial resolution over large spatial coverage of the affected area \citep{meyer_2022}. To assess both high and low damage grades in a reasonable time frame, a two-step approach could be deployed \citep{kohns_et_al_egu_2021}: Immediately after an earthquake, fleets of UAVs are sent out to capture the entire affected area in overview flights within a few hours. Areas of higher relevance for immediate damage assessment can be prioritised in the acquisition through the integration of an initial damage assessment in the planning of the UAV missions. This initial damage forecast is based on ground motion fields and exposure data, including fragility and vulnerability functions, and gives a first estimate of the expected building-specific damage. Point clouds resulting from such overview flights are characterised by medium to low resolution ($>$~0.1~m) and can serve as input for the automatic and fast assessment of higher damage grades (destruction, extreme damage, heavy damage), as in our method. Beyond immediate response actions, this first assessment reduces the number of buildings that require a more detailed UAV-borne acquisition to identify potential lower damage grades (slight damage, moderate damage).
Hence, only buildings classified as not damaged in the coarse assessment need to be considered in detailed UAV acquisitions and the subsequent assessment of damage, which considerably reduces the amount of data to be analysed. Such approaches support local rescue teams during various phases after an earthquake. \section{Conclusion} We present a novel approach to automatically classify multi-class structural building damage from multi-temporal point clouds, evaluated on pre- and post-event data of an earthquake event. We evaluate a supervised machine learning model trained on simulated point clouds from virtual UAV-borne laser scanning with respect to its capacity to classify the damage grades no damage, heavy damage, extreme damage, and destruction in a real-world photogrammetric dataset. Damage is thereby assessed through an approach that considers the change of geometric features between pre-event and post-event point clouds for the classification of damage grades. Our results reveal transferability with respect to multi-source point clouds used for training (simulated UAV-borne laser scanning) and application (real-world UAV-borne photogrammetry) of the model. This is achieved by using a set of robust object-specific change features which characterise damage patterns both in the simulated and the real-world point clouds. Consequently, simulated point clouds from virtual laser scanning provide a valuable source of realistic labelled training data for the classification task at hand. We further achieve geographic transferability of the model trained on simulated point clouds by integrating domain knowledge from earthquake engineering in the generation of realistic simulated training data. This allows training the model on geometric change which characterises the target damage grades across different geographic regions.
The assessment of multiple damage grades in the real-world dataset yields high accuracies (overall accuracy:~92.0\%--95.1\%; F1 score:~78.1\%--91.7\%). Classification accuracies improve only slightly when using real-world region-specific training data (increase in overall accuracy:~$<$~2\%; F1 score:~$<$~3\%). The same applies to the binary case of detecting damaged buildings, for which the classifier trained on generic simulated training data detects~84.6\% of damaged buildings; using real-world region-specific training data only slightly increases the detection rate (89.7\%). Given the two aspects of transferability, we conclude that our approach provides a powerful assessment of multi-class structural building damage. We consider it especially relevant for applications where timely information on the damage situation is required, often linked to the situation that sufficient real-world training data is not available. \newpage \beginsupplement \section*{Data statement} The 3D building models, simulated point clouds, and Python scripts used in this article will be openly provided after peer-reviewed publication. \section*{Acknowledgement} This work was supported by the Bundesministerium für Bildung und Forschung (BMBF), Federal Ministry of Education and Research, under Grant 03G0890 in the frame of the project LOKI, and by the Deutsche Forschungsgemeinschaft (DFG), German Research Foundation, in the frame of the project VirtuaLearn3D under Grant 496418931. We would like to thank CGR S.p.A. (Compagnia Generale Ripreseaeree) for providing the real-world image data of the city of L'Aquila for deriving the photogrammetric point clouds used in this study. \bibliographystyle{elsarticle-harv}
\section{Galaxy Groups as Probes of Cosmic Feedback} In recent years, deep galaxy surveys such as GOODS \citep{giav04} have been providing a wealth of high-quality data on the properties of galaxies across a wide range of redshifts and masses. In combination with large-area surveys of the low-redshift Universe (e.g.\ the SDSS), this is enabling detailed studies of the history of stellar mass assembly, star formation activity, and nuclear activity across considerable look-back times, providing a much more detailed view of galaxy evolution than available just a decade ago. However, a complete understanding of galaxy evolution and the processes driving it is unlikely to emerge from statistical considerations applied to large galaxy samples alone. One of several complimentary approaches is to consider the {\em impact} of galaxy activity on the surrounding environment. Groups of galaxies provide particularly useful laboratories for such studies, not only because they represent a very common galaxy environment in the nearby Universe, but also because they contain a hot intracluster medium (ICM) whose properties (thermal pressure, entropy, metallicity) are highly susceptible to non-gravitational processes such as those associated with galactic feedback from star formation and AGN activity. X-ray studies of this hot gas can therefore provide information on the integrated feedback activity of galaxies. Using a sample of 15 X-ray bright groups observed with {\em Chandra}, we are using measurements of the metal distribution in groups to unravel some of the details of the history of star formation and nuclear activity in the group members. \section{Supernova Feedback and Star Formation History} From the {\em Chandra} data, we have measurements of the radial distribution of the abundance of iron and silicon in the intragroup gas for all 15 systems (details of the group sample and data reduction can be found in \citealt{rasm07}). 
For an assumed set of supernova (SN) yields \citep{iwam99,nomo06}, the results for either element can be uniquely decomposed into contributions from SN~Ia and SN~II and the results then stacked to provide mean profiles for the entire sample. In Fig.~\ref{fig,SN} we show the resulting average contribution of SN~Ia relative to that of SN~II within the sample, plotted as a function of radius in units of $r_{500}$. The abundance pattern clearly implies a strong dominance of SN~II enrichment at large radii, where most of the intragroup gas resides. \setcounter{figure}{0} \begin{figure}[htb] \plotfiddle{jrasmussen_fig1.eps}{5.1cm}{0}{42}{42}{-110}{0} \caption{The mean number ratio of SN~Ia vs.\ SN~II in our groups as inferred from the ICM enrichment pattern, with the shaded region representing the typical relative uncertainty of 25\%. For comparison, dashed lines show observed and predicted ratios in different redshift intervals from deep field data \citep{dahl04}.} \label{fig,SN} \end{figure} Comparison to the ratio of SN~Ia vs SN~II measured in the GOODS survey \citep{dahl04} over a range of redshifts shows that our inferred SN ratios well outside the group cores, at $r\ga 0.5r_{500}$, are inconsistent with observed values at low-to-intermediate redshifts ($z \la 0.6$) but broadly agree with predictions at $z \ga 1.5$. The comparison data involve measured SN~Ia rates out to $z\approx 1.8$, and SN~II rates beyond $z\approx 1$ as predicted from evolutionary models of the cosmic star formation rate (see Fig.~2 in \citealt{dahl04}). The immediate implication of Fig.~\ref{fig,SN} is that most enrichment, and hence SN and star formation activity in the groups, must have taken place reasonably close to the peak of the cosmic star formation rate at $z\sim 2-3$. Further insight may be gained by considering the total metal mass in the groups generated by each of the two SN types, normalized by the aggregate $K$-band luminosity $L_K$ of the group members.
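The two-component decomposition described above amounts to solving a small linear system: the observed Fe and Si masses are expressed as linear combinations of per-event SN~Ia and SN~II yields. The sketch below uses illustrative yield values and hypothetical metal masses, not the numbers adopted in the paper:

```python
import numpy as np

# Illustrative per-event yields in solar masses (placeholders in the
# spirit of the Iwamoto et al. 1999 / Nomoto et al. 2006 tables, NOT
# the exact values adopted in the paper).
Y = np.array([[0.74, 0.07],   # Fe yield per event: [SN Ia, SN II]
              [0.15, 0.12]])  # Si yield per event: [SN Ia, SN II]

# Hypothetical observed metal masses in the intragroup gas (M_sun)
observed = np.array([2.0e8, 1.1e8])  # [M_Fe, M_Si]

# Solve the 2x2 system for the number of each SN type, then form the
# Ia/II number ratio plotted in Fig. 1
n_Ia, n_II = np.linalg.solve(Y, observed)
print(f"N(Ia) = {n_Ia:.3g}, N(II) = {n_II:.3g}, Ia/II = {n_Ia / n_II:.2f}")
```

Stacking such per-group solutions in radial bins then yields the mean SN~Ia/SN~II profile of Fig.~1.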
Using the adopted SN yields and the SN rate per unit $L_K$ \citep{mann05} observed in local early-type galaxies (which dominate the optical output in our groups), these metal mass-to-light ratios can be translated into enrichment time-scales. The latter will be lower limits, however, because we do not account for metals locked in stars or metals ejected beyond $r_{500}$ by galaxy winds, nor for the fact that $L_K$ must have been smaller in the past due to the continued growth of stellar mass in group members and the addition of further members over time. Fig.~\ref{fig,times} (left) shows the results for iron from SN~Ia, revealing time-scales of $\ga 10$~Gyr in many cases and suggesting that SN~Ia at current rates cannot have produced the required amount of Fe in several of the groups. \begin{figure}[htb] \plottwo{jrasmussen_fig2a.eps}{jrasmussen_fig2b.eps} \caption{{\itshape Left:\/} $K$-band mass-to-light ratio of iron within $r_{500}$ produced by SN~Ia in the groups, as a function of mean X-ray temperature. Shaded regions show the corresponding time-scales for SN~Ia in the group members to produce the observed metal mass, with uncertainties reflecting those of the SN~Ia rates in local early-types. {\itshape Right:\/} The same for SN~II, assuming SN rates in local spirals for all satellite galaxies, as a function of $K$-band luminosity ratio of the central galaxy to all group members.} \label{fig,times} \end{figure} This issue is even more acute for SN~II, for which only upper limits to the rate in local early-types are available. Even if, for the sake of argument, we assume SN~II rates in line with those of nearby late-type {\em spirals} for all galaxies except the central brightest group galaxy (BGG), the time-scales are still prohibitively large in most cases (Fig.~\ref{fig,times} right).
Hence, the inferred enrichment time-scales require much higher specific SN rates in the past in the group members, independently confirming the well-established need for a rise in the cosmic star formation rate density out to at least $z \sim 2-3$, as inferred from galaxy surveys. While similar results have been reported for massive galaxy clusters (e.g.\ \citealt{fino00}), this has not previously been tested at the far more common mass scale of galaxy groups. \section{Constraining SN and AGN Feedback} Assuming that energy and metals have been released proportionally from supernovae to the ICM, the SN ratios and metal masses implied by Figs.~\ref{fig,SN} and \ref{fig,times} allow us to estimate the total SN energy imparted to the hot gas for an assumed SN explosion energy of 10$^{51}$ erg. The resulting values within $r_{500}$, shown in Fig.~\ref{fig,E} (left), scatter around a mean of $\sim 0.6$~keV per ICM particle, with no clear trend with group ``mass'' $\langle T \rangle$. Both the mean and scatter are in broad agreement with results of hydrodynamical simulations of groups involving momentum-driven galaxy winds to account for ICM enrichment \citep{dave08}. In principle, the inferred SN energies can also help constrain the impact of AGN feedback in the groups. As a first crude step, one could argue that the combined energy input from these feedback processes cannot substantially have exceeded the sum of the ICM thermal energy and its integrated energy losses without unbinding the hot gas (according to the virial theorem). Under this assumption, and evaluating the total radiative energy losses from the ICM on the basis of its current X-ray luminosity integrated over a 10~Gyr time-scale, the resulting total allowed AGN heating energy in each group is shown in Fig.~\ref{fig,E} (right). \begin{figure}[htb] \plottwo{jrasmussen_fig3a.eps}{jrasmussen_fig3b.eps} \caption{{\itshape Left:\/} SN energy per ICM particle associated with the observed ICM metal masses. 
{\itshape Right:\/} Total allowed AGN heating energy as a function of total stellar mass in each group.} \label{fig,E} \end{figure} This would suggest that for a typical $T\sim 1$~keV system with a total stellar mass $M_\ast \sim 1\times 10^{12}$~M$_\odot$, the integrated AGN heating energy cannot substantially have exceeded $\sim 10^{49}$~erg per M$_\odot$ of stellar mass. To some extent, however, the above approach mainly probes the resilience of the {\em current} ICM to additional heating, and it also neglects any bulk kinetic energy imparted to the gas which may have modified its density distribution. Such a contribution could be substantial, especially in the poorest systems, and accounting for it would more accurately reflect the maximum energy input that can have occurred in the groups. Hence, more robust constraints on AGN feedback would arise from assessing the amount of work done against gravity in establishing current gas mass fractions and distributions within $r_{500}$ relative to those of massive clusters. Assuming that the AGN heating energy in Fig.~\ref{fig,E} has been released over a 10~Gyr time-scale, the resulting time-averaged heating power in the groups is typically an order of magnitude larger than the current mechanical AGN luminosity of the BGG, as estimated from its observed 1.4-GHz radio power and the relation of \citet{birz04}. Significantly more powerful AGN activity in the past within these groups is thus allowed, but not necessarily required, by the above results, in qualitative agreement with the inferred rise in the AGN luminosity density out to $z \approx 2$ (e.g.\ \citealt{hopk07}). Some further implications for AGN accretion and supermassive black hole growth can be obtained by estimating current central black hole masses $M_{\rm BH}$ from the observed BGG bulge velocity dispersion (e.g.\ \citealt{gebh00}).
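A back-of-the-envelope check of the numbers just quoted: an upper limit of $\sim 10^{49}$ erg per solar mass of stars for a system with $M_\ast \sim 10^{12}$ M$_\odot$, released over 10 Gyr, corresponds to the following time-averaged heating power (illustrative arithmetic only, using the values stated in the text):

```python
# Total allowed AGN heating energy and the corresponding time-averaged
# power over 10 Gyr, for the typical group quoted in the text.
GYR_S = 3.156e16                 # seconds per Gyr

E_per_Msun = 1e49                # erg per M_sun of stars (upper limit)
M_star = 1e12                    # illustrative total stellar mass, M_sun

E_total = E_per_Msun * M_star    # ~1e61 erg in total
P_mean = E_total / (10 * GYR_S)  # time-averaged heating power, erg/s
print(f"E_total ~ {E_total:.1e} erg, <P> ~ {P_mean:.1e} erg/s")
```

A mean power of a few times $10^{43}$ erg s$^{-1}$ is indeed about an order of magnitude above typical current mechanical AGN luminosities of bright group galaxies, consistent with the comparison made in the text.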
The energies in Fig.~\ref{fig,E} then provide rough upper limits to the efficiency $\eta \sim E_{\rm AGN}/(M_{\rm BH}c^2)$ with which mass accreted by the central black hole in the BGGs has been converted into heating energy in the groups. On average, this number is $\eta \approx 1-2$\% for our sample, rising to $\eta \approx 5$\% in the hottest groups. Although these numbers should clearly be regarded as tentative at present, it is interesting to note that they are consistent with estimates of the ratio between black hole accretion rate and AGN jet power in bright elliptical galaxies ($\approx 2$\%; \citealt{alle06}). \section{Summary} Our work demonstrates that the SN and AGN feedback history of galaxies can be probed by studying how such feedback processes have affected the hot gas surrounding galaxies in nearby galaxy groups. This provides a useful complementary approach to those based on large multi-wavelength galaxy surveys. Specifically, comparison of the observed amount of metals in the hot intragroup gas to the present-day optical properties of the group members indicates much higher supernova and star formation rates per stellar mass in the past. By further requiring that the observed metal masses be produced within a Hubble time, these findings could in principle be quantitatively checked against models predicting the redshift evolution of the specific star formation rate and stellar mass in galaxies of a given present-day mass. Our observations also enable crude constraints on the integrated impact of AGN feedback from the group members, providing rough upper limits to the total AGN heating energy released per stellar mass, and to the efficiency with which central supermassive black holes have converted accreted mass into heating energy. With some modifications, our approach could eventually deliver robust constraints on models of galaxy formation and evolution which include the growth of central supermassive black holes and the associated AGN feedback.
\acknowledgements JR acknowledges support provided by the National Aeronautics and Space Administration through Chandra Postdoctoral Fellowship Award Number PF7-80050.
\section{Introduction} In the seminal paper of Black and Scholes (1973), the problem of valuing a European option was solved in closed form. Among other things, their result assumes that the stochastic process associated with the underlying asset is a geometric Brownian motion, not allowing for the payment of discrete dividends. Yet the majority of stocks on which options trade do pay dividends. Merton (1973) was the first to relax the no-dividend assumption, allowing for a deterministic dividend yield. In this case, he showed that European options can be priced in the context of a Black-Scholes economy, with either a continuous dividend yield or a discrete dividend proportional to the stock price. However, when the dividend process is discrete and does not depend on the stock level, the simplicity of the Black-Scholes model breaks down. Let $S_{t}$ denote the value of the underlying asset at time $t$, and let $T$ be the maturity time of the option. When the risky asset pays a dividend $D$ at time $\tau <T$, a jump of size $D$ in the value process happens at that point in time. The stock price process is discontinuous at $t=\tau $ and is no longer a geometric Brownian motion in the time interval $[0,T]$. The standard approximation procedure for valuing European options written on such a risky asset, first informally suggested by Black (1975), uses a Black-Scholes formula in which the initial price of the underlying stock $S_{0}$ is replaced by its actual value less the present value ($PV$) of the dividends ($Div$), \[ S_{0}\to S_{0}^{\ast }=S_{0}-PV(\hbox{Div}) \] This adjustment is made to evaluate the option at any point in time before $\tau $. After the payment of dividends, there is no need for further adjustments. In this approximation, the input in the Black-Scholes formula is the value of the (continuous) stochastic process, \[ S_{t}^{\ast }=\left\{ \begin{array}{l} S_{t}-De^{-r\left( \tau -t\right) },\quad t<\tau \\ S_{t},\quad t\geq \tau \end{array} \right.
\] where $r$ is the risk-free rate. For $t<\tau $, the discontinuous stock price process $S_{t}$ can thus be seen as the sum of two components ($S_{t}=S_{t}^{\ast }+De^{-r\left( \tau -t\right) }$): a riskless component, $De^{-r\left( \tau -t\right) }$, corresponding to the known dividends during the life of the option, and a continuous risky component $S_{t}^{\ast }$. At any given time before $\tau $, the riskless component is the present value of the dividend discounted to the present at the risk-free rate. For any time after $\tau $ until the time the option matures, the dividend will have been paid and the riskless component will no longer exist. We thus have $S_{T}=S_{T}^{\ast }$ and, as pointed out by Roll (1977), the usual Black-Scholes formula is correct to evaluate the option only if $S_{t}^{\ast }$ follows a geometric Brownian motion. In that case, we would use in the Black-Scholes formula $S_{0}^{\ast }$ for the initial value, together with the volatility of the process $S_{t}^{\ast }$ followed by the risky component of the underlying asset. If we assume that $S_{t}^{\ast }$\ follows a geometric Brownian motion, a simple application of It\^{o}'s Lemma shows that the original stock price process $S_{t}$ does not follow a geometric Brownian motion in the time interval $\left[ 0,\tau \right[ $. On the other hand, under the Black-Scholes assumption that $S_{t}$ follows a geometric Brownian motion in $\left[ 0,\tau \right[ $, the risky component $S_{t}^{\ast }$ follows a continuous process that is not a geometric Brownian motion in $\left[0,\tau \right[ $. Therefore, the standard procedure described above must be seen as an approximation to the true value of such calls under the Black-Scholes assumption. As argued by Bos and Vandermark (2002), this assumption typically underlies the intuition of traders, but the approximation is sometimes bad.
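The standard "escrowed dividend" procedure just described is easy to state in code. The following minimal sketch subtracts the present value of the dividend from the spot and then applies the plain Black-Scholes formula; the parameter values are illustrative, chosen to match those of Fig.~\ref{fig:BS} below:

```python
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def bs_call(S, K, r, sigma, T):
    """Plain Black-Scholes European call price."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S * N(d1) - K * exp(-r * T) * N(d1 - sigma * sqrt(T))

def escrowed_dividend_call(S0, K, r, sigma, T, D, tau):
    """Black's approximation: replace S0 by S0 - D e^{-r tau} and
    apply the no-dividend Black-Scholes formula."""
    S_star = S0 - D * exp(-r * tau)
    return bs_call(S_star, K, r, sigma, T)

price = escrowed_dividend_call(S0=100, K=100, r=0.03, sigma=0.2,
                               T=1.0, D=5, tau=0.5)
print(f"approximate call value: {price:.4f}")
```

As discussed next, this value is only an approximation to the true price under the Black-Scholes assumption on $S_t$.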
In fact, as noticed in the early papers about option pricing (Cox and Ross, 1976; Merton, 1976a; Merton, 1976b), the correct specification of the stochastic process followed by the value of the underlying stock is of prime importance in option valuation. The deficiency of this standard procedure is reported in Beneder and Vorst (2001). Using Monte Carlo simulation methods, these authors calculate the values of call options under the Black-Scholes assumption, and compare them with the values obtained with the approach just described. Reported errors are up to $9.4\%$. They also find that the standard procedure above usually undervalues the options. For these reasons, Beneder and Vorst (2001) propose a different approximation, trying to improve the standard procedure by adjusting the volatility of the underlying asset. This approach consists in modifying the variance of the returns by a weighted average of an adjusted and an unadjusted variance, where the weighting depends on the time $\tau $ of the dividend payment. Although it performs much better than the former approximation, this method still does not allow one to control the error for given parameters of the economy. Analogously, Frishling (2002) warns of the mispricing risk due to the use of an incorrect underlying stochastic process. This discussion is followed by a series of recent papers suggesting different approximations that better match numerical results (Bos and Vandermark, 2002; Bos \textit{et al}, 2003). More recently, Haug \textit{et al} (2003) discuss this problem. However, as these authors claim, ``[i]n the case of European options, the above techniques are \textit{ad hoc}, but the job gets done (in most cases) when the corrections are properly carried out''. The development of these approximations highlights two important aspects. First, they are not exact, and it is not possible to control the error with respect to the correct value of the option.
Second, there are numerical procedures to estimate the value of these options, such as Monte Carlo simulation methods. However, these methods are time consuming and converge only in a statistical sense. The purpose of this paper is to derive a closed form for the exact value of European options on a stock paying a discrete dividend, in the context of a Black-Scholes economy. We obtain an exact result and we need not rely on \textit{ad hoc} assumptions. This paper is organized as follows. In Section 2, an integral representation for the value of European options written on an asset paying a discrete dividend is obtained, and the convexity properties of the solutions of the Black-Scholes equation are derived. In Section 3, we construct functional upper and lower bounds for the integral representation of the value of an option. These bounds follow from a convexity property of the solutions of the Black-Scholes equation. Theorem \ref{main} is the main result of this paper and gives the algorithmic procedure to determine the price of European options on a stock paying a discrete dividend. In Section 4, numerical examples are analyzed and we discuss the advantages of the proposed method. In Section 5, we summarize the main conclusions of the paper. \section{Valuation of European options on a stock paying a discrete dividend} In this section, following a standard procedure to derive the Black-Scholes formula (Wilmott, 2000), we derive an integral representation for the value of a European option written on an asset paying a known discrete dividend. We consider a European call option with maturity time $T$ and strike price $K$. This call option is written on an underlying asset with value $S_{t}$, with stochastic differential equation, \[ dS_{t}=\mu S_{t}dt+\sigma S_{t}dW_{t} \] where $\mu $ and $\sigma $ are the drift and volatility of the underlying asset.
The quantity $W_{t}$ is a continuous and normally distributed stochastic process with mean zero and variance $t$. Under these conditions, the underlying asset with value $S_{t}$ follows a geometric Brownian motion. We also assume a risk-free asset with constant rate of return $r$. In the context of the Black-Scholes economy, the value $V$ of an option depends on the time $t$ and on the price of the underlying asset $S$. Under the absence of arbitrage opportunities (Wilmott, 2000; Bj\"{o}rk, 1998), it follows that $V(S,t)$ obeys the Black-Scholes equation, \begin{equation}\label{eq:pde} {\frac{{\partial V}}{{\partial t}}}+{\frac{1}{2}}\sigma ^{2}S^{2}{\frac{{\partial ^{2}V}}{{\partial S^{2}}}}+rS{\frac{{\partial V}}{{\partial S}}}-rV=0 \end{equation} The Black-Scholes equation is a quasi-linear parabolic partial differential equation, with $S\geq 0$, and $t\geq 0$. To determine the solutions of the Black-Scholes equation, we introduce the new variables, \begin{equation} \label{eq:newvariables} \left\{ \begin{array}{l} \theta =T-t \\ x =\log S+\left( r-\frac{\sigma ^{2}}{2}\right) \left( T-t\right) \end{array} \right. \end{equation} together with the new function $\varphi (x,\theta )=e^{r(T-t)}V(S,t)$. In the new coordinates (\ref{eq:newvariables}), the Black-Scholes equation (\ref{eq:pde}) becomes the diffusion equation, \begin{equation} {\frac{{\partial \varphi }}{{\partial \theta }}}={\frac{1}{2}}\sigma ^{2}{\frac{{\partial ^{2}\varphi }}{{\partial x^{2}}}} \label{eq:diffusion} \end{equation} where $x\in \mathbf{R}$ and $\theta \geq 0$. If $\theta =0$, by (\ref{eq:newvariables}), we have $\varphi (x,0)=V(S,T)$, and $\varphi (x,T)=e^{rT}V(S,0)$. Therefore, by (\ref{eq:newvariables}), the forward solution in the time $\theta $ of the diffusion equation corresponds to the backward solution in the time $t$ of the Black-Scholes equation (\ref{eq:pde}).
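As a consistency check of this reduction, one can verify the substitution directly by the chain rule (a sketch, using only the definitions above):

```latex
% With \theta = T-t, x = \log S + (r - \sigma^2/2)\theta and
% \varphi(x,\theta) = e^{r\theta} V(S,t), the chain rule gives
\begin{eqnarray*}
\frac{\partial \varphi }{\partial \theta } &=& e^{r\theta }\left[ rV-\left( r-\frac{\sigma ^{2}}{2}\right) S\frac{\partial V}{\partial S}-\frac{\partial V}{\partial t}\right] \\
\frac{\partial ^{2}\varphi }{\partial x^{2}} &=& e^{r\theta }\left[ S^{2}\frac{\partial ^{2}V}{\partial S^{2}}+S\frac{\partial V}{\partial S}\right]
\end{eqnarray*}
% Inserting these into \varphi_\theta = (\sigma^2/2)\,\varphi_{xx} and
% cancelling the common factor e^{r\theta} recovers exactly the
% Black-Scholes equation V_t + (\sigma^2/2) S^2 V_{SS} + r S V_S - rV = 0.
```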
The Black-Scholes problem for the price of a call option is to determine the option value at time $t=0$ whose value at maturity time $T$ is, \begin{equation}\label{eq:fronteira} V\left( S,T\right) =\max \{0,S-K\} \end{equation} Therefore, due to the change of coordinates (\ref{eq:newvariables}), the call option solution of the Black-Scholes equation (\ref{eq:pde}) is equivalent to an initial value problem for the diffusion equation. Consider now an initial data problem for the diffusion equation (\ref{eq:diffusion}), $\varphi (x,\theta =0)=f(x)$. Under these conditions, the general solution of (\ref{eq:diffusion}) is (Folland, 1995), \begin{equation} \varphi \left( {x,\theta }\right) ={\frac{1}{{\sigma \sqrt{2\pi \theta }}}} \int_{-\infty }^{\infty }f(y){\exp }\left[ {{-{\frac{{\left( {x-y}\right) ^{2}}}{{2\sigma ^{2}\theta }}}}}\right] {dy} \label{eq:soldiff} \end{equation} and the solution of the Black-Scholes equation for a call option is, \begin{equation} V\left( {S}{,0}\right) =e^{-rT}\varphi \left( {x,T}\right) ={\frac{e^{-rT}}{{\sigma \sqrt{2\pi T}}}}\int_{-\infty }^{\infty }V(e^{y},T){\exp }\left[ {{-{\frac{{\left( {x-y}\right) ^{2}}}{{2\sigma ^{2}T}}}}}\right] {dy} \label{eq:solBS} \end{equation} This integral can be easily calculated to obtain the usual Black-Scholes formula (Black and Scholes, 1973; Wilmott, 2000). For a dividend distribution at some time $\tau \in (0,T)$, the Black-Scholes formula is no longer true, since, during the lifetime of the option, the value of the underlying asset does not follow a geometric Brownian motion. However, if we take the time intervals, $I_{1}=[0,\tau \lbrack $ and $I_{2}=[\tau ,T]$, the value of the underlying asset follows a geometric Brownian motion in each interval $I_{1}$ and $I_{2}$, and, at time $t=\tau $, it has a jump equal to the dividend $D$.
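A quick numerical check, with illustrative parameters, that a direct quadrature of (\ref{eq:solBS}) reproduces the closed-form Black-Scholes price:

```python
from math import log, sqrt, exp, pi
from statistics import NormalDist
import numpy as np

N = NormalDist().cdf
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0  # illustrative values

# Closed-form Black-Scholes call price (no dividend)
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
bs = S0 * N(d1) - K * exp(-r * T) * N(d1 - sigma * sqrt(T))

# Direct quadrature of the integral representation:
# V(S,0) = e^{-rT}/(sigma sqrt(2 pi T)) * int max(e^y - K, 0)
#          * exp(-(x - y)^2 / (2 sigma^2 T)) dy
x = log(S0) + (r - 0.5 * sigma**2) * T
y = np.linspace(x - 10 * sigma * sqrt(T), x + 10 * sigma * sqrt(T), 200001)
dy = y[1] - y[0]
integrand = np.maximum(np.exp(y) - K, 0.0) * np.exp(-(x - y)**2 / (2 * sigma**2 * T))
quad = exp(-r * T) / (sigma * sqrt(2 * pi * T)) * integrand.sum() * dy

print(f"closed form: {bs:.6f}, quadrature: {quad:.6f}")
```

The two values agree to within the quadrature error, confirming the representation before the dividend case is taken up.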
Before considering this case, we proceed with some properties of the solutions (\ref{eq:soldiff}) and (\ref{eq:solBS}) of the diffusion and of the Black-Scholes equations. \begin{definition} A real-valued function $f\left( x \right)$, with $x \in \mathbf{R}$, is convex if, for every $x_1 , x_2 \in \mathbf{R}$, \[ f\left( {{\frac{{x_1 + x_2 } }{2}}} \right) \le {\frac{1 }{2}}\left( { f\left( {x_1 } \right) + f\left( {x_2 } \right)} \right) \] \end{definition} A simple property of convex functions is that, if the real-valued functions $f$ and $g$ are both convex, and $f$ is increasing, then $f(g(x))$ is also convex. \begin{proposition} \label{convex} Let $f(x)$ be the initial data function of a well-posed diffusion equation problem, and suppose that $f(x)$ is non-negative and convex. Then, for fixed $\theta $, the solution $\varphi (x,\theta )$ of the diffusion equation is also convex. Moreover, if $f(x)$ is an increasing function, then, for fixed $\theta $, $\varphi (x,\theta )$ is also increasing. \end{proposition} \emph{Proof.} Suppose that the solution (\ref{eq:soldiff}) of the diffusion equation (\ref{eq:diffusion}) is well defined (Folland, 1995).
By (\ref{eq:soldiff}), with $z=y-x$, we have, \[ \varphi \left( {x,\theta }\right) ={\frac{1}{{\sigma \sqrt{2\pi \theta }}}} \int_{-\infty }^{\infty }f(z+x){\exp }\left( {{-{\frac{{z^{2}}}{{2\sigma ^{2}\theta }}}}}\right) {dz} \] As, by hypothesis, $f(x)$ is convex, then, for every $z\in \mathbf{R}$, \[ f\left[ {{\frac{{\left( {x_{1}+z}\right) +\left( {x_{2}+z}\right) }}{2}}} \right] =f\left( z+{\frac{x_{1}+x_{2}}{2}}\right) \leq {\frac{1}{2}}\left[ { f\left( {z+x_{1}}\right) +f\left( {z+x_{2}}\right) }\right] \] and, as $f(x)$ is non-negative, \begin{eqnarray*} \varphi \left( {\frac{x_{1}+x_{2}}{2},\theta }\right) &=&\frac{1}{{\sigma \sqrt{2\pi \theta }}}\int_{-\infty }^{\infty }f\left( z+{\frac{x_{1}+x_{2}}{2}}\right) {\exp }\left( {{-{\frac{{z^{2}}}{{2\sigma ^{2}\theta }}}}}\right) {dz} \\ &\leq &{\frac{1}{2}}\left[ \varphi \left( x_{1},\theta \right) +\varphi \left( x_{2},\theta \right) \right] \end{eqnarray*} and so $\varphi (x,\theta )$ is also convex. Assuming now that $f(x)$ is increasing, we have that $f(x_2)\ge f(x_1)$, whenever $x_2>x_1$. Then, for every $z \in \mathbf{R}$, we have, $f(z+x_2)\ge f(z+x_1)$, and, by (\ref{eq:soldiff}), the last assertion of the proposition follows. \hfill $\Box$ \bigskip As (\ref{eq:fronteira}) is a convex function in $S$, Proposition \ref{convex} implies that the backward solution (\ref{eq:solBS}) of the Black-Scholes equation (\ref {eq:pde}) is also a convex function. Suppose now that a dividend on the underlying asset is distributed at time $t=\tau $. We denote this dividend by $D$.
According to the classical solution of the Black-Scholes equation (Wilmott, 2000), the price of the option just after the distribution of dividends at time $t=\tau $ is, \begin{equation} \label{eq:value+} V\left( {S_{+},\tau }\right) =S_{+}\,N\left( {d+\sigma \sqrt{T-\tau }}\right) -Ke^{-r\left( {T-\tau }\right) }N\left( {d}\right) \end{equation} where, \[ d={\frac{{\ln S_{+}-\ln K+\left( r-{\frac{1}{2}}\sigma ^{2}\right) \left( T-\tau \right) }}{{\sigma \sqrt{T-\tau }}} } \] and $S_{+}$ denotes the value of the underlying asset just after the dividend distribution. The function $N(\cdot )$ is the cumulative distribution function for the normal distribution with mean zero and unit variance. By Proposition \ref{convex}, the function $V(S_{+},\tau )$ is convex. Note that the solution (\ref{eq:value+}) is given by, $V\left( {S_{+},\tau }\right) =e^{-r\left( {T-\tau }\right) } \varphi(x, T-\tau)$, and is directly calculated from (\ref{eq:solBS}) and (\ref{eq:fronteira}). The approach taken here to value an option is equivalent (see, among others, Cox and Ross, 1976; Harrison and Kreps, 1979) to writing this value at any point in time as the expected discounted payoff of the option at maturity $T$, under the so-called risk-neutral probability measure. Hence, since the amount to be distributed as dividend is known beforehand, the value of the option does not jump at $\tau $. In other words, the payment of known dividends $D$ at a known point in time $\tau $ does not affect the expectations at time $\tau $ about the final payoff of the option at maturity $T$, and the value of the option is continuous at $\tau $\footnote{According to Wilmott, 2000, the jump condition on the asset price is known {\it a priori}, implying that there is no surprise in the fall of the stock price. Therefore, in order to avoid arbitrage opportunities, the value of the option should not change across the dividend date. This is a no-arbitrage argument.} (Wilmott, 2000, pp. 129-131).
Going backward in time, the value of the underlying asset jumps from $S_{+}$ to $S_{-}=S_{+}+D$, where $S_{-}$ is the value of the underlying asset just before the dividend distribution. As $V(S_{+},\tau)=V(S_{-},\tau )$, by (\ref{eq:value+}), the price of the option just before the distribution of dividends at time $t=\tau $ is, \begin{equation} \label{eq:value-} V(S_{-},\tau )=\left\{ \begin{array}{ll} (S_{-}-D)N( {\bar{d}+\sigma \sqrt{T-\tau }}) -Ke^{-r(T-\tau)}N( {\bar{d}}) &\hbox{if}\ S_{-}>D \\ 0 &\hbox{if}\ S_{-}\leq D \\ \end{array} \right. \end{equation} where, \begin{equation} {\bar{d}}={\frac{{{\ln (S_{-}-D)-\ln {K}+\left( {r-{\frac{1}{2}}\sigma ^{2}} \right) \left( {T-\tau }\right) }}}{{\sigma \sqrt{T-\tau }}}} \label{eq:d-} \end{equation} In Fig. \ref{fig:BS}, we plot $V({S_{+},\tau })$, $V({S_{-},\tau })$ and $V(S,T)$ as a function of $S$. The functions $V({S_{+},\tau })$, $V({S_{-},\tau })$ and $V(S,T)$ are convex. \begin{figure} [htbp] \centerline{\includegraphics[width=8 true cm]{fig1.eps}} \caption{Option values $V({S_+,\tau})$, $V({S_-,\tau})$ and $V(S,T)$ as a function of the value $S$ of the underlying asset. Parameter values are: $\mu =0.01$, $\sigma =0.2$, $r=0.03$, $K=100$, $D=5$, $T=1$ and $\tau=0.5$.} \label{fig:BS} \end{figure} To calculate the value of a call option as a function of the actual price ($t=0$) of the underlying asset, we must introduce the change of coordinates (\ref {eq:newvariables}) into (\ref{eq:value-}) and integrate as in (\ref {eq:solBS}). 
By (\ref{eq:solBS}) and (\ref{eq:value-}), it follows that the time-zero value of a European option written on an asset paying dividend $D$ at time $t=\tau $ is given by, \begin{equation} V\left( {S,0}\right) =e^{-r\tau }\varphi \left( {x,\tau }\right) ={\frac{e^{-r\tau }}{{\sigma \sqrt{2\pi \tau }}}}\int_{-\infty }^{\infty }V\left[ {S_{-}(y),\tau }\right] {\exp }\left[ {{-{\frac{{\left( {x-y}\right) ^{2}}}{{2\sigma ^{2}\tau }}}}}\right] {dy} \label{eq:solBSwithD} \end{equation} which has no simple representation in terms of tabulated functions. By Proposition \ref{convex}, $V(S,0)$ is also convex. \section{Accurate bounds for $V(S,0)$} As it is difficult to determine a closed form for the integral representation of the option's value (\ref{eq:solBSwithD}) in terms of tabulated functions, to estimate the value $V\left( {S,0}\right) $, we use the convexity property of $V({S_{-},\tau })$ and its asymptotic behavior as $S_{-}\rightarrow \infty $. \begin{lemma} \label{asymptotic}If $K>0$, then, in the limit $S_{-}\rightarrow \infty $, $V({S_{-},\tau })$ is asymptotic to the line $V=(S_{-}-D)-Ke^{-r(T-\tau )}$, and $V({S_{-},\tau })\geq (S_{-}-D)-Ke^{-r(T-\tau )}$. \end{lemma} \emph{Proof.} \noindent In the limit $S_{-}\rightarrow \infty $, ${\bar{d}}\rightarrow \infty $, and $N({\bar{d}})\rightarrow 1$. Hence, by (\ref{eq:value-}), $V({S_{-},\tau })$ is asymptotic to the line $V_1=(S_{-}-D)-Ke^{-r(T-\tau )}$. To prove the second part of the lemma, first note that, if $V_1=(S_{-}-D)-Ke^{-r(T-\tau )}\le 0$, then $S_{-}\leq D+Ke^{-r(T-\tau )}$. As $V({S_{-},\tau })$ is non-negative, if $S_{-}\leq D+Ke^{-r(T-\tau )}$, then $V({S_{-},\tau })\ge V_1$. Suppose now that $S_{-}>D+Ke^{-r(T-\tau )}$. Suppose, by contradiction, that there exists some $S_{-}={\bar{S}} $ such that, $V({{\bar{S}},\tau })=({\bar{S}}-D)-Ke^{-r(T-\tau )}$, and $V({{\bar{S}},\tau })>0$.
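Although (\ref{eq:solBSwithD}) has no closed form, it can be evaluated numerically by direct quadrature. The sketch below uses the illustrative parameter values of Fig.~\ref{fig:BS}, with an assumed spot $S_0=100$ (not a value from the text):

```python
from math import log, sqrt, exp, pi, erf
import numpy as np

Phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))  # normal CDF

S0, K, r, sigma, T, D, tau = 100.0, 100.0, 0.03, 0.2, 1.0, 5.0, 0.5

def V_minus(s):
    """Option value just before the dividend, the piecewise formula (value-)."""
    s = np.asarray(s, dtype=float)
    out = np.zeros_like(s)
    m = s > D
    db = (np.log(s[m] - D) - log(K)
          + (r - 0.5 * sigma**2) * (T - tau)) / (sigma * sqrt(T - tau))
    out[m] = (s[m] - D) * Phi(db + sigma * sqrt(T - tau)) \
             - K * exp(-r * (T - tau)) * Phi(db)
    return out

# Back-propagate V(S_-, tau) to time 0 with the heat kernel, eq. (solBSwithD)
x = log(S0) + (r - 0.5 * sigma**2) * tau
y = np.linspace(x - 10 * sigma * sqrt(tau), x + 10 * sigma * sqrt(tau), 100001)
dy = y[1] - y[0]
kernel = np.exp(-(x - y)**2 / (2 * sigma**2 * tau))
V0 = exp(-r * tau) / (sigma * sqrt(2 * pi * tau)) * np.sum(V_minus(np.exp(y)) * kernel) * dy
print(f"V(S0, 0) = {V0:.4f}")
```

This brute-force value is the benchmark that the functional bounds constructed next are designed to bracket.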
By (\ref{eq:value-}) and (\ref{eq:d-}), we then have, \[ Ke^{-r(T-\tau )}={\frac{N\left[ {\bar{d}}({\bar{S}})+\sigma \sqrt{T-\tau }\right] -1}{N\left[ {\bar{d}}({\bar{S}})\right] -1}}({\bar{S}}-D) \] As $(S_{-}-D)> Ke^{-r(T-\tau )}$, from the equality above, we obtain, \[ {\frac{N\left[ {\bar{d}}({\bar{S}})+\sigma \sqrt{T-\tau }\right] -1}{N\left[ {\bar{d}}({\bar{S}})\right] -1}}({\bar{S}}-D)=Ke^{-r(T-\tau )}<(S_{-}-D) \] Hence, \[ N\left[ {\bar{d}}({\bar{S}})+\sigma \sqrt{T-\tau }\right] <N\left[ {\bar{d}}({\bar{S}})\right] \] which contradicts the fact that $N(\cdot )$ is a monotonically increasing function of the argument. Therefore, the function $V({S_{-},\tau })$ and the line $V_1=(S_{-}-D)-Ke^{-r(T-\tau )}$ do not intersect for finite $\bar{S}$. As $V({S_{-},\tau })$ is a continuous function of $S_{-}$, then $V({S_{-},\tau })\ge V_1$ over the whole range of $S_{-}$, and the lemma is proved. \hfill $\Box$ \medskip To estimate the solution (\ref{eq:solBSwithD}) of the Black-Scholes equation, we use Proposition \ref{convex} and Lemma \ref{asymptotic} to construct integrable upper and lower bound functions of $V({S_{-},\tau })$. This construction proceeds as follows. Let us choose a fixed number $S_{-}=S^{\ast }>D$, and divide the interval $[D,S^{\ast }]$ into $M\geq 1$ smaller subintervals. The length of the subintervals is $\Delta S=(S^{\ast }-D)/M$, and their extreme points are denoted by, \[ S_{i}=D+i\,\Delta S,\qquad i=0,\ldots ,M \] As the function $V({S_{-},\tau })$ is convex, in each subinterval, the function $V({S_{-},\tau })$ is bounded from above by the chord that connects the points $(S_{i-1},V({S_{i-1},\tau }))$ and $(S_{i},V({S_{i},\tau }))$. We define the constants, \[ \alpha _{i}={\frac{M}{S^{\ast }-D}}\left[ V({S_{i},\tau })-V({S_{i-1},\tau })\right] ,\qquad i=1,\ldots ,M \] where by (\ref{eq:value-}), $V({S_{0},\tau })=0$.
Therefore, in each interval $[S_{i-1},S_{i}]$, the function $V({S_{-},\tau })$ is bounded from above by the function $f_{i}(S_{-})=\alpha _{i}(S_{-}-S_{i-1})+V({S_{i-1},\tau })$. Let us define the characteristic function of a set $I$ as, $\chi _{I}(x)=1$, if $x\in I$, and $\chi _{I}(x)=0$, otherwise. Then, the function $V({S_{-},\tau })$ in the interval $[D,S^{\ast }]$ is bounded from above by the piecewise linear function, \begin{equation} V_{1}^{+}({S_{-},\tau })=\sum_{i=1}^{M}\left[ \alpha _{i}(S_{-}-S_{i-1})+V({S_{i-1},\tau })\right] \chi _{\lbrack S_{i-1},S_{i}]}(S_{-}) \label{eq:upb1} \end{equation} To extend the bound of $V({S_{-},\tau })$ to $S_{-}>S^{\ast }$, we introduce the function, \begin{equation} V_{2}^{+}(S_{-},\tau )=\left[ (S_{-}-S^{\ast })+V(S^{\ast },\tau )\right] \chi _{\lbrack S^{\ast },\infty )}(S_{-}) \label{eq:upb2} \end{equation} By Proposition \ref{convex} and Lemma \ref{asymptotic}, for $S_{-}\geq S^{\ast }$, $V_{2}^{+}(S_{-},\tau )$ is the chord connecting the point $(S^{\ast },V(S^{\ast },\tau ))$ to the point at infinity. Therefore, we have proved the following: \begin{lemma} \label{upper} The function $V(S_{-},\tau )$ has the upper bound, \[ V(S_{-},\tau )\leq V_{1}^{+}(S_{-},\tau )+V_{2}^{+}(S_{-},\tau ),\ \ \hbox{if}\ \ S_{-}> D \] where $V_{1}^{+}$ and $V_{2}^{+}$ are given by (\ref{eq:upb1}) and (\ref{eq:upb2}), respectively, and the function $(V_{1}^{+}+V_{2}^{+})$ is piecewise linear and non-negative. If $S_{-}\le D$, $V(S_{-},\tau )=0$. \end{lemma} The construction of a lower bound for (\ref{eq:value-}) follows the same line of reasoning. In each subinterval $[S_{i-1},S_{i}]\subset \lbrack D,S^{\ast }]$, we can construct a linear function that bounds from below the function $V(S_{-},\tau )$. Due to the convexity of $V(S_{-},\tau )$, we construct the lower bound through the derivative of $V(S_{-},\tau )$ at the middle point of each interval $[S_{i-1},S_{i}]$.
We then have, \begin{equation} V_{1}^{-}\left( {S_{-},\tau }\right) =\sum\limits_{i=1}^{M}\left[ V^{\prime }\left( S_{i+{\frac{1}{2}}},\tau \right) \left( S_{-}-S_{i+{\frac{1}{2}}}\right) +V\left( S_{i+{\frac{1}{2}}},\tau \right) \right] \chi _{\lbrack S_{i-1},S_{i}]}(S_{-}) \label{eq:V1-} \end{equation} where, \[ V^{\prime }\left( {S_{-},\tau }\right) ={\frac{e^{-{\frac{1}{2}}\left( \overline{d}+\sigma \sqrt{T-\tau }\right) ^{2}}}{\sigma \sqrt{2\pi }\sqrt{T-\tau }}}-{\frac{K\,e^{-r\left( T-\tau \right) }\,e^{-{\frac{1}{2}}\,\overline{d}^{2}}}{\sigma \sqrt{2\pi }\sqrt{T-\tau }\left( S_{-}-D\right) }}+N\left( \overline{d}+\sigma \sqrt{T-\tau }\right) \] and ${\overline{d}}$ is given by (\ref{eq:d-}). To extend the lower bound of $V({S_{-},\tau })$ to $S_{-}>S^{\ast }$, we use Lemma \ref{asymptotic} to introduce the function, \begin{equation} V_{2}^{-}({S_{-},\tau })=\left[ (S_{-}-D)-Ke^{-r(T-\tau )}\right] \chi _{\lbrack S^{\ast },\infty )}(S_{-}) \label{eq:V2-} \end{equation} By Lemma \ref{asymptotic}, $V_{2}^{-}({S_{-},\tau })$ bounds $V(S_{-},\tau )$ from below. Therefore, we have: \begin{lemma} \label{lower} The function $V(S_{-},\tau )$ has the lower bound, \[ V(S_{-},\tau )\geq V_{1}^{-}(S_{-},\tau )+V_{2}^{-}(S_{-},\tau ),\ \ \hbox{if}\ \ S_{-}> D \] where $V_{1}^{-}$ and $V_{2}^{-}$ are given by (\ref{eq:V1-}) and (\ref{eq:V2-}), respectively, and the function $(V_{1}^{-}+V_{2}^{-})$ is piecewise linear and non-negative. If $S_{-}\le D$, $V(S_{-},\tau )=0$. \end{lemma} Finally, we can state our main result: \begin{theorem} \label{main} We consider the Black-Scholes equation (\ref{eq:pde}) together with the terminal condition (\ref{eq:fronteira}). We assume that $K>0$ and that a dividend $D>0$ is paid at the time $\tau $ with $0<\tau <T$. Let $S=S^{\ast }>D$ be a fixed constant and let $M\geq 1$ be an integer. 
Then, the solution of the Black-Scholes equation with terminal condition (\ref{eq:fronteira}) has the following upper and lower bounds: \begin{eqnarray*} V(S,0)\leq V_{S^{\ast },M}^{+}(S,0)&=&\sum\limits_{i=1}^{M}\left\{ \alpha _{i}A_{i}S+e^{-r\tau }\left[ V(S_{i-1},\tau )-\alpha _{i}S_{i-1}\right] B_{i}\right\} \\ &+&SN(d^{\ast })+e^{-r\tau }\left[ V(S^{\ast },\tau )-S^{\ast }\right] N(d^{\ast }-{\sigma \sqrt{\tau }}) \end{eqnarray*} and \begin{eqnarray*} V(S,0)\geq V_{S^{\ast },M}^{-}(S,0)&=&S\sum\limits_{i=1}^{M}V^{\prime }\left( S_{i+\frac{1}{2}},\tau \right) A_{i} \\ &+&e^{-r\,\tau }\sum\limits_{i=1}^{M}\left[ V\left( S_{i+{\frac{1}{2}}},\tau \right) -V^{\prime }\left( S_{i+{\frac{1}{2}}},\tau \right) S_{i+{\frac{1}{2}}}\right] B_{i} \\ &+&SN(d^{\ast })-e^{-r\tau }\left( D+Ke^{-r\left( T-\tau \right) }\right) N\left( d^{\ast }-\sigma \sqrt{\tau }\right) \end{eqnarray*} where, \begin{eqnarray*} S_{i} &=&D+{\frac{S^{\ast }-D}{M}}\,i \\ d_{i} &=&{\frac{\log S-\log S_{i}+(r+{\frac{1}{2}}\sigma ^{2})\tau }{\sigma \sqrt{\tau }}} \\ d &=&{\frac{\log (S-D)-\log K+(r+{\frac{1}{2}}\sigma ^{2})(T-\tau )}{\sigma \sqrt{T-\tau }}} \\ d^{\ast } &=&{\frac{\log S-\log S^{\ast }+(r+{\frac{1}{2}}\sigma ^{2})\tau }{\sigma \sqrt{\tau }}} \\ V(S,\tau ) &=&(S-D)N(d)-Ke^{-r(T-\tau )}N(d-{\sigma \sqrt{T-\tau }}) \\ V^{\prime }\left( {S,\tau }\right) &=&N(d)+{\frac{e^{-{\frac{1}{2}}d^{2}}}{\sigma \sqrt{2\pi (T-\tau )}}}-{\frac{K\,e^{-r\left( T-\tau \right) }\,e^{-{\frac{1}{2}}\left( d-\sigma \sqrt{T-\tau }\right) ^{2}}}{\sigma \sqrt{2\pi (T-\tau )}\left( S-D\right) }} \\ \alpha _{i} &=&{\frac{M}{S^{\ast }-D}}\left[ V(S_{i},\tau )-V(S_{i-1},\tau )\right] \\ A_{i} &=&N(d_{i-1})-N(d_{i}) \\ B_{i} &=&N(d_{i-1}-{\sigma \sqrt{\tau }})-N(d_{i}-{\sigma \sqrt{\tau }}) \end{eqnarray*} and $N(\cdot )$ is the cumulative distribution function for the normal distribution with mean zero and unit variance. 
\end{theorem} \emph{Proof.} By Lemmata \ref{upper} and \ref{lower}, \[ V_{1}^{-}({S_{-},\tau })+V_{2}^{-}({S_{-},\tau })\leq V({S_{-},\tau })\leq V_{1}^{+}({S_{-},\tau })+V_{2}^{+}({S_{-},\tau }),\ \ \hbox{if}\ \ S_{-}> D \] Multiplying this inequality by the factors in the integral (\ref{eq:solBSwithD}) and integrating, we obtain the estimates of the theorem. \hfill $\Box$ Note that, for $S^{\ast }>D$ fixed, $\lim_{M\to\infty} V_{S^{\ast },M}^{-}(S,0)\neq \lim_{M\to\infty} V_{S^{\ast },M}^{+}(S,0)$. However, if $S^{\ast }$ is large enough, both limits can be made arbitrarily close. Technically, this is due to the way the exponential term in (\ref{eq:solBS}) contributes to the integral. \section{Calculating the price of a call option on a stock paying a discrete dividend} Theorem \ref{main} is the necessary tool to determine the price of a call option when the underlying asset pays a discrete known dividend before the maturity time $T$. In fact, Theorem \ref{main} asserts that we can always find upper and lower bound functions for $V(S,0)$, and the bounding functions approach each other as we increase $M$ and $S^{\ast }$. \begin{figure} [htbp] \centerline{\includegraphics[width=14 true cm]{fig2.eps}} \caption{Bounds $V^+_{S^*,M}(S,0)$ and $V^-_{S^*,M}(S,0)$ for $V(S,0)$, calculated from Theorem \ref{main}, for several values of $S^*$ and $M$. In a) we have chosen $S^*=D+ Ke^{-r(T-\tau)}=103.5$. In b), $S^*=2(D+ Ke^{-r(T-\tau)})=207.0$. Parameter values are: $\mu =0.01$, $\sigma =0.2$, $r=0.03$, $K=100$, $D=5$, $T=1$ and $\tau=0.5$.} \label{fig:bounds} \end{figure} To determine the price of the option, we first choose fixed values for the approximation parameters $S^{\ast }$ and $M$. If $V_{S^{\ast },M}^{+}(S,0)$ and $V_{S^{\ast },M}^{-}(S,0)$ differ by more than some fixed precision, we then increase $S^{\ast }$ and $M$. 
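Since the bounds in Theorem \ref{main} are finite sums of Black-Scholes type terms, they can be evaluated directly. The Python sketch below implements the formulas of the theorem literally (the helper names \texttt{bounds}, \texttt{V}, \texttt{Vp} are ours, not from the paper); with the parameter values used in this section and $S=110$, $S^{\ast}=155.3$, $M=200$, it should bracket the exact price $12.87$ reported in Table \ref{tb:t1}.

```python
# Evaluation of the upper and lower bounds of Theorem "main".
# Helper names are illustrative; N(.) is the standard normal cdf.
from math import exp, log, pi, sqrt
from statistics import NormalDist

N = NormalDist().cdf

def bounds(S, S_star, M, K=100.0, D=5.0, r=0.03, sigma=0.2, T=1.0, tau=0.5):
    st, sT = sigma * sqrt(tau), sigma * sqrt(T - tau)

    def d_of(Sm):          # d as a function of the pre-dividend price S_-
        return (log(Sm - D) - log(K) + (r + 0.5 * sigma ** 2) * (T - tau)) / sT

    def V(Sm):             # option value V(S_-, tau) just after the dividend
        if Sm <= D:
            return 0.0
        d = d_of(Sm)
        return (Sm - D) * N(d) - K * exp(-r * (T - tau)) * N(d - sT)

    def Vp(Sm):            # derivative V'(S_-, tau) from the theorem
        d = d_of(Sm)
        phi = lambda x: exp(-0.5 * x * x) / sqrt(2.0 * pi)
        return (N(d) + phi(d) / sT
                - K * exp(-r * (T - tau)) * phi(d - sT) / (sT * (Sm - D)))

    dS = (S_star - D) / M
    S_i = [D + i * dS for i in range(M + 1)]
    d_i = [(log(S) - log(Si) + (r + 0.5 * sigma ** 2) * tau) / st for Si in S_i]
    d_star = d_i[-1]       # S_M = S^*
    A = [N(d_i[i - 1]) - N(d_i[i]) for i in range(1, M + 1)]
    B = [N(d_i[i - 1] - st) - N(d_i[i] - st) for i in range(1, M + 1)]
    alpha = [(V(S_i[i]) - V(S_i[i - 1])) / dS for i in range(1, M + 1)]

    # chord-based upper bound
    upper = sum(a * Ai * S + exp(-r * tau) * (V(Sl) - a * Sl) * Bi
                for a, Ai, Bi, Sl in zip(alpha, A, B, S_i[:-1]))
    upper += S * N(d_star) + exp(-r * tau) * (V(S_star) - S_star) * N(d_star - st)

    # tangent-based lower bound, tangency points S_{i+1/2} as in the theorem
    mid = [D + (i + 0.5) * dS for i in range(1, M + 1)]
    lower = (S * sum(Vp(m) * Ai for m, Ai in zip(mid, A))
             + exp(-r * tau) * sum((V(m) - Vp(m) * m) * Bi
                                   for m, Bi in zip(mid, B))
             + S * N(d_star)
             - exp(-r * tau) * (D + K * exp(-r * (T - tau))) * N(d_star - st))
    return lower, upper
```

For example, \texttt{bounds(110.0, 155.3, 200)} should return a pair straddling $12.87$, with an interval error of order $10^{-2}$ or smaller.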
\bigskip \begin{table} \caption{\label{tb:t1} Bounds $V_{S^{\ast },M}^{+}(S,0)$ and $V_{S^{\ast },M}^{-}(S,0)$ for $V(S,0)$, calculated from Theorem \ref{main}, for several values of $S^{\ast }$ and $M$, and $S=110$. The exact value $V(S,0)$ has been obtained by the numerical integration of (\ref{eq:solBSwithD}). The interval error $\varepsilon$ is given by (\ref{eq:error}). Parameter values are the same as in Fig. \ref{fig:bounds}, and we have chosen $S^{\ast }=D+Ke^{-r(T-\protect\tau )}=103.5$, $S^{\ast }=1.5(D+Ke^{-r(T-\protect\tau )})=155.3$ and $S^{\ast }=2(D+Ke^{-r(T-\protect\tau )})=207.0$.} \begin{center} \begin{tabular}{ccccccc} \hline $S$ & $S^{\ast }$ & $M$ & $V_{S^{\ast },M}^{-}(S,0)$ & $V(S,0)$ & $V_{S^{\ast },M}^{+}(S,0)$ & $\varepsilon$\\ \hline 110 & 103.5 & 10 & 11.24 & 12.87 & 15.41 & 4.166 \\ 110 & 103.5 & 50 & 11.61 & 12.87 & 15.35 & 3.739\\ 110 & 103.5 & 400 & 11.63 & 12.87 & 15.35 & 3.721\\ & & & & & &\\ 110 & 155.3 & 10 & 11.39 & 12.87 & 13.20 & 1.807 \\ 110 & 155.3 & 50 & 12.79 & 12.87 & 12.88 & 0.096\\ 110 & 155.3 & 200 & 12.87 & 12.87 & 12.87 & 0.006\\ 110 & 155.3 & 400 & 12.87 & 12.87 & 12.87 & 0.002\\ & & & & & &\\ 110 & 207.0 & 10 & 10.64 & 12.87 & 13.45 & 2.813\\ 110 & 207.0 & 50 & 12.72 & 12.87 & 12.89 & 0.170\\ 110 & 207.0 & 200 & 12.86 & 12.87 & 12.87 & 0.011\\ 110 & 207.0 & 400 & 12.87 & 12.87 & 12.87 & 0.003\\ \hline \end{tabular} \end{center} \end{table} To analyze the convergence of the functional bounds $V^{+}$ and $V^{-}$ to the true price of a call option, we take, as an example, the parameters: $\mu =0.01$ (drift), $\sigma =0.2$ (volatility), $r=0.03$ (interest rate), $K=100$ (strike price), $D=5$ (dividend), $T=1$ (expiration time) and $\tau =0.5$ (time of the dividend payment). In Fig. \ref{fig:bounds}, we show $V_{S^{\ast },M}^{+}(S,0)$ and $V_{S^{\ast },M}^{-}(S,0)$, for several values of $S^{\ast }$ and $M$, calculated from Theorem \ref{main}. 
Increasing $M$ and $S^*$, the upper and lower bounds $V_{S^{\ast },M}^{+}(S,0)$ and $V_{S^{\ast },M}^{-}(S,0)$ approach each other, increasing the accuracy with which the functional bounds approximate the option price. To quantify this approximation to the value of the option, we define the interval error as, \begin{equation} \varepsilon=|V_{S^{\ast },M}^{+}(S,0)-V_{S^{\ast },M}^{-}(S,0)| \label{eq:error} \end{equation} In Table \ref{tb:t1}, we compare the values of the upper and lower bounds $V^+_{S^*,M}(S,0)$ and $V^-_{S^*,M}(S,0)$, calculated from Theorem \ref{main}, with the exact value of $V(S,0)$, obtained by the numerical integration of (\ref{eq:solBSwithD}). We also show the interval error $\varepsilon$ associated with both bounds. Assuming an interval error below the smallest unit of the monetary currency, for example, $\varepsilon<10^{-2}$, we obtain the true value of the option. Therefore, for a choice of $S^*$ and $M$ such that $\varepsilon<10^{-2}$, the difference between $V^+_{S^*,M}(S,0)$ and $V^-_{S^*,M}(S,0)$ is below the smallest unit of the monetary currency, and the rounded values of $V^+_{S^*,M}(S,0)$ and $V^-_{S^*,M}(S,0)$ coincide. This rounded value is the option value within the chosen monetary accuracy. To analyze the global convergence behavior of $V^+_{S^*,M}(S,0)$ and $V^-_{S^*,M}(S,0)$, we choose a fixed value of $S$ and vary the approximation parameters $S^{\ast }$ and $M$. In Fig. \ref{fig:Md}, we show $V_{S^{\ast },M}^{+}(S,0)$ and $V_{S^{\ast },M}^{-}(S,0)$ as a function of $S^{\ast }$, for several values of $M$. Increasing $M$, the upper and lower bounds of $V(S,0)$ become close in a region of the $S^{\ast }$ axis. A choice of $S^{\ast }$ in this region gives better bounds for the value of the option, even for lower values of $M$ (Table \ref{tb:t1} and Fig. \ref{fig:Md}). For all the examples we have analyzed, a good compromise to determine the value of the call option is to choose $S^{\ast}=2(D+Ke^{-r(T-\tau )})$. 
Then, increasing $M$, the interval error decreases. Due to the fast computational convergence of the expressions in Theorem \ref{main}, bounds with interval error below the smallest unit of the monetary currency are straightforwardly obtained. \begin{figure} [htbp] \centerline{\includegraphics[width=10 true cm]{fig3.eps}} \caption{Bounds $V_{S^{\ast },M}^{+}(S,0)$ and $V_{S^{\ast },M}^{-}(S,0)$ as a function of $S^{\ast }$, for $S=110$ and several values of $M$. The parameter values are the same as in Fig. \ref{fig:bounds} and Table \ref{tb:t1}. } \label{fig:Md} \end{figure} \section{Concluding remarks} We have obtained an upper and a lower bound for the exact value of a call option on a stock paying a known discrete dividend at a known future time. We have assumed the context of a Black-Scholes economy, where, away from the dividend payment time, the underlying asset price follows a geometric Brownian motion type stochastic process. The upper and lower bounds both approach the exact value of the option when two parameters are varied. In practical terms, one of these parameters ($S^{\ast }$) can be fixed to the value $S^{\ast }=2\left( D+Ke^{-r(T-\tau )}\right) $, where $K$ is the strike, $D$ is the dividend, $\tau $ is the time at which the discrete dividend is paid, and $T$ is the length of the contract. Increasing the second parameter $M$, we obtain bounds for the option value with increasing accuracy. If this accuracy is below the smallest unit of the monetary currency, both bounds coincide, and we obtain the exact value of the option. The technique used to construct these bounds relies on the convexity properties of the option value at maturity, and on a property of the Black-Scholes and diffusion equations that preserves the convexity of propagated initial conditions. Under this framework, a similar methodology can be used to determine the value of a put option on a stock paying a known discrete dividend at a known future time. 
From the numerical point of view, the technique developed here reduces to the sum of a few Black-Scholes type terms, whereas numerical Monte Carlo methods rely on the poor convergence properties determined by the classical central limit theorem. In our numerical tests for the determination of the exact price of a call option, the computing time of our technique (using the Mathematica programming language) is several orders of magnitude shorter than the computing time of finite differences integration algorithms and of Monte Carlo methods. \section*{Acknowledgments} We would like to thank Faisal Al-Sharji for the careful testing of the results presented here. This work has been partially supported by Funda\c c\~ao para a Ci\^encia e a Tecnologia (Portugal), under a plurianual funding grant.
\section{Introduction} \label{sec1} We consider a stylized model for a network of~$N$ users sharing a wireless medium according to a random-access scheme. The network is represented by an undirected graph~$G = (V, E)$, called \textit{conflict graph}. The set of vertices~$V = \{1, \dots, N\}$ describes the network users and the set of edges~$E \subseteq V \times V$ indicates which pairs of users interfere and are thus prevented from simultaneous activity. The independent sets of~$G$ (sets of vertices not sharing any edge) then correspond to the feasible joint activity states of the network. A user is said to be \textit{blocked} whenever the user itself or any of its neighbors in~$G$ is active, and \textit{unblocked} otherwise. User~$i$ activates (starts a transmission) at an exponential rate~$\nu_i$ whenever it is unblocked, and then remains active for an exponentially distributed time period with unit mean, before turning inactive again. The durations of the various activity periods are assumed independent across time and among users. We will refer to the parameters $\nu_i$ as \textit{activation rates}. Let~$\Omega^* \subseteq \{0, 1\}^V$ be the collection of incidence vectors of all independent sets of~$G$, and let~$X^*(t) \in \Omega^*$ be the joint activity state at time~$t$, with element $i$ of $X^*(t)$ indicating whether user~$i$ is active ($X_i^*(t)=1$) or not ($X_i^*(t)=0$) at time~$t$. Then $\{\xst \}_{t \geq 0}$ is a reversible Markov process with stationary distribution~\cite{BKMS87,Kelly79,Kelly85,KBC87,WK05} \eqn{\label{eq:statt1} \pi_x(\nu_1, \dots, \nu_N) = \lim_{t \to \infty} \pr{X^*(t) = x} = \frac{\prod_{i \in V} \nu_i^{x_i}}{\sum_{y \in \Omega^*} \prod_{i \in V} \nu_i^{y_i}}, \quad x \in \Omega^*. } We also mention that the model amounts to a special instance of a loss network~\cite{SR04,ZZ99}, and that the product-form distribution~\eqref{eq:statt1} corresponds to the Gibbs measure of the hard-core model in statistical physics~\cite{GS08,H97}. 
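For a small conflict graph, the product form~\eqref{eq:statt1} can be verified by direct enumeration. In the Python sketch below, the path conflict graph and the rates $\nu_i$ are arbitrary illustrative choices:

```python
# Enumerate the independent sets of a small conflict graph and evaluate
# the product-form distribution (eq:statt1).  Graph and rates are arbitrary.
from itertools import product

V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4)]             # path conflict graph 1-2-3-4
nu = {1: 2.0, 2: 1.0, 3: 1.0, 4: 2.0}   # activation rates nu_i

def is_independent(x):                    # x = (x_1, ..., x_N) in {0,1}^V
    return all(not (x[i - 1] and x[j - 1]) for i, j in E)

Omega = [x for x in product((0, 1), repeat=len(V)) if is_independent(x)]

def weight(x):                            # prod_i nu_i^{x_i}
    w = 1.0
    for i, xi in zip(V, x):
        w *= nu[i] ** xi
    return w

Z = sum(weight(x) for x in Omega)         # normalization constant
pi = {x: weight(x) / Z for x in Omega}    # stationary distribution
```

Activating a single unblocked user~$i$ multiplies the stationary weight by $\nu_i$, which is the detailed-balance relation underlying the reversibility of the process.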
For the case $\nu_i=\nu$ it follows from \eqref{eq:statt1} that only the activity states corresponding to \textit{maximum} independent sets retain probability mass as the activation rate~$\nu$ grows large. This indicates that users that do not belong to a maximum independent set have far fewer opportunities to be active. This disadvantage is commonly referred to as \textit{spatial unfairness}, and the associated starvation effects have major performance repercussions in wireless networks. It has been shown that spatial unfairness can be avoided by selecting suitable user-specific activation rates~$\nu_i$ which provide all users with an equal opportunity to be active in the long run~\cite{JW10,VJLB11}. Even in those cases, however, or in symmetric scenarios where spatial fairness is automatically ensured, transient yet significant starvation effects can arise due to extremely slow transitions between high-likelihood or \textit{dominant} states. Intuitively speaking, the activity process will typically need to pass through a low-likelihood or bottleneck state in order to transit between dominant states. Visiting such a bottleneck state basically involves the occurrence of a rare event, or even several rare events in different limiting regimes, and causes the transition to take a correspondingly long amount of time. Consequently, users may experience extended stretches of forced inactivity (possibly interspersed by long intervals with a rapid succession of activity periods), resulting in serious performance degradation. Motivated by these fairness issues, we investigate in the present paper the time for the Markov process to reach, starting from a given dominant state, one of the other dominant states. We study these hitting times as well as mixing properties in the asymptotic regime where the activation rates~$\nu_i$ grow large. 
This asymptotic regime, in which users activate aggressively, is relevant in highly loaded networks and gives rise to the above-described starvation effects. As shown numerically in~\cite{KKA12}, these starvation effects are particularly pronounced in dense topologies. As a prototypical worst-case scenario, we focus on a specific class of dense conflict graphs, namely complete partite graphs. In such networks the users can be partitioned into $K$~disjoint sets called \textit{components}, such that each user interferes with all users in all other components. This implies that a transition from an activity state in one of the components to another component entails passing through a bottleneck state where all users are inactive at some point. Based on this observation and a regenerative argument, we establish a geometric-sum representation for the hitting time, which we then use to obtain the asymptotic order-of-magnitude and scaled distribution. For convenience we assume all users within a given component to have the same activation rate, but we do allow for users in different components to have different activation rates. Section~\ref{sec2} presents a detailed model description and Section~\ref{sec3} gives an overview of the main results. Preliminary results for the case $\nu_i = \nu$ for all $i \in V$ appeared in~\cite{ZBvL12}, but did not exhibit the full qualitative range of asymptotic behaviors that will be revealed in the present paper. \section{Model description} \label{sec2} Consider a network represented by a \textit{complete partite graph}, where two users interfere if and only if they belong to different components. Thus, in particular, users within the same component do not interfere. Denote by $C_1,\dots,C_K$ the $K$ components of $G$ and define $L_k:=|C_k|$ as the size of component $C_k$. Note that the components $C_1, \dots, C_K$ are the $K$ \textit{maximal} independent sets of the graph $G$. 
Moreover, component $C_k$ corresponds to a \textit{maximum} independent set if and only if $L_k \geq L_j$ for all $j=1,\dots,K$. Figure~\ref{fig:conflictgraph} shows an example of such a dense conflict graph, where $K=5$ and the components have sizes $\{ L_1, L_2, L_3, L_4, L_5\}=\{3,4,6,2,5\}$. The corresponding state space $\Omega^*$ for this graph is shown in Figure~\ref{fig:statespace}. \begin{figure}[!hb] \centering \includegraphics[scale=0.3]{ckg.eps} \caption{Example of complete $K$-partite conflict graph with $K=5$.} \label{fig:conflictgraph} \end{figure} We assume that the exponential rate at which a user activates depends only on a global aggressiveness parameter $\nu$ and on the component it belongs to, namely \[ \nu_i = f_k(\nu) \quad \text{ if } i \in C_k, \] for some monotone function $f_k: \mathbb R_+ \to \mathbb R_+$ with $\lim_{\ninf} f_k(\nu) = \infty$. We will refer to the function $f_k(\cdot)$ as the \textit{activation rate} of component $C_k$, for $k=1,\dots,K$. In view of symmetry, all states with the same number of active users in a given component can be aggregated, and we only need to keep track of the number $l$ of active users, if any, and the index $k$ of the component $C_k$ they belong to. This state aggregation yields a new Markov process $\{\xt \}_{t \geq 0}$ on a star-shaped state space~$\Omega$ with $K$~branches, where each branch emanates from a common root node and describes one of the components of the conflict graph. Figure~\ref{fig:star} shows the aggregated state space corresponding to the previous example. 
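The complete partite structure drastically prunes the joint state space: an independent set can contain users from at most one component, so $|\Omega^*| = 1 + \sum_{k=1}^{K}(2^{L_k}-1)$, while the aggregated chain lives on only $1+\sum_{k=1}^{K} L_k$ states. A quick enumeration check in Python (the component sizes below are small illustrative choices, not those of the figure):

```python
# Count the feasible joint activity states of a complete K-partite
# conflict graph by brute force and compare with the closed-form count.
# Component sizes are illustrative.
from itertools import product

L = [2, 3, 2]                                  # component sizes L_1, ..., L_K
comp = [k for k, Lk in enumerate(L) for _ in range(Lk)]
n = len(comp)                                  # number of users

def is_independent(x):                         # users in different components conflict
    return all(not (x[i] and x[j])
               for i in range(n) for j in range(i + 1, n)
               if comp[i] != comp[j])

n_states = sum(1 for x in product((0, 1), repeat=n) if is_independent(x))
n_aggregated = 1 + sum(L)                      # root state plus the K branches
```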
\begin{figure}[!hb] \centering \subfigure[State space $\Omega^*$]{\includegraphics[scale=0.36]{ckp.eps}\label{fig:statespace}} \subfigure[Aggregated state space $\Omega$]{\includegraphics[angle=181,origin=c,scale=0.32]{star.eps}\label{fig:star}} \caption{State space $\Omega^*$ and aggregated state space $\Omega$, for the conflict graph in Figure~\ref{fig:conflictgraph}.} \end{figure} For $k=1,\dots, K$, let \[ \mathcal{B}_k:=\{(k,l) : 1 \leq l \leq L_k\} \] denote the branch of the state space $\Omega$ that corresponds to activity inside component $C_k$, where state $(k,l)$ indicates $l$~users active in component $C_k$. Then $\Omega = \{0\} \cup \bigcup_{k=1}^K \mathcal{B}_k$, where~$0$ is the bottleneck state in which all users are inactive. The transition rates of the process $\{\xt \}_{t \geq 0}$ then read \eqan{ q(0, (k, 1)) &= L_k f_k(\nu),\nonumber \\ q((k,1), 0) &= 1, \nonumber \\ q((k,l), (k,l+1)) &= (L_k - l) f_k(\nu), \quad l = 1, \dots, L_k - 1, \, k = 1, \dots, K,\nonumber \\ q((k,l), (k,l-1)) &= l, \quad l = 2, \dots, L_k, \, k = 1, \dots, K. \nonumber } The stationary distribution of the process $\{\xt \}_{t \geq 0}$ is easily seen to be \eqan{ \label{eq:sd} \pi_0(\nu) &= \Big(1 + \sum_{k = 1}^{K} \sum_{l = 1}^{L_k} \bin{L_k}{l} f_k(\nu)^l\Big)^{-1}, \nonumber \\ \pi_{(k,l)}(\nu) &= \pi_0(\nu) \bin{L_k}{l} f_k(\nu)^l, \quad l = 1, \dots, L_k, \, k = 1, \dots, K. } The state $(k, L_k)$ corresponds to the maximum activity state inside component $C_k$, which becomes the most likely state within the branch $\mathcal{B}_k$ as $\nu \to \infty$. Define the \textit{transition time} from state $(k_1,l_1)$ to state $(k_2,l_2)$ as \[ T_{(k_1,l_1),(k_2,l_2)}(\nu):=\inf\{t > 0: X(t) = (k_2,l_2) ~|~ X(0) = (k_1,l_1)\}. \] We now introduce a few parameters that will turn out to play a key role in the asymptotic distribution of the transition time. 
Define for $k\neq k_2$, \eqn{\label{eq:defgk} \gamma_k := \lim_{\ninf} \frac{f_k(\nu)^{L_k}}{\sum_{j\neq k_2} f_j(\nu)^{L_j}}. } To avoid technicalities, we assume throughout that all parameters $\gamma_k$ are well defined. In view of~\eqref{eq:sd}, $\gamma_k$ may be interpreted as the stationary fraction of time that the activity process spends in branch $\mathcal{B}_k$ as $\nu \to \infty$, excluding the target branch $k_2$. As it turns out, $\gamma_k$ also equals the fraction of time that the activity process spends in branch $\mathcal{B}_k$ during the transition time as $\nu \to \infty$. Branch $\mathcal{B}_k$ is called \textit{dominant} if $\gamma_k >0$, and we let $K_* :=\{k \neq k_2 : \gamma_k>0\}$ denote the index set of all \textit{dominant branches}. Note that, by construction, the set $K_*$ is never empty, and thus there is always at least one dominant branch. \section{Main results} \label{sec3} In this section we present our main results, which are all related to the asymptotic behavior of the transition time $T_{(k_1,l_1),(k_2,l_2)}(\nu)$ in the asymptotic regime of a large activation rate $\nu$. Our first result characterizes the asymptotic order of magnitude of the mean transition time in terms of the activation rates and the network structure. For any two real-valued functions $f(\cdot)$ and $g(\cdot)$, let $f(\nu) \sim g(\nu)$ indicate that $\lim_{\ninf} f(\nu) / g(\nu) = 1$. \begin{thm}\label{thm:thm1} If $k_1\neq k_2$, then \eqn{ \label{eq:agr} \mathbb E T_{(k_1, l_1), (k_2, l_2)}(\nu) \sim \frac{1}{L_{k_1}} f_{k_1}(\nu)^{L_{k_1}-1} + \frac{1}{L_{k_2} f_{k_2}(\nu)} \sum_{k \in K_*} f_k(\nu)^{L_k}, \quad \text{ as } \nu \to \infty. 
} \end{thm} The first term on the right-hand side of~\eqref{eq:agr} corresponds to the asymptotic mean \textit{escape time} $\mathbb E T_{(k_1,l_1),0} (\nu)$ from the initial branch $\mathcal{B}_{k_1}$, while the second term describes the contribution of the mean time spent visiting dominant branches, possibly including branch $\mathcal{B}_{k_1}$ as well. Let \eqn{\label{eq:alpha} \alpha:=\lim_{\ninf} \frac{\mathbb E T_{(k_1,l_1), 0} (\nu)}{\mathbb E T_{(k_1,l_1),(k_2,l_2)}(\nu)} \in [0,1] } denote the relative weight of $\mathcal{B}_{k_1}$. Our second result gives the asymptotic distribution of the transition time $T_{(k_1,l_1),(k_2,l_2)}(\nu)$ scaled by its mean as $\nu \to \infty$. \begin{thm}\label{thm:thm2} If $k_1\neq k_2$, then \[ \frac{T_{(k_1,l_1),(k_2,l_2)}(\nu)}{ \mathbb E T_{(k_1,l_1),(k_2,l_2)} (\nu)} \xrightarrow{d} Z, \quad \text{ as } \nu \to \infty. \] \end{thm} The random variable $Z$ can be expressed as \[Z \,{\buildrel d \over =}\, \alpha Y + (1-\alpha) W, \] where the random variable $Y$ is exponentially distributed with unit mean and the random variable $W$ is independent of $Y$ and has a more complicated distribution, see~\eqref{eq:LTW}, which depends on the sizes and activation rates of the dominant branches only. The possible distributions of $Z$ are summarized in Table~\ref{tab:overview} in Section~\ref{sec5}. In several cases the distribution of $Z$ is exponential, which may be expected in view of the connection with many exponentiality results for the occurrence of rare events \cite{A82,AB92,AB93,GR05,K66,K79}. In addition, we identify various cases that lead to \textit{non}-exponentiality, typically due to the fact that the activity process spends a substantial period in branches other than $k_1$ and $k_2$. Our third result concerns the starvation phenomenon. 
For $k=1, \dots, K$, define the random variable \[ \tau_k(t):=\int_0^t I_{\{X(s) \in \mathcal{B}_k \}} ds, \] which measures how much time the activity process $\{\xt \}_{t \geq 0}$ spends in branch $\mathcal{B}_k$ during the interval $[0,t]$. We can think of $\tau_k(t)$ as a measure of the \textit{throughput} of component $C_k$ over the time interval $[0,t]$. We speak of \textit{complete starvation} or \textit{zero throughput} of component $C_k$ in $[0,t]$ when $\tau_k(t)=0$. The next theorem provides insight into the time scales at which throughput starvation occurs for a component of the network. \begin{thm}\label{thm:thm3} Assume $X(0)=(k_1,l_1)$ and $k_2 \neq k_1$. If $t(\nu) \sim \omega \mathbb E T_{(k_1,l_1),(k_2,1)}(\nu)$, with $\omega \in \mathbb R_+ \cup \{0\}$, then \eqn{\label{eq:starvation} \lim_{\ninf} \pr{\tau_{k_2}(t(\nu)) = 0} \geq \pr{Z \geq \omega}. } In particular, if $ t(\nu)=o(\mathbb E T_{(k_1,l_1),(k_2,1)}(\nu))$, then \[ \lim_{\ninf} \pr{\tau_{k_2}(t(\nu)) = 0} =1, \] i.e.~all users in $C_{k_2}$ have zero throughput for a period of length $t(\nu)$ with probability one as $\nu \to \infty$. \end{thm} The inequality in~\eqref{eq:starvation} says that, even if there is long-term fairness among the components, for large values of $\nu$ all users in $C_{k_2}$ will face starvation on all time scales smaller than the mean transition time from the initial component to $C_{k_2}$. For the Markov process at hand, slow transitions and starvation effects are intimately related with the mixing time. The mixing time of a process is a characterization of the time required for the process to reach equilibrium. Indeed, due to the complete partite structure of the conflict graph, the process is bound to get stuck in one of the dominant branches, leading to slow convergence to equilibrium. In Section~\ref{sec8} we define the mixing time in terms of the total variation distance from stationarity, and prove a lower bound for a large enough activation parameter $\nu$. 
This lower bound (see Proposition \ref{prop:mix}) indicates that the mixing time of the process is at least as large as the mean escape time from the dominant branch, which establishes a direct connection between transition times and mixing times. \subsection*{Structure of the paper} The remainder of the paper is organized as follows. In Section~\ref{sec4} we study the activity process within a single component, where it behaves as a birth-and-death process, bringing the asymptotic behavior within the realm of classical results. In Section~\ref{sec5} we then leverage these results in conjunction with a geometric-sum representation to prove both Theorems~\ref{thm:thm1} and~\ref{thm:thm2}. In Section~\ref{sec6} we sketch how the approach extends to scenarios where some of the users within the same component may interfere as well, relying on the same geometric-sum representation, but using more general asymptotic exponentiality results in~\cite{GR05} for the single-component behavior. We prove Theorem~\ref{thm:thm3} and a complementary result for throughput ``near-saturation'' in Section~\ref{sec7}. In Section~\ref{sec8} we derive a lower bound on the mixing time. Lastly, in Section~\ref{sec9} we make some concluding remarks and sketch some directions for further research. \section{Hitting times within a single branch} \label{sec4} We first present a few results for the case where the two states $(k_1,l_1)$ and $(k_2,l_2)$ belong to the same branch, i.e.~$k_1 = k_2$, and $l_1 > l_2$. In this case, the presence of the other components does not affect the transition time, and hence we focus on a single branch, dropping the component index until further notice. Within a single component of size $L$, the process $\{\xt \}_{t \geq 0}$ evolves as an elementary birth-and-death process on the state space $\{L, L-1, \dots, 1,0\}$, so we can exploit several classical results for such processes. 
If we denote by $f(\nu)$ the activation rate for this component as a function of $\nu$, then the transition rates read \eqan{ q(l,l+1)&=(L-l) f(\nu), \quad l=0,\dots , L-1, \nonumber \\ q(l,l-1)&=l, \quad l=1,\dots , L. \nonumber } \subsection{Asymptotic growth rate} We first show how the mean transition time scales with the aggressiveness parameter $\nu$. \begin{prop}\label{prop:meansc} For $L \geq l_1 > l_2 \geq 0$, \[ \mathbb E T_{l_1,l_2}(\nu) \sim \frac{l_2! (L-l_2-1)! }{L!} f(\nu)^{L-l_2-1}, \quad \text{ as } \nu \to \infty. \] \end{prop} \begin{proof} First observe that $\mathbb E T_{l_1,l_2}(\nu) = \sum_{l=l_2+1}^{l_1} \mathbb E T_{l,l-1}(\nu)$, so we can exploit a general result for birth-and-death processes~\cite{K65}, which in the present case says that, for $0 < l \leq L$, \[ \mathbb E T_{l,l-1}(\nu) = \frac{1}{l} \sum_{n=l}^{L} \frac{\pi_n(\nu)}{\pi_l(\nu)}. \] Now~\eqref{eq:sd} implies that $\pi_n(\nu) = o(\pi_L(\nu))$ as $\nu \to \infty$ for all $n = l, \dots, L - 1$, so that \[ \mathbb E T_{l,l-1}(\nu) \sim \frac{1}{l} \frac{\pi_{L}(\nu)}{\pi_{l}(\nu)} =\frac{(l-1)! (L-l)!}{L!} f(\nu)^{L-l}, \quad \text{ as } \nu \to \infty. \] Thus $\mathbb E T_{l,l-1}(\nu) = o(\mathbb E T_{l_2+1,l_2}(\nu))$ as $\nu \to \infty$ for all $l = l_2+2,\dots, l_1$, and hence $\mathbb E T_{l_1,l_2}(\nu) \sim \mathbb E T_{l_2+1,l_2}(\nu)$ as $\nu \to \infty$, and the result follows. \end{proof} In order to gain insight into starvation effects, we are particularly interested in the time for the activity process to reach the center state~0, referred to as the \textit{escape time}, because at such points in time users in other components have an opportunity to activate. Proposition~\ref{prop:meansc} shows that \eqan{ \label{eq:sdp} \mathbb E T_{l_1,0}(\nu) \sim \frac{1}{L} f(\nu)^{L-1}, \quad \text{ as } \nu \to \infty. 
} Hence, the mean escape time grows asymptotically as a power of~$f(\nu)$, with the exponent equal to the component size minus one, and independent of the starting state~$l_1$. \subsection{Asymptotic exponentiality} We now turn to the scaled escape time, and show that it has an asymptotically exponential distribution. We will leverage the following well-known result for birth-and-death processes, which is commonly attributed to Keilson~\cite{K71} or Karlin and McGregor~\cite{KMcG59}. \begin{thm} \label{thm:km} Consider a birth-and-death process with generator matrix~$Q$ on the state space $\{0,\dots,L\}$ started at state~$L$. Assume that $0$ is an absorbing state, and that the other birth rates $\{\lambda_i\}_{i=1}^{L-1}$ and death rates $\{\mu_i\}_{i=1}^L$ are positive. Then the absorption time in state~$0$ is distributed as the sum of $L$~independent exponential random variables whose rate parameters are the $L$~nonzero eigenvalues of $-Q$. \end{thm} Let $Q(\nu)$ be the generator matrix of the birth-and-death process $\{\xt \}_{t \geq 0}$ on the state space $\{L,L-1,\dots,1,0\}$, with $0$ an absorbing state. Let $\{\theta_i(\nu)\}_{i=1}^L$ denote the non-zero eigenvalues of $-Q(\nu)$. It is known~\cite{LR54} that these eigenvalues are distinct, real and strictly positive, so we denote $0 < \theta_1(\nu) < \theta_2(\nu) < \dots < \theta_L(\nu)$. Theorem~\ref{thm:km} gives \eqn{\label{eq:sumexp} T_{L,0}(\nu) \,{\buildrel d \over =}\, \sum_{i=1}^{L} Y_i(\nu), } with $Y_1(\nu),\dots,Y_L(\nu)$ independent and exponentially distributed random variables with $\mathbb E Y_i(\nu)=1/\theta_i(\nu)$. The following lemma relates the growth rates of the eigenvalues as $\nu \to \infty$ to the mean escape time $\mathbb E T_{L,0}(\nu)$. \begin{lem} \label{lem:asymptotic1} $\lim_{\ninf} \theta_i(\nu) \cdot \mathbb E T_{L,0}(\nu) = 1$ if $i=1$ and $\infty$ if $i=2,\dots, L$. 
\end{lem} The proof of Lemma~\ref{lem:asymptotic1} is presented in~\ref{ap1}, and exploits detailed information about the growth rates of the eigenvalues obtained via symmetrization and the Gershgorin circle theorem. Lemma~\ref{lem:asymptotic1} not only shows that the smallest eigenvalue $\theta_1(\nu)$ becomes dominant as $\nu \to \infty$, but also yields the asymptotic exponentiality of the escape time. Indeed, denoting by $\mathcal{L}_{X}(s)=\mathbb E (e^{-s X})$, with $\mathrm{Re}(s)>0 $, the Laplace transform of a random variable $X$,~\eqref{eq:sumexp} gives \[ \mathcal{L}_{T_{L,0}(\nu)/ \mathbb E T_{L,0}(\nu)}(s)=\prod_{i=1}^L \Big(1+\frac{s}{\theta_i(\nu) \cdot \mathbb E T_{L,0}(\nu)} \Big)^{-1}. \] Lemma~\ref{lem:asymptotic1} implies that \[ \lim_{\ninf} \mathcal{L}_{T_{L,0}(\nu) / \mathbb E T_{L,0}(\nu)}(s) = \frac{1}{1+s}. \] The continuity theorem for Laplace transforms then yields that the scaled escape time has an asymptotically exponential distribution as stated in the next theorem, where $\mathrm{Exp}(\lambda)$ denotes an exponentially distributed random variable with mean $1/\lambda$. \begin{thm} \label{thm:expo} \[ \frac{T_{L,0}(\nu)}{\mathbb E T_{L,0}(\nu)} \xrightarrow{d} \mathrm{Exp}(1), \quad \text{ as } \nu \to \infty. \] \end{thm} This result can be understood as follows. For large~$\nu$, the probability of hitting state~0 before the first return to state~$L$ becomes small. So the time $T_{L,0}(\nu)$ consists of a geometrically distributed number of excursions from~$L$ which return to~$L$ without hitting~0, followed by the remaining part of the excursion that hits~0. Hence, apart from this final part, $T_{L,0}(\nu)$ is the sum of a large geometrically distributed number of i.i.d.~random variables, which is indeed expected to be asymptotically exponential. The fact that the time until the first occurrence of a rare event is asymptotically exponential is a widely observed phenomenon~\cite{K79}.
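These asymptotics are easy to probe numerically. The following sketch (an illustration with parameter values of our own choosing; it uses the stationary weights $\pi_n(\nu) \propto \binom{L}{n} f(\nu)^n$ that detailed balance gives for the rates above) evaluates the exact mean escape time from the hitting-time formula in the proof of Proposition~\ref{prop:meansc}, compares it with~\eqref{eq:sdp}, and checks that the smallest eigenvalue of $-Q(\nu)$ satisfies $\theta_1(\nu) \cdot \mathbb E T_{L,0}(\nu) \approx 1$, as Lemma~\ref{lem:asymptotic1} predicts.

```python
import numpy as np
from math import comb

L, f = 4, 200.0  # branch size and activation rate f(nu); illustrative values

# Stationary weights pi_n ∝ C(L, n) f^n (detailed balance for the rates above)
pi = np.array([comb(L, n) * f**n for n in range(L + 1)])

# Exact E T_{l,l-1} = (1/l) * sum_{n=l}^{L} pi_n / pi_l, summed over l = 1,...,L
ET_down = [pi[l:].sum() / (l * pi[l]) for l in range(1, L + 1)]
ET_L0 = sum(ET_down)              # exact mean escape time E T_{L,0}
ET_asym = f**(L - 1) / L          # asymptotic value from (eq:sdp)

# Generator restricted to the transient states {1,...,L}; state 0 is absorbing
Q = np.zeros((L, L))
for i, l in enumerate(range(1, L + 1)):
    if l < L:
        Q[i, i + 1] = (L - l) * f       # activation l -> l+1
    if l > 1:
        Q[i, i - 1] = l                 # deactivation l -> l-1
    Q[i, i] = -((L - l) * f + l)        # total exit rate (incl. absorption at 0)
theta = np.sort(np.linalg.eigvals(-Q).real)

print(ET_L0 / ET_asym)    # close to 1 for large f
print(theta[0] * ET_L0)   # close to 1: the smallest eigenvalue dominates
```

By Theorem~\ref{thm:km} the exact mean also equals $\sum_i 1/\theta_i(\nu)$, so the second printed ratio exceeds $1$ only by the relative weight of the faster eigenvalues.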
Exponentiality of the hitting time of some subset~$B$ of the state space typically arises when the probability of hitting~$B$ in a single regenerative cycle is `small', and the cycle lengths are `not too heavy tailed'~\cite{GR05,K79}. This is also true for our situation, and hence an alternative proof of Theorem~\ref{thm:expo} can be obtained using~\cite[Thm.~1]{GR05} (which is a generalized version of~\cite{K66}). We do not use the probabilistic approach in~\cite{GR05} here, because the special case of a birth-and-death process allows for explicit analysis. However, in Section~\ref{sec6} we will discuss how this probabilistic approach can be exploited when the individual components have a more general structure and cannot be described by birth-and-death processes. Let us finally remark that for reversible Markov processes similar exponentiality results were established in \cite{A82}-\cite{AB93}. Aldous~\cite{A82} showed that a result like Theorem~\ref{thm:expo} can be expected when the underlying Markov process converges rapidly to stationarity. This is indeed the case for the Markov process $\{\xt \}_{t \geq 0}$ restricted to a single branch. To extend Theorem~\ref{thm:expo} to the case of a general starting state $0 < l \leq L$, we need the following technical lemma, whose proof is given in~\ref{ap2}. \begin{lem} \label{lem:asymbounds} Let $T(\nu), U(\nu),$ $V(\nu),W(\nu)$ be non-negative random variables. Consider the properties \begin{itemize} \item[{\rm (i)}] $\lim_{\ninf}\mathbb E V(\nu)/ \mathbb E U(\nu) =\lim_{\ninf} \mathbb E W(\nu)/ \mathbb E U(\nu)= 0$. \item[{\rm (ii)}] For every $\nu > 0$, $U - V \le_{st} T \le_{st} U + W$, i.e.~for every $ t > 0$, \[\pr{U - V > t} \leq \pr{T > t} \leq \pr{U + W > t}.\] \item[{\rm (iii)}] $U(\nu) /\mathbb E U(\nu) \xrightarrow{d} Z$ as $\nu \to \infty$, where $Z$ is a continuous random variable independent of~$\nu$. 
\end{itemize} Then, \begin{itemize} \item[{\rm (a)}] If {\rm (i)} and {\rm (ii)} hold, then $\lim_{\ninf} \mathbb E T(\nu)/\mathbb E U (\nu) =1.$ \item[{\rm (b)}] If {\rm (i)}, {\rm (ii)} and {\rm (iii)} hold, then $T(\nu) / \mathbb E T(\nu) \xrightarrow{d} Z$, as $\nu \to \infty.$ \end{itemize} \end{lem} \begin{prop} \label{prop:ael0} For any $ 0 < l \leq L$, \[ \frac{T_{l,0}(\nu)}{\mathbb E T_{l,0}(\nu)} \xrightarrow{d} \mathrm{Exp}(1), \quad \text{ as } \nu \to \infty. \] \end{prop} \begin{proof} The birth-and-death structure of the process and the strong Markov property yield the stochastic identity $T_{L,0}(\nu) \,{\buildrel d \over =}\, T_{L,l}(\nu)+T_{l,0}(\nu)$, which gives the stochastic bounds $ T_{L,0}(\nu) - T_{L,l}(\nu) \leq_{\mathrm{st}} T_{l,0}(\nu) \leq_{\mathrm{st}} T_{L,0}(\nu) $ (the two terms in the lower bound being dependent). It follows from Theorem~\ref{thm:expo} that $T_{L,0}(\nu)/ \mathbb E T_{L,0}(\nu) \xrightarrow{d} \mathrm{Exp}(1)$ as $\nu \to \infty$. In order to complete the proof, we can then use Lemma~\ref{lem:asymbounds}, taking $U(\nu)=T_{L,0}(\nu)$, $V(\nu)=T_{L,l}(\nu)$ and $W(\nu)=0$. The condition which needs to be checked is $\lim_{\ninf} \mathbb E V(\nu)/\mathbb E U(\nu) = 0$, which follows directly from Proposition~\ref{prop:meansc}. \end{proof} \subsection{More general coefficients and applications} \label{subsec:gencoef} We can extend our analysis to more general activation and deactivation dynamics inside a single branch, described by \eqan{ &q(l, l+1) = a_l f(\nu), \quad l = 1, \dots, L-1,\nonumber \\ &q(l, l-1) = d_l, \quad l = 1, \dots, L, \nonumber } where $a_l,d_l$ are positive real coefficients. Specifically, Proposition~\ref{prop:meansc} can be generalized to the following result. For $L \geq l_1 > l_2 \geq 0$, \eqn{\label{eq:genrates} \mathbb E T_{l_1,l_2}(\nu) \sim \frac{1}{d_{l_2+1}} \Big( \prod_{i=l_2+1}^{L-1} \frac{a_i}{d_{i+1}} \Big) f(\nu)^{L-l_2-1}, \quad \text{ as } \nu \to \infty.
} Also Lemma~\ref{lem:asymptotic1} and thus Proposition~\ref{prop:ael0} can be shown to hold for these more general rates (see~\ref{ap1}). These results for general coefficients have some interesting applications, beyond the model considered in this paper. One example is the continuous-time Markov process $\{M_t\}_{t \geq 0}$ on $S=\{0,1,\dots, c\}$, describing the number of busy servers at time $t$. Suppose that the service rate of each server is $1$ and the arrival rate is $\nu$, which grows large in a heavy-traffic regime. The escape time $T_{s,0}(\nu)$, obtained by choosing $a_n=1$ and $d_n=n$, $n=1,\dots,c$, then describes the time it takes for this system to \textit{drain} (i.e.~to have all the servers idle) when starting with $s \geq 1$ busy servers. Then~\eqref{eq:genrates} gives $\mathbb E T_{s,0}(\nu) \sim \nu^{c-1} / c!$ as $\nu \to \infty$, which does not depend on the starting state $s \geq 1$. The scaled drain time obeys $ \frac{T_{s,0}(\nu)}{\mathbb E T_{s,0}(\nu)} \xrightarrow{d} \mathrm{Exp}(1)$ as $\nu \to \infty$. \section{Proofs of Theorems~\ref{thm:thm1} and~\ref{thm:thm2}} \label{sec5} In this section we investigate the asymptotic behavior of the transition time $T_{(k_1,l_1), (k_2,l_2)}(\nu)$ as $\nu \to \infty$ for any pair of states $(k_1,l_1)$ and $(k_2,l_2)$, with $k_1\neq k_2$. In Subsection~\ref{sec51} we provide a stochastic representation of the transition time, which we use to derive the asymptotic mean transition time in Subsection~\ref{sec52}, leading to Theorem~\ref{thm:thm1}. In Subsection~\ref{sec53} we obtain the asymptotic distribution of the scaled transition time, leading to Theorem~\ref{thm:thm2}. In Subsection~\ref{sec54} we consider in detail the random variable $W$ that occurs in Theorem~\ref{thm:thm2}. We give an overview of all possible forms of asymptotic behavior and the conditions under which they occur in Subsection~\ref{sec55}.
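As a numerical companion to the drain-time example of Subsection~\ref{subsec:gencoef}, the following sketch (our own illustrative parameters; \texttt{mean\_drain\_time} is a helper we introduce here) evaluates the exact mean drain time via the birth-and-death hitting-time formula with Erlang-type stationary weights $\pi_n \propto \nu^n/n!$, and compares it with the asymptotic value $\nu^{c-1}/c!$.

```python
from math import factorial

def mean_drain_time(c, nu, s):
    """Exact mean time to empty the system (all servers idle) starting from
    s busy servers, for birth rate nu (a_n = 1) and death rates d_n = n."""
    # Stationary weights pi_n ∝ nu^n / n!
    pi = [nu**n / factorial(n) for n in range(c + 1)]
    # E T_{n,n-1} = (1/d_n) * sum_{m=n}^{c} pi_m / pi_n, with d_n = n
    return sum(sum(pi[n:]) / (n * pi[n]) for n in range(1, s + 1))

c, nu, s = 3, 500.0, 2
exact = mean_drain_time(c, nu, s)
asym = nu**(c - 1) / factorial(c)  # asymptotic drain time, independent of s >= 1
print(exact / asym)                # close to 1 for large nu
```

Running the same computation with different starting states $s \geq 1$ confirms that the leading-order drain time does not depend on $s$.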
\subsection{Stochastic representation of the transition time} \label{sec51} Consider the evolution of the process as it makes a transition from a state $(k_1,l_1)$ to a state $(k_2,l_2)$ and define the following random variables: \begin{itemize} \item $T_{(k_1, 1), 0}^{(0)}(\nu)$: time to reach state~$0$ after state $(k_1, 1)$ is visited for the first time; \item $N_k (\nu)$: number of times the process makes a transition $0 \to (k, 1)$, $k \neq k_2$, before the first transition $0 \to (k_2, 1)$ occurs; \item $\hat{T}_{0, (k, 1)}^{(i)}(\nu)$: time spent in state~0 before the $i$-th transition to state $(k, 1)$, $k \neq k_2$, $i = 1, \dots, N_k(\nu)$; \item $\hat{T}_{0, (k_2, 1)}(\nu)$: time spent in state~$0$ before the first transition to state $(k_2, 1)$; \item $T_{(k, 1), 0}^{(i)}(\nu)$: time to return to state~$0$ after the $i$-th transition to state $(k, 1)$, $k \neq k_2$, $i = 1, \dots, N_k(\nu)$. \end{itemize} With the above definitions, it is readily seen that the following stochastic representation holds. \begin{prop} The transition time $T_{(k_1,l_1),(k_2,l_2)}$ can be represented as \eqn{ \label{eq:kpartiterepr} T_{(k_1,l_1),(k_2,l_2)} \,{\buildrel d \over =}\, T_{(k_1, l_1), (k_1, 1)} + T_{(k_1, 1), 0}^{(0)} + \sum_{k \neq k_2} \sum_{i = 1}^{N_k} \left(\hat{T}_{0, (k, 1)}^{(i)} + T_{(k, 1), 0}^{(i)}\right) + \hat{T}_{0, (k_2, 1)} + T_{(k_2, 1), (k_2, l_2)}, } where the dependence on the parameter~$\nu$ is suppressed for compactness and all the random variables representing time durations are mutually independent as well as independent of the random variables $N_k(\nu)$, $k \neq k_2$. \end{prop} Denote $F(\nu) = \sum_{k = 1}^{K} L_k f_k(\nu)$.
The random variables $T_{(k,1), 0}^{(i)}$ are i.i.d.~copies of $T_{(k,1), 0}$, $i = 1, \dots, N_k(\nu)$, $k \neq k_2$, while the random variables $\hat{T}_{0, (k_2, 1)}$ and $\hat{T}_{0, (k, 1)}^{(i)}$, $k \neq k_2$, $i = 1, \dots, N_k$, are i.i.d.~copies of $T_0 \,{\buildrel d \over =}\, \mathrm{Exp}(F(\nu))$, which is the residence time in state~0. Write $X \,{\buildrel d \over =}\, \mathrm{Geo}(p)$ when $X$ is a random variable with geometric distribution~$\pr{X=n}=p(1-p)^n$, $n \in \mathbb N \cup \{0\}$. Define the random variable $N (\nu):= \sum_{k \neq k_2} N_k (\nu)$, counting the total number of entrances into branches other than $k_2$ before hitting the target branch $\mathcal{B}_{k_2}$. For all $k = 1, \dots, K$, denote $p_k(\nu) := L_k f_k(\nu) / F(\nu)$. Obviously, \[ N(\nu) \,{\buildrel d \over =}\, \mathrm{Geo}(p_{k_2}(\nu)), \] while the marginal distribution of $N_k(\nu)$ is $\mathrm{Geo} (\frac{p_{k_2}(\nu) }{ p_{k_2}(\nu)+p_k(\nu)})$. We want to distinguish the branches that significantly affect the dynamics of the process (and hence the transition time) from those that do not. The quantity $ \mathbb E N_k(\nu) \cdot \mathbb E T_{(k, 1), 0}(\nu)$, for $k\neq k_2$, is the mean time that the process spends in branch $\mathcal{B}_k$ along the transition $0\to(k_2,l_2)$. Note that Proposition~\ref{prop:meansc} gives \eqn{\label{eq:met} \mathbb E T_{(k,1), 0}(\nu) \sim \frac{1}{L_k} f_k(\nu)^{L_k-1}, \quad \text{ as } \nu \to \infty, } and that \eqn{\label{eq:enk} \mathbb E N_k(\nu) = \frac{p_k(\nu) }{ p_{k_2}(\nu) }= \frac{L_k f_k(\nu) }{ L_{k_2} f_{k_2}(\nu)}. } Therefore \[ \frac{\mathbb E N_k(\nu) \cdot \mathbb E T_{(k, 1), 0}(\nu)}{\mathbb E N_j(\nu) \cdot \mathbb E T_{(j, 1), 0}(\nu)} \sim \frac{f_k(\nu)^{L_k}}{ f_j(\nu)^{L_j}}, \quad \text{ as } \nu \to \infty, \] which shows that indeed only the visits to dominant branches asymptotically contribute to the mean transition time.
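The following sketch makes this comparison concrete for a hypothetical three-branch example (sizes and rates are our own choices; \texttt{mean\_escape} evaluates the exact $\mathbb E T_{(k,1),0}(\nu)$ from the birth-and-death hitting-time formula): the ratio of mean sojourn times $\mathbb E N_k(\nu) \cdot \mathbb E T_{(k,1),0}(\nu)$ is already close to the predicted $f_k(\nu)^{L_k} / f_j(\nu)^{L_j}$ at moderate rates.

```python
import numpy as np
from math import comb

def mean_escape(Lk, fk):
    """Exact E T_{(k,1),0} for a branch of size Lk and activation rate fk,
    using pi_n ∝ C(Lk, n) fk^n and E T_{1,0} = sum_{n>=1} pi_n / pi_1."""
    pi = np.array([comb(Lk, n) * fk**n for n in range(Lk + 1)])
    return pi[1:].sum() / pi[1]

# Hypothetical example: three branches, target branch k2 = 3
Ls = {1: 3, 2: 2, 3: 2}
fs = {1: 100.0, 2: 500.0, 3: 10.0}
F = sum(Ls[k] * fs[k] for k in Ls)
p = {k: Ls[k] * fs[k] / F for k in Ls}       # probability of entering branch k
EN = {k: p[k] / p[3] for k in (1, 2)}        # mean visits before entering branch 3
sojourn = {k: EN[k] * mean_escape(Ls[k], fs[k]) for k in (1, 2)}

ratio = sojourn[1] / sojourn[2]
predicted = fs[1]**Ls[1] / fs[2]**Ls[2]
print(ratio, predicted)   # the two agree up to lower-order corrections
```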
\subsection{Asymptotic mean transition time} \label{sec52} We present here the proof of Theorem~\ref{thm:thm1}. Consider the stochastic representation~\eqref{eq:kpartiterepr} of the transition time $T_{(k_1, l_1), (k_2, l_2)}(\nu)$. Proposition~\ref{prop:meansc} implies that \[ \mathbb E T_{(k_1,l_1),(k_1,1)} (\nu) \sim \frac{1}{L_{k_1} (L_{k_1}-1)} f_{k_1}(\nu)^{L_{k_1}-2}, \quad \text{ as } \nu \to \infty, \] and that \[ \mathbb E T_{(k_1,1),0} (\nu) \sim \frac{1}{L_{k_1}} f_{k_1}(\nu)^{L_{k_1}-1}, \quad \text{ as } \nu \to \infty. \] Hence $\mathbb E T_{(k_1,l_1),(k_1,1)}(\nu)=o\left(\mathbb E T_{(k_1,1),0} (\nu) \right)$ as $\nu \to \infty$. Moreover $\mathbb E \hat{T}_{0, (k, 1)}(\nu)=o(1)$ as $\nu \to \infty$. Lemma~\ref{lem:fifthterm} below implies that $\mathbb E T_{(k_2, 1), (k_2, l_2)}(\nu) = o\left(\mathbb E T_{(k_1,l_1),(k_2,l_2)}(\nu)\right)$. The asymptotic relation~\eqref{eq:agr} then follows using the definition of $K_*$, the asymptotic estimate~\eqref{eq:met} and the identity~\eqref{eq:enk}. The following lemma guarantees that once the process has entered the target branch $\mathcal{B}_{k_2}$, even though it may temporarily exit from it, the mean time it takes to reach the target state $(k_2,l_2)$ is negligible with respect to the mean overall transition time. \begin{lem}\label{lem:fifthterm} \[\mathbb E T_{(k_2, 1), (k_2, l_2)}(\nu) = o\left(\mathbb E T_{(k_1,l_1),(k_2,l_2)} (\nu) \right), \quad \text{ as } \nu \to \infty.\] \end{lem} \begin{proof} Consider the event \eqan{ \mathcal{E}(\nu)&=\{\text{the first } l_2-1 \text{ transitions are all towards the state }(k_2,l_2)\} \nonumber \\ &=\bigcap_{i=1}^{l_2-1} \{\text{the } i \text{-th transition is from }(k_2,i) \text{ to } (k_2,i+1)\}.
\nonumber } Exploiting the fact that all these events are independent, we can compute \eqan{ \pr{\mathcal{E}(\nu)}&= \prod_{i=1}^{l_2-1} \pr{ \text{the } i \text{-th transition is from }(k_2,i) \text{ to } (k_2,i+1)} \nonumber\\ &=\frac{(L_{k_2}-1)f_{k_2}(\nu)}{(L_{k_2}-1)f_{k_2}(\nu) +1}\cdot \frac{(L_{k_2}-2)f_{k_2}(\nu)}{(L_{k_2}-2)f_{k_2}(\nu) +2} \cdot \dots \cdot \frac{(L_{k_2}-l_2+1)f_{k_2}(\nu) }{(L_{k_2}-l_2+1)f_{k_2}(\nu) + (l_2-1)}, \nonumber } and clearly $\lim_{\ninf} \pr{\mathcal{E}(\nu)} = 1$. We have that \[ \mathbb E \{T_{(k_2, 1), (k_2, l_2)}(\nu)~|~ \mathcal{E}(\nu)\} = \sum_{m=1}^{l_2-1} \frac{1}{(L_{k_2}-m)f_{k_2}(\nu) + m}=:g(\nu), \] where $g(\nu)\downarrow 0$ as $\nu \to \infty$. Consider the events $\mathcal{E}^c_n(\nu)=\{ \text{the first transition towards state }0\text{ is the } n\text{-th one}\}$, for $n=1,\dots,l_2-1$. Note that the event $\mathcal{E}^c(\nu)$ can be decomposed as $\mathcal{E}^c(\nu)=\bigcup_{n=1}^{l_2-1} \mathcal{E}_n^c(\nu)$. Using the events $\mathcal{E}(\nu)$ and $\mathcal{E}^c_n(\nu)$, we can write \eqn{ \label{eq:reprA} \mathbb E T_{(k_2, 1), (k_2, l_2)}(\nu)=\mathbb E \{T_{(k_2, 1), (k_2, l_2)} (\nu) ~|~ \mathcal{E}(\nu) \} \pr{\mathcal{E} (\nu)}+\sum_{n=1}^{l_2-1} \mathbb E \{ T_{(k_2, 1), (k_2, l_2)} (\nu) ~|~ \mathcal{E}^c_n(\nu) \} \pr{\mathcal{E}^c_n(\nu)}. } Since \eqan{ \mathbb E \{T_{(k_2, 1), (k_2, l_2)}(\nu) ~|~ \mathcal{E}^c_n(\nu) \} &\leq \mathbb E \{T_{(k_2, 1), (k_2, n-1)}(\nu) ~|~ \mathcal{E}^c_n(\nu) \}+\mathbb E T_{(k_2, n-1), (k_2, l_2)}(\nu) \nonumber \\ &\leq \mathbb E \{T_{(k_2, 1), (k_2, n-1)}(\nu) ~|~ \mathcal{E}(\nu) \}+\mathbb E T_{0, (k_2, l_2)}(\nu) \nonumber \\ &\leq \mathbb E \{T_{(k_2, 1), (k_2, l_2)}(\nu) ~|~ \mathcal{E}(\nu) \}+\mathbb E T_{(k_1,l_1), (k_2, l_2)}(\nu), \nonumber } for $n=1, \dots, l_2-1$, it follows from~\eqref{eq:reprA} that \[ \mathbb E T_{(k_2, 1), (k_2, l_2)}(\nu) \leq g(\nu)+\mathbb E T_{(k_1,l_1), (k_2, l_2)}(\nu) \pr{\mathcal{E}^c (\nu)}.
\] We divide both sides by $\mathbb E T_{(k_1,l_1), (k_2, l_2)}(\nu)$, which is greater than~1 for $\nu$ sufficiently large, thanks to~\eqref{eq:sdp}. Since $g(\nu)$ and $\pr{\mathcal{E}^c (\nu)}$ are both $o(1)$, the proof of the lemma is complete. \end{proof} \subsection{Asymptotic distribution of the transition time} \label{sec53} We now turn to the proof of Theorem~\ref{thm:thm2}. Clearly, only the dominant branches, which asymptotically contribute to the expected magnitude of the transition time, play a role, possibly along with the escape time from the initial branch. As we will show, the various dominant branches may play different roles, depending on whether the expected number of visits during the transition time is zero, $O(1)$ or infinite in the limit as $\nu \to \infty$. We introduce \eqn{\label{eq:ab} A(\nu):=T_{(k_1,1),0}(\nu) \quad \text{ and } \quad B(\nu):=\sum_{k \in K_*} \sum_{i=1}^{N_k(\nu)}T_{(k,1),0}^{(i)}(\nu), } whose means correspond to the two terms at the right-hand side of~\eqref{eq:agr}. From the definition~\eqref{eq:alpha} of the coefficient $\alpha$ it follows that \[ \alpha=\lim_{\ninf} \frac{\mathbb E A (\nu)}{\mathbb E A(\nu) + \mathbb E B(\nu)}. \] When $\alpha=0$ the term $A(\nu)$ becomes asymptotically negligible compared to $B(\nu)$, while the opposite holds when $\alpha=1$. Proposition~\ref{prop:ael0} already describes the asymptotic behavior of $A(\nu)$ after scaling. We still need to understand the asymptotic behavior of $B(\nu)$, and for this purpose it is convenient to use a slightly different representation for it. Define $p_*(\nu):=\sum_{k \in K_*} p_k(\nu)$ and $\hat{p}(\nu):=\frac{p_*(\nu) }{ p_{k_2}(\nu) +p_*(\nu)}$. Introduce the random variable $N_*(\nu) \,{\buildrel d \over =}\, \mathrm{Geo}(1-\hat{p}(\nu))$, which represents the number of visits to the dominant branches before entering the target branch $\mathcal{B}_{k_2}$.
Introduce the sequence $(\tau^{(i)}(\nu))_{i \geq 1}$ of i.i.d.~random variables, $\tau^{(i)}(\nu) \,{\buildrel d \over =}\, \tau(\nu)$, where $\tau(\nu) \,{\buildrel d \over =}\, T_{(k,1),0}(\nu)$ with probability $p_k(\nu)/p_*(\nu)$ for every $k \in K_*$. Then \eqn{\label{eq:reprtau} B(\nu) \,{\buildrel d \over =}\, \sum_{i=1}^{N_*(\nu)} \tau^{(i)}(\nu). } For $k \in K_*$ we define \eqn{\label{eq:defbk} \beta_k:= \lim_{\ninf} \frac{L_k f_k(\nu)}{L_{k_2} f_{k_2}(\nu)}. } In view of~\eqref{eq:sd}, $\beta_k$ may be interpreted as the stationary ratio between the number of visits to branch $k$ and to branch $k_2$ as $\nu \to \infty$. Starting from branch $k_1 \neq k_2$, $\beta_k$ also represents the asymptotic mean number of visits to branch $\mathcal{B}_k$ before the first entrance in branch $\mathcal{B}_{k_2}$ as $\nu \to \infty$. To avoid technicalities, we henceforth assume that all the parameters $\beta_k$ are well defined. Moreover, we introduce the parameter $\beta:=\sum_{k \in K_*} \beta_k$, which is the asymptotic mean number of visits to dominant branches before hitting $\mathcal{B}_{k_2}$ as $\nu \to \infty$, i.e. $\beta=\lim_{\ninf} \mathbb E N_*(\nu)$. Based on the definition of the parameter $\beta_k$ in~\eqref{eq:defbk}, we partition the index set $K_*$ of the dominant branches into three subsets, namely \[ K_* = \mathcal{N} \cup \mathcal{A} \cup \mathcal{S}, \] using the following rule: \begin{itemize} \item[(i)] $k\in \mathcal{N}$ if $\beta_k=0$; \item[(ii)]$k \in \mathcal{A}$ if $\beta_k\in \mathbb R_+$; \item[(iii)] $k\in \mathcal{S}$ if $\beta_k=\infty$. \end{itemize} The branches in $\mathcal{N}$, $\mathcal{A}$ and $\mathcal{S}$ will be called \textit{non-attracting}, \textit{attracting} and \textit{strongly attracting}, respectively. 
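For activation rates of power-law form $f_k(\nu)=\nu^{a_k}$ (an illustrative choice, of the kind used again in Subsection~\ref{sec55}), the classification of a dominant branch reduces to comparing exponents, as the following sketch shows (the helper \texttt{classify} is ours):

```python
def classify(a_k, L_k, a_k2, L_k2):
    """Classify a dominant branch k via beta_k = lim L_k f_k(nu) / (L_k2 f_k2(nu))
    for f_k(nu) = nu^{a_k}: returns 'N', 'A' or 'S'."""
    if a_k < a_k2:
        return 'N'   # beta_k = 0: non-attracting
    if a_k == a_k2:
        return 'A'   # beta_k = L_k / L_k2 in (0, inf): attracting
    return 'S'       # beta_k = inf: strongly attracting

a_k2, L_k2 = 1.0, 2                                   # target branch
branches = {1: (0.5, 4), 2: (1.0, 3), 3: (1.5, 2)}    # k: (a_k, L_k)
labels = {k: classify(a, L, a_k2, L_k2) for k, (a, L) in branches.items()}
print(labels)   # {1: 'N', 2: 'A', 3: 'S'}
```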
Define moreover the coefficients $\gamma_{\mathcal{N}}:=\sum_{k \in \mathcal{N}}\gamma_k$, $\gamma_{\mathcal{A}}:=\sum_{k \in \mathcal{A}}\gamma_k$ and $\gamma_{\mathcal{S}}:=\sum_{k \in \mathcal{S}}\gamma_k$, with the parameters $\gamma_k$ as defined in~\eqref{eq:defgk}. We are now ready to present the proof of Theorem~\ref{thm:thm2}. Specifically, we prove that if $k_1\neq k_2$, $1\leq l_1 \leq L_{k_1}$ and $1\leq l_2 \leq L_{k_2}$, then \[ \frac{T_{(k_1,l_1),(k_2,l_2)}(\nu)}{ \mathbb E T_{(k_1,l_1),(k_2,l_2)} (\nu)} \xrightarrow{d} \alpha Y +(1-\alpha) W, \quad \text{ as } \nu \to \infty, \] where $\alpha$ is the constant defined in~\eqref{eq:alpha}, $Y$ is an exponential random variable with unit mean and $W$ is a random variable independent of $Y$, with Laplace transform \eqn{\label{eq:LTW} \mathcal{L}_W (s)=\frac{1}{\displaystyle 1+\sum_{k \in \mathcal{A}} \frac{\gamma_k s}{1 + \gamma_k s / \beta_k} + s \gamma_{\mathcal{S}}}. } The crucial idea of the proof is to use Lemma~\ref{lem:asymbounds} with the dominant term $U(\nu)$ defined as the sum of the two random variables introduced in~\eqref{eq:ab}, i.e.~$U(\nu):=A(\nu)+B(\nu)$. Theorem~\ref{thm:thm1} implies that $\mathbb E T_{(k_1,l_1),(k_2,l_2)} (\nu) \sim \mathbb E U(\nu)$ as $\nu \to \infty$ and its proof shows that all the other terms present in the stochastic representation~\eqref{eq:kpartiterepr} are negligible compared to $U(\nu)$. Note that \[ \frac{U(\nu)}{\mathbb E U(\nu)} = \frac{A(\nu)}{\mathbb E U(\nu)} + \frac{B(\nu)}{\mathbb E U(\nu)} = \frac{\mathbb E A(\nu)}{\mathbb E U(\nu)} \frac{A(\nu)}{\mathbb E A(\nu)} + \frac{\mathbb E B(\nu)}{\mathbb E U(\nu)} \frac{B(\nu)}{\mathbb E B(\nu)}. \] Recall that $A(\nu)$ and $B(\nu)$ are independent by construction. 
If we knew that there exist two random variables $Y$ and $W$ such that $A(\nu) /\mathbb E A(\nu) \xrightarrow{d} Y$ and $B(\nu) /\mathbb E B(\nu) \xrightarrow{d} W$ as $\nu \to \infty$, then \[ \frac{U(\nu)}{\mathbb E U(\nu)} \xrightarrow{d} \alpha Y + (1-\alpha) W, \quad \text{ as } \nu \to \infty, \] and Lemma~\ref{lem:asymbounds} would imply that \[ \frac{T_{(k_1,l_1),(k_2,l_2)} (\nu)}{\mathbb E T_{(k_1,l_1),(k_2,l_2)} (\nu)} \xrightarrow{d} \alpha Y + (1-\alpha) W, \quad \text{ as } \nu \to \infty. \] Proposition~\ref{prop:ael0} immediately gives that \[ \frac{A(\nu)}{\mathbb E A(\nu)} \xrightarrow{d} Y, \quad \text{ as } \nu \to \infty, \] where $Y$ is an exponential random variable with mean one. Thus it remains to establish that the random variable $B(\nu)/\mathbb E B(\nu)$ converges to $W$ in distribution. From the definition of $B(\nu)$ and~\eqref{eq:reprtau}, it follows that $\mathbb E B(\nu) =\mathbb E N_*(\nu) \mathbb E \tau(\nu)$ and that \eqn{\label{eq:LTB} \mathcal{L}_{B(\nu)/\mathbb E B(\nu)}(s)=G_{N_*(\nu)}\left(\mathcal{L}_{\tau(\nu)/\mathbb E B(\nu)}(s)\right)=G_{N_*(\nu)}\left(\mathcal{L}_{\tau(\nu)/\mathbb E \tau(\nu)}(s/\mathbb E N_*(\nu))\right), } where \eqn{\label{eq:genfunN} G_{N_*(\nu)}(z)=\mathbb E (z^{N_*(\nu)})=\frac{1}{1+(1-z)\mathbb E N_*(\nu)}. } We need to understand the asymptotic behavior of the random variable $\tau(\nu)/\mathbb E \tau(\nu)$. Let $T_k(\nu)=T_{(k,1),0}(\nu)$.
Then $\mathcal{L}_{\tau(\nu)}(s)=\sum_{k \in K_*} \frac{p_k(\nu)}{p_*(\nu)} \mathcal{L}_{T_k(\nu)}(s)$ and hence \eqan{ \mathcal{L}_{\tau(\nu)/\mathbb E \tau(\nu)}(s/\mathbb E N_*(\nu)) &= \mathcal{L}_{\tau(\nu)} \Big(\frac{s}{\mathbb E N_*(\nu) \mathbb E \tau(\nu)}\Big) \nonumber \\ &= \sum_{k \in K_*} \frac{p_k(\nu)}{p_*(\nu)} \mathcal{L}_{T_k(\nu)} \Big(\frac{s}{\mathbb E N_*(\nu) \mathbb E \tau(\nu)}\Big) \nonumber \\ &= \sum_{k \in K_*} \frac{\mathbb E N_k(\nu)}{\mathbb E N_*(\nu)} \mathcal{L}_{T_k(\nu)/ \mathbb E T_k(\nu)} \Big(\frac{s \mathbb E T_k(\nu)}{\mathbb E N_*(\nu) \mathbb E \tau(\nu)}\Big). \nonumber } For $k \in K_*$, define \eqn{\label{eq:gnu} h_k(\nu):=\frac{\mathbb E T_k(\nu)}{\mathbb E N_*(\nu) \mathbb E \tau(\nu)}, } and note that $\lim_{\ninf} h_k(\nu)= \gamma_k / \beta_k$. Indeed, \[ \lim_{\ninf} h_k(\nu) = \lim_{\ninf} \frac{\mathbb E T_k(\nu)}{\mathbb E N_*(\nu) \mathbb E \tau(\nu)}= \lim_{\ninf} \frac{\mathbb E N_k(\nu) \mathbb E T_k(\nu)}{\mathbb E N_*(\nu) \mathbb E \tau(\nu)} \frac{1}{\mathbb E N_k(\nu)}= \gamma_k \left (\lim_{\ninf} \mathbb E N_k(\nu)\right)^{-1}. \] Combining~\eqref{eq:LTB}-\eqref{eq:gnu} yields \eqan{ \label{eq:limBf} \mathcal{L}_{B(\nu)/\mathbb E B(\nu)}(s) &= \left [ 1+\Big(1- \mathcal{L}_{\tau(\nu)/\mathbb E \tau(\nu)}(s/\mathbb E N_*(\nu)) \Big) \mathbb E N_*(\nu) \right]^{-1} \nonumber \\ &= \left [ 1+\Big(1- \sum_{k \in K_*} \frac{\mathbb E N_k(\nu)}{\mathbb E N_*(\nu)} \mathcal{L}_{T_k(\nu)/ \mathbb E T_k(\nu)} \Big (\frac{s \mathbb E T_k(\nu)}{\mathbb E N_*(\nu) \mathbb E \tau(\nu)}\Big) \Big) \mathbb E N_*(\nu) \right]^{-1} \nonumber \\ &= \left [ 1+\Big(\mathbb E N_*(\nu) - \sum_{k \in K_*} \mathbb E N_k(\nu) \mathcal{L}_{T_k(\nu)/ \mathbb E T_k(\nu)} (s h_k(\nu)) \Big) \right]^{-1} \nonumber \\ &= \left [ 1+ \sum_{k \in K_*} \mathbb E N_k(\nu)\left ( 1- \mathcal{L}_{T_k(\nu)/ \mathbb E T_k(\nu)} (s h_k(\nu))\right ) \right]^{-1}.
} In order to obtain an explicit expression for $\mathcal{L}_{B(\nu)/\mathbb E B(\nu)}(s)$ as $\nu \to \infty$, we need the following technical lemma, which is proved in~\ref{ap3}. \begin{lem} \label{lem:lt} \begin{itemize} \item[{\rm (a)}] If $k \in \mathcal{S}$, then \[ \lim_{\ninf} \mathbb E N_k(\nu)\left (1- \mathcal{L}_{T_k(\nu)/ \mathbb E T_k(\nu)} (s h_k(\nu)) \right ) = \gamma_k s. \] \item[{\rm (b)}] If $k \in \mathcal{A}$, then \[ \lim_{\ninf} \mathbb E N_k(\nu)\left (1- \mathcal{L}_{T_k(\nu)/ \mathbb E T_k(\nu)} (s h_k(\nu)) \right ) = \frac{\gamma_k s}{1+\gamma_k s / \beta_k}. \] \item[{\rm (c)}] If $k \in \mathcal{N}$, then \[ \lim_{\ninf} \mathbb E N_k(\nu)\left (1- \mathcal{L}_{T_k(\nu)/ \mathbb E T_k(\nu)} (s h_k(\nu)) \right ) = 0. \] \end{itemize} \end{lem} From Lemma~\ref{lem:lt} and~\eqref{eq:limBf} it follows that \[ \mathcal{L}_W(s) = \lim_{\ninf} \mathcal{L}_{B(\nu)/\mathbb E B(\nu)}(s)=\left [ 1+\sum_{k \in \mathcal{A}} \frac{\gamma_k s}{1 + \gamma_k s / \beta_k} + \sum_{k \in \mathcal{S}} \gamma_k s \right ]^{-1}. \] The independence of $Y$ and $W$ easily follows from the independence of the corresponding terms in the stochastic representation~\eqref{eq:kpartiterepr}. \subsection{The random variable $W$: properties and interpretation} \label{sec54} The random variable $W$ is defined by its Laplace transform, see~\eqref{eq:LTW}. We remark that the shape of the distribution of $W$ is fully determined by the branches in $\mathcal{A}$ and $\mathcal{S}$, independently of the branches in $\mathcal{N}$. Indeed, the random variable $W$ can be represented as \[ W \,{\buildrel d \over =}\, (1-\gamma_{\mathcal{N}}) \overline{W}, \] where $\overline{W}$ is a unit-mean random variable that in no way depends on the parameters of the branches in the set $\mathcal{N}$.
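Differentiating~\eqref{eq:LTW} at $s=0$ gives $\mathbb E W = \sum_{k \in \mathcal{A}} \gamma_k + \gamma_{\mathcal{S}}$, which matches the factor $(1-\gamma_{\mathcal{N}})$ when the coefficients $\gamma_k$ sum to one over $K_*$. A minimal numerical check of this mean, with illustrative parameter values of our own choosing:

```python
# Parameters for two attracting branches and a strongly attracting remainder
gammas_A = {1: 0.2, 2: 0.3}   # gamma_k for k in A (illustrative)
betas_A = {1: 1.5, 2: 4.0}    # beta_k for k in A (illustrative)
gamma_S = 0.4                 # gamma_S (illustrative)

def L_W(s):
    """Laplace transform of W, as in (eq:LTW)."""
    denom = 1.0 + sum(g * s / (1.0 + g * s / betas_A[k])
                      for k, g in gammas_A.items()) + s * gamma_S
    return 1.0 / denom

h = 1e-6
EW = -(L_W(h) - L_W(0.0)) / h                # numerical -L_W'(0)
print(EW, sum(gammas_A.values()) + gamma_S)  # both close to 0.9
```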
On the other hand, the presence of the factor $(1-\gamma_{\mathcal{N}})$ reflects the fact that the branches in $\mathcal{N}$ do affect the mean of the asymptotic scaled transition time: convergence of the first moments holds if and only if $\alpha=1$ or $\mathcal{N}=\emptyset$. Indeed, \[ \alpha \mathbb E Y +(1-\alpha) \mathbb E W = \alpha + (1-\alpha)(1-\gamma_{\mathcal{N}}), \] and, if $\mathcal{N}\neq\emptyset$, then $\gamma_{\mathcal{N}} >0$ and so $\alpha \, \mathbb E Y +(1-\alpha) \, \mathbb E W<1$ when $\alpha \neq 1$. Whenever either $\mathcal{A}$ or $\mathcal{S}$ is empty, the distribution of $W$ is known explicitly, cf.~Table~\ref{tab:overview}. However, even in the scenario where both $\mathcal{A}$ and $\mathcal{S}$ are non-empty, it is still possible to give an interpretation of the distribution of $W$. If $\mathcal{A}\neq \emptyset$, define $m:=|\mathcal{A}|$ and label the branches belonging to $\mathcal{A}$ as $1, 2, \dots, m$. Let $\beta_{\mathcal{A}}:=\sum_{k=1}^m \beta_k \in (0,\infty)$ be the asymptotic mean number of visits to attracting branches as $\nu \to \infty$. Consider a hyper-exponentially distributed random variable $H$ with rates $\beta_k/\gamma_k$ and probabilities $\beta_k/\beta_{\mathcal{A}}$, $k=1,\dots,m$, whose Laplace transform is \[ \mathcal{L}_{H}(s)=\sum_{k=1}^m \frac{\beta_k}{\beta_{\mathcal{A}}} \frac{\beta_k/\gamma_k}{\beta_k/\gamma_k+s}. \] Furthermore, consider a marked Poisson process with rate $\lambda=\beta_{\mathcal{A}} / \gamma_{\mathcal{S}}$ and i.i.d.~marks distributed according to $H$. The random variable $W$ in Equation~\eqref{eq:LTW} corresponds to the sum of a random time $\mathcal T$, with $\mathcal T$ exponentially distributed with mean $1/\mu = \gamma_{\mathcal{S}}$, and the total size $\mathcal W(\mathcal T)$ of the marks associated with all the events in interval $[0,\mathcal T]$.
Indeed \begin{align*} \displaystyle \mathcal{L}_{\mathcal T + \mathcal W (\mathcal T)}(s) &= \displaystyle \int_{t =0}^\infty e^{-s t} e^{\lambda t \, \left (\sum_{k=1}^m \frac{\beta_k}{\beta_{\mathcal{A}}} \frac{\beta_k/\gamma_k}{\beta_k/\gamma_k+s} - 1\right )} \mu e^{-\mu t} d t = \left [ 1+\frac{\lambda}{\mu} \left (1-\sum_{k=1}^m \frac{\beta_k}{\beta_{\mathcal{A}}} \frac{\beta_k/\gamma_k}{\beta_k/\gamma_k+s} \right )+ \frac{s}{\mu} \right ]^{-1} \nonumber \\ & = \displaystyle \left [ 1+ \beta_{\mathcal{A}} \left (\sum_{k=1}^m \frac{\beta_k}{\beta_{\mathcal{A}}} \frac{s}{\beta_k/\gamma_k +s} \right )+ s \gamma_{\mathcal{S}} \right ]^{-1} = \left [ 1+\sum_{k \in \mathcal{A}} \frac{\gamma_k s}{1 + \gamma_k s / \beta_k} + s \gamma_{\mathcal{S}} \right ]^{-1} . \end{align*} The stochastic equality $W \,{\buildrel d \over =}\, \mathcal T + \mathcal W (\mathcal T)$ may be interpreted as follows. Define $p_{\mathcal{A}}:= \sum_{k \in \mathcal{A}} p_k(\nu)$ and $p_{\mathcal{S}}:= \sum_{k \in \mathcal{S}} p_k(\nu)$. The total number of visits during the transition time to the branches in $\mathcal{S}$ is geometrically distributed with parameter $p_{k_2} / p_{\mathcal{S}}$. Since the durations of these visits are independent and each relatively short compared to the transition time, the total normalized amount of time spent in the branches in $\mathcal{S}$ is exponentially distributed in the limit as $\nu \to \infty$ with mean $\gamma_{\mathcal{S}}$. The visits to the branches in $\mathcal{S}$ are interspersed with visits to the branches in $\mathcal{A}$. The number of visits to branches in $\mathcal{S}$ between two consecutive visits to branches in $\mathcal{A}$ is geometrically distributed with parameter $p_{\mathcal{A}} / p_{\mathcal{S}}$. The normalized durations of the visits to the branches in $\mathcal{A}$ have the hyper-exponential distribution $H$ as specified above.
By arguments similar to those above, the normalized amounts of time between these visits are independent and exponentially distributed in the limit as $\nu \to \infty$ with mean $\gamma_{\mathcal{S}} \cdot p_{k_2} / p_{\mathcal{A}} = \gamma_{\mathcal{S}} / \beta_{\mathcal{A}}$. In other words, the visits to the branches in $\mathcal{A}$ occur as a Poisson process with rate $\lambda = \beta_{\mathcal{A}} / \gamma_{\mathcal{S}}$. \subsection{An overview of the possible limiting distributions} \label{sec55} In this subsection we present an overview of all the possible limiting distributions of the scaled transition time by means of Table~\ref{tab:overview} and some simulation results to illustrate our findings. Denote by $(H_i)_{i \in \mathbb N}$ a sequence of i.i.d.~hyper-exponential random variables, $H_i \,{\buildrel d \over =}\, H$, while $\mathcal G$ is a geometric random variable $\mathcal G \,{\buildrel d \over =}\, \mathrm{Geo}\big (\frac{1}{1+\beta_{\mathcal{A}}}\big )$, independent of all the other random variables.
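The marked-Poisson interpretation above can be checked by simulation. The sketch below (parameter values are our own illustrative choices) samples $\mathcal T$, superposes a $\mathrm{Poisson}(\lambda \mathcal T)$ number of hyper-exponential marks, and compares the empirical Laplace transform of $\mathcal T + \mathcal W(\mathcal T)$ with~\eqref{eq:LTW}.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative parameters of our own choosing (not from the model)
betas = np.array([1.5, 4.0])    # beta_k, k in A
gammas = np.array([0.2, 0.3])   # gamma_k, k in A
gamma_S = 0.4                   # total weight of the strongly attracting branches
beta_A = betas.sum()
lam = beta_A / gamma_S          # Poisson rate of visits to branches in A
probs = betas / beta_A          # mixing probabilities of the marks H
rates = betas / gammas          # hyper-exponential rates beta_k / gamma_k

n = 200_000
T = rng.exponential(scale=gamma_S, size=n)   # Exp with mean gamma_S
counts = rng.poisson(lam * T)                # number of marks in [0, T]
comp = rng.choice(len(rates), size=counts.sum(), p=probs)
marks = rng.exponential(scale=1.0 / rates[comp])
W = T.copy()
np.add.at(W, np.repeat(np.arange(n), counts), marks)

s = 1.0
emp = np.exp(-s * W).mean()
theory = 1.0 / (1.0 + (gammas * s / (1.0 + gammas * s / betas)).sum() + s * gamma_S)
print(emp, theory)   # the two values agree up to Monte Carlo error
```

The empirical mean of the samples also reproduces $\mathbb E W = \sum_{k \in \mathcal{A}} \gamma_k + \gamma_{\mathcal{S}}$.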
\begin{table}[!hb] \centering {\renewcommand{\arraystretch}{1.5} \begin{tabular}{ |c|c|c|l|c|} \hline $\alpha$ & $\mathcal{A}$ & $\mathcal{S}$ & \hspace{3cm} Limiting distribution & Scenario\\ \hline \multirow{7}{*}{$0$} & $\emptyset$ & $\emptyset$ & $\delta_0$ \quad (trivial r.v.~identical to $0$) & 1a\\ \cline{2-5} & non-empty & $\emptyset$ & $\displaystyle \sum_{i=1}^{\mathcal G} H_i(p_1,\dots,p_m,\lambda_1,\dots,\lambda_m)$ &1b\\ & & & $\displaystyle \sum_{i=1}^{\mathcal G} \mathrm{Exp}_i(\lambda)$ \hspace{3.6cm} if $\beta_k/\gamma_k = \lambda \quad \forall \, k \in \mathcal{A}$ &1b*\\ \cline{2-5} & $\emptyset$ & non-empty & $\displaystyle \mathrm{Exp}(1/\gamma_{\mathcal{S}})$ &1c\\ \cline{2-5} & non-empty & non-empty & $\displaystyle W$ &1d\\ \hline \multirow{10}{*}{$(0,1)$} & $\emptyset$ & $\emptyset$ & $\displaystyle \mathrm{Exp}(1/\alpha)$ &2a\\ \cline{2-5} & & & $\displaystyle \mathrm{Exp}(1/\alpha) + \sum_{i=1}^{\mathcal G} H_i \Big (\frac{\beta_1}{\beta_{\mathcal{A}}},\dots,\frac{\beta_m}{\beta_{\mathcal{A}}},\frac{\beta_1}{(1-\alpha)\gamma_1},\dots,\frac{\beta_m}{(1-\alpha)\gamma_m} \Big )$ &2b\\ & non-empty & $\emptyset$ & $\displaystyle \mathrm{Exp}(1/\alpha)+ \sum_{i=1}^{\mathcal G} \mathrm{Exp}_i \Big (\frac{\lambda}{1-\alpha}\Big )$ \hspace{0.8cm} if $\beta_k/\gamma_k = \lambda \quad \forall \, k \in \mathcal{A}$ &2b*\\ & & & $\displaystyle \mathrm{Exp}\Big(\frac{1}{\alpha(1+\sum_{k=1}^m \beta_k)}\Big)$ \hspace{1.1cm} if $\displaystyle \beta_k/\gamma_k =\frac{1-\alpha}{\alpha} \quad \forall \, k \in \mathcal{A}$ &2b**\\ & & & $\displaystyle \mathrm{Exp}(1)$ \hspace{2.2cm} if $\displaystyle \beta_k/\gamma_k =\frac{1-\alpha}{\alpha}=\beta_{\mathcal{A}} \quad \forall \, k \in \mathcal{A}$ &2b***\\ \cline{2-5} & $\emptyset$ & non-empty & $\displaystyle \mathrm{Exp}(1/\alpha) + \mathrm{Exp} (1/ (1-\alpha)\gamma_{\mathcal{S}})$ &2c\\ & & & $\displaystyle \mathrm{Erlang}(2,1/\alpha)$ \hspace{3.8cm} if $\displaystyle \alpha=\gamma_{\mathcal{S}}/
(1+\gamma_{\mathcal{S}})$ &2c*\\ \cline{2-5} & non-empty & non-empty & $\displaystyle \mathrm{Exp}(1/\alpha )+(1-\alpha) W$ &2d\\ \hline $1$ & - & - & $\displaystyle \mathrm{Exp}(1)$ &3\\ \hline \end{tabular} } \caption{Overview of the possible asymptotic distributions of the scaled transition time.} \label{tab:overview} \end{table} The case $\alpha=1$ always yields asymptotic exponentiality: this happens when the escape time from branch $\mathcal{B}_{k_1}$ dominates the total transition time. As soon as $\alpha \neq 1$, the set of dominant branches starts to play an important role. In particular, the shape of the asymptotic distribution depends only on the branches in the sets $\mathcal{A}$ and $\mathcal{S}$ and changes substantially whenever one of these two subsets (or both) is empty. In the case $\alpha=0$ a diverse range of behaviors may occur, with asymptotic exponentiality arising only in the somewhat degenerate special case 1c. The behavior for $\alpha \in (0,1)$ is just a weighted combination of the extreme cases $\alpha=0$ and $\alpha=1$, as described in Theorem~\ref{thm:thm2}. It does not give rise to fundamentally different behavior, but, interestingly, it does yield asymptotic exponentiality in some very special cases. If all users have the same activation rate, no matter which component they belong to, then without loss of generality we may assume $f_k(\nu)=\nu$, $k=1,\dots,K$. Under this homogeneity assumption, the sizes of the components become crucial. Indeed, if one defines $L_*:=\max_{k \neq k_2} L_k$ to be the size of the largest component other than $C_{k_2}$, then $K_*=\{k \neq k_2 : L_k=L_* \}$.
In this case the orders of magnitude of the two dominant terms of the stochastic representation~\eqref{eq:kpartiterepr} are \[ \mathbb E A(\nu) \sim \frac{\nu^{L_{k_1}-1}}{L_{k_1}} \quad \text{ and } \quad \mathbb E B(\nu) \sim \frac{|K_*| \nu^{L_*-1}}{L_{k_2}}, \quad \text{ as } \nu \to \infty, \] and hence for $1 \leq l_1 \leq L_{k_1}$, $1\leq l_2 \leq L_{k_2}$ and $k_1 \neq k_2$, \[ \mathbb E T_{(k_1, l_1), (k_2, l_2)}(\nu) \sim \left(\frac{I_{\{k_1 \in K_*\}}}{L_*} + \frac{|K_*|}{L_{k_2}}\right) \nu^{L_* - 1}, \quad \text{ as } \nu \to \infty. \] Moreover, $\beta_k/\gamma_k = (1-\alpha)/\alpha$ for every $k \in \mathcal{A}$, and thus only two possible scenarios can occur, namely 1b* and 2b*** (see \cite{ZBvL12}). The discriminating factor between these two scenarios is $\alpha$. If $k_1 \notin K_*$, then $\alpha=0$ and thus we are in scenario 1b*. If instead $k_1 \in K_*$, then $\alpha= L_{k_2}/ (|K_*| L_*)$, which means that scenario 2b*** occurs and hence asymptotic exponentiality arises. To illustrate the range of possible limiting distributions, we present some simulation results. We consider the simplest system that is sufficiently rich to exhibit the wide range of behaviors presented in Table~\ref{tab:overview}. Specifically, we consider a complete $3$-partite network, whose three components have sizes $L_1,L_2$ and $L_3$, and assume that the process starts in state $(1,L_1)$ and the target state is $(3, L_3)$. We use activation rates of the form $f_k(\nu)=\nu^{a_k}$. For compactness, we write $\underline{a}$ for $(a_1,a_2,a_3)$ and $\underline{L}$ for $(L_1,L_2,L_3)$. This choice of activation rates makes it possible to invert the Laplace transform of $W$ in all cases and thus obtain a probability density function $f(x)$, which can be compared with the simulation data.
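Before turning to the full simulation study, a minimal sketch of such a transition-time simulation may be helpful. The toy version below is ours, not the authors' C code: it assumes unit deactivation rates (a standard normalization) and uses tiny, hypothetical parameters so that it runs quickly, simulating the aggregated star-shaped chain of a complete $3$-partite network and estimating the mean hitting time of the target state:

```python
import random

def simulate_transition_time(L, f, start, target, rng):
    """Simulate the aggregated activity process on a complete multi-partite
    network: state 0 (all users inactive) or (k, l) with l active users in
    component k. Activation rate f[k] per inactive user, unit deactivation
    rate per active user (assumed). Returns the hitting time of `target`
    starting from `start`."""
    t, state = 0.0, start
    while state != target:
        if state == 0:
            rates = [(L[k] * f[k], (k, 1)) for k in range(len(L))]
        else:
            k, l = state
            rates = [(l * 1.0, (k, l - 1) if l > 1 else 0)]
            if l < L[k]:
                rates.append(((L[k] - l) * f[k], (k, l + 1)))
        total = sum(r for r, _ in rates)
        t += rng.expovariate(total)          # exponential sojourn time
        u, acc = rng.random() * total, 0.0
        for r, nxt in rates:                 # pick next state prop. to rates
            acc += r
            if u <= acc:
                state = nxt
                break
    return t

rng = random.Random(1)
L = [2, 2, 2]          # toy component sizes (kept tiny so the run is fast)
nu = 3.0
f = [nu, nu, nu]       # homogeneous activation rates f_k(nu) = nu
times = [simulate_transition_time(L, f, (0, L[0]), (2, L[2]), rng)
         for _ in range(2000)]
mean = sum(times) / len(times)
# for these toy parameters a first-step analysis gives the exact mean 92/9 ~ 10.2
print(round(mean, 2))
```

For the parameters actually used in the paper ($\nu=150$, larger components) the hitting times grow like $\nu^{L_*-1}$, which is why an optimized C implementation was used there.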
All simulations were performed with the parameter $\nu=150$, simulating the transition time $20000$ times for each network with customized code written in the programming language C. The results are shown in Figures~\ref{fig:simnew1} and~\ref{fig:simnew2}. \begin{figure}[!ht] \centering \subfigure[Case 1a: $\underline{L}=(3,4,6)$, $\underline{a}=(1,1,5/3)$]{\includegraphics{1a.eps}} \hspace{0.5cm} \subfigure[Case 1b: $\underline{L}=(3,5,5)$, $\underline{a}=(1/2,1/2,1/2)$]{\includegraphics{1b.eps}} \\ \subfigure[Case 1c: $\underline{L}=(3,3,3)$, $\underline{a}=(4/5,3/5,3/5)$]{\includegraphics{1c.eps}} \hspace{0.5cm} \subfigure[Case 1d: $\underline{L}=(3,4,6)$, $\underline{a}=(1,3/4,3/4)$]{\includegraphics{1d.eps}} \caption{Plots of the empirical probability density function of the scaled transition times and the density $f(x)$ of $\alpha Y + (1-\alpha)W$.} \label{fig:simnew1} \end{figure} \clearpage \begin{figure}[!h] \centering \subfigure[Case 2a: $\underline{L}=(3,4,2)$, $\underline{a}=(9/10,9/10,9/5)$]{\includegraphics{2a.eps}} \hspace{0.5cm} \subfigure[Case 2b*: $\underline{L}=(4,3,4)$, $\underline{a}=(2/3,1,1)$]{\includegraphics{2b2.eps}} \\ \subfigure[Case 2b**: $\underline{L}=(2,4,5)$, $\underline{a}=(7/4,7/8,7/4)$]{\includegraphics{2b1.eps}} \hspace{0.5cm} \subfigure[Case 2b***: $\underline{L}=(3,3,5)$, $\underline{a}=(7/8,7/8,7/8)$]{\includegraphics{2b3.eps}} \\ \subfigure[Case 2c: $\underline{L}=(5,2,2)$, $\underline{a}=(4/9,4/3,8/9)$]{\includegraphics{2c1.eps}} \hspace{0.5cm} \subfigure[Case 2c*: $\underline{L}=(5,2,5)$, $\underline{a}=(1/2,3/2,1)$]{\includegraphics{2c2.eps}} \\ \subfigure[Case 2d: $\underline{L}=(4,2,6)$, $\underline{a}=(3/5,6/5,3/5)$]{\includegraphics{2d.eps}} \hspace{0.5cm} \subfigure[Case 3: $\underline{L}=(3,3,3)$, $\underline{a}=(1,3/4,3/2)$]{\includegraphics{3n.eps}} \caption{Plots of the empirical probability density function of the scaled transition times and the density $f(x)$ of $\alpha Y + (1-\alpha)W$.}
\label{fig:simnew2} \end{figure} \section{Model extensions} \label{sec6} So far we have assumed that two users interfere if and only if they belong to different components. In this section, we continue to assume that users that belong to different components interfere, but we allow users within the same component to interfere as well. If two or more users within component $C_k$ interfere with each other, there will be fewer admissible activity configurations, and they will be of smaller size. In particular, the $L_k$ users of component $C_k$ can no longer all be active simultaneously, which eases the transitions among different components. In the previous sections, we further assumed all users within the same component to have the same activation rate, so that state aggregation could be applied to obtain an equivalent Markov process with a star-shaped state space. In this section, we allow the users within the same component to have possibly different activation rates. With minor abuse of notation, denote by $f_l(\nu)$ the activation rate of user~$l$, and define $F_k(\nu) := \sum_{l \in C_k} f_l(\nu)$ as the aggregate activation rate of all users in the $k$-th component. The components are assumed to be \textit{minimal}, in the sense that they cannot be split into two non-trivial components while retaining full interference across components. As before, each independent set of the conflict graph must be a subset of one of the components, because two users that belong to different components by definition interfere. However, some subsets within the same component may no longer be independent sets in the conflict graph. For every $x \in \Omega^*$ define $V_x \subseteq V$ to be the subset of users that are active in configuration $x$, i.e.~$V_x:=\{i \in V : x_i=1\}$. For every $x \in \Omega^*$, $V_x$ is by construction an independent set in the conflict graph $G$. Define moreover $\Omega_k := \{x \in \Omega^* : V_x \subseteq C_k, x \neq 0\}$.
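For a concrete picture of $\Omega^*$ and the sets $\Omega_k$, they can be enumerated by brute force on a small hypothetical conflict graph (the component structure and conflicts below are illustrative only):

```python
from itertools import combinations

def independent_sets(vertices, edges):
    """Enumerate all independent sets of a conflict graph as frozensets,
    including the empty set (the state 0)."""
    conflicts = {frozenset(e) for e in edges}
    sets = [frozenset()]
    for r in range(1, len(vertices) + 1):
        for comb in combinations(sorted(vertices), r):
            if all(frozenset(p) not in conflicts
                   for p in combinations(comb, 2)):
                sets.append(frozenset(comb))
    return sets

# Hypothetical example: C1 = {1,2,3} with an internal conflict between
# users 2 and 3, C2 = {4,5}, and full interference across components.
C = {1: {1, 2, 3}, 2: {4, 5}}
internal = [(2, 3)]
across = [(a, b) for a in C[1] for b in C[2]]
omega_star = independent_sets(C[1] | C[2], internal + across)
omega = {k: [s for s in omega_star if s and s <= C[k]] for k in C}
# Omega* = {0} plus the union of the Omega_k; here |Omega*| = 1 + 5 + 3 = 9
print(len(omega_star), len(omega[1]), len(omega[2]))   # 9 5 3
```

Note how the internal conflict $(2,3)$ removes the configurations $\{2,3\}$ and $\{1,2,3\}$ from $\Omega_1$, shrinking the maximum independent set of that component from $3$ to $2$.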
Then \[ \Omega^*=\{0\} \cup \bigcup_{k \in K} \Omega_k. \] By construction, each component satisfies a certain monotonicity property: if $x \in \Omega_k$, then every configuration $y$ with nonempty $V_y \subseteq V_x$ also belongs to $\Omega_k$. Indeed, if $x$ is a feasible configuration belonging to $\Omega_k$, then any configuration obtained from it by switching off some (but not all) users is still feasible and belongs to $\Omega_k$ as well. The next lemma shows that between any pair of activity states corresponding to the same component, there continues to exist a path which does not visit the state $0 \in \Omega^*$. \begin{lem}\label{lem:stillgood} If $V_x, V_y \subseteq C_k$, $V_x, V_y \neq \emptyset$, then there exists a sequence of transitions between~$x$ and~$y$ that does not pass through the state $0 \in \Omega^*$. \end{lem} \begin{proof} If $V_x \cap V_y \neq \emptyset$, then the statement is trivially true. Suppose instead that $V_x \cap V_y = \emptyset$ and without loss of generality take $V_x=\{a_1,\dots,a_l\}$ and $V_y=\{b_1,\dots,b_m\}$. Define moreover $\mathcal C:=C_k \setminus (V_x \cup V_y) = \{ c_1,\dots,c_n\}$. Thanks to the monotonicity property, the single-user configurations with active sets $\{a_1\}, \dots, \{a_l\}, \{b_1\}, \dots, \{b_m\}$ all belong to $\Omega_k$, and each of them can be reached from $x$ and $y$, respectively, without passing through the state $0 \in \Omega^*$. Suppose that every path from~$x$ to~$y$ passes through state $0 \in \Omega^*$. Then none of the configurations with active sets $\{a_i,b_j\}$, $i=1,\dots,l$, $j=1,\dots,m$, belongs to $\Omega_k$, i.e.~every user in $\{a_1,\dots,a_l\}$ interferes with every user in $\{b_1,\dots,b_m\}$. We claim that every $c\in \mathcal C$ interferes either with all the users in $\{a_1,\dots,a_l\}$ or with all the users in $\{b_1,\dots,b_m\}$.
Indeed, if there exist $a \in V_x$ and $b \in V_y$ such that neither $a$ nor $b$ interferes with $c$, then we could construct a path from~$x$ to~$y$ which does not pass through state $0$, namely $V_x \to \dots \to \{a\} \to \{a,c\} \to \{c\} \to \{b,c\} \to \{b\} \to \dots \to V_y$. If there existed a user $c \in \mathcal C$ that interferes with all the users in $\{a_1,\dots,a_l\}$ \textit{and} all the users in $\{b_1,\dots,b_m\}$, then the component $C_k$ would not be minimal, since it could be split into $C_k \setminus \{c\}$ and $\{c\}$. Thus every $c\in \mathcal C$ interferes either with all the users in $\{a_1,\dots,a_l\}$ or with all the users in $\{b_1,\dots,b_m\}$, but not both. We can then consider two sets $A = V_x \cup \mathcal C_A$ and $B = V_y \cup \mathcal C_B$, with $\mathcal C_A \cap \mathcal C_B = \emptyset$ and $\mathcal C_A \cup \mathcal C_B = \mathcal C$, such that every user in $A$ interferes with every user in $B$ and vice versa. Therefore $C_k = A \cup B$ is not a minimal component. \end{proof} Clearly, state $0 \in \Omega^*$ continues to be a bottleneck state that must be visited along any path between activity states corresponding to different components. For a user $l \in V$, denote by $e_l$ the configuration in $\Omega$ where only the user $l$ is active. Clearly, $l \in C_k$ if and only if $e_l \in \Omega_k$. For any two states $x, y \in \Omega^*$, let $T_{x,y} = \inf\{t \geq 0: X(t) = y | X(0) = x\}$ be a random variable representing the transition time from state~$x$ to state~$y$, i.e.~the hitting time of state~$y$ starting in state~$x$. Let $x, y \in \Omega^*$ be two activity states, with $V_x \subseteq C_{k_1}$ and $V_y \subseteq C_{k_2}$, $k_1 \neq k_2$.
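Lemma~\ref{lem:stillgood} can be checked mechanically on small examples: build all nonempty feasible configurations of a component and run a breadth-first search over single-user activations and deactivations that skips the empty state. The sketch below uses hypothetical components and conflicts and reports whether such paths exist for all pairs:

```python
from collections import deque
from itertools import combinations

def connected_avoiding_zero(component, internal_edges):
    """Check whether every pair of nonempty feasible configurations of a
    component is linked by single-user activations/deactivations that
    never visit the empty configuration."""
    conflicts = {frozenset(e) for e in internal_edges}

    def feasible(s):
        return all(frozenset(p) not in conflicts for p in combinations(s, 2))

    states = {frozenset(c)
              for r in range(1, len(component) + 1)
              for c in combinations(sorted(component), r)
              if feasible(frozenset(c))}
    if not states:
        return True
    start = next(iter(states))
    seen, queue = {start}, deque([start])
    while queue:  # BFS over one-user toggles, skipping the empty state
        s = queue.popleft()
        for v in component:
            t = s - {v} if v in s else s | {v}
            if t and t in states and t not in seen:
                seen.add(t)
                queue.append(t)
    return len(seen) == len(states)

# A minimal component: conflicts 1-3 and 2-4 only; the flip graph on the
# eight nonempty feasible configurations is a cycle, so the lemma holds.
print(connected_avoiding_zero({1, 2, 3, 4}, [(1, 3), (2, 4)]))   # True
# Not minimal: users 1 and 2 fully interfere, and the check fails.
print(connected_avoiding_zero({1, 2}, [(1, 2)]))                 # False
```

The second call illustrates why minimality matters in the proof: a component that splits into two fully interfering parts forces every crossing path through the empty state.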
In order to give a stochastic representation of the transition time $T_{x,y}$, similar in spirit to \eqref{eq:kpartiterepr}, we define the following random variables for $l \in V \setminus C_{k_2}$: \begin{itemize} \item $N_l(\nu)$: number of times the process makes a transition $0 \to e_l \in \Omega_k$, $k \neq k_2$, before the first transition to $C_{k_2}$ occurs; \item $\hat{T}_{0, e_l}^{(i)}(\nu)$: time spent in state~0 before the $i$-th transition to state $e_l \in \Omega_k$, with $k \neq k_2$, $i = 1, \dots, N_l(\nu)$; \item $T_{e_l, 0}^{(i)}(\nu)$: time to return to state~$0$ after the $i$-th transition to state $e_l \in \Omega_k$, with $k \neq k_2$, $i = 1, \dots, N_l(\nu)$. \end{itemize} Moreover, for $l \in C_{k_2}$, define $\hat{T}_{0, e_l}(\nu)$ as the time spent in state~$0$ before the first transition to state $e_l \in \Omega_{k_2}$. Lemma~\ref{lem:stillgood} implies that the transition time $T_{x,y}$ may be represented as \eqn{\label{eq:represen1} T_{x,y} = T_{x,0} + \sum_{k \neq k_2} \sum_{l \in C_k} \sum_{i = 1}^{N_l} (\hat{T}_{0,e_l}^{(i)} + T_{e_l,0}^{(i)}) + \sum_{l \in C_{k_2}} I_l (\hat{T}_{0,e_l} + T_{e_l,y}), } where $I_l$, $l \in C_{k_2}$, are $0$-$1$ variables with $\sum_{l \in C_{k_2}} I_l = 1$ and $\pr{I_l = 1} = f_l(\nu) / F_{k_2}(\nu)$, $l \in C_{k_2}$, and the variables $T_{e_l,0}$, $l \in C_k$, are transition times when considering the component $C_k$ in isolation. Moreover, in the above representation the dependence on the parameter~$\nu$ is suppressed for compactness, and all the random variables representing time durations are mutually independent, as well as independent of the random variables $N_l(\nu)$, $l \in V \setminus C_{k_2}$. In order to determine the asymptotic behavior of the transition time $T_{x,y}(\nu)$ as $\nu \to \infty$, we now proceed to analyze the asymptotic behavior of the escape times $T_{e_l,0}$, $l \in C_k$, in the stochastic representation.
Unless stated otherwise, we henceforth let $z \in \Omega_k$ and focus on the Markov process $\{\xst \}_{t \geq 0}$ restricted to the state space $\Omega_k^+ = \Omega_k \cup \{0\}$. The steady-state probability of a state $u \in \Omega_k^+$ is \[ \pi_u(\nu) = \frac{1}{Z_k(\nu)} \prod\limits_{l \in C_k} f_l(\nu)^{u_l}, \] with normalization constant \[ Z_k(\nu) = \sum_{u' \in \Omega_k^+} \prod\limits_{l \in C_k} f_l(\nu)^{u_l'}. \] Define \[ g_k(\nu) := \max_{u \in \Omega_k} \prod\limits_{l \in C_k} f_l(\nu)^{u_l} \quad \text{ and } \quad \eta_k := \min_{l \in C_k} \lim_{\ninf} \log_\nu f_l(\nu). \] We make the mild technical assumptions that $\eta_k \in (0,\infty)$ and that $\psi_k = \lim_{\ninf} Z_k(\nu) / g_k(\nu)$ exists. Then the following two asymptotic properties of the escape time $T_{z,0}(\nu)$ can be established: \eqn{\label{eq:expected1} \mathbb E T_{z,0}(\nu) \sim \frac{\psi_k g_k(\nu)}{F_k(\nu)},\quad \text{ as } \nu \to \infty, } and \eqn{\label{eq:scaled1} \frac{T_{z,0}(\nu)}{\mathbb E T_{z,0}(\nu)} \xrightarrow{d} \mathrm{Exp}(1),\quad \text{ as } \nu \to \infty. } In order to provide a brief sketch of the proof arguments, we first introduce some further useful notation. Let $N_{z,0}(\nu)$ be a random variable representing the number of visits to state~0 in between two consecutive visits to state~$z$. Let $R_z(\nu)$ be the residence time in state~$z$ and $T_{z,z}^+(\nu)$ the first return time to state~$z$. Noting that \[ \pi_z(\nu) = \frac{\mathbb E R_z(\nu)}{\mathbb E T_{z,z}^+(\nu)}, \quad \mathbb E R_z(\nu) \leq 1 \quad \text{ and } \quad \pi_z(\nu) = \frac{1}{Z_k(\nu)} \prod\limits_{l \in C_k} f_l(\nu)^{z_l}, \] we obtain \[ \frac{\nu^{\eta_k/2} \mathbb E T_{z,z}^+(\nu)}{\psi_k g_k(\nu)} = \frac{\nu^{\eta_k/2} \mathbb E R_z(\nu)}{\pi_z(\nu) \psi_k g_k(\nu)} \leq \frac{ \nu^{\eta_k/2}}{ \prod\limits_{l \in C_k} f_l(\nu)^{z_l} } \frac{ Z_k( \nu)}{ \psi_k g_k(\nu)}= o(1), \quad \text{ as } \nu \to \infty.
\] Using arguments similar to those in~\cite{OV05}, it may be shown that \[ \pr{T_{u, z} (\nu) > g_k(\nu) \nu^{- \eta_k / 2}} \leq r < 1 \] for all states~$u \in \Omega_k$, implying (by the strong Markov property) \[ \pr{T_{u, z} (\nu) > n g_k(\nu) \nu^{- \eta_k / 2}} \leq r^n, \] and that $\pr{T_{0, z}(\nu) < T_{0,0}^+(\nu)} \geq s(\nu)$, with $s(\nu) \to 1$ as $\nu \to \infty$. This means that after a visit to state~0, the number of additional visits to that state before the first visit to state~$z$ is stochastically bounded from above by a geometrically distributed random variable with parameter $1 - s(\nu)$. This implies \eqn{\label{eq:single1} \mathbb E N_{z,0}(\nu) \sim \pr{N_{z,0}(\nu) \geq 1} \quad \text{ as } \nu \to \infty. } It may then be deduced that the distribution of $T_{z,z}^+(\nu)$ satisfies the uniform integrability condition in~\cite{GR05}. Theorem~1 in~\cite{GR05} then yields the asymptotic exponentiality property in~\eqref{eq:scaled1} and \[ \mathbb E T_{z,0}(\nu) \sim \frac{\mathbb E T_{z,z}^+(\nu)}{\pr{N_{z,0}(\nu) \geq 1}}. \] Observing that \[ \frac{\pi_0(\nu) }{ \mathbb E R_0(\nu) }= \mathbb E N_{z,0}(\nu) \frac{ \pi_z(\nu) }{ \mathbb E R_z(\nu)}, \] and invoking~\eqref{eq:single1}, we deduce that the term on the right-hand side asymptotically behaves as \[ \frac{\mathbb E T_{z,z}^+(\nu) \pi_z(\nu) \mathbb E R_0(\nu) }{\pi_0(\nu) \mathbb E R_z(\nu)} = \frac{\mathbb E R_0(\nu)}{\pi_0(\nu)} = \frac{Z_k(\nu)}{F_k(\nu)} \sim \frac{\psi_k g_k(\nu)}{F_k(\nu)}, \] yielding~\eqref{eq:expected1} as stated. \vspace{0.5cm} The two asymptotic properties~\eqref{eq:expected1} and~\eqref{eq:scaled1} for the order of magnitude and the scaled distribution of the escape time $T_{z,0}(\nu)$ mirror those stated in~\eqref{eq:sdp} and Proposition~\ref{prop:ael0}.
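Both properties are easy to probe numerically on a single homogeneous component, where every subset of users is feasible and $Z_k(\nu) = (1+f(\nu))^{L_k}$. The sketch below (illustrative parameters, unit deactivation rates assumed; it is a simulation check, not the proof machinery) compares the empirical mean with $Z_k/F_k$ and the variance of the scaled escape time with the $\mathrm{Exp}(1)$ value $1$:

```python
import random

def escape_time(L, f, rng):
    """Escape time to the empty state for one homogeneous component in the
    aggregated chain, starting fully active: activation rate f per inactive
    user, unit deactivation rate per active user (assumed normalization)."""
    t, l = 0.0, L
    while l > 0:
        up, down = (L - l) * f, float(l)
        t += rng.expovariate(up + down)
        l += 1 if rng.random() * (up + down) < up else -1
    return t

rng = random.Random(3)
L, f = 3, 8.0
samples = [escape_time(L, f, rng) for _ in range(10000)]
mean = sum(samples) / len(samples)
# eq:expected1 predicts E T ~ Z_k / F_k with Z_k = (1+f)^L, F_k = L f here
prediction = (1.0 + f) ** L / (L * f)
scaled_var = sum((s / mean - 1.0) ** 2 for s in samples) / len(samples)
# eq:scaled1: T / E T should be nearly Exp(1), i.e. scaled variance near 1
print(round(mean, 1), round(prediction, 1), round(scaled_var, 2))
```

At $f=8$ the prediction $Z_k/F_k \approx 30.4$ already sits within about $10\%$ of the exact mean ($32.5$ by first-step analysis), and the gap shrinks as $f$ grows.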
Using these two properties and the stochastic representation~\eqref{eq:represen1}, results similar to Theorems~\ref{thm:thm1} and~\ref{thm:thm2} can be established for the asymptotic behavior of the transition time $T_{x,y}(\nu)$. For any $l \in C_k$, define \[ \Theta_l(\nu) = \frac{f_l(\nu) g_k(\nu)}{F_k(\nu)}. \] In this case the set $K^*$ needs to be defined as the set of those $l \in \bigcup_{k \neq k_2} C_k$ such that $\lim_{\nu \to \infty} \Theta_l(\nu) / \Theta_m(\nu) > 0$ for all $m \in \bigcup_{k \neq k_2} C_k$. Also, additional conditions need to be imposed in order to ensure that \[ \sum_{l \in C_{k_2}} \frac{f_l(\nu)}{F_{k_2}(\nu)} \mathbb E T_{e_l,y}(\nu) = o(\mathbb E T_{x, y}(\nu)), \quad \text{ as } \nu \to \infty, \] which guarantees that the expected time to reach state~$y$, once the process hits the target component $C_{k_2}$, is asymptotically small compared with the overall transition time $T_{x, y}(\nu)$. \section{Throughput starvation and near-saturation} \label{sec7} In this section we return to complete partite networks with equal activation rates for users in the same component. We show how the results for the asymptotics of the transition time $T_{(k_1,l_1),(k_2,l_2)}(\nu)$ in Theorems~\ref{thm:thm1} and~\ref{thm:thm2} can be exploited to gain insight into phenomena such as throughput starvation and near-saturation. More specifically, in Subsection~\ref{sec71} we present the proof of Theorem~\ref{thm:thm3}, which gives an asymptotic lower bound on the probability of throughput starvation, while in Subsection~\ref{sec72} we prove Proposition~\ref{prop:tails}, a complementary result which indicates over what time scales throughput near-saturation occurs. \subsection{Proof of Theorem~\ref{thm:thm3}} \label{sec71} Observe that $\tau_{k_2}(t(\nu)) > 0$ implies $t(\nu) > T_{(k_1,l_1),(k_2,1)}(\nu)$, because the throughput of branch $\mathcal{B}_{k_2}$ remains zero until the activity process enters $\mathcal{B}_{k_2}$.
Hence \[ \pr{\tau_{k_2}(t(\nu)) > 0} \leq \pr{T_{(k_1,l_1),(k_2,1)}(\nu) < t(\nu)} = \pr{\frac{T_{(k_1,l_1),(k_2,1)}(\nu)}{\mathbb E T_{(k_1,l_1),(k_2,1)}(\nu)} < \frac{t(\nu)}{\mathbb E T_{(k_1,l_1),(k_2,1)}(\nu)}}. \] Taking the limit as $\nu \to \infty$, Theorem~\ref{thm:thm2} gives $\lim_{\ninf} \pr{\tau_{k_2}(t(\nu)) > 0} \leq \pr{ Z < \omega }$, and~\eqref{eq:starvation} follows. \subsection{Throughput near-saturation} \label{sec72} Assume that at time $t=0$ there is at least one active user in $C_k$, i.e.~$X(0)=(k,l) \in \mathcal{B}_k$. Define the total full-component active time in $[0,t]$ as \[ \tau_k[0,t]:=\int_0^t I_{\{ X(s)=(k,L_k) \}} \, ds, \] the residual time in $C_k$ during $[0,t]$ as \[ R_{k}[0,t]:=\int_0^t I_{\{ X(r) \in \mathcal{B}_k \, \forall \, r \in [0,s] \}} \, ds, \] and the full-component active time contained in the residual time in $C_{k}$ during $[0,t]$ as \[ \tau^{\mathrm{res}}_k[0,t]:=\int_0^t I_{\{ X(r) \in \mathcal{B}_k \, \forall \, r \in [0,s] \}} I_{\{ X(s)=(k,L_k) \}} \, ds. \] For compactness, we have suppressed the implicit dependence on the parameter $\nu$ and the initial state $(k,l)$ in the notation. From this point onwards, we also drop the subscript $k$ to keep the notation light. Note that $R[0,t]\,{\buildrel d \over =}\, \min \{t,T_{(k,l),0}\}$ and that $\tau^{\mathrm{res}}[0,t] \,{\buildrel d \over =}\, \tau[0, R[0,t]]$. The random variables $\tau[0,t]$, $R[0,t]$ and $\tau^{\mathrm{res}}[0,t]$, being particular occupancy times, are non-decreasing in $t$ on every sample path of the activity process $\{\xt \}_{t \geq 0}$. Therefore, the random variables \[ \tau[0,\infty]:=\lim_{t \to \infty} \tau[0,t], \quad R[0,\infty]:=\lim_{t \to \infty} R[0,t]=T_{(k,l),0}, \quad \tau^{\mathrm{res}}[0,\infty]:=\lim_{t \to \infty} \tau^{\mathrm{res}}[0,t] \] are well defined.
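The three functionals are straightforward to evaluate pathwise. The following sketch (hypothetical state labels and trajectory) computes $\tau[0,t]$, $R[0,t]$ and $\tau^{\mathrm{res}}[0,t]$ from a piecewise-constant sample path and illustrates the pathwise inequalities $\tau^{\mathrm{res}} \leq \tau$ and $\tau^{\mathrm{res}} \leq R$ used below:

```python
def occupancy_functionals(path, t, full_state, branch):
    """Given a piecewise-constant trajectory path = [(state, duration), ...],
    compute tau[0,t] (time spent fully active), R[0,t] (time before the
    first exit from the branch), and tau_res[0,t] (fully active time within
    that residual period)."""
    tau = R = tau_res = 0.0
    elapsed, in_branch_so_far = 0.0, True
    for state, dur in path:
        if elapsed >= t:
            break
        d = min(dur, t - elapsed)           # clip the segment at time t
        if state not in branch:
            in_branch_so_far = False        # the branch has been left
        if in_branch_so_far:
            R += d
            if state == full_state:
                tau_res += d
        if state == full_state:
            tau += d
        elapsed += d
    return tau, R, tau_res

# Hypothetical path: fully active, then partially active, then the empty
# state "0" (branch left), then fully active again after re-entry.
branch = {"partial", "full"}
path = [("full", 2.0), ("partial", 0.5), ("0", 1.0), ("full", 3.0)]
tau, R, tau_res = occupancy_functionals(path, 6.0, "full", branch)
print(tau, R, tau_res)   # 4.5 2.5 2.0
```

On this path the branch is left at time $2.5$, so $R[0,6]=2.5$ and $\tau^{\mathrm{res}}[0,6]=2.0$, while the later full-activity stretch still counts towards $\tau[0,6]=4.5$.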
For $0\leq s \leq t \leq \infty$, we define \[ \tau[s,t]:=\tau[0,t]-\tau[0,s], \quad R[s,t]:=R[0,t]-R[0,s], \quad \tau^{\mathrm{res}}[s,t]:=\tau^{\mathrm{res}}[0,t]-\tau^{\mathrm{res}}[0,s]. \] From the above definition, it is easily seen that for every sample path, $\tau^{\mathrm{res}}[s,t]$ provides a lower bound for both $\tau[s,t]$ and $R[s,t]$, as stated in the next lemma. \begin{lem}\label{lem:dom} For $0\leq s\leq t \leq \infty$, $\tau^{\mathrm{res}}[s,t] \leq \tau[s,t]$ and $\tau^{\mathrm{res}}[s,t]\leq R[s,t]$. \end{lem} \begin{proof} Rearranging terms, the differences $\tau[s,t] - \tau^{\mathrm{res}}[s,t]$ and $R[s,t]-\tau^{\mathrm{res}}[s,t]$ can both be written as integrals with an integrand that is always non-negative. \end{proof} In particular, Lemma~\ref{lem:dom} implies that, for every $0\leq s\leq t \leq \infty$, \eqn{\label{eq:fmi} \mathbb E \tau^{\mathrm{res}}[s,t] \leq \mathbb E R[s,t]. } However, as stated in the next lemma, in the limit as $\nu \to \infty$, the ratio of the expected values of $\tau^{\mathrm{res}}[0,\infty]$ and $T_{(k,l),0}=R[0,\infty]$ converges to $1$. \begin{lem} \label{lem:inftye} For any initial state $X(0)=(k,l) \in \mathcal{B}_k$, \[ \lim_{\ninf} \frac{\mathbb E \tau^{\mathrm{res}}[0,\infty]}{\mathbb E T_{(k,l),0}} =1. \] \end{lem} \begin{proof} Since the ratio is at most $1$ by Equation~\eqref{eq:fmi}, it suffices to show that its liminf as $\nu \to \infty$ is at least $1$. Applying the result in~\cite{SW00} and using~\eqref{eq:sd}, one obtains that for every $1 \leq l \leq L_k$, if $X(0)=(k,l)$, then \[ \mathbb E \tau^{\mathrm{res}}[0,\infty] = \mathbb E \Big( \int_{0}^{T_{(k,l),0}} I_{\{ X(s)=(k,L_k) \}} \, ds \Big) \geq \frac{1}{L_k} f_k(\nu)^{L_k-1}, \] and thus, invoking~\eqref{eq:sdp}, \[ \liminf_{\ninf} \frac{\mathbb E \tau^{\mathrm{res}}[0,\infty]}{\mathbb E T_{(k,l),0}} \geq \liminf_{\ninf} \frac{f_k(\nu)^{L_k-1}/L_k}{\mathbb E T_{(k,l),0}} = 1.
\] \end{proof} The next proposition establishes a near-saturation property: if $X(0)=(k,l) \in \mathcal{B}_k$, then for any time period $t(\nu)=o(\mathbb E T_{(k,l),0})$, every user in $C_k$ is active an arbitrarily large fraction of the time with probability tending to one as $\nu \to \infty$. \begin{prop}\label{prop:tails} Suppose that $X(0)=(k,l) \in \mathcal{B}_k$ and that $T_{(k,l),0}/\mathbb E T_{(k,l),0} \xrightarrow{d} Z$ as $\nu \to \infty$. Then for every $\omega \in [0,1]$ and every $\delta>0$, \[ \liminf_{\ninf} \pr{\tau[0, \omega \mathbb E T_{(k,l),0}] \geq (1-\delta) \omega \mathbb E T_{(k,l),0}} \geq \pr{Z \geq \omega}. \] In particular, for any $t(\nu)=o(\mathbb E T_{(k,l),0}(\nu))$, $\liminf_{\ninf} \pr{\tau[0,t(\nu)] \geq (1-\delta) t(\nu)} =1$. \end{prop} \begin{rem} As mentioned earlier, the hypothesis that $T_{(k,l),0}/ \mathbb E T_{(k,l),0} \xrightarrow{d} Z$ is not just a convenient assumption, but something that we actually know. In particular, Proposition~\ref{prop:ael0} says that $Z \,{\buildrel d \over =}\, \mathrm{Exp}(1)$. Moreover, since the result holds for every initial state in $\mathcal{B}_k$, it remains true for a random initial state in $\mathcal{B}_k$. Indeed, as seen in Section~\ref{sec4}, the convergence in distribution of $T_{(k,l),0}(\nu)/ \mathbb E T_{(k,l),0}(\nu)$ to $Z$ as $\nu \to \infty$ does not depend on the initial state, as long as it belongs to $\mathcal{B}_k$. \end{rem} \begin{proof} In order to keep the notation light, we denote in this proof the hitting time $T_{(k,l),0}$ by $T$. Firstly, Lemma~\ref{lem:dom} implies that \[ \pr{\tau[0, \omega \mathbb E T] \geq (1-\delta) \omega \mathbb E T } \geq \pr{\tau^{\mathrm{res}}[0,\omega \mathbb E T] \geq (1-\delta) \omega \mathbb E T }. \] Moreover, by the definition of $R[0,t]=\min\{t,T\}$, we have \[ \pr{R[0, \omega \mathbb E T] \geq \omega \mathbb E T} = \pr{T \geq \omega \mathbb E T}.
\] In view of the hypothesis that $T(\nu)/ \mathbb E T(\nu) \xrightarrow{d} Z$ as $\nu \to \infty$, it therefore suffices to prove that for every $ \omega \in [0,1]$ and every $\delta >0$, \[ \liminf_{\ninf} \pr{\tau^{\mathrm{res}}[0,\omega \mathbb E T] \geq (1-\delta)\omega \mathbb E T} \geq \liminf_{\ninf} \pr{R[0,\omega \mathbb E T] \geq \omega \mathbb E T}. \] Suppose that this latter statement is false, i.e.~there exist $\omega_0 \in [0,1]$, $\delta >0$ and $\eta>0$ such that \eqn{\label{eq:eta} \liminf_{\ninf} \pr{\tau^{\mathrm{res}}[0,\omega_0 \mathbb E T] \geq (1-\delta)\omega_0 \mathbb E T} \leq \liminf_{\ninf} \pr{R[0,\omega_0 \mathbb E T] \geq \omega_0 \mathbb E T} -\eta. } Then it can be shown that there exists $\varepsilon_{\omega_0,\delta} >0$ such that \eqn{\label{eq:1a} \liminf_{\ninf} \frac{\mathbb E \tau^{\mathrm{res}}[0, \omega_0\mathbb E T] }{\mathbb E T} \leq \liminf_{\ninf} \frac{\mathbb E R[0, \omega_0\mathbb E T]}{\mathbb E T}-\varepsilon_{\omega_0,\delta}. } Indeed, \eqan{ \liminf_{\ninf} & \left( \frac{\mathbb E R[0, \omega_0\mathbb E T]}{\mathbb E T} - \frac{\mathbb E \tau^{\mathrm{res}}[0, \omega_0\mathbb E T]}{\mathbb E T} \right ) = \liminf_{\ninf} \int_0^\infty \pr{\frac{R[0,\omega_0 \mathbb E T]}{\mathbb E T} \geq y} - \pr{ \frac{\tau^{\mathrm{res}}[0,\omega_0 \mathbb E T]}{\mathbb E T} \geq y} \, dy \nonumber \\ & \stackrel{\text{Lemma}~\ref{lem:dom}}{\geq} \liminf_{\ninf} \int_{(1-\delta)\omega_0}^{\omega_0}\pr{\frac{R[0,\omega_0 \mathbb E T]}{\mathbb E T} \geq y}- \pr{\frac{\tau^{\mathrm{res}}[0,\omega_0 \mathbb E T]}{\mathbb E T} \geq y} \, dy \nonumber \\ & \geq \int_{(1-\delta)\omega_0}^{\omega_0} \liminf_{\ninf} \left ( \pr{\frac{R[0,\omega_0 \mathbb E T]}{\mathbb E T} \geq y}- \pr{\frac{\tau^{\mathrm{res}}[0,\omega_0 \mathbb E T]}{\mathbb E T} \geq y} \right )\, dy \nonumber \\ & \geq \eta \delta \omega_0 >0, \nonumber } where the second last inequality follows from the generalized Fatou's lemma, while the last inequality 
follows from~\eqref{eq:eta} and is illustrated by Figure~\ref{fig:eta}. Thus we can take $\varepsilon_{\omega_0,\delta}:=\eta \delta \omega_0$. \begin{figure}[!ht] \centering \includegraphics{eta.eps} \caption{$\pr{ \tau^{\mathrm{res}}[0,\omega_0 \mathbb E T] / \mathbb E T \geq y}$ (lower line) vs $\pr{R[0,\omega_0 \mathbb E T] / \mathbb E T \geq y}$ (upper line)}\label{fig:eta} \end{figure} Equation~\eqref{eq:fmi} yields \eqn{\label{eq:1b} \liminf_{\ninf} \frac{\mathbb E \tau^{\mathrm{res}}[\omega_0\mathbb E T, \infty] }{\mathbb E T} \leq \liminf_{\ninf} \frac{\mathbb E R[\omega_0\mathbb E T,\infty]}{\mathbb E T}, } and thus, summing~\eqref{eq:1a} and~\eqref{eq:1b} term by term, since by definition $\mathbb E R[0,\infty]=\mathbb E T$, \[ \liminf_{\ninf} \frac{\mathbb E \tau^{\mathrm{res}}[0, \infty] }{\mathbb E T} \leq \liminf_{\ninf} \frac{\mathbb E R[0,\infty]}{\mathbb E T}-\varepsilon_{\omega_0,\delta}=1-\varepsilon_{\omega_0,\delta}, \] which contradicts Lemma~\ref{lem:inftye}. \end{proof} \section{Mixing times} \label{sec8} In the previous sections we analyzed the transient behavior of the Markov process $\{\xt \}_{t \geq 0}$ in terms of hitting times and showed how this leads to starvation of individual users over certain time scales. In this section we turn our attention to the long-run behavior of the Markov process $\{\xt \}_{t \geq 0}$ and in particular examine the rate of convergence to the stationary distribution. We measure the rate of convergence in terms of the total variation distance and the so-called \textit{mixing time}, which describes the time required for the distance to stationarity to become small. The mixing time becomes particularly relevant when the network has two or more dominant components which together attract the entire probability mass in the limit as $\nu \to \infty$.
Indeed, in this case, the mixing time provides an indication of how long it takes the activity process to reach a certain level of fairness among the dominant components. We will prove a lower bound for the mixing time using the notion of conductance. The maximal distance over $x\in\Omega$, measured in terms of total variation, between the distribution at time~$t$ and the stationary distribution is defined as \[ d(t,\nu):= \max_{x\in\Omega}\|\pr{X(t) \in \cdot \,| X(0)=x }-\pi(\nu)\|_{\mathrm{TV}}. \] We define the mixing time of the process $\{\xt \}_{t \geq 0}$ as \[ t_{\mathrm{mix}}(\varepsilon,\nu):=\inf\{t \geq 0 : d(t,\nu)\leq \varepsilon\}. \] For a fixed $r \in (0,1)$, consider the subset $\tilde K(r)$ of branches whose stationary probability is asymptotically no more than $r$, i.e.~$\tilde K(r):=\{ k ~:~ \lim_{\ninf} \pi_{\mathcal{B}_k}(\nu) \leq r \}$. Define $\kappa \in \tilde K(r)$ as the index of the branch $\mathcal{B}_\kappa$ with asymptotically the largest mean escape time, i.e.~such that for every $j \in \tilde K(r)$, \eqn{\label{eq:kappa} \lim_{\ninf} \frac{\mathbb E T_{(\kappa,1),0}(\nu)}{\mathbb E T_{(j,1),0}(\nu)} = \lim_{\ninf} \frac{L_j f_{\kappa}(\nu)^{L_{\kappa}-1}}{L_{\kappa} f_j(\nu)^{L_j-1}} \geq 1. } The next result shows that the mixing time is asymptotically at least of the same order of magnitude as the escape time from branch $\mathcal{B}_\kappa$. \begin{prop} \label{prop:mix} For any $r\in (0,1)$ and $\varepsilon \in \left (0, \frac{1-r}{2} \right )$, there exist constants $C_{\varepsilon,r} > 0$ and $\nu_0> 0$ such that for every $\nu > \nu_0$, \[ t_{\mathrm{mix}}(\varepsilon,\nu) \geq C_{\varepsilon,r} \frac{f_{\kappa}(\nu)^{L_{\kappa}-1}}{L_\kappa}. \] \end{prop} Proposition~\ref{prop:mix} shows that it can take an extremely long time for the process $\{\xt \}_{t \geq 0}$ to reach stationarity, especially when $\nu$ is large.
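The slow decay of $d(t,\nu)$ can be made concrete on a toy instance with two symmetric branches of size $L=2$, activation rate $f$ and unit deactivation rates (an assumed normalization); the function names below are ours. The sketch builds the generator, computes $d(t,\nu)$ by uniformization, and evaluates the bottleneck ratio of one branch that drives the conductance bound discussed next:

```python
import math

def build_chain(L, f):
    """Generator of the aggregated star-shaped chain with two symmetric
    branches of size L: state 0 and states (k, l), k = 1, 2, l = 1..L,
    indexed 0, 1..L, L+1..2L."""
    n = 2 * L + 1
    idx = lambda k, l: (k - 1) * L + l
    Q = [[0.0] * n for _ in range(n)]
    for k in (1, 2):
        Q[0][idx(k, 1)] = L * f
        for l in range(1, L + 1):
            Q[idx(k, l)][idx(k, l - 1) if l > 1 else 0] = float(l)
            if l < L:
                Q[idx(k, l)][idx(k, l + 1)] = (L - l) * f
    for i in range(n):
        Q[i][i] = -sum(Q[i])
    return Q

def tv_from_stationarity(Q, pi, t, terms=400):
    """d(t) = max_x ||P_t(x, .) - pi||_TV, computed by uniformization."""
    n = len(Q)
    q = max(-Q[i][i] for i in range(n))
    P = [[Q[i][j] / q + (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    d = 0.0
    for x in range(n):
        row, acc = [float(j == x) for j in range(n)], [0.0] * n
        w = math.exp(-q * t)                 # Poisson weight e^{-qt}(qt)^m/m!
        for m in range(terms):
            acc = [a + w * r for a, r in zip(acc, row)]
            row = [sum(row[i] * P[i][j] for i in range(n)) for j in range(n)]
            w *= q * t / (m + 1)
        d = max(d, 0.5 * sum(abs(a - p) for a, p in zip(acc, pi)))
    return d

L, f = 2, 4.0
Q = build_chain(L, f)
w = [1.0] + 2 * [math.comb(L, l) * f ** l for l in range(1, L + 1)]
pi = [x / sum(w) for x in w]          # product-form stationary distribution
phi = w[1] / (w[1] + w[2])            # bottleneck ratio of one branch, 1/3 here
d01 = tv_from_stationarity(Q, pi, 0.1)
d5 = tv_from_stationarity(Q, pi, 5.0)
print(round(phi, 3), round(d01, 2), round(d5, 2))
```

Even on this tiny example, $d(t,\nu)$ stays well above $1/2$ for times short compared with the branch escape time, consistent with the lower bound of Proposition~\ref{prop:mix}; for larger $f$ and $L$ the plateau extends like $f^{L-1}/L$.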
Such a long mixing time is typically due to the activity process being stuck for a considerable period in one of the components, and thus not visiting the states in the other components. We will prove Proposition~\ref{prop:mix} by exploiting the presence of a bottleneck in the state space and using the notion of conductance. For $S \subseteq \Omega$, let $\pi_S(\nu)=\sum_{(k,l) \in S } \pi_{(k,l)}(\nu)$ be the stationary probability of $S$. Define the \textit{probability flow out of} $S$ as \[Q(S,S^c):=\sum_{(k,l) \in S, (j,m) \in S^c} \pi_{(k,l)}(\nu) q((k,l),(j,m))\] and its \textit{conductance} as $\Phi(S):=Q(S,S^c)/\pi_S$. All the quantities just defined depend on $\nu$, but we suppress this dependence for compactness. The \textit{conductance profile} of the process $\{\xt \}_{t \geq 0}$ is defined as \[\Phi_r(\nu):=\min_{S \,: \, \pi_S(\nu) \leq r} \Phi(S).\] The following result, valid for any Markov process on a finite state space $\Omega$ with conductance profile $\Phi_r$, shows how the conductance of the process yields a lower bound on the mixing time. It is a continuous-time version of Theorem 7.3 in~\cite{LPW09} and the proof is relegated to~\ref{ap4}. \begin{lem} \label{lem:cond} For any $r\in (0,1)$ and any $\varepsilon \in \left(0,\frac{1-r}{2}\right)$, \[ t_{\mathrm{mix}}(\varepsilon, \nu) \geq \frac{1-r - 2 \varepsilon}{\Phi_r (\nu)}. \] \end{lem} In order to get a sharp bound for the conductance and hence a sharp lower bound for the mixing time, we need to identify a subset $S$ with low conductance. As proved in~\cite{LS88}, it suffices to look at the connected subsets of the state space. Therefore the branches in $\tilde{K}(r)$ are natural candidates for being the lowest-conductance subsets of $\Omega$. From~\eqref{eq:sd} it follows that if $(k,l)$ and $(k,m)$ are states in the same branch $\mathcal{B}_k$, then \[ \frac{\pi_{(k,m)}(\nu)}{\pi_{(k,l)}(\nu)} = \frac{l!(L_k-l)!}{m!(L_k-m)!} f_k(\nu)^{m-l}.
\] Thus the conductance of $\mathcal{B}_k$ satisfies \[ \Phi(\mathcal{B}_k) = \frac{\pi_{(k,1)}(\nu) \cdot 1 }{ \sum_{l=1}^{L_k} \pi_{(k,l)}(\nu)}= \frac{ \frac{\pi_{(k,1)}(\nu)}{\pi_{(k,L_k)}(\nu)}}{ \sum_{l=1}^{L_k} \frac{\pi_{(k,l)}(\nu)}{\pi_{(k,L_k)}(\nu)}} \sim L_k f_k(\nu)^{1-L_k}, \quad \text{ as } \nu \to \infty. \] Thanks to the definition of $\kappa$, $\mathcal{B}_\kappa$ is asymptotically the branch with the smallest conductance; see Equation~\eqref{eq:kappa}. Since by definition $ \Phi_r(\nu) \leq \Phi(\mathcal{B}_{\kappa})$, Lemma~\ref{lem:cond} then gives that for every $\varepsilon \in \left(0,\frac{1-r}{2}\right)$ and for $\nu$ sufficiently large \[ t_{\mathrm{mix}}(\varepsilon,\nu) \geq ( 1-r - 2 \varepsilon) \frac{f_{\kappa}(\nu)^{L_{\kappa}-1}}{L_\kappa}, \] which completes the proof of the lower bound claimed in Proposition~\ref{prop:mix}. \section{Conclusions} \label{sec9} We have studied hitting times and mixing properties in dense wireless random-access networks. We have represented the activity processes in such networks in terms of Markov processes on complete partite graphs. In particular, in dense networks, high activity rates lead to network behavior in which users in maximum independent sets coalesce into components which compete for the wireless medium. We have shown that components monopolize the wireless medium for extremely long periods, which leads to long mixing times and starvation of all other components. Hence, users in a particular component alternate between enjoying long periods of access and facing long periods of starvation. While the slow nature of the transitions is a common characteristic, the asymptotic distribution of the scaled transition time depends crucially on the structure of the network and on the initial and target components, and there is a notable variety of possible scenarios. In particular, in some scenarios, the distribution of the scaled transition time is non-exponential.
This is due to the heterogeneous activation rates among components in conjunction with the presence of intermediate components where the activity process resides for long periods along the transition. The complete partite graphs that we focused on in the present paper are arguably the worst possible networks in terms of transition times and starvation effects, given the size of the network. Indeed, the fact that the users are grouped into components, with no interference within components and full interference between components, turns out to be a key element for starvation to occur. This is reflected in the fact that the transition times exhibit exponential growth in the component size. Graphs that are non-partite, or partite but not complete, will have a less extreme tendency for starvation, although the issue may still arise to a milder degree. For example, interference between nodes inside the same component will reduce the size of the maximum independent set, and lower the likelihood of the maximal activity states relative to the bottleneck state where all the nodes are inactive. This is borne out by the model extensions in Section~\ref{sec6}, where the order of magnitude of the transition time is governed by the maximum independent set within a component. Likewise, lack of interference between nodes in different components will result in bottleneck states where some of the nodes may be active, and raise the likelihood of the bottleneck state relative to the dominant activity states. This is illustrated by the work in~\cite{ZBvLN13}, which investigates the transition times and delays in a toric grid. The toric grid is a bi-partite graph, but with fewer edges between the two components. The results in~\cite{ZBvLN13} show that the delays and long transition times, while still severe, are of a lower order than for the complete bi-partite graph. \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} The \texttt{Neutron Star Interior Composition Explorer} (\texttt{NICER}) is a NASA mission for X-ray astronomy that has been operating on the \textit{International Space Station} (\textit{ISS}) since it was launched and deployed in 2017 June \citep{gend16}. The \texttt{NICER} X-ray Timing Instrument (XTI) consists of 56 identical and co-aligned cameras, each containing an X-ray Concentrator (XRC; \cite{okaj16}) and a customized Si drift detector positioned in the concentrator's focal plane. The other primary components of \texttt{NICER} are a target acquisition and tracking platform and seven electronics boxes, each of which services event processing from eight detectors \citep{prig16}. Each detector package (detector, preamplifier, and thermoelectric cooler) is known as a Focal Plane Module (FPM), and each electronics box is referred to as a Measurement \& Power Unit (MPU). The XTI sensitivity range is 0.2--12 keV, the energy resolution is typical of Si detectors (e.g., 150 eV FWHM at 6.5 keV), and detected events are time-tagged to an absolute accuracy of 100 ns. The combined detector output from the 50 best-performing FPMs offers substantial throughput, e.g., 10,500 c/s from the Crab Nebula over the range 0.4--12 keV. The FPM and MPU designs and interworking are described in \cite{prig16}. Here we summarize the details that are most relevant to the background model at hand. Each FPM is a single-channel device that is collimated to view a circular celestial area with a radius of 3.17 arcmin. The multilayer collimator (1 mm radius) captures more than 90\% of the light in the concentrator's point spread function, and it limits the travel time to the anode for X-ray events, while the active area under the collimator extends to a radius of 2.8 mm. All \texttt{NICER} observations contain events from both the science target and the various types of background that are encountered while operating in space.
Scientific analyses thus require a model that can predict the background spectrum so that the target spectrum can be isolated. The 3C50 background model uses detector measurements that characterize the background but not the X-rays from the science target, analogous to past missions such as the Photon Counting Array of the {\it Rossi} X-ray Timing Explorer (RXTE) Mission \citep{jaho06}. To assist modeling efforts, \texttt{NICER} routinely schedules observations of seven sky positions that are void of detectable point sources. These targets (inherited from {\it RXTE}) are named ``BKGD\_RXTE\#'', with \# ranging over 1--6 and 8. Position \#7 was eliminated as a \texttt{NICER} background target owing to the presence of a soft X-ray source (bright star). Pre-launch analyses predicted that the \texttt{NICER} background would primarily consist of a very small contribution from the cosmic diffuse X-ray background (e.g., \cite{wuha91}), given the small FOV of the Instrument (31.6 square arcmin), and particle interactions that deposit energy indistinguishable from in-band X-rays. The 3C50 model covers these components. Additional background sources that are not considered in the 3C50 model include enhanced diffuse X-rays from hot gas in the Milky Way (dependent on galactic latitude), possible soft X-rays related to Solar activity, and possible contamination from the Earth limb or the radiation sources on Soyuz spacecraft. In pre-launch analyses, the primary background components were expected to yield, for the majority of the \texttt{ISS} orbit, 0.2 counts per second (c/s) in soft X-rays at 0.4--2.0 keV, and an additional 0.15 c/s at 2--8 keV. The Empirical Background Model, also known as the ``3C50'' model, uses libraries constructed by sorting and combining the spectra extracted from background observations. Each library spectrum is the sum of spectra within a cell defined by intervals in the adopted model parameters, as described below.
The model is named ``3C50'' because it is based on 3 parameters, the format assumes that spectral extractions will be made from standard \texttt{NICER} ``cleaned'' event lists, and the libraries are based on a selection of 50 of the 52 FPMs operating in the XTI (the remaining 4 of the 56 FPMs are not operating). Our choice of model parameters (see below) requires additional introductory explanations about signal processing steps in the MPU (see \cite{prig16}). The signal line for each FPM is replicated, so that events can be found and processed independently with ``fast'' and ``slow'' measuring chains that use circuits with different time windows. The fast chain (84 ns nominal shaping time) produces time tags with higher precision, while the slow chain (465 ns shaping time) more effectively integrates the total electron yield, with lower noise, providing better measurements of the event energy. Event detections can trigger on either measuring chain, when the rate of change in the signal line exceeds a trigger threshold held in the MPU, per FPM and per chain. The trigger thresholds are chosen to admit a noise rate of $\sim 3$ c/s per FPM, per measuring chain, as measured below 0.25 keV when there are no sources in front of the detectors. Such noise events will be asynchronous, i.e., they will trigger only one measuring chain. On the other hand, X-ray and particle events will usually trigger both measuring chains, yielding a single event that includes two measurements of the event energy. The caveat here is that the fast chain, with a higher noise level than the slow chain, has a lower trigger efficiency at energies below 1 keV. X-ray events from the source that trigger the slow chain, but not the fast chain, are likely to be in the range 0.2--0.6 keV, i.e., above the slow threshold and below the fast threshold.
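The trigger logic just described can be summarized schematically as follows. This is only an illustrative sketch: the energy-equivalent thresholds are hypothetical placeholders for the per-FPM, per-chain values held in the MPU, and the real trigger acts on the rate of change of the signal line rather than on calibrated energies.

```python
def classify_trigger(slow_kev, fast_kev, slow_thresh=0.2, fast_thresh=0.6):
    """Classify an event by which measuring chains it would trigger.
    Thresholds are illustrative stand-ins, not actual MPU settings;
    None means the corresponding chain saw no signal."""
    slow = slow_kev is not None and slow_kev >= slow_thresh
    fast = fast_kev is not None and fast_kev >= fast_thresh
    if slow and fast:
        return "both"       # X-ray or particle: two energy measurements
    if slow:
        return "slow-only"  # likely a soft X-ray (~0.2-0.6 keV)
    if fast:
        return "fast-only"  # triggered the fast chain only
    return "none"
```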
When the travel path in the detector is long, from the point of incidence to the charge-collecting anode at the center of the active Si region, then the size of the charge cloud, and hence the temporal profile of the event, is elongated by charge diffusion. With its longer shaping time, the slow chain is more immune to such effects, since it has a longer time to integrate the charge. Incomplete charge collection will lower the reported event energy, and so the ratio of the slow chain energy and the fast chain energy systematically increases when the point of incidence is near the outer edges of the detector, i.e., beyond the inner ring of the collimator. This is an important detail for the background model, since pre-launch simulations had shown that particle interactions with the detector would generally produce event energies well above the 12 keV limit of the concentrator's effective area --- and thus be rejected --- except for edge-clipping events near the outer edges of the active Si area. \texttt{NICER}'s calibrated event lists are given in the ``pulse invariant'' (PI) convention, where \texttt{PI} is the calibrated energy value from the slow chain in 10 eV units, \texttt{PI\_FAST} is the calibrated energy in the fast chain, and \texttt{PI\_ratio} = \texttt{PI} / \texttt{PI\_FAST}, for each event that triggers both of the measuring chains (and \texttt{PI\_ratio} = INDEF, otherwise). The events from the predicted edge-clipping particles that can mimic X-ray events in \texttt{PI} value must travel a long path to the anode, resulting in increased spread of the charge cloud associated with the increased drift time. The measurement of such events would then show anomalously high values in \texttt{PI\_ratio}, since the ``ballistic deficit'' in the fast chain will be more substantial, compared to the slow chain. The final topic of \texttt{NICER} signal processing that is pertinent to the background model is the system of \texttt{NICER} event flags.
There are six flags, with assigned value 1 or 0, which are interpreted as ``yes'' or ``no'', respectively. Five of these flags tie an event to a particular circuit latch in the MPU, and the circuit is designed to help distinguish events as good or bad for inclusion in scientific analyses. The flags are: first-event-in-packet (useful only for the data pipeline software), triggered the fast chain, triggered the slow chain, forced trigger, undershoot event, and overshoot event. Forced triggers result from commands to sample the signal values in the absence of a trigger from a detector, and they serve to monitor the zero point in the energy calibration of each FPM/MPU signal processing combination. Since launch, forced triggers have been operating at 5 Hz for each FPM. Undershoot events have latched a circuit designed to detect the large negative pulse associated with a detector reset, which causes a high-amplitude negative pulse when an FPM discharges the capacitor that collects ambient charge running through the detector (maintained at $-55\ensuremath{^{\circ}}$C). Overshoot events have latched a different circuit designed to safeguard against large positive pulses (roughly equivalent to 18 keV) that could cause a bit rollover in the analog-to-digital converter. The keV assignments of undershoot and overshoot events are thus meaningless. Any event with a unity value in the forced-trigger, undershoot, or overshoot flag is excluded from the ``good events'' that are passed into the cleaned event lists created by the \texttt{NICER} pipeline. The cleaned event lists are also limited to slow-chain energies in the range 0.2--15 keV. Returning to the topic of background modeling, the strategy to recognize events associated with energetic particles can be summarized as follows.
Most particle events would be excluded as bad events via the overshoot flag, while detector edge-clipping events with in-band amplitude and no overshoot flag would be identified via high values in the event's \texttt{PI\_ratio}. As shown below, \texttt{NICER} in-flight data reveal an additional component with neither of these properties that must be handled in order to predict the background spectrum. \section{Observations and Data Selection} \label{sec:data} The 3C50 model is a phenomenological approach to predict the in-band (0.4--12.0 keV) background spectrum using the observations of the \texttt{NICER} background fields. The parent data set for the model libraries includes all such observations from 2017 July 24, through 2020 March 21. The contributions from each background field are given in Table~\ref{tab:obs}. The selection filters leading to the model libraries are summarized in Table~\ref{tab:select}, and they are described in detail, below. \begin{deluxetable*}{ccccc} \tablenum{1} \tablecaption{Parent Observations for the 3C50 Background Model \label{tab:obs}} \tablewidth{0pt} \tablehead{ \colhead{Target} & \colhead{ObsID first} & \colhead{ObsID last} & \colhead{\# GTIs} & \colhead{Exposure (ks)}} \decimalcolnumbers \startdata BKGD\_RXTE1 & 1012010101 & 3012010106 & 357 & 131.2 \\ BKGD\_RXTE2 & 1012010201 & 3012020201 & 545 & 309.8 \\ BKGD\_RXTE3 & 1012010301 & 3012030104 & 317 & 157.2 \\ BKGD\_RXTE4 & 1012010401 & 2012040241 & 451 & 248.4 \\ BKGD\_RXTE5 & 1012010501 & 2012050232 & 540 & 292.4 \\ BKGD\_RXTE6 & 1012010601 & 3012060201 & 830 & 548.4 \\ BKGD\_RXTE8 & 1012010801 & 3012080102 & 516 & 336.9 \\ \enddata \tablecomments{The sum of these background observations yields 3556 GTIs and 2.024 Ms exposure time.} \end{deluxetable*} Data analyses utilized the NASA HEASoft package, version 6.26.1. Prior software versions had different defaults for the use of \texttt{nimaketime} to define good time intervals (GTIs; see below). 
The raw event lists from the \texttt{NICER} pipeline for the observation IDs (ObsIDs) given in Table \ref{tab:obs} were calibrated for the 2020 gain revision specified by ``GCALFILE'', ``nixtiflightpi20170601v005.fits''. When making the cleaned event lists, the default filters for environmental conditions were adopted, excluding times in the South Atlantic Anomaly (SAA), pointing elevations within 15\ensuremath{^{\circ}}\ of the dark Earth limb and within 30\ensuremath{^{\circ}}\ of the sunlit Earth limb. However, the default filters for bad-event count rates were effectively disabled by using extremely high values (15000) for the maximum rates of overshoots, undershoots, and the relationship between overshoots and magnetic cutoff rigidity. These filter over-rides are required to populate the cleaned event lists with good events during times when the background rates are high. \begin{deluxetable*}{lccl} \tablenum{2} \tablecaption{Data Selections and Filters for the Spectrum Libraries \label{tab:select}} \tablewidth{0pt} \tablehead{\colhead{Selection} & \colhead{\# GTIs} & \colhead{Exposure (ks)} & \colhead{Comment}} \decimalcolnumbers \startdata All data & 3556 & 2024.2 & from Table~1 \\ Selected 50 FPMs operating & 3435 & 1937.4 & \\ Filter noise outliers & 3357 & 1891.1 & any $i$: $nz_i > 100$ and $nz_i > 0.15 nz$ \\ Parameters within 3C50 limits & 3264 & 1818.3 & see Fig.~\ref{fig:ibghrej} \\ \\ Subset \texttt{ISS} night & 1991 & 1068.6 & $nz < 200$, see Fig.~\ref{fig:cellrbg} \\ Filter out outliers & 1947 & 1038.2 & see Fig.~\ref{fig:cellrbg} ; GTIs for night library \\ \\ Subset \texttt{ISS} day & 1273 & 758.6 & $nz >= 200$ \\ Filter for high BG and stage 1 residuals & 1076 & 627.4 & $ibg_{52} < 0.4$ c/s; $-1.0 < (C_{net} + D_{net}) < 1.0$ \\ \enddata \tablecomments{In summary, the 3C50 spectral libraries use 3023 (of 3556) GTIs, corresponding to 1665.6 ks or 82\% of the exposure time. 
There are 1947 GTIs contributing to the night library and 1076 GTIs for the day library.} \end{deluxetable*} The ObsID directories from the \texttt{NICER} pipeline contain data for a given target, accumulated on a given day. In this work, each GTI is an interval of continuous exposure, and for \texttt{NICER} such intervals are usually less than 2 ks because of interruptions imposed by the rotation of the \texttt{ISS} with respect to celestial coordinates, once per 93 min Earth orbit. Many ObsIDs contain more than one GTI. Since the \texttt{NICER} background can change significantly at different locations in the \texttt{ISS} orbit, background modeling is based on the timescale of GTIs, rather than ObsIDs. There is further value to measuring the amount of parameter variability that occurs within a given GTI, so as to exclude it or to redefine the time boundaries to avoid strong flares in the background. Further practical considerations for running the background model are given in \ref{sec:practical}. The tool \texttt{nimaketime} was used to define the GTIs for every ObsID in each of the background fields. This step repeats the same filter choices used to make the revised cleaned event lists (see above). GTI selection was additionally filtered to exclude GTIs with duration less than 60 s, while disregarding any gaps of 1 or 2 s that might be imposed by a telemetry packet loss, which is corrected for, via an adjusted exposure time, by the \texttt{NICER} pipeline. The numbers of selected GTIs and the net exposure time accumulated per background field are given in Table~\ref{tab:obs}. The total yield is 3556 GTIs, averaging 570 s per GTI and accumulating 2.024 Ms. Finally, we choose to build the background spectrum libraries with a selection of 50 (of 56) FPMs. We label FPMs with two digits: the first for the MPU that services it (0-6) and the second for the FPM slot (0-7) in the MPU.
For example, the first FPM on the first MPU is ``00'', while the last FPM on the last MPU is ``67''. In this notation, the six excluded FPMs are the four that are not operating (11, 20, 22, and 60) plus two (14 and 34) that have shown episodes of unreliable spectra and high noise rates, respectively. The selection of the remaining 50 FPMs adds an element of uniformity to the background libraries. However, users can conduct target analyses with any number of selected FPMs, and then apply the 3C50 model under the assumption that the model metrics do not change, detector-by-detector, and the net background spectrum can be simply scaled by the number of selected FPMs. \section{First Stage of the 3C50 Background Model} \label{sec:ngt} The strategy behind the 3C50 model is to sort out the background components from an empirical point of view, based on event properties found in observations of the background fields. The first stage of the model distinguishes background components that have different spatial properties in the detector focal plane, as determined by event distributions that have different values of \texttt{PI\_ratio}. The second stage of the model deals with the soft X-ray excess due to noise encroachment induced by the seepage of sunlight into the Instrument when observations are conducted in \texttt{ISS} sunlight, and this is described in Section~\ref{sec:day}. \subsection{Two Parameters for Background Components Sorted by \texttt{PI\_ratio}} \begin{figure}[ht!] \includegraphics[angle=-90.,width=5.5in]{sample_pi_ratio_2020a.eps} \caption{Sample background observations showing every good event in the plane of event energy vs. \texttt{PI\_ratio}. The curved red line shows the \texttt{NICER} pipeline's boundary between events that appear as if in-focus (left of line), versus events originating far from the detector anode (right of line). The integration areas for the first two background model parameters, $ibg$ and $hrej$, are also shown.
The observations were chosen to illustrate how the two background components are always present, but the fraction of events in each group can vary widely. \label{fig:piratio}} \end{figure} The description of model parameter choices is framed by an examination of Fig.~\ref{fig:piratio}, where events are plotted in the plane of (effective) photon energy (\texttt{PI}) versus \texttt{PI\_ratio}, i.e., the ratio of slow-chain to fast-chain keV values for good events that trigger both measuring chains. Each panel shows vertical spectral tracks centered on \texttt{PI\_ratio} values near 1.0 and in the range 1.7--2.2. The curved red line shows the relationship, \texttt{PI\_ratio} $= 1.1 + 120.0 /$\texttt{PI}, where \texttt{PI} is the slow-chain photon energy in units of 10 eV. This relationship is the standard cut in pipeline data processing, where events to the left of the line, plus events with \texttt{PI\_ratio} = INDEF, are passed to the cleaned event lists. The value of 1.1 limits the exclusion of events to cases where the fast chain energy differs by more than 10\% from the slow chain energy. The second term adds allowances for the broadening of the \texttt{PI\_ratio} values due to statistical noise, especially in \texttt{PI\_FAST}. We note that the events displayed in Fig.~\ref{fig:piratio} are drawn from the unfiltered but calibrated event lists (\texttt{*ufa.evt}) in the pipeline's \texttt{$ObsID$/xti/event\_cl/} directory, which contains all good events (i.e., events with no undershoot, overshoot, or forced trigger flags), with no \texttt{PI\_ratio} screening. The pipeline's \texttt{PI\_ratio} filter effectively separates the two vertical distributions in background events seen in Fig.~\ref{fig:piratio}. The right-side distribution is consistent with the expected particle hits near the edges of the Si drift detector that would mimic good events with anomalously high values in \texttt{PI\_ratio}.
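The pipeline boundary just described amounts to a simple vectorized test on each event. The sketch below assumes NumPy arrays of calibrated \texttt{PI} and \texttt{PI\_FAST} values (10 eV units), with NaN standing in for a missing fast-chain trigger; the function name is ours, not the pipeline's.

```python
import numpy as np

def passes_pipeline_cut(pi, pi_fast):
    """Keep events left of the curve PI_ratio = 1.1 + 120.0 / PI.
    `pi` and `pi_fast` are slow- and fast-chain energies in PI units
    (10 eV); events that did not trigger the fast chain (NaN here) are
    kept, matching the treatment of PI_ratio = INDEF."""
    with np.errstate(invalid="ignore"):
        ratio = pi / pi_fast
    boundary = 1.1 + 120.0 / pi
    return np.isnan(pi_fast) | (ratio < boundary)
```

For a 2 keV event (\texttt{PI} = 200) the boundary is $1.1 + 0.6 = 1.7$, so a ratio near 1.0 is kept while a ratio of 2.0 is rejected.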
On the other hand, the left-side events are indistinguishable from ``in-focus'' X-rays emitted by \texttt{NICER} targets, and we refer to this distribution as the in-focus background component. This label is intended as a comparative reference rather than a provable statement that the XRC is involved in the process of detecting these events. Further considerations about the origin of these events are given below. To capture and monitor the rate of events with high \texttt{PI\_ratio}, we define a ``hatchet'' rejection line, shown in Fig.~\ref{fig:piratio} as the blue vertical line at \texttt{PI\_ratio} = 1.54. This leads to the choice of the first 3C50 model parameter, $hrej$, which is the count rate of hatchet-rejected events with \texttt{PI\_ratio} $\geq 1.54$ in the range 3--18 keV. The high-energy cutoff represents an approximate maximum energy in the \texttt{NICER} calibration, while the lower limit (3 keV) avoids any overlap between the in-focus and hatchet-rejected distributions. The tail-off of rejected events below 3 keV represents the lower efficiency of the fast chain triggers at lower energy, exacerbated by the charge-diffusion pulse broadening that further decreases the probability of detection. There is also laboratory evidence that the gain drops near the detector edges, affecting both measuring chains. However, the energy content of particles is sometimes sufficient to endure all of these effects and trigger a pulse with a telltale high value in \texttt{PI\_ratio}. In this sense, the \texttt{NICER} detector / electronics package has a built-in particle monitor, albeit with low efficiency. \begin{figure}[ht!] \includegraphics[width=4in]{bkgd_ampl_hrej_ibg_2020a.eps} \caption{The in-focus background rate at 0.4--12 keV ($R_{BG}$) versus the first two parameters of the 3C50 model, $ibg$ (top panel) and $hrej$ (bottom panel). The plotted data correspond to all of the 3556 GTIs represented in Table~\ref{tab:obs}.
Rough correlations are seen in each panel, with a steeper dependence in the case of $ibg$. \label{fig:ampl}} \end{figure} Fig.~\ref{fig:piratio} shows two panels of background events, and each one represents an overlay from three GTIs, using different FPMs, selected at widely different times. The GTIs for the left panel were chosen for having roughly half of the events located in the hatchet-rejected region, while the three GTIs selected for the right panel have $\sim 10\%$ of events in the hatchet-rejected region. The intent is to illustrate common features in the energy:\texttt{PI\_ratio} plane, while also showing that the fraction of events in each distribution can vary significantly. To monitor the rate of in-focus events from the background observations (i.e., the vertical track of events near \texttt{PI\_ratio} $\sim 1$ in Fig.~\ref{fig:piratio}), we define $ibg$ as the count rate of in-focus events (i.e., left of the red curve) in the range 15--18 keV. This restriction to high-energy events is needed to limit the $ibg$ capture range to energies well above 12 keV, where the XRC optics have negligible effective area, to avoid contamination of $ibg$ by X-rays from bright sources. Having defined $ibg$ and $hrej$ as the first two parameters for the 3C50 background model, Fig.~\ref{fig:ampl} shows how each parameter varies with the in-focus, in-band background count rate at 0.4--12 keV, hereafter $R_{BG}$. The lower limit of the energy range, i.e., choosing 0.4 keV instead of the XTI sensitivity limit at 0.2 keV, is a hedge against effects due to noise and the optical light leak, which are described in the next section. The primary objective of the background model is to predict the spectrum associated with $R_{BG}$. Fig.~\ref{fig:ampl} shows that both $ibg$ (top panel) and $hrej$ (bottom panel) are roughly correlated with $R_{BG}$, with a steeper dependence in the case of $ibg$. The high degree of variability in $R_{BG}$ is apparent along the horizontal axes.
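Given a GTI's good-event list, the two model parameters defined above reduce to band-limited count rates, as in the following sketch. This is illustrative only: the arrays hold \texttt{PI} values in 10 eV units and \texttt{PI\_ratio} values, with NaN standing in for INDEF (slow-only events).

```python
import numpy as np

def hrej_ibg(pi, pi_ratio, exposure_s):
    """hrej: rate of hatchet-rejected events (PI_ratio >= 1.54, 3-18 keV).
    ibg: rate of in-focus events (left of the pipeline cut, or slow-only)
    in the 15-18 keV band, above the concentrator's effective area."""
    e_kev = pi * 0.01  # PI is in units of 10 eV
    hatchet = (pi_ratio >= 1.54) & (e_kev >= 3.0) & (e_kev < 18.0)
    in_focus = np.isnan(pi_ratio) | (pi_ratio < 1.1 + 120.0 / pi)
    ibg_band = in_focus & (e_kev >= 15.0) & (e_kev < 18.0)
    return hatchet.sum() / exposure_s, ibg_band.sum() / exposure_s
```

Note that a NaN \texttt{PI\_ratio} fails the hatchet comparison but counts as in-focus, mirroring the pipeline treatment of INDEF events.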
The median value is $R_{BG} = 0.87$ c/s, but the distribution, even while ignoring the 1\% high and low extremes, still ranges from 0.33 to 300 c/s. Values of $ibg$ can also vary by many orders of magnitude. Since $ibg$ is the high-energy extension of $R_{BG}$, in an energy range beyond the effective area of the \texttt{NICER} optics, $ibg$ values can be used to normalize the stage 1 library selection in the 3C50 model, tuning the model to converge with the source spectrum at 15--18 keV (considered in extrapolation, since extractions from cleaned event lists terminate at 15 keV under the default pipeline settings). While $hrej$ is a metric for the spatially extended events due to edge-clipping particles, the origin(s) of the in-focus background component, tracked with $ibg$, is not well understood. Background components with \texttt{PI\_ratio} near unity are expected from the cosmic diffuse X-ray background, as well as possible soft X-ray emission from other sources (see Section~\ref{sec:intro}). True X-ray events are expected to appear in-focus, since the metal collimator above the detector surface limits the path of X-rays to radii within 1 mm (3.17 arcmin) displacement from the anode \citep{prig16}. However, the count rate from these sources is expected to yield $R_{BG} \sim 0.5$ c/s over most of the sky, while measured $R_{BG}$ values are sometimes far brighter and highly variable. We are therefore led to view $ibg$ as representing a second particle component that is either unable to penetrate the collimator or is guided to the detector with assistance from the XRC. \\ \\ \subsection{3C50 Library for \texttt{ISS} Night} Fig.~\ref{fig:ibghrej} shows a plot of $ibg$ versus $hrej$ for all of the 3556 GTIs represented in Table~\ref{tab:obs}. The $hrej$, $ibg$ plane is the parent for stage 1 of the 3C50 model.
A $5 \times 7$ grid of parameter values in this plane is used to bin the background spectra, per GTI, and the combined spectrum per grid cell is computed to populate the stage 1 library in the 3C50 model. The cell boundaries are chosen to follow the population pattern, rather than a regular grid. The library cells on the upper left and lower right of the grid are left vacant, and queries to those cells would select the nearest occupied neighbor, moving horizontally along $hrej$. \begin{figure}[ht!] \includegraphics[width=4in]{bkgd_ibg_hrej_2020a.eps} \caption{Plot of $ibg$ versus $hrej$. This plane is divided into five cells (horizontal) covering $hrej < 5.0$ and seven cells (vertical) with $ibg < 10.0$. The cells are referenced by ($ibg$, $hrej$) number with 11 on the lower left and 75 to the upper right. Cell boundaries in units of $ibg$, $hrej$ are given in Table~\ref{tab:stage1}. These intervals define bins in which the background spectra per GTI are combined to create the stage 1 background library for the 3C50 model. The stage 1 library contains 33 such spectra, with cells 15 and 71 unoccupied. \label{fig:ibghrej}} \end{figure} The stage 1 model cells are labeled with two digits: $ibg$ number (1-7) and $hrej$ number (1-5). The layout is shown in Fig.~\ref{fig:ibghrej}. The cell boundaries are given in Table \ref{tab:stage1}, along with the average $R_{BG}$ values, the cell accumulation time, and the exposure-weighted normalization values in $ibg_{lib}$. Library selections must be standardized to a fixed number of FPMs, here chosen to be the maximum user choice of 52. However, users are free to select and specify, for any GTI$_i$, any number of FPMs ($nfpm_i$), along with the model parameter measurements, $ibg_i, hrej_i$. 
The 3C50 model will then map a given GTI to a particular library cell, using the values $ibg_{52} = 52\, ibg_i / nfpm_i$ and $hrej_{52} = 52\, hrej_i / nfpm_i$, and the matching library spectrum is presumed to have the correct spectral shape for that GTI. That spectrum is then re-normalized by a factor $ibg_i / ibg_{lib}$, and the result is the stage 1 prediction for the background spectrum in the 3C50 model. \begin{deluxetable*}{crrccccc} \tablenum{3} \tablecaption{Stage 1 Cell Boundaries and Normalizations \label{tab:stage1}} \tablewidth{0pt} \tablehead{ \colhead{Cell} & \colhead{$R_{BG}$} & \colhead{GTI time} & \colhead{$ibg_{52}$ start} & \colhead{$ibg_{52}$ end} & \colhead{$hrej_{52}$ start} & \colhead{$hrej_{52}$ end} & \colhead{$ibg_{lib}$ norm.} \\ \colhead{($ibg, hrej$)} & \colhead{(c/s)} & \colhead{(ks)} & \colhead{(c/s)} & \colhead{(c/s)} & \colhead{(c/s)} & \colhead{(c/s)} & \colhead{(c/s)} } \decimalcolnumbers \startdata 11 & 0.446 & 44.3 & 0.00 & 0.04 & 0.00 & 0.10 & 0.0218 \\ 12 & 0.493 & 149.4 & 0.00 & 0.04 & 0.10 & 0.15 & 0.0265 \\ 13 & 0.596 & 100.3 & 0.00 & 0.04 & 0.15 & 0.30 & 0.0285 \\ 14 & 0.775 & 28.5 & 0.00 & 0.04 & 0.30 & 0.60 & 0.0317 \\ 15 & ... & ... & 0.00 & 0.04 & 0.60 & 10.0 & ...
\\ 21 & 0.689 & 95.7 & 0.04 & 0.08 & 0.00 & 0.15 & 0.0503 \\ 22 & 0.801 & 109.8 & 0.04 & 0.08 & 0.15 & 0.25 & 0.0517 \\ 23 & 0.855 & 71.4 & 0.04 & 0.08 & 0.25 & 0.35 & 0.0527 \\ 24 & 1.008 & 67.5 & 0.04 & 0.08 & 0.35 & 0.60 & 0.0567 \\ 25 & 1.242 & 8.0 & 0.04 & 0.08 & 0.60 & 10.0 & 0.0583 \\ 31 & 1.103 & 41.0 & 0.08 & 0.15 & 0.00 & 0.20 & 0.1006 \\ 32 & 0.955 & 29.5 & 0.08 & 0.15 & 0.20 & 0.30 & 0.0987 \\ 33 & 1.322 & 43.9 & 0.08 & 0.15 & 0.30 & 0.45 & 0.1013 \\ 34 & 1.302 & 20.4 & 0.08 & 0.15 & 0.45 & 0.60 & 0.1012 \\ 35 & 1.771 & 16.5 & 0.08 & 0.15 & 0.60 & 10.0 & 0.1088 \\ 41 & 2.020 & 18.6 & 0.15 & 0.40 & 0.00 & 0.20 & 0.2047 \\ 42 & 1.724 & 27.2 & 0.15 & 0.40 & 0.20 & 0.35 & 0.2370 \\ 43 & 1.904 & 15.6 & 0.15 & 0.40 & 0.35 & 0.50 & 0.2128 \\ 44 & 2.414 & 20.6 & 0.15 & 0.40 & 0.50 & 0.65 & 0.2314 \\ 45 & 2.673 & 16.9 & 0.15 & 0.40 & 0.65 & 10.0 & 0.2318 \\ 51 & 2.683 & 8.6 & 0.40 & 1.00 & 0.00 & 0.25 & 0.5002 \\ 52 & 2.763 & 20.1 & 0.40 & 1.00 & 0.25 & 0.40 & 0.5659 \\ 53 & 3.293 & 10.0 & 0.40 & 1.00 & 0.40 & 0.60 & 0.6039 \\ 54 & 4.446 & 7.5 & 0.40 & 1.00 & 0.60 & 0.80 & 0.7110 \\ 55 & 4.288 & 9.3 & 0.40 & 1.00 & 0.80 & 10.0 & 0.5986 \\ 61 & 4.527 & 7.3 & 1.00 & 3.00 & 0.00 & 0.25 & 1.4975 \\ 62 & 6.080 & 13.0 & 1.00 & 3.00 & 0.25 & 0.40 & 1.7515 \\ 63 & 6.644 & 4.9 & 1.00 & 3.00 & 0.40 & 0.63 & 2.0238 \\ 64 & 7.604 & 11.4 & 1.00 & 3.00 & 0.63 & 0.85 & 1.6836 \\ 65 & 11.644 & 7.7 & 1.00 & 3.00 & 0.85 & 10.0 & 2.1318 \\ 71 & ... & ... & 3.00 & 10.00 & 0.00 & 0.25 & ... \\ 72 & 9.944 & 2.5 & 3.00 & 10.00 & 0.25 & 0.40 & 3.8110 \\ 73 & 16.477 & 3.4 & 3.00 & 10.00 & 0.40 & 0.70 & 5.2374 \\ 74 & 15.311 & 2.9 & 3.00 & 10.00 & 0.70 & 0.90 & 3.6876 \\ 75 & 17.046 & 4.4 & 3.00 & 10.00 & 0.90 & 10.0 & 4.3715 \\ \enddata \end{deluxetable*} Before proceeding to complete the stage 1 library spectra, we examine the distribution in $R_{BG}$ that is found in each cell, as seen in the upper panel of Fig.~\ref{fig:cellrbg}. 
Only the 1991 GTIs sorted for \texttt{ISS}-night conditions (see Table~\ref{tab:obs}) are included here. Cells are noted by bin value, [$ibg: i=1-7, hrej: j=1-5$], and we use the cell number plus the fractional order of membership within a given cell to create an artificial horizontal axis that stretches out the data points simply for viewing purposes. Data from cell $i,j$ begin at value $10i + 2(j-1) + 1$, and the last GTI in the cell is plotted one unit later. For example, the GTIs in cell 11 are plotted in the range $11 < x < 12$, while cell 12 has range $13 < x < 14$, and cell 75 has range $79 < x < 80$. The larger gaps indicate the vacant cells, 15 and 71. \begin{figure}[ht!] \includegraphics[angle=-90.,width=5.5in]{cells_3c50_pub2.eps} \caption{top panel: Values of $R_{BG}$, the in-focus and in-band count rate for the background GTIs, are plotted for each successive cell of the stage 1 library. The cells are counted by their $ibg, hrej$ number, ranging 11 to 75. The cell id numbers and their ordinal number within the cell are used to stretch out the points along the x-axis (see text). The dashed line shows the high-rate filter that clips the brightest outliers to be excluded when computing the library spectra. bottom panel: Reduced chi square values for each cell, after filtering out the high points (top panel), assuming that $R_{BG}$ is constant within each cell. The levels of intrinsic scatter in each cell are 20--40\%, motivating a step to normalize each library selection using the $ibg_i$ value of each target spectrum. \label{fig:cellrbg}} \end{figure} Even after filtering out the points above the dashed line in Fig.~\ref{fig:cellrbg}, the variations in $R_{BG}$ within each cell are much larger than the statistical uncertainties.
This is shown in the lower panel of Fig.~\ref{fig:cellrbg}, where the reduced chi square values ($\chi^2_\nu$) are shown for each cell, after the bright cases are removed, with the assumption that $R_{BG}$ is constant within a cell. For cell 11, where the lowest $R_{BG}$ rates limit the statistical precision, we find $\chi^2_\nu = 8.7$, and $\chi^2_\nu$ increases for the cells with higher rates (i.e., better statistics). The rms variations in $R_{BG}$ within each cell correspond to intrinsic fractional fluctuations in the range 20--40\% of the mean values. This result suggests that the $ibg, hrej$ parameter scheme is far from a deterministic model, and additional background parameters are likely to be important. These results also motivate the strategy to normalize the library selections using the $ibg_i$ value of a given source spectrum, to mitigate the fractional errors of the cell parents that would otherwise be inherited. Thus, the 3C50 model assumes that the spectral shape per library spectrum is appropriate, while the normalization is fine-tuned to the target spectrum at hand. \begin{figure}[ht!] \includegraphics[width=5.5in]{bg_group_3C50_ngt8_2020a.eps} \caption{The stage 1 library for the 3C50 background model consists of these 33 spectra arranged with $ibg$ number (1--7) increasing vertically and $hrej$ number (1--5) increasing to the right. The spectra are substantially brighter and flatter with increasing $ibg$ cell number (vertical steps), while a more shallow brightness increase, a stronger soft component at 0.2--4 keV, and brighter emission lines are seen with increased $hrej$ cell number (left to right). Cell 11 (lower left) is the closest that the library comes to an isolation of the cosmic diffuse X-ray background.
\label{fig:lib1}} \end{figure} We note that for $R_{BG}$ or related quantities of central importance to background modeling, the exercise represented in Fig.~\ref{fig:cellrbg} can be conducted for any hypothetical set of model parameters. Intrinsic variances within cells and the progression of variances along each parameter axis can quantify how the cells organize the background measurements with minimal variance, and whether (from measurement slopes within each cell) the background properties are divided into a grid with sufficient resolution. The library spectra are made for each cell after filtering out the data points that are above the dashed line in the top panel of Fig.~\ref{fig:cellrbg}, to avoid over-weighting the results by such cases. There is a library spectrum for each cell, which is simply the sum of all the counts in the selected GTIs of that cell, divided by the total exposure time. Table~\ref{tab:select} quantifies all of the selection and filtering steps used to construct the 33 spectra for the stage 1 library, using 1947 selected GTIs (of 1991 during \texttt{ISS} night) that are below the dashed line in Fig.~\ref{fig:cellrbg}. Day and night assignments are made on the basis of the $nz$ value, as explained in the next Section. The stage 1 library spectra are displayed in Fig.~\ref{fig:lib1}, using a rebinning scheme that over-samples the XTI resolution uniformly by a factor $\sim 3$. In the range 0.2--15 keV, the number of combined $PI$ bins is 3 (0.2--2.48 keV), 4 (2.48--6.00 keV), 5 (6.0--12.0 keV) and 6 (above 12 keV). As expected, the spectra are substantially brighter and flatter with increasing $ibg$ cell number (vertical steps), and there is an appearance of the Si K--{$ \alpha$} emission line (1.74 keV) in the highest two levels of $ibg$. 
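The within-cell consistency exercise of Fig.~\ref{fig:cellrbg} amounts to a constant-rate $\chi^2_\nu$ test, which can be sketched as follows. This minimal version assumes Gaussian errors on the per-GTI rates and is illustrative only, not the production analysis code.

```python
# Minimal sketch of the constant-rate test behind the lower panel of
# Fig. [fig:cellrbg]: reduced chi-square of per-GTI R_BG rates against
# their mean, assuming Gaussian statistical errors.
def reduced_chi2(rates, errors):
    n = len(rates)
    mean = sum(rates) / n
    chi2 = sum(((r - mean) / e) ** 2 for r, e in zip(rates, errors))
    return chi2 / (n - 1)  # nu = n - 1 after fitting the constant
```

Values of $\chi^2_\nu$ well above unity, as found for every cell, indicate intrinsic scatter beyond counting statistics.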
With increasing $hrej$ (i.e., from left to right), there is a more shallow increase in continuum brightness, a stronger soft component at 0.2--3 keV, and increasing emission lines indicating fluorescence at 7.47 keV (Ni K--{$\alpha$}), 9.71 keV (Au L--{$\alpha$}), and 11.44 keV (Au L--{$\beta$}). These changing spectral features over the surface of the $ibg, hrej$ plane offer some validation for the utility of choosing those model parameters. The background component tied to $ibg$ is the main source of the $R_{BG}$ count rate, while the $hrej$ parameter, despite its exclusion from $R_{BG}$ via $PI\_ratio$ filtering in the \texttt{NICER} pipeline, signals systematic changes in the spectrum of $R_{BG}$ for both the continuum shapes and the characteristics of emission lines. \\ \\ \section{Second Stage of the 3C50 Background Model} \label{sec:day} The second stage of the 3C50 background model is required to subtract an independent soft X-ray component tied to observations during \texttt{ISS} daytime. The noise in the slow chain ($\sim 3$ c/s per FPM; see Section~\ref{sec:intro}) always creates a spectral component that is centered near 0.1 keV, and it is usually invisible at 0.3 keV. However, it was recognized soon after launch that all of the \texttt{NICER} FPMs exhibit systematically higher levels of this low-energy noise when the XTI is illuminated by sunlight during the course of the \texttt{ISS} orbit (e.g., \citealt{bogd19}). Optical photons cannot trigger events in the MPUs, as such, but they liberate Si electrons, causing a number of secondary effects. The increased detector current, in the presence of optical light, elevates the undershoot rate (i.e., detector reset rates) and also causes modest changes in detector gain and spectral resolution. The pipeline's gain calibration makes corrections for such effects, while the changes in spectral resolution are also predictable, again allowing appropriate corrections for science investigations.
However, the spectral broadening of the low-energy electronic noise can increasingly intrude above 0.2 keV as the optical load becomes more intense. This is illustrated below in Subsection 4.2. Thus, the tail of the low-energy noise distribution can encroach on a portion of the in-band source spectrum during \texttt{ISS} daytime, making it necessary to include a quantification of this effect in the background model. There is no expected or measured correlation between excess noise and either $ibg$ or $hrej$, and so we treat the daytime soft excess as an independent spectral component to be handled in a second stage of the 3C50 model. When a GTI occurs during \texttt{ISS} daytime, the derived spectra from model stages 1 and 2 are simply added together to form the predicted background spectrum. \\ \subsection{Third Model Parameter for Soft X-ray Excess during \texttt{ISS} Daytime} \begin{figure}[ht!] \includegraphics[width=3.0in]{bkgd_resid_step1_pub2.eps} \caption{The 3C50 model was applied to GTIs of the background observations using only the stage 1 library. Residuals from this exercise are shown at 0.4--12 keV (top panel) and 0.3-0.4 keV (bottom panel). The plot symbols distinguish GTIs during \texttt{ISS} night (black cross) and \texttt{ISS} day (red triangle). The positive residuals during \texttt{ISS} daytime are associated with the seepage of sunlight into the XTI, which expands the $\sim 0.1$ keV noise component to the point that it can encroach into the soft X-ray region that is valuable for \texttt{NICER} science. \label{fig:resid1}} \end{figure} The need to correct for low-energy noise in spectra obtained during \texttt{ISS} daytime is apparent in the residuals found after applying stage 1 (only) of the 3C50 model to the background spectra. Fig.~\ref{fig:resid1} shows the stage 1 residuals in two energy bands: 0.4--12 keV and 0.3-0.4 keV. 
These model residuals should display an average value of zero, in all bands, if the background model has completed its job. GTIs during \texttt{ISS} night are plotted with a black cross, and GTIs during \texttt{ISS} day are plotted with a red triangle. The stage 1 residuals are plotted versus $R_{BG}$, i.e., the original background rate at 0.4--12 keV. The horizontal axis is truncated at $R_{BG} = 3.0$, covering 89.8\% of GTIs, for a better view of the details. Residuals in the range 0.4--12 keV (top panel) are fairly well contained during both day and night. But in the range 0.3--0.4 keV (bottom panel), a region in soft X-rays that is valuable to \texttt{NICER} science investigations, the GTIs during \texttt{ISS} daytime show the encroachment of noise. The vertical axis scale is chosen to be identical in both panels to highlight the significance of the problem. To incorporate the effect of the soft X-ray excess during \texttt{ISS} daytime, we choose to monitor the entire low-energy noise component for the 50 FPMs during each GTI. The third parameter for the 3C50 background model, $nz$, is defined as the total count rate in the slow chain in the range 0.0--0.25 keV. The GTIs chosen for the stage 1 (nighttime) spectral library show a primary distribution peak in $nz$ at $156 \pm 5$ c/s, consistent with the $\sim 3$ c/s/chain target for trigger threshold settings for 50 FPMs (see Section~\ref{sec:intro}). During the analyses leading to the 3C50 model, it was determined that the effects of optical light become measurable at 0.3--0.4 keV only when $nz > 200$ c/s. This value was then used to distinguish day and night categories for background GTIs in the 3C50 model. Quantification of the relationships between $nz$ and the soft excess in various energy bands is included in Table~\ref{tab:stage2}.
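The day/night decision rule just described can be sketched as follows, assuming only the 200 c/s threshold and the standard 52-FPM scaling; the names are illustrative.

```python
# Illustrative sketch of the nz-based day/night split in the 3C50
# model: scale the measured noise rate to the 52-FPM standard and
# compare with the 200 c/s threshold.
NZ_DAY_THRESHOLD = 200.0  # c/s, at the 52-FPM scale

def is_iss_day(nz_i, nfpm_i):
    """True when the GTI is treated as ISS daytime by the model."""
    return nz_i * 52.0 / nfpm_i > NZ_DAY_THRESHOLD
```

Note that a GTI with half the FPMs active reaches the daytime threshold at half the raw $nz$ rate, since the comparison is made on the 52-FPM scale.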
\subsection{3C50 Library for \texttt{ISS} Daytime} An empirical strategy is adopted to model the soft excess during \texttt{ISS} daytime with an additional one-dimensional set of spectra that comprise the stage 2 library in the 3C50 model. The stage 2 library captures the mean soft-excess spectra left behind by stage 1 of the background model, using 12 steps in value of $nz$, as given in Table~\ref{tab:stage2}. This library strategy appears to offer better performance, compared to alternative efforts to fit the noise component with a function with broad wings, e.g., a Lorentzian or modified Gaussian. \begin{deluxetable*}{ccccccc} \tablenum{4} \tablecaption{Stage 2 levels and Quantified Soft Excess \label{tab:stage2}} \tablewidth{0pt} \tablehead{ \colhead{Level} & \colhead{Min. $nz_{52}$ c/s} & \colhead{Max. $nz_{52}$ c/s} & \colhead{Normalized $nz_{lib}$ c/s} & \colhead{S0 c/s} & \colhead{S1 c/s} & \colhead{A band c/s}} \decimalcolnumbers \startdata 01 & 200 & 215 & 202.81 & 0.159 & 0.011 & -0.010 \\ 02 & 215 & 250 & 228.52 & 0.264 & 0.022 & -0.004 \\ 03 & 250 & 300 & 277.36 & 0.491 & 0.037 & 0.001 \\ 04 & 300 & 400 & 346.98 & 2.806 & 0.120 & 0.030 \\ 05 & 400 & 500 & 457.51 & 2.091 & 0.141 & 0.007 \\ 06 & 500 & 600 & 547.25 & 2.922 & 0.184 & 0.037 \\ 07 & 600 & 750 & 683.16 & 5.032 & 0.307 & 0.019 \\ 08 & 750 & 900 & 810.45 & 7.937 & 0.408 & 0.026 \\ 09 & 900 & 1100 & 1003.81 & 12.466 & 0.632 & 0.061 \\ 10 & 1100 & 1300 & 1198.35 & 31.528 & 1.230 & 0.114 \\ 11 & 1300 & 1600 & 1393.30 & 60.267 & 1.901 & 0.132 \\ 12 & 1600 & 0 & 1796.66 & 80.318 & 2.493 & 0.173 \\ \enddata \tablecomments{The last 3 columns give the soft excess rates per 50 FPMs, and the energy bands are S0: 0.2--0.3 keV, S1: 0.3--0.4 keV, A band: 0.4--1.0 keV} \end{deluxetable*} Starting with 1273 daytime GTIs, we apply two filters before combining the spectra within the designated levels in $nz$ (see Table~\ref{tab:stage2}). 
Both filters are intended to reduce systematic problems that would be inherited by the stage 2 library. The first filter limits the input GTIs to moderate count rates, using $ibg_{52} < 0.4$ c/s, which corresponds to the first four $ibg$ levels (of 7) in stage 1. The second filter excludes cases in which the stage 1 residuals at 2--12 keV (i.e., away from the soft excess) are outside the range $\pm 1.0$ c/s. This condition screens out the GTIs with stage 1 background spectra that deviate from the predicted one. These filters exclude 137 and 60 GTIs, respectively. The parent spectra for the stage 2 library then consist of 1076 GTIs, amounting to 83\% of the total daytime exposure (see Table~\ref{tab:select}), divided into 12 $nz_{52}$ levels, as defined in Table~\ref{tab:stage2}. \begin{figure}[ht!] \includegraphics[width=4in]{bg_group_3C50_day_pub.eps} \caption{The stage 2 library for the 3C50 background model consists of 12 spectra that represent the soft excess caused by various levels of sunlight intrusion into the XTI. These spectra were determined from the residuals of the stage 1 model for 1076 GTIs during \texttt{ISS} day. The 12 spectra correspond to different levels in the $nz_{52}$ parameter, which closely tracks the amount of the optical light leak. \label{fig:lib2}} \end{figure} The stage 2 model spectra are shown in Fig.~\ref{fig:lib2}. The amplitude and extent in keV of the soft excess increases with the $nz_{52}$ level, as expected. These spectra have been smoothed with a 15 PI-bin ``boxcar'' at energies above 0.4 keV. Above 0.7 keV, the continuum levels are very faint, even at the highest $nz$ rates (i.e., levels 10--12), where the integrated count rates correspond to an addition of less than 0.06 c/s to the count rate (i.e., 0.7--12 keV) during \texttt{ISS} daytime. The normalization scheme for the stage 2 library selection follows the practices used for the stage 1 library. 
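The two input filters described above can be sketched as a simple predicate; the argument names are illustrative, and the thresholds are those quoted in the text.

```python
# Sketch of the two screening filters applied to daytime GTIs before
# they are combined into the stage 2 library: a moderate-rate cut on
# ibg_52 and a cut on the stage 1 residual at 2-12 keV.
def passes_stage2_input_filters(ibg52, resid_2_12keV):
    moderate_rate = ibg52 < 0.4               # first four ibg levels only
    clean_stage1 = abs(resid_2_12keV) <= 1.0  # c/s, away from soft excess
    return moderate_rate and clean_stage1
```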
For any spectrum ($GTI_i$), a user can select and specify any number of FPMs ($nfpm_i$), along with the noise level $nz_i$ that is consistent with $nfpm_i$. The stage 2 library intervals are again based on a standard 52-FPM scale (Table~\ref{tab:stage2}), so that a library selection is based on the value $nz_{52} = nz_i \times 52 / nfpm_i$. If $nz_{52}$ corresponds to level $j$ of the stage 2 library, then that spectrum is selected, re-normalized by a factor $nz_i / nz_j$, and then added to the background prediction in stage 2 of the 3C50 model. Table~\ref{tab:stage2} also specifies the count rates of library spectra integrated for the softest \texttt{NICER} bands, $S0$ (0.2--0.3 keV), $S1$ (0.3--0.4 keV), and $A$ (0.4--1.0 keV). The Table quantifies the soft excess that would be suffered if stage 2 of the model were ignored. Stage 2 is therefore a required part of the 3C50 model for studies of faint and soft X-ray sources, including the rotation-powered pulsars that are the prime targets for \texttt{NICER}, since their count rates are often $< 1$ c/s. Table~\ref{tab:stage2} can further help to evaluate residual count rates in background-subtracted spectra for science targets. We retain our conventions for labeling \texttt{NICER} energy bands, and we use the subscript ``net'' to indicate count rate queries applied to background-subtracted spectra. The manner in which noise events leak into the different energy bands (Table~\ref{tab:stage2}) gives some guidance as to how to use $S0_{net}$ as a quality metric to estimate, per GTI, the likely level of residual contamination that may be present in $S1_{net}$, which we want to preserve for science. This is relevant because systematic errors in the background model can leave behind many more counts in $S0_{net}$, compared to the X-ray brightness of the target, since the effective area of the \texttt{NICER} XTI is very low in $S0$ and this band is strongly attenuated by absorption in the interstellar medium.
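The stage 2 level lookup and renormalization rule can be sketched as follows, using the $nz_{52}$ boundaries of Table~\ref{tab:stage2} (level 12 is open-ended above 1600 c/s); the function names are illustrative.

```python
# Sketch of the stage 2 runtime selection: locate the nz_52 level from
# the Table 4 boundaries, then renormalize the selected library
# spectrum by nz_i / nz_lib before adding it to the stage 1 prediction.
NZ_EDGES = [200, 215, 250, 300, 400, 500, 600, 750, 900, 1100, 1300, 1600]

def nz_level(nz52):
    """Return the stage 2 level (1-12), or None during ISS night."""
    if nz52 <= NZ_EDGES[0]:
        return None  # nz_52 <= 200 c/s: no stage 2 component needed
    for j in range(len(NZ_EDGES) - 1):
        if NZ_EDGES[j] < nz52 <= NZ_EDGES[j + 1]:
            return j + 1
    return 12  # open-ended top level above 1600 c/s

def stage2_component(nz_i, nz_lib, lib_spectrum):
    """Renormalized daytime spectrum to add to the stage 1 prediction."""
    norm = nz_i / nz_lib
    return [c * norm for c in lib_spectrum]
```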
Depending on the brightness and softness of a given X-ray target, such filtering steps using $S0_{net}$ can, for example, inform users as to whether the soft X-ray light curve (e.g., 0.3--2.0 keV) is free from contaminated GTIs, or when spectral fitting down to 0.3 keV is likely to be safe. We offer specific recommendations and examples for filtering results via the background-subtracted spectra, in part using $S0_{net}$, in Sections \ref{sec:practical} and \ref{sec:lc}, below. \section{Model Evaluation} \label{sec:eval} To close the loop on the background observations, we apply the full 3C50 model to all of the 3477 background GTIs that have parameter values within the model limits. Residual count rates are shown vs. $R_{BG}$ in Fig.~\ref{fig:resid2}. The top panel displays residuals in the in-band energy range ($R_{BGnet}$), and the bottom panel shows residuals at 0.3--0.4 keV ($S1_{net}$). The night/day observations are distinguished with a black cross / red triangle, respectively. What can these residual rates tell us about systematic uncertainty when applying the 3C50 background model? Considering first the in-band residuals (top panel), the 3C50 model is shown to be most effective when the background count rate is low. The need to bifurcate the evaluation into high and low count rates is tied to the pattern of points in the top panel. When $R_{BG} < 2$ c/s, which corresponds to 82\% of all background GTIs, then $R_{BGnet}$ has rms value 0.33 c/s. In Sections \ref{sec:practical} and \ref{sec:lc} below, we show that the quality of these results can be improved by quality-filtering the background-subtracted spectra in off-target energy bands. In Fig.~\ref{fig:resid2}, the 3C50 model residuals above 2 c/s become more random, losing the population near zero, in marked contrast with the bottom panel. We interpret this as evidence that an additional model parameter, which has not been identified, has first-order significance when the background rate is high.
Below, we also provide methods to identify and filter out some of the GTIs associated with high background rates, to protect the integrity of \texttt{NICER} science while further studies of the \texttt{NICER} background go forward. \begin{figure}[ht!] \includegraphics[width=3in]{bkgd_resid_3c50_pub2.eps} \caption{Final residuals from the application of the 3C50 model to the background observations, plotted versus the original count rate at 0.4--12 keV, $R_{BG}$. The horizontal axis is truncated at 5 c/s for clarity; only 6\% of the observations exceed this level. Model residuals are shown at 0.4--12 keV ($R_{BGnet}$, top panel) and 0.3--0.4 keV ($S1_{net}$, bottom panel). The plot symbols distinguish GTIs during \texttt{ISS} night (black cross) and \texttt{ISS} day (red triangle). \label{fig:resid2}} \end{figure} The bottom panel of Fig.~\ref{fig:resid2} shows very small residuals in $S1_{net}$ during \texttt{ISS} nighttime (black crosses), but the daytime GTIs have higher residuals, particularly when the $nz$ rates are high. High residuals in $S1_{net}$ are matched with much higher residuals in $S0_{net}$, and this provides a method to use $S0_{net}$ to safeguard $S1_{net}$ or $A_{net}$, taking guidance from the relationship between noise leaks in $S0$, $S1$, and the $A$ band given in Table~\ref{tab:stage2}. Users can screen for problematic GTIs by choosing a maximum tolerable light leak in $S1_{net}$ or $A_{net}$ for their science, finding that level in the $S1$ or $A$ column of Table~\ref{tab:stage2}, and then using the corresponding $S0$ value as the filter criterion for $S0_{net}$ to exclude GTIs that are likely to exceed the chosen noise limit. Systematic differences within any $nz$ bin in the Stage 2 library will leave residuals of either sign in $S0_{net}$, $S1_{net}$, and $A_{net}$, and filtering efforts should mirror the rejection criteria accordingly. \begin{figure}[ht!]
\includegraphics[angle=-90.,width=5.0in]{bkgd_resid_lat_lon_pub.eps} \caption{Map of the background observations in Earth longitude and latitude, with the magnitude of 3C50 model residuals, $R_{BGnet}$, represented by the symbol type. The c/s ranges and symbols are: $\pm 0.1$ (black ``x''); $\pm 1.0$, excluding the previous level (small black triangle); $\pm 2.0$, excluding previous levels (cyan filled circles); $-5.0$ to $-2.0$ (medium size magenta square); 2.0 to 5.0 (medium green triangle); $< -5.0$ (large red triangle); and $ > 5.0$ (large blue square). The points terminate in latitude at the inclination of the \texttt{ISS} orbit ($52 \deg$). The highest residual count rates from the 3C50 model reside in the polar horns in the \texttt{ISS} orbit. The region void of points in the lower-right quadrant is the exclusion zone for the SAA, and there are no indications of problems near the chosen SAA boundaries. \label{fig:residmap}} \end{figure} The origin of GTIs with high background model residuals is revealed in Fig.~\ref{fig:residmap}, where different intervals in $R_{BGnet}$ (see the Fig. caption) are plotted on a grid of Earth longitude and latitude, using the orbit location at the midpoint of each GTI. It is clear that the GTIs with the largest residuals coincide with the polar horns in the \texttt{ISS} orbit. The first two intervals (i.e., residuals within $\pm 1.0$ c/s) account for 90\% of all GTIs, while the intervals with largest residuals (1.2\% of the total and $|R_{BGnet}| > 5.0$ c/s) are largely (86\%) confined to high latitude: $|lat| > 42.0 \deg$. On the other hand, the \texttt{NICER} exclusion zone for the SAA (Southern area void of points) appears to effectively exclude any background-related problems. This exclusion zone coincides with keyword \texttt{NICER\_SAA} = 1 in the pipeline's information files.
\section{Practical Considerations for the 3C50 Background Model} \label{sec:practical} \subsection{Implementing the 3C50 Background Model} The prototype ftool task ``\texttt{nibackgen3C50}'' is the recommended tool to run the 3C50 background model. Although not yet formally part of the HEASoft \texttt{NICER} software suite, it is compliant with NASA HEASARC standards and is supported by the \texttt{NICER} Guest Observer Facility. There are two time intervals that are important in the use of \texttt{nibackgen3C50}. The first is the particular investigation interval for which a predicted background spectrum is desired. This is driven by the input given to the tool, which is nominally an ObsID, i.e., the reference number and top level directory for a daily accumulation of GTIs for a single target. Users can obtain background spectra on timescales shorter than an ObsID (which generally contains multiple GTIs) by alternatively inputting the combination of a single unfiltered event file and GTI file. In either case, one background spectrum is output, per call to \texttt{nibackgen3C50}. Thus, users who plan to investigate target spectra on timescales shorter than one day would sort the unfiltered event files (i.e., the pipeline's \$ObsID/xti/event\_cl/ni*ufa.evt.gz files) into a series of smaller files with the intended time boundaries and then run \texttt{nibackgen3C50} sequentially on these files along with their associated GTI files. The second important time interval is internal to \texttt{nibackgen3C50}, which generally computes the background in sub-intervals and then provides the exposure-weighted results in the output file. The sub-intervals may be the GTIs within the input file, or a shorter timescale directed by the user. Sub-intervals are never allowed to cross GTI boundaries, since the gaps between GTIs can be many hours, with a likelihood of different background conditions on either side of the gap.
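The exposure weighting of sub-interval results can be sketched as follows; this is a minimal illustration of the combination rule, not the tool's internal code.

```python
# Sketch of the exposure weighting applied to per-sub-interval
# background spectra: each output channel is the exposure-weighted
# mean of the sub-interval count rates.
def exposure_weighted(spectra, exposures):
    """spectra: equal-length count-rate spectra, one per sub-interval;
    exposures: matching sub-interval exposures in seconds."""
    total = float(sum(exposures))
    nchan = len(spectra[0])
    return [sum(s[k] * t for s, t in zip(spectra, exposures)) / total
            for k in range(nchan)]
```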
The background rate systematically varies with the \texttt{ISS} location in its 93 min orbit. As noted above, \texttt{NICER} GTIs are seldom as long as 2 ks, and a typical monitoring program has an average GTI $\sim$600 s, or 11\% of the orbit. As a fraction of the \texttt{ISS} orbit, GTIs are an acceptable choice for \texttt{nibackgen3C50} sub-intervals. Shorter intervals are also acceptable, but below $\sim$100 s, Poisson noise in $ibg$, the parameter that normalizes the nighttime library selections, can be a concern. Another option of \texttt{nibackgen3C50} is the ability to control the FPM selections by listing the ones to ignore. Then, for each sub-interval, \texttt{nibackgen3C50} reads the event lists and calculates the average count rates in $ibg$, $hrej$, and $nz$ for the selected FPMs, scales these rates to the level of 52 FPMs to make library selections (Tables \ref{tab:stage1} and \ref{tab:stage2}), normalizes the nighttime library selection by $ibg / ibg_{lib}$ (Table \ref{tab:stage1}), and then, if $nz_{52} > 200$, adds the selected daytime library spectrum, normalized by $nz / nz_{lib}$ (Table \ref{tab:stage2}). Then, as noted above, the sub-interval spectra are exposure weighted to produce the modeled background spectrum. Further information about the control of time intervals and other command parameters for \texttt{nibackgen3C50} is available via the HEASARC \texttt{NICER} tools website \footnote{\url{https://heasarc.gsfc.nasa.gov/docs/nicer/tools/nicer\_bkg\_est\_tools.html}}, from where it may be downloaded and locally installed. Additional implementation notes are as follows. The 3C50 model enforces a minimum $ibg_{52}$ value of 0.016 c/s, to avoid cases where short intervals and low $ibg$ rates would lead to occasions when $ibg_i$ is zero, and the background prediction would also be zero, since $ibg_i / ibg_{lib}$ is the normalization factor for the stage 1 contribution to the predicted background.
This lower limit on $ibg_{52}$ was estimated from the distributions in $R_{BG}$ and $ibg_{52}$ for the 3556 GTIs examined in this study. Finally, the current version of the \texttt{nibackgen3C50} tool described above allows for application of the $hbg_{net}$ quality check, with the user selecting the maximum allowed absolute value. The $S0_{net}$ check is not implemented in the current release, but will be in subsequent releases. The \texttt{NICER} data archive contains event files that were calibrated with a series of gain solutions, which can be identified by the keyword ``GCALFILE''. Users are recommended to bring their data sets to a uniform calibration level with the ``nicerl2'' tool, using either the calibration used in this paper, nixtiflightpi20170601v005.fits (2020), or the more recent one, nixtiflightpi20170601v006.fits (2021), for which there are no differences below 12 keV. The \texttt{nibackgen3C50} tool attempts to match the GCALFILE in the input event lists with an appropriate set of library spectra, and 3C50 model libraries have been prepared for two previous gain calibrations, governed by GCALFILEs nixtiflightpi20170601v002.fits (2018) and nixtiflightpi20170601v004.fits (2019). By default, if a match cannot be made, then the user is warned, but a background spectrum is produced using the most recent model library. Tables \ref{tab:stage1} and \ref{tab:stage2} and the details in this Section provide sufficient information (i.e., cell boundaries, library normalizations, minimum $ibg_{52}$ value, and GCALFILE issues) for users who wish to download the model library files, which are a simple directory of standard ``pha'' files, and script their own implementation of the 3C50 background model.
The calculation steps would be the same as those given above in the functional outline of \texttt{nibackgen3C50}, and the scripts would need to include commands to sort out the events corresponding to definitions of $ibg$, $hrej$, and $nz$, e.g., using ``fselect'' on the unfiltered and calibrated event lists, given choices of time intervals and lists of FPMs (``DET\_ID'') to exclude. The conversion of the model-parameter event lists to event rates would complete the assembly of required 3C50 parameters: $N_{FPM}$, $ibg$, $hrej$, and $nz$, for each time interval needing a background prediction. \subsection{Variations in {\it ibg}, {\it hrej}, and {\it nz} within a GTI} The selection of GTIs to populate the model libraries did not have an explicit screening step for variations in the values of the model parameters within a GTI (Table~\ref{tab:select}). We address this issue here. To evaluate the 3C50 parameter values for each GTI interval, we extracted both the spectra and light curves (1 s bins) for each parameter. The light curves were routinely used to calculate the mean ($\mu$), standard deviation ($\sigma$), and the variance in excess of Poisson statistics ($\sigma_{int}^2 = \sigma^2 - \mu$) for each parameter. Trends with brightness were investigated, and we explored several filtering criteria and their ramifications. For $ibg$ and $hrej$, the low values of $\mu$ make it inappropriate to use the fraction of intrinsic deviations (i.e., $\sqrt{\sigma_{int}^2} / \mu$) as a screening tool. Instead, an ad hoc relationship was favored, with a rejection criterion: $\sigma_{int} > 1.0 + 0.2 \times \mu$. At the lowest count rates (see Fig.~\ref{fig:ampl}), the intrinsic deviations must be above 1 c/s to prompt rejection, while at the highest rates (i.e., 10 c/s), the intrinsic deviation must be 20\% of the mean rate.
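This rejection criterion can be sketched as follows, operating on a 1 s binned light curve of $ibg$ or $hrej$ rates; the function is illustrative and assumes Poisson counting statistics for the excess-variance estimate.

```python
# Sketch of the ad hoc variability screen for ibg or hrej light curves
# (1 s bins): compute the deviation in excess of Poisson statistics
# and compare with the 1.0 + 0.2 * mu rejection threshold.
def reject_variable(rates):
    n = len(rates)
    mu = sum(rates) / n
    var = sum((r - mu) ** 2 for r in rates) / n
    sigma_int2 = max(var - mu, 0.0)  # variance in excess of Poisson
    return sigma_int2 ** 0.5 > 1.0 + 0.2 * mu
```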
Considering all of the GTIs with average parameter values within the 3C50 model limits, $hrej$ variations fail this test in only 0.8\% of the intervals, while the $ibg$ rejection rate is 5.4\%, and this group includes all of the $hrej$ failures. Furthermore, all but 50 of the $ibg$ failures were already rejected for library use by other criteria listed in Table~\ref{tab:select}. These remaining 50 cases are all among the 183 GTIs in the brightest three levels of the night library, i.e., in cells 51--75. Rather than exclude these cases, we concluded that variability in $ibg$ is another characteristic of the high background conditions that do not fall in line with the 3C50 model. Variations in $nz$ are an entirely different matter. The count rates are high, and systematic differences in the response of individual FPMs to the optical light leak may occur during \texttt{ISS} daytime. The effects of the light leak on the X-ray spectrum are not the same, at a given FPM-integrated count rate in $nz$, if the distribution is skewed toward one FPM rather than being more evenly distributed. The most striking example is FPM \#34, which was eliminated from this study because of its frequent extreme response to \texttt{ISS} daytime. In practice, it was found that the disparity in FPM noise rates was a more important issue than the changes in $nz$ within a given GTI. Different rejection criteria were investigated, using 3C50 model residuals in the S1 and A bands as the metrics for quality assessment. We adopted the criterion that a GTI would be excluded from consideration (during \texttt{ISS} daytime) if the FPM with the highest noise rate ($FPM_i$) yields a GTI-average rate $nz_i > 100$ c/s, while $FPM_i$ also contributes more than 15\% of the total noise counts. This step is included in the data selection outline given in Table~\ref{tab:select}.
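The FPM-disparity criterion can be sketched as a predicate over the per-FPM $nz$ rates of one GTI; the function name is illustrative.

```python
# Sketch of the daytime FPM-disparity screen: a GTI is rejected when
# the noisiest FPM exceeds 100 c/s in nz while also carrying more
# than 15% of the total noise counts.
def reject_fpm_disparity(nz_per_fpm):
    worst = max(nz_per_fpm)
    return worst > 100.0 and worst / sum(nz_per_fpm) > 0.15
```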
GTIs that fail this test were rejected from further consideration, in order to maintain the strategy to use the same 50 FPMs to build the model libraries. For general investigations with \texttt{NICER}, users could alternatively choose to exclude the offending $FPM_i$ and recompute the extractions from the event lists for that GTI with a reduced set of selected FPMs. \subsection{Filtering Steps After Background Subtraction to Improve Data Quality} To deal with systematic errors in the background subtraction process, the strategy was introduced in Section~\ref{sec:eval} to filter out results on the basis of residual count rates in spectral bands that are not needed for science. The energy range for any spectra extracted from cleaned event lists is 0.2--15.0 keV, and in the context of this background investigation, this can be seen as $S0 + S1 + A + B + C + D + gap + hbg$, corresponding to energy bands 0.2--0.3, 0.3--0.4, 0.4--1.0, 1--2, 2--4, 4--12, a gap, and 13--15 keV. Background-subtracted rates are expected to be near zero in $S0_{net}$ and $hbg_{net}$, while the other bands contain the target spectrum to be used for science analyses. Tabulating the count rates in $S0_{net}$ and $hbg_{net}$ provides a basis for quality filtering of the background modeling process. One can view $hbg_{net}$ as a quality diagnostic for the Stage 1 background component, while $S0_{net}$ is a diagnostic for the Stage 2 component. Three levels of filtering are advised for \texttt{NICER} investigations of targets with different levels of X-ray brightness. They are detailed below in the sense of data selections for quality purposes, to be applied to GTIs prior to science analyses of light curves or spectra. The next section offers two examples of this process. The filter levels given below are illustrative, and users should explore the tradeoffs in coverage versus data quality to decide the optimal filter criteria that are consistent with the investigation goals and requirements. 
\begin{itemize} \item Level 1 filter selects GTIs with ($-30.0 < S0_{net} < 30.0$ c/s) \&\& ($-0.5 < hbg_{net} < 0.5$ c/s). This filter should be applied to even the brightest X-ray sources. \item Level 2 filter selects GTIs with ($-10.0 < S0_{net} < 10.0$ c/s) \&\& ($-0.1 < hbg_{net} < 0.1$ c/s). The level 2 filter is appropriate for moderately bright sources, e.g., $20.0 < R_{net} < 300$ c/s. For moderately bright sources with very soft spectra (e.g., with detections limited to energies below 2 keV), filter level 2S can additionally impose: $-0.5 < D_{net} < 0.5$ c/s, where the D-band (4--12 keV) is given up as an additional background band. \item Level 3 filter selects GTIs with ($-2.0 < S0_{net} < 2.0$ c/s) \&\& ($-0.05 < hbg_{net} < 0.05$ c/s). This filter is appropriate for faint sources, e.g., $R_{net} < 20.0$ c/s. For a faint source with a very soft spectrum, filter level 3S can again impose: $-0.5 < D_{net} < 0.5$ c/s. \item It has been shown that the majority of GTIs with the highest background rates and largest model residuals occur in the polar regions of the \texttt{ISS} orbit (Figs.~\ref{fig:resid2} and \ref{fig:residmap}). However, there is no effective way to screen results by orbit position without incurring significant data losses. To illustrate this, we define a group of ``bad'' model residuals as 153 GTIs (of 3477) with $R_{BGnet} < -2.0$ or $R_{BGnet} > 2.0$ c/s. A broad definition of the polar region, with latitude $lat < -42 \deg$ or $lat > 42 \deg$, captures 80\% of the bad GTIs. However, only 9.4\% of the polar GTIs are bad, and GTI exclusion on this basis would be costly. An ad hoc definition of the polar horns can be made from Fig.~\ref{fig:residmap}, using the same polar region with additional constraints for longitude intervals: $200 < lon < 320$ (North) and $60 < lon < 180$ (South). This region captures 63\% of the bad GTIs, but only 19\% of the GTIs in this region are bad. 
\end{itemize} \subsection{Model and Filtering Considerations for the Brightest Source} Considerable attention has been paid to the brightest and the softest X-ray sources observed with \texttt{NICER}, to investigate the effects of source counts on the background parameters and also the background-subtracted count rates pertinent to quality filtering. We first consider the high-energy range of the spectrum, specifically $ibg$, a 3C50 model parameter (15--18 keV, in-focus), and $hbg_{net}$ (13--15 keV, in-focus), which is used as a data quality filter. We note that $ibg$ is a raw measurement, while $hbg_{net}$ is a background-subtracted quantity (subscript ``net''). Both of these energy bands are outside the imaging effective areas of the concentrator, but there is still a finite probability that a high energy photon may pass straight through to the detector, without interacting with the concentrator foils. Thus, it is relevant to investigate whether any extremely bright X-ray sources may elevate the count rates of either parameter. Of particular interest are Scorpius X-1, the brightest X-ray source in the sky (116,000 c/s when normalized to 50 FPMs), the black hole transients, MAXI J1820+070 (65,000 normalized c/s at maximum) and MAXI J1348-630 (47,000 normalized c/s), and the neutron star transients with high-mass companion stars (HMXBs) and relatively hard X-ray spectra, Swift J0243.6+6124 (28,000 c/s at maximum) and A0535+26 (6,000 c/s). The first three cases are the only targets (2017-2020) for which there was a commanded reduction of the number of active FPMs, so that the telemetry rate would remain below the maximum event rate ($\sim$ 30,000 c/s) for the cables connecting the output of the MPUs to the telemetry stream. For all of these sources, we find that contamination of $ibg$ is not an issue. Values of $ibg$ are found to be uncorrelated with changes in source intensity, when comparing these quantities on the GTI timescales. 
Variations in $ibg$ are dominated by seemingly random changes in the background conditions, with no evidence of the outburst profile of the X-ray sources. However, the impact of these bright or hard sources on $hbg_{net}$ is somewhat different. After background subtraction, a residual count rate in $hbg_{net}$ is seen in Sco X-1 (up to 1 c/s), and similar residuals are seen for the pair of bright HMXBs at times of maximum intensity. All of the other bright transients show $hbg_{net} < 0.2$ c/s when the sources are near maximum intensity. Since the prescription for the level 1 quality filter is to reject background-subtracted GTIs with $hbg_{net} < -0.5$ or $hbg_{net} > 0.5$, observations would be falsely rejected, at level 1 filtering, for Sco X-1, Swift J0243, and A0535+26. The solution to this problem is either to refrain from filtering these three sources near times of maximum intensity, or to predict the \texttt{NICER} background with the ``Space Weather'' Model (see Section 9.3), which has no parameters related to measured count rates. Users of the 3C50 background model are advised to compare the light curves for exceptionally bright or hard X-ray sources with the light curve of derived $hbg_{net}$ values in order to determine customized filtering values that are appropriate. Furthermore, the level of filtering should be approached as a function of source count rate, particularly for transients that \texttt{NICER} observes with more than 5 magnitudes of dynamic range between intensity maxima and the final measurements as the source returns to quiescence. The increased susceptibility to X-rays from bright and hard sources for $hbg$ (13--15 keV), relative to $ibg$ (15--18 keV), can be understood as a combination of the decreasing absorption cross section at 13--18 keV in silicon, combined with the decreasing photon spectrum in that same range, for most X-ray subclasses. 
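As a reference point for such customization, the illustrative level 1--3 cuts listed in the previous subsection can be collected into a single helper (a sketch with names of our choosing; the quoted limits are the defaults a user would then relax for exceptional sources):

```python
def passes_level_filter(s0_net, hbg_net, level=1, d_net=None):
    """Apply the illustrative Level 1/2/3 cuts on background-band residuals.

    s0_net, hbg_net : background-subtracted rates (c/s) in the S0 and hbg bands.
    d_net : optional D-band (4--12 keV) residual, enabling the 2S/3S variants
            for very soft sources (cut at +/- 0.5 c/s).
    """
    limits = {1: (30.0, 0.5), 2: (10.0, 0.1), 3: (2.0, 0.05)}  # (S0, hbg) c/s
    s0_lim, hbg_lim = limits[level]
    ok = abs(s0_net) < s0_lim and abs(hbg_net) < hbg_lim
    if d_net is not None:                      # levels 2S / 3S
        ok = ok and abs(d_net) < 0.5
    return ok
```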
An analogous search was made for target contributions to $nz$, the raw count rate at 0.0--0.25 keV, and the filtering parameter, $S0_{net}$, the count rate at 0.2--0.3 keV in the background subtracted spectrum. The investigation included the same bright sources noted above, plus very soft X-ray transients, e.g., MAXI J0637-430 (6,000 c/s at maximum) and the coronal flares in HR1099 (reaching 675 c/s). It was found that the model parameter $nz$ is dominated by variations in the Sun angle and is not significantly affected by exceptionally bright or soft sources. However, the level 1 filtering condition, $S0_{net} < 30$ c/s, is exceeded in the brightest GTIs for four sources: MAXI J1820, Sco X-1, MAXI J0637, and HR1099. Again, users can suspend data filtering near times of maximum intensity for these sources, or alternatively they can use the ``Space Weather'' background model (Section 9.3). The comparison of light curves in $S0_{net}$ versus the broadband source intensity (0.4--12 keV) is a prudent step in the effort to customize filtering and optimize data quality for exceptional sources. \\ \section{Background-Subtracted Light Curves} \label{sec:lc} \subsection{Observations of the Crab Nebula} The Crab Nebula is commonly used as a bright reference source in X-ray astronomy. However, the Crab intensity is not truly constant; long-term variations up to 7\% were detected in multi-satellite observations \citep{wils11}. \texttt{NICER} observations (with target name ``PSR\_B0531+21'') over the interval 2017 August 5 to 2020 April 27 were reprocessed, applying the same calibrations used for the background fields (Section~\ref{sec:data}). The query for GTIs, again using \texttt{nimaketime} while excluding the undershoot/overshoot rate filters, netted 418 GTIs with duration $> 50$ s. \begin{figure}[ht!] \includegraphics[width=4.5in]{crab_lc1_pub.eps} \caption{top panel: \texttt{NICER} light curve of the Crab Nebula at 0.3--12 keV, after background subtraction with the 3C50 model. 
The plot shows 408 GTIs with an average exposure of 816 s. The selected sample has a mean: $10531 \pm 60$ c/s, and most of the 0.6\% variations can be seen as a gentle 900 day wave in the light curve. Observations with other satellites are needed to determine whether these variations in the Crab are real. bottom panel: the Hard Color, defined as the ratio of count rate at 4--12 keV, relative to that at 2--4 keV. Measurements have a mean and rms, $0.251 \pm 0.001$. \label{fig:crab}} \end{figure} Application of the 3C50 model yielded background predictions for 416 GTIs, while the remaining two cases had $ibg_{52}$ values that exceeded the model limits (10 c/s). Count rates were integrated from the background-subtracted spectra in the range of 0.3--12 keV, and Level 1 filtering was applied (see Section~\ref{sec:practical}). This filtering step excluded eight GTIs as quality risks, and the count rates for the remaining 408 are shown in Fig.~\ref{fig:crab}, scaled to 50 FPMs. These data display mean and rms values: $10531 \pm 60$ c/s. In contrast, the eight filter-eliminated GTIs average $10414 \pm 226$ c/s at 0.3--12 keV. Fig.~\ref{fig:crab} also shows the hard color, which is the ratio of count rates at 4--12 and 2--4 keV (or D/C in terms of energy band labels). The hard color measurements have a mean and rms, $0.251 \pm 0.001$. The hard color results indicate the photometric precision that can be achieved with \texttt{NICER} spectra, using modest quality filtering, over the 2017-2020 time interval. The Crab light curve shows that the 0.6\% variations have a systematic temporal profile that can be seen as a gentle $\sim 900$ d wave in intensity. These results are not corrected for deadtime, but the latter depends on the total event rate (all energies and all event flags), and the total event rate for the Crab varies on an annual timescale, due to the correlation between the noise rate and the solar angle. 
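The hard color used here is a simple ratio of per-GTI band rates. A minimal sketch of the computation (function name ours), applied to background-subtracted C-band (2--4 keV) and D-band (4--12 keV) rates:

```python
import numpy as np

def hard_color_stats(rate_c, rate_d):
    """Mean and rms of the hard color, defined as D/C (4--12 keV / 2--4 keV).

    rate_c, rate_d : per-GTI background-subtracted count rates (c/s).
    """
    rate_c = np.asarray(rate_c, dtype=float)
    rate_d = np.asarray(rate_d, dtype=float)
    hc = rate_d / rate_c
    return hc.mean(), hc.std()
```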
Observations of the Crab by other space missions will help to determine whether the changes in the \texttt{NICER} light curve are intrinsic to the Crab or arise from systematic factors that have escaped the current investigation. \subsection{Light Curve of 1E 0102.2$-$7219} The supernova remnant in the Small Magellanic Cloud (SMC), 1E0102.2$-$7219 (hereafter ``E0102''), is a faint calibration source that serves as a flux and spectral line reference for many X-ray instruments \citep{pluc17}. The \texttt{NICER} observations from 2017 July 17 to 2020 June 12 netted 965 GTIs with an average exposure of 438 s. The 3C50 model yielded 941 background subtracted spectra, while filtering steps (see Section~\ref{sec:practical}) left 916 GTIs at level 1 and 804 GTIs at level 2. We note that the fraction removed by the level 2 filter (15\%) is larger than normal, because E0102 is observed in a wider range of conditions, as a calibration source, compared to many \texttt{NICER} science targets. \begin{figure}[ht!] \includegraphics[angle=-90.,width=5in]{e0102_msoft_pub.eps} \caption{\texttt{NICER} observations of the SNR in the SMC, E0102, yielded background subtracted spectra with the 3C50 model for 941 GTIs through 2020 June 12, accumulating 408 ks exposure time. The soft light curve (0.3--2.0 keV) extracted from all of the GTIs is shown in the left panel, while the same data with level 2 filtering (804 GTIs, 352 ks) is shown in the right panel (see Section~\ref{sec:practical}). These measurements have mean and rms: $25.60 \pm 0.62$ c/s and $25.63 \pm 0.42$ c/s, respectively, while the rms statistical uncertainty is 0.32 c/s. \label{fig:e0102}} \end{figure} Since the source is soft, we examine the background-subtracted light curve in the range 0.3--2.0 keV. Fig.~\ref{fig:e0102} shows the results for level 1 filtering (left panel) and level 2 filtering (right panel). The measurements have mean and rms: $25.61 \pm 1.15$ c/s and $25.62 \pm 0.41$ c/s, respectively. 
The average statistical uncertainty at 0.3--2.0 keV is 0.3 c/s. This demonstrates the utility in using $S0_{net}$ and $hbg_{net}$ as metrics for filtering GTIs, sacrificing some amount of temporal coverage to improve quality. \section{Background Modeling at 1 s Timescale} \label{sec:fast} The background parameters, $ibg$ and $hrej$, normally have count rates below 1 Hz, and one must integrate for a few hundred seconds to produce an average value with reasonable statistical precision. However, the occasional surges in the background rates show corresponding variations in $ibg$ and $hrej$, and the relationship between these quantities can help to diagnose whether rapid changes in \texttt{NICER} light curves may originate from either the X-ray target or the in-band background. \begin{figure}[ht!] \includegraphics[width=3.5in]{bkgd_1s_pub2.eps} \caption{Estimate of the in-band background rate using the relationship, $R_{est} = 2.91 \times ibg + 4.67 \times hrej$. Each point corresponds to one GTI, and the equation is determined with a least-squares fit confined to the range $0.5 < R_{BG} < 300$ c/s. The dashed line is a reference for the match between this relationship and the measured rate ($R_{BG}$). The estimator is more precise at low count rate, showing again the systematic error for the 3C50 model when the background rate is high. Nevertheless, the estimator is an effective aid to diagnose whether observed variability at short timescales originates from the target or from activity in the background. \label{fig:bg1s}} \end{figure} In Fig.~\ref{fig:ampl} it was shown that $ibg$ and $hrej$ are both roughly correlated with $R_{BG}$, with somewhat different average slopes. This motivates a strategy to estimate $R_{BG}$ as a linear combination of $ibg$ and $hrej$ with different coefficients. Using a least-squares fit confined to the range $0.5 < R_{BG} < 300$ c/s, the best fit relationship is $R_{est} = 2.91 \times ibg + 4.67 \times hrej$, with results shown in Fig.~\ref{fig:bg1s}. 
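A least-squares fit of this two-coefficient form can be sketched with NumPy (a sketch of the fitting step only; the function name and interface are ours):

```python
import numpy as np

def fit_background_estimator(ibg, hrej, r_bg, lo=0.5, hi=300.0):
    """Least-squares coefficients (a, b) for R_est = a*ibg + b*hrej.

    The fit is confined to GTIs with lo < R_BG < hi c/s, as in the text;
    the paper quotes a = 2.91, b = 4.67 for the full 3C50 data set.
    """
    ibg, hrej, r_bg = (np.asarray(x, dtype=float) for x in (ibg, hrej, r_bg))
    keep = (r_bg > lo) & (r_bg < hi)
    A = np.column_stack([ibg[keep], hrej[keep]])   # design matrix, no intercept
    coeffs, *_ = np.linalg.lstsq(A, r_bg[keep], rcond=None)
    return coeffs                                  # (a, b)
```

Applied to synthetic GTI rates generated exactly from the quoted relationship, the fit recovers the input coefficients.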
There is significant scatter in the ability of the background estimator to predict $R_{BG}$ at high count rates, pointing to the same problem seen with the 3C50 model residuals (Fig.~\ref{fig:resid2}). Nevertheless, the background estimator might show rapid increases and temporal structure that resemble the \texttt{NICER} light curve in short time bins, and this would convincingly implicate the background as the origin of the fast flares. \begin{figure}[ht!] \includegraphics[angle=-90.,width=5in]{bg_1s_swiftj1858.eps} \caption{Application of the 1-s background estimator to two GTIs from observations of SwiftJ1858 when the source, an X-ray binary system containing an accreting neutron star, was in a flaring state. The first GTI shows that the rapid flaring is from the X-ray source (blue), and this behavior has been observed many times with \texttt{NICER} and other instruments. However, the second GTI shows a light curve with unusual rapid flaring in the background (red), demonstrating that this particular sequence of flares is from the background, rather than the X-ray source. \label{fig:bgswiftj1858}} \end{figure} To illustrate the use of the background estimator, we consider the case of the X-ray transient source, Swift J1858.6$-$0814 \citep{krim18}, hereafter ``SwiftJ1858''. \texttt{NICER} observations from 2018 November 1 through 2019 November 17 show dozens of GTIs containing fast variability, when the source intensity ranges from nondetectable levels to multiple sharp maxima in the range 100 to 1600 c/s \citep{ludl18}. It was later shown that much of this ``flaring'' is actually driven by variable absorption along the line of sight to a nearly eclipsing binary system \citep{buis21}. After a data gap imposed by low Sun angle during the interval MJD $58805 - 58903$, \texttt{NICER} found SwiftJ1858 to be in a more conventional state of quasi-steady emission with eclipses and absorption dips. 
Then type I X-ray bursts were detected, identifying the source as an accreting neutron star \citep{buis20}. Despite the propensity of SwiftJ1858 to vary rapidly during the first part of its outburst, it is necessary to distinguish the few cases in which rapid flaring originated in the background, rather than the X-ray source. Fig.~\ref{fig:bgswiftj1858} shows two GTI light curves, in 1 s bins, with contrasting findings regarding the origin of the fast flares. In both cases, the light curve shown in blue is the background-subtracted count rate using the 3C50 model with parameters averaged over the respective GTIs. Values for the background estimator are shown in red. In the first case (the second GTI on MJD 58426), the background intensity remains low and quiet, implying that the flares originate in SwiftJ1858. In the second case (the second GTI on MJD 58429), the background estimator shows that the high-amplitude variations coincide with significant activity in $ibg$ and $hrej$. A precise match is not expected between blue and red curves, since an effort has been made to background-subtract the blue curve on the GTI timescale (note the negative values at times before 800 s), and since the predictability of the background is compromised by systematic error in the model at high background rates. Nevertheless, it is clear that the second set of flares originated in the background and not in SwiftJ1858. In this example, the background flares did occur in the Southern polar horn, during a time interval with a range in orbit latitude and longitude: $-51.7 \deg < lat < -47.5 \deg$ and $79.0 \deg < lon < 107.2 \deg$. \section{Discussion} \label{sec:disc} \subsection{Sensitivity Limits with the 3C50 Background Model} Quantitative analysis of the residuals in a background model, when applied to observations of blank sky regions, provides information on both the instrument sensitivity limits and systematic problems regarding model performance. 
Given the large range in $R_{BG}$ (Fig.~\ref{fig:ampl}) and the limited success of the 3C50 model when $R_{BG} > 2.0$ (Section~\ref{sec:eval}), these topics must be approached with qualifications. After applying the 3C50 model to 3447 GTIs with model parameters within limits, the residuals at 0.4--12 keV are within $\pm 0.5$ c/s in 80\% of the GTIs. Applying level 3 filtering criteria (see Section~\ref{sec:practical}), which excludes 15\% of these GTIs, the rms value of $R_{BGnet}$ is 0.40 c/s. This, in turn, implies a detection limit (3 $\sigma$ in a single GTI) of 1.20 c/s at 0.4--12 keV, which is equivalent to $3.6 \times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$ at 0.4--12 keV, assuming the spectral shape of the Crab Nebula. In the soft X-ray band, the corresponding detection limit ($3 \sigma$, single GTI) is 0.51 c/s at 0.3--2.0 keV. These limits would improve by a factor of 4 if the exploratory GTIs for a given target accumulate 10 ks of exposure. We note that the percentage of GTIs that pass level 3 filtering should exceed 90\%, since the scheduling of science targets would be more favorable than the program to widely sample the background conditions. \subsection{Model Limitations at High Background Rates} Our assessment of the 3C50 model noted evidence of a missing model parameter that is particularly important at high values of the raw background rate at 0.4--12 keV ($R_{BG}$). The correlation between $R_{BG}$ and $ibg$ implies that most of these events are associated with the in-focus component defined by values of \texttt{PI\_ratio}. This is reminiscent of the early {\it Chandra} discovery that protons could scatter off the mirrors and come into focus on the detectors when the satellite passed through the radiation belt \citep{odel10}. 
The hypothesis that a similar condition is affecting \texttt{NICER} would imply that the background model should consider the angle between the camera pointing direction and the local magnetic field lines during the course of the \texttt{NICER} orbit. Such attention was suggested by \cite{fuka09} in the background model for the {\it Suzaku} HXD Instrument. Further motivation to track the camera viewing direction, relative to its position in the Earth orbit, is provided by a study of an archive of particle rate measurements built with a series of NOAA polar-orbiting satellites \citep{fida10}. Since 1998, these satellites have been equipped with the Space Environment Monitor (SEM-2) system, which contains two sets of instruments that monitor the energetic charged particle environment above the Earth. One of these systems, the Medium Energy Proton and Electron Detector, has two sets of detectors mounted perpendicular to each other. The different viewing angles produce differential particle fluxes for electrons in the ranges of both 30--100 and 100--300 keV (see Fig. 1 of \cite{fida10}). The pitch angle, i.e., the angle between the charged particle flow and the local magnetic field, varies systematically with the position in low Earth orbit, suggesting that the particle flux should depend on the longitude, latitude, the camera angle with respect to the local magnetic field, and the local pitch angle. This context will be explored further in the effort to link the direction of particle flow to the in-focus component in the \texttt{NICER} background. The limitations of the 3C50 model were first apparent in Fig.~\ref{fig:cellrbg}, where the raw background rates showed significant variations when binned in model cells for the nighttime library. This, in part, motivated the strategy to re-normalize the selected library spectrum for a given GTI$_i$, by the factor $ibg_i / ibg_{lib}$. 
An assessment of the impact of this step can be made by comparing the residuals with and without the re-normalization step, while making use of the quality filtering criteria described in Section 6.3. Closing the loop on the background pointings with (without) $ibg$ re-normalization leaves residuals that pass level 1 filtering in 93\% (88\%) of all GTIs, while passing level 2 filtering 85\% (85\%) of the time. The conclusion is that $ibg$ re-normalization does provide a modest improvement in the quality of the 3C50 model when the background rate is high. \subsection{Comparisons with Parameters of the Environmental Background Model} The ``Space Weather'' Background Model is an alternative to 3C50 that predicts \texttt{NICER} background spectra on the basis of the local spacecraft environment (Gendreau et al., in prep.), and it is implemented in the FTOOL, ``nicer\_bkg\_estimator'', which is also available via the HEASARC \texttt{NICER} tools website (prior footnote). The principal model parameters are the local cutoff rigidity (``COR''; \cite{smar05}), which is a measure of the shielding provided by the Earth's magnetic field, and the $K_p$ index, which is a global measure of disturbances in the magnetic field. The \texttt{NICER} pipeline furnishes the values of ``$COR\_SAX$'', every second, in the filter files, and the values of the cutoff rigidity are computed with a particular model developed for the $BeppoSAX$ Mission \citep{amat02}. $COR\_SAX$ takes values in the range 0--17, while the $K_p$ index is given for each 3 hr interval, with a range 0--6 quantized in steps of 0.333. Values for the $K_p$ index are obtained from the GFZ site in Potsdam (https://www.gfz-potsdam.de/en/kp-index/). The relationship between parameter pairs for the 3C50 and Space Weather models is examined in Fig.~\ref{fig:env}. 
It is immediately apparent that $hrej$ and $COR\_SAX$ are measuring the same phenomenon, i.e., the amount of magnetic shielding at the \texttt{ISS} position is inversely proportional to the rate of spatially extended events due to particles, as measured with $hrej$. The $COR\_SAX$ parameter would be a desirable substitute for $hrej$, in future versions of the 3C50 model, because it is readily accessible in \texttt{NICER} filter files, and it is free from the statistical accuracy limits that confront $hrej$, due to its low count rate. \begin{figure}[ht!] \includegraphics[width=5in]{3c50_env.eps} \caption{Relationship between 3C50 model parameters and those of the \texttt{NICER} Environmental Model. The plot of $hrej$ vs. the cutoff rigidity ($COR\_SAX$) is the best correlation seen between any two hypothetical background parameters in this study. On the other hand, $ibg$ and $K_p$ pair with their model partners in different ways. \label{fig:env}} \end{figure} \section{Summary} \texttt{NICER} has a comparatively low background rate, typically $10^{-4}$ times the broad-band count rate of the Crab Nebula, but it is highly variable in both amplitude and spectral shape. The silicon drift detectors are high-throughput, but single-channel devices, and so the background spectrum must be predicted using measurements that are not affected by the targeted X-ray source. The 3C50 model predicts the background spectrum using an empirical approach and three model parameters. Data analyses are based on recurrent observations of seven pointing directions that are void of detectable sources. Spectra from a wide range of observing conditions are sorted by values of the model parameters to build a two-stage library of spectra that are the core of the background model. 
It is noted that most particle hits are automatically excluded from either the target spectrum or the background model because the energy of the event trips the overshoot flag, removing such events from spectral consideration. Two model parameters, $ibg$ and $hrej$, track background components associated with particle-induced events. They are distinguished by values of \texttt{PI\_ratio}, which is the ratio of event energies in the fast measuring chain, relative to the slow chain in the instrument electronics. Values of \texttt{PI\_ratio} can discriminate detector ionization locations near the center of the silicon drift detector (i.e., events appearing ``in-focus'') from those near the outer edges of the detector (hence associated with spatially extended events). We define $ibg$ as the rate of in-focus events at 15--18 keV (beyond the effective area of the optics), while $hrej$ is the rate of particle events at 3--18 keV that originate near the outer edges of active silicon, underneath the metal collimator. A grid of values in these two parameters is used to bin and average the GTI-based collection of background spectra to form the stage 1 library of the model. Measured values of these parameters for any given target observation are then used to select a matching library spectrum. That spectrum is re-normalized by $ibg / ibg_{lib}$ to form the stage 1 prediction of the \texttt{NICER} background. The third parameter, $nz$ (count rate at 0--0.25 keV), allows prediction of a low-energy excess that is tied to observations conducted in sunlight, when $nz > 200$ c/s. Twelve intervals in $nz$ are used to sort and average the residual spectra from the stage 1 process, applied to all of the background observations, to form the stage 2 library of the model. For target observations, the measured value, $nz_i$, is used to select a spectrum from the stage 2 library. 
That spectrum is re-normalized by $nz_i / nz_{lib}$, and the result is added to the stage 1 background spectrum to complete the background prediction. The small contribution from the cosmic diffuse X-ray background is carried into the background model by the manner in which the stage 1 library is constructed. This component is always present in the \texttt{NICER} field of view, and its inclusion in the 3C50 model is guaranteed by the imposition of a minimum value for $ibg$, 0.016 c/s. There are no provisions in the model for diffuse Galactic emission, components local to the Earth or the solar system, or contaminating sources in the field of view. Such contributions, when anticipated, must be considered externally. An examination of 3556 GTIs, with an average duration of 570 s, shows that the in-band count rate of good events at 0.4--12 keV, scaled to 50 selected detectors, has a median value 0.87 c/s. However, the distribution is quite broad, ranging from 0.33 to 300 c/s, after excluding 1\% outliers on each end. After applying the 3C50 model to 3447 GTIs with model parameters within limits, the residuals at 0.4--12 keV are within $\pm 0.5$ c/s in 80\% of the GTIs. However, residuals persist at 20--30\% of the initial rate for the brightest cases, which tend to occur in the polar horns of the \texttt{ISS} orbit (mixed with many quiet GTIs at the same polar locations). The inaccuracy of the model, when the background rate is high, suggests one or more missing model parameters. Quality filtering criteria are developed to warn users when the predicted background spectrum is not likely to be satisfactory. When such filtering criteria are applied at the level appropriate for faint X-ray sources, the systematic uncertainty in the model, which is an estimate of the detection limit, is 1.20 c/s at 0.4--12 keV (3 $\sigma$, for a single GTI), and 0.51 c/s at 0.3--2.0 keV. 
For a Crab-like spectrum, the detection limit at 0.4--12 keV is equivalent to $3.6 \times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$. The limiting count rate in soft X-rays is equivalent to $4.3 \times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$, assuming a 100 eV blackbody spectrum, with an ISM column density of $5 \times 10^{20}$ cm$^{-2}$. These limits would improve by a factor of 4 if the exploratory GTIs accumulate 10 ks. The GTIs that pass such filtering criteria amount to 85\% of the total, while higher success rates would be expected for general targets scheduled more favorably than the background observations. Under normal conditions, the empirical model's background predictions are limited to timescales of minutes or longer, because of Poisson noise in $ibg$ and $hrej$, which often have count rates $< 1$ c/s. However, the crude background estimator, $R_{est} = 2.91 \times ibg + 4.67 \times hrej$, can be applied on timescales of seconds or less, to help assess whether observations of fast variability originate in either the source or the background. Background flares that are associated with significant variations in the raw, in-band count rate produce momentarily high values of $ibg$ and $hrej$, and the temporal structure in $R_{est}$ will be highly correlated with the in-band variations under scrutiny. \newpage
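As a schematic recap of the two-stage prediction summarized above, the per-GTI assembly can be sketched as follows (a sketch only, not the released \texttt{nibackgen3C50} implementation; the library objects and cell/bin selector functions are assumed placeholders supplied by the caller):

```python
import numpy as np

def predict_background(lib1, lib2, select_cell, select_nz_bin,
                       ibg, hrej, nz, ibg_lib, nz_lib):
    """Schematic two-stage 3C50 background prediction for one GTI.

    Stage 1: select a library spectrum by the (ibg, hrej) cell and
    re-normalize it by ibg / ibg_lib.  Stage 2: select a residual
    spectrum by the nz bin, re-normalize by nz / nz_lib, and add it
    to the stage-1 prediction.
    """
    stage1 = lib1[select_cell(ibg, hrej)] * (ibg / ibg_lib)
    stage2 = lib2[select_nz_bin(nz)] * (nz / nz_lib)
    return stage1 + stage2
```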
\section{Introduction} \label{sect:01:introduction} \noindent Shape theory is a machinery that allows one to focus on the global properties of a space by abstracting from its local behavior. This is done by approximating the space by a system of nicer spaces, and then studying this approximating system instead of the original space. After this idea was successfully applied to commutative spaces, it was first introduced to the noncommutative world by Effros and Kaminker, \cite{EffKam1986}. Soon after, noncommutative shape theory was developed to its modern form by Blackadar, \cite{Bla1985}. In classical shape theory one approximates a space by absolute neighborhood retracts (ANRs). In the noncommutative world, the role of these nice spaces is played by the semiprojective \Cs{s}. It is however not true that every (compact) ANR $X$ gives a semiprojective \Cs{} $C(X)$. In fact, already the two-disc $D^2$ is a counterexample (see \ref{prop:S03:Disc_not_SP} and \ref{pargr:S03:spaces_containing_disc}). This hints at a possible problem in noncommutative shape theory: While it is easy to show that there are enough ANRs to approximate every compact metric space, the analogue for \Cs{s} is not obvious at all. In fact it is still an open problem whether every separable \Cs{} can be written as an inductive limit of semiprojective \Cs{s}. Some progress on this problem was recently made by Loring and Shulman, \cite{LorShu2010}. Hence, it is important to know which \Cs{s} are semiprojective. And although semiprojectivity was modeled on ANRs, the first large class of \Cs{s} shown to be semiprojective was that of the highly noncommutative Cuntz-Krieger algebras, see \cite{Bla1985}. Since then, these results have been extended to cover all UCT Kirchberg algebras with finitely generated K-theory and free $K_1$-group, see \cite{Szy2002} and \cite{Spi2009}, and it is conjectured that in fact all Kirchberg algebras with finitely generated K-theory are semiprojective. 
Yet, the following natural question remained unanswered: \begin{quest} \label{quest:S01:Main_quest} Which commutative \Cs{s} are semiprojective? \end{quest} An important partial answer was obtained by Loring, \cite[Proposition 16.2.1, p.125]{Lor1997}, who showed that all one-dimensional CW-complexes give rise to semiprojective \Cs{s}. In \cite{ELP1998} this was extended to the class of one-dimensional NCCW-complexes. In another direction, Chigogidze and Dranishnikov recently gave a characterization of the commutative \Cs{s} that are projective: They show in \cite[Theorem 4.3]{ChiDra2010} that $C(X)$ is projective in $\mathcal{S}_1$ (the category of unital, separable \Cs{s} with unital $\ast$-homomorphisms) if and only if $X$ is an AR and $\dim(X)\leq 1$. Inspired by their results we obtain the following answer to question \ref{quest:S01:Main_quest}: \begin{thm} \label{MainTheorem} Let $X$ be a compact, metric space. Then the following are equivalent: \begin{enumerate}[label=(\Roman*)] \item $C(X)$ is semiprojective. \item $X$ is an ANR and $\dim(X)\leq 1$. \end{enumerate} \end{thm} This confirms a conjecture of Blackadar, \cite[II.8.3.8, p.163]{Bla2006}. We proceed as follows: \tableofcontents \noindent In section $2$ (Preliminaries), we recall the basic concepts of commutative and noncommutative shape theory, in particular the notion of an ANR and of semiprojectivity. \\ \noindent In section $3$ (Necessity), we show the implication ``(I) $\Rightarrow$ (II)'' of our main result \ref{MainTheorem}. The idea is to use the topological properties of higher-dimensional spaces to show that if $C(X)$ were semiprojective and $X$ an ANR of dimension at least $2$, then we could solve a lifting problem known to be unsolvable. \\ \noindent In section $4$ we study the structure of compact, one-dimensional ANRs. We characterize when a one-dimensional Peano continuum $X$ is an ANR, see \ref{prop:S04:TFAE_1D_Peano_ANR}. 
As it turns out, one criterion is that $X$ contains a finite subgraph that contains all homotopy information, a (homotopy) core, see \ref{prop:S04:Existence_of_core}. This is also equivalent to $K^\ast(X)$ being finitely generated, which is a recurring property in connection with semiprojectivity. The main result of this section is theorem \ref{prop:S04:Structure_1D_Peano} which describes the internal structure of a compact, one-dimensional ANR $X$. Starting with the homotopy core $Y_1\subset X$, there is an increasing sequence of subgraphs $Y_1\subset Y_2\subset\ldots\subset X$ that exhaust $X$, and such that $Y_{k+1}$ is obtained from $Y_k$ by simply attaching a line segment at one end to a point in $Y_k$. This generalizes the classical structure theorem for dendrites (which are precisely the \emph{contractible}, compact, one-dimensional ANRs). \\ \noindent In section $5$ (Sufficiency) we show the implication ``(II) $\Rightarrow$ (I)'' of \ref{MainTheorem}. Using the structure theorem \ref{prop:S04:Structure_1D_Peano} for $X$, we obtain subgraphs $Y_k\subset X$ such that $X\cong\varprojlim Y_k$. The first graph $Y_1$ contains all K-theory information, and the subsequent graphs are obtained by attaching line segments. Dualizing, we can write $C(X)$ as an inductive limit, $C(X) = \varinjlim C(Y_k)$. Since the maps $Y_{k+1}\to Y_k$ are retractions, the dual bonding morphisms $C(Y_k)\to C(Y_{k+1})$ are accessible for lifting problems. The main result of this section is \ref{InductiveLimitProjective}. Given a lifting problem $C(X)\to C/\overline{\bigcup_k J_k}$ and an initial lift from $C(Y_1)$ to some $C/J_l$, there exists a lifting from any $C(Y_k)$ to the same height, and finally a lift from the inductive limit $C(X)$ to $C/J_l$. This idea is central in \cite{ChiDra2010}, but it has also been used before, for instance by Blackadar in order to prove that the Cuntz algebra $\mathcal{O}_\infty$ is semiprojective. 
We note that some form of inductive limit argument seems necessary for lifting an infinite number of generators. We also wish to point out that Chigogidze and Dranishnikov only needed semiprojectivity, and not projectivity, in many steps of their proofs. The proof ``(II) $\Rightarrow$ (I)'' follows from \ref{InductiveLimitProjective} if we can find an initial lift from $C(Y_1)$. For this we use Loring's deep result, \cite{Lor1997}, which says that $C(Y)$ is semiprojective for every finite graph $Y$. We also need Loring's result to write the algebras $C(Y_k)$ as universal \Cs{s}. To summarize, the proof proceeds in two steps. First, we construct an initial lift $C(Y_1)\to C/J_l$ from the homotopy core. This will lift all K-theory information of $X$. Second, we successively extend this lift over the algebras $C(Y_k)$ using \ref{InductiveLimitProjective}; since attaching a line segment adds no K-theory information, once the K-theory information is lifted we do not need to ``sink to a lower level''. \\ \noindent In section $6$ we give applications of our main result \ref{MainTheorem}. First, we analyze the structure of non-compact, one-dimensional ANRs. We give a characterization of when the one-point compactification of such spaces is again an ANR, see \ref{prop:S06:cpctfn_ANR}. This is motivated by the fact that a \Cs{} $A$ is semiprojective if and only if its minimal unitalization $\widetilde{A}$ is semiprojective. For commutative \Cs{s}, the minimal unitalization corresponds to taking the one-point compactification of the underlying commutative space. Using the characterization of semiprojectivity for unital, separable, commutative \Cs{s} given in \ref{MainTheorem}, we derive a characterization of semiprojectivity for non-unital, separable, commutative \Cs{s}, see \ref{prop:S06:SP_for_non-compact}. In \ref{prop:S06:cpctfn_ANR} we also note that the one-point compactification of the considered spaces is an ANR if and only if every finite-point compactification is an ANR. 
This allows us to study short exact sequences \begin{center} \makebox{ \xymatrix{ 0\ar[r] & I \ar[r] & A \ar[r] & F \ar[r] & 0 \\ }} \end{center} with $F$ finite-dimensional. It was conjectured by Loring and also by Blackadar, \cite[Conjecture 4.5]{Bla2004}, that in this situation $A$ is semiprojective if and only if $I$ is. One implication was recently proven by Dominic Enders, \cite{EndPrivat}, who showed that semiprojectivity passes to ideals when the quotient is finite-dimensional. The converse implication is in general not even known for $F=\CC$. However, in \ref{prop:S06:ideal_with_fd_quotient} we verify this conjecture under the additional assumption that $A$ is commutative. Next, we study the semiprojectivity of \Cs{s} of the form $C_0(X,M_k)$. We derive in \ref{matrixMain} that for a separable, commutative \Cs{} $A$, the algebra $A\otimes M_k$ is semiprojective if and only if $A$ is semiprojective. Again, this question can be asked in general. It is known that semiprojectivity of $A$ implies that $A\otimes M_k$ is semiprojective as well, see \cite[Corollary 2.28]{Bla1985} and \cite[Theorem 14.2.2, p.110]{Lor1997}. For the converse, it is known that semiprojectivity passes to full corners, \cite[Proposition 2.27]{Bla1985}. It was conjectured by Blackadar, \cite[Conjecture 4.4]{Bla2004}, that the same holds for full hereditary sub-\Cs{s}. Note that $A$ is always a full hereditary sub-\Cs{} of $A\otimes M_k$. Thus, we verify the conjecture for commutative \Cs{s}. As a final application, we consider the following variant of question \ref{quest:S01:Main_quest}: When is a commutative \Cs{} weakly (semi-)projective? In order to study this problem, we analyze the structure of one-dimensional approximative absolute (neighborhood) retracts, abbreviated AA(N)R. In \ref{prop:S07:TFAE_1D_AANR} we show that such spaces are approximated from within by finite trees (finite graphs). 
Since finite trees (finite graphs) give (semi-)projective \Cs{s}, we derive in \ref{prop:S07:1D_AANR_implies_wSP} that $C(X)$ is weakly (semi-)projective in $\mathcal{S}_1$ if $X$ is a one-dimensional AA(N)R. Summarizing our results, \ref{MainTheorem} and \ref{prop:S07:1D_AANR_implies_wSP}, and the result of Chigogidze and Dranishnikov, \cite[Theorem 4.3]{ChiDra2010}, we get: \begin{thm} \label{summaryThm} Let $X$ be a compact, metric space with $\dim(X)\leq 1$. Then: \begin{tabular}{lll} (1)\quad & $C(X)$ is projective in $\mathcal{S}_1$ & $\Leftrightarrow$ $X$ is an AR \\ (2) & $C(X)$ is weakly projective in $\mathcal{S}_1$ & $\Leftrightarrow$ $X$ is an AAR \\ (3) & $C(X)$ is semiprojective in $\mathcal{S}_1$ & $\Leftrightarrow$ $X$ is an ANR \\ (4) & $C(X)$ is weakly semiprojective in $\mathcal{S}_1$ & $\Leftrightarrow$ $X$ is an AANR \\ \end{tabular} \noindent Moreover, $C(X)$ projective or semiprojective already implies $\dim(X)\leq 1$. \\ \end{thm} \section{Preliminaries} \label{sect:02:preliminaries} \noindent By $A,B,C,D$ we mostly denote \Cs{s}, usually assumed to be separable here, and by a morphism between \Cs{s} we understand a $\ast$-homomorphism. By an ideal in a \Cs{} we mean a closed, two-sided ideal. If $A$ is a \Cs{}, then we denote by $\widetilde{A}$ its minimal unitalization, and by $A^+$ the forced unitalization. Thus, if $A$ is unital, then $\widetilde{A}=A$ and $A^+\cong A\oplus\CC$. We use the symbol $\simeq$ to denote homotopy equivalence. By a map between two topological spaces we mean a continuous map. Given $\varepsilon>0$ and subsets $F,G\subset X$ of a metric space, we say $F$ is \termDef{$\varepsilon$-contained} in $G$, denoted by $F\subset_\varepsilon G$, if for every $x\in F$ there exists some $y\in G$ such that $d_X(x,y)<\varepsilon$. Given two maps $\varphi,\psi\colon X\to Y$ between metric spaces and a subset $F\subset X$ we say ``$\varphi$ and $\psi$ agree on $F$'', denoted $\varphi=^F\psi$, if $\varphi(x)=\psi(x)$ for all $x\in F$. 
If moreover $\varepsilon>0$ is given, then we say ``$\varphi$ and $\psi$ agree up to $\varepsilon$'', denoted $\varphi=_\varepsilon\psi$, if $d_Y(\varphi(x),\psi(x))<\varepsilon$ for all $x\in X$ (for normed spaces, this is usually denoted by $\|\varphi-\psi\|_\infty<\varepsilon$). We say ``$\varphi$ and $\psi$ agree on $F$ up to $\varepsilon$'', denoted $\varphi=_\varepsilon^F\psi$, if $d_Y(\varphi(x),\psi(x))<\varepsilon$ for all $x\in F$. \begin{pargr}[(Approximative) absolute (neighborhood) retracts] \label{pargr:S02:AANR} A metric space $X$ is an \termDef{(approximative) absolute retract}, abbreviated by \termDef{(A)AR}, if for all pairs\footnote{A pair $(Y,Z)$ of spaces is simply a space $Y$ with a \emph{closed} subspace $Z\subset Y$.} $(Y,Z)$ of metric spaces and maps $f\colon Z\to X$ (and $\varepsilon>0$) there exists a map $g\colon Y\to X$ such that $f=g\circ\iota$ (resp. $f=_\varepsilon g\circ\iota$), where $\iota\colon Z\hookrightarrow Y$ is the inclusion map. This means that the following diagram can be completed to commute (up to $\varepsilon$): \begin{center} \makebox{ \xymatrix@M+=5pt{ & Y \ar@{.>}[dl]_{g} \\ X & Z \ar[l]^{f} \ar@{^{(}->}[u]_{\iota} \\ }} \end{center} A metric space $X$ is an \termDef{(approximative) absolute neighborhood retract}, abbreviated by \termDef{(A)ANR}, if for all pairs $(Y,Z)$ of metric spaces and maps $f\colon Z\to X$ (and $\varepsilon>0$) there exists a neighborhood $V$ of $Z$ and a map $g\colon V\to X$ such that $f=g\circ\iota$ (resp. $f=_\varepsilon g\circ\iota$) where $\iota\colon Z\hookrightarrow V$ is the inclusion map. This means that the following diagram can be completed to commute (up to $\varepsilon$): \begin{center} \makebox{ \xymatrix@M+=5pt{ & Y \\ & V \ar@{^{(}->}[u] \ar@{..>}[dl]_{g} \\ X & Z \ar[l]^{f} \ar@{^{(}->}[u]_{\iota} }} \end{center} For details about ARs and ANRs see \cite{Bor1967}. 
We will only consider compact AARs and AANRs in this paper, and the reader is referred to \cite{Cla1971} for more details. \\ \end{pargr} \noindent We consider shape theory for separable \Cs{s} as developed by Blackadar, \cite{Bla1985}. Let us briefly recall the main notions and results: \\ \begin{pargr}[(Weakly) (semi-)projective \Cs{s}] \label{pargr:S02:wSP} Let $\mathcal{D}$ be a subcategory of the category of \Cs{s}, closed under quotients\footnote{This means the following: Assume $B$ is a quotient \Cs{} of $A$ with quotient morphism $\pi\colon A\to B$. If $A\in\mathcal{D}$, then $B\in\mathcal{D}$ and $\pi$ is a $\mathcal{D}$-morphism.}. A $\mathcal{D}$-morphism $\varphi\colon A\to B$ is called \termDef{(weakly) projective in $\mathcal{D}$} if for any \Cs{} $C$ in $\mathcal{D}$ and $\mathcal{D}$-morphism $\sigma\colon B\to C/J$ to some quotient (and finite subset $F\subset A$, $\varepsilon>0$), there exists a $\mathcal{D}$-morphism $\bar{\sigma}\colon A\to C$ such that $\pi\circ\bar{\sigma}=\sigma\circ\varphi$ (resp. $\pi\circ\bar{\sigma}=_\varepsilon^F\sigma\circ\varphi$), where $\pi\colon C\to C/J$ is the quotient morphism. This means that the following diagram can be completed to commute (up to $\varepsilon$ on $F$): \begin{center} \makebox{ \xymatrix{ & & C \ar[d]^{\pi} \\ A \ar[r]_{\varphi} \ar@{..>}[urr]^{\bar{\sigma}} & B \ar[r]_{\sigma}& C/J \\ }} \end{center} A \Cs{} $A$ is called \termDef{(weakly) projective} in $\mathcal{D}$ if the identity morphism $\id_A\colon A\to A$ is (weakly) projective. 
A $\mathcal{D}$-morphism $\varphi\colon A\to B$ is called \termDef{(weakly) semiprojective in $\mathcal{D}$} if for any \Cs{} $C$ in $\mathcal{D}$ and increasing sequence of ideals $J_1\lhd J_2\lhd\ldots\lhd C$ and $\mathcal{D}$-morphism $\sigma\colon B\to C/\overline{\bigcup_k J_k}$ (and finite subset $F\subset A$, $\varepsilon>0$), there exists an index $k$ and a $\mathcal{D}$-morphism $\bar{\sigma}\colon A\to C/J_k$ such that $\pi_k\circ\bar{\sigma}=\sigma\circ\varphi$ (resp. $\pi_k\circ\bar{\sigma}=_\varepsilon^F\sigma\circ\varphi$), where $\pi_k\colon C/J_k\to C/\overline{\bigcup_k J_k}$ is the quotient morphism. This means that the following diagram can be completed to commute (up to $\varepsilon$ on $F$): \begin{center} \makebox{ \xymatrix{ & & C \ar[d] \\ & & C/J_k \ar[d]^{\pi_k} \\ A \ar[r]_{\varphi} \ar@{..>}[urr]^{\bar{\sigma}} & B \ar[r]_{\sigma} & C/\overline{\bigcup_k J_k} \\ }} \end{center} A \Cs{} $A$ is called \termDef{(weakly) semiprojective} in $\mathcal{D}$ if the identity morphism $\id_A\colon A\to A$ is (weakly) semiprojective. It is well known that if $A$ is separable then $A$ is semiprojective in the category of all \Cs{s} if and only if it is in the category of separable \Cs{s}. If $\mathcal{D}$ is the category $\mathcal{S}$ of all separable \Cs{s} (with all $\ast$-homomorphisms), then one drops the reference to $\mathcal{D}$ and simply speaks of (weakly) (semi-)projective \Cs{s}. Besides $\mathcal{S}$ one often considers the category $\mathcal{S}_1$ of all \emph{unital} separable \Cs{s} with \emph{unital} $\ast$-homomorphisms as morphisms. A projective \Cs{} cannot have a unit. For a separable \Cs{} $A$ we get from \cite[Proposition 2.5]{Bla1985}, see also \cite[Theorem 10.1.9, p.75]{Lor1997}, that the following are equivalent: \begin{enumerate}[label=(\arabic*)] \item $A$ is projective \item $\widetilde{A}$ is projective in $\mathcal{S}_1$ \end{enumerate} The situation for semiprojectivity is even easier. 
A unital \Cs{} is semiprojective if and only if it is semiprojective in $\mathcal{S}_1$. Further, for a separable \Cs{} $A$ we get from \cite[Corollary 2.16]{Bla1985}, see also \cite[Theorem 14.1.7, p.108]{Lor1997}, that the following are equivalent: \begin{enumerate}[label=(\arabic*)] \item $A$ is semiprojective \item $\widetilde{A}$ is semiprojective \item $\widetilde{A}$ is semiprojective in $\mathcal{S}_1$ \\ \end{enumerate} \end{pargr} \begin{pargr}[Connection between (approximative) absolute (neighborhood) retracts and (weakly) (semi-)projective \Cs{s}] \label{pargr:S02:Connection} Let $\mathcal{SC}$ be the full subcategory of $\mathcal{S}$ consisting of (separable) commutative \Cs{s}, and similarly let $\mathcal{SC}_1$ be the full subcategory of $\mathcal{S}_1$ consisting of (separable, unital) commutative \Cs{s}. In general, for a \Cs{} it is easier to be (weakly) (semi-)projective in a smaller full subcategory, since there are fewer quotients to map into. In particular, if a commutative \Cs{} is (weakly) (semi-)projective, then it will be (weakly) (semi-)projective with respect to $\mathcal{SC}$. If one compares the definitions carefully, then one gets the following equivalences for a \emph{compact}, metric space $X$ (see \cite[Proposition 2.11]{Bla1985}): \begin{tabular}{lll} (1)\quad & $C(X)$ is projective in $\mathcal{SC}_1$ & $\Leftrightarrow$ $X$ is an AR \\ (2) & $C(X)$ is weakly projective in $\mathcal{SC}_1$ & $\Leftrightarrow$ $X$ is an AAR \\ (3) & $C(X)$ is semiprojective in $\mathcal{SC}_1$ & $\Leftrightarrow$ $X$ is an ANR \\ (4) & $C(X)$ is weakly semiprojective in $\mathcal{SC}_1$ & $\Leftrightarrow$ $X$ is an AANR \\ \end{tabular} Thus, the notion of (weak) (semi-)projectivity is a translation of the concept of an (approximative) absolute (neighborhood) retract to the world of noncommutative spaces. 
Let us clearly state a point which is used in the proof of the main theorem: If $C(X)$ is (weakly) (semi-)projective in $\mathcal{SC}_1$, then $X$ is an (approximative) absolute (neighborhood) retract. As we will see, the converse is not true in general. We need an assumption on the dimension of $X$. \\ \end{pargr} \begin{pargr}[Covering dimension] \label{pargr:S02:Dimension} By $\dim(X)$ we denote the covering dimension of a space $X$. By definition, $\dim(X)\leq n$ if every finite open cover $\mathcal{U}$ of $X$ can be refined by a finite open cover $\mathcal{V}$ of $X$ such that $\ord(\mathcal{V})\leq n+1$. Here $\ord(\mathcal{V})$ is the largest number $k$ such that there exists some point $x\in X$ that is contained in $k$ different elements of $\mathcal{V}$. To an open cover $\mathcal{V}$ one can naturally assign an abstract simplicial complex\footnote{An abstract simplicial complex over a set $S$ is a family $C$ of finite subsets of $S$ such that $X\subset Y\in C$ implies $X\in C$. An element $X\in C$ with $n+1$ elements is called an $n$-simplex (of the abstract simplicial complex).} $\mathcal{N}(\mathcal{V})$, called the nerve of the covering. It is defined as the family of finite subsets $\mathcal{V}'\subset\mathcal{V}$ with non-empty intersection, in symbols: \begin{align*} \mathcal{N}(\mathcal{V}):=\{ \mathcal{V}'\subset\mathcal{V} \text{ finite }\ :\ \bigcap\mathcal{V}'\neq\emptyset \}. \end{align*} An $n$-simplex of $\mathcal{N}(\mathcal{V})$ corresponds to a choice of $n+1$ different elements of the cover that have non-empty intersection. Given an abstract simplicial complex $C$, one can naturally associate to it a space $|C|$, called the geometric realization of $C$. The space $|C|$ is a polyhedron, in particular it is a CW-complex. 
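For a finite cover given concretely, the nerve and its dimension can be computed directly from the definition above. The following sketch is our own illustration (the toy cover of a six-point discretized circle is hypothetical, not taken from the text):

```python
from itertools import combinations

def nerve(cover):
    """Nerve N(V) of a finite cover: all nonempty subfamilies of the
    cover with nonempty common intersection, encoded by their index
    sets.  Each cover element is modeled as a frozenset of points."""
    simplices = []
    for r in range(1, len(cover) + 1):
        for sub in combinations(range(len(cover)), r):
            if frozenset.intersection(*(cover[i] for i in sub)):
                simplices.append(frozenset(sub))
    return simplices

def dimension(simplices):
    """Largest k such that the complex contains a k-simplex
    (a simplex on k+1 vertices)."""
    return max(len(s) for s in simplices) - 1

# Toy cover of the 6-point "circle" {0,...,5} by three overlapping
# arcs: consecutive arcs meet, but no point lies in all three.
cover = [frozenset({0, 1, 2}), frozenset({2, 3, 4}), frozenset({4, 5, 0})]
N = nerve(cover)
print(dimension(N))  # 1, matching ord(V) = 2, i.e. dimension <= 1
```

Here every point of the toy circle lies in at most two arcs, so $\ord(\mathcal{V})=2$ and the nerve is a one-dimensional complex (in fact a triangle boundary), in line with the equivalence discussed next.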
Note that $\ord(\mathcal{V})\leq n+1$ if and only if the nerve $\mathcal{N}(\mathcal{V})$ of the covering $\mathcal{V}$ is an abstract simplicial complex of dimension\footnote{The dimension of an abstract simplicial complex is the largest integer $k$ such that it contains a $k$-simplex.} $\leq n$, or equivalently the geometric realization $|\mathcal{N}(\mathcal{V})|$ is a polyhedron of covering dimension\footnote{The covering dimension of polyhedra, or more generally CW-complexes, is easily understood. These spaces are successively built by attaching cells of higher and higher dimension. The (covering) dimension of a CW-complex is simply the highest dimension of a cell that was attached when building the complex.} $\leq n$. Let $\mathcal{U}$ be a finite open covering of a space $X$, and $\{e_U\ :\ U\in\mathcal{U}\}$ a partition of unity that is subordinate to $\mathcal{U}$. This naturally defines a map $\alpha\colon X\to|\mathcal{N}(\mathcal{U})|$ sending a point $x\in X$ to the (unique) point $\alpha(x)\in|\mathcal{N}(\mathcal{U})|$ that has ``coordinates'' $e_U(x)$. By $\locdim(X)$ we denote the local covering dimension of a space $X$. By definition $\locdim(X)\leq n$ if every point $x\in X$ has a closed neighborhood $D$ such that $\dim(D)\leq n$. If $X$ is paracompact (e.g. if it is compact, or locally compact and $\sigma$-compact), then $\locdim(X)=\dim(X)$. See \cite{Nag1970} for more details on nerves, polyhedra and the (local) covering dimension of a space. \\ \end{pargr} \noindent A particularly nice class of one-dimensional\footnote{We say a space is one-dimensional if $\dim(X)\leq 1$. So, although it sounds weird, a one-dimensional space can also be zero-dimensional. It would probably be more precise to speak of an ``at most one-dimensional'' space, however the usage of the term ``one-dimensional space'' is well established.} spaces is that of the so-called dendrites. Before we look at them, let us recall some notions from continuum theory. 
A good reference is Nadler's book, \cite{Nad1992}. A \termDef{continuum} is a compact, connected, metric space, and a \termDef{generalized continuum} is a locally compact, connected, metric space. A \termDef{Peano continuum} is a locally connected continuum, and a \termDef{generalized Peano continuum} is a locally connected generalized continuum. By a \termDef{finite graph} we mean a graph with finitely many vertices and edges, or equivalently a compact, one-dimensional CW-complex. By a \termDef{finite tree} we mean a contractible finite graph. \\ \begin{pargr}[Dendrites] \label{pargr:S02:Dendrites} A \termDef{dendrite} is a Peano continuum that does not contain a simple closed curve (i.e., there is no embedding of the circle $S^1$ into it). There are many other characterizations of a dendrite. We collect a few and we will use them without further mention. Let $X$ be a Peano continuum. Then $X$ is a dendrite if and only if one (or equivalently all) of the following conditions holds: \begin{enumerate}[label=(\arabic*)] \item $X$ is one-dimensional and contractible \item $X$ is tree-like\footnote{A (compact, metric) space $X$ is tree-like, if for every $\e>0$ there exists a finite tree $T$ and a map $f\colon X\to T$ onto $T$ such that $\diam(f^{-1}(y))<\e$ for all $y\in T$.}. \item $X$ is dendritic\footnote{A space $X$ is called dendritic, if any two points of $X$ can be separated by the omission of a third point.} \item $X$ is hereditarily unicoherent\footnote{A continuum $X$ is called unicoherent if for each two subcontinua $Y_1,Y_2\subset X$ with $X=Y_1\cup Y_2$ the intersection $Y_1\cap Y_2$ is a continuum (i.e. connected). A continuum is called hereditarily unicoherent if all its subcontinua are unicoherent.}. \end{enumerate} For more information about dendrites see \cite[Chapter 10]{Nad1992}, \cite{Lel1976}, \cite{CasCha1960}. 
\\ \end{pargr} \section{One implication of the main theorem: Necessity} \label{sect:03:necessity} \begin{prop} \label{prop:S03:SP_gives_1D} Let $C(X)$ be a unital, separable \Cs{} that is semiprojective. Then $X$ is a compact ANR with $\dim(X)\leq 1$. \end{prop} \begin{proof} Assume such a $C(X)$ is given. Then $X$ is a compact, metric space. As noted in \ref{pargr:S02:Connection}, semiprojectivity (in $\mathcal{S}_1$) implies semiprojectivity in the full subcategory $\mathcal{SC}_1$ and this means exactly that $X$ is a (compact) ANR. We are left with showing $\dim(X)\leq 1$. Assume otherwise, i.e., assume $\dim(X)\geq 2$. Since $X$ is paracompact, we have $\locdim(X)=\dim(X)\geq 2$. This means there exists $x_0\in X$ such that $\dim(D)\geq 2$ for each closed neighborhood $D$ of $x_0$. For each $k$ consider $D_k:=\{y\in X\ :\ d(y,x_0)\leq 1/k\}$. This defines a decreasing sequence of closed neighborhoods around $x_0$ with $\dim(D_k)\geq 2$. It was noted in \cite[Proposition 3.1]{ChiDra2010} that a Peano space of dimension at least $2$ admits a topological embedding\footnote{If $X,Y$ are spaces, then an injective map $i\colon X\to Y$ is called a topological embedding if the original topology of $X$ is the same as the initial topology induced by the map $i$. We usually consider a topologically embedded space as a subset with the subset topology.} of $S^1$. Indeed, a Peano space that contains no simple closed curve (i.e. in which $S^1$ cannot be embedded) is a dendrite, and therefore at most one-dimensional. It follows that there are embeddings $\varphi_k\colon S^1\hookrightarrow D_k\subset X$. Putting these together we get a map (not necessarily an embedding) $\varphi\colon Y\to X$ where $Y$ is the space of ``smaller and smaller circles'': \begin{align*} Y=\{(0,0)\}\cup\bigcup_{k\geq 1}S((1/2^k,0),1/(4\cdot 2^k))\subset\RR^2, \end{align*} where $S(x,r)$ is the circle of radius $r$ around the point $x$. We define $\varphi$ as $\varphi_k$ on the circle $S((1/2^k,0),1/(4\cdot 2^k))$. 
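As a sanity check on the construction (our own illustration, not part of the proof), one can verify numerically that the circles making up $Y$ are pairwise disjoint and accumulate only at the origin: all centers lie on the $x$-axis, and the distance $|1/2^j-1/2^k|$ between two centers always exceeds the sum $1/(4\cdot 2^j)+1/(4\cdot 2^k)$ of the radii.

```python
# Illustrative check that the circles S((1/2^k, 0), 1/(4*2^k)) in Y are
# pairwise disjoint.  Two circles in the plane are disjoint (one outside
# the other) when the distance between their centers exceeds the sum of
# their radii; here all centers lie on the x-axis.

def center(k):
    return 1.0 / 2**k

def radius(k):
    return 1.0 / (4 * 2**k)

for j in range(1, 20):
    for k in range(j + 1, 21):
        dist = center(j) - center(k)   # positive since j < k
        assert dist > radius(j) + radius(k), (j, k)

print("all circles pairwise disjoint")
```

Both center and radius tend to $0$, so the circles shrink toward the base point $(0,0)$; this is what makes $Y$ a compact subset of $\RR^2$.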
The map $\varphi\colon Y\to X$ induces a morphism $\varphi^\ast\colon C(X)\to C(Y)$. Next, we construct a \Cs{} $B$ with a nested sequence of ideals $J_k\lhd B$, such that $C(Y)=B/\overline{\bigcup_k J_k}$ and $\varphi^\ast\colon C(X)\to C(Y)$ cannot be lifted to some $B/J_k$. Let $\mathcal{T}$ be the Toeplitz algebra and let $\mathcal{T}_1,\mathcal{T}_2,\ldots$ be a sequence of copies of the Toeplitz algebra, and set: \begin{align*} B &:=(\bigoplus_{k\in\NN} \mathcal{T}_k)^+ \\ &=\{(b_1,b_2,\ldots)\in \prod_{k\geq 1}\mathcal{T} \text{ such that } (b_k)_k \text{ converges to a scalar multiple of } 1_\mathcal{T}\}. \end{align*} The algebras $\mathcal{T}_k$ come with ideals $\KK_k\lhd\mathcal{T}_k$ (each $\KK_k$ a copy of the algebra of compact operators $\KK$). Define ideals $J_k\lhd B$ as follows: \begin{align*} J_k &:=\KK_1\oplus\ldots\oplus\KK_k\oplus 0\oplus 0\oplus\ldots \\ &=\{(b_1,\ldots,b_k,0,0,\ldots)\in B\ :\ b_i\in\KK_i\lhd\mathcal{T}_i\}. \end{align*} Note $B/J_k=C(S^1)\oplus\ldots_{(k)}\oplus C(S^1)\oplus (\bigoplus_{l\geq k+1}\mathcal{T}_l)^+$ ($k$ summands of $C(S^1)$). Also $J_k\subset J_{k+1}$ and $J:=\overline{\bigcup_k J_k}=\bigoplus_{k\in\NN}\KK_k$ and $B/J=(\bigoplus_{l\geq 1}C(S^1))^+\cong C(Y)$. The semiprojectivity of $C(X)$ gives a lift of $\varphi^\ast\colon C(X)\to C(Y)=B/J$ to some $B/J_k$. Consider the projection $\rho_{k+1}\colon B/J_k\to\mathcal{T}_{k+1}$ onto the $(k+1)$-th coordinate, and similarly $\varrho_{k+1}\colon B/J\to C(S^1)$. The composition $C(X)\to C(Y)\cong B/J\to C(S^1)$ is $\varphi_{k+1}^\ast$, the morphism induced by the inclusion $\varphi_{k+1}\colon S^1\hookrightarrow X$. Note that $\varphi_{k+1}^\ast$ is surjective since $\varphi_{k+1}$ is an inclusion. 
The situation is shown in the following commutative diagram: \begin{center} \makebox{ \xymatrix{ & & B/J_k \ar[d] \ar[r]^{\rho_{k+1}} & \mathcal{T}_{k+1} \ar[d] \\ C(X) \ar@/_1pc/[rrr]_{\varphi_{k+1}^\ast} \ar[r]^>>{\varphi^\ast} \ar[urr] & C(Y) \ar[r]^{\cong} & B/J \ar[r]^{\varrho_{k+1}} & C(S^1) }} \end{center} The unitary $\id_{S^1}\in C(S^1)$ lifts under $\varphi_{k+1}^\ast$ to a normal element in $C(X)$, but it does not lift to a normal element in $\mathcal{T}_{k+1}$. This is a contradiction, and our assumption $\dim(X)\geq 2$ must be wrong. \qedhere \\ \end{proof} \noindent It is well known that $C(D^2)$, the \Cs{} of continuous functions on the two-dimensional disc $D^2=\{(x,y)\in\RR^2\ :\ x^2+y^2\leq 1\}$, is not weakly semiprojective. For completeness we include the argument which is essentially taken from Loring \cite[17.1, p.131]{Lor1997}, see also \cite{Lor1995}. \\ \begin{prop} \label{prop:S03:Disc_not_SP} $C(D^2)$ is not weakly semiprojective. \end{prop} \begin{proof} The $\ast$-homomorphisms from $C(D^2)$ to a \Cs{} $A$ are in natural one-to-one correspondence with normal contractions in $A$. Thus, statements about (weak) (semi-)projectivity of $C(D^2)$ correspond to statements about the (approximate) liftability of normal elements. For example, that $C(D^2)$ is projective would correspond to the (false) statement that normal elements lift from quotient \Cs{s}. To disprove weak semiprojectivity of $C(D^2)$ one uses a construction of operators that are approximately normal but do not lift in the required way due to an index obstruction. More precisely, define weighted shift operators $t_n$ on the separable Hilbert space $l^2$ (with basis $\xi_1,\xi_2,\ldots$) as follows: $$ t_n(\xi_k)=\begin{cases} ((r+1)/2^{n-1})\xi_{k+1} &\text{if } k=r2^{n+1}+s,\ 0\leq s<2^{n+1},\ k<4^n \\ \xi_{k+1} &\text{if } k\geq 4^n. \\ \end{cases} $$ Each $t_n$ is a finite-rank perturbation of the unilateral shift. 
Therefore the $t_n$ lie in the Toeplitz algebra $\Toep$ and have index $-1$. The construction of $t_n$ is made so that $\|t_n^\ast t_n-t_nt_n^\ast\|=1/2^{n-1}$. Consider the \Cs{} $B=\prod_\NN\Toep / \bigoplus_\NN\Toep$. The sequence $(t_1,t_2,\ldots)$ defines an element in $\prod_\NN\Toep$. Let $x=[(t_1,t_2,\ldots)]\in B$ be the equivalence class in $B$. Then $x$ is a normal element of $B$, and we let $\varphi\colon C(D^2)\to B$ be the corresponding morphism. We have the following lifting problem: \begin{center}\makebox{ \xymatrix{ & \prod_{k\geq N}\Toep_k \ar[d]^{\pi} \\ C(D^2) \ar[r]^{\varphi} \ar@{..>}[ur]^{\bar{\varphi}} & {\prod_\NN\Toep / \bigoplus_\NN\Toep} \\ }} \end{center} Assume $C(D^2)$ is weakly semiprojective. Then the lifting problem can be solved, and $\bar{\varphi}$ defines a normal element $y=(y_N,y_{N+1},\ldots)$ in $\prod_{k\geq N}\Toep_k$. But the index of each $y_l$ is zero, while the index of each $t_l$ is $-1$, so that the norm-distance between $y_l$ and $t_l$ is at least one. Therefore the distance between $\pi(y)$ and $x$ is at least one, a contradiction. Thus, $C(D^2)$ is not weakly semiprojective. \qedhere \\ \end{proof} \begin{rmk}[Spaces containing a two-dimensional disc] \label{pargr:S03:spaces_containing_disc} We have seen above that $C(D^2)$ is not weakly semiprojective. Even more is true: Whenever a (compact, metric) space $X$ contains a two-dimensional disc, then $C(X)$ is not weakly semiprojective. This was noted by Loring, \cite{LorPrivat}. For completeness we include the argument: Let $D^2\subset X$ be a two-dimensional disc with inclusion map $i\colon D^2\to X$. Since $D^2$ is an absolute retract, there exists a retraction $r\colon X\to D^2$, i.e., $r\circ i=\id\colon D^2\to D^2$. Passing to \Cs{s}, we get induced morphisms $i^\ast\colon C(X)\to C(D^2), r^\ast\colon C(D^2)\to C(X)$ such that $i^\ast\circ r^\ast$ is the identity on $C(D^2)$. Assume $C(X)$ is weakly semiprojective. 
Then any lifting problem for $C(D^2)$ could be solved as follows: Using the weak semiprojectivity of $C(X)$, the morphism $\varphi\circ i^\ast$ can be lifted. Then $\sigma\circ r^\ast$ is a lift for $\varphi=\varphi\circ i^\ast\circ r^\ast$. The situation is summarized in the following commutative diagram: \begin{center}\makebox{ \xymatrix{ & & & \prod_{k\geq N}B_k \ar[d]^{\pi} \\ C(D^2) \ar[r]_{r^\ast} & C(X) \ar[r]_{i^\ast} \ar@{..>}[urr]^{\sigma} & C(D^2) \ar[r]_<<<<<{\varphi} & {\prod_{k\geq 1} B_k / \bigoplus_{k\geq 1} B_k} \\ }} \end{center} This gives a contradiction, as we have shown above that $C(D^2)$ is not weakly semiprojective. However, that a space does not contain a two-dimensional disc is no guarantee that it has dimension at most one. These kinds of questions are studied in continuum theory, and Bing, \cite{Bin1951}, gave examples of spaces of arbitrarily high dimension that are hereditarily indecomposable\footnote{A continuum (i.e. compact, connected, metric space) is called decomposable if it can be written as the union of two proper subcontinua. Note that the union is not assumed to be disjoint. For example, the interval $[0,1]$ is decomposable as it can be written as the union of $[0,1/2]$ and $[1/2,1]$. A continuum is called hereditarily indecomposable if none of its subcontinua is decomposable. See \cite{Nad1992} for further information.}; in particular, they do not contain an arc or a copy of $D^2$. These pathologies cannot occur if we restrict to ``nicer'' spaces. For example, if a CW-complex does not contain a two-dimensional disc, then it has dimension at most one. What about ANRs? Bing and Borsuk, \cite{BinBor1964}, gave an example of a three-dimensional AR that does not contain a copy of $D^2$. The question for four-dimensional ARs is still open, i.e., it is unknown whether there exist high-dimensional ARs (or just ANRs) that do not contain a copy of $D^2$.
The point we want to make clear is the following: To prove that an ANR is one-dimensional it is not enough to prove that it does not contain a copy of $D^2$. \\ \end{rmk} \begin{rmk}[Spaces contained in ANRs of dimension $\geq 2$] \label{pargr:S03:subspaces_ANR_dim2} Although an ANR $X$ with $\dim(X)\geq 2$ might not contain a disc, one can show that it must contain (a copy of) one of the following three spaces: \begin{description} \item[Space 1] The space $Y_1$ of distinct ``smaller and smaller circles'' as considered in the proof of \ref{prop:S03:SP_gives_1D}, i.e., $Y_1=\{(0,0)\}\cup\bigcup_{k\geq 1}S((1/2^k,0),1/(4\cdot 2^k))\subset\RR^2$. \item[Space 2] The Hawaiian earrings, i.e., $Y_2=\bigcup_{k\geq 1}S((1/2^k,0),1/2^k)\subset\RR^2$. \item[Space 3] A variant of the Hawaiian earrings, where the circles do not just intersect in one point, but have a segment in common. It is homeomorphic to: $Y_3=\{(x,x),(x,-x)\ :\ x\in[0,1]\} \cup \bigcup_{k\geq 1} \{1/k\}\times[-1/k,1/k]\subset\RR^2$.
\end{description} \begin{figure}[ht] \centering \subfigure[Space $Y_1$]{ \psset{unit=0.013cm} \degrees \begin{pspicture}(-50,125)(900,-125) \psline[linestyle=dotted, linewidth=2pt, dotsep=2pt](-20,0)(5,0) \pscircle(754, 0){121} \pscircle(471, 0){81} \pscircle(282, 0){54} \pscircle(156, 0){36} \pscircle(72, 0){24} \end{pspicture} } \subfigure[Space $Y_2$]{ \psset{unit=0.02cm} \begin{pspicture}(0,130)(260,-130) \pscircle(121, 0){121} \pscircle(81, 0){81} \pscircle(54, 0){54} \pscircle(36, 0){36} \pscircle(24, 0){24} \pscircle(16, 0){16} \pscircle[linestyle=dotted, linewidth=2pt, dotsep=1.2pt](10, 0){10} \end{pspicture} } \quad \subfigure[Space $Y_3$]{ \psset{unit=1.3cm} \degrees \begin{pspicture}(0,2)(5,-2) \rput{-90}(2,2){ \parabola(0,2)(2,-2) } \psarc(0,0){4.472}{-26.565}{26.565} \psarc(0,0){2.981}{-32.192}{32.192} \psarc(0,0){1.988}{-38.776}{38.776} \psarc(0,0){1.325}{-46.253}{46.253} \psarc(0,0){0.883}{-54.334}{54.334} \psarc(0,0){0.589}{-62.431}{62.431} \psarc[linestyle=dotted, linewidth=2pt, dotsep=1.5pt](0,0){0.393}{-69.775}{69.775} \end{pspicture} } \caption{Spaces contained in high-dimensional ANRs} \end{figure} To prove this, one uses the same idea as in the proof of \ref{prop:S03:SP_gives_1D}: If $\dim(X)\geq 2$, then there exists a point $x_0$ where the local dimension is at least two. Then one can embed into $X$ a sequence of circles that get smaller and smaller and converge to $x_0$. Note that the circles may intersect or overlap. By passing to subspaces, we can get rid of ``unnecessary'' intersections and overlaps, and finally there are only three qualitatively different ways a collection of ``smaller and smaller'' circles can look. We skip the details. Note that none of the three spaces $Y_1,Y_2,Y_3$ is semiprojective. Further, no (compact, metric) space $X$ that contains a copy of $Y_1,Y_2$ or $Y_3$ can be semiprojective. One uses an argument similar to the one for an embedded $D^2$. Assume for some $k$ there is an inclusion $i\colon Y_k\hookrightarrow X$.
Since $Y_k$ is not an AR, there will in general be no retraction onto it. Instead, choose an embedding $f\colon Y_k\hookrightarrow D^2$. This map can be extended to a map $\tilde{f}\colon X\to D^2$ on all of $X$ since $D^2$ is an AR. \begin{center}\makebox{ \xymatrix{ D^2 \\ Y_k \ar[r]_{i} \ar[u]^{f} & X \ar@{..>}[ul]_{\tilde{f}} & \\ }} \end{center} If $C(X)$ is semiprojective, then any lifting problem as shown in the diagram below can be solved. However, using Toeplitz algebras as in \ref{prop:S03:SP_gives_1D} we see that the morphism $f^\ast=i^\ast\circ\tilde{f}^\ast\colon C(D^2)\to C(Y_k)$ is not semiprojective. \begin{center} \makebox{ \xymatrix{ & & & B/J_N \ar[d]^{\pi} \\ C(D^2) \ar[r]_{\tilde{f}^\ast} & C(X) \ar[r]_{i^\ast} \ar@{..>}[urr]_{\sigma} & C(Y_k) \ar[r]_{\varphi} & B/\overline{\bigcup_{k\geq 1}J_k} \\ }} \end{center} Finally, let us note that the \Cs{s} $C(Y_1),C(Y_2)$ and $C(Y_3)$ are weakly semiprojective. \\ \end{rmk} \section{Structure of compact, one-dimensional ANRs} \label{sect:04:compact_ANR} \noindent In this section we prove structural theorems about compact, one-dimensional absolute neighborhood retracts (ANRs). The results are used in the next section to show that the \Cs{} of continuous functions on such a space is semiprojective. In section \ref{sect:06:Applications} we will study the structure of non-compact, one-dimensional ANRs. We start with some preparatory lemmas. By $\pi(X,x_0)$ we denote the fundamental group of $X$ based at $x_0\in X$. Statements about the fundamental group often do not depend on the basepoint, and then we will simply write $\pi(X)$ to mean that any (fixed) basepoint may be chosen. \\ \begin{lma} \label{prop:S04::lma:homotope_path_to_piecewise_arc} Let $X$ be a Hausdorff space. Assume $X$ has a simply connected covering space. Then every path in $X$ is homotopic (relative endpoints) to a path that is piecewise arc.
\end{lma} \begin{proof} Let $p\colon \widetilde{X}\to X$ be a simply connected, Hausdorff covering space. Let $\alpha\colon [0,1]\to X$ be a path, and let $\widetilde{\alpha}\colon [0,1]\to \widetilde{X}$ be a lift. Then the image of $\widetilde{\alpha}$ is a Peano continuum (i.e., a compact, connected, locally connected, metric space), and is therefore arcwise connected. Choose any arc $\beta\colon [0,1]\to \widetilde{X}$ from $\widetilde{\alpha}(0)$ to $\widetilde{\alpha}(1)$. The arc may of course be chosen within the image of $\widetilde{\alpha}$. Since $\widetilde{X}$ is simply connected, the paths $\widetilde{\alpha}$ and $\beta$ are homotopic (relative endpoints). Then $\alpha=p\circ\widetilde{\alpha}$ and $p\circ\beta$ are homotopic paths in $X$. Since $p$ is locally a homeomorphism, $p\circ\beta$ is piecewise arc, i.e., there exists a finite subdivision $0=t_0<t_1<\ldots<t_N=1$ such that each restriction $p\circ\beta_{|[t_j,t_{j+1}]}$ is an arc. \qedhere \\ \end{proof} \begin{lma} \label{prop:S04::lma:subgr_giving_fundGp} Let $X$ be a Hausdorff space, and $x_0\in X$. Assume $X$ has a simply connected covering space, and $\pi(X,x_0)$ is finitely generated. Then there exists a finite graph $Y\subset X$ with $x_0\in Y$ such that $\pi(Y,x_0)\to\pi(X,x_0)$ is surjective. \end{lma} \begin{proof} Choose a set of generators $g_1,\ldots,g_k$ for $\pi(X,x_0)$, represented by loops $\alpha_1,\ldots,\alpha_k\colon S^1\to X$. By the above lemma we can homotope each $\alpha_j$ to a loop $\beta_j$ that is piecewise arc. Then the image of each $\beta_j$ in $X$ is a finite graph. Consequently, also the union $Y:=\bigcup_j\im(\beta_j)$ is a finite graph (containing $x_0$). By construction each $g_j$ lies in the image of the natural map $\pi(Y,x_0)\to\pi(X,x_0)$. Therefore this map is surjective. \qedhere \\ \end{proof} \begin{rmk} \label{pargr:S04:Existence_of_univ_cov_sp} Let $X$ be a connected, locally pathwise connected space.
Then $X$ has a simply connected covering space (also called universal cover) if and only if $X$ is semilocally simply connected\footnote{A space $X$ is called semilocally simply connected (sometimes also called locally relatively simply connected) if for each $x_0\in X$ there exists a neighborhood $U$ of $x_0$ such that $\pi(U,x_0)\to\pi(X,x_0)$ is zero.} (s.l.s.c.), see \cite[Theorem III.8.4, p.155]{Bre1993}. \\ \end{rmk} \begin{prop} \label{prop:S04:Subgr_fundGp_for_SLSC_Peano} Let $X$ be a s.l.s.c. Peano continuum and $x_0\in X$. Then there exists a finite graph $Y\subset X$ with $x_0\in Y$ such that $\pi(Y,x_0)\to\pi(X,x_0)$ is surjective. \end{prop} \begin{proof} Peano continua are connected and locally pathwise connected. Therefore, by the above remark \ref{pargr:S04:Existence_of_univ_cov_sp}, $X$ has a simply connected covering space. By \cite[Lemma 7.7]{CanCon2006}, $\pi(X,x_0)$ is finitely generated (even finitely presented). Now we may apply the above lemma \ref{prop:S04::lma:subgr_giving_fundGp}. \qedhere \\ \end{proof} \begin{rmk} \label{pargr:S04:subgraph_has_free_fundGp} The fundamental group of a finite graph is a finitely generated (f.g.) free group. Thus, the above map $\pi(Y,x_0)\to\pi(X,x_0)$ will in general not be injective. Even if $\pi(X,x_0)$ is f.g. and free, the constructed map might not be injective. The reason is simply that the constructed graph could contain ``unnecessary'' loops (e.g. consider a circle embedded into a disc). However, by restricting to a subgraph one can get $\pi(Y,x_0)\to\pi(X,x_0)$ to be an isomorphism. Thus, if $X$ is a Hausdorff space that has a simply connected covering space, and $\pi(X,x_0)$ is finitely generated and free, then there exists a finite graph $Y\subset X$ such that $\pi(Y,x_0)\to\pi(X,x_0)$ is an isomorphism. Let us consider a one-dimensional space $X$.
This situation is special, since Cannon and Conner, \cite[Corollary 3.3]{CanCon2006}, have shown that an inclusion $Y\subset X$ of one-dimensional spaces induces an injective map on the fundamental group. Thus, we get the following: \\ \end{rmk} \begin{prop} \label{prop:S04:Subgr_fundGp_for_SLSC_1D} Let $X$ be a one-dimensional, Hausdorff space, and $x_0\in X$. Assume $X$ has a simply connected covering space, and $\pi(X,x_0)$ is finitely generated. Then there exists a finite graph $Y\subset X$ with $x_0\in Y$ such that $\pi(Y,x_0)\to\pi(X,x_0)$ is an isomorphism. \\ \end{prop} \noindent Above we studied when there is a finite subgraph containing (up to homotopy) all loops of a space. We now turn to the question of when there is a canonical such subgraph. It is clear that we can only hope for this to happen if the space is one-dimensional. We will use results from the master's thesis of Meilstrup, \cite{Mei2005}, where the following concept is also introduced: A one-dimensional Peano continuum is called a \termDef{core continuum} if it contains no proper deformation retracts. \\ \begin{prop}[{see \cite[Corollary 2.6]{Mei2005}}] \label{prop:S04:TFAE_core_continuum} \noindent Let $X$ be a one-dimensional Peano continuum. Then the following are equivalent: \begin{enumerate}[label=(\arabic*)] \item $X$ is a core \item $X$ has no attached dendrites (an attached dendrite is a dendrite $C\subset X$ such that for some $y\in C$ there is a strong deformation retract $r\colon X\to(X\setminus C)\cup\{y\}$) \item every point of $X$ is on an essential loop that cannot be homotoped off it \item whenever $Y\subset X$ is a subset with $\pi(Y)\to\pi(X)$ surjective (hence bijective), then $Y=X$ \end{enumerate} \end{prop} \begin{proof} The equivalence of (1),(2) and (3) is proved in \cite[Corollary 2.6]{Mei2005}. \impliesStep{3}{4}: Let $Y\subset X$ be a subset with $\pi(Y)\to\pi(X)$ surjective. Let $x\in X$ be any point.
Then $x$ is on an essential loop, say $\alpha$, which cannot be homotoped off it. Since $\pi(Y)\to\pi(X)$ is surjective, there is a loop $\beta$ with image in $Y$ that is homotopic to $\alpha$. As $\alpha$ cannot be homotoped off $x$, it follows that $x\in Y$. \impliesStep{4}{1}: For any subset $Y$ that is a deformation retract of $X$ the map $\pi(Y)\to\pi(X)$ is surjective, so by (4) we get $Y=X$. Hence $X$ contains no proper deformation retract, i.e., $X$ is a core. \qedhere \\ \end{proof} \noindent To proceed further and prove that every one-dimensional Peano continuum contains a core we need the notion of a reduced loop from \cite[Definition 3.8]{CanCon2006}. In fact, we will slightly generalize this to the notion of a reduced path. This will help to simplify some proofs below. \\ \begin{defn}[{see \cite[Definition 3.8]{CanCon2006}}] \label{defn:S04:reduced_path} \noindent A non-constant path $\alpha\colon [0,1]\to X$ is called \termDef{reducible}, if there is an open arc $I=(s,t)\subset[0,1]$ such that $\alpha(s)=\alpha(t)$ and the loop $\alpha_{|[s,t]}$ based at $\alpha(s)$ is nullhomotopic. A path is called \termDef{reduced} if it is not reducible. A constant path is also called reduced. \\ \end{defn} \noindent By \cite[Theorem 3.9]{CanCon2006} every loop is homotopic to a reduced loop, and if the space is one-dimensional, then this reduced loop is even unique (up to reparametrization of $S^1$). The analogue for paths is proved in the same way. \\ \begin{prop}[{see \cite[Theorem 3.9]{CanCon2006}}] \label{prop:S04:path_htpc_to_reduced_path} \noindent Let $X$ be a space, and $\alpha\colon [0,1]\to X$ a path. Then $\alpha$ is homotopic (relative endpoints) to a reduced path $\beta\colon [0,1]\to X$, and we may assume the homotopy takes place inside the image of $\alpha$, so that also the image of $\beta$ lies inside the image of $\alpha$. If $X$ is one-dimensional, then the reduced path is unique up to reparametrization of $[0,1]$. \\ \end{prop} \begin{prop}[{see \cite[Theorem 2.4]{Mei2005}}] \label{prop:S04:Existence_of_core} \noindent Let $X$ be a non-contractible, one-dimensional Peano continuum.
Then there exists a unique strong deformation retract $C\subset X$ that is a core continuum. We call it the core of $X$ and denote it by $\core(X)$. Further: \begin{enumerate}[label=(\arabic*)] \item $\core(X)$ is the smallest strong deformation retract of $X$ \item $\core(X)$ is the smallest subset $Y\subset X$ such that the map $\pi(Y)\to\pi(X)$ is surjective \end{enumerate} \end{prop} \begin{proof} Let $\core(X)\subset X$ be the union of all essential, reduced loops in $X$. In the proof of \cite[Theorem 2.4]{Mei2005} it is shown that $\core(X)$ is a core continuum and a strong deformation retract of $X$. For every strong deformation retract $Y\subset X$ the map $\pi(Y)\to\pi(X)$ is surjective. Thus, to prove the two statements it is enough to show that $\core(X)$ is contained in every subset $Y\subset X$ such that the map $\pi(Y)\to\pi(X)$ is surjective. Let $Y\subset X$ be any subset such that the map $\pi(Y)\to\pi(X)$ is surjective, and let $\alpha$ be an essential, reduced loop in $X$. Then $\alpha$ is homotopic to a loop $\alpha'$ in $Y$. By \ref{prop:S04:path_htpc_to_reduced_path} and the uniqueness of reduced loops in one-dimensional spaces, the image of $\alpha'$ contains the image of $\alpha$. Thus, $Y$ contains all essential, reduced loops in $X$, and therefore $\core(X)\subset Y$. \qedhere \\ \end{proof} \begin{rmk} \label{pargr:S04:Core} If $X$ is a contractible, one-dimensional Peano continuum (i.e. a dendrite), then it can be contracted to any of its points. That is why $\core(X)$ is not defined in this situation. However, to simplify the following statements we will consider the core of a dendrite to be just any fixed point. If $X$ is a finite graph, then the core is obtained by successively removing all ``loose'' edges, i.e., vertices that are endpoints and the edges connecting such endpoints to the rest of the graph. \\ \end{rmk} \noindent Next, we combine several known facts with some of our results to obtain a list of equivalent characterizations of when a one-dimensional Peano continuum is an ANR.
\\ \begin{thm} \label{prop:S04:TFAE_1D_Peano_ANR} Let $X$ be a one-dimensional Peano continuum. Then the following are equivalent: \begin{enumerate}[label=(\arabic*)] \item $X$ is an absolute neighborhood retract (ANR) \item $X$ is locally contractible \item $X$ has a simply connected covering space \item $\pi(X)$ is finitely generated \item there exists a finite graph $Y\subset X$ such that $\pi(Y)\to\pi(X)$ is an isomorphism \item $\core(X)$ is a finite graph \end{enumerate} \end{thm} \begin{proof} \impliesStep{1}{2}: Every ANR is locally contractible, see \cite[V.2.3, p.101]{Bor1967}. \impliesStep{2}{3}: By the above remark \ref{pargr:S04:Existence_of_univ_cov_sp}. \impliesStep{3}{4}: By \cite[Lemma 7.7]{CanCon2006}. \impliesStep{4}{1}: This follows from \cite[V.13.6, p.138]{Bor1967}. ``(3)+(4) $\Rightarrow$ (5)'': Follows from \ref{prop:S04:Subgr_fundGp_for_SLSC_1D}. \impliesStep{5}{6}: By \ref{prop:S04:Existence_of_core} (2), $\core(X)\subset Y$. Then $\pi(\core(X))\to\pi(Y)$ is an isomorphism, and therefore $\core(X)=\core(Y)$. By the above remark \ref{pargr:S04:Core} the core of a finite graph is again a finite graph. \impliesStep{6}{4}: Follows since $\pi(\core(X))\to\pi(X)$ is bijective and the fundamental group of a finite graph is finitely generated. \qedhere \\ \end{proof} \begin{rmk} \label{pargr:S04:1D_Peano_AR_iff_core_pt} Let $X$ be a one-dimensional Peano continuum. In the same way as the above theorem \ref{prop:S04:TFAE_1D_Peano_ANR} one obtains that the following are equivalent: \begin{enumerate}[label=(\arabic*)] \item $X$ is an absolute retract (AR) \item $X$ is contractible \item $X$ is simply connected \item $\pi(X,x_0)$ is zero \item there exists a finite tree $Y\subset X$ such that $\pi(Y,x_0)\to\pi(X,x_0)$ is an isomorphism (for any $x_0\in Y$) \item $\core(X)$ is a point \end{enumerate} Note that $X$ is a dendrite if and only if it is a one-dimensional Peano continuum that satisfies one (or equivalently all) of the above conditions.
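Let us illustrate the above with a well-known example: For the Hawaiian earrings $Y_2$ from \ref{pargr:S03:subspaces_ANR_dim2}, collapsing all but the $n$ largest circles to the base point is a retraction of $Y_2$ onto a wedge of $n$ circles, so that there are surjections \[ \pi(Y_2)\twoheadrightarrow F_n \qquad\text{for every } n\geq 1, \] where $F_n$ denotes the free group of rank $n$. Hence $\pi(Y_2)$ is not finitely generated, and by theorem \ref{prop:S04:TFAE_1D_Peano_ANR} the one-dimensional Peano continuum $Y_2$ is not an ANR.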
\\ \end{rmk} \noindent Let us proceed with the study of the internal structure of compact, one-dimensional ANRs. We will give a structure theorem which says that these spaces can be approximated by finite graphs in a nice way, namely from within. This generalizes a theorem from Nadler's book, \cite{Nad1992}, about the structure of dendrites (which are exactly the \emph{contractible} one-dimensional, compact ANRs). The point is that compact, one-dimensional ANRs can be approximated from within by finite graphs in exactly the same way as dendrites can be approximated by finite trees (which are exactly the contractible finite graphs). \\ \begin{lma} \label{prop:S04::lma:First_pt_unique} Let $X$ be a one-dimensional Peano continuum, and $Y$ a subcontinuum with $\core(X)\subset Y$. For each $x\in X\setminus Y$ there is a unique point $r(x)\in Y$ such that $r(x)$ is the first point in $Y$ of any arc in $X$ from $x$ to a point of $Y$. \end{lma} \begin{proof} This is the analogue of \cite[Lemma 10.24, p.175]{Nad1992}. We use ideas from the proof of \cite[Theorem 2.4]{Mei2005}. Let $X,Y$ be given, and $x\in X\setminus Y$. Pick some point $y\in Y$. Since $X$ is arc-connected, there exists an arc $\alpha\colon [0,1]\to X$ starting at $\alpha(0)=x$ and ending at $\alpha(1)=y$. Let $y_0=\alpha(\min\alpha^{-1}(Y))$, which is the first point in $Y$ of the arc (starting from $x$). Note that $y_0\in Y$ since $Y$ is closed. Assume there are two arcs $\alpha_1,\alpha_2\colon [0,1]\to X$ from $x$ to different points $y_1,y_2\in Y$ such that $\alpha_i([0,1))\subset X\setminus Y$. We show that this leads to a contradiction. Let $\beta$ be a reduced path in $Y$ from $y_1$ to $y_2$. Define \begin{align*} t_1 &:=\sup\{t\in[0,1]\ :\ \alpha_1(t)\in\im(\alpha_2)\} \\ t_2 &:=\sup\{t\in[0,1]\ :\ \alpha_2(t)\in\im(\alpha_1)\}, \end{align*} so that $x_0=\alpha_1(t_1)=\alpha_2(t_2)$ is the first point where the arcs $\alpha_1,\alpha_2$ meet (looking from $y_1$ and $y_2$).
Connecting $(\alpha_1)_{|[t_1,1]}$ (from $x_0$ to $y_1$) with $\beta$ (from $y_1$ to $y_2$) and the inverse of $(\alpha_2)_{|[t_2,1]}$ (from $y_2$ to $x_0$), we get a reduced loop containing $x_0$, so that $x_0\in\core(X)\subset Y$, contradicting $x_0\notin Y$. It follows that there exists a unique point $r(x)\in Y$ with the desired properties. \qedhere \\ \end{proof} \begin{defn}[{see \cite[Definition 10.26, p.176]{Nad1992}}] \label{defn:S04:First_pt_map} \noindent Let $X$ be a one-dimensional Peano continuum, and $Y$ a subcontinuum with $\core(X)\subset Y$. Define a map $r\colon X\to Y$ by letting $r(x)$ be as in the lemma \ref{prop:S04::lma:First_pt_unique} above if $x\in X\setminus Y$, and $r(x)=x$ if $x\in Y$. This map is called the \termDef{first point map}. \\ \end{defn} \noindent The first point map is continuous, and thus a retraction of $X$ onto $Y$. This is the analogue of \cite[Lemma 10.25, p.176]{Nad1992} and proved the same way. But more is true: As in the proof of \cite[Theorem 2.4]{Mei2005}, one can show that $Y$ is a strong deformation retract of $X$. \\ \begin{prop} \label{prop:S04:First_pt_map_cts} Let $X$ be a one-dimensional Peano continuum, and $Y$ a subcontinuum with $\core(X)\subset Y$. Then the first point map is continuous. Further, there is a strong deformation retraction from $X$ onto $Y$ whose end map is the first point map. \end{prop} \begin{proof} Let $X,Y$ be given. As in the proof of \cite[Theorem 2.4]{Mei2005}, the complement $X\setminus Y$ consists of a collection of attached dendrites $\{C_i\}$. That means each $C_i\subset X$ is a dendrite such that $C_i\cap Y$ consists of exactly one point $y_i$ and such that there is a strong deformation retract $r_i\colon X\to(X\setminus C_i)\cup\{y_i\}$. Meilstrup shows that these strong deformation retracts can be assembled to give a strong deformation retraction whose end map is the first point map $r$. \qedhere \\ \end{proof} \begin{thm} \label{prop:S04:Structure_1D_Peano} Let $X$ be a one-dimensional Peano continuum.
Then there is a sequence $\{Y_k\}_{k=1}^\infty$ such that: \begin{enumerate}[label=(\arabic*)] \item each $Y_k$ is a subcontinuum of $X$ \item $Y_k\subset Y_{k+1}$ \item $\lim_k Y_k=X$ \item $Y_1=\core(X)$ and for each $k$, $Y_{k+1}$ is obtained from $Y_k$ by attaching a line segment at a point, i.e., $\overline{Y_{k+1}\setminus Y_k}$ is an arc with an end point $p_k$ such that $\overline{Y_{k+1}\setminus Y_k}\cap Y_k=\{p_k\}$ \item letting $r_k\colon X\to Y_k$ be the first point map for $Y_k$ we have that $\{r_k\}_{k=1}^\infty$ converges uniformly to the identity map on $X$ \end{enumerate} If $X$ is also an ANR, then all $Y_k$ are finite graphs. If $X$ is even contractible (i.e. is an AR, or equivalently a dendrite), then $\core(X)$ is just some point, and all $Y_k$ are finite trees. \end{thm} \begin{proof} This is the analogue of \cite[Theorem 10.27, p.176]{Nad1992}, and the proof goes through if we use our analogous lemmas \ref{prop:S04::lma:First_pt_unique} and \ref{prop:S04:First_pt_map_cts}. \qedhere \\ \end{proof} \section{The other implication of the main theorem: Sufficiency} \label{sect:05:sufficiency} \noindent For this implication we aim to mirror the approach of Chigogidze and Dranishnikov, \cite{ChiDra2010}. However, we first show how to go from $C(X)$ being a universal \Cs{} to $C(Y)$ being one, where $Y$ is obtained from $X$ by attaching a line segment at one point. This step is not needed in \cite{ChiDra2010}, since they are able to give a general description of the generators and relations of the relevant spaces. We have not been able to find such generators and relations, and doing so might be of independent interest. \\ \begin{lma} \label{ExtendUniversal} Suppose $X$ is a space, that $C(X) = C^* \langle \cG \mid \mathcal{R} \rangle$ and that $\{ \hat{g} \mid g \in \cG \}$ is a generating set of $C(X)$ that fulfills $\mathcal{R}$. Let $Y$ be the space formed from $X$ by attaching a line segment at a point $v$, and let $\lambda_g = \hat{g}(v)$.
Then $C(Y) = C^* \langle \cG \cup \{ h \} \mid \mathcal{R}' \rangle$, where \[ \mathcal{R}' = \mathcal{R} \cup \{ g h = \lambda_g h \text{ and } g h = h g \mid g \in \cG \} \cup \{ 0 \leq h \leq 1 \}. \] \end{lma} \begin{proof} Extending the $\hat{g}$ to $Y$ by letting them be constant on the added line segment, and letting $\hat{h}$ be the function that is zero on $X$ and grows linearly to one on the line segment (identifying it with $[0,1]$), shows that there is a generating family in $C(Y)$ that fulfills $\mathcal{R}'$. We will use \cite[Lemma 3.2.2, p.26]{Lor1997} to show that $C(Y)$ is universal for $\mathcal{R}'$. By this lemma, it suffices to show that whenever we have a family $\{ T_g \mid g \in \cG \cup \{ h \} \}$ of operators, on some Hilbert space $H$, that fulfills $\mathcal{R}'$ and $\{ T_g \mid g \in \cG \cup \{ h \} \}' = \CC I$, then we can find a morphism from $C(Y)$ to $B(H)$ taking $\hat{g}$ to $T_g$ for all $g \in \cG \cup \{ h \}$. Suppose we have such operators. Since $C(X)$ is commutative and $\mathcal{R}'$ forces $h$ to commute with all the other generators, we have that $T_g = \mu_g I$ for some $\mu_g \in \CC$, for all $g \in \cG \cup \{ h \}$. We need to find a morphism from $C(Y)$ to $\CC$. There are two cases. \begin{itemize}[leftmargin=20pt, itemsep=5pt] \item \textbf{Case 1:} $\mu_h = 0$: In this case we can find a morphism $\phi \colon C(X) \to \CC$ such that $\phi(\hat{g}) = \mu_g$ for all $g \in \cG$, since $C(X) = C^* \langle \cG \mid \mathcal{R} \rangle$. Then $\phi = \ev_u$ for some point $u \in X$. The morphism $\ev_u \colon C(Y) \to \CC$ takes $\hat{h}$ to $0$ and each $\hat{g}$ to $\mu_g$, and thus is the required morphism. \item \textbf{Case 2:} $\mu_h \neq 0$: Since $0 \leq T_h \leq 1$, we have $0 < \mu_h \leq 1$. For $g \in \cG$ we have \[ \mu_g \mu_h I = T_g T_h = \lambda_g \mu_h I. \] So since $\mu_h \neq 0$, we have $\mu_g = \lambda_g$ for all $g \in \cG$. Let us now identify the added line segment with $[0,1]$.
The morphism $\ev_{\mu_h} \colon C(Y) \to \CC$ takes $\hat{h}$ to $\mu_h$ and $\hat{g}$ to $\lambda_g = \mu_g$. Hence it is the required morphism. \end{itemize} \qedhere \\ \end{proof} \noindent We now provide a slightly altered (in both proof and statement) version of \cite[Proposition 4.1]{ChiDra2010}. \\ \begin{lma} \label{AddLine} Suppose $X$ is a one-dimensional finite graph, that $C(X) = C^* \langle \cG \mid \mathcal{R} \rangle$, that $\{ \hat{g} \mid g \in \cG \}$ is a generating set of $C(X)$ that fulfills $\mathcal{R}$, and that $\cG$ is finite. Let $Y$ be the space formed from $X$ by attaching a line segment at a point $v$. Suppose we have a commutative square \begin{center} \makebox{ \xymatrix{ C(X) \ar[d]_{\iota} \ar[r]^{\psi} & C \ar[d]^{\pi} \\ C(Y) \ar[r]_{\phi} & C/J } } \end{center} where $J$ is an ideal in the unital \Cs{} $C$, $\pi$ is the quotient morphism, $\psi$ and $\phi$ are unital morphisms, and $\iota$ is induced by the retraction from $Y$ onto $X$, i.e., $\iota$ takes a function in $C(X)$ to the function in $C(Y)$ given by \[ \iota(f)(x) = \left\{ \begin{array}{rl} f(x), & x \in X, \\ f(v), & x \text{ is in the added line segment} \end{array} \right.. \] Then for every $\e > 0$ we can find a morphism $\chi \colon C(Y) \to C$ such that $\pi \circ \chi = \phi$ and $\| (\chi \circ \iota)(\hat{g}) - \psi(\hat{g}) \| \leq \e$ for every $g \in \cG$. \end{lma} \begin{proof} Throughout the proof we use the notation of Lemma \ref{ExtendUniversal}. Let $\delta > 0$ be given. We will construct a $\delta$-representation $\{ d_g \mid g \in \cG \cup \{ h \} \}$ of $\mathcal{R}'$ in $C$ such that $\pi(d_g) = \phi(\iota(\hat{g}))$ for $g \in \cG$ and $\pi(d_h) = \phi(\hat{h})$. Let $q_\kappa \colon X \to X$ be the map that collapses the ball $B_{\kappa/2}(v)$ to $v$, fixes $X \setminus B_{\kappa}(v)$, and extends linearly in between.
Since there are only finitely many $\hat{g}$, we can find $\kappa_0$ such that $\| q_{\kappa_0}^*(\hat{g}) - \hat{g} \| \leq \delta / 2 $, where $q_\kappa^*$ is the morphism on $C(X)$ induced by $q_\kappa$. For simpler notation we let $q = q_{\kappa_0}$, and put $w_g = q^*(\hat{g})$ for all $g \in \cG$. Let $f_0$ be a positive function in $C(X)$ of norm $1$ that is zero on $X \setminus B_{\kappa_0 / 2}(v)$ and $1$ at $v$. Observe that if $f \in q^*(C_0(X \setminus \{v \}))$, then $f f_0 = 0$. Since $\hat{h} \leq \iota(f_0)$ and $\psi(f_0)$ is a lift of $\phi(\iota(f_0))$, we can, by \cite[Corollary 8.2.2, p.63]{Lor1997}, find a lift $\bar{h}$ of $\phi(\hat{h})$ such that $0 \leq \bar{h} \leq \psi(f_0)$. We now claim that $\{ \psi(\hat{g}) \mid g \in \cG\} \cup \{ \bar{h} \}$ is a $\delta$-representation of $\mathcal{R}'$. Since the $\psi(\hat{g})$ fulfill the relations $\mathcal{R}$ and $\bar{h}$ is a positive contraction, we only need to check that $\psi(\hat{g})$ and $\bar{h}$ almost commute, and that $\psi(\hat{g}) \bar{h}$ is almost $\lambda_g \bar{h}$. First we note that, since $0 \leq \bar{h} \leq \psi(f_0)$, for any $f \in q^*(C_0(X \setminus \{ v \}))$ we have \[ \| \psi(f) \bar{h}^{1/2} \|^2 = \| \psi(f) \bar{h} \psi(f)^* \| \leq \| \psi(f) \psi(f_0) \psi(f)^* \| = 0. \] Thus $\psi(f) \bar{h} = 0$. In particular we have \[ \psi(w_g - \lambda_g) \bar{h} = 0. \] Now we have \begin{align*} \| \psi(\hat{g}) \bar{h} - \bar{h} \psi(\hat{g}) \| &= \| \psi(\hat{g}) \bar{h} - \psi(w_g - \lambda_g) \bar{h} - \bar{h} \psi(\hat{g}) + \bar{h} \psi(w_g - \lambda_g) \| \\ &= \| \psi(\hat{g} - w_g) \bar{h} + \lambda_g \bar{h} - \bar{h}(\psi(\hat{g} - w_g)) - \lambda_g \bar{h} \| \\ &\leq \|\bar{h}\| (\| \psi(\hat{g} - w_g) \| + \|\psi(\hat{g} - w_g)\|) \\ &\leq 2 \| \hat{g} - w_g \| \leq 2 \cdot \delta /2 = \delta, \end{align*} for all $g \in \cG$.
Likewise we have \begin{align*} \| \psi(\hat{g})\bar{h} - \lambda_g \bar{h} \| &= \| \psi(\hat{g}) \bar{h} - \lambda_g \bar{h} - \psi(w_g - \lambda_g) \bar{h} \| \\ &= \| \psi(\hat{g} - w_g) \bar{h} + \lambda_g \bar{h} - \lambda_g \bar{h} \| \\ &= \| \psi(\hat{g} - w_g) \bar{h} \| \leq \| \hat{g} - w_g \| \leq \delta/2 \leq \delta, \end{align*} for all $g \in \cG$. So $\{ \psi(\hat{g}) \mid g \in \cG \} \cup \{ \bar{h} \}$ is indeed a $\delta$-representation of $\mathcal{R}'$. Further we have that $\pi(\psi(\hat{g})) = \phi(\iota(\hat{g}))$ and that $\pi(\bar{h}) = \phi(\hat{h})$. Since $X$ is a one-dimensional finite graph, $Y$ is also a one-dimensional finite graph, so $C(Y)$ is semiprojective by \cite[Proposition 16.2.1, p.125]{Lor1997}. By \cite[Theorem 14.1.4, p.106]{Lor1997} the relations $\mathcal{R}'$ are then stable. So the fact that we can find a $\delta$-representation for all $\delta$ implies that we can find a morphism $\chi \colon C(Y) \to C$ such that $\pi \circ \chi = \phi$ and $\| \chi(\iota(\hat{g})) - \psi(\hat{g}) \| \leq \e$ for all $g \in \cG$. \qedhere \\ \end{proof} \noindent We are now ready to show that some inductive limits have good lifting properties. In particular, if we have an initial lift, then we can lift everything that follows. \\ \begin{prop} \label{InductiveLimitProjective} Suppose that $X$ is a compact space such that $C(X)$ can be written as an inductive limit $\varinjlim_n C(Y_n) = C(X)$, where each $Y_n$ is a finite graph, $Y_{n+1}$ is just $Y_n$ with a line segment attached at a point (as in Lemma \ref{AddLine}), and the bonding morphisms $\iota_{n,n+1}\colon C(Y_n)\to C(Y_{n+1})$ are as the morphism in Lemma \ref{AddLine}, i.e., induced by retracting the attached interval to the attaching point.
If there is a unital morphism $\phi \colon C(X) \to C/J$, where $J$ is an ideal in a unital \Cs{} $C$, and a unital morphism $\psi_1 \colon C(Y_1) \to C$ such that $\pi \circ \psi_1 = \phi \circ \iota_{1,\infty}$, then there is a unital morphism $\bar{\psi} \colon C(X) \to C$ such that $\pi \circ \bar{\psi} = \phi$. \end{prop} \begin{proof} We have the following situation: \begin{center} \makebox{ \xymatrix{ & & C \ar[d]^{\pi} \\ C(Y_1) \ar[r]_{\iota_{1,\infty}} \ar[urr]^{\psi_1} & C(X) \ar[r]_{\phi} \ar@{..>}[ur]_{\bar{\psi}} & C/J \\ }} \end{center} As $Y_1$ is a finite graph, $C(Y_1)$ is finitely generated. Thus $C(Y_1)$ is a universal \Cs{} for some finite set of generators and relations, $C(Y_1) = C^* \langle \cG_1 \mid \mathcal{R}_1 \rangle $, say. In view of Lemma \ref{ExtendUniversal} we can now assume that $C(Y_n) = C^* \langle \cG_n \mid \mathcal{R}_n \rangle$, where $\cG_1 \subseteq \cG_2 \subseteq \cdots$, and likewise for the $\mathcal{R}_n$. We also get from Lemma \ref{ExtendUniversal} that all the $\cG_n$ and $\mathcal{R}_n$ are finite. Since we are given $\psi_1$, we can, using Lemma \ref{AddLine} inductively, for any sequence of positive numbers $(\e_n)$ find morphisms $\psi_n \colon C(Y_n) \to C$ for each $n > 1$ such that $\pi \circ \psi_n = \phi \circ \iota_{n,\infty}$ and such that $\| \psi_n(\hat{g}) - \psi_{n-1}(\hat{g}) \| \leq \e_n$ for the generators $\hat{g}$ of $C(Y_{n-1})$. We now wish to define new morphisms $\chi_n \colon C(Y_n) \to C$ such that $\pi \circ \chi_n = \phi \circ \iota_{n,\infty}$ and $\chi_{n+1}$ extends $\chi_n$. To this end we define, for each $n \in \NN$, elements $\{ \bar{g}_n \mid g \in \cG_n \}$, by \[ \bar{g}_n = \lim_k \psi_{n+k}(\hat{g}). \] The sequence $(\psi_{n+k}(\hat{g}))_k$ is Cauchy provided that $\sum_n \e_n < \infty$, which we will assume. We claim that for any $n \in \NN$ the elements $\{ \bar{g}_n \mid g \in \cG_n \}$ in $C$ fulfill $\mathcal{R}_n$.
By \cite[Lemma 13.2.3, p.103]{Lor1997} the set $\{ \bar{g}_n \mid g \in \cG_n \}$ is an $\e$-representation of $\mathcal{R}_n$ for all $\e > 0$ since $\{ \psi_{n+k}(\hat{g}) \mid g \in \cG_n \}$ is a representation of $\mathcal{R}_n$ for all $k$. Thus $\{ \bar{g}_n \mid g \in \cG_n \}$ is a representation of $\mathcal{R}_n$. Observe that if $m \geq n$, then $\bar{g}_m = \bar{g}_n$, since $\bar{g}_m$ is the limit of a tail of the sequence defining $\bar{g}_n$. Thus, we will drop the subscripts, and simply say that we have elements $\{ \bar{g} \mid g \in \cup \cG_n \}$ such that for any $n \in \NN$ the set $\{ \bar{g} \mid g \in \cG_n \}$ fulfills $\mathcal{R}_n$. Now we can define the $\chi_n$. We put $\chi_n(\hat{g}) = \bar{g}$, for $g \in \cG_n$, and this extends to a morphism since $C(Y_n) \cong C^* \langle \cG_n \mid \mathcal{R}_n \rangle$. We get $\chi_{n+1} \circ \iota_{n,n+1} = \chi_n$ and $\pi \circ \chi_n = \phi \circ \iota_{n,\infty}$ by universality, since it holds on generators. By the universal property of an inductive limit we get a morphism $\chi \colon C(X) \to C$ such that $\pi \circ \chi = \phi$. \qedhere \\ \end{proof} \begin{rmk} Using the structure theorem for dendrites, \cite[Theorem 10.27, p.176]{Nad1992}, see \ref{prop:S04:Structure_1D_Peano}, and the above Proposition \ref{InductiveLimitProjective} we may deduce that for a dendrite $X$ the \Cs{} $C(X)$ is projective in $\mathcal{S}_1$ (the category of unital \Cs{s}, see \ref{pargr:S02:wSP}). Thus, we recover the implication \impliesStep{1}{2} of \cite[Theorem 4.3]{ChiDra2010}. To elaborate: Each dendrite $X$ can be approximated from within by finite trees, i.e., $C(X)\cong\varinjlim C(Y_k)$ where $Y_1$ is just a single point and the trees $Y_k$ are obtained by successive attaching of line segments.
Since $C(Y_1)=\CC$ is projective in $\mathcal{S}_1$, we obtain from \ref{InductiveLimitProjective} that morphisms from $C(X)$ into quotients can be lifted, i.e., $C(X)$ is projective in $\mathcal{S}_1$. \\ \end{rmk} \noindent We are now ready to prove our main theorem: \\ \begin{proof}[Proof of theorem \ref{MainTheorem}] The implication ``$(I) \Rightarrow (II)$'' is Proposition \ref{prop:S03:SP_gives_1D}. Let us prove ``$(II) \Rightarrow (I)$'': So assume $X$ is a compact ANR with $\dim(X)\leq 1$. Note that $X$ can have at most finitely many components $X_i$. If we can show that each $C(X_i)$ is semiprojective, then $C(X)=\bigoplus_i C(X_i)$ will be semiprojective (since semiprojectivity is preserved by finite direct sums, see \cite[Theorem 14.2.1, p.110]{Lor1997}). So we may assume $X$ is connected. Then theorem \ref{prop:S04:Structure_1D_Peano} applies, and we may find an increasing sequence $Y_1\subset Y_2\subset\ldots\subset X$ of finite subgraphs such that: \begin{enumerate}[label=(\arabic*)] \item $\lim_k Y_k=X$, i.e., $\overline{\bigcup_kY_k}=X$ \item $Y_{k+1}$ is obtained from $Y_k$ by attaching a line segment at a point \end{enumerate} Then $C(X)=\varinjlim_k C(Y_k)$ where each bonding morphism $\iota_{k,k+1}\colon C(Y_k)\to C(Y_{k+1})$ is induced by the retraction from $Y_{k+1}$ to $Y_k$ that contracts $Y_{k+1}\setminus Y_k$ to the point $\overline{Y_{k+1}\setminus Y_k}\cap Y_k$. Suppose now that we are given a unital \Cs{} $C$ and an increasing sequence of ideals $J_1\lhd J_2\lhd\ldots\lhd C$ and a unital morphism $\sigma\colon C(X)\to C/\overline{\bigcup_k J_k}$. We need to find a lift $\bar{\sigma}\colon C(X)\to C/J_l$ for some $l$. Consider the unital morphism $\sigma\circ\iota_{1,\infty}\colon C(Y_1)\to C/\overline{\bigcup_k J_k}$. By \cite[Proposition 16.2.1, p.125]{Lor1997}, the initial \Cs{} $C(Y_1)$ is semiprojective.
Therefore, we can find an index $l$ and a unital morphism $\alpha\colon C(Y_1)\to C/J_l$ such that $\pi_l\circ\alpha=\sigma\circ\iota_{1,\infty}$. This is depicted in the following commutative diagram: \begin{center} \makebox{ \xymatrix{ & & & & C \ar[d] \\ & & & & C/J_l \ar[d]^{\pi_l} \\ C(Y_1) \ar[r]_{\iota_{1,2}} \ar@{..>}[urrrr]^{\alpha} & C(Y_2) \ar[r] & \ldots \ar[r] & C(X) \ar[r]_{\sigma} & C/\overline{\bigcup_k J_k} \\ }} \end{center} Now we can apply \ref{InductiveLimitProjective} to find a unital morphism $\bar{\sigma}\colon C(X)\to C/J_l$ such that $\pi_l\circ\bar{\sigma}=\sigma$. This shows that $C(X)$ is semiprojective. \qedhere \\ \end{proof} \section{Applications} \label{sect:06:Applications} \noindent In this section we give applications of our findings. First, we characterize semiprojectivity of non-unital, separable commutative \Cs{s}. Building on this, we are able to confirm a conjecture of Loring in the particular case of commutative \Cs{s}. Then, we will study the semiprojectivity of \Cs{s} of the form $C_0(X,M_k)$. Finally, we will give a partial solution to the problem of when a commutative \Cs{} is weakly (semi-)projective. To keep this article short, we will omit most of the proofs in this section. \\ To characterize semiprojectivity of non-unital commutative \Cs{s} we have to study the structure of non-compact, one-dimensional ANRs. We are particularly interested in the one-point compactifications of such spaces. The motivation is given by the following results: If $X$ is a locally compact, Hausdorff space, then naturally $\widetilde{C_0(X)}\cong C(\cpctnPt{X})$, where $\cpctnPt{X}$ is the one-point compactification of $X$. Further, a \Cs{} $A$ is semiprojective if and only if $\widetilde{A}$ is semiprojective. Thus, $C_0(X)$ is semiprojective if and only if $C(\cpctnPt{X})$ is semiprojective. By our main result \ref{MainTheorem} this happens precisely if $\cpctnPt{X}$ is a one-dimensional ANR.
The following result gives a topological characterization of such spaces. We derive a characterization of semiprojectivity for non-unital, separable commutative \Cs{s}, see corollary \ref{prop:S06:SP_for_non-compact}. We also show that $\cpctnPt{X}$ is a one-dimensional ANR if and only if every finite-point compactification\footnote{A compactification of a space $X$ is a pair $(Y,\iota_Y)$ where $Y$ is a compact space, $\iota_Y\colon X\to Y$ is an embedding and $\iota_Y(X)$ is dense in $Y$. Usually the embedding is understood and one denotes a compactification just by the space $Y$. A compactification $\gamma(X)$ of $X$ is called a finite-point compactification if the remainder $\gamma(X)\setminus X$ is finite.} of $X$ is a one-dimensional ANR. Using this, we can confirm a conjecture about the semiprojectivity of extensions in the commutative case, see \ref{prop:S06:ideal_with_fd_quotient} and \ref{pargr:S06:Conj_SP_extension}. \\ \begin{thm} \label{prop:S06:cpctfn_ANR} Let $X$ be a one-dimensional, locally compact, separable, metric ANR. Then the following are equivalent: \begin{enumerate}[label=(\arabic*)] \item The one-point compactification $\cpctnPt{X}$ is an ANR \item $X$ has only finitely many compact components and also only finitely many components $C\subset X$ such that $\cpctnPt{C}$ is not a dendrite \item Every finite-point compactification of $X$ is an ANR \item Some finite-point compactification of $X$ is an ANR \\ \end{enumerate} \end{thm} \begin{cor} \label{prop:S06:SP_for_non-compact} Let $X$ be a locally compact, separable, metric space. Then the following are equivalent: \begin{enumerate}[label=(\arabic*)] \item $C_0(X)$ is semiprojective.
\item $X$ is a one-dimensional ANR that has only finitely many compact components, and $X$ has also only finitely many components $C\subset X$ such that $\cpctnPt{C}$ is not a dendrite \\ \end{enumerate} \end{cor} \begin{cor} \label{prop:S06:ideal_with_fd_quotient} Let $A$ be a separable, commutative \Cs{}, and $I\lhd A$ an ideal. Assume $A/I$ is finite-dimensional, i.e., $A/I\cong\CC^k$ for some $k$. Then $A$ is semiprojective if and only if $I$ is semiprojective. \end{cor} \begin{proof} Let $A=C_0(X)$ for a locally compact, separable, metric space $X$. Then $I=C_0(Y)$ for an open subset $Y\subset X$. Since $A/I$ is finite-dimensional, $X\setminus Y$ is finite. It follows that $\cpctnPt{X}\setminus Y$ is also finite, and so the closure $\overline{Y}\subset\cpctnPt{X}$ is a finite-point compactification of $Y$. Set $F:=\cpctnPt{X}\setminus\overline{Y}$ (which is also finite). Note that $\overline{Y}\subset\cpctnPt{X}$ is a component, so that $\cpctnPt{X}=\overline{Y}\sqcup F$. It follows that $\cpctnPt{X}$ is an ANR if and only if $\overline{Y}$ is.
Then we argue as follows: \begin{tabular}{lll} & $A=C_0(X)$ is semiprojective \\ $\Leftrightarrow$ & $\widetilde{A}=C(\cpctnPt{X})$ is semiprojective \\ $\Leftrightarrow$ & $\cpctnPt{X}$ is a one-dimensional ANR & [ by theorem \ref{MainTheorem} ] \\ $\Leftrightarrow$ & $\overline{Y}\subset\cpctnPt{X}$ is a one-dimensional ANR & [ since $\cpctnPt{X}=\overline{Y}\sqcup F$] \\ \multirow{2}{*}{$\Leftrightarrow$} & \multirow{2}{*}{$\cpctnPt{Y}$ is a one-dimensional ANR} & [by theorem \ref{prop:S06:cpctfn_ANR} since $\overline{Y}$ is a \\ & & \ finite-point compactification of $Y$ ] \\ $\Leftrightarrow$ & $\widetilde{I}=C(\cpctnPt{Y})$ is semiprojective & [ by theorem \ref{MainTheorem} ] \\ $\Leftrightarrow$ & $I=C_0(Y)$ is semiprojective \end{tabular} \qedhere \\ \end{proof} \begin{rmk} \label{pargr:S06:Conj_SP_extension} Let $A$ be a separable \Cs{}, and $I\lhd A$ an ideal so that the quotient is finite-dimensional. We get a short exact sequence: \begin{center} \makebox{ \xymatrix{ 0\ar[r] & I \ar[r] & A \ar[r] & F \ar[r] & 0 \\ }} \end{center} It was conjectured by Loring and also by Blackadar, \cite[Conjecture 4.5]{Bla2004}, that in this situation $A$ is semiprojective if and only if $I$ is semiprojective. One implication was recently proven by Dominic Enders, \cite{EndPrivat}, who showed that semiprojectivity passes to ideals when the quotient is finite-dimensional. The converse implication is in general not even known for $F=\CC$. Our above result \ref{prop:S06:ideal_with_fd_quotient} confirms this conjecture in the case that $A$ is commutative. \\ \end{rmk} \noindent Let us now study the semiprojectivity of \Cs{s} of the form $C_0(X,M_k)$. \\ \begin{lma} \label{eval} Let $X$ be a locally compact metric space and let $k\in\NN$. If $\phi \colon C_0(X,M_k) \to M_k$ is a morphism, then there is a unitary $u \in M_k$ and a unique point $x \in \cpctnPt{X}$ such that \[ \phi = Ad_u \circ \ev_x.
\] \\ \end{lma} \begin{prop} \label{matrixAR} Let $X$ be a locally compact, separable, metric space and let $k \in \NN$. If $C_0(X,M_k)$ is projective, then $\cpctnPt{X}$ is an AR. \end{prop} \begin{proof} Suppose we are given a compact metric space $Y$ with an embedding $\iota\colon \cpctnPt{X}\to Y$. Dualizing and embedding $C_0(X)$ into $C(\cpctnPt{X})$, we get the following diagram \begin{center} \makebox{ \xymatrix{ & C_0(Y) \ar[d]^{\iota_*} \\ C_0(X) \ar[r] & C(\cpctnPt{X}) }} \end{center} Tensoring everything by the $k$ by $k$ matrices $M_k$, we get \begin{center} \makebox{ \xymatrix{ & C_0(Y, M_k) \ar[d]^{(\iota_*)_k} \\ C_0(X,M_k) \ar[r] & C(\cpctnPt{X}, M_k) }} \end{center} Since $C_0(X,M_k)$ is projective, there is a morphism $\psi \colon C_0(X,M_k) \to C_0(Y,M_k)$ such that $(\iota_*)_k \circ \psi$ is the inclusion of $C_0(X,M_k)$ into $C(\cpctnPt{X},M_k)$. For each $y \in Y$, lemma \ref{eval} tells us that the morphism $\ev_y \circ \psi$ has the form $Ad_{u_y} \circ \ev_{x_y}$ for some unitary $u_y \in M_k$ and some unique $x_y \in \cpctnPt{X}$. Hence we can define a function $\lambda \colon Y \to \cpctnPt{X}$ such that \[ \ev_y \circ \psi = Ad_{u_y} \circ \ev_{\lambda(y)}. \] This map $\lambda$ is continuous. For each $x \in \cpctnPt{X}$ we have the following commutative diagram \begin{center} \makebox{ \xymatrix{ & C_0(Y, M_k) \ar[d]^{(\iota_*)_k} \ar[r]^>>>>{\ev_{\iota(x)}} & M_k \ar@{=}[d] \\ C_0(X,M_k) \ar[r] \ar[ur]^{\psi} & C(\cpctnPt{X}, M_k) \ar[r]^>>>>{\ev_x} & M_k }} \end{center} From this diagram, it follows that if $x \in \cpctnPt{X}$ then \[ Ad_{u_{\iota(x)}} \circ \ev_{\lambda(\iota(x))} = \ev_{\iota(x)} \circ \psi = \ev_x \circ (\iota_*)_k \circ \psi = \ev_x.
\] So for any function $g \in C_0(X, M_k)$ we get \[ \ev_{\lambda(\iota(x))} \begin{pmatrix} g & & \\ & \ddots & \\ & & g \end{pmatrix} = (Ad_{u_{\iota(x)}} \circ \ev_{\lambda(\iota(x))}) \begin{pmatrix} g & & \\ & \ddots & \\ & & g \end{pmatrix} = \ev_x \begin{pmatrix} g & & \\ & \ddots & \\ & & g \end{pmatrix}. \] Hence we must have $\lambda(\iota(x)) = x$. All in all, we have found a continuous map $\lambda \colon Y \to \cpctnPt{X}$ such that $\lambda \circ \iota = \id$, i.e., the embedded space $\cpctnPt{X}\subset Y$ is a retract. As the embedding was arbitrary, $\cpctnPt{X}$ is an AR. \qedhere \\ \end{proof} \noindent The proof can be modified to show: \\ \begin{prop} \label{matrixANR} Let $X$ be a locally compact, separable, metric space and let $k \in \NN$. If $C_0(X,M_k)$ is semiprojective, then $\cpctnPt{X}$ is an ANR. \\ \end{prop} \noindent Using the idea of the proof of \ref{prop:S03:SP_gives_1D} one can show the following: \\ \begin{prop} \label{matrixDimension} Let $X$ be a locally compact, separable, metric space, and $k\in\NN$. If $C_0(X, M_k)$ is semiprojective, then $\dim(X) \leq 1$. \\ \end{prop} \begin{cor} \label{matrixMain} Let $A$ be a separable, commutative \Cs{}, and $k\in\NN$. If $A\otimes M_k$ is projective, then so is $A$. Analogously, if $A\otimes M_k$ is semiprojective, then so is $A$. \end{cor} \begin{proof} Let $A=C_0(X)$ for a locally compact, separable, metric space $X$. First, assume $A\otimes M_k$ is semiprojective. By proposition \ref{matrixDimension}, $\dim(X)\leq 1$. This implies that the dimension of $\cpctnPt{X}$ is at most one. By proposition \ref{matrixANR}, $\cpctnPt{X}$ is an ANR. Then our main theorem \ref{MainTheorem} shows that $C(\cpctnPt{X})$ is semiprojective. Since $C(\cpctnPt{X})$ is the unitization of $C_0(X)$, we also have that $C_0(X)$ is semiprojective. Assume now that $A\otimes M_k$ is projective. 
It follows that $A$ cannot be unital, for otherwise $A\otimes M_k$ would be unital and that is impossible for projective \Cs{s}. As in the semiprojective case, we deduce $\dim(\cpctnPt{X})\leq 1$. By \ref{matrixAR}, $\cpctnPt{X}$ is an AR. It follows from \cite[Theorem 4.3]{ChiDra2010}, see also \ref{summaryThm}, that $C(\cpctnPt{X})$ is projective in $\mathcal{S}_1$. It follows that $C_0(X)$ is projective, see \ref{pargr:S02:wSP}. \qedhere \\ \end{proof} \noindent We will now turn to the question of when a unital, commutative \Cs{} is weakly (semi-)projective in $\mathcal{S}_1$. The analogue of a weakly (semi-)projective \Cs{} in the commutative world is an approximative absolute (neighborhood) retract (abbreviated by AAR and AANR). As mentioned in \ref{pargr:S02:Connection}, if $C(X)$ is weakly (semi-)projective, then $X$ is an AA(N)R. We will show below that for one-dimensional spaces the converse is also true. \\ \begin{pargr} \label{pargr:S07:conditions_AANR} Let $X$ be a compact, metric space. Consider the following conditions: \begin{enumerate}[label=(\arabic*)] \item for each $\varepsilon>0$ there exists a map $f\colon X\to Y\subset X$ such that $Y$ is an AR (an ANR), and $d(f)\leq\varepsilon$ \item $X$ is an AAR (an AANR) \end{enumerate} Here, by $d(f)\leq\e$ we mean that the distance between $x$ and $f(x)$ is at most $\e$ for all $x\in X$, i.e., $d(x,f(x))\leq\e$ for all $x\in X$ (and analogously for $d(f)<\e$). The first condition means that $X$ can be approximated from within by ARs (by ANRs). As shown by Clapp, \cite[Theorem 2.3]{Cla1971}, see also \cite[Proposition 2.2(a)]{ChaPra2005}, the implication \impliesStep{1}{2} holds in general. It was asked by Charatonik and Prajs, \cite[Question 5.3]{ChaPra2005}, whether the converse also holds (at least for continua). They showed that this is indeed the case for hereditarily unicoherent continua, \cite[Observation 5.4]{ChaPra2005}.
In theorem \ref{prop:S07:TFAE_1D_AANR} below we show that the two conditions are also equivalent for one-dimensional, compact, metric spaces. \\ \end{pargr} \noindent The following is a standard result from continuum theory: \\ \begin{prop} \label{prop:S07:1D_Peano_inner_approx_by_graphs} Let $X$ be a one-dimensional Peano continuum, and $\e>0$. Then there exists a finite subgraph $Y\subset X$ and a surjective map $f\colon X\to Y\subset X$ such that $d(f)<\e$. \\ \end{prop} \begin{cor} Every one-dimensional Peano continuum is an AANR. \end{cor} \begin{proof} Let $X$ be a one-dimensional Peano continuum. By \ref{prop:S07:1D_Peano_inner_approx_by_graphs}, $X$ can be approximated from within by finite subgraphs. A finite graph is an ANR. It follows from \cite[Theorem 2.3]{Cla1971}, see \ref{pargr:S07:conditions_AANR}, that $X$ is an AANR. \qedhere \\ \end{proof} \noindent The following Lemma is a direct translation of \cite[Lemma 5.5]{Lor2009} to the commutative setting. \\ \begin{lma}[{see \cite[Lemma 5.5]{Lor2009}}] \noindent Let $X$ be a compact AAR, and $D$ any ANR. Then every map $f\colon X\to D$ is inessential, i.e., homotopic to a constant map. \\ \end{lma} \begin{cor} \label{prop:S07:1D_AAR_tree-like} Every one-dimensional, compact AAR is tree-like. \end{cor} \begin{proof} Let $X$ be a one-dimensional, compact AAR. Then $X$ is connected and thus a continuum. In \cite[Theorem 1]{CasCha1960} tree-like continua are characterized as one-dimensional continua such that every map into a finite graph is inessential. Thus, we need to show that every map from $X$ into a finite graph is inessential. This follows from the above Lemma since every finite graph is an ANR. \qedhere \\ \end{proof} \begin{thm} \label{prop:S07:TFAE_1D_AANR} Let $X$ be a one-dimensional, compact, metric space.
Then the following are equivalent: \begin{enumerate}[label=(\arabic*)] \item for each $\varepsilon>0$ there exists a map $f\colon X\to Y\subset X$ such that $Y$ is a finite tree (a finite graph), and $d(f)\leq\varepsilon$ \item for each $\varepsilon>0$ there exists a map $f\colon X\to Y\subset X$ such that $Y$ is an AR (an ANR), and $d(f)\leq\varepsilon$ \item $X$ is an AAR (an AANR) \end{enumerate} Moreover, in $(1)$ and $(2)$ the map $f$ may be assumed to be surjective. \end{thm} \begin{proof} \impliesStep{1}{2} is clear, and \impliesStep{2}{3} follows from \cite[Theorem 2.3]{Cla1971}, see \ref{pargr:S07:conditions_AANR}. \impliesStep{3}{1}: It was shown by Clapp, \cite[Theorem 4.5]{Cla1971}, that for each embedding of a compact AANR $X$ in the Hilbert cube $Q$ and $\delta>0$ there exists a compact polyhedron $P\subset Q$ with maps $f\colon X\to P$ and $g\colon P\to X$ such that $d(f)<\delta$ and $d(g)<\delta$. Note that $g$ maps each component of $P$ onto a Peano subcontinuum of $X$. Thus, the image $Y:=g(P)\subset X$ is a finite union of Peano subcontinua. Moreover, the map $g\circ f\colon X\to Y\subset X$ satisfies $d(g\circ f)<2\delta$. Assume $X$ is a one-dimensional, compact AANR and fix some $\e>0$. We apply the result of Clapp for $\delta=\e/4$ and obtain a compact subspace $Y\subset X$ that is the (disjoint) union of finitely many Peano continua, together with a surjective map $f\colon X\to Y$ such that $d(f)<\e/2$. Since $Y\subset X$ is closed, $\dim(Y)\leq\dim(X)\leq 1$. Applying \ref{prop:S07:1D_Peano_inner_approx_by_graphs} to each component of $Y$ and $\e/2$ we obtain a finite subgraph $Z\subset Y$ and a surjective map $g\colon Y\to Z$ such that $d(g)<\e/2$. We may consider $Z$ as a finite subgraph of $X$. The map $h:=g\circ f\colon X\to Z\subset X$ is surjective and satisfies $d(h)<\e$. So we have shown the implication for the case that $X$ is an AANR. Assume additionally that $X$ is an AAR.
We have already shown that $X$ can be approximated from within by finite subgraphs. We need to show that the same is true with finite trees. By \ref{prop:S07:1D_AAR_tree-like}, $X$ is tree-like. By \cite[2.2 and 2.3]{Lel1976}, every tree-like continuum is hereditarily unicoherent. A unicoherent finite graph is a finite tree. It follows that \emph{every} finite subgraph $Z\subset X$ is a finite tree, and so $X$ can be approximated from within by finite subgraphs which automatically are finite trees. \qedhere \\ \end{proof} \begin{cor} \label{prop:S07:1D_AANR_implies_wSP} Let $X$ be a compact, metric space. Then the following implications hold: \begin{enumerate}[label=(\arabic*)] \item If $X$ is an AANR and $\dim(X)\leq 1$, then $C(X)$ is weakly semiprojective in $\mathcal{S}_1$. \item If $X$ is an AAR and $\dim(X)\leq 1$, then $C(X)$ is weakly projective in $\mathcal{S}_1$. \end{enumerate} \end{cor} \begin{proof} Let $X$ be a one-dimensional, compact AAR (AANR). By \ref{prop:S07:TFAE_1D_AANR}, $X$ can be approximated from within by finite trees (finite graphs), i.e., for each $n\geq 1$ there exists a finite tree (graph) $Y_n\subset X$ and a surjective map $f_n\colon X\to Y_n$ with $d(f_n)<1/n$. We want to use \cite[Theorem 4.7]{Lor2009} to show $C(X)$ is weakly (semi-)projective in $\mathcal{S}_1$. The surjective maps $f_n$ induce injective morphisms $f_n^\ast\colon C(Y_n)\to C(X)$. Consider also the inclusion map $\iota_n\colon Y_n\hookrightarrow X$ and the dual morphism $\iota_n^\ast\colon C(X)\to C(Y_n)$. Set $\theta_n:=f_n^\ast\circ\iota_n^\ast\colon C(X)\to C(X)$. Since $d(f_n)$ tends to zero, the morphisms $\theta_n$ converge (pointwise) to the identity morphism. Further, the image of $\theta_n$ is equal to the image of $f_n^\ast$, and therefore isomorphic to $C(Y_n)$. As shown by Loring, \cite[Proposition 16.2.1, p.125]{Lor1997}, $C(Y)$ is semiprojective (in $\mathcal{S}_1$) if $Y$ is a finite graph.
Similarly, $C(Y)$ is projective in $\mathcal{S}_1$ if $Y$ is a finite tree (see also \cite{ChiDra2010}). Now, it follows from \cite[Theorem 4.7]{Lor2009} (and the analogous result for weakly semiprojective \Cs{s}) that $C(X)$ is weakly (semi-)projective in $\mathcal{S}_1$. \qedhere \\ \end{proof} \begin{rmk} We remark that the converse implications of \ref{prop:S07:1D_AANR_implies_wSP} also hold. As explained in \ref{pargr:S02:Connection}, if $C(X)$ is weakly (semi-)projective in $\mathcal{S}_1$, then $X$ is necessarily an approximative absolute (neighborhood) retract. The dimension condition was recently shown by Enders, \cite{EndPrivat}. Thus, $C(X)$ is (weakly) (semi-)projective in $\mathcal{S}_1$ if and only if $X$ is a compact (approximative) absolute (neighborhood) retract with $\dim(X)\leq 1$. \\ \end{rmk} \section*{Acknowledgments} \noindent We thank Dominic Enders for his comments and inspiring suggestions that helped to improve some of the results in section \ref{sect:06:Applications}. We thank S{\o}ren Eilers for his valuable comments on the first draft of this paper.
\section{Introduction} Starting with the early theoretical work by Hubbard\cite{hubbard_hIII64} and Brinkman and Rice,\cite{PhysRevB.2.4302} and experimental work on transition metal oxides,\cite{PhysRevB.7.326,56312321654} the Mott-Hubbard metal-insulator transition (MIT) has been a subject of great interest in condensed matter physics for decades. Much of its essential physics is captured already by the single-band Hubbard model,\cite{gutzwiller:HubbardModel63,kanamori:HubbardModel,hubbard_hIII64} as has been shown in numerous studies using the dynamical mean-field theory (DMFT) framework.\cite{PhysRevLett.62.324,georges:dmft96} Multi-band extensions of the Hubbard model allow for a more realistic description of the MIT and of other strong-correlation phenomena and have, in general, much richer phase diagrams. Multi-orbital Mott transitions have been explored (within DMFT) in a doped two-band model with {\it equivalent} orbitals already more than 10 years ago.\cite{PhysRevB.55.R4855} In this case, the band degeneracy increases the number of Mott lobes from 1 to 3, but has no fundamental impact on the Mott physics near half filling (in contrast, e.g., to ferromagnetism for which the inter-orbital Hund exchange is essential\cite{PhysRev.49.537,RevModPhys.25.220,PhysRevB.56.3159,PhysRevB.57.6896,b8243}). More recently, it has been realized that orbital degeneracy can change the character of Mott transitions in a fundamental way: namely, in the case of multiple {\it inequivalent} orbitals. 
In the electronic context, such inequivalence can arise naturally by orbital-dependent hopping amplitudes, associated, e.g., with in-plane versus out-of-plane $t_{2g}$ orbitals in layered ruthenates.\cite{PhysRevLett.84.2666,PhysRevB.62.6458} It has been suggested by Anisimov et al.\cite{EPJB.25.191} that such systems should undergo a sequence of orbital-selective Mott transitions (OSMTs) with increasing interaction strength, which would then explain the peculiar phase diagram of Ca$_{2-x}$Sr$_x$RuO$_4$.\cite{PhysRevLett.84.2666,PhysRevB.62.6458} Clearly, such a scenario -- with coexisting itinerant and localized valence electrons in the intermediate orbital-selective Mott phase -- is fundamentally different from a conventional simultaneous Mott transition of all valence electrons. Within the last few years, it has been established that OSMTs occur in multi-band Hubbard models with inequivalent orbitals under quite general circumstances.\cite{JPhysCondMat.19.436206,inaba:155106,de'medici:205124,PhysRevB.72.205126,PhysRevLett_99_236404,liebsch:116402,inaba:085112,koga:216402,PhysRevLett.91.226401,osmt_0506151} A (nearly) minimal Hamiltonian for orbital-selective Mott behavior is the two-band Hubbard model with band-specific hopping amplitudes $t_m$ (for orbital index $m\in\{1,2\}$) and Ising type Hund rule couplings (parametrized by $J_z$), \begin{eqnarray} \op H &=& -\sum_{\langle ij\rangle m\sigma} t_{m}^{\phantom{\dagger}} \op c_{im\sigma}^\dagger \op c_{jm\sigma}^{\phantom{\dagger}}+ U\sum_{im}\op n_{im\uparrow}^{\phantom{\dagger}}\op n_{im\downarrow}^{\phantom{\dagger}}+\nonumber \\ \label{eq:model} &\,&\sum_{i\sigma\sigma'}\left(U' -\delta_{\sigma\sigma'}^{\phantom{\dagger}}J_z^{\phantom{\dagger}}\right)\op n_{i1\sigma}^{\phantom{\dagger}}\op n_{i2\sigma'}^{\phantom{\dagger}}\, . 
\end{eqnarray} Here, $\sigma\in\{\uparrow,\downarrow\}$ denotes the spin; $i$ and $j$ label lattice sites; $\op n_{im\sigma}\equiv \op c_{im\sigma}^\dagger \op c_{im\sigma}^{\phantom{\dagger}}$. In contrast to the hopping, the intra-orbital Hubbard interaction $U$ is assumed orbital independent. The intra- and interorbital interactions are related by $U = U' + 2J_z$. In the following, we will refer to Eq.\ (\ref{eq:model}) as the $J_z$-model.\cite{knecht2005,PhysicaBCondMat.359.1366,pvd2007} A general discussion of the Hund exchange would have to include spin-flip and pair hopping terms, \[ \op H_\perp^{\phantom{\dagger}} = \frac{1}{2} J_{\perp}^{\phantom{\dagger}}\sum_{im\sigma} \op c_{im\sigma}^\dagger \left( \op c_{i\bar{m}\bar{\sigma}}^\dagger \op c_{im\bar{\sigma}}^{\phantom{\dagger}} +\op c_{im\bar{\sigma}}^\dagger \op c_{i\bar{m}\bar{\sigma}}^{\phantom{\dagger}} \right) \op c_{i\bar{m}\sigma}^{\phantom{\dagger}}\, . \] However, such terms are not essential for OSMTs (Ref.\ \onlinecite{knecht2005}) and change their character only in the $SU(2)$ symmetric limit $J_z=J_{\perp}\equiv J$ (i.e., in the Heisenberg limit). In particular, it was shown for the half-filled case\cite{PhysRevLett.95.206401} that the itinerant electron species in the orbital-selective Mott phase is a non-Fermi-liquid for all $J_\perp<J_z$ and a Fermi liquid only at $J_\perp = J_z$. In this sense, $J_z$ model (\ref{eq:model}) represents the generic class of anisotropic Hund couplings. The nature of OSMTs and their essential requirements are well understood for particle-hole symmetric situations. This symmetry is destroyed upon doping (i.e., by variation of the chemical potential $\mu$) and/or by adding an orbital dependent field, i.e., by crystal field splitting. Koga et al.\ studied effects of weak hole doping on OSMTs in a two-band Hubbard model.\cite{koga:216402} Crystal field effects have been investigated in two-orbital\cite{werner:126405} and three-orbital\cite{0611075} systems. 
In a ferromagnetic slave-boson mean-field theory calculation, R\"uegg et al.\ \cite{23987239857} obtained a phase diagram for the two-band Hubbard model with crystal field splitting, which includes ferromagnetic as well as band insulating regimes. In this paper, we extend high-precision\cite{PhysRevB_76_205120} quantum Monte Carlo (QMC) calculations\cite{knecht2005,pvd2007,353542354} within DMFT for two-band model (\ref{eq:model}) to general doping levels. Of central interest is the phase diagram, which is derived from orbital-dependent fillings, double occupancies, and quasiparticle weights. At the same time, these observables as well as spectra are used for characterizing the orbital-selective physics, i.e., the effects of orbital inequivalence. Other results of interest in this paper include unexpected behavior in the intermediate interaction regime, where the double occupancy for the wide band becomes nearly independent of the interaction. We also show that the entire doping range $0\leq n\leq 4$ is subdivided into three different regimes by the transport behavior of the narrow band, and that the two-band spectra differ from analogous single-band results in that the additional scattering channels introduced by the interorbital couplings tend to smear out the spectrum. Finally, we will discuss the relation between orbital-selective Mott physics and non-Fermi-liquid behavior, i.e., whether these phenomena always occur simultaneously (in the $J_z$ model). Here, as in previous work on this subject, we concentrate on the effect of correlations on the thermodynamic and spectral properties of the system in the paramagnetic phase, i.e., excluding magnetism. The structure of this paper is as follows: In Section \ref{subsec:methods} we briefly discuss the methods and model parameter values relevant to our work. Section \ref{subsec:observables} gives information on the observables studied in our paper. 
Static and dynamic observables are treated separately (in Secs.\ \ref{sec:static} and \ref{sec:dynamic}, respectively). The static observables discussed in Sec.\ \ref{sec:static} include band-resolved particle numbers (Sec.\ \ref{subsec:filling}), intraorbital double-occupancies (Sec.\ \ref{subsec:double}), and the phase diagram as a function of interaction and total filling (Sec.\ \ref{subsec:phase}). As dynamic observables, we study band-resolved spectral functions (Sec.\ \ref{subsec:spectra}), the spectral weight at the Fermi edge (Sec.\ \ref{subsec:N0}), and the Matsubara self-energy as well as quasiparticle weights (Sec.\ \ref{subsec:QP}). Finally, in Sec.\ \ref{summary}, we summarize the results and formulate our conclusion. \subsection{\label{subsec:methods}Methods} In the following, we consider $J_z$-model (\ref{eq:model}) for two bands with a bandwidth ratio of 2. For the narrow band, we assume a semi-elliptic ``Bethe'' non-interacting DOS with a fixed hopping amplitude $t_\mathrm{n}=0.5$ and, hence, a full bandwidth $W_\mathrm{n} = 2$. Similarly, we assume a semi-elliptic DOS with $t_\mathrm{w}=1$ and $W_\mathrm{w} = 4$ for the wide band. The intraband on-site interaction $U$ is chosen as the primary (variable) interaction parameter; it acts equally in both orbitals. The interorbital interactions are scaled as $J_z = U/4$ and $U' = U/2$, in line with most earlier studies.\cite{23987239857,knecht2005,inaba:155106,PhysRevLett_99_236404} This work is restricted to the paramagnetic case; spin indices are omitted when appropriate. 
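As a quick consistency check on these parameter choices, the semi-elliptic ``Bethe'' DOS and its bandwidth can be sketched in a few lines of Python. This snippet is purely illustrative and not part of the QMC code of this work; the function and variable names are our own:

```python
import numpy as np

def bethe_dos(eps, t):
    """Semi-elliptic (Bethe-lattice) DOS with hopping amplitude t.

    Nonzero only on [-2t, 2t], i.e. the full bandwidth is W = 4t.
    """
    eps = np.asarray(eps, dtype=float)
    return np.sqrt(np.clip(4 * t**2 - eps**2, 0.0, None)) / (2 * np.pi * t**2)

def trapezoid(y, x):
    """Simple trapezoidal quadrature on a one-dimensional grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# t_n = 0.5 (narrow band, W_n = 2) and t_w = 1 (wide band, W_w = 4)
for label, t in [("narrow", 0.5), ("wide", 1.0)]:
    eps = np.linspace(-2 * t, 2 * t, 200001)
    rho = bethe_dos(eps, t)
    norm = trapezoid(rho, eps)          # normalization, should be 1
    var = trapezoid(eps**2 * rho, eps)  # second moment, should be t^2
    print(f"{label}: W = {4 * t}, norm = {norm:.5f}, <eps^2> = {var:.5f}")
```

The printed bandwidths reproduce the ratio $W_\mathrm{w}/W_\mathrm{n}=2$ used throughout this work, and the second moment $t^2$ follows the standard Bethe-lattice scaling of the semi-elliptic DOS.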
We choose a fixed temperature $T=1/40$, which is of the order of the critical temperature in the undoped case.\cite{pvd2007} Within DMFT, two-band model (\ref{eq:model}) is mapped to a two-orbital single-impurity Anderson model which has to be solved self-consistently.\cite{georges:dmft96} In this work, the impurity model is solved using the Hirsch-Fye quantum Monte Carlo (HF-QMC) algorithm.\cite{PhysRevLett.56.2521,PhysRevLett.69.168} This method discretizes the imaginary-time path integral expression for the Green function into $\Lambda$ time slices of uniform width $\Delta\tau=\beta/\Lambda$, where $\beta=1/T$ (for $k_{\text{B}}\equiv 1$); a Hubbard-Stratonovich (HS) transformation replaces the electron-electron interaction at each time step by a binary auxiliary field which is sampled using standard Markov Monte Carlo techniques. The results are exact in the combined limit of vanishing discretization $\Delta\tau\to 0$ and a large number of Monte Carlo update sweeps. In this work, the systematic errors associated with finite discretization ($\Delta\tau=0.4$, unless noted otherwise) are minimized by supplementing the discrete QMC Green function with a high-frequency expansion of the self-energy.\cite{knechtMasterthesis,nilsPhd} The high precision of this method has proved essential in detecting both OSMTs in the undoped case;\cite{knecht2005} if required, extrapolations $\Delta\tau\to 0$ can be used in order to achieve extreme precision\cite{PhysRevB.71.195102} and efficiency.\cite{PhysRevB_76_205120} In order to compute spectral functions, the imaginary-time Green functions resulting from QMC are analytically continued to the real axis using a standard maximum-entropy method (MEM).\cite{234523452345345} We have verified that the remaining discretization bias (at $\Delta\tau=0.4$) is generally small (much smaller than in preceding QMC implementations\cite{PhysRevB.51.10411}), up to slight shifts in the critical interactions, by performing additional simulations 
with $0.25\le \Delta\tau \le 0.5$. In particular, we did not find significant effects on the single-particle spectra: their general shapes as well as peak positions and weights were unchanged. \subsection{\label{subsec:observables}Observables} Using the QMC-DMFT methods described above, we investigate several observables in this paper: band-specific particle numbers and intraorbital double occupancies, the density of states, in particular its value at the Fermi level, and the quasiparticle weight. We now briefly introduce these quantities, which are of interest by themselves and are used in order to construct the phase diagram. We start with a general property of the $J_z$-Hamiltonian (\ref{eq:model}): the model is formulated in a particle-hole symmetric way (in particular, no crystal field splitting is included), so that results for a total filling of $n > 2$ can be easily mapped to results for $n < 2$. As a result of this well-known symmetry, particle and hole doping are equivalent and, accordingly, the word ``doping'' in the following refers to both. Moreover, the phase diagrams calculated below will be mirror symmetric with respect to the half-filled case, where $n=2$. The study of orbital-specific particle numbers and intraorbital double occupancies is particularly interesting for parameter values for which the orbitals behave physically differently, i.e., in the orbital-selective Mott phase (OSMP). In this case, the narrow band is in the insulating state and displays a gap, while the wide band is in a non-Fermi-liquid metallic state. The insulating nature of the narrow band physically suggests that both the densities and the double occupancies of this band may initially remain pinned upon doping; one of the goals of our investigation of these observables is to determine whether and to what extent (depending upon the interaction strength $U$) this is true.
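The particle-hole mapping can be made explicit: a band filling $n_m$ at total filling $n$ maps to $2-n_m$ at total filling $4-n$, and an intraorbital double occupancy $d_m=\langle n_{m\uparrow}n_{m\downarrow}\rangle$ maps to $1-n_m+d_m$, since $n_{m\sigma}\to 1-n_{m\sigma}$ under the transformation. A small bookkeeping sketch, with purely illustrative input values:

```python
def ph_map(n_total, band_fillings, double_occs):
    """Map observables at total filling n to the particle-hole
    conjugate point 4 - n (two orbitals, two spin species)."""
    n_conj = 4.0 - n_total
    fill_conj = {m: 2.0 - nm for m, nm in band_fillings.items()}
    # <n_up n_dn> -> <(1 - n_up)(1 - n_dn)> = 1 - n_m + d_m
    docc_conj = {m: 1.0 - band_fillings[m] + double_occs[m]
                 for m in double_occs}
    return n_conj, fill_conj, docc_conj

# hypothetical example values, for illustration only
n, fill, docc = 2.2, {"w": 1.15, "n": 1.05}, {"w": 0.30, "n": 0.20}
print(ph_map(n, fill, docc))
```

Applying the map twice returns the original values, as required for an involution.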
Clearly, in addition to the transport properties (e.g., the electrical conductivity), which are not explicitly investigated in this paper, one of the most fundamental observables for detecting metal-insulator transitions (MITs) is the density of states (DOS), i.e., the local spectral function. It is particularly interesting to contrast the results for the DOS in the two-band calculations with single-band results for comparable parameter sets, whenever appropriate. Another important indicator for the onset of insulating behavior is the quasiparticle weight $Z$, which is mathematically determined by the self-energy $\Sigma(\omega)$. Upon approaching the insulating phase of a {\it single-band\/} model from the metallic side, the quasiparticle weight vanishes (at least in the ground state) at the metal-insulator transition. On account of the relation $Z = m/m^*$, the vanishing of the quasiparticle weight is equivalent to the divergence of the effective mass $m^*$. It is important to note that the physical interpretation of $Z$ and $m^*$ as the ``quasiparticle weight'' and the ``effective mass'', respectively, is meaningful only in a Fermi-liquid regime. Within a Fermi-liquid phase, the quasiparticle weight is well approximated by its imaginary-time (secant) analog: \begin{eqnarray} Z = \frac{m}{m^*} &\equiv & \left( 1-\frac{d \mathrm{Re}\Sigma}{d\omega}\bigg|_{\omega = 0} \right)^{-1} \nonumber \\ \label{eq:Z}&\simeq& \left[ 1-\frac{\mathrm{Im}\Sigma(i\omega_1)}{\pi T} \right]^{-1}\, . \end{eqnarray} Here $\omega_1 = \pi T$ is the first Matsubara frequency. Both expressions for the quasiparticle weight clearly recover the non-interacting limit $Z\rightarrow 1$ at arbitrary temperatures for $\Sigma\rightarrow 0$. In addition, the imaginary-time secant approximation is exact in a Fermi-liquid phase in the limit $T\rightarrow 0$.
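To make the secant estimate concrete, the following sketch evaluates the standard form $Z \simeq [\,1-\mathrm{Im}\,\Sigma(i\omega_1)/(\pi T)\,]^{-1}$ for a model Fermi-liquid self-energy with strictly linear Matsubara dependence (the function names and test values are illustrative, not taken from the production code):

```python
import math

def z_secant(im_sigma_w1, T):
    """Quasiparticle-weight estimate from Im Sigma at the first
    Matsubara frequency omega_1 = pi*T (secant approximation)."""
    w1 = math.pi * T
    return 1.0 / (1.0 - im_sigma_w1 / w1)

# Model Fermi liquid: Sigma(i w_n) ~ (1 - 1/Z) * i w_n, so that
# Im Sigma(i w_1) = (1 - 1/Z) * w_1  (negative for Z < 1).
T, Z_true = 1.0 / 40.0, 0.4
im_sigma = (1.0 - 1.0 / Z_true) * math.pi * T
print(z_secant(im_sigma, T))  # recovers Z_true for a strictly linear Sigma
```

For $\Sigma\to 0$, the estimate returns $Z=1$ at any temperature, in line with the non-interacting limit discussed above.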
However, both expressions are demonstrably distinct, e.g., in the insulating phase at $T=0$, since the real-frequency definition of $Z$ vanishes, whereas estimate (\ref{eq:Z}) remains finite. This illustrates that estimate (\ref{eq:Z}) will in general be inaccurate or even invalid in non-Fermi-liquid regimes, such as in particular the OSMP, where the narrow band is insulating, while the wide band is a non-Fermi-liquid metal. In any case, the discrete imaginary-time expression in Eq.\ (\ref{eq:Z}) is an established measure of the low-frequency behavior of the self-energy. \section{\label{sec:static}Static observables as a function of doping} The total electron concentration in a two-band model is composed of contributions from the two orbitals: $n = n_\mathrm{w}+n_\mathrm{n}$, where $n_\mathrm{w}= \sum_\sigma\langle \op n_{i\mathrm{w}\sigma}\rangle$ and $n_\mathrm{n}= \sum_\sigma\langle \op n_{i\mathrm{n}\sigma}\rangle$ represent the particle filling of the wide and the narrow band, respectively. In the two-band system (\ref{eq:model}) at total half filling, each band is exactly half-filled due to particle-hole symmetry: $n_\mathrm{w} = n_\mathrm{n} = 1$. Let us briefly review the central results obtained earlier for this special case:\cite{knecht2005} with increasing interaction, and starting from the non-interacting limit (and for low $T=1/40$), both bands undergo consecutive Mott transitions. For weak interaction ($U<U_\mathrm{c1} \approx 2.1$), both bands remain itinerant. For strong interaction ($U>U_\mathrm{c2}\approx 2.6$), both bands become insulating, since the interaction then exceeds the Mott threshold even for the wide band. In the intermediate region, the wide band is metallic and the narrow band is insulating.
Slight variations in temperature do not change the results significantly.\cite{pvd2007} This picture agrees with arguments in Ref.\ \onlinecite{EPJB.25.191}, relating the interacting two-band model to two distinct bands, which separately undergo the Mott transition according to their individual bandwidths. In the following, we first discuss our results for the band-specific particle numbers and double occupancies, which are then compared to predictions from a simple rigid-band model and from perturbation theory. In comparing the QMC results to simple model calculations, the doping dependence turns out to be of great interest. We also present a phase diagram for the doped two-band model, with particular emphasis on transport characteristics. Accordingly, we distinguish insulating, orbital-selective Mott, and metallic phases. \vspace*{-3mm}\subsection{\label{subsec:filling}Band-specific particle numbers} We start the presentation of our results with the band-specific fillings, which are plotted in Fig.\ \ref{fig:filling} for different interaction strengths $1.8\le U\le 2.8$. \begin{figure} \includegraphics[angle=270,scale=0.5]{figures/filling.ps}\\ \caption{(Color online) Band-specific fillings for the two-band model versus total filling for a range of interactions $U$. Open and solid symbols represent the narrow and the wide band, respectively. }\label{fig:filling} \end{figure} For small deviations from half filling and $U>2$, only the wide band accounts for the doping, while the narrow band remains (to a very good approximation) half-filled. As a consequence, the relation $n_\mathrm{w} \simeq n - 1$ holds approximately in this low-doping regime. As one expects, the pinning of the particle filling of the narrow band at its half-filled value in the low-doping regime becomes more extended with increasing $U$.
These plateaus are bounded by well-localized kinks, beyond which the curves $n_\mathrm{n}(n)$ show a rapid, nearly linear, increase: additional electrons are predominantly allocated to the narrow band. Hence, the slope of $n_\mathrm{w}(n)$ drops significantly. The corresponding phase boundary will be discussed below, in Sec.\ \ref{subsec:phase}. Note that the curves in Fig.\ \ref{fig:filling} for the narrow and the wide bands {\it cross\/} in the OSMP regime near $n\simeq 2.4$, with details depending upon $U$. \subsection{\label{subsec:double}Double occupancies} In Fig.\ \ref{fig:double}a, we show QMC results for the intraorbital double occupancies $\langle \op n_{im\uparrow} \op n_{im\downarrow}\rangle$ ($m\in\{\mathrm{w},\mathrm{n}\}$) of the wide band (upper curves) and the narrow band (lower curves). In the weakly doped regime ($n<2.1$) of the OSMP (with $U\gtrsim 2.2$), the intraorbital double occupancy for the narrow band -- which, as was demonstrated above, is effectively singly occupied -- is strongly suppressed. In this regime ($n\lesssim 2.1$), the double occupancy of the wide band rapidly decreases with increasing interaction due to Coulomb repulsion. In the more strongly doped regime ($n\gtrsim 2.1$), the situation reverses: the double occupancy of the narrow band is suppressed with increasing interaction, whereas the intraorbital double occupancy of the wide band becomes nearly $U$ independent. As we will see below, both bands are metallic in this regime ($n\gtrsim 2.1$) and show a significant DOS in the vicinity of the Fermi edge, the DOS of the narrow band obviously being larger than that of the wide band. As a consequence, both the average filling and the concentration of doubly occupied orbitals increase more rapidly in the narrow band than in the wide band for $n\gtrsim 2.1$.
Hence, as for the particle numbers, a crossing of the curves for the wide and narrow bands occurs at $n\simeq 2.5$, so that the concentration of intraorbital double occupancies is {\it larger\/} for the narrow than for the wide band for $n\gtrsim 2.5$. Moreover, these curves show a very weak $U$-dependence in this regime, which suggests that very similar behavior is to be expected from weak-coupling perturbation theory. We will come back to this issue below. The crossing and the virtual $U$ independence can be clearly seen in Fig.\ \ref{fig:double}b, which shows the QMC results for the double occupancies in the entire regime $2\leq n\leq 4$; for comparison, the corresponding weak-coupling (Hartree-Fock) and noninteracting results are also shown. \begin{figure} \includegraphics[angle=270,scale=0.5]{figures/intra_double_occ.ps}\\ \includegraphics[angle=270,scale=0.5]{figures/intra_double_occ_vs_hf_more.ps} \caption{(Color online) Intraorbital double occupancy for various $U$-values as a function of doping for the two-band model. Panel ($a$) (dopings near half filling) and panel ($b$) (entire doping range $2\leq n\leq 4$) show that the intraorbital double occupancy for the wide band becomes nearly independent of the interaction for intermediate $U$ and all fillings $n\gtrsim 2.2$. In ($b$), results for the noninteracting case ($U=0$, black line) and within the Hartree-Fock approximation (for $U=2.4$, long-dashed line) are included for comparison. }\label{fig:double} \end{figure} The primary mechanism governing the band-specific fillings and the intraorbital double occupancies in the {\it low-doping\/} regime can be understood on the basis of a rigid-band model with two bands and intraorbital interaction only.
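Such a rigid-band picture can be sketched numerically: a metallic semi-elliptic wide band plus a narrow band split into two sub-bands separated by a single-particle gap, filled from a common chemical potential. All parameters below (in particular the gap value) are illustrative and not fitted to the QMC data:

```python
import numpy as np

def bethe_dos(eps, t):
    """Semi-elliptic DOS with hopping t (bandwidth 4t); redefined here
    so that the sketch is self-contained."""
    rho = np.zeros_like(eps)
    band = np.abs(eps) < 2.0 * t
    rho[band] = np.sqrt(4.0 * t**2 - eps[band]**2) / (2.0 * np.pi * t**2)
    return rho

eps = np.linspace(-6.0, 6.0, 120001)
h = eps[1] - eps[0]

t_w, t_n, gap = 1.0, 0.5, 1.0          # gap: assumed narrow-band gap
shift = 2.0 * t_n + gap / 2.0          # centers of the two narrow sub-bands
rho_w = bethe_dos(eps, t_w)            # metallic wide band
rho_n = 0.5 * (bethe_dos(eps + shift, t_n) + bethe_dos(eps - shift, t_n))

def filling(rho, mu):
    """Band filling (both spins) for chemical potential mu."""
    return 2.0 * rho[eps <= mu].sum() * h

for mu in (0.0, 0.2, 0.4):
    n_w, n_n = filling(rho_w, mu), filling(rho_n, mu)
    print(f"mu = {mu:.1f}: n_w = {n_w:.3f}, n_n = {n_n:.3f}, n = {n_w + n_n:.3f}")
```

While the chemical potential stays inside the narrow-band gap, $n_\mathrm{n}$ remains pinned at 1 and all additional charge goes into the wide band, i.e., $n_\mathrm{w}\simeq n-1$, in line with the low-doping behavior discussed above.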
A comparison between the band-specific fillings within this rigid-band model [sketched in Fig.\ \ref{fig:rigid}] \begin{figure}[t] \includegraphics[angle=270,scale=0.5]{figures/rigid_band.ps} \caption{(Color online) ($a$) DOS of a rigid-band model with a semi-elliptic DOS and bandwidths $W_\mathrm{n} = 2$ and $W_\mathrm{w} = 4$, representative of interactions $U \simeq 2.4$. ($b$) Band-specific fillings within this model, compared to QMC results for $U = 2.4$ and $T=1/40$. }\label{fig:rigid} \end{figure} with the QMC results reveals good agreement for $n\lesssim 2.1$, so that the pinning of the narrow-band filling can be explained as resulting from the single-particle gap. If one further assumes that the narrow-band double occupancy is a function of the narrow-band filling and of the interactions, it also has to remain pinned in the same regime -- in full agreement with the QMC results. The validity of the rigid-band model rapidly breaks down as the total filling is increased beyond $n\simeq 2.1$, where the actual particle numbers of the narrow (wide) band increase more rapidly (slowly) than in the rigid-band model. The deviations between the simple rigid-band assumption and the true results in the metallic phase ($n>2.1$) are due to correlations, in particular to the formation of Kondo resonances in both bands. We now discuss and interpret the remarkably weak $U$-dependence of the intraorbital double occupancies of the narrow and the wide band for $n\gtrsim 2.5$. Before comparing to perturbative results, we illustrate the weak dependence of the double occupancies on $U$ in Fig.\ \ref{fig:doubleZ}, which covers the interaction regime $0\leq U\lesssim 7$ for a total density $n=3$. Figure \ref{fig:doubleZ} indeed shows a near-constancy of the double occupancies for such interaction values. On the basis of this virtual $U$-independence, in particular for $2\lesssim U\lesssim 4$, one expects good agreement between QMC results and perturbation theory in this regime.
As can be seen from Fig.\ \ref{fig:double}b, where corresponding Hartree-Fock results are also plotted, this is indeed what one finds. Figure \ref{fig:double}b shows that the wide band agrees with perturbation theory better than the narrow band, at least for $n\lesssim 3$, as one expects, since the effective interaction strength (compared to the bandwidth) is larger for the narrow than for the wide band. This is also confirmed by Fig.\ \ref{fig:corr}, which shows the quantum fluctuations around the Hartree-Fock solution; for $n\lesssim 2.7$, these are indeed largest for the narrow band. The near-perfect agreement between the narrow-band QMC results and perturbation theory for $n\gtrsim 3.2$ follows immediately from the near absence of singly occupied or empty orbitals in the narrow band in that regime. For completeness we should add that the weak $U$-dependence of the concentrations of intraorbital double occupancies clearly cannot persist for arbitrarily large $U$ values. From strong-coupling perturbation theory one expects that, for sufficiently large $U$, the double occupancies of {\it both\/} bands converge towards a common strong-coupling value $0.5$. \begin{figure} \includegraphics[angle=270,scale=0.5]{figures/double_versus_orbital_filling_B40_n3-00.ps} \caption{(Color online) Intraorbital double occupancy versus $U$ for density $n = 3$ (squares). In the interaction range $2.5 \lesssim U\lesssim 5$, the intraorbital double occupancy is only very weakly dependent on $U$. Density-dependent upper and lower bounds on the double occupancies are obtained in the limits of weak (dashed/solid lines) and strong (dotted lines) correlation, respectively. Inset: quasiparticle weight $Z$ (cf.\ Sec.\ \ref{subsec:QP}). }\label{fig:doubleZ} \end{figure} In between such a strong-coupling phase and the metallic weak-coupling phase, one would expect a metal-insulator transition.
As mentioned in the introduction, such additional Mott lobes (at quarter and three quarter filling) had been found in an early HF-QMC study of a two-band Hubbard model, even with critical interactions similar to those at half filling.\cite{PhysRevB.55.R4855} However, this study considered the case of SU(4) symmetric interactions ($U=U'$ and $J=0$) and equal bandwidth. The first difference is particularly relevant, since the Heisenberg exchange strongly suppresses Mott phases at (three) quarter filling in terms of the critical interactions: the phase boundaries are shifted to much larger $U_c$. This is easily seen by comparing the cost of a charge fluctuation in the atomic limit ($\frac{1}{4} U$ at $n=1$ or $n=3$, compared to $\frac{7}{4} U$ at $n=2$; both for $J=U/4, U'=U/2$) and has also been found numerically for SU(2) symmetric interactions.\cite{0706-1.3948} In the present case, we expect the system to remain metallic (at $n=3$) up to $U_c\gtrsim 20$, i.e., beyond the interaction range easily accessible using QMC calculations. In addition, the critical temperatures may be considerably lower; however, sharp crossover lines should persist at least up to the temperatures studied in this paper.\cite{Koga05PRB,Gorelik09} From the data of Fig.\ \ref{fig:doubleZ}, we can, indeed, exclude a phase transition at $n=3$ for $U\lesssim 7$: both the intraorbital double occupancies (squares, main panel) and the quasiparticle weights $Z$ (inset) are completely smooth as a function of the interaction $U$, i.e., do not show any kinks or hysteresis effects that would be characteristic of an MIT. Note that both the double occupancy and $Z$ are larger for the narrow band due to its larger filling fraction. 
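The atomic-limit energies invoked above can be checked by enumerating the 16 Fock states of a single two-orbital site, taking the density-density interaction in a common form for the $J_z$ model, $U\sum_m n_{m\uparrow}n_{m\downarrow}+\sum_{\sigma\sigma'}(U'-\delta_{\sigma\sigma'}J_z)\,n_{1\sigma}n_{2\sigma'}$ (a sketch under this assumed convention, not the paper's code). With $J_z=U/4$ and $U'=U/2$, the minimal interaction energies are $E(2)=U/4$ and $E(3)=7U/4$, reproducing the charge-fluctuation costs quoted above:

```python
from itertools import product

def atomic_energies(U, Uprime, Jz):
    """Minimal interaction energy per total occupation n for one
    two-orbital site with density-density (J_z-type) interactions."""
    emin = {}
    for occ in product((0, 1), repeat=4):
        # occupations n[(orbital, spin)], orbital in {0,1}, spin in {0,1}
        n = {(m, s): occ[2 * m + s] for m in (0, 1) for s in (0, 1)}
        ntot = sum(occ)
        e = U * (n[0, 0] * n[0, 1] + n[1, 0] * n[1, 1])
        for s1 in (0, 1):
            for s2 in (0, 1):
                e += (Uprime - (Jz if s1 == s2 else 0.0)) * n[0, s1] * n[1, s2]
        emin[ntot] = min(e, emin.get(ntot, float("inf")))
    return emin

U = 1.0
E = atomic_energies(U, U / 2, U / 4)
print(E)  # {0: 0.0, 1: 0.0, 2: 0.25, 3: 1.75, 4: 3.5}
```

The $n=2$ ground state is the Hund-aligned interorbital configuration with energy $U'-J_z=U/4$; the cheapest $n=3$ state costs $U+U'+(U'-J_z)=7U/4$, several times larger, consistent with the much smaller critical interactions at half filling.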
\begin{figure}[t] \includegraphics[angle=270,scale=0.5]{figures/quantum_fluctuations.ps} \caption{(Color online) The $\uparrow\downarrow$-density-density correlation function for total fillings $n\gtrsim 2.2$, corresponding to quantum fluctuations in the intraorbital double occupancy around the uncorrelated limit. The correlations are generally {\it negative\/}, since an enhanced $\uparrow$-density on a site implies a decreased $\downarrow$-density. At fixed interaction $U$, the amplitude of the correlation function {\it decreases\/} monotonically with filling, since the available phase space for density fluctuations decreases with increasing $n>2$. }\label{fig:corr} \end{figure} \subsection{\label{subsec:phase}Phase diagram} Next we discuss the phase diagram as a function of the interaction strength $U$ and the total filling $n$, with particular emphasis on transport-related properties. To do so, we distinguish an {\it insulating\/} state of the two-band system from an {\it orbital-selective Mott\/} phase, in which one band is metallic and the other insulating, and from a purely {\it metallic\/} phase, in which neither of the bands is insulating. As is well known, all three phases occur already at half filling, where the system is in the metallic, orbital-selective Mott, or insulating state for $U\leq U_{\rm{c1}}\simeq 2.1$, $U_{\rm{c1}}\leq U\leq U_{\rm{c2}}\simeq 2.6$, or $U_{\rm{c2}}\leq U$, respectively. For the doped system ($n\neq 2$), our investigation of the particle numbers and double occupancies of the {\it narrow\/} band at low doping, which appeared to be pinned at their half-filled values for moderate-to-large $U$, already makes clear that this band is {\it insulating\/} in this regime. Since the particle filling of the wide band changes substantially with total density (according to $n_\mathrm{w}\simeq n-1$) for these $(U,n)$-values, one expects the wide band to be metallic in this part of the phase diagram.
Accordingly, one can take the pinning of the narrow-band particle numbers as a criterion for the occurrence of an OSMP. Alternatively, at larger doping, where the fillings of both bands change upon changing the total density, it is clear that both bands have finite compressibility and, hence, are {\it metallic\/}. Finally, at non-integer total filling in the paramagnetic phase, one does not expect the occurrence of a rigorously insulating state for both bands simultaneously, since at least one of the bands must have finite compressibility in that region. Thus, to summarize, we have found a simple criterion for the determination of the paramagnetic phase diagram as a function of interaction and doping: if the narrow-band filling is pinned away from half filling, the system is in an OSMP; if not, it is metallic. It is clear that this criterion for the determination of the phase diagram, which is essentially based on the behavior of the compressibility as a function of $U$ and $n$, should be cross-checked with alternative criteria based on dynamical quantities, such as the finiteness of the DOS at the Fermi level for the wide and the narrow bands. Results for such dynamical quantities, which are presented below, turn out to be fully consistent with the present ``static'' criteria and lead to the same phase diagram. \begin{figure} \includegraphics[angle=270,scale=0.5]{figures/phase_diagram_NB.ps} \caption{(Color online) Orbital-selective Mott phase diagram of the two-band model for particle densities $1.85\leq n\leq 2.15$. For interaction $U < U_{\mathrm{c1}}$, both bands are metallic at all densities. The insulating phase at half filling (thick black line) only exists for $U>U_{\mathrm{c2}}$.
}\label{fig:phase} \end{figure} The phase diagram, which is based on the above-mentioned criteria concerning the band-specific filling factors $n_\mathrm{n}(n)$ and $n_\mathrm{w}(n)$ as a function of the total density, is presented in Fig.\ \ref{fig:phase} for moderately strong interactions $U\le 3$, near the OSMP previously established at half filling. One finds that the purely insulating state, which occurs only at half filling as explained above, is embedded in an OSMP, which reaches from $U\simeq 2.1$ to $U=\infty$ and is itself embedded in a region of purely metallic states. As can be seen from Fig.\ \ref{fig:phase}, these metallic states occur, for a given fixed interaction strength, only for sufficiently large doping. The QMC phase diagram presented here is in qualitative agreement with the results of R\"uegg et al.,\cite{23987239857} which were obtained using variational methods. As a remark we add that, since the wide band is in a non-Fermi-liquid state in the OSMP at {\it half filling\/},\cite{353542354} by continuity one also expects a non-Fermi-liquid state in the OSMP at low doping. Signatures of such a non-Fermi-liquid state away from half filling will indeed be found and discussed below. Our phase diagram Fig.\ \ref{fig:phase} is in reasonable overall agreement with the ground-state phase diagram obtained by Inaba and Koga\cite{0706-1.3948} in the SU(2) symmetric limit (also for $J=U/4$): the critical interactions at half filling are only slightly larger in the latter case (by about $10\%$); however, at strong interactions $U\gtrsim 3$, the OSM phases seem to extend over larger filling ranges for SU(2) symmetric Hund rule couplings. Unfortunately, the coarse filling grid employed in Ref.\ \onlinecite{0706-1.3948} does not allow for a detailed comparison at the scale of Fig.\ \ref{fig:phase}.
Still, we may conclude that the phase boundaries do not depend sensitively on the specifics of the Hund rule couplings (with or without spin-flip and pair-hopping terms). \begin{figure} \includegraphics[angle=270,scale=0.5]{figures/phase_diagram_2.ps} \caption{(Color online) Phase diagram of the two-band model for densities $2\leq n\leq 2.6$. Within the range of parameter values for which both bands are metallic, the solid green (or patterned red) area indicates that the wide (or narrow) band contributes more strongly to the doping, implying $n_\mathrm{w}>n_\mathrm{n}$ (or $n_\mathrm{n}>n_\mathrm{w}$). At the boundary (dashed line), both bands are equally filled. }\label{fig:phase2} \end{figure} The purely metallic phase in Fig.\ \ref{fig:phase} can be subdivided into two regimes, depending on which of the two orbitals is more strongly doped (cf.\ Fig.\ \ref{fig:filling}), as shown in Fig.\ \ref{fig:phase2}. Evidently, the OSMP is entirely embedded in a region in which the wide band hosts the larger fraction of the doping, which is itself embedded in a parameter region in which the narrow band contains the larger doping fraction. The boundary between both sections of the metallic phase is marked by a dashed line, which corresponds to the crossing points of the density curves for the narrow and the wide band at fixed interaction in Fig.\ \ref{fig:filling}. Interestingly, it is quite easy to predict the large-$U$ asymptotics of the (dashed) crossover line: it should converge to $n=3$ for $U\to \infty$. The reason is that equal filling of both orbitals is asymptotically expected in all Mott phases, in particular also at three-quarter filling (with $U_c\gtrsim 20$, cf.\ Sec.\ \ref{subsec:double}). Thus, the crossover line should either continue, beyond $U_c$, as the Mott phase transition line or converge towards that line for $U>U_c$.
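In practice, the static OSMP criterion used in this section amounts to a simple plateau test on the narrow-band filling curve $n_\mathrm{n}(n)$; a sketch with an illustrative (made-up) filling curve and an arbitrary pinning tolerance:

```python
def osmp_boundary(n_total, n_narrow, tol=0.01):
    """Largest total filling up to which the narrow-band filling stays
    pinned at 1 (its half-filled value); beyond it, both bands are doped
    and the system is purely metallic.  tol is an ad hoc threshold."""
    nc = n_total[0]
    for n, nn in zip(n_total, n_narrow):
        if abs(nn - 1.0) > tol:
            break
        nc = n
    return nc

# hypothetical filling curve, for illustration only
ns  = [2.00, 2.05, 2.10, 2.15, 2.20]
nns = [1.000, 1.001, 1.002, 1.030, 1.080]
print(osmp_boundary(ns, nns))  # prints 2.1
```

With QMC data, the tolerance would have to exceed the statistical error bars of the measured fillings; the kink positions found this way correspond to the OSMP boundary of Fig.\ \ref{fig:phase}.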
\section{\label{sec:dynamic}Dynamic observables as a function of doping} In this section, we discuss dynamic observables: first the spectral function (at general frequencies), then, in particular, the spectral weight at the Fermi level and the functional dependence of the $Z$-factor, which corresponds to the quasiparticle weight in a Fermi-liquid phase but loses this interpretation in the OSMP or an insulating state. The spectral function and the spectral weight at the Fermi level are calculated using the maximum entropy method (MEM). Finally, the extent of non-Fermi-liquid behavior will be analyzed on the basis of imaginary-time self-energies. \subsection{\label{subsec:spectra}Spectral function} For both the wide and the narrow band, a comparison to typical single-band behavior is helpful for understanding the doping dependence of the spectral function. \begin{figure} \includegraphics[angle=270,scale=0.5]{figures/wide_band_dos.ps} \caption{(Color online) DOS calculated for various particle densities near half filling as a function of frequency ($a$) for the wide band in the two-band model with interaction $U=2.4$, and ($b$) for a single-band calculation with identical parameters (bandwidth $W = 4$, interaction $U=2.4$, and temperature $T = 1/40$). }\label{fig:spectraW} \end{figure} The spectral function of the wide band of the two-band Hubbard model, as calculated with the MEM from our quantum Monte Carlo data for the single-particle Green function, is presented in Fig.\ \ref{fig:spectraW}a, where it is juxtaposed with the spectral function of a single-band model with the same parameters [see Fig.\ \ref{fig:spectraW}b]; the latter corresponds to the wide band in two-band model (\ref{eq:model}) with the interorbital couplings $U'$ and $J_z$ removed. The spectral function of the wide band in Fig.\ \ref{fig:spectraW}a is characteristic of the typical spectrum of the wide band within the OSMP.
Minor changes in the interaction strength $U$ cause only slight changes in the spectrum. Comparing the wide band of the two-band system to the spectral function from a single-band calculation, one finds that the spectral weight of the wide band has a broader distribution. This can be attributed to an increased scattering amplitude in the two-band model, due to the larger number of interaction channels. Moreover, the Kondo resonance is more pronounced in the single-band case, and the dips between the resonance and the Hubbard bands are clearly visible in the low-doping regime. One also notices a dip in the spectral function of the wide band at the Fermi edge in the low-doping domain. This phenomenon is well known from previous work\cite{knecht2005} and can be attributed to non-Fermi-liquid behavior. In fact, the same kind of dip is found in the Hubbard-III solution or, equivalently, in the exact solution of the half-filled Falicov-Kimball model, which describes a non-Fermi liquid. Note that the dip persists to about $n=2.1$, i.e., the boundary of the OSM phase (at $U=2.4$); beyond that density, it cannot be distinguished from the (inevitable) numerical noise in the spectra. We will further discuss the extent of non-Fermi-liquid features in Sec.\ \ref{subsec:QP}. We now consider the spectral function of the {\it narrow\/} band in a two-band system, which is again juxtaposed with the spectrum of a single-band model at comparable parameter values. For this purpose, we define scaled interaction strengths $\tilde U^\mathrm{(1)}\equiv U/U_\mathrm{c}^\mathrm{(1)}$ and $\tilde U^\mathrm{(2)}\equiv U/U_\mathrm{c}^\mathrm{(2)}$ for the single- and the two-band system, respectively. Here $U_\mathrm{c}^\mathrm{(1)}$ and $U_\mathrm{c}^\mathrm{(2)}$ are the critical interactions for the Mott transition of the single-band model and the metal-OSMP transition of the two-band system, respectively.
The numerical values of $U_\mathrm{c}^\mathrm{(1)}$ and $U_\mathrm{c}^\mathrm{(2)}$ are approximately given by: \begin{eqnarray*} U_\mathrm{c}^\mathrm{(1)} \approx 2.35\hspace{5mm}&\ &\mbox{(single-band\ model)} \\ U_\mathrm{c}^\mathrm{(2)} \approx 2.10\hspace{5mm}&\ &\mbox{(two-band\ model)} \end{eqnarray*} The parameter values of the narrow band and the single-band system are referred to as ``comparable'' if $\tilde U^\mathrm{(1)}\approx \tilde U^\mathrm{(2)}$; then, the systems have about the same ``distance'' from a Mott transition at half filling. In actual QMC calculations, it has to be taken into account that the critical interactions $U_\mathrm{c}^\mathrm{(1)}$ and $U_\mathrm{c}^\mathrm{(2)}$ depend slightly on the Trotter discretization. Moreover, in comparing the two bands, a slight mismatch of the available $\tilde U$ values is unavoidable, since the required $U_\mathrm{c}$ values are only approximately known and the ratios $\tilde U$, resulting from the simulations, are known only on generally non-identical grids. Note that the need for rescaling arises for the interactions since the relevant parameter is $\tilde U -1$, which is strongly affected even by small changes in $U_\mathrm{c}$ in the phase region of interest. In contrast, due to the small difference of only about $10\%$ between $U_\mathrm{c}^\mathrm{(1)}$ and $U_\mathrm{c}^\mathrm{(2)}$, a rescaling of frequencies is not necessary for the comparison (but would make quantitative comparisons with other data more difficult). Similarly, it is not expected that temperature variations of the order of $10\%$ would visibly alter the spectra; consequently, the comparison is made at identical (unscaled) temperatures.
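The bookkeeping behind the choice of ``comparable'' parameter values can be sketched as follows; only the $U_\mathrm{c}$ estimates above are taken from the text, while the $U$ grids are hypothetical:

```python
def match_scaled_U(target, U_grid, Uc):
    """From an available U grid, pick the value whose scaled interaction
    U/Uc is closest to the target ratio; return (U, U/Uc)."""
    U = min(U_grid, key=lambda u: abs(u / Uc - target))
    return U, U / Uc

UC1, UC2 = 2.35, 2.10          # approximate critical interactions (see text)
grid1 = [2.0, 2.2, 2.4, 2.6]   # hypothetical single-band U grid
grid2 = [1.8, 2.0, 2.2, 2.4]   # hypothetical two-band U grid
for target in (0.95, 1.05):
    u1, r1 = match_scaled_U(target, grid1, UC1)
    u2, r2 = match_scaled_U(target, grid2, UC2)
    print(f"target {target}: single-band U = {u1} (ratio {r1:.3f}), "
          f"two-band U = {u2} (ratio {r2:.3f})")
```

The residual mismatch between the two selected ratios is exactly the grid effect discussed above.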
It should also be noted that a rescaling would not have been appropriate for Fig.\ \ref{fig:spectraW}, where the wide band of the two-band system is compared with a single-band system of equal bandwidth $W=4$; in that case, only the two-band system is close to an MIT (of the narrow band), while the single-band system remains metallic at least for $U\lesssim 4.7$. \begin{figure} \begin{center} \hspace*{4mm}\includegraphics[angle=0,scale=0.45,trim=3mm 91mm 3mm 10mm,clip]{figures/dos_two_vs_single.ps} \end{center} \caption{(Color online) DOS calculated for various particle densities near half filling as a function of frequency, both for the narrow band in the two-band model (left column) and for a single-band calculation (right column). Panels within one row have approximately matching scaled interactions $\tilde U$. In the two-band case (left column), the {\it total} filling is varied in the range $2\leq n\leq 2.2$ in uniform steps of 0.01, with a corresponding offset. In the single-band case (right column), the filling is varied nonuniformly: here, the offset is chosen so that adjacent spectra with the same offset have (roughly) the same {\it partial} filling (i.e., $n_{\text{n}}$ in the two-band model equals $n$ of the single-band model). }\label{fig:spectraN} \end{figure} Due to the occurrence of a metal-insulator transition, the changes in the spectrum of the {\it narrow\/} band, as the interaction is varied through $\tilde U =1$, are far more drastic than in the case of the wide band. For the narrow band, therefore, we discuss results for several interaction ratios $\tilde U$, both below and above the metal-insulator transition, which are presented in Fig.\ \ref{fig:spectraN} (panels $a$, $c$, $e$, and $g$), where they are juxtaposed with the ``comparable'' single-band spectra of panels $b$, $d$, $f$, and $h$.
For interaction values $\tilde U\simeq 0.85$, clearly below the metal-insulator transition(s), we conclude from Fig.\ \ref{fig:spectraN} (panels $a$ and $b$) that the spectra of the narrow band and the single-band model show only minor differences. The Kondo resonance is well established in both cases, and the overall shape of the spectra is very similar. The dips in the spectrum between the resonance and the Hubbard bands are clearly more pronounced in the single-band case, in particular for larger doping. For a relative interaction strength $\tilde U\simeq 0.95$, still slightly below the metal-insulator transition, we note from Fig.\ \ref{fig:spectraN} (panels $c$ and $d$) that, close to half filling, the Kondo resonance of the narrow band in a two-band model has considerably smaller weight and width than that of the single-band model, much beyond possible scaling effects. For the narrow band, more spectral weight is shifted from the vicinity of the Fermi level to the shoulders of the Hubbard bands, so that the dips between the resonance and the Hubbard bands are, again, less pronounced for the narrow band than for the single-band system, and the peaks of the Hubbard bands are closer to the Fermi level in panel $c$ than in panel $d$. For $\tilde U\simeq 1.02-1.05$, both the narrow band of the two-band system and the single band are in a Mott insulating state at half filling, as can be seen from the gapped spectra at the lowest doping levels in panels $e$ and $f$ of Fig.\ \ref{fig:spectraN}. As the doping concentration is increased, the spectral weight near the Fermi level is larger for the single band than for the narrow band, in accordance with our previous observations concerning the pinning of the particle numbers of the narrow band at not-too-large doping.
In spite of the pinning of the particle density ($n_{\mathrm{n}} \simeq 1$), we note from Fig.\ \ref{fig:spectraN}$e$ that the shape of the Hubbard bands of the narrow-band spectrum experiences some considerable changes at low doping. At larger doping, we note that the gap between resonance and Hubbard bands is much deeper in the single-band case than for the narrow band. Moreover, the height of the Kondo resonance of the narrow band at the largest available doping levels seems to be reduced compared to the single-band case due to a larger number of available scattering channels. The evolution of the overall shapes with increasing doping can also be characterized in the following way: in the single-band case, the quasiparticle peak grows essentially within the gap; in the two-band case, the narrow-band resonance evolves from a shoulder of the (upper) Hubbard band. For interaction values $\tilde U\simeq 1.11-1.14$, significantly above the metal-insulator transition, one sees from panels $g$ and $h$ of Fig.\ \ref{fig:spectraN} that the behavior, already indicated in panels $e$ and $f$, becomes more pronounced. For larger doping ($n\ge 2.10$) the upper Hubbard band crosses the Fermi level, and a Kondo resonance evolves in both the narrow and the single band. For the narrow band, the resonance is strongly suppressed, however, and the vicinity of the Fermi level is dominated by the upper Hubbard band. Also, there is not a clear gap between the resonance and the lower Hubbard band of the narrow-band spectrum. In contrast, the single-band model develops a very sharp Kondo resonance which is clearly separated from the lower Hubbard band by a (pseudo) gap. \subsection{\label{subsec:N0}Spectral weight at the Fermi level} From our MEM results for the spectral function we determine in particular the spectral weight at the Fermi level. Our results for this quantity, both for the narrow and for the wide band, are presented in Fig.\ \ref{fig:N0}.
\begin{figure}[t] \includegraphics[angle=270,scale=0.5]{figures/fermi_edge_NB.ps} \caption{(Color online) Spectral weight at the Fermi level for the two-band model. The thin dashed curves represent the spectral weights at the Fermi level according to Luttinger's theorem. All other curves are guides to the eye only. }\label{fig:N0} \end{figure} Within DMFT, M\"uller-Hartmann\cite{ZPB.76.211} showed for {\it single-band\/} models that, in the ground state, the spectral weight at the Fermi level is not renormalized by the interaction as a consequence of Luttinger's theorem. His proof is also applicable to multiband Hamiltonians, diagonal in the band and spin indices, such as the $J_z$-model (\ref{eq:model}) considered here. The reason is that, for such Hamiltonians, the multiband Green function, the Weiss field, and the self-energy are also diagonal in the band and spin indices. Hence, Luttinger's theorem applies also to the multiband case, if the self-energy has quasiparticle properties: $\mathrm{Im}\,\Sigma(\omega) = {\cal O}(\omega^2)\ (\omega\rightarrow 0)$ on the real axis. Similar arguments can be found in Ref.\ \onlinecite{held_et_al_phys_stat_sol_review}. For comparison, we have also plotted, in Fig.\ \ref{fig:N0}, the spectral weight at the Fermi level as predicted by Luttinger's theorem (dashed lines). It is clear from Fig.\ \ref{fig:N0} that the agreement between the spectral weights $A_\mathrm{w}(0)$ for the wide band and $A_\mathrm{n}(0)$ for the narrow band, both calculated at the Fermi level, and the values predicted by Luttinger's theorem improves with doping. In particular, there are significant deviations at low doping between the data and the reference curves. There are several reasons for this: First, the narrow band is non-metallic and, hence, not a Fermi liquid at low doping (and for $U\gtrsim 2.1$). Second, the data were sampled at finite temperature and the reference curve is valid only at $T=0$.
Third, one expects that the bands, even when they are metallic, retain some non-Fermi-liquid properties at interaction values which, at half filling, are characteristic of an OSMP. Quantitatively, one deduces from Fig.\ \ref{fig:N0} that the metallicity (and hence the quasiparticle properties) of the wide and the narrow band develop in the doping ranges $2.0 \lesssim n \lesssim 2.3$ and $2.3 \lesssim n \lesssim 2.6$, respectively. For larger doping concentrations and interactions $U\gtrsim 2.6$, the agreement between the data and the reference curves is quite good. \subsection{\label{subsec:QP}Self-energy and quasiparticle weights} We briefly discuss the behavior of the quasiparticle weight (QPW) $Z$, as defined by Eq.\ (\ref{eq:Z}), as a function of interaction and density. As stressed before, the discrete estimate (\ref{eq:Z}) of $Z$ can, strictly speaking, only be interpreted as a physical ``quasiparticle weight'' if both bands are Fermi liquids. If one now studies the behavior of the QPW in the two-band Hubbard model away from half filling for interaction values {\it outside\/} the Fermi-liquid regime ($U\gtrsim 2$), one expects and indeed finds an anomaly: The QPW of the narrow band does {\it not\/} depend monotonically on the filling [Fig.\ \ref{fig:QP}a], as it would in a Fermi-liquid phase, but is increasingly enhanced at $n\approx 2.1$ with increasing $U$. To make sure that the behavior in Fig.\ \ref{fig:QP}a is not a numerical artifact, these results were cross-checked with a different QMC solver. \begin{figure} \includegraphics[angle=270,scale=0.5]{figures/Z_inset.ps}\\ \includegraphics[angle=270,scale=0.5]{figures/real_sigma_U2-40.ps} \caption{(Color online) ($a$) Quasiparticle weight $Z_{\text{n}}$ as defined in Eq. (\ref{eq:Z}) for the narrow band in the two-band model. $Z_{\text{n}}$ develops a local maximum as a function of filling in the weakly doped regime $2.05\lesssim n\lesssim 2.15$ with increasing interaction $U$.
There are no further anomalies at stronger doping (see inset). ($b$) Real part of the self-energy calculated for $U=2.4$ at discrete Matsubara frequencies for the two-band model. The self-energy exhibits an anomaly (i.e., a strong enhancement at small frequencies) in the intermediate doping range ($2.05\lesssim n\lesssim 2.2$). The lines are guides to the eye only. }\label{fig:QP} \end{figure} Having established that the QPW behaves anomalously outside the Fermi-liquid regime, it is interesting to search for possible non-Fermi-liquid anomalies in the full frequency-dependent self-energies, from which the QPWs have been computed according to Eq.\ (\ref{eq:Z}). The self-energies behave smoothly as a function of the (discrete) Matsubara frequencies, down to the first Matsubara frequency, for all metallic solutions. However, already for the moderately large interaction value $U=2.4$, a clear anomaly can be seen in the real part of the self-energy [Fig.\ \ref{fig:QP}b]. In the doping region between $n=2.00$ and $n=2.40$ some intermediate solutions show divergent behavior for $\omega_{n}\to 0$, which strongly differs from the smooth solutions for densities $n=2.00$ and $n=2.40$. Moreover, $\Sigma(i\omega_n)$ shows a non-monotonic dependence on the filling at the first four Matsubara frequencies $\omega_1$, $\omega_2$, $\omega_3$, and $\omega_4$. This contrasts strongly with the characteristics found in the metallic regime, where $\Sigma(i\omega_n)$ is monotonic as a function of $n$. Hence the self-energies, too, show anomalous behavior, as was found before for the secant estimate [see Eq.\ (\ref{eq:Z}) and Fig.\ \ref{fig:QP}a]. Note that the real part of the self-energy has to vanish on the imaginary axis due to particle-hole symmetry at half filling, even in the insulating phase. Let us now come back to the question of how closely non-Fermi-liquid properties are connected with orbital-selective Mott physics (for anisotropic Hund's rule couplings).
In Fig.\ \ref{fig:spectraW}, we have already seen that the low-frequency dip in the wide-band spectrum, characteristic of a non-FL, persists at least up to $n=2.1$, the boundary of the OSM phase; however, some traces remain up to about $n=2.15$. So it is not clear, at this point, whether the OSMP and the non-FL phase are really identical. In order to avoid the ambiguities of analytic continuation, we will now discuss this issue on the basis of the imaginary-time self-energy data shown in Fig.\ \ref{fig:ImS}. \begin{figure} \includegraphics[angle=270,scale=0.5]{figures/SE_imag_FL.ps} \caption{(Color online) ($a$) Imaginary part of the self-energy $\mathrm{Im}\,\Sigma(i\omega_n)$ for various total fillings $n$ at $U=2.4$ and $T=1/40$; intermediate minima are seen at low frequencies only for $n\ge 2.15$. ($b$) Results for $\mathrm{Im}\,\Sigma$ computed at different temperatures for selected densities (at $U=2.4$). The curves are hardly distinguishable for $n=2.1$; for $n=2.2$, the low-frequency minimum becomes more pronounced with decreasing $T$. }\label{fig:ImS} \end{figure} Subpanel (a) shows $\mathrm{Im}\,\Sigma(i\omega_n)$ for $U=2.4$ and $T=1/40$ (i.e., the same parameters as in Fig.\ \ref{fig:spectraW}) across a range $2.05\le n\le 2.25$ of densities. Close to half filling, at $n\le 2.1$, $\mathrm{Im}\,\Sigma(i\omega)$ is a monotonic function of $\omega$ which clearly tends toward finite nonzero values (of order 1) for $\omega\to 0$, i.e., shows characteristic non-FL behavior. In contrast, at $n=2.25$, far away from half filling (and from the OSMP), the self-energy clearly decays at small frequencies, consistent with the Fermi-liquid scenario $\mathrm{Im}\,\Sigma(i\omega) = {\cal O}(\omega) + {\cal O}(T^2)$. In between, the situation is less clear: it is impossible to decide, from this data alone, whether the apparent finite value to which the data extrapolates for $\omega\to 0$ represents thermal or intrinsic non-FL effects.
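As a reference point for this extrapolation, recall the standard Fermi-liquid phenomenology (quoted here for orientation, not derived from our data): at low temperature, a Fermi liquid obeys
\[
\mathrm{Im}\,\Sigma(i\omega_n) \simeq -\Bigl(\frac{1}{Z}-1\Bigr)\,\omega_n + {\cal O}(\omega_n^3), \qquad \omega_n \to 0,
\]
so the Matsubara data should extrapolate to zero with a slope controlled by the quasiparticle weight [which is also what makes the secant construction of Eq.\ (\ref{eq:Z}) meaningful in the Fermi-liquid regime], whereas a finite intercept for $\omega_n\to 0$ measures a residual scattering rate and thus signals non-FL behavior or residual thermal broadening.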
Therefore, we have checked the impact of variations of the temperature for two selected densities. As seen in Fig.\ \ref{fig:ImS} (b), the curves connecting the discrete Matsubara data are on top of each other for $n=2.1$. Evidently, the low-temperature physics is determined by genuine non-FL scattering in this case; temperature effects are negligible. In contrast, a clear trend with decreasing temperature toward a Fermi-liquid form is seen for $n=2.2$; however, much lower temperatures would be needed in order to verify the full restoration of FL properties, which would be very costly using QMC. This is even more true for $n=2.15$, for which $\mathrm{Im}\,\Sigma(i\omega_n)$ shows only a very shallow minimum. So we have established that Fermi-liquid properties are, indeed, restored for large enough doping; unfortunately, our data cannot show whether this boundary matches the narrow-band OSMT at low temperatures. However, we have clearly shown that the wide band is a non-FL throughout the orbital-selective Mott phase. Note that the present Matsubara-frequency analysis nicely confirms the conclusions drawn from the wide-band spectra [Fig.\ \ref{fig:spectraW}] and, thereby, the reliability of the maximum-entropy procedure. \section{\label{summary}Summary and Conclusion} We first summarize our results. In this paper we studied electronic correlations within the two-band $J_z$ model (\ref{eq:model}) using a high-precision QMC-DMFT code, both close to and far away from half filling. We calculated both static and dynamic properties. Among the static properties, we focused particularly on band-specific fillings and intraorbital double occupancies as functions of interaction and doping. The most important result for these quantities is their pinning at the Mott insulating state for the narrow band. Our results for static quantities in a wide range of interaction and doping values were summarized in a phase diagram.
In the section on the dynamical properties we focused on the frequency dependence of the spectral function and, in particular, on the density of states at the Fermi level. We calculated the density of states, both of the wide and of the narrow band, and compared the results to those for a single-band system. The main finding here is that, in the two-band model, the density of states is somewhat broadened, and that (compared to single-band spectra) weight from the Kondo resonance is shifted into the dips between the resonance and the Hubbard bands. A physical explanation for this is that the interorbital coupling introduces additional scattering channels for the electrons, which reduce quasiparticle lifetimes and, hence, smear out the spectrum. In this paper, we followed the common practice in the study of orbital-selective Mott transitions and focused on \emph{paramagnetic} phases, excluding magnetic states. Still, the true thermodynamic equilibrium states of model (\ref{eq:model}) may well be magnetic for the parameters of interest. In fact, it is known for the half-filled case (in high dimensions) that the N\'{e}el temperature at $U_{c1}$ is about \emph{six} times larger than the critical temperature $T_{c1}\approx0.02$ of the OSMT (Ref.\ \onlinecite{pvd2007}) so that the OSMTs observed in paramagnetic calculations should be hidden inside the antiferromagnetic equilibrium state. However, studies of symmetry-broken states are notoriously difficult away from half-filling due to the large variety of possibilities. Also, various mechanisms exist which can suppress ordered phases, such as disorder, spin-orbit coupling, or longer-range hopping.\cite{pvd2007} In particular, \emph{finite dimensionality} can be expected to suppress magnetic states effectively in planar systems such as $\mathrm{Ca_{2-x}Sr_xRuO_4}$.
Therefore, and since Mott transitions have been observed\cite{Koga05PRB,Gorelik09} to persist as narrow crossovers far above critical temperatures, results obtained in the paramagnetic phase should be qualitatively valid, at least at intermediate temperatures. To conclude, we demonstrated that, for interaction values in the OSMP range $2\lesssim U\lesssim 2.6$, the entire doping range $0\leq n\leq 4$ is subdivided into three different regimes by the behavior of the narrow band. At low doping, both the particle number and the double occupancy of the narrow band are pinned to their Mott insulating values, so that, upon increasing the doping, the wide band absorbs all additional electrons or holes; in this regime, the wide band shows characteristic non-Fermi-liquid behavior. In the intermediate doping regime, i.e., from a critical doping concentration $n_{\mathrm{crit}}(U)$ onward, the narrow band leaves the pinned state, so that both bands contribute to the compressibility and become metallic. Third, for a nearly filled (or nearly empty) system with $n\gtrsim 3.6$ (or $n\lesssim 0.4$), the narrow band is effectively a band insulator (or effectively empty), so that, also in this regime, only the wide band contributes to the compressibility. In this third regime, correlation effects become suppressed and quantum fluctuations vanish. Surprisingly, this effect can already be seen qualitatively in the double occupancy for densities not too far from half filling ($|n-2|\gtrsim 0.5$). Taken together, our findings give a fairly complete and reliable description of the physics of the two-band model (\ref{eq:model}) with inequivalent bands, in which genuine multi-band and orbital-selective features are clearly separated from generic correlation effects (that also exist in single-band models). \vspace{1ex} {\bf Acknowledgments --} The work of one of us (E.J.)
was supported by the Graduate School of Excellence ``Materials Science in Mainz'', funded by the German Research Foundation with both federal and state support within the framework of the Excellence Initiative.
\section{Improvements to the Platform} \label{Improvements} The results of the previously published validation of HuggieBot 2.0 \cite{TheSixHugCommandments} and the user comments from the action-response elicitation study (Section~\ref{subsec:user-comments-study1}) showed four main aspects of the system that could benefit from improvement: the hug initiation process, the vertical placement of the robot's arms on the user's body, the reliability of the inflated torso, and the quality and consistency of the robot's embrace around the user's body. Thus, we spent time addressing these concerns to improve the quality of the hug that this robot can deliver to users, upgrading HuggieBot from version 2.0 to version 3.0. Table~\ref{tab:huggiebot} summarizes the key features of these two successive versions. We extensively piloted all of these changes with representative users and made further adjustments based on their feedback. The following subsections provide more detail about the final changes made and used in the validation study. \subsection{Hug Initiation} \label{subsec:initiation} The initial evaluation of HuggieBot 2.0 by \citet{TheSixHugCommandments} tested two different ways for users to initiate a hug, always starting about 2.5 meters in front of the robot. In the first method, the user pushed a button to start the hugging process. The second method used HuggieBot's built-in depth camera to recognize a human and then start the hugging process when the potential user starts walking toward the robot. Users did not rate the two methods significantly different; this indifference can be explained by their comments on this topic. Because the visual hug-initiation method was triggered by the user's forward movement, and because the robot arms close slowly, users would often reach the robot before its arms had closed very far, causing them to have to wait for the robot's embrace.
After piloting several alternative approaches, we improved the visual hug initiation process by dividing it into two steps. First, when all the necessary software nodes are running, the robot lifts its arms and asks the user ``Can I have a hug, please?'' The phrase and arm movement clearly show the user when they may begin hugging the robot; previously, the experimenter prompted users when they could begin. Lifting the arms also beneficially reduces the distance the robot's arms need to travel to close around the person, thus shortening the waiting time disliked by users. After this invitation step, HuggieBot uses the previous method to visually detect the user's forward motion and initiate the closing sequence. The robot waits between the ``hug request'' pose and closing its arms for as long as the user needs in order to reduce time pressure, so users do not feel like they have to start the next hug immediately. These small changes were implemented to make the robot's hug timing more natural and intuitive. This method also beneficially reduces experimenter interaction with the user and better mimics human-human hugs, where one person lifts their arms for a hug and waits for their partner to approach before wrapping their arms around them. \subsection{Adjustment to User Height} \label{subsec:height} As shown in Fig.~\ref{fig:QualityMatrix}, the robot's rubs and pats received lower average quality ratings than hold and squeeze. When users explained the low ratings, the most common criticism was the location at which the gesture was performed, which was not optimal for their bodies. As HuggieBot 2.0's arms always hugged at the same height off the ground, these gestures were performed too low for tall users and too high for short users. 
In addition to the inappropriateness of some contact locations reported in the comments, the convex or concave curvature of different areas of different users' backs exacerbated this problem by causing loss of contact or excessive contact when the robot performed some gestures. To resolve this hand placement issue, the robot must improve its visual perception of the user. In addition to detecting a potential user and estimating his/her approach speed toward the robot, HuggieBot needs to perceive the user's approximate height and adjust its arm positions accordingly, something humans do naturally, quickly, and efficiently. To simplify this problem, several assumptions were made. First, we assumed that the camera is perfectly parallel to the floor. Second, we assumed that the person approaching is standing perpendicular to the floor. These assumptions help simplify the problem from a three-dimensional problem in point-cloud space to a planar problem. Simplicity is desired in this case to keep the computational load low and allow for real-time adjustments. The problem then becomes one of similar triangles. The depth camera's resolution is 1280 pixels $\times$ 720 pixels, and its focal length is 651.55 pixels. Based on the room's size constraints and the need to keep the camera oriented parallel to the floor, the camera cannot see the user's feet and lower legs when the person is first detected. To accommodate this reduction in the bounding box's height, after the visible height of the user has been calculated in meters, a small adjustment is added based on the user's distance from the camera to account for the height of the unseen portion of their body. 
The full linear projection can be written as follows, using constants obtained through measurements: \begin{equation}\label{eq:1} H = \frac{(D \cdot b)}{f} - \alpha \cdot D + h_c \end{equation} where $H$ is the user's full height in meters, $D$ is the distance between the user and the camera in meters, $f$ is the depth image focal length in pixels, and $b$ is the height of the bounding box of the detected person in pixels. In addition, $h_c = 1.73$~m, which is the height of the robot's camera above the ground, and $\alpha=0.5518$, which is a geometric constant that is multiplied by the distance between the user and the robot to proportionally account for the occluded height between the bottom of the bounding box and the floor. We found that individual height estimates computed in this way are somewhat noisy, so HuggieBot 3.0 averages five successive measurements. We set the ideal shoulder lift angle for HuggieBot 3.0's left arm based on the estimated height of the user, and we then offset the right arm shoulder lift angle up by 20 degrees from that point to create good inter-arm spacing. To determine appropriate arm lift angles for users of different heights, we performed brief experiments with two model users at the minimum (1.40 m) and maximum (1.93 m) user heights we anticipate encountering. We manually adjusted both robot arms around each model user to a comfortable height on their back and recorded the corresponding shoulder lift joint angles. 
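As a minimal sketch, the height estimate of Eq.~(\ref{eq:1}), the five-sample averaging, and the interpolation and clamping of Eq.~(\ref{eq:2}) can be written as follows. The function names are ours, and the calibrated joint angles $\theta_{\ell,\min}$ and $\theta_{\ell,\max}$ are left as parameters because their measured values are not quoted here:

```python
import statistics

# Constants quoted in the text
FOCAL_LENGTH_PX = 651.55  # f: depth-image focal length in pixels
CAMERA_HEIGHT_M = 1.73    # h_c: height of the camera above the floor (m)
ALPHA = 0.5518            # geometric constant for the occluded lower body


def estimate_height(distance_m, bbox_height_px):
    """Single-frame user height from Eq. (1): H = D*b/f - alpha*D + h_c."""
    return (distance_m * bbox_height_px) / FOCAL_LENGTH_PX \
        - ALPHA * distance_m + CAMERA_HEIGHT_M


def averaged_height(measurements):
    """Average successive (distance, bbox-height) estimates; HuggieBot 3.0
    averages five of them to tame single-frame noise."""
    return statistics.mean(estimate_height(d, b) for d, b in measurements)


def left_shoulder_lift(height_m, theta_min, theta_max,
                       h_min=1.40, h_max=1.93):
    """Eq. (2): linearly interpolate the left shoulder lift joint angle
    between the poses calibrated on the short (1.40 m) and tall (1.93 m)
    model users; heights outside that range are clamped to the nearer one."""
    h = min(max(height_m, h_min), h_max)
    return theta_min + (h - h_min) * (theta_max - theta_min) / (h_max - h_min)
```

The right arm's shoulder lift angle would then be offset by $20^\circ$ from the returned left-arm angle, as described in the text.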
We perform linear interpolation to find the ideal left shoulder lift joint angle for the approaching user, as follows: \begin{equation}\label{eq:2} \theta_{\ell} = \theta_{\ell,\min} + (H - H_{\min}) \cdot \frac{(\theta_{\ell,\max} - \theta_{\ell,\min})}{(H_{\max} - H_{\min})} \end{equation} where $\theta_{\ell,\min}$ and $\theta_{\ell,\max}$ are the robot's left shoulder lift angles for the minimum-height and maximum-height model users, respectively, $H$ is the user's estimated height in meters, and $H_{\min}$ and $H_{\max}$ are the heights of the short and tall model users, respectively. When the user's estimated height is outside the range of the model users, the closer model user's robot arm placement is used. \subsection{New Torso} \label{subsec:torso} We created a new inflated torso for HuggieBot 3.0 to address several shortcomings in the previous design. HuggieBot 2.0's torso contained a pressure sensor and a microphone in both the front chamber and the back chamber. Early testing showed that the front chamber data provided little information beyond the back chamber data, so HuggieBot 2.0 did not use the information from these sensors. Our new torso has sensors only in the back chamber; furthermore, the sensor wires exit the top of the chamber rather than the bottom of the chamber to minimize the distance to the computer in the robot's head. The torso's initial design featured two different-sized chambers, with the back chamber being slightly smaller. Study participants of all sizes used various arm positions to hug the robot and perform intra-hug gestures on its back. Therefore, we decided to increase the back chamber's size to be equal to the front chamber to better accommodate all users.
Some users squeezed the robot much more tightly than we anticipated during the squeezing hugs of the action-response elicitation study, occasionally popping holes in a chamber or forcing a resealable inflation valve to open; both of these failure modes allow air to escape, change the feel of the robot's torso, and require re-inflation. We designed the new chamber to be more robust to withstand these higher pressures. We ensured a robust and airtight seal on HuggieBot 3.0's new torso by heat sealing along the edges and then using HH-66 vinyl cement on top of the heat seal. The newly constructed torso was tested by pilot users performing the four studied gestures during hugs with the robot. After we matched the sensitivity of the new microphone to that of the previous one, the sensor recordings exhibited the same general patterns as those shown in Figs.~\ref{fig:P8} and \ref{fig:P13}. The lack of leaks in the new chamber beneficially also stabilized the starting pressure, which makes both the feel of the robot and the measurements of its haptic sensors more consistent over time. \subsection{Quality and Consistency of the Embrace} Finally, several smaller changes were made to how the robot's arms grasp the user based on the feedback from the action-response elicitation study as well as additional pilot users. The changes we made are as follows: \begin{enumerate} \item Some users were thinner than we had anticipated, so some of the robot's joints reached their goal angles without coming into contact with the user's back. Therefore, we made the closing goal angles much smaller, such that no user will fit inside the goal pose, which ensures good contact between both robot arms and the user's back. \item To automatically adjust to our users' shapes and sizes, we use a torque threshold to turn off individual joints when they come into contact with the user. If a single torque measurement exceeds this threshold, HuggieBot 2.0 stops that joint from moving further. 
We found that while this kept users safe, it did not provide them with a consistent feeling of being fully embraced because some spuriously high torque readings were occasionally measured. Therefore, we implemented a moving average filter on all torque measurements with a window of three values. When the average torque in this window surpasses the threshold, the joint stops. After making this adjustment, we also tuned the torque thresholds and the relative offset for each joint's final target with multiple diverse users to ensure HuggieBot 3.0's embrace was comfortable and not painful. \item With the improved arm placement and more complete closure, we noticed that the wrist rotation of the robot's arms could cause uncomfortably high pressure on the backs of some users. We adjusted the goal pose for the wrist angles for both arms to ensure the flat and comfortable side of the wrist is in contact with the user's back. \item With the improved quality of the contact between the arms and the user's body, we found that HuggieBot's squeeze was now too tight for many users. We thus reduced the elbow joint's squeeze movement from a magnitude of 5$^\circ$ to only 3$^\circ$. \item We reduced the torque threshold required to release from the hug during gestures. This value was set rather high (60~Nm) for HuggieBot to prevent users from accidentally triggering a torque release when performing a gesture; however, pilot users were occasionally bothered by how hard they had to lean back to make the robot release them in the middle of a gesture. This threshold was tuned through pilot testing to have a final value of 20~Nm during a regular embrace and 40~Nm during robot gestures. It is important to note that we kept a single value for this torque threshold rather than checking a moving average of the measurements in order to release the user without delay. If a user wants to end the hug at any time, HuggieBot 3.0 opens its arms immediately. 
Note that the user presently cannot trigger a release during a timed squeeze from the robot because this call is blocking; however, the user can trigger a release between timed squeezes and also when the robot is performing a squeeze response that is matched in duration to their squeeze. \end{enumerate} \noindent These four categories of hardware and software improvements upgraded HuggieBot 2.0 to become HuggieBot 3.0, as summarized in Table~\ref{tab:huggiebot}. \section{Detection and Classification of Gestures} \label{Detection} To build an autonomous perception system that HuggieBot 3.0 can use to detect and classify intra-hug gestures, we manually labeled 498 trials (14 of the initial 512 trials were discarded due to problems in the data) recorded in the action-response elicitation study. One of the authors visualized the pressure and microphone signals and marked the hug's start to be the first point where the pressure rises from its initial value. Similarly, the hug's end was the first point where the pressure value declined back to its initial value. The same author marked the start and end of touch gestures based on the distinct signatures of the gestures in the pressure and microphone signals. Next, we divided the time-series hug data into a large number of segments and extracted statistical features from each segment. We applied a moving window (W) with an overlap size (O) to divide the time-series data for each hug into shorter segments. If a segment’s overlap with the gesture timestamps was above a certain threshold (T), the segment was labeled with that touch gesture. For each segment, we subtracted the baseline pressure and baseline microphone values from the corresponding signals. These baseline values were calculated by taking the median of the first 150 pressure and microphone data points after the start of the hug, which corresponds to about 3.3 seconds of data recorded at 45~Hz. 
We extracted 80 statistical features for each segment, including sum, minimum, maximum, average, median, standard deviation, variance, number of peaks, interquartile range, and area under the curve for the pressure and microphone signals and their first and second time derivatives. We divided the segments into train, validation, and test sets. We trained a random forest algorithm on 70\% of the data and used 20\% of the data as the validation set to determine the impact of the three parameters W, O, and T on the model performance. We tested window sizes of W = 50, 75, and 100 data points, overlap sizes of O = 37, 25, and 12 data points, and labeling thresholds of T = 0.75, 0.5, and 0.25. The classification accuracy varied by only about 1\% to 3\% among different parameter combinations. However, larger values of W and T make the robot slower at detecting user gestures. Thus, we selected W = 50, O = 37, and T = 0.75 as a trade-off between gesture detection accuracy and delay. Fig.~\ref{fig:confusionmatrix1} shows the confusion matrix after applying the resulting random forest model to the remaining 10\% test data. The overall classification accuracy achieved is 88\%, with the best performance on detecting squeezes. \begin{figure}[t] \includegraphics[width=0.75\columnwidth, trim = {0cm 0cm 0cm 0cm},clip]{figures/ConfusionMatrix-Test-31-12-2020.png} \vspace{-0.3cm} \caption{Confusion matrix for the test dataset gathered in the action-response elicitation study.} \label{fig:confusionmatrix1} \vspace{-0.3cm} \end{figure} After development and offline validation, this perceptual pipeline was transferred to the physical robot and adapted to run in real time. To conserve computational effort while achieving a fast reaction time, we calculate features on the most recent window of 50 data points every ten samples, rather than every sample. Intra-hug gestures are thus detected at a rate of approximately 4.5~Hz, always yielding an output of hold, pat, rub, or squeeze.
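To make the segmentation and labeling scheme concrete, here is a simplified sketch; all function and constant names are ours rather than from the actual implementation, and only a handful of the 80 statistical features are shown:

```python
import statistics

WINDOW = 50             # W: window length in samples (about 1.1 s at 45 Hz)
OVERLAP = 37            # O: overlap between consecutive training segments
LABEL_THRESHOLD = 0.75  # T: fraction of a segment that must lie in a gesture
ONLINE_STEP = 10        # online: recompute every 10 samples -> 45/10 = 4.5 Hz


def baseline(signal, n=150):
    """Median of the first 150 samples (~3.3 s at 45 Hz) after hug start."""
    return statistics.median(signal[:n])


def segment(signal, window=WINDOW, overlap=OVERLAP):
    """Split one hug's time series into overlapping windows."""
    step = window - overlap
    return [signal[i:i + window]
            for i in range(0, len(signal) - window + 1, step)]


def overlaps_gesture(seg_start, seg_end, g_start, g_end,
                     threshold=LABEL_THRESHOLD):
    """True if the segment overlaps the labeled gesture by at least T."""
    shared = max(0, min(seg_end, g_end) - max(seg_start, g_start))
    return shared / (seg_end - seg_start) >= threshold


def diff(sig):
    """First discrete time derivative of a signal."""
    return [b - a for a, b in zip(sig, sig[1:])]


def features(pressure, mic):
    """A few per-segment statistics; the full feature set (80 features) also
    includes second derivatives, peak counts, and area under the curve."""
    feats = []
    for sig in (pressure, mic, diff(pressure), diff(mic)):
        s = sorted(sig)
        iqr = s[(3 * len(s)) // 4] - s[len(s) // 4]  # crude interquartile range
        feats += [sum(sig), min(sig), max(sig), statistics.mean(sig),
                  statistics.median(sig), statistics.pstdev(sig), iqr]
    return feats
```

Online, the same feature computation runs on the most recent 50-sample window every ten samples, giving the 4.5~Hz detection rate quoted above.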
After being trained on the dataset collected from HuggieBot 2.0, the classifier was tested on the new inflated torso (Sec.~\ref{subsec:torso}) of HuggieBot 3.0. Pilot testing showed that the classifier's performance was worse than desired, probably due to the changes in the size and shape of the back chamber. Gesture samples were thus acquired from five pilot participants performing each of the four studied gestures during two hugs on the new torso. These 40 trials were labeled and processed in the same way as the original data from the action-response elicitation study. Because these additional examples carried too little weight when added to the 70\% training data (354 trials), we trained a new version of the classifier using 80\% of the data from ten randomly selected users from the action-response study (125 trials) plus the newly collected 40 trials. This retrained version showed good performance on the new inflated torso in pilot testing, so it was selected for use in the validation study. \section{Behavioral Response to Detected Gestures} \label{Response} Once it possesses a good pipeline for detecting and classifying intra-hug gestures, a hugging robot needs to decide how to act, i.e., which of the available intra-hug gestures to perform. We implemented the robot's behavior in all situations using a simple probabilistic approach (Section~\ref{subsec:probabilistic}), calculating the likelihood that the robot will perform each gesture as a function of the relevant average user ratings given the action that was just detected. Although we have treated them equivalently thus far, the active gestures of pat, rub, and squeeze occur far less often than the passive gesture of hold in natural hugs. The hold gesture can be thought of as the standard background against which the active gestures occur; almost every hug in the action-response elicitation study included periods of time where the user was simply holding the robot.
The differences between active and passive gestures require somewhat different types of behavioral responses from the robot. We found the need for an additional distinction between the discrete active gestures of pat and rub (Section~\ref{subsec:discrete}), which occur in discrete units, and the modal active gesture of squeeze (Section~\ref{subsec:modal}), which involves transitioning into and out of the state of applying higher pressure to the hugging partner. Note that the described methods are not limited to these four intra-hug gestures: another example of a discrete active gesture that future hugging robots could consider is tickling, while an alternative modal active gesture could be leaning into one's hugging partner. Finally, we describe how to apply our probabilistic behavior paradigm when a hugging robot detects a passive gesture such as hold (Section~\ref{subsec:passive}). Figure~\ref{fig:robot_state_diagram} depicts the robot's state transition diagram to clarify its controller. \begin{figure}[tp] \includegraphics[width=0.8\columnwidth, trim = {0.1cm 0cm 0.1cm 0.1cm},clip]{figures/robot_state_diagram.pdf} \vspace{-0.3cm} \caption{Diagram of HuggieBot 3.0's states and state transitions.} \label{fig:robot_state_diagram} \vspace{-0.3cm} \end{figure} \subsection{Probabilistic Behavior Paradigm} \label{subsec:probabilistic} Examination of Fig.~\ref{fig:BehaviorMatrix} shows that each row follows a somewhat different distribution of average user ratings; thus, the appropriateness of a particular robot response critically depends on the action the user has just performed. Our probabilistic approach was designed to respect that dependency as well as the relative appropriateness of the different favorably received responses in the way it chooses between gestures.
Specifically, we designed a simple generic method for converting the user preferences gathered during the action-response elicitation study into a probabilistic behavior model that determines which action the robot should perform based on which gesture it just detected from the user. The equation we crafted to transform the average ratings into probabilities is as follows: \begin{equation}\label{eq:3} p_{g|a} = \frac{\left(\max(r_{g|a} - \eta,\, 0)\right)^m}{\sum_{i = 1}^{N}\left(\max(r_{i|a} - \eta,\, 0)\right)^m} \end{equation} where $p_{g|a}$ is the probability with which the robot will perform the specified gesture $g$ given the user action $a$, $r_{g|a}$ is the average user rating that gesture $g$ received when presented as a robot response to user action $a$, $\eta$ is the neutral value on the rating scale, $m$ is a positive power that controls how strongly higher-rated gestures should be favored (chosen to be 3 for our study), $N$ is the total number of gesture options being considered (usually 4 for our study), and $r_{i|a}$ is the average user rating that gesture $i$ received when presented as a robot response to user action $a$. This formula subtracts the neutral value $\eta=5$ from each average rating to focus only on responses that were positively received; if a rating is below neutral, the value of zero is used instead to ensure the robot never performs that gesture in response to this user action. The numerator and denominator terms in equation~\eqref{eq:3} are raised to the power of $m=3$ to increase the probability the robot will select highly rated responses; other powers could be used to provide other blends between exploration of different options and exploitation of the known best choice. For example, $m=1$ sets the gesture probabilities to be directly proportional to the relative strength of their received ratings, which we found to be too random.
In comparison, $m=0$ would perform the $N$ gestures with equal probability (pure exploration with no dependence on user rating), and $m=\infty$ would always select the highest-rated gesture (pure exploitation with no variety). Regardless of the value of $m$, the $N$ resulting probabilities necessarily sum to unity. In practice, we pre-compute the contingent gesture probabilities from the average ratings shown in Fig.~\ref{fig:BehaviorMatrix}. The robot's software then implements this probabilistic behavior algorithm by generating a random number between 0 and 1 each time a response is required. The probabilities relevant to the detected action are stacked into cumulative intervals between 0 and 1, and HuggieBot 3.0 executes the gesture whose interval contains the generated number. \subsection{Responding to Discrete Active Gestures} \label{subsec:discrete} The intra-hug gestures of rubbing and patting both consist of discrete hand motions that the user can perform a single time or repeat many times in a row. Thus, the robot's response to these gestures is designed to also follow a discrete action paradigm. When the perceptual pipeline detects the start of a new user rub or pat, the robot first determines which discrete gesture to respond with by applying equation~\eqref{eq:3} to the average user ratings from the appropriate row (rub or pat) of Fig.~\ref{fig:BehaviorMatrix}. Because hold received a positive average rating in response to user rub actions, the robot chooses between all four gestures in this situation with probabilities of $p_{\textrm{hold|rub}}=0.01$, $p_{\textrm{rub|rub}}=0.30$, $p_{\textrm{pat|rub}}=0.14$, and $p_{\textrm{squeeze|rub}}=0.55$. In contrast, hold received a slightly negative average rating in response to user pat actions, so the hold response is never chosen after a pat is detected ($p_{\textrm{hold|pat}}=0.00$); the robot's three non-zero gesture response probabilities in this case are $p_{\textrm{rub|pat}}=0.27$, $p_{\textrm{pat|pat}}=0.21$, and $p_{\textrm{squeeze|pat}}=0.52$.
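Equation~\eqref{eq:3} and the stacked-sampling step translate directly into code. In this hedged Python sketch, the function names are ours and the ratings are illustrative placeholders rather than the study's actual averages:

```python
import numpy as np

def gesture_probabilities(ratings, eta=5.0, m=3):
    """Convert average user ratings into response probabilities (eq. 3).

    Ratings at or below the neutral value eta are clamped to zero, so the
    corresponding gesture is never chosen; the power m sharpens the
    preference for highly rated responses.
    """
    w = np.maximum(np.asarray(ratings, dtype=float) - eta, 0.0) ** m
    return w / w.sum()

def choose_gesture(gestures, ratings, rng=None):
    """Stack the probabilities and pick the gesture hit by a uniform draw."""
    rng = rng or np.random.default_rng()
    p = gesture_probabilities(ratings)
    # Cumulative stacking of the probabilities; a uniform random number
    # between 0 and 1 selects the gesture whose interval it lands in.
    idx = int(np.searchsorted(np.cumsum(p), rng.uniform()))
    return gestures[min(idx, len(gestures) - 1)]  # guard against rounding
```

For example, with hypothetical ratings $(6, 8, 7, 9)$, $\eta=5$, and $m=3$, the clamped weights $(1, 27, 8, 64)$ yield probabilities $(0.01, 0.27, 0.08, 0.64)$.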
The gestures with which the robot responds are all executed with fixed timing, as in the action-response elicitation study. The overlapping time windows used in our perception algorithm cause many successive detection events to be triggered over the course of a single long user action. Thus, while the robot is executing the response to a detected discrete active gesture, it enters a state of ignoring new detected actions for 2.5 seconds (slightly longer than the time it takes the robot to perform the selected gesture). This way, the robot does not accumulate a backlog of queued actions it needs to respond to, which could result in a never-ending pat or rub response. It should be noted that even in this state, the robot constantly checks to see if the user has indicated a desire to end the hug by either releasing pressure on the back chamber or leaning back against the arms, as shown in Fig.~\ref{fig:robot_state_diagram}. If either of these conditions occurs, the state machine overrides whatever gesture the robot is performing and begins releasing the user. Otherwise, the robot returns to responding to new detected actions after the delay elapses. \subsection{Responding to Modal Active Gestures} \label{subsec:modal} When a new squeeze is detected, HuggieBot 3.0 calculates the response gesture probabilities by substituting the average user ratings from the squeeze row of Fig.~\ref{fig:BehaviorMatrix} into equation~\eqref{eq:3}. Because hold received a slightly negative average rating in response to user squeezes, the hold response is never chosen after a squeeze is detected ($p_{\textrm{hold|squeeze}}=0.00$); the robot's three non-zero gesture response probabilities in this case are $p_{\textrm{rub|squeeze}}=0.10$, $p_{\textrm{pat|squeeze}}=0.09$, and $p_{\textrm{squeeze|squeeze}}=0.81$. If the chosen gesture is a discrete rub or pat, the response proceeds as described in the previous section.
However, the squeeze response to a squeeze action merits special handling because of its fundamentally different nature and the popularity of this robot gesture among users. Unlike the repeated discrete motions of rubs and pats, squeezes are modal active gestures that transition between two states that apply lower and higher pressure on the partner, respectively. As shown by the action-response elicitation study, user squeeze actions can vary greatly in duration, and the start and end of a robot squeeze are quite salient to the user. To avoid repeatedly squeezing the user for fixed time intervals, and to follow the user suggestion to have the robot's extended squeeze duration match theirs, HuggieBot 3.0 responds somewhat differently to squeezes compared to discrete active gestures. Specifically, if the robot's chosen response to a detected squeeze is to squeeze the user back, it enters a squeezing state as soon as the squeeze is detected, and it leaves this state only when the perception algorithm detects a hold (which means that the chamber pressure has decreased to near the baseline value and the microphone is not detecting strong activity). At this point, the robot stops squeezing and returns to the normal hug state, where it reacts to detected gestures in the normal way. As shown in Fig.~\ref{fig:robot_state_diagram}, the robot's response to user squeezes has one other special aspect. Observing user behavior during the action-response elicitation study showed that several users squeezed the robot at the same time that they performed rubs or pats. Thus, when the robot is in the squeezing state, it continues monitoring the detected gestures. It stays in the squeezing state when the perceptual algorithm continues detecting user squeezes. If a rub or a pat is instead detected, the robot responds to this discrete active gesture layered on top of the modal active gesture just as it does to a usual discrete active gesture.
The robot's response is again determined using the probabilistic behavior paradigm given in equation~\eqref{eq:3}. Because the robot is already in the squeezing state, the robot squeeze response is not considered, so the robot chooses between the remaining response options for the detected action, which are all discrete and thus take a fixed amount of time. In this way, a user can squeeze the robot, be squeezed in response, then rub or pat the robot's back, and receive a rub or a pat in response, still during the squeeze. \subsection{Responding to Passive Gestures} \label{subsec:passive} The final case occurs when the perceptual pipeline detects that the user is passively holding the robot and not performing an active gesture. Based on the positive user ratings and comments found for proactive robot gestures in the action-response study, it is important for a hugging robot to occasionally take the initiative to perform a gesture when it detects only hold gestures from the user. Providing proactive affective touch is more subtle than responding to discrete actions because the robot must take the temporal element into account to ensure that the person is genuinely not doing anything over an extended period. To determine the frequency at which proactive robot affective touch should occur, we looked at the rate at which the investigator remotely activated the gestures in the action-response elicitation study. The average time delay between the end of one proactive robot gesture and the beginning of the next one across the holding hugs of the 32 participants was 1.5 seconds. Thus, the behavior algorithm waits until it has detected the hold user action continuously for 1.5 seconds (approximately seven overlapping windows) before proactively initiating an intra-hug gesture.
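The hold-watchdog logic described above can be sketched as a small counter over the roughly 4.5~Hz detection stream; the class name and structure are our assumptions for this sketch:

```python
class ProactiveTrigger:
    """Fire a proactive gesture after a run of consecutive 'hold' detections.

    With detections arriving at ~4.5 Hz, seven consecutive holds span
    roughly the 1.5 s delay observed in the action-response elicitation
    study.
    """
    def __init__(self, required_holds=7):
        self.required_holds = required_holds
        self.count = 0

    def update(self, detected_gesture):
        """Feed one detection; return True when the robot should act."""
        if detected_gesture == "hold":
            self.count += 1
        else:
            self.count = 0  # any detected active gesture resets the timer
        if self.count >= self.required_holds:
            self.count = 0  # restart the wait after triggering
            return True
        return False
```

Each time `update` returns True, the robot would draw a proactive gesture from the hold-row probabilities and perform it.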
Our probabilistic behavior model is again used to determine which gesture the robot should perform, as described in equation~\eqref{eq:3}; the calculated conditional probabilities are $p_{\textrm{hold|hold}}=0.11$, $p_{\textrm{rub|hold}}=0.22$, $p_{\textrm{pat|hold}}=0.10$, and $p_{\textrm{squeeze|hold}}=0.57$. This definition completes our description of how HuggieBot 3.0 behaviorally responds to gestures detected by its perceptual pipeline. \section{Introduction} \label{Introduction} \begin{figure} \includegraphics[trim = {0cm 0cm 0cm 0cm}, clip, width=\textwidth]{figures/Slide1.pdf} \caption{The four intra-hug gestures that our hugging robot, HuggieBot, can perform, either in response to a user action or proactively when it does not detect any user actions. Despite their importance during prolonged hugs between humans, no prior hugging robot has been able to detect and respond to intra-hug gestures.} \Description{} \label{fig:teaser} \end{figure} From the moment we are born, social touch affects our future ability to function well in society. Infants who are held by their mothers for two hours after they are born have better interactions with their mothers and are better at handling stress \cite{Uvnas-Moberg2014}. In such a close, positive relationship, the hormone oxytocin is released when the two partners see, hear, or even think of each other. In turn, this release bonds them even more closely and improves positive human relationships. A similar positive calming response can be evoked during an embrace, a massage, or other stroking of the skin, often called deep pressure touch \cite{SqueezeMachine}. Hugs that last more than three seconds often include intra-hug gestures, like squeezes and rubs (Fig.~\ref{fig:teaser}), that create a close physical exchange between the two participants and confer additional benefits from the increased deep pressure touch \cite{3secondhug}. 
Not everyone is fortunate enough to have close, positive relationships with people around them. If a similar effect can be achieved through a robotic embrace, this helpful touch can benefit people who otherwise would not be able to experience hugs. The broader goal of this research project is to \textit{provide an embodied affective robot that can supplement human hugs in situations when requesting this form of comfort from others is difficult or impossible}. Many common examples where people lack access to human hugs stem from long-term physical separation. During the current global pandemic of COVID-19, some family members have been unable to come into close contact with each other for more than one year, and the effects are showing. Lack of social touch can be detrimental to both our physical and mental health \cite{DepressedTeensFromSocialMedia, InternetAndDepression, SocialTouchDevelopment}. Accurately replicating a human hug is a difficult problem because it requires \textit{real-time adaptation to a wide variety of users}, \textit{close physical contact}, and \textit{quick, natural responses to intra-hug gestures performed by the user}. In the past, researchers have avoided tackling these challenges by providing a huggable device that does not actively hug the user back, thereby entirely sidestepping the challenge of reciprocating a hug \cite{HuggablePillowWithPhone, Hugvie, TheHuggable, disalvo2003hug}. Others have chosen to create robots that hug in a ``one-size-fits-most'' model \cite{HugShirt, hedayati2019hugbot, shiomi2017hug, shiomi2017robot, tsetserukou2010haptihug}. Another set of researchers adjusted the robot to each specific user prior to experimentation, thereby avoiding the challenge of real-time adaptation \cite{block2019softness, block2018emotionally}.
\citet{TheSixHugCommandments} recently introduced HuggieBot 2.0 as the first robot that uses visual and haptic perception to deliver closed-loop hugging that adapts to the circumference of the user and their preferred hug timing; however, this robot could not perceive or respond to intra-hug gestures, and user testing revealed other limitations. Section~\ref{RelatedWork} further details prior research in this domain, discussing the importance of social touch between humans, summarizing different forms of technology-mediated social touch, reviewing the state of the art for social touch in human-robot interaction (HRI), and summarizing scientific approaches to evaluating user experience. We accept and build upon the six design guidelines for hugging robots previously presented by Block et al. \cite{TheSixHugCommandments}, which state that a good hugging robot should: (G1) be soft, (G2) be warm, (G3) be sized similarly to an adult human, (G4) visually perceive and react to an approaching user, (G5) autonomously adapt its embrace to the size and position of the user's body, and (G6) reliably detect and react to a user's desire to be released from a hug regardless of their arm positions. Section~\ref{DesignGuidelines} presents a \textbf{refined version of G4} plus \textbf{five additional design guidelines} for autonomous hugging robots that we derived from the findings of the two studies reported in this paper. Our newly presented guidelines state that a good interactive hugging robot should: (refined G4) autonomously initiate a hug in a consensual and synchronous manner, (G7) adapt to the user's height, (G8) perceive intra-hug gestures in real time, (G9) respond quickly to detected gestures, (G10) slightly vary its response to each detected intra-hug gesture, and (G11) occasionally perform proactive affective gestures during hugs. Section~\ref{DesignGuidelines} also defines the goal of this research and outlines our reasoning for the process we followed.
To create a robot that can autonomously deliver pleasant, natural-feeling hugs, we first conducted a Wizard-of-Oz user study (action-response elicitation study) with HuggieBot 2.0 to collect data on intra-hug gestures; Section~\ref{UserStudyMethods} explains the methods, and Section~\ref{Results} presents and briefly discusses the results. As described in Section~\ref{Improvements}, we improved several aspects of the platform's hardware and software based on user feedback from this study as well as a prior evaluation of HuggieBot 2.0. The collected data were then used to develop a perception system and a behavioral response algorithm for the updated version of our platform, HuggieBot 3.0. Specifically, Section~\ref{Detection} explains how we analyzed the microphone and pressure sensor data collected from our novel inflated robot torso (HuggieChest) as 32 diverse users performed four distinct intra-hug gestures. Our developed machine-learning methods quickly detect and reliably classify these different gestures. Based on the 32 users' ratings of the different robot responses, we developed a probabilistic behavior algorithm to determine which action the robot should perform in response to a user gesture; it is described in Section~\ref{Response}. Rather than maximizing user acceptance for each robot gesture, which would result in the robot only squeezing the user, our behavior algorithm balances exploration and exploitation \cite{explorationexploitation} to create a natural, spontaneous robot that provides comforting hugs. As detailed in Section~\ref{Validation}, we then ran a follow-up study with sixteen new users to test our detection and classification system's real-world accuracy and evaluate the user acceptance of our robot behavior algorithm (validation study). 
Section~\ref{Results2} shares the results of this study, which show that HuggieBot 3.0 is the \textit{first fully autonomous human-sized hugging robot that recognizes and responds to the user's intra-hug gestures.} Section~\ref{Discussion} discusses the results of the validation study in the context of our six new hugging design guidelines, and it also addresses the limitations of our approach, such as the lab setting and the number of participants in the validation study. Finally, Section~\ref{Conclusion} provides a summary of this article and shows various avenues for future work on this topic. We believe the presented design guidelines can be expanded beyond HuggieBot to give a wide range of companion robots the ability to exchange high-quality hugs with their human users. \section{Design Guidelines and Research Goal} \label{DesignGuidelines} To advance the state of the art in robotic hugging, this article proposes and evaluates a refined version of tenet 4 presented by \citet{TheSixHugCommandments} plus five new design guidelines for the future creation of hugging robots. \begin{itemize} \item[G4.] (refined) When a hugging robot is the one initiating the interaction, it should \textit{autonomously invite the user for a hug} when it detects someone in its personal space. A hugging robot should \textit{wait for the user to begin walking toward it} before closing its arms to ensure a consensual and synchronous hugging experience. \item[G7.] A good hugging robot should \textit{perceive the user's height and adapt its arm positions accordingly} to comfortably fit around the user at appropriate body locations. \item[G8.] It is advantageous for a hugging robot to \textit{accurately detect and classify gestures applied to its torso in real time}, regardless of the user's hand placement. \item[G9.] Users like a robot that \textit{responds quickly to their intra-hug gestures}. \item[G10.] 
To avoid appearing too robotic and to help conceal inevitable errors in gesture perception, a hugging robot should not attempt perfect reciprocation of intra-hug gestures. Rather, the robot should adopt a \textit{gesture response paradigm that blends user preferences with slight variety and spontaneity}. \item[G11.] To evoke user feelings that the robot is alive and caring, the robot should \textit{occasionally provide unprompted, proactive affective social touch} to the user through intra-hug gestures. \end{itemize} \textbf{Research Goal:} We seek to evaluate the extent to which each of these six new design guidelines can improve user perception of hugging robots. \textbf{Research Process:} To test these guidelines, we went through a cycle of collecting a large corpus of sensor data from 32 users, updating HuggieBot 2.0's core hardware and software with extensive pilot testing to address user comments and experimenter observations, creating an intra-hug gesture detection and classification algorithm, developing a probabilistic behavior algorithm for responding to intra-hug gestures, and testing both algorithms on HuggieBot 3.0 in real time with sixteen new users. All human-robot interactions exist somewhere along a spectrum from being highly unnatural and protocol-centric to being fully natural and socially intelligent. As discussed in Section~\ref{subsec:user-experience-evaluation}, different experimental approaches from the literature fall in different positions on this spectrum. The work presented in this manuscript is not completely protocol-free. However, with each iteration of HuggieBot (two of which are discussed in this paper), we are moving the interaction closer toward the natural end of the spectrum. Specifically, the overall experience of hugging HuggieBot 3.0 is more natural than any of the solutions described in the existing literature, as reviewed in Section~\ref{subsec:human-robot-social-touch}. 
We hope that these validated guidelines and our discussion of detailed quantitative and qualitative findings can help roboticists design human-robot hug interactions that are more natural and human-like than the current state of the art. \section{Conclusion and Future Work} \label{Conclusion} We began by collecting a large data set that shows the characteristic microphone and pressure signals for 32 diverse users performing four intra-hug gestures (hold, rub, pat, and squeeze) on the inflated torso of HuggieBot 2.0. We used these recordings to create a perception pipeline that detects and identifies these different gestures in real time. Ratings and comments in reaction to how the robot responded showed that users do not want a robot that mimics their gestures back to them; instead, they want a robot that responds quickly and naturally to their gestures with some level of unpredictability similar to the choices made by a human hugging partner. We thus developed a behavior algorithm that uses conditional probabilities based on user ratings to determine how our robot should respond after detecting a particular user action during a hug, distinguishing between discrete active gestures, modal active gestures, and passive gestures. We also made several critical changes to our robot platform, including changing the method of hug initiation, adjusting the robot's arm positions to the estimated height of the user, constructing a new robot torso, and improving the quality of the robot's embrace. We tested this upgraded version (HuggieBot 3.0) together with its new perception and behavior capabilities on a new set of sixteen diverse users who had not previously interacted with this robot. Users were generally very pleased with the robot's responses to their actions. The platform changes seemed to improve the quality of the interaction, and the perception and behavior approaches that we developed worked very well.
Therefore, we conclude that hugging robots should be able to perceive and respond quickly to intra-hug gestures from users. While there are several possibilities for future work related to this project, we are most interested in five future directions. First, we want to deploy a future version of HuggieBot in a pedestrian environment to measure how many people would naturally be interested in interacting with such a robot in the wild. We will first need to improve the robot's approaching-user detection algorithm, which currently works well with only one user in the frame. To function in a non-controlled (non-lab) environment with many passersby potentially in the robot's field of view, we will need to make this aspect of the robot more robust. HuggieBot will also need to detect if a user changes his/her mind and decides not to hug the robot, rather than eternally waiting with its arms raised, as HuggieBot 3.0 does. Another challenge of conducting an in-the-wild study will be dealing with non-haptic noise detected by the microphone. We are currently taking a baseline at the start of every hug to determine the mean microphone output, but we are not estimating the magnitude or spectrum of background noise that needs to be filtered out. We found that nearby construction sometimes caused high levels of ambient noise during pilot testing, causing our perception pipeline to mistakenly think that the user was continuously rubbing the robot's back. A simpler improvement to the robot's behavior centers on reducing the likelihood that users accidentally end the hug before they intend to do so. We noticed that several users in both studies briefly took their hands off the robot's torso to adjust their grip and grasp the robot tighter when performing a squeeze, as shown in one of the annotated videos included in the supplementary materials. Because the chamber pressure decreased, the robot assumed the user wanted to be released, so it ended the hug.
To make our system more robust, the robot should not act on a single low pressure value: if the pressure decreases and then quickly increases again, the user probably does not want the hug to end, so the robot should not release them. Given the negative social impact of not letting go of a user who wants to be released, it is important to find the right balance between fast and reliable hug termination and avoidance of accidental user releases. A third aspect we are interested in investigating is whether the calming effects of robot hugs are physiologically similar to the calming effects of human hugs. We propose to investigate this question by safely inducing stress upon voluntary participants and providing either an active human hug, a passive human hug, an active robot hug, or a passive robot hug. Over the course of the experiment, we would periodically collect saliva samples from users to measure the cortisol and oxytocin levels in their bodies, also recording heart rate, video, and subjective opinions of the experience. Running a study of this type would enable us to more confidently say whether an embodied affective robot like HuggieBot 3.0 truly has the potential to supplement human hugs in situations when requesting this form of comfort from others is difficult or impossible. Another element we are looking forward to researching is the extent to which a future version of HuggieBot can help strengthen personal relationships between people who are separated by physical distance. To do this, we have already developed a mobile app, the HuggieApp, that allows remote users to send customized hugs to local users (via the robot). The sender can customize the hug's duration and tightness, and they can add a variety of intra-hug gestures. They can even replace the robot's animated face with a video message for the receiver. The local user redeems the hug by scanning a custom QR code on their mobile phone at the robot's camera, which is located above its face screen.
Users can re-redeem their favorite hugs as many times as they want, which could be especially meaningful if the original sender has passed away. For several months, we plan to observe pairs of platonic users interacting with each other through this future version of HuggieBot; we hope to evaluate the relationship's perceived closeness to see the extent to which this type of embodied affective robot can help bridge physical distance between people. A final potential future direction would be to transfer the capabilities of HuggieBot to a more generalized companion or care robot. Of particular interest could be combining the comfort of a hug with socially assistive robots in care homes. Adults over 65 are the fastest-growing demographic, and there are not enough workers to care for them \cite{roberts20119}. Depression in older adults is extremely common as they can struggle with medical illness, cognitive dysfunction, physical separation from their friends and family members, or a combination of the three \cite{taylor2014depression}. Robots like Paro \cite{Chang2013} and Stevie \cite{taylor2021exploring} show that not only is there great interest in robots being used with older adults, but that they can be beneficial to these users. Considering the proxemic work of \citet{mead2017autonomous} could help us enable HuggieBot to perform both robot- and human-initiated hugs, making the entire experience more natural and hopefully more enjoyable for users. Equipping a more generalized care robot, which can already assist with a variety of tasks, with the ability to provide high-quality embraces could help address older adults' unmet social, physical, and emotional needs. \section{Discussion and Limitations} \label{Discussion} \subsection{Discussion} This research project investigates the previously unstudied phenomenon of intra-hug gestures during hugs between a human user and an autonomous robot.
We found users to be positively interested in hugging a robot that can both respond to their gestures and proactively perform gestures on its own. The results support all six of the new design guidelines we proposed for hugging robots. \paragraph*{G4: Hug Initiation} First, our revised version of the fourth tenet states that when a hugging robot is the one initiating the interaction, it should autonomously initiate a hug when it detects a user in its personal space by inviting the user for a hug; it should then wait for the user to begin walking toward it before closing its arms to ensure a consensual and synchronous hugging experience, as done by HuggieBot 3.0. The previous evaluation of HuggieBot 2.0 did not find a statistically significant user preference between a hug initiated with a button press and a hug initiated via computer vision detection of the user's approach~\cite{TheSixHugCommandments}. Users gave our new two-phase hug initiation method an average naturalness rating of almost a 7 out of 10. All users were able to initiate hugs with the robot after watching a simple video demonstration twice, without requiring the detailed instructions that were previously provided. Thus, we conclude that the new hug initiation method is an improvement over the previously tested methods. We had also pilot-tested an alternative three-step hug initiation method that was harder for users to master and received an average naturalness rating of only 4.62 out of 10 from twelve pilot participants. Our observations repeatedly indicate that consensual and synchronous hug initiation is indeed important. Although it still has room for improvement, rather than forcing the user to hurry up and wait, the new two-phase method more closely mimics how hugs occur between humans and lets the user decide when to start hugging the robot.
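The two-phase initiation described above can be viewed as a small state machine: the robot invites when it detects a user in its personal space, then closes its arms only once the user starts approaching. The sketch below is illustrative only; the distance thresholds are assumptions, not values from HuggieBot 3.0.

```python
class HugInitiation:
    """Two-phase hug initiation as a state machine (illustrative sketch).

    Phase 1: when a user is detected within the robot's personal space,
    the robot invites them (raises its arms and asks for a hug).
    Phase 2: the robot waits until the user starts walking toward it
    before closing its arms. Both thresholds are hypothetical.
    """

    INVITE_DISTANCE = 2.0  # m: user close enough to invite (assumed)
    APPROACH_DELTA = 0.15  # m: decrease that counts as approaching (assumed)

    def __init__(self):
        self.state = "idle"
        self.invite_distance = None

    def update(self, user_distance):
        if self.state == "idle" and user_distance <= self.INVITE_DISTANCE:
            self.state = "inviting"  # raise arms, ask for a hug
            self.invite_distance = user_distance
        elif (self.state == "inviting"
              and self.invite_distance - user_distance >= self.APPROACH_DELTA):
            self.state = "closing"   # user is approaching: close the arms
        return self.state
```

Separating the invitation from the arm closing is what makes the interaction consensual and synchronous: the user, not the robot, decides when the hug actually begins.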
Relevantly, \citet{walters2008human} found that both the voice used and previous experience interacting with the robot can have an effect on the mean approach distance to a robot. \citet{mead2017autonomous} extensively worked on understanding interaction potential based on human-robot proxemics. These prior works also support the need for consensual and synchronous hug initiation highlighted by G4. \paragraph*{G7: Adaptation to User Height} The next new design guideline states that a good hugging robot should also perceive the user's height and adapt its arm positions to contact the user at appropriate locations. We thus extended our platform's perceptual capabilities beyond detecting a user's approach so that HuggieBot 3.0 attempts to embrace the user at the proper height -- not too high, and not too low. Our newly developed height adjustment system was created in direct response to user feedback about the inappropriateness of HuggieBot 2.0's hand placement in the action-response elicitation study. The average rating for the appropriateness of HuggieBot 3.0's hand placement was an 8.4 out of 10, with several users giving it the highest rating possible; these ratings indicate that the proposed approach usually succeeds at adjusting for user height. We also believe that the non-significant increases in the quality ratings for the rub and pat gestures (which we did not adjust other than the location of the robot's left hand) can probably be attributed to this improved placement. In human-human touch interactions, there are relationship-specific maps of body regions where touch is considered appropriate \cite{suvilehto2015topography}; the areas where contact is allowed increase with the emotional bond to the toucher. Therefore, from human-human touch interaction research, we confirm that our robot's hand placement on the user's back is important to avoid taboo zones and ensure the comfort of all users, regardless of the relationship they associate with the robot.
Thus, we conclude that users prefer hugging a robot that adjusts its arm placement to match their height. \paragraph*{G8: Gesture Perception} The next design guideline centers on enabling a hugging robot to accurately and reliably detect and classify user gestures applied to its torso in real time, regardless of the user's hand placement. Both of our user studies demonstrated the excellent haptic sensing capabilities of the pressure sensor and microphone inside the inflated chamber of HuggieChest; simple signal-processing and machine-learning techniques were able to detect and classify contacts very well, even when the pipeline was transferred to new sensing hardware and adapted with limited new training data. As our subjects used various hand positions on the robot (both arms below the robot's arms, both above, or one above and one below), we found that this non-localized haptic sensing system works well regardless of user hand placement on the back chamber. Based on the average gesture detection accuracy of 86\% for the sixteen participants in the validation study, along with the positive opinions users shared when the robot responded to their gestures in both studies, we believe the results support the validity of this design guideline. In further exploring cases when the detection algorithm did not perform well, we found that users frequently performed the gesture in an unexpected or uncommon way; in some cases, these variations may have come from the user's limited vocabulary for intra-hug gestures in English. As a surprising benefit, however, the perception pipeline was able to detect some rubs and pats performed at the same time as a squeeze, even though it had not been trained to do so. \paragraph*{G9: Fast Response} The ninth design guideline simply states that users like a robot that responds quickly to their intra-hug gestures. 
As seen in Fig.~\ref{fig:BehaviorMatrix}, when a user performed an intra-hug gesture on the robot's back, and the robot did not respond, users perceived this as a neutral robot behavior on average. In their written and verbal comments to the experimenter, users indicated they did not feel like the robot ``understood'' them, ``knew [they were] there,'' or ``wanted to support/comfort [them].'' Users clearly preferred when the robot indicated that it knew the user had performed an intra-hug gesture and responded quickly in some way. When we were piloting the validation study, we noticed that a technical error occasionally caused the sampling rate of the microphone and pressure sensor to drop to about half the normal value. We found that when users interacted with the robot in this condition, the delay was highly noticeable and detracted from the user experience. Several pilot subjects mentioned that ``it felt like the robot was performing random actions at random intervals, not in response to anything I was doing.'' During our final validation study, when the sampling rate issue was fixed, the robot responded quickly to user actions, and our subjects were delighted. They repeatedly performed the same gesture to experience the robot response again. Users would often comment to the experimenter during the hug itself, saying ``it tapped me back!'' (P4), or remarking after the hug that ``every time I did an action, it noticed and did something back to me!'' (P16) because they were so pleasantly surprised at the responsiveness of the robot. This user desire for a fast robot response time aligns with expert therapist opinions that robotic systems for perceiving social touch from humans also have strict timing requirements \cite{BurnsSeifiLeeKuchenbecker2021}. 
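The sampling-rate glitch mentioned above maps directly onto response latency. As described for the validation study, the pipeline samples at 45~Hz and classifies every 10 samples, so the robot decides roughly every 0.22~s; halving the sampling rate doubles that gap, consistent with the noticeable delay pilot users reported. A quick sketch of the arithmetic:

```python
def decision_period(sampling_rate_hz, samples_per_decision=10):
    """Time between gesture-classification updates, in seconds.

    With the validation study's 45 Hz sampling and a classification
    every 10 samples, decisions arrive roughly every 0.22 s; at half
    the sampling rate, the gap doubles to roughly 0.44 s.
    """
    return samples_per_decision / sampling_rate_hz

normal = decision_period(45.0)    # ~0.22 s between decisions
degraded = decision_period(22.5)  # sampling rate halved: ~0.44 s
```

This back-of-the-envelope view suggests why the degraded condition felt unresponsive: each extra fraction of a second between decisions is added on top of the robot's actuation time.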
A relevant common psychology study is called ``infant response to still-face,'' where a mother, who had been interacting and playing normally with her child, suddenly stops smiling and talking to her infant while the experimenters observe the child's response \cite{toda1993infant}. Children commonly become distressed when their mothers no longer respond, and can cry, become fussy, and grasp at themselves and their mothers, trying to get attention. The \textit{delayed response} from their mother significantly upsets them. Through the combination of results from our two user studies, as well as research from human-human interactions, we believe there is support for our ninth guideline suggesting a fast robot response to intra-hug gestures. \paragraph*{G10: Response Variety} The next design guideline states that hugging robots should adopt a gesture response paradigm that blends user preferences with slight variety and spontaneity. When starting this project, we believed that hugging robots should always reciprocate the same intra-hug gesture the user had performed. The results from the action-response elicitation study surprised us by showing that rote reciprocation is not expected and would not be perceived in a fully positive way. If users preferred gesture reciprocation, we would see a dark pink diagonal in Fig.~\ref{fig:BehaviorMatrix}. Instead, we see a slight preference for a robot to respond to any user action with a squeeze. Speaking with our users showed us that they appreciate variety in robot responses. Something about the unpredictability of the response leads users to feel it is more ``alive.'' Users also mentioned that having the robot respond with the same action as the user performed feels ``too mechanical,'' because based on the input you know exactly what output you will receive.
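One simple way to realize such a blend of preference and spontaneity is to sample the robot's response from a preference-weighted distribution. The sketch below is illustrative only and is not the paper's actual behavior paradigm (equation~(3)); the gesture names and scores are placeholders, and the exponent $m$ trades off exploitation of the favorite response against exploratory variety.

```python
import random


def choose_response(preference_scores, m=2.0, rng=random):
    """Sample a robot response from preference-weighted probabilities.

    Illustrative stand-in for a behavior paradigm blending user
    preferences with spontaneity: each candidate gesture is chosen with
    probability proportional to its preference score raised to the
    exponent m. Larger m favors the top-rated response more strongly;
    m = 0 picks uniformly at random.
    """
    gestures = list(preference_scores)
    weights = [preference_scores[g] ** m for g in gestures]
    total = sum(weights)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for gesture, weight in zip(gestures, weights):
        acc += weight
        if r <= acc:
            return gesture
    return gestures[-1]  # guard against floating-point round-off
```

With a moderate $m$, the robot usually returns the preferred response (for instance, a squeeze) but occasionally surprises the user with another gesture, matching the ``alive,'' non-mechanical feel users described.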
The results from the action-response elicitation study thus support this design guideline, as do the very positive user reactions to the resulting robot behavior algorithm tested in the validation study. We believe the slightly spontaneous robot hugging behavior enabled by our simple probabilistic behavior paradigm (equation~\eqref{eq:3}) succeeds at blending user preferences with spontaneity to reasonably match natural human exchanges of intra-hug gestures. The behavior algorithm's tendency to prefer exploration versus exploitation can also easily be adjusted by changing the value of the exponent $m$. Interestingly, most human-human research found that mimicry improves the perception of another person \cite{chartrand2013antecedents, van2009love}. The chameleon effect is a common phenomenon in which humans unconsciously mimic the gestures and facial expressions of an interaction partner to match their type of social expression and level of extroversion \cite{chartrand1999chameleon}. In this instance, our users made it clear that explicit mimicry from the robot is not appreciated, but that they did want similar levels of support. This finding could be seen as similar to human interactions. In a social environment, it would be uncomfortable if a social partner were obviously copying you, but responding to gestures with similar levels of enthusiasm seems warranted. \paragraph*{G11: Proactive Robot Gestures} This design guideline states that hugging robots should occasionally provide unprompted proactive affective social touch to the user through intra-hug gestures. The findings of~\citet{ClinicalRobotTouch} made us initially hypothesize that users would dislike robot-initiated affective social touch delivered via unprompted intra-hug gestures; their users reacted negatively when a robot attempted to comfort them by touch but did not mind functional contact from the robot. The findings of our two studies explicitly contradict this hypothesis and support G11.
We were so surprised by these ratings during the action-response elicitation study that after the user had finished explaining their ratings, the experimenter asked the follow-up question, ``so just to clarify, it did not bother you that you did nothing and the robot unprompted started rubbing/patting/squeezing you?'' Users confirmed that not only did they not mind this robot behavior, but they also \textit{enjoyed and appreciated it}. Users indicated that while the robot would respond to their gestures in the other cases, here they felt the robot was comforting them. In these cases, many users commented that they felt the robot's emotions and feelings and that it cared more about them when it chose on its own to perform a gesture, rather than just responding. Although more work needs to be done to confirm this positive finding, it seems that appropriately framed robot-initiated affective touch may be key to creating robots that can provide good emotional support to human users. How can we grapple with the seemingly conflicting findings between our work and \citet{ClinicalRobotTouch}? We believe these results are not as different as they may appear. The users in our studies \textit{agreed to enter into a hug with a robot}, so we believe they also felt at least partially responsible for initiating the affective touch that occurred during the resulting hug. Once this initial boundary is broken, we believe users are more receptive to proactive robot affective touch, for example, a rub, pat, or squeeze. Users in all of our studies have appreciated that HuggieBot \textit{politely asked} them for a hug, thereby allowing them to agree to this affective touch. Many users even responded affirmatively to the robot every time it asked the question, even though they knew it never listened to their answer.
By changing the hug initiation method to be prompted by the robot lifting its arms for a hug and asking ``Can I have a hug, please?'' and then waiting with its arms outstretched for the user to approach, we further put the initiation of the affective social touch on the user, solidifying that it is their choice to enter the hug. We believe user initiation is key to acceptance of future social, affective touch from a robot. We therefore firmly believe G11's statement that robots can evoke user feelings that the robot is alive and caring by occasionally providing unprompted affective touch to the user, as delivered by HuggieBot 3.0 through intra-hug gestures. \subsection{Limitations} While the research described in this paper presents several key contributions to robotic hugging and broader social-physical human-robot interaction, we nevertheless acknowledge several limitations of our work. The first limitation is the somewhat artificial methodology of our studies. We recognize the importance of conducting in-the-wild studies for human-robot interaction research. By conducting these studies in a laboratory environment, we have a self-selection bias of our participant pool. Only users who were interested in hugging a robot chose to participate in the reported studies. Unfortunately, due to the current COVID-19 pandemic, lab studies were the safest way to conduct research on hugging robots. We were able to screen participants for potential health risks and thoroughly sanitize the robot between subjects. For the validation study, we changed the robot's introduction from verbal instructions from the experimenter to having the user watch a simple video of another user hugging the robot. This video introduction was meant to mimic how users would learn to use the robot in the wild. Once the COVID-19 crisis has ended in our region, we look forward to conducting a thorough in-the-wild study to see how many everyday people would and would not be interested in hugging a robot. 
Additionally, the COVID-19 pandemic also reduced the number of participants we could recruit for our second user study; the results from the validation study have lower statistical power (roughly 50\%) than would typically be presented and thus should not be the only results taken into consideration. Next, our refined fourth guideline is limited in that it addresses only situations where the robot is the one initiating the interaction. If the user is the one requesting the hug from the robot, the first half of the guideline should be ignored, and the robot should observe only the second half, which is ``waiting for the user to begin walking toward it before closing its arms to ensure a consensual and synchronous hugging experience.'' Additionally, other researchers such as \citet{ClinicalRobotTouch} have shown the importance of context with respect to the acceptance of social touch. The reported studies did not use any specific context beyond the narrative descriptions provided by the experimenter. Future work could study HuggieBot in different contexts, such as a nursing home or shopping mall, and evaluate the effect each context has on user expectations and interpretations of the interaction. We have also identified two main limitations of the Kinova JACO arms used in both HuggieBot 2.0 and 3.0. The first is speed: while we adjusted the hug initiation process to accommodate this limitation as well as possible, the Kinova JACO arms simply cannot move fast enough to mimic the speed of a human's arms closing. These arms were selected for safety reasons, and this speed limit was considered during that choice. After extensive user testing, we have found that the speed limit causes a real limitation on the naturalness of the user experience because subjects have to wait several seconds before the arms have fully closed around them.
A second limitation of these arms is that repeated small movements of the first, second, and third joints (shoulder lift, shoulder pan, and elbow flex, respectively) occasionally result in a sudden short but fast jerking movement, which startles the user. This phenomenon occurs only rarely during repeated rubs or pats. When this issue occurred during the two reported studies, we immediately commanded the robot to release the user, checked that they were okay, verified whether they wanted to continue, discounted the trial with the malfunction, and restarted the trial. Because the sudden motion is very small, this technical glitch never hurt a user. However, it is likely that it negatively affected some user ratings of HuggieBot as a whole, as well as the robot's ability to perform rubs and pats. This work is also limited by the fact that we have simplified the problem of gesture classification significantly by focusing on only four gestures. There are infinitely many gestures a person could choose to perform during a hug, and there are infinitely many ways they could perform each gesture. A user could even combine multiple gestures together. We chose to select a simple subset of four classic gestures and their combinations (e.g., squeeze-pat) as a first step into this new research area. We currently do not estimate the intensity with which users perform these gestures, nor do we measure the location where gestures are performed. Interesting future steps would be to measure the intensity and/or the location of user gestures to enable the robot to reciprocate gestures with an appropriate intensity and/or location on the user's back. Another limitation is that both of our studies asked the user to perform intra-hug gestures somewhat artificially. After placing their hands on the robot's back, users had to wait for the robot's arms to close fully before performing a gesture.
This pause was used to collect baseline measurements for the microphone and pressure signals so that the real-time perception pipeline could determine what gestures the user subsequently performed. We found that many users naturally wanted to start performing the gesture immediately after beginning the hug, regardless of the robot's arm movements. To collect data and then test our algorithm's accuracy, we also asked users to perform only one gesture per hug, though they could perform the gesture repeatedly if they chose. This restriction was also somewhat unnatural. We found that many users naturally wanted to combine gestures. We added the natural hug scenarios in phase 1 and phase 3 of the validation study to address this limitation. Our action-response elicitation study challenged users with the difficult task of separating the appropriateness of the robot's response from the quality with which HuggieBot 2.0 performed the gesture. We had them explain their ratings to the experimenter to ensure they understood the distinction and were answering the question correctly. Nonetheless, the robot's gesture quality probably affected other user ratings. Users experienced a similar challenge in the validation study, where we again asked them to rate the robot's responses separately from the quality of the robot's gestures; gesture quality thus probably also affected these results. Another limitation from the action-response elicitation study is that we did not ask users to rate the naturalness of the hug initiation process or the appropriateness of the robot's hand placement. We had not realized these aspects of HuggieBot 2.0's behavior would garner negative comments and thus need to be adjusted for the new version of the platform. Thus, we had to rely on written and verbal comments to evaluate the effects of these changes. An additional limitation involves our evaluation of the user experience.
Though we aimed to assess it in an accurate manner and specifically collected data in multiple ways to facilitate comparisons, it is possible there were still problems. First, whenever collecting self-reported data, the questions will be subject to the interpretation of the users, who may not have the same understanding \cite{podsakoff1986self}. We used pilot testing to make our questions as clear and unambiguous as possible. We also did our best to conceal which aspects of the robot we were evaluating, so as to avoid the demand effect, where a participant tries to respond in a way to confirm or deny the hypothesis of a study \cite{nichols2008good}. Though we acknowledge that a participant could have deduced what we were testing for, we do not think it is likely that participants responded untruthfully because we consciously conveyed equipoise throughout the experiment and because several of our findings did not match our initial hypotheses. Finally, as with any technology, there is the concern of the novelty effect, that users' attitudes and preferences will wane over time \cite{leite2009time, kidd2005human}. We aimed to mitigate this effect by conducting long experiments and querying user opinions both before and after one and a half hours of robot hugs. Nevertheless, for almost all of our users, the reported study was their first experience interacting with HuggieBot, and for many of them it was their first time interacting with any robot. To better evaluate the influence of the novelty effect on user evaluations with HuggieBot, future studies should have users interact with HuggieBot over the course of many weeks or months. Finally, having a robot that fully understands a human hug is very challenging. We acknowledge that the current version of our robot does not deliver on the full aspirational goal of a hugging robot. 
Rather, HuggieBot simulates a hug in a reasonably compelling way, and our data suggest that users enjoy the hug and can engage with the robot and relate to it as an autonomous being. However, in its current state, HuggieBot does not have an internal emotional model similar to humans, and thus it is not capable of engaging in the embodied emotional experience of a hug. \section{Validation Study -- Results} \label{Results2} We analyzed the system's ability to perceive intra-hug gestures, the users' survey responses about the quality of their interactions, and user comments from all parts of the validation study to characterize HuggieBot 3.0's skill at autonomously and interactively hugging users. For reference, two annotated videos of participants performing active intra-hug gestures during their concluding hugs are included as supplementary material for this article; the annotations indicate the actions performed by the user, the actions perceived by the robot, and the gestures that the robot decides to execute in response. A video of a participant performing an extended hold during a hug in phase 2 is also included, annotated in the same way. Appendix \ref{app:vidplots} presents plots showing the joint angles, joint torques, microphone signal, and pressure signal for all three of the annotated videos included as supplementary material. \subsection{Performance of the Perception Pipeline} We analyzed data recorded by the robot during the second phase of the validation study to estimate the gesture perception pipeline's accuracy. The recorded data for each hug included timestamps, microphone voltage, and pressure values with a 45~Hz sampling rate, as well as the gestures detected every 10 samples. To determine accuracy, one of the authors visualized the data from the four hugs in the second phase of the experiment for all users (a total of 64 trials). 
As previously described, the participant was instructed to perform a specific gesture (e.g., squeeze) during each of these hugs. Since the users often interleaved their active intra-hug gestures with passive pauses, we calculated accuracy as the number of data points detected with the correct gesture or hold over the total number of detected gestures between the start and end of each hug. The mean detection accuracy for the sixteen participants was 85.9\% (standard deviation = 12.5\%). This overall accuracy is comparable to the perception pipeline's accuracy (88\%) on the data set collected in the action-response elicitation study. Figure~\ref{fig:validation_accuracy} presents the perceptual accuracy for each participant. The gesture detection accuracy is above 86\% for eleven participants, above 73\% for four other participants (P2, P6, P7, P16), and at 53.5\% for one participant (P13). We examined the trials that had low detection rates and noted that sometimes the participants performed a gesture in an unexpected way. For example, for the squeeze gesture, P16 released the pressure on the chamber and applied it again instead of increasing the pressure (P16 -- Squeeze, Figure \ref{fig:WrongTrials}). At other times, the participant applied little pressure (P13 -- Squeeze, Figure \ref{fig:WrongTrials}) or squeezed the robot while performing another gesture (P6 -- Rub, Figure \ref{fig:WrongTrials}). In some cases, the participant's accidental move or gesture, such as shifting their body against the front of the robot's torso, was detected as a rub (P13 -- Hold, Figure \ref{fig:WrongTrials}). The algorithm also sometimes misclassified the start or end of a pat as rubbing. 
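The per-hug accuracy measure just described, i.e., correct-gesture or hold detections over all detections between the start and end of a hug, can be computed as follows (a minimal sketch; the label names are assumptions):

```python
def gesture_accuracy(detections, instructed_gesture):
    """Per-hug detection accuracy for the validation study (sketch).

    `detections` is the sequence of gesture labels the pipeline emitted
    between the start and end of one hug. Because users interleave their
    active intra-hug gestures with passive pauses, a detection counts as
    correct if it matches either the instructed gesture or "hold".
    """
    if not detections:
        return 0.0
    correct = sum(1 for d in detections if d in (instructed_gesture, "hold"))
    return correct / len(detections)
```

Averaging this quantity over each participant's four instructed-gesture hugs yields the per-participant accuracies plotted in Fig.~\ref{fig:validation_accuracy}.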
\begin{figure}[t] \includegraphics[width=0.8\columnwidth, trim = {0.1cm 0.1cm 0.1cm 0.1cm},clip]{figures/DetectionPerParticipant_1-1-2021.pdf} \vspace{-0.3cm} \caption{Gesture perception accuracy per participant.} \label{fig:validation_accuracy} \vspace{-0.3cm} \end{figure} \begin{figure}[t] \includegraphics[width=\columnwidth, trim = {0cm 2.75cm 0cm 2.75cm},clip]{figures/WrongTrialsFixedTime.pdf} \vspace{-0.3cm} \caption{Example misclassified trials where the participants performed a gesture in an unexpected or uncommon way. Each panel's title gives the participant number and the action they were instructed to perform. The colored data points mark the time periods automatically classified as each of the four intra-hug gestures; following the color convention of Fig.~\ref{fig:teaser}, hold is green, rub is yellow, pat is purple, and squeeze is red.} \label{fig:WrongTrials} \vspace{-0.3cm} \end{figure} \subsection{User Experience} For all statistical analyses of the user responses to the questionnaires, we used an alpha value of $\alpha = 0.05$ to determine significance. To handle the problem of multiple comparisons and to lower the likelihood of a type I error, we used the Holm-Bonferroni method for alpha correction \cite{Holm1978}. The Bonferroni correction is the simplest and most conservative approach to apply to multiple comparisons; however, the consequence is an increased likelihood of type II errors \cite{vanderweele2019some, weisstein2004bonferroni}. When the Bonferroni correction is considered too conservative, it is recommended to use a Holm-Bonferroni or a Hochberg correction \cite{armstrong2014use}. We have chosen a Holm-Bonferroni correction because it has the added benefit of showing more power compared to the Bonferroni procedure \cite{kim2015statistical}.
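The Holm-Bonferroni step-down procedure used for alpha correction can be sketched in a few lines:

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm-Bonferroni step-down procedure.

    Returns a list of booleans marking which hypotheses are rejected.
    P-values are tested in ascending order against alpha / (n - k),
    where k is the 0-based rank; testing stops at the first failure.
    """
    n = len(p_values)
    order = sorted(range(n), key=lambda i: p_values[i])
    reject = [False] * n
    for k, i in enumerate(order):
        if p_values[i] <= alpha / (n - k):
            reject[i] = True
        else:
            break  # all remaining (larger) p-values also fail
    return reject
```

For example, with fifteen p-values whose smallest three are 0.0023, 0.0035, and 0.0084, the first two are rejected ($0.0023 \le 0.05/15$ and $0.0035 \le 0.05/14$) but $0.0084 > 0.05/13$ stops the procedure, so the remaining comparisons are not significant after correction.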
When comparisons are significant, we report effect sizes using MATLAB 2019b's built-in function for Pearson's linear correlation coefficient: \texttt{$\rho$ = corr(X)}, where \texttt{X} is the matrix of data being evaluated. The value of $\rho$ signifies the strength of the bivariate relationship. $\rho = 0.1$ shows a small effect size, $\rho = 0.3$ indicates a medium effect size, and $\rho = 0.5$ or above signals a large effect size. \subsubsection{Opening and Closing Surveys} Box plots of the user responses to the opening and closing survey questions from Table~\ref{table:OpenCloseSurvey} are shown in Fig.~\ref{fig:PrePost}. In this study, answers were submitted on a continuous sliding scale from 0 (disagree) to 10 (agree), so a paired t-test comparison of the opening and closing survey was conducted after verifying the data were normally distributed. After applying the Holm-Bonferroni method for alpha correction (with $n = 15$ for the fifteen questions in the survey), we found that users felt significantly more understood by the robot ($p = 0.0023$, $\rho = 0.21$) and felt the robot was significantly nicer to hug ($p = 0.0035$, $\rho = 0.59$) after the experiment. Three other comparisons approached significance: users liking the presence of the robot ($p = 0.0084$, $\rho = 0.72$), thinking the robot could support them ($p = 0.0372$, $\rho = 0.92$), and viewing the robot as a social agent ($p = 0.0208$, $\rho = 0.68$). \begin{figure}[t] \includegraphics[width=\columnwidth, trim = {7cm 0cm 5cm 0cm},clip]{figures/adjustedalphaprepost.eps} \vspace{-0.3cm} \caption{Box plots comparing the responses to the opening (blue) and closing (red) survey questions given in Table~\ref{table:OpenCloseSurvey}. The top and bottom of each box represent the 25th and 75th percentile responses, respectively, while the line in the center marks the median, and the triangle shows the mean. The lines extending past the boxes show the farthest data points not considered outliers.
The + marks indicate outliers. The two black lines with stars at the top of the graph indicate statistically significant differences.} \label{fig:PrePost} \vspace{-0.3cm} \end{figure} \subsubsection{Introductory and Concluding Hugs (Phases 1 and 3)} The first and third phases of the study involved asking the participants to perform several natural hugs with the robot. The responses to the four questions from Table~\ref{table:FourQuestions} asked after the introductory hugs (phase 1) and the concluding hugs (phase 3) can be seen in Fig.~\ref{fig:IntroConcl}. These responses were also analyzed using a paired t-test comparison and a Holm-Bonferroni alpha adjustment. We found that users did not initially find the robot's hugging behavior very natural, but their opinion of it significantly improved by the end of the study ($p = 0.0138$, $\rho = 0.47$). At the end, users also found the robot hugs significantly more enjoyable ($p = 0.0028$, $\rho = 0.77$). Finally, after hugging the robot repeatedly, users found the robot significantly more socially intelligent ($p = 0.0049$, $\rho = 0.84$) than their initial impressions. The users did not significantly change their opinion about the robot's friendliness over the course of the study; it was already rather highly rated after the introductory hugs. \begin{figure}[t] \includegraphics[width=\columnwidth, trim = {4cm 0cm 4cm 0cm},clip]{figures/IntroConclFat.eps} \vspace{-0.3cm} \caption{A box plot comparison of the responses to the four survey questions about the introductory hugs (light turquoise) and concluding hugs (dark turquoise), as listed in Table~\ref{table:FourQuestions}. The three black lines with stars indicate statistically significant differences.} \label{fig:IntroConcl} \vspace{-0.3cm} \end{figure} We also investigated the average hug duration and the average number of gestures users performed during the introductory and concluding hug phases (Fig.~\ref{fig:AverageDuration}). 
For the introductory hug phase, the average hug duration was 22.7$\pm$12.6 seconds, where 12.6 seconds is the standard deviation. The concluding hug phase had an average hug duration of 25.3$\pm$11.0 seconds. The average number of active user gestures (rub, pat, squeeze) detected during the introductory hug phase was 1.41$\pm$1.84. The average number of active gestures detected during the concluding hug phase was 4.03$\pm$3.95. We ran a paired t-test on the average hug duration and the average number of gestures in these two phases. Although twelve of the sixteen users engaged in longer hugs with the robot during the concluding phase than during the introductory phase, we did not find a significant difference for hug duration. However, we did find a significant difference for the number of gestures performed ($p < 0.001$, $\rho = 0.89$), with significantly more gestures performed during the concluding hugs. \begin{figure}[t] \includegraphics[width=\columnwidth, trim = {3cm 0cm 3cm 0cm},clip]{figures/HugDurationAndGestures.eps} \vspace{-0.3cm} \caption{A box plot comparison of the average duration (left subplot) and the average number of intra-hug gestures (right subplot) of the hugs during the introductory hug phase (pale green) and the concluding hug phase (dark green) of the validation study. The black line with a star indicates a statistically significant pairwise difference.} \label{fig:AverageDuration} \vspace{-0.3cm} \end{figure} \subsubsection{Gesture Hugs (Phase 2)} Users experienced four hugs during phase two of the study, performing a specific gesture as many times as they liked in each hug. The robot used its perceptual pipeline and probabilistic behavior algorithm to autonomously decide how to respond to each of the user's actions. Figure \ref{fig:RobotResponseRating} shows a box plot of the user ratings for the overall robot responses for each performed user action. 
For all user actions, the average rating of the robot's combined response is around eight out of ten with a standard deviation of around 1.5 ($r_{\textrm{hold}}=7.90\pm1.77$, $r_{\textrm{rub}}=7.94\pm1.64$, $r_{\textrm{pat}}=7.82\pm1.35$, and $r_{\textrm{squeeze}}=7.96\pm1.42$), indicating that the robot's responses were perceived very positively by users. \begin{figure}[p] \includegraphics[width=\columnwidth, trim = {0cm 0cm 0cm 0cm},clip]{figures/CorrectColors.eps} \vspace{-0.3cm} \caption{A comparison of the user ratings of the robot's autonomous responses to the four different intra-hug actions performed by users in the second phase of the validation study.} \label{fig:RobotResponseRating} \vspace{-0.3cm} \end{figure} \begin{figure}[p] \includegraphics[width=0.8\columnwidth, trim = {5cm 15cm 0cm 15cm},clip]{figures/REDOQuality.jpg} \vspace{-0.3cm} \caption{A matrix showing the user ratings of the quality of the various robot responses from the validation study, following the visualization approach used in Fig.~\ref{fig:BehaviorMatrix}.} \label{fig:QualityMatrix2} \vspace{-0.1cm} \end{figure} \begin{figure}[p] \includegraphics[width=\columnwidth, trim = {4cm 0cm 4cm 0.5cm},clip]{figures/OGvVQualityComparison.eps} \vspace{-0.3cm} \caption{A comparison of the responses to the quality of the robot's gestures during the original action-response elicitation study (pale pink, Fig.~\ref{fig:QualityMatrix}) and the validation study (dark pink, Fig.~\ref{fig:QualityMatrix2}).} \label{fig:QualityComparison} \vspace{-0.3cm} \end{figure} \subsubsection{Additional Closing Survey Questions} At the end of the experiment, users were asked to evaluate the quality with which the robot performed the four different intra-hug gestures. This question was presented and framed the same way as in the action-response study, for continuity and comparison. 
A matrix showing the average rating on a color scale and each user's individual rating in dots can be seen in Fig.~\ref{fig:QualityMatrix2}. Figure~\ref{fig:QualityComparison} shows side-by-side comparisons of the gesture quality ratings obtained in the original action-response elicitation study and those from the validation study. We then compared the quality ratings of the four different robot actions between the two studies using unpaired t-tests, since the two studies had different numbers of participants drawn from entirely separate populations. A Holm-Bonferroni alpha adjustment with $n=4$ did not reveal any significant differences in the perceived quality of the robot gestures. Fig.~\ref{fig:TriggerPlacement} reports how the users rated the naturalness of the hug initiation process and the appropriateness of the robot's hand placement on their back. The average rating of the hug initiation was 6.94$\pm$2.22 on a scale from unnatural (0) to natural (10). The average rating of the robot's hand placement was 8.40$\pm$2.02 on a scale from inappropriate (0) to appropriate (10). One outlier rated the robot's hand placement a 4 out of 10. This user was one of the tallest users we tested, with a height of 1.83~m. Two other users had the same height and rated it 6.3 and 9 out of 10. This user tended not to stand straight when the robot was estimating his height, and thus HuggieBot 3.0 thought he was shorter than he was, which resulted in the robot's arms occasionally being placed lower than the user would have preferred.
\begin{figure}[t] \includegraphics[width=\columnwidth, trim = {2.5cm 0cm 3cm 0cm},clip]{figures/InitiationPlacement.eps} \vspace{-0.3cm} \caption{Box plots showing user ratings of the naturalness of the hug initiation process (left) and the appropriateness of the robot hand placement (right).} \label{fig:TriggerPlacement} \vspace{-0.3cm} \end{figure} \subsection{User Comments} Once again, some of the most informative results come straight from the users in the form of free-response written and verbal comments. Half of the users commented explicitly on the naturalness and ease of initiating the hug with the robot. 31.25\% of the sixteen users specifically commented that the ``timing was well synchronized/done perfectly on time'' (P10, P13) between initiating a hug and the arms closing. One user (6.25\%) initially felt the arms closed too slowly. However, by the end of the experiment, this user stated the timing was ``natural and comfortable'' (P12) for them. Interestingly, two users (12.5\%) thought the robot's arms closed slightly faster than with people and felt they had to walk slightly faster than usual. Regarding the robot's hand placement, 75\% of users mentioned that the location of the robot's hand was ``good'' and ``well-placed exactly where [they] want it'' (P15) at all times. Users specifically mentioned that at no point did it make them feel uncomfortable, which is an improvement over the user comments from the action-response elicitation study. After the introductory hugs, six users (37.5\% of sixteen) mentioned that hugging the robot felt ``strange'' (P2) or ``weird'' (P6, P7, P9) because it was their first time interacting with a robot, and they were not sure what to expect or how the robot would respond. While these users initially found hugging a robot strange, that does not mean they did not enjoy their experience.
50\% of all the users mentioned enjoying their first group of robot hugs, particularly commenting on the robot's ``warmth'' (P4, P5, P10, P12, P16), ``friendliness'' (P2, P7, P12), and ``comfortable appearance'' (P2, P4, P6, P7). Some users even went as far as to say ``I enjoyed the experience as I am away from my family and a warm hug is always comforting'' (P4). Only three users (18.75\%) mentioned feeling uncomfortable or nervous initially. After the concluding hugs session, user comments were much more positive. Fifteen out of our sixteen participants (93.75\%) shared positive comments mentioning ``great overall experience'' (P3, P4) and that they ``liked the robot's responses'' (P4, P5, P6, P12, P15, P16). Users mentioned that they found the hugs to be both ``comfortable'' (P2, P4, P6, P7) and ``natural'' (P4, P7, P10). The one user who did not have as positive comments as the rest mentioned that ``the experience did not feel natural, but it was fun to test how a robot can interact with a human in this way'' (P8). Three users (18.75\%) who were initially tentative about the robot in their comments about the introductory hugs stated how comfortable they eventually found it. Six users (37.5\%) were ``amazed'' (P3, P7) by the experience and came to think of the robot as a ``friend'' (P2, P7, P14) by the end of the concluding hugs phase. We also asked users to share their opinions about the overall experience at the end of the entire study. Many users (62.5\%, ten out of sixteen) shared that they felt the entire experiment was a ``great experience'' or that they ``loved hugging the robot'' (P5, P6). Two users (12.5\%) mentioned that while they enjoyed the interaction, they did not understand the purpose of such a robot as they felt that human hugs are ``irreplaceable'' (P12, P13).
Several of our users (37.5\%, six of sixteen) mentioned how valuable they found this robot especially given the current COVID-19 pandemic, mentioning that ``emotional and mental health are also important'' (P7) but are often forgotten or not addressed. \section{Action-Response Elicitation Study -- Methods} This study serves three main goals. First, we seek additional user comments on all aspects of HuggieBot 2.0 to guide major updates to this platform. Second, this study aims to collect a large corpus of representative haptic sensor data for four common intra-hug gestures so that we can create a perceptual pipeline that detects and identifies these gestures in real time. Third, this study seeks to gather user preferences for how a hugging robot should respond to intra-hug gestures; these opinions will be used to develop a behavior algorithm that enables autonomous hugging robots to respond well to intra-hug gestures that the user performs. \label{UserStudyMethods} \subsection{Robot Platform} \begin{figure}[t] \includegraphics[width=0.9\columnwidth, trim = {0cm 0cm 0cm 0cm}, clip]{figures/SideBySide.pdf} \vspace{-0.3cm} \caption{Views of HuggieBot 2.0~\cite{TheSixHugCommandments} ready for a hug and hugging a user. This custom human-sized hugging robot has two padded arms, an inflated torso, and a face screen mounted to a rigid frame. A camera above the screen visually senses the user at the start of the interaction, and torque sensors on the shoulder flexion and elbow flexion joints are used to embrace the user with a comfortable pressure. A microphone and pressure sensor in the back chamber of the torso are used to detect user contact and detect and classify gestures. 
The user ends the hug by releasing the robot's torso and/or leaning back against the arms.} \label{fig:UserHug} \vspace{-0.3cm} \end{figure} \begin{figure}[t] \includegraphics[width=0.8\columnwidth, trim = {0cm 0cm 0cm 0cm}, clip]{figures/NakedHuggieBot.pdf} \vspace{-0.3cm} \caption{Front and side views of HuggieBot 2.0~\cite{TheSixHugCommandments} without the robe and sweatshirt so that the HuggieChest, heating pads, and foam arm padding can be clearly seen.} \label{fig:NakedRobot} \vspace{-0.3cm} \end{figure} Block et al.\ previously created and validated a custom hugging robot called HuggieBot 2.0 \cite{TheSixHugCommandments}, which we use in this study. As shown in Fig.~\ref{fig:UserHug}, the robot features a v-shaped base that makes it easy for users to get close for a hug. The robot's torso, HuggieChest, is a custom sensing system that simultaneously softens the robot and detects user contacts. It is made of two chambers of sheet polyvinyl chloride (PVC) that are fabricated using a combination of heat sealing and gluing. Inside the chamber located on the back (where users place their hands) is an Adafruit BME680 barometric pressure sensor and an Adafruit electret microphone amplifier MAX4466 with adjustable gain. Both are connected to an Arduino Mega, which sends the data to ROS (Robot Operating System) over serial at about 45~Hz. The pressure sensor is used to detect the start and end of user contact with the back chamber, and both sensors will be used to detect and classify intra-hug gestures in this article. As explained by \citet{TheSixHugCommandments}, the combination of a microphone and pressure sensor was selected to create a simple and soft omnidirectional haptic sensor. Since users come in different shapes and sizes and have different hugging preferences, it is important that the system can detect contact over a large area, regardless of where the user touches the surface of the robot's torso. 
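As a concrete illustration of such omnidirectional contact detection, the start and end of a hug could be found by thresholding the back-chamber pressure's deviation from its resting baseline, with hysteresis so brief dips do not end the contact. This is a sketch under assumed units and threshold values, not the actual HuggieChest calibration:

```python
def detect_contact(pressure_samples, baseline, delta_on=30.0, delta_off=10.0):
    """Simple hysteresis state machine over a chamber-pressure stream
    (sampled at ~45 Hz): contact starts when the deviation from the
    resting baseline exceeds delta_on and ends when it falls below
    delta_off. Returns a list of ("start"/"end", sample_index) events."""
    in_contact = False
    events = []
    for i, p in enumerate(pressure_samples):
        deviation = abs(p - baseline)
        if not in_contact and deviation > delta_on:
            in_contact = True
            events.append(("start", i))
        elif in_contact and deviation < delta_off:
            in_contact = False
            events.append(("end", i))
    return events
```

Because only the magnitude of the pressure deviation matters, this kind of detector fires no matter where on the chamber the user makes contact, which matches the design priority described next.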
In this case, contact detection was more important than contact localization, though localization would be an interesting aspect to study in the future. We also opted for a simpler sensing system to minimize the transmitted data, conserve computational power for real-time processing, and facilitate replication. The robot has two six-degree-of-freedom Kinova JACO arms mounted horizontally on a custom metal frame. These arms were selected for being anthropomorphic, quiet, and safe; their movement speed is limited in firmware, and they can be mechanically overpowered by the user if necessary. Torque sensors at each arm joint are used to automatically adjust to the user's size at the start of the embrace, and torque signals above a threshold give the user a second way to end the hug. As shown in Fig.~\ref{fig:NakedRobot}, the arms are covered in foam pads for softening. The robot's head is a custom 3D-printed case that houses the Dell OptiPlex 7050 minicomputer that controls the entire robot, the Intel RealSense Depth Sensing Camera, the robot's face screen, a small JBL speaker, and many cables. On top of each torso chamber is a Thermophore MaxHeat Deep-Heat Therapy heating pad (35.6 cm $\times$ 68.6 cm). A purple robe and gray sweatshirt are placed on top of the heating pads, and gray mittens cover the robot's end-effectors to create the final robot outfit. Further details of HuggieBot 2.0's hardware design can be found in \citet{TheSixHugCommandments}. The minicomputer runs Ubuntu 14.04 and controls the robot via ROS Kinetic. All six robot joints on each arm have angle and torque sensors, which we continuously monitor and record. We use a PID controller to command each joint angle over time. We selected a set of human-inspired target angle waypoints for each arm to move through to create the robot's hugging motion. The robot's arms start at its sides while the camera module waits to detect a user.
Once detected, the robot asks the user for a hug, saying ``Can I have a hug, please?'' while the robot's face changes to show the mouth animation. Then, the hug is executed by commanding each joint to move at a fixed angular velocity toward a predetermined goal pose through the set of target angle waypoints. To adjust haptically to the size of each user, the robot arms move toward a pose sized for the smallest anticipated user. We continually monitor each joint torque and stop a joint's movement if the joint torque exceeds the pre-set torque threshold. When the torque is exceeded, we give each joint a new target angle, which is the joint angle where the torque was exceeded. This method haptically adjusts the robot's embrace to the size of each user (T5 from \citet{TheSixHugCommandments}). \subsection{Ethical Approval, Recruitment, and Participants} The Ethics Council of the Max Planck Society approved this study under protocol F006B of the Haptic Intelligence Department's framework agreement. The first author recruited participants by email, social media, and paper flyers. All participants were English-speaking volunteers from the local area in Stuttgart, Germany. We ran two subjects as pilot participants to refine the experimental methods; their data were excluded from analysis because they were not given the same instructions as the later participants. Thirty-two people participated in the study: twelve males and twenty females. The participants ranged in age from 21 to 60 (mean = 30, standard deviation = 7) and came from thirteen different countries. Overall, the participants did not have a technical background; many experienced their first interaction with a robot during this study. The 27 participants not employed by the Max Planck Society were compensated at a rate of 8 euros per hour.
\subsection{Procedure} After confirming their eligibility to participate given the exclusion criteria and local COVID-19 regulations, users scheduled an appointment for a 1.5-hour-long session with HuggieBot 2.0. After the participant's arrival, the experimenter explained the protocol using the informed consent document as a guide. The potential subject was given time to read over the consent form thoroughly and ask any questions. If they were still willing to participate, the subject signed the consent form and the video release form. After receiving both these documents, the experimenter turned on two video cameras to record the experiment. The user filled out a demographic survey on a tablet computer. Then, the investigator introduced the robot as the personality ``HuggieBot,'' told users it was ``a robot that loves to hug,'' and explained its key features, including the emergency stop. The recruitment materials coupled with this introduction helped set the expectation that the robot's hugging behavior was considered normal and emotionally positive, allowing us to achieve affective grounding with our participants \cite{jung2017affective}. The experimenter explained how the first half of the experiment would work and how the participant could initiate the hug. The experimenter also explained that the user is always in control of the duration of the hug and explained the different ways to non-verbally cue the robot that they wanted to end a hug (release hands from the robot's back or lean back against the robot's arms). At this point, the subject filled out an opening survey to document their initial impressions of the robot. Users answered the same questions at the end of this experiment; these opening and closing survey results were previously published by \citet{TheSixHugCommandments}, showing significant positive changes in several ratings. 
Users practiced hugging the robot and acclimated to the hug initiation methods and the timing of the robot's arm movements before the experiment began. All users then experienced eight hugs that made up the first half of the experiment; these results were also previously published, validating the haptic sensing for hug sizing and hug release \cite{TheSixHugCommandments}. Interestingly, Block et al.'s analysis of these results found that users showed no preference between starting the hug with a button press or starting it by walking toward the robot \cite{TheSixHugCommandments}. We seek to improve the latter hug initiation method in this article, as will be discussed in Section~\ref{Improvements}. The second half of the experiment contained the activities related to intra-hug gestures. We used two 4$\times$4 balanced Latin squares and a participant number that is a multiple of eight to counter-balance any effects of presentation order \cite{LatinSquare}. Participants experienced a total of sixteen hugs in four groups, each made of four hugs. In each group, the user was instructed to perform a specific action (hold still, rub the robot's back, pat the robot's back, or squeeze the robot) at any point during the hug, as many times as they desired. The experimenter provided the same narrative to each user, to ``perform the gesture like they would on a friend or a family member during a hug.'' Within a group of hugs, in response to a user action, the robot would perform a different gesture during each hug (staying still, moving vertically, tapping on the user's back, or tightening its hold on the user). When staying still, the robot's arms did not move. For moving vertically (rubbing), the shoulder lift angle of the robot's left arm was increased by 3$^\circ$ and then returned to its original value twice in a row for each rub response. 
For tapping on the user's back (patting), the elbow flexion joint of the robot's left arm was increased by 3$^\circ$ and then decreased by 6$^\circ$ twice in a row before returning to its original value. For tightening the hold on the user (squeezing), the shoulder flexion joints of both arms were adjusted inward by 1$^\circ$ while both elbow flexion joints were simultaneously adjusted inward by 5$^\circ$; all four joints were then returned to their original values. Each movement is commanded in joint space relative to the current hug's embracing pose around the user. The joints move between points at the robot's maximum speed, yielding fixed-duration gestures that last approximately 2 seconds. Because we had not yet developed autonomous intra-hug perception or action capabilities, the timing of each robot response was controlled by the experimenter, who visually observed the actions of the user. A version of Fig.~\ref{fig:teaser} was printed on a large poster and placed in the experiment room for the participants to reference at any time. While the robot responses were designed to be the same as what the user was instructed to perform, the poster contained only the pictures and descriptions, without the colored gesture names, to avoid swaying the participants to match robot responses with user actions of the same name. After each hug, the experimenter asked the user which intra-hug gesture they thought the robot had performed; the hug was repeated if the user was not able to identify the robot response correctly, or if the user performed the wrong action. After a successful action-response hug, users were asked to rate how much they liked that robot response, given the action they performed, using a continuous sliding scale from hate (0) to love (10) with a resolution of 0.1. 
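The four robot responses described above can be summarized as sequences of relative joint-angle offsets, each summing to zero so every joint returns to its embracing pose. The joint names, sign convention (positive = inward/upward), and the exact ordering of the pat steps are assumptions made for illustration, not the Kinova API identifiers:

```python
# Each step maps a (hypothetical) joint name to a relative offset in degrees.
GESTURES = {
    "stay_still": [],
    # shoulder lift +3 deg then back, twice in a row
    "rub": [{"left_shoulder_lift": +3}, {"left_shoulder_lift": -3}] * 2,
    # elbow +3 deg then -6 deg, twice, then a return step (one plausible reading)
    "pat": [{"left_elbow_flex": +3}, {"left_elbow_flex": -6}] * 2
           + [{"left_elbow_flex": +6}],
    # both shoulders in by 1 deg and both elbows in by 5 deg, then release
    "squeeze": [
        {"left_shoulder_flex": +1, "right_shoulder_flex": +1,
         "left_elbow_flex": +5, "right_elbow_flex": +5},
        {"left_shoulder_flex": -1, "right_shoulder_flex": -1,
         "left_elbow_flex": -5, "right_elbow_flex": -5},
    ],
}

def net_offsets(steps):
    """Sum the relative offsets per joint across a gesture's steps."""
    totals = {}
    for step in steps:
        for joint, delta in step.items():
            totals[joint] = totals.get(joint, 0) + delta
    return totals
```

Checking that `net_offsets` is zero for every joint of every gesture verifies that each response leaves the arms back in the hug's embracing pose, matching the fixed-duration, return-to-start behavior described above.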
To focus on more general principles of human-robot interaction, here we asked users to rate the appropriateness of the gesture response rather than the quality with which HuggieBot 2.0 performed the gesture. After users had experienced all four robot responses for the given user action, they were given time to review and adjust their survey entries before calling the experimenter to verbally explain their ratings. After testing all sixteen hug combinations of user action and robot response, participants were asked to rate the quality of each of the robot responses on the same hate-love scale. After the closing survey, a free-response question asked the user to provide any comments or feedback they had about the experiment. \section{Related Work} \label{RelatedWork} \subsection{Social Touch Between Humans} \label{subsec:socialtouchpeople} People interact with each other through various kinds of social touch that change across the lifetime in order to foster a sense of community, strengthen relationships, and build trust \cite{SocialTouchDevelopment}. The first form of affective, social touch usually comes from a mother or other relative to help soothe an infant \cite{Uvnas-Moberg2014}. In addition to calming the child down, this helpful touch (cradling, squeezing, hugging, kissing, stroking, etc.) strengthens the bond between adult and child and increases the body's production of oxytocin in both. An increase in oxytocin provides a host of benefits, including a greater tolerance for pain and stress \cite{LowerCortisol}. Over time as they grow, rather than relying on a parent for comfort, humans learn to self-soothe in similar ways, such as holding themselves, rubbing their arms, and wrapping themselves tightly in a warm blanket. These methods work because they are reminiscent of deep pressure touch, which is the kind of touch one receives when hugging someone or touching them firmly. 
It has a calming effect and has been shown to alleviate stress and anxiety dramatically and lower heart rates and blood pressure \cite{Edelson1999}. As humans age further, researchers have found that the areas a person is allowed to touch on another's body are directly correlated with the strength of the relationship between the two people \cite{suvilehto2015topography}. The closer two people are emotionally to each other, the more areas they are allowed to contact for social touch. Appropriate location of touch was of particular importance to our research, as Block et al. found users to be highly sensitive to the hand placement of their hugging robot; placing the robot's arms either too low or too high on the body was not appreciated \cite{TheSixHugCommandments}. In addition to the location of social touch, another critical element to consider during social touch between humans is the applied contact strength. Too little pressure can sometimes create a disingenuous impression or not provide enough emotional support, while too much pressure can hurt your partner. Researchers studied the physiological responses (heart rate and R-R interval) of infants being held with different levels of tightness and by people with varying relationships to the child \cite{Yoshida2020}. Infants responded best when being held with a medium amount of contact pressure by a parent. This study identifies a Goldilocks zone where the pressure is not too low and not too high, but ``just right.'' This pressure zone may vary across people and can even change for a single person depending on how they feel and how much support they currently need (more support may require more applied pressure). A common way to appropriately provide a person with deep pressure touch is through a hug, where both participants wrap their arms around the other person's torso. During prolonged hugs (lasting more than three seconds), the two hugging partners rarely remain stationary in the embrace \cite{3secondhug}. 
Common intra-hug gestures like stroking/rubbing, patting, and squeezing provide comfort and help the receiver feel emotionally supported \cite{Waal2019}. Although few other investigations have explored intra-hug gestures, this research inspired us to create a robot that can detect and respond to such gestures during a hug to more closely mimic humans, so that it can eventually provide better emotional support to its users. \subsection{Technology-Mediated Social Touch} Because physical contact is not always possible, some researchers look for ways to strengthen relationships between emotionally close people who are separated by a distance. As will be explained in the following paragraphs, this gap can be bridged by technology that aims to transmit social touch, e.g., \cite{Pakanen2014, disalvo2003hug}. \citet{Huisman2017} provides a comprehensive review of social-touch technologies. The developed devices typically work in pairs, where users can send signals to each other to let their partner know they are thinking of them, e.g., \cite{HeyBracelet, CoupleVibe, FriendshipLamp}. The signal the other user receives indicates their partner is sending them some contact, but it may not accurately replicate the sender's desired intent (e.g., vibration output may be used to represent a squeezing input). Such objects typically fall into two categories that we discuss in more detail below: wearables and comfort objects. \subsubsection{Wearables} Carrying your loved one with you wherever you go can be an ideal solution for those in a long-distance relationship. In this subsection, we discuss previous work on developing wearable technology to help physically separated loved ones feel emotionally close to each other, e.g., \cite{Pakanen2014, HeartBeatAppleWatch, duvall2016active}. A common version of this wearable technology occurs in the form of a bracelet. The Squeezy bracelet \cite{Pakanen2014} can be paired with a mobile phone and allows the user to receive haptic messages. 
The Hey Bracelet \cite{HeyBracelet} works in pairs and allows friends to send each other haptic signals by connecting the bracelets to their phones via Bluetooth. When one user taps on their Hey Bracelet, the other one will squeeze and vibrate to let the wearer know their partner is sending them a hug. Similar capabilities are also possible with Apple Watches, where you can send a heartbeat to anyone via iMessage \cite{HeartBeatAppleWatch}. Another bracelet, CoupleVIBE \cite{CoupleVibe}, sends a vibrotactile cue to a partner to signal that their long-distance significant other has arrived at or departed from a frequented location. The HaptiHug is another kind of wearable for social touch \cite{tsetserukou2010haptihug}; it is similar to a belt and features soft hands made from rubber-sponge material. The creators developed an animated version of a hug and integrated it into the online game Second Life. Real-world users can connect and interact in this virtual world. By wearing the HaptiHug while playing the game, users can feel squeezed and have pressure applied on their back when their virtual characters hug each other, thus making the experience more immersive. HaptiHug provides three pre-programmed levels of hug intensity from which users can choose. Inflatable and weighted vests have been used to provide children on the autism spectrum with deep pressure touch \cite{duvall2016active}. While they do succeed in delivering this beneficial touch, the inflatable vest's loud pumps are conspicuous and distracting. In contrast, the weighted vests must be removed and replaced frequently for the wearer to feel the benefit. An additional potential drawback of the inflatable vest is that someone can remotely operate it at any time without warning to the child, who could have no idea when, why, or from where the hug is coming. Another kind of wearable technology for mediated social touch is the Hug Shirt \cite{HugShirt}, which has embedded sensors and actuators.
The sender and the receiver each wear a Hug Shirt, and the system aims to create the sensation of physical contact between the wearers. The sensors capture the sender's contact location, strength, warmth, and heartbeat, and the actuators attempt to recreate these sensations on the receiver's shirt through heating, vibration, and inflation. The shirts send messages to each other via Bluetooth by connecting to the mobile phones of the users. A final wearable we will discuss is the Huggy Pajama \cite{teh2008huggypajama}. These pajamas are meant to be a hugging communication system between children and parents. A parent hugs a doll embedded with sensors, and the child wearing the pajamas feels virtually hugged. The pajamas are actuated by air inflation with a compressor located outside of the pajamas. Unfortunately, this compressor is loud and can be disruptive to a child trying to sleep, though the provided compression was reported to be enjoyable. In this example and all others discussed above, the wearable was not capable of detecting, classifying, or responding to intra-hug gestures. \subsubsection{Comfort Objects} The idea of technology-mediated social touch is so compelling that people are purchasing ``Friendship Lamps'' \cite{FriendshipLamp} even without research supporting their claimed benefits. These lamps work in pairs. When one user turns on theirs, the partner's lamp lights up. Touching either lamp causes both lamps to change colors to indicate to the partner that the other person is thinking of them. Another group of researchers took the idea of a tele-lamp one step further by giving it an anthropomorphic shape intended for affective interaction \cite{Angelini2015}. This lamp has a face, can change colors, and displays different emotions to reflect a distant loved one's emotional state. A user can change the lamp's emotional state by performing gestures on it, for example kissing the lamp. 
These researchers also share that while couples can use this lamp for long-distance relationships, it can also be used alone as a single companion. Another group of researchers developed Hugvie \cite{Hugvie, Nakanishi2020}, a small pillow with the approximate shape of a person. The pillow's head contains a small pocket in which a user can place a cell phone. Users then hug the pillow while talking on the phone to their far-away loved one. Researchers found that using this technology can help reduce the hugger's anxiety \cite{Hugvie, Nakanishi2020}. They also found the interpersonal touch from using Hugvie can help improve the hugger's impression of a third person based on hearsay information given by the remote partner. However, this system cannot detect, classify, or respond to any intra-hug gestures. Similar to the friendship lamps, other researchers developed a plush pillow, the Macaron \cite{Nunez2017}, that uses infrared photo-reflective sensors to detect when a user is hugging it. It then sends a message over Bluetooth to the partner pillow, which lights up with blue-colored LEDs and blinks to indicate the intensity with which the partner hugged the other Macaron. Likewise, another group of researchers created The Hug \cite{disalvo2003hug}, a pillow whose shape is derived from a child wrapping its legs around an adult. The Hug also works in pairs. When one user hugs or strokes their Hug, the partner Hug will light up, vibrate, jingle, and heat up to indicate that someone is sending a hug. \subsection{Human-Robot Hugging} \label{subsec:human-robot-social-touch} Social touch occurs in many contexts. Several researchers have devoted effort to enabling robots to shake hands with humans as naturally as humans do with each other, e.g., \cite{Arns2017, Wang2009}. 
Other researchers have worked to allow humans to connect with robots in a more light-hearted and playful manner through high-fives and hand-clapping games \cite{Fitter2016,Fitter19-IJSR-Clap,Fitter20-JNER-Exercising}. All these interactions feature ways to help humans and robots interact with each other both socially and physically. This interest in enabling social-physical human-robot touch is not unique; many researchers have taken different approaches to this goal. We will focus on the most common approaches that are relevant to our research on HuggieBot involving social touch through human-robot hugs. Perhaps the best-known device for human-robot hugging is Temple Grandin's Squeeze Machine \cite{SqueezeMachine}. Though not technically a robot, this machine applies lateral deep touch pressure by squeezing the user between two foam panels. This machine is user-operated, so it does not require an additional human partner. The user directly sets the duration and pressure of the squeeze they receive. The Huggable is a small robotic teddy bear companion meant to accompany small children during extended stays in the hospital \cite{TheHuggable}. The robot can detect where and how the child is touching it, and it can move its head and arms in response. The Huggable has cameras, microphones, and speakers, and it can record the child using it and send helpful information to a remote caregiver. Due to its small form factor, this robot is huggable, but it cannot actively hug the user back. Shiomi et al. created a life-size teddy bear robot with back-driveable motors at the elbows that was evaluated in two Wizard-of-Oz experiments \cite{shiomi2017hug, shiomi2017robot}. While the system currently requires a human operator, the hug itself is meant to come from the robot, not another user; thus, we consider it a human-robot hug rather than a technology-mediated hug. This floor-sitting robot introduces itself to users before asking for a hug.
The user must then crawl toward the robot to hug it. The creators found that hugging this robot caused users to engage in more self-disclosure and be more willing to donate money to charitable causes. HugBot is a large panda bear stuffed animal \cite{hedayati2019hugbot} similar to Shiomi et al.'s life-size teddy bear robot \cite{shiomi2017hug, shiomi2017robot}. HugBot also sits on the floor and requires adults to crawl to it to receive a hug; children can simply bend down. The robot has pressure sensors (four on the chest and two on each arm) to record how much pressure is exerted and an inner T-shaped wooden structure to which the two soft robotic arms are attached. These sensors and actuators are situated inside a large stuffed animal. These researchers found that zoomorphic robots kept children more engaged compared to non-zoomorphic robots. Another team of researchers created a wheeled inverted pendulum robot named Robovie-III that hugs in a multi-step process \cite{miyashita2004human}. It uses a range sensor to determine the distance between itself and the user. Once it is the appropriate distance away, the robot hugs the user. After some time, the robot releases. The robot uses the human to assist its balance, and it does not have any padding or softening on top of its metallic components. Yamane et al. have been issued a patent for Disney Enterprises, Inc.\ to create their own version of a huggable robot \citep{Disney}. This robot is designed for human interaction, presumably within theme parks. It features a rigid inner structure with specific elements made of softer material to create a deformable exterior in areas that would contact a human. This robot attempts to match the pressure an external user applies using pressure sensors. The wording of the patent is ambiguous as to whether this robot will be autonomous or teleoperated.
However, as the users expect the hug to come from the robot itself, not another person, we identify it as human-robot interaction, rather than a technology-mediated hug. The physical appearance of this robot matches that of the character Baymax from the Disney movie ``Big Hero 6'' (2014). This patent supports the idea that there is great interest in furthering human-robot interaction to include more pleasant physical exchanges, particularly human-robot hugging. \citet{Kaplish2019} created a humanoid robot for physical human-robot interaction. It has two Franka Emika robotic arms fitted with 3D printed shells. These shells are covered in polyurethane foam for a soft and enjoyable tactile experience for the user, and bellows cover the gaps in the elbows and neck. The shells also have embedded optics-based force sensors, which the authors use to estimate the hug tightness. The robot's upper body is finally covered in custom stretchable fabric. In their study, a human operator hugs a mannequin while wearing a suit covered in eight IMUs and the same force sensors that are found on the robot. The robot then hugs another mannequin while being controlled by the sensor measurements pre-recorded from the human operator's suit. In this experiment, the robot never hugged another person and did not hug in real time. The main focus was instead on testing how the sensor readings are mapped from the human's suit to the robot's force and motion control. Later, \citet{Campbell2020} built on \citet{Kaplish2019}'s work using the same robotic platform, this time testing with users directly hugging the robot. After equipping the robot with 61 force sensors, they trained a sparse learning-from-demonstration model with teleoperated data collected from 121 sample hugs from four participants. They found that their model generalized well to unseen hug styles and new interactions with six human hugging partners (four from training and two new individuals). 
Interestingly, the authors created solutions to edge cases when the human or robot hugging partner did not behave as expected, such as the robot hugging the air, a delay before closing the arms for a hug, or hugging without making contact (``air hug''). However, when users gave the robot an air hug, the robot did not know when to release the user; thus the user became trapped and had to be released by an experimenter. Block and Kuchenbecker previously created the original HuggieBot \cite{HuggieBot_master, block2019softness, block2018emotionally}, which applied hardware and software upgrades to a Willow Garage Personal Robot 2 (PR2). They used foam and cloth to soften the robot and various heating elements to warm the robot. A stretchable tactile sensor added to the robot's back detected when a user made contact (indicating the start of the hug) and released contact (indicating a desire to end the hug). They tested combinations of soft and warm hugs and tested the pressure the robot should apply and how long the hug should last. Block and Kuchenbecker found that hugging robots should be soft, warm, squeeze tightly, and release immediately once the user indicates they are ready to be released; users were displeased when released either too soon or too late. These findings formed the basis for the invention of HuggieBot 2.0, a custom hugging robot platform that was recently created and validated by \citet{TheSixHugCommandments}. Because we use it in our action-response elicitation study, this robot is fully described in Section~\ref{UserStudyMethods}. None of the human-robot hugging devices described above are able to detect, classify, or respond to intra-hug gestures. As mentioned in Section~\ref{subsec:socialtouchpeople}, humans commonly perform intra-hug gestures during prolonged hugs to provide the hugging partner with beneficial deep pressure touch. 
Our work in this article builds on all of these findings from prior research and is specifically focused on the exchange of intra-hug gestures between a user and an autonomous adult-sized hugging robot, a challenging social-physical interaction that could greatly enrich hugging robots and provide insights relevant to other affective embodied systems. \subsection{User Experience Evaluation} \label{subsec:user-experience-evaluation} A major part of this work is trying to quantify how users feel about particular versions of HuggieBot. Several different methods exist to gauge how participants feel about technologies. At the most basic level of assessment is forced choice, where participants are given two conflicting alternatives and must choose one \cite{dhar2003effect}. When evaluating a technology, such a forced-choice question could be ``I would use this technology in my home,'' where a participant's only options are ``yes'' or ``no.'' One step up from forced-choice questionnaires are graded-scale questionnaires \cite{morillo2019journey}. An example of such a question type commonly used in human-robot interaction evaluation is a Likert scale, which usually has between five and nine response points \cite{schrum2020four}. However, Likert scales typically do not provide highly granular responses. An increased number of response points increases mental fatigue on participants and can thus lower the overall quality of responses \cite{bendig1953reliability, lee2014search}. An example of a Likert-scale question might be ``I would use this technology in my home,'' where a participant must answer from the following options: ``strongly disagree,'' ``disagree,'' ``neutral,'' ``agree,'' and ``strongly agree.'' For our questionnaires, we chose to use a continuous sliding scale, which is also known as a visual analog scale (VAS) \cite{bijur2001reliability}.
Each end of the scale was anchored, for example, from hate (0) to love (10), with neighboring responses separated by only 0.1. Thus, compared to Likert scales, we were able to obtain granular responses without increasing the mental fatigue on our users or lowering the quality of their responses \cite{chyung2018evidence, adamchic2012psychometric}. All of our continuous sliding scales defaulted to start at neutral (for example at 5) so as not to sway our users toward either end of the spectrum. Having a continuous sliding scale with a default at neutral also ensures users do not feel pressured into making a choice that does not feel right to them; they are free to leave the slider at neutral. \citet{wall2017reliability} also found that use of a continuous sliding scale had higher inter-rater reliability compared to a seven-point Likert scale. A final benefit of continuous sliding scale questionnaires is that VAS data have ratio scale properties, which support the use of parametric statistical analysis \cite{myles1999pain}. \citet{lindblom2020evaluating} discuss how to evaluate the user experience in HRI studies. They make important distinctions between evaluating ``what users say'' (attitudinal) and measuring ``what users do'' (behavioral), as well as understanding ``how and why'' (qualitative) and ``how often or how much'' (quantitative). Some assessment can be done through surveys, as discussed above, but they mention that surveys alone cannot fully encompass the user experience. Naturalistic field studies are recommended to understand how users respond to the technology in real-life conditions. \citet{Alenljung_UX} explain that common methods of user evaluation in HRI include scenario-based evaluation, questionnaires, interviews and focus groups, Wizard-of-Oz studies, expert evaluations, and physiological measurements. 
Since some of the richest data collected regarding the user experience cannot be collected via surveys, and since both \citet{lindblom2020evaluating} and \citet{Alenljung_UX} mention the advantages of open-ended questions and interviews, we supplement our surveys with both. Users in both of our studies wrote responses to open-ended questions and then called over an experimenter and verbally explained their answers. While we were unable to perform a naturalistic field study given the current global pandemic, our validation study (Section~\ref{Validation}) included two free-play scenarios during which we observed ``what users do'' and ``how often or how much,'' as suggested by \citet{lindblom2020evaluating}. \citet{Bargas2011} reviewed 51 papers that were published in the human-computer interaction (HCI) literature from 2005 to 2009, reporting 66 studies that were focused on evaluating user experience. They found that many studies use self-developed questionnaires to evaluate user emotions, enjoyment, and opinions of system aesthetics. \citet{Bargas2011} mention that though it is hardly asked, a crucial element of user experience is understanding the context of use and the anticipated deployment of a system. Following the guidelines by \citet{Bargas2011}, we report all interview questions and protocols, evaluate a prolonged interaction (longer than 30 minutes), and examine and report user behavior (what people do) in addition to what they say. \section{Action-Response Elicitation Study -- Results} \label{Results} We analyzed the user ratings, pressure sensor and microphone data, and user comments from the action-response elicitation study to understand how HuggieBot 2.0 might be upgraded to become capable of autonomously detecting and responding to intra-hug gestures. 
\subsection{User Ratings} \begin{figure}[t] \includegraphics[width=0.865\columnwidth, trim = {0cm 0cm 0cm 0cm}, clip]{figures/improvedbehavior.pdf} \vspace{-0.3cm} \caption{A matrix showing the user ratings of the appropriateness of each possible robot response to the four intra-hug actions that users performed in the action-response elicitation study. The color of each square represents the average rating, following the legend shown at right. The dots in each cell show the individual rating of each user, consistently ordered based on their average score from low to high. The pale horizontal lines in each square show the ratings of 0 (hate), 5 (neutral), and 10 (love).} \label{fig:BehaviorMatrix} \vspace{-0.3cm} \end{figure} \begin{figure}[t] \hspace{0.9cm}\includegraphics[width=0.8\columnwidth, trim = {5cm 15cm 0cm 15cm},clip]{figures/improvedquality2.png} \vspace{-0.3cm} \caption{A matrix showing the user ratings of the quality of the four robot responses, using the same visualization approach as Fig.~\ref{fig:BehaviorMatrix}.} \label{fig:QualityMatrix} \vspace{-0.3cm} \end{figure} As previously mentioned, each user performed each action during four different hugs, experiencing a different robot response during each hug. In total, each user thus rated all sixteen pairs of user actions and robot responses. Figure \ref{fig:BehaviorMatrix} shows the responses to the sixteen different pairs. The color of each cell in the matrix represents the average score from hate (0) to love (10) over all users. The black dots inside each cell show the 32 individual user ratings, always presented in the same order from lowest to highest average user rating. The three lowest average ratings (5.2, 4.9, and 4.9) all occurred when the user performed an active gesture (rub, pat, or squeeze, respectively) and the robot did not move in response (hold).
The hold-hold pairing received a much higher average rating (6.6), as did all conditions wherein the robot responded to user inaction (hold) or action (rub, pat, squeeze) with a rub, pat, and especially a squeeze. While some users gave each action-response pair a rating below neutral, the average ratings achieved for the appropriateness of all of the responsive robotic intra-hug gestures were consistently high. Figure \ref{fig:QualityMatrix} shows the user ratings of the quality of the robot gestures. Users rated the quality of the robot staying still and squeezing as very high (8.5 and 8.2, respectively); no user gave a negative rating for the robot's hold, and only two of the 32 users gave negative ratings for squeeze. Rubbing and patting were rated positively (5.8 and 5.7, respectively), but closer to neutral. The fact that these gesture quality ratings differ somewhat from the average response appropriateness ratings in Fig.~\ref{fig:BehaviorMatrix} shows that users were at least moderately successful at distinguishing these rating tasks from one another, particularly regarding hold. \begin{figure}[t] \includegraphics[width=\columnwidth, trim = {1cm 4cm 1.5cm 4cm},clip]{figures/P8FixedTime.pdf} \vspace{-0.3cm} \caption{Sample pressure signals and microphone signals from participant 8 performing each of the four gestures during the action-response elicitation study. The colored data points mark the time periods manually labeled as positive examples of the indicated intra-hug gestures.} \label{fig:P8} \vspace{-0.3cm} \end{figure} \begin{figure}[t] \includegraphics[width=\columnwidth, trim = {1cm 4cm 1.5cm 4cm},clip]{figures/P13FixedTime.pdf} \vspace{-0.3cm} \caption{Sample pressure signals and microphone signals from participant 13 performing each of the four gestures during the action-response elicitation study. 
The colored data points mark the labeled data segments.} \label{fig:P13} \vspace{-0.3cm} \end{figure} \subsection{Pressure Sensor and Microphone Data} Data were recorded from HuggieChest's pressure sensor and microphone for each of the sixteen hugs for all 32 users, yielding a total of 512 recordings. Analyzing these signals after the experiment shows common characteristics between different users performing the same gesture. As a representative sample, Figures \ref{fig:P8} and \ref{fig:P13} show the pressure and microphone data collected from two different participants performing the same gestures. While the torso chamber was inflated to somewhat different initial inflation levels for each participant, characteristic signals can still be recognized to determine the type of contact made, regardless of inflation. The use of these two sensors allows us to differentiate between both coarse and fine contacts. While the pressure signals look similar between rubs and pats, we can differentiate between the two by the pat's much larger microphone response. Additionally, the microphone signals look quite similar for squeezes and rubs, but looking at the corresponding pressure signals allows us to determine which action is being performed; squeezing the robot drastically increases the chamber's pressure, while rubbing it does not. These results show the benefit of using two sensors for detecting intra-hug gestures. Video review revealed that these two participants had different approaches to performing the gestures, which can be recognized in the recorded data. P8 (Fig. \ref{fig:P8}) squeezed the robot only once and performed a continuous strong patting movement. In contrast, P13 (Fig. \ref{fig:P13}) performed three distinct squeezes and two pats, separating each repeated gesture with a short pause (hold). The variety of ways in which the 32 users performed these four gestures was surprising and underscored the importance of gathering a large corpus of sensor data. 
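The complementary roles of the two sensors can be sketched as a simple threshold rule. The function, feature names, and threshold values below are illustrative assumptions for exposition, not the actual detection pipeline used by the robot:

```python
def classify_gesture(pressure_rise, mic_rms,
                     squeeze_thresh=0.50, pat_thresh=0.30, rub_thresh=0.05):
    """Toy two-sensor gesture classifier; all thresholds are illustrative."""
    # Squeezing drastically increases the chamber's pressure; rubbing does not.
    if pressure_rise > squeeze_thresh:
        return "squeeze"
    # Pats produce a much larger microphone response than rubs.
    if mic_rms > pat_thresh:
        return "pat"
    # Moderate microphone activity with little pressure change suggests a rub.
    if mic_rms > rub_thresh:
        return "rub"
    # Neither sensor registers meaningful activity.
    return "hold"
```

The sketch captures why one sensor alone is insufficient: pressure separates squeezes from rubs, while the microphone separates pats from rubs.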
Finally, although we asked users to perform only a single type of gesture per hug, we noticed that seven of the 32 participants (21.9\%) accidentally combined gestures and sometimes performed two gestures at once. This particularly happened during rubs and pats, when the user would sometimes also unintentionally squeeze the robot. \subsection{User Comments} \label{subsec:user-comments-study1} Our users' written and spoken comments provide crucial information on how to improve the quality of the hugs HuggieBot can deliver. A systematic analysis reveals several key themes repeated by many users. The majority of users (68.75\%) commented that they preferred \textit{not} having to press a button to initiate the hug. The slow speed of the robot's arms and thus the amount of time that it took for the arms to close around the user detracted from the experience (34.37\%), particularly for hugs initiated when the user walked toward the robot. Next, almost half of the users (43.75\%) mentioned that they could not feel both arms fully against their backs in at least one hug. Some users (21.87\%) also commented that the robot's hand placement was inappropriate -- either too high on the back (close to the neck) or too low (on the buttocks), both of which made them uncomfortable. Incomplete arm closure and too high or too low hand placement made it difficult for some users (21.87\%) to feel the robot performing gestures on their back, which most likely contributed to the variety of user ratings reported. These comments show that improvements are needed for how HuggieBot initiates hugs and adapts to its hugging partner. Participants in this study experienced the robot both responding to their gestures and holding still (not responding) when they performed a gesture. Almost all users (78.13\%) commented that having the robot respond to their gestures made it feel more ``alive,'' ``social,'' and/or ``realistic''. 
As we initially expected users to prefer a robot that reciprocates their gestures, we were surprised to find users enjoyed variety in the responses. When they explained their response ratings to the experimenter, twenty of our users (62.5\%) mentioned that reciprocation of their actions every time felt ``too mechanical.'' Rather than thinking the robot made a mistake when it performed a different gesture, users appreciated gestures of similar ``emotional investment levels'' (P21) and felt it showed ``the robot understands [them] and makes his own decision'' (P30). In agreement with the high ratings shown in Fig.~\ref{fig:BehaviorMatrix}, many users shared positive comments about being squeezed by the robot, saying phrases such as ``I love warm, tight hugs'' (P7, P8, P17, P21, P25) or that being squeezed by the robot felt ``the most natural'' (P15) and ``the closest to a real human hug, the best response'' (P2, P16, P20). Some even went as far as to say that the squeezes gave them ``a sense of security and comfort'' (P9, P20, P23). In general, users thought the duration of the robot's timed squeeze was ``too short'' (P5, P24). Additionally, several users suggested making the robot's squeeze duration match theirs because the fixed timing felt ``too mechanical'' (P1, P6) or like ``the robot wasn't as emotionally invested as [they] were'' (P21, P28). These comments hint at the need to treat modal gestures such as squeeze differently from event-based gestures such as rub or pat. Finally, another unexpected finding that wove throughout the comments was how strongly the users anthropomorphized the robot. The experimenter always called the robot ``it'' or ``HuggieBot,'' yet in both written and verbal comments, 96.87\% of users referred to the robot as a ``he'' when describing how the hugs felt, often explaining a social situation it reminded them of. 
Such interactions included ``a comforting hug from a mother'' or ``a distant relative at a funeral,'' ``seeing friends at a football match,'' ``receiving a pity hug from someone who doesn't want to,'' ``hugging an ex,'' and even ``hugging a lover.'' They attributed emotions, mood swings, and attitudes to the robot, depending on how well it hugged them in each trial of the study. \subsection{Brief Discussion} The ratings gathered in this study provided essential information about user preferences for how a hugging robot should perform intra-hug gestures. We conducted this experiment to guide hardware and software improvements to HuggieBot 2.0, as will be discussed in the following sections. However, the results can also be generalized and applied to other hugging robots, including those that do not have any haptic sensing. As can be seen in Fig.~\ref{fig:BehaviorMatrix}, regardless of the user action, a squeeze was always perceived on average as the most enjoyable robot response, including when the user had not just actively performed an action (after user hold). Therefore, researchers who want to improve user opinions of their hugging robot without investing in haptic sensing capabilities should program their robot to occasionally squeeze the user. Simultaneously, the neutral average reactions users showed when the robot did not respond to their rubs, pats, and squeezes indicate that perceiving and responding to intra-hug gestures could greatly improve hugging robots. \section{Data Plots For Supplemental Videos} \label{app:vidplots} The supplemental material for this article includes three annotated videos showing how the detection and classification algorithm and the behavioral response algorithm work in real time; the videos show footage recorded from three different participants during the validation study presented in the article.
This appendix provides annotated plots of the robot's joint angles, joint torques, microphone signal, and pressure signal for the corresponding videos. \begin{figure}[tp] \includegraphics[width=\columnwidth, trim = {1cm 5cm 1cm 5cm},clip]{figures/P7_video_annotated.pdf} \vspace{-0.3cm} \caption{Participant 7.} \label{fig:P7_annotated} \vspace{-0.3cm} \end{figure} \begin{figure}[tp] \includegraphics[width=\columnwidth, trim = {1cm 5cm 1cm 5cm},clip]{figures/P13_video_annotated.pdf} \vspace{-0.3cm} \caption{Participant 13.} \label{fig:P13_annotated} \vspace{-0.3cm} \end{figure} \begin{figure}[tp] \includegraphics[width=\columnwidth, trim = {1cm 5cm 1cm 5cm},clip]{figures/P16_video_annotated.pdf} \vspace{-0.3cm} \caption{Participant 16.} \label{fig:P16_annotated} \vspace{-0.3cm} \end{figure} \section{Validation Study -- Methods} \label{Validation} This study aims to test and validate the new platform of HuggieBot 3.0 (Section~\ref{Improvements}), its perceptual pipeline for detecting and classifying intra-hug gestures (Section~\ref{Detection}), and its probabilistic behavior algorithm for responding to detected gestures (Section~\ref{Response}). Regarding the platform improvements, we specifically sought to evaluate the user response to the updated hug initiation process (Section~\ref{subsec:initiation}) and the updated robot hand placement based on estimated user height (Section~\ref{subsec:initiation}). The validation study we conducted was similar to the action-response elicitation study except for the key difference that the robot was always behaving autonomously. The Max Planck Society's Ethics Council approved all methods for this study under a new framework agreement protocol number, F011B. The investigator recruited voluntary participants in the same manner as described in Section \ref{UserStudyMethods}. \subsection{Participants} The sixteen participants were English-speaking volunteers recruited from Stuttgart, Germany. 
No participants were employed by the Max Planck Society, so all were compensated at a rate of 8 euros per hour. While the COVID-19 pandemic reduced the number of participants we were able to safely recruit, we believe the rated quality of the resulting hugs and the qualitative feedback provided by these diverse users were more important to the study than the total quantity of hugs exchanged. This study was every user's first time interacting with any version of HuggieBot and the first robotic interaction of any kind for most users. Half of our participants were men, and half were women; 93.75\% were non-technical. The average height reported by our users was 1.69~m, with a standard deviation of 0.10~m. The participants ranged in age from 22 to 38 (mean = 30, standard deviation = 4.76) and came from ten different countries. Over 80\% of the participants identified as enjoying receiving hugs from others. \begin{figure}[t] \includegraphics[width=\columnwidth, trim = {2cm 12cm 2cm 12cm},clip]{figures/ValidationTimeline.pdf} \vspace{-0.3cm} \caption{The timeline of the validation study. The colored boxes represent the five phases of the experiment. The left-most vertical line marks the start of the user study, and the right-most vertical line marks the end. Vertical lines coming off the center timeline show the order of the user's activities. Squares on the timeline indicate when the user filled out a survey, while circles indicate when they performed hugs. If the shape (square or circle) is not filled, that activity was performed in an open-ended way. If it is filled in with black, the activity was more controlled. 
Note that each user chose the number of open-ended hugs to perform in phases 1 and 3 rather than always performing the three hugs indicated in the timeline.} \label{fig:ValidationTimeline} \vspace{-0.3cm} \end{figure} \subsection{Procedure} As in the action-response elicitation study, prospective users were required to confirm their eligibility to participate by email, given the exclusion criteria and the local COVID-19 regulations. The timeline for this study can be found in Fig.~\ref{fig:ValidationTimeline}. Once the recruit arrived at the experiment site, the investigator explained the study. The participant then read over and signed the informed consent document and video release form after asking the experimenter any questions. At this point, the experimenters started recording the study and preparing the robot while the user filled out the demographics survey. Instead of verbally explaining how to interact with HuggieBot 3.0, the investigator showed the user a video on a laptop of a sample user hugging the robot; the video demo is included as supplementary material for this article. Without providing any verbal instructions, this video shows the user how the robot asks for a hug (lifting its arms and saying ``Can I have a hug, please?''). It also shows how to prompt the robot to close its arms (walking forward) and how to cause the robot to release (either by releasing pressure off the robot's back or by leaning back against the robot's arms). We chose this new standardized method of introducing users to the robot to more closely mimic how a user might learn how to hug the robot if they encountered it in the wild. We intentionally did not show the user performing any gestures on the robot in this instructional video to avoid biasing users toward this method of human-robot interaction. After watching this video, the user filled out an opening survey to document their initial impressions of the robot (Table \ref{table:OpenCloseSurvey}).
Importantly, this study included no formal practice hugs. Before the user began to interact with the robot physically, they watched the 30-second-long instructional video one more time as a refresher, since some pilot participants had forgotten what to do while completing the opening survey. The experimenter prompted them to ``pay particular attention to the timing between the user and the robot.'' \begin{table} \caption{The fifteen questions asked in the opening and closing surveys of the validation study. Users rated their answers on a continuous sliding scale from ``disagree'' (0) to ``agree'' (100) with a resolution of 0.1.} \vspace{-0.2cm} \label{table:OpenCloseSurvey} \begin{small} \begin{tabularx}{\linewidth}{lX} \hline\noalign{\smallskip} I feel understood by the robot\\ I trust the robot\\ Robots would be nice to hug \\ I like the presence of the robot\\ I think using the robot is a good idea \\ I am afraid to break something while using the robot\\ People would be impressed if I had such a robot\\ I could cooperate with the robot \\ I think the robot is easy to use\\ I could do activities with this robot\\ I feel threatened by the robot \\ This robot would be useful for me \\ This robot could help me\\ This robot could support me\\ I consider this robot to be a social agent\\ \noalign{\smallskip}\hline \end{tabularx} \end{small} \vspace{-0.3cm} \end{table} This study included three phases of physical interaction with HuggieBot 3.0; the robot always behaved autonomously, with no intervention by any experimenter and no changes to the program it was executing. The first phase was a natural hugging scenario. The users were given no additional instructions on how to hug the robot. They could perform as many hugs as they wanted, they could position their arms any way they wanted, and they could choose whether or not to perform any intra-hug gestures. They were simply told to hug the robot naturally several times, with a minimum of two hugs. 
We started the study with this open-ended phase to observe how users would naturally interact with the robot and to see whether they would discover the robot's ability to detect and respond to intra-hug gestures without experimenter prompting. Once they were satisfied with their introductory hugs, users were asked to fill out a single short survey for all the hugs they had performed, including answering the questions found in Table~\ref{table:FourQuestions}. These four questions were asked about \textit{the entire hug experience}, which included the hug initiation, the embrace, the hand location, any intra-hug gestures they may have encountered or triggered, and the release. \begin{table} \caption{The four questions asked after the introductory and concluding hug sessions (phases 1 and 3) in the validation study.} \vspace{-0.4cm} \label{table:FourQuestions} \begin{small} \begin{tabularx}{\linewidth}{lX} \hline\noalign{\smallskip} This robot behavior seemed (Unnatural -- Natural)\\ These hug interactions felt (Awkward -- Enjoyable)\\ These hugs made the robot seem (Socially Stupid -- Socially Intelligent)\\ These hugs made the robot seem (Unfriendly -- Friendly)\\ \noalign{\smallskip}\hline \end{tabularx} \end{small} \vspace{-0.3cm} \end{table} The second phase of the experiment included four hugs with intra-hug gestures conducted in a somewhat controlled manner. Before each hug, the user was instructed to perform a single gesture during the hug after the robot's arms had closed. They could perform the specified gesture as many times as they liked. The four gestures they were asked to perform were hold, rub, pat, and squeeze. We used a $4 \times 4$ balanced Latin square to counter-balance any effects of presentation order \cite{LatinSquare} and recruited participants in multiples of four to have an equal number of participants in all the presentation orders.
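A balanced Latin square of this kind can be generated with a standard construction for an even number of conditions. The sketch below (the function and the mapping of indices to gestures are our own, for illustration) produces orders in which each gesture appears once per position and each gesture immediately follows every other gesture exactly once:

```python
def balanced_latin_square(n):
    """Balanced Latin square for an even number of conditions n."""
    if n % 2:
        raise ValueError("this construction requires an even n")
    # First row follows the interleaved pattern 0, 1, n-1, 2, n-2, ...
    first, left, right = [0], 1, n - 1
    for k in range(1, n):
        if k % 2 == 1:
            first.append(left)
            left += 1
        else:
            first.append(right)
            right -= 1
    # Each subsequent row shifts every condition index by one (mod n).
    return [[(c + i) % n for c in first] for i in range(n)]

gestures = ["hold", "rub", "pat", "squeeze"]  # illustrative mapping
orders = [[gestures[i] for i in row] for row in balanced_latin_square(4)]
```

Recruiting participants in multiples of four then assigns each of the four rows to the same number of participants.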
After each hug, users were asked to fill out a short survey about the robot behavior they had just experienced in response to their gestures. This survey started with a free-response question asking for the user's ``first impressions of this interaction.'' Then, they were asked to mark all the robot responses they experienced (options were robot arms staying still, robot arm moving vertically, robot arm tapping on back, and robot arms tightening hold). Finally, the participant used a sliding scale from 0 (hate) to 10 (love) to rate how they felt about the robot's response to the action they performed during the hug. Before moving on to the next part of the study, the user verbally explained their rating to the experimenter. In the third phase of this study, users were once again asked to hug the robot naturally. They were allowed to hug the robot as many times as they liked. Afterward, users answered the same short questionnaire from the first phase, including the questions found in Table~\ref{table:FourQuestions}. At the end of the third phase, users filled out a closing survey. The closing survey included the same questions from the opening survey (Table \ref{table:OpenCloseSurvey}) plus questions asking users to rate the quality of the four different robot gestures they experienced during the experiment (hold, pat, rub, squeeze), as also done at the end of the action-response elicitation study. There were also two questions aimed at seeing how users responded to two new features we implemented: users rated the naturalness of the hug initiation method and the appropriateness of the robot's hand placement. Finally, users could provide additional free-form comments at the end.
\section{Introduction} Possibilistic networks \cite{fonck} are graphical representations of independence relationships between a set of variables described by uncertain and imprecise information. Despite the multitude of research endeavors devoted to applying possibilistic networks in real domains or to propagating information, their learning from data remains a real challenge. Only a few works address this problem, and existing ones \cite{book-borgelt,sanguesa1998possibilistic} are direct adaptations of Bayesian network learning methods without any awareness of the specificities of the possibilistic framework, which makes them theoretically unsound. The main limitation of existing works is that they try to learn separately the parameters, i.e. the possibility distributions coding variables' uncertainty, and the structure, i.e. the graph of the possibilistic network. Moreover, existing methods suffer from the lack of an accurate and standard validation procedure. Working with parameters in the possibilistic framework raises several difficulties for the learning task, in particular when we handle uncertain and imprecise data. This is due to the fact that learning commonly leads to additive assessments, while possibility theory is, by definition, maxitive, i.e. the possibility of a disjunction of events is the maximum of the possibilities of each event in this disjunction. Thus, if we want to learn parameters from data in the possibilistic framework, two steps are essential: the first counts the occurrences of observations in the dataset to estimate non-normalized distributions, while the second approximates the latter by possibility distributions. This paper rigorously addresses this problem by first proposing a new possibilistic network sampling method, used to evaluate learning algorithms, in which we control the imprecision degree in the generated datasets.
In the final part of this paper, we propose a likelihood function exploring the link between random sets theory (additive) and possibility theory (maxitive), which will be deployed to learn possibilistic network parameters. This paper is organized as follows: Section \ref{s1} gives a brief introduction to possibility theory and presents possibilistic networks and their learning from data. Section \ref{s2} proposes a possibilistic network sampling algorithm. Section \ref{s3} defines a new possibilistic likelihood function and proposes a possibilistic network parameter learning approach. \section{Basic concepts and possibilistic networks} \label{s1} Possibilistic networks \cite{fonck} represent the possibilistic counterpart of Bayesian networks \cite{book-pearl} in the possibilistic framework coined by Zadeh \cite{Zadeh-1978} and developed by Dubois and Prade \cite{dubois2006possibility,dubois1998possibility}. This section first presents basic notations used throughout the paper and introduces possibility theory. Then, it defines possibilistic networks and discusses existing learning methods. \subsection{Basic concepts of possibility theory} \subsubsection{Notations and definitions} Let $V=\{X_1,...,X_n\}$ be a set of variables such that $D_i$ denotes the domain of $X_i$ and $x_{ik}$ denotes an instance of $X_i$, i.e. each $x_{ik} \in D_i$ corresponds to a state (a possible value) of $X_i$. The agent's knowledge of the state of $X_i$ can be encoded by a possibility distribution $\pi(X_i)$ corresponding to a mapping from the universe of discourse $D_i$ to the unit interval [0,1]. For any state $x_{ik} \in D_i$, $\pi(x_{ik}) = 1$ means that the realization of $x_{ik}$ is totally possible, while $\pi(x_{ik}) = 0$ means that $x_{ik}$ is an impossible state. It is generally assumed that at least one state $x_{ik}$ is totally possible, and $\pi$ is then said to be normalized. Extreme cases of knowledge are represented by \emph{complete knowledge}, i.e. $\exists x_{ik} \in D_i$ s.t.
$\pi(x_{ik})=1$ and $\forall x_{ij} \in D_i$ s.t. $x_{ij} \not= x_{ik}, \pi(x_{ij})=0$, and \emph{total ignorance}, i.e. $ \forall x_{ik} \in D_i, \pi(x_{ik})=1$ (all values in $D_i$ are possible). The definition of a possibility distribution can be generalized to a set of variables $V$ defined on the universe of discourse $\Omega= D_1 \times...\times D_n$. In this case, $\pi$ corresponds to a mapping from $\Omega$ to the unit interval [0,1], and $\omega \in \Omega$ is called an interpretation or event, denoted by a tuple $(x_{1k},...,x_{nl})$. Given a possibility distribution $\pi$, we can define for any subset $A \subseteq D_i$ two dual measures: the possibility measure $\Pi(A) = \underset {x_{ik} \in A} \max\ \pi (x_{ik})$ and the necessity measure $N(A)= 1 - \Pi(\bar{A})$, where $\Pi$ assesses at what level $A$ is consistent with our knowledge represented by $\pi$, whereas $N$ evaluates at what level $\bar{A}$ is impossible. The particularity of the possibilistic scale is that it can be interpreted in two ways: (i) in an ordinal manner, meaning that possibility degrees only reflect a specific order between possible values; (ii) in a numerical manner, meaning that the numerical values of the possibility degrees are themselves meaningful beyond the induced ranking. These two interpretations induce two definitions of possibilistic conditioning, which consists in revising a possibility distribution upon the arrival of a new certain piece of information $A \subseteq \Omega$. The \emph{product-based} conditioning is defined as follows: \begin{equation} \pi(\omega| A) = \left\{\begin{array}{cc} \frac{\pi(\omega)}{\Pi(A)} & \text{if} ~ \omega \in A \\0 & \text{otherwise}. \end{array} \right. \label{cond1} \end{equation} The \emph{min-based} conditioning is defined as follows: \begin{equation} \label{cond2} \pi(\omega \mid_m A) = \left \{ \begin {array}{ll} 1 & \text{if} \ \pi(\omega) = \Pi(A) \mbox { \text{and} } \omega \in A \\ \pi (\omega) & \text{if} \ \pi(\omega) < \Pi(A) \mbox { \text{and} } \omega \in A \\ 0 & \text{otherwise}.
\end {array} \right. \end{equation} \subsubsection{Possibility theory and random sets theory} One view of possibility theory is to consider a possibility distribution $\pi$ on $X_i$ as the \textit{contour function} of a random set \cite{shafer1976mathematical} pertaining to $D_i$. A random set in $D_i$ is a random variable which takes its values on subsets of $D_i$. More formally, let $D_i$ be a finite domain. A basic probability assignment or mass function is a mapping $m:2^{D_i} \longmapsto [0,1]$ such that $\sum_{A_{ik}\subseteq D_i}m(A_{ik})=1$ and $m(\emptyset)=0$. A set $A_{ik} \subseteq D_i$ such that $m(A_{ik})>0$ is called a focal set. The possibility degree of an event $x_{ik}$ is the probability that $x_{ik}$ is possible, i.e. the probability of the disjunction of all events (focal sets) in which this event is included \cite{book-borgelt}: \begin{equation} \label{def2} \pi(x_{ik}) = \underset{A_{ik}|x_{ik}\in A_{ik}}{\sum} m(A_{ik}) \end{equation} A random set is said to be \textit{consistent} if there is at least one element $x_{ik}$ contained in all focal sets $A_{ik}$; the possibility distribution induced by a consistent random set is thereby normalized. This link between possibility theory and random sets theory has been extensively exploited, in particular in learning tasks; see for instance \cite{book-borgelt,joslyn1997measurement}. \subsubsection{Variable sampling} \label{echan} Sampling a variable corresponds to generating a dataset representative of its possibility distribution. In the numerical interpretation, two approaches \cite{chanas1988single,guyonnet2003hybrid} have been proposed to sample a variable. These methods are based on the $\alpha$-$cut$ notion: $\alpha$-$cut(X_i)=\{x_{ik} \in D_i$ s.t. $ \pi(x_{ik}) \geq \alpha\} $, where $\alpha$ is randomly generated from [0,1]. The method proposed by Guyonnet et al.
in \cite{guyonnet2003hybrid} focuses on the generation of imprecise data by returning all values of $\alpha$-$cut(X_i)$ for any variable $X_i$. Chanas and Nowakowski proposed another method in \cite{chanas1988single}, which is dedicated to the generation of precise data by returning a single value uniformly chosen from $\alpha$-$cut(X_i)$. \subsection{Possibilistic networks} \subsubsection{Definition} \label{def} Possibilistic networks \cite{fonck} are the possibilistic counterpart of Bayesian networks \cite{book-pearl,book-jensen}, sharing the same \textit{graphical component}, i.e. a directed acyclic graph (DAG) which encodes a set of independence relations between $V=\{X_1,...,X_n\}$, where each variable $X_i \in V$ is conditionally independent of its non-descendants given its parents. The \textit{numerical component} substitutes the probabilistic framework by the possibilistic one by assigning a conditional possibility distribution to each node $X_i \in V$ in the context of its parents (denoted by $Pa(X_i)$), i.e. $\pi (X_i|Pa(X_i))$. The two definitions of possibilistic conditioning lead naturally to two different ways to define possibilistic networks \cite{fonck,book-borgelt}. \emph{Product-based} possibilistic networks, based on the \emph{product-based} conditioning expressed by Equation \ref{cond1}, are theoretically and algorithmically close to Bayesian networks: the two models share the graphical component, i.e. the DAG, and use the product operator in the computational process. This is not the case for \emph{min-based} possibilistic networks, based on the \emph{min-based} conditioning defined by Equation \ref{cond2}, which carry different semantics. In both cases, possibilistic networks are a compact representation of possibility distributions.
More precisely, the joint possibility distribution can be computed by the possibilistic chain rule expressed as follows: \begin{equation} \label{chain} \pi_\otimes(X_1,..., X_n)= \otimes_{i=1..n}\pi(X_i \mid_{\otimes} Pa(X_i)) \end{equation} where $\otimes$ corresponds to the minimum operator (min) for \emph{min-based} possibilistic networks and to the product operator (*) for \emph{product-based} possibilistic networks. \subsubsection{Learning from data} Few attempts have been made to learn possibilistic networks from data. In fact, Sang{\"u}esa et al. \cite{sanguesa1998possibilistic} have proposed two hybrid methods handling precise data: the first one learns trees and the second one learns the more general structure of DAGs. Borgelt et al. \cite{book-borgelt} have adapted two methods initially proposed to learn Bayesian networks, K2 and maximum weight spanning tree \cite{chow1968approximating}, to learn possibilistic networks from imprecise data. These attempts concern mainly structure learning and ignore the parameter learning problem. Indeed, Sang{\"u}esa et al. learn probability distributions and transform them into possibility ones. The methods of Borgelt et al. estimate a possibility distribution using possibilistic histograms, i.e. based on the number of occurrences of the different values of $X_i$ in the dataset. Let $\mathcal{D}_i = \{ d_i^{(l)} \}$ be a dataset relative to a variable $X_i$, where $d_i^{(l)} \in D_i$ (resp. $d_i^{(l)}\subseteq D_i$) if data are precise (resp. imprecise). The number of occurrences of each $x_{ik} \in D_i$, denoted by $N_{ik}$, is the number of times $x_{ik}$ appears in $\mathcal{D}_i$: $N_{ik}= \text{card}(\{l \ \text{s.t.} \ x_{ik} \in d_i^{(l)}\})$. The sub-normalized estimation $\hat{\pi}(x_{ik})$ is expressed by: \begin{equation} \label{jos} \hat{\pi}(x_{ik}) = \frac{N_{ik}}{N} \end{equation} where $N$ is the number of observations in $\mathcal{D}_i$. $N$ is equal (resp.
lower or equal) to the sum of the $N_{ik}$ if data are precise (resp. imprecise). Equation \ref{jos} can be generalized to a set of variables $X_i,X_j,...,X_w$. In this case, $N_{ik}$ becomes $N_{ik,jl,...,wp}= N(\{x_{ik}x_{jl}...x_{wp}\} \subseteq \mathcal{D}_{ijw})$. \section{Evaluation process for possibilistic networks learning algorithms} \label{s2} In the probabilistic case, evaluating Bayesian network learning algorithms is carried out using the following process: we select an arbitrary Bayesian network, either a synthetic one or a gold standard, from which we generate a dataset using the Forward Sampling algorithm \cite{henrion1986propagating}. Then, we try to recover the initial network using a learning algorithm and we compare the initial network with the learned one. In \cite{HaddadLA15}, we have proposed to transpose the evaluation strategy proposed in the probabilistic case to the possibilistic one. In what follows, we will mainly concentrate on sampling possibilistic networks, which consists in generating a dataset representative of their joint distributions. The sampling process constructs a database of $N$ (predefined) observations by instantiating all variables in $V$ w.r.t. their possibility distributions. Obviously, variables are most easily processed w.r.t. a topological order, since this ensures that all parents are instantiated before their children. Instantiating a parentless variable corresponds to computing its $\alpha$-$cut$. Instantiating a conditioned variable corresponds to computing its $\alpha$-$cut$ given its sampled parents' values. This cannot be directly applied to a conditional possibility distribution, which is composed of more than one distribution depending on the values of its sampled parents. So, to instantiate a conditioned variable $X_i$ s.t.
$Pa(X_i)=A$, we compute the $\alpha$-$cut$ from $\Pi(X_i|Pa(X_i)=A)$, computed as follows: \begin{equation} \label{eqqq} \Pi(X_i|Pa(X_i)=A) = \max_{a_i\in A} \pi(X_i|a_i) \pi(a_i) \end{equation} The main limitation of this sampling process is that it generates a particular case of imprecise datasets, i.e. obtained data relative to a variable $X_i$ are conditionally consonant with respect to the sampled values of its parents. This is due to the fact that the sampling process is based on the $\alpha$-cut notion, which generally returns the most possible values as observed ones. In what follows, we propose an extension of the sampling process proposed in \cite{HaddadLA15} in which we parametrize the process so as to generate more generic imprecise data by controlling the imprecision degree in generated datasets. The aim of controlling the imprecision degree in generated datasets is to create different forms of imprecision around the most possible value, i.e. varying the values in the dataset while conserving the most possible combination of $\Omega$. Given an imprecision degree $\theta_{imp}$ and a variable $X_i$ such that $\alpha$-cut$(X_i)$ contains the values returned by the sampling process, we generate all subsets of this $\alpha$-cut that include the most possible value; we assign a probability equal to $\theta_{imp}$ to $\alpha$-cut$(X_i)$ itself and a probability equal to $\theta_{imp}^{card(S_{X_i})-1}*(1-\theta_{imp})^{card(\alpha\text{-cut}(X_i))-card(S_{X_i})}$ to each remaining subset $S_{X_i}$. Finally, we sample this probability distribution and we replace $\alpha$-cut$(X_i)$ by the sampled subset in the dataset. The proposed sampling process is formally described by Algorithm \ref{algo2}.
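A minimal Python sketch of the subset-probability step described above (function and variable names are our own; this is an illustration, not the implementation used in the paper):

```python
import random
from itertools import combinations

def sample_with_imprecision(alpha_cut, theta_imp, rng=random):
    """Replace an alpha-cut by a random subset of it, following the
    subset-probability scheme above. `alpha_cut` is an ordered list whose
    first element is the most possible value; every candidate subset must
    contain that element."""
    top, rest = alpha_cut[0], alpha_cut[1:]
    m = len(alpha_cut)
    subsets, weights = [], []
    for r in range(len(rest) + 1):
        for combo in combinations(rest, r):
            s = (top,) + combo
            subsets.append(s)
            # Weight theta^(|S|-1) * (1-theta)^(m-|S|); the full cut gets
            # theta^(m-1) under this formula.
            weights.append(theta_imp ** (len(s) - 1)
                           * (1 - theta_imp) ** (m - len(s)))
    # These weights sum to 1 by the binomial theorem; normalize defensively.
    total = sum(weights)
    return list(rng.choices(subsets, weights=[w / total for w in weights])[0])
```

Note that the subset weights sum to one by the binomial theorem, so $\theta_{imp}=1$ always returns the full $\alpha$-cut (maximal imprecision), while $\theta_{imp}=0$ always returns the singleton containing the most possible value (precise data).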
\begin{algorithm} \caption{Sampling process (imprecision control)} \label{algo2} \begin{algorithmic} \STATE Input: Possibilistic network \STATE Output: Observation\\ \Begin{ \STATE \% Process nodes in a topological order \ForEach {$X_i \in V$} {\eIf{$X_i$ is parentless}{observation$(X_i)$=$\alpha$-cut$(X_i)$}{Compute $\Pi(X_i|Pa(X_i)=$observed) using Equation \ref{eqqq}\\ observation$(X_i)$= $\alpha$-cut$(X_i)$ from $\Pi(X_i|Pa(X_i)=$observed)} } $p(\alpha$-cut$({X_i}))$=$\theta_{imp}$\\ \ForEach {$S_{X_i} \subseteq \alpha$-cut$(X_i)$} {$p(S_{X_i})=\theta_{imp}^{card(S_{X_i})-1}*(1-\theta_{imp})^{card(\alpha\text{-$cut$}(X_i))-card(S_{X_i})}$} observation($X_i$)=sample($p$) \STATE Return observation } \end{algorithmic} \end{algorithm} \section{Parameters learning of possibilistic networks} \label{s3} \subsection{New possibilistic likelihood function} The formulation of our likelihood function is made in two steps: first, we propose a likelihood function defined on random sets. Then, we propose an approximation of this likelihood function which leads to the definition of our possibilistic likelihood. \begin{definition} \label{def3} Let $G$ be a DAG, $ \{m_1, m_2, ..., m_n\}$ be the parameters relative to $\{X_1, X_2, ..., X_n\}$ to be estimated, and $\mathcal{D}_{ij} = \{ d_{ij}^{(l)} \}$ be a dataset relative to a variable $X_i$ and its parents $Pa(X_i)=j$, $d_{ij}^{(l)} \subseteq D_{ij}$. The number of occurrences of each $A_{ik} \subseteq {D_i}$ such that $Pa(X_i)=j$ ($j \subseteq {D_j}$), denoted by $N_{ijk}$, is the number of times $A_{ik}$ appears in $\mathcal{D}_{ij}$: $N_{ijk}= \text{card}(\{l \ \text{s.t.} \ A_{ik} = d_{ij}^{(l)}\})$. We express the likelihood function as follows: \begin{equation} \label{L1} mL(m,G,\mathcal{D})= \prod_{i=1}^n \prod_{j=1}^{q_i} \prod_{k=1}^{r_i} m_{ijk}^{N_{ijk}} \end{equation} where $mL$ is expressed over random sets of the variables' domains, i.e.
for each $X_i$, $q_i$ is card($2^{Pa(X_i)}$) and $r_i$ is card($2^{D_i}$), and $m_{ijk}$ is the parameter to be estimated when $X_i=A_{ik}$ and $Pa(X_i)=j$. \end{definition} For numerical stability reasons, we work with the log-likelihood function. Equation \ref{L1} becomes: \begin{equation} \label{randomLL} mLL(m,G,\mathcal{D})= \sum_{i=1}^n \sum_{j=1}^{q_i} \sum_{k=1}^{r_i} N_{ijk} \log m_{ijk} \end{equation} Since the mass function associated with a random set is a probability distribution, the partial derivative of $mLL(m,G,\mathcal{D})$ follows the same principle as the partial derivative of the probabilistic likelihood function \cite{neapolitan2004learning} and reaches its maximum in $\hat{m}_{ijk}=\frac{N_{ijk}}{\sum_{k=1}^{r_i} {N_{ijk}}}$. Note that if mass functions are defined on singletons, i.e. available data are precise, the likelihood function defined in Equation \ref{randomLL} recovers the probabilistic one. However, in the opposite case, computing the likelihood function is computationally expensive. In fact, a random set relative to a variable $X_i$ is defined on $2^{D_i}$ and its cardinality grows exponentially with the number of values in $D_i$ \cite{dubois1990consonant}. Consequently, we propose to investigate the link between possibility distributions and mass functions presented in Equation \ref{def2} and to define an approximation of the random-set likelihood function, i.e. a possibilistic likelihood expressed by possibility distributions defined on singletons. More formally, we express the possibilistic likelihood function as follows: \begin{definition} \label{def4} Let $G$ be a DAG, $ \{\pi_1, \pi_2, ..., \pi_n\}$ be the parameters relative to $\{X_1, X_2, ..., X_n\}$ to be estimated, and $\mathcal{D}_{ij} = \{ d_{ij}^{(l)} \}$ be a dataset relative to a variable $X_i$ and its parents $Pa(X_i)=j$, $d_{ij}^{(l)} \subseteq D_{ij}$.
The number of occurrences of each $x_{ik} \in D_i$ such that $Pa(X_i)=j$, denoted by $N_{ijk}$, is the number of times $x_{ik}$ appears in $\mathcal{D}_{ij}$: $N_{ijk}= \text{card}(\{l \ \text{s.t.} \ x_{ik} \in d_{ij}^{(l)}\})$. We express the possibilistic likelihood as follows: \begin{equation} \pi LL(\pi,G,\mathcal{D})= \sum_{i=1}^n \sum_{j=1}^{q_i} \sum_{k=1}^{r_i} N_{ijk} \log \pi_{ijk} \end{equation} where for each $X_i$, $q_i$ is $\text{card}(Pa(X_i))$ and $r_i$ is $\text{card}(D_i)$, and $\pi_{ijk}$ is the parameter to be estimated when $X_i=x_{ik}$ and $Pa(X_i)=j$. \end{definition} \subsection{Possibilistic-likelihood-based parameters learning algorithm} In the probabilistic case, learning Bayesian network parameters is performed according to the \textit{maximum likelihood} principle \cite{Heckerman1999}, which evaluates at what level learned parameters fit the dataset. As far as we know, such a measure has not been proposed in the possibilistic framework. The absence of a possibilistic network parameter learning method could be explained by the fact that learning is usually viewed as an objective task, i.e. based on computing frequencies of observations, while possibility theory has mostly been based on subjective opinions. This is to some extent true, especially when we deal with measurement devices leading to precise observations (one possible value per variable). In this case, probability theory remains the most adequate alternative. However, when measurement devices provide imprecise data and we want to model data as they have been collected, i.e. including imprecision due to the physical measurement itself, non-classical uncertainty theories stand out as better alternatives. In our case, we choose to use possibility theory since it offers a natural and simple formal framework for representing imprecise and uncertain information.
The latter refers to the study of maxitive and minitive set-functions and can be interpreted as an approximation of upper and lower frequentist set probabilities in the presence of imprecise observations; this link is explored in what follows. In fact, we use the possibilistic likelihood of Definition \ref{def4} to learn possibilistic network parameters. \begin{proposition} Given a DAG and an imprecision degree $S_i$ (a predefined value) relative to the variable $X_i$, the maximum possibilistic likelihood estimates are the parameter values that maximize $\pi LL(\pi,G,\mathcal{D})$. Assuming that $\sum_{k=1}^{r_i} {\pi_{ijk}}$ is a constant equal to $S_{i}$, $\pi LL(\pi,G,\mathcal{D})$ reaches its maximum in $\hat{\pi}_{ijk}= argmax(\pi LL(\pi,G,\mathcal{D}))=\frac{N_{ijk}}{\sum_{k=1}^{r_i} {N_{ijk}}}*S_{i}$. \end{proposition} \begin{proof} Let $S_{i}$ be $\sum_{k=1}^{r_i} \pi_{ijk}$. The parameters $\pi_{ijk}$ are then related by the following formula: $\pi_{ijr_i}=S_{i}-\sum_{k=1}^{r_i-1} \pi_{ijk}$.
Then, $\pi LL(\pi,G,\mathcal{D})$ can be rewritten as follows: \begin{equation} \pi LL(\pi,G,\mathcal{D})= \sum_{i=1}^n \sum_{j=1}^{q_i} \left(\left(\sum_{k=1}^{r_i-1} N_{ijk} \log \pi_{ijk}\right) + N_{ijr_i} \log \left(S_{i}-\sum_{k=1}^{r_i-1} \pi_{ijk}\right)\right) \end{equation} So, its derivative w.r.t. a parameter $\pi_{ijk}$ is: \begin{center} $\frac{\partial \pi LL(\pi,G,\mathcal{D})}{\partial \pi_{ijk}}= \frac{N_{ijk}}{\pi_{ijk}}-\frac{N_{ijr_i}}{S_{i}-\sum_{k=1}^{r_i-1} \pi_{ijk}} =\frac{N_{ijk}}{\pi_{ijk}}-\frac{N_{ijr_i}}{\pi_{ijr_i}} $ \end{center} So, the value $\hat{\pi}_{ijk}$ of the parameter $\pi_{ijk}$ maximizing the possibilistic likelihood sets this derivative equal to 0 and thereby satisfies: \begin{center} $\frac{N_{ijk}}{\hat{\pi}_{ijk}}=\frac{N_{ijr_i}}{\hat{\pi}_{ijr_i}} $ \end{center} We then have: \begin{center} $\frac{N_{ij1}}{\hat{\pi}_{ij1}}=\frac{N_{ij2}}{\hat{\pi}_{ij2}}=...=\frac{N_{ij(r_i-1)}}{\hat{\pi}_{ij(r_i-1)}}=\frac{N_{ijr_{i}}}{\hat{\pi}_{ijr_{i}}} = \frac{\sum_{k=1}^{r_i} N_{ijk}}{\sum_{k=1}^{r_i} \hat{\pi}_{ijk}}=\frac{\sum_{k=1}^{r_{i}} N_{ijk}}{S_{i}}$ \end{center} So, $\hat{\pi}_{ijk}=\frac{N_{ijk}}{\sum_{k=1}^{r_i} {N_{ijk}}}$*$S_{i}$. \end{proof} Note that $S_i$ corresponds to the imprecision degree relative to a variable $X_i$ and could be fixed by an expert, inferred from the dataset we learn from, or based on the variables' description. To obtain normalized possibility distributions, we divide every obtained distribution by its maximum. This operation eliminates the effect of the imprecision degree and lets us remain objective in the learning task. However, it remains possible to fix an imprecision degree per value of the variables of the studied domain. Note that if some obtained possibility degrees are equal to zero, we add an initial count of 1 to all counts $N_{ijk}$, and these counts are then added to the total number of instances.
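The estimator $\hat{\pi}_{ijk}$ and the subsequent max-normalization can be sketched as follows for one parent configuration $j$ (a sketch with our own naming; the smoothing rule follows the note above):

```python
def estimate_parameters(counts, S_i):
    """Maximum-possibilistic-likelihood estimate for one parent configuration j.
    `counts` maps each value x_ik to its count N_ijk (an imprecise observation
    may contribute to several counts). Returns a normalized possibility
    distribution (maximum equal to 1)."""
    # Smoothing: if any count is zero (which would yield a zero possibility),
    # add an initial count of 1 to every instance, as described above.
    if 0 in counts.values():
        counts = {x: n + 1 for x, n in counts.items()}
    total = sum(counts.values())
    # hat{pi}_ijk = N_ijk / sum_k N_ijk * S_i
    pi = {x: (n / total) * S_i for x, n in counts.items()}
    # Divide by the maximum so the distribution is normalized, which removes
    # the effect of the imprecision degree S_i.
    top = max(pi.values())
    return {x: v / top for x, v in pi.items()}
```

For instance, counts $\{6, 3, 1\}$ yield the normalized possibility degrees $\{1, 1/2, 1/6\}$ regardless of the chosen $S_i$.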
\section{Conclusion} In this paper, we propose an evaluation strategy for possibilistic network parameter learning algorithms. A sampling method has been proposed to generate an imprecise dataset from a possibilistic network. In the second part of this paper, we propose a new \emph{product-based} possibilistic network parameter learning algorithm based on a possibilistic likelihood function exploring the link between random sets theory and possibility theory. \bibliographystyle{plain}
\section{Introduction} Exit problems for random perturbations of dynamical systems form an important classical field in the theory of stochastic processes. These problems provide a multitude of interesting questions at the intersection of dynamical systems and stochastic analysis, and are tightly related to asymptotic analysis of linear second order parabolic and elliptic equations with a small parameter. The most celebrated results for exit problems are large deviation estimates for the exit location and exit time by Freidlin and Wentzell~(see, e.g., \cite{FW2012}), in the case of a domain containing one or several stable equilibria where the dynamics exhibits metastable behavior. There are situations though where the analysis at the level of large deviations is not sufficient, and one is forced to study distributional scaling limits for the exit distributions. In~\cite{Bak2010} and~\cite{Bak2011}, this kind of analysis was carried out for diffusions near noisy heteroclinic networks, where multiple hyperbolic critical points (or, {\it saddle points}) of the deterministic dynamics are connected to each other by {\it heteroclinic orbits} (or, {\it connections}). If small noise is present, a typical trajectory near such a network spends a long time diffusing near the critical points, where the vector field is weak, eventually deciding between outgoing heteroclinic connections and following one of them until it reaches the neighborhood of the next saddle point. Consequently, a natural approach based on the strong Markov property was an iterative study of the exit from the neighborhood of the saddles and the motion along heteroclinic paths. 
Early results in this direction~\cite{Kifer1981}, \cite{Eizenberg:MR749377}, \cite{Bak2008}, \cite{Day95}, \cite{Mikami1995} established that with high probability, the exit from a neighborhood of an unstable equilibrium happens along the manifold associated with the top eigenvalue $\lambda>0$ of the linearization of the system and that the leading order asymptotics of the exit time is deterministic and is of the order of $\lambda^{-1}\log{\varepsilon}^{-1}$. However, these results were not detailed enough to allow for an efficient iteration scheme. The necessary refinement of the analysis of the exit distribution was developed in~\cite{Bak2010}, \cite{Bak2011} (see also \cite{AB2011} where a technical no-resonance requirement was lifted for planar systems). This led to the first rigorous mathematical description of non-Markovian limiting effects and other behaviors in such systems despite the existing nonrigorous studies in~\cite{ASK03}, \cite{AS1999}, \cite{SH1990}. For a recent survey on heteroclinic networks, see \cite{Field2015}. In \cite{Bak2010}, \cite{Bak2011}, and \cite{AB2011}, it was assumed that the top eigenvalues of the linearizations of the system near the critical points were simple. It was obtained then that if one starts near the critical point (or its stable manifold), then in the vanishing noise limit, the exit distribution satisfies a scaling limit theorem with explicitly computed scaling exponent and limiting distribution. In this paper, we are interested in a situation where the geometric multiplicity of the leading eigenvalue $\lambda>0$ is equal to~$1$ and the algebraic multiplicity equals the dimension of the unstable manifold. For simplicity, we exclude the presence of the stable manifold, although our analysis carries over to the hyperbolic situation with obvious modifications. 
Namely, we consider a vector field in arbitrary dimension, with one fully unstable critical point and linearization given by a matrix whose Jordan form contains exactly one Jordan block of full dimension. With random initial conditions close to the critical point, we consider the small white noise perturbation of this vector field and study the limiting behavior of the joint distribution of the exit point and exit time in the limit of vanishing noise. Curiously, the limiting behavior is more involved compared to the case of the leading eigenvalue of algebraic multiplicity one, where the exit point satisfies a simple limit theorem. Namely, in our setting, we obtain that for small values of the noise magnitude ${\varepsilon}$, the exit happens near one of two points $q_+$ and $q_-$ associated with the main direction of the Jordan basis and, near each of $q_{\pm}$, the random exit point $z_{\varepsilon}$ can be represented by the following expansion: \begin{equation} \label{eq:main_expansion} z_{\varepsilon}=q_{\pm} + \left(\frac{1}{\log {\varepsilon}^{-1}} +\frac{(d-1) \log\log{\varepsilon}^{-1}}{\log^2{\varepsilon}^{-1}} +\frac{\eta}{\log^2{\varepsilon}^{-1}}\right)h^{\pm}_1\\ +\frac{1}{\log^2{\varepsilon}^{-1}}h^{\pm}_2+o_{\mathrm{P}}\left(\frac{1}{\log^2{\varepsilon}^{-1}}\right), \end{equation} for some deterministic vectors $h_1^{\pm},h_2^{\pm}$ and a random variable $\eta$. In other words, given the direction of the exit (``$+$'' or ``$-$''), the leading correction to $q_{\pm}$ is deterministic and equals \[\left(\frac{1}{\log {\varepsilon}^{-1}}+\frac{(d-1) \log\log{\varepsilon}^{-1}}{\log^2{\varepsilon}^{-1}}\right)h^{\pm}_1,\] while the remainder \[ \frac{1}{\log^2{\varepsilon}^{-1}}\left(\eta h^{\pm}_1+h^{\pm}_2\right)+o_{\mathrm{P}}\left(\frac{1}{\log^2{\varepsilon}^{-1}}\right) \] satisfies a scaling limit theorem. 
Moreover, we show that given the direction of the exit (``$+$'' or ``$-$''), the exit time satisfies \begin{equation}\label{eq:tau-D-e-asymp-in-intro} \tau_{{\mathfrak{D}}}^{\varepsilon}=\frac{1}{\lambda}\log{\varepsilon}^{-1}-\frac{d-1}{\lambda}\log\log{\varepsilon}^{-1}+\rho+C^{\pm}+o_{\mathrm{P}}(1), \end{equation} for a centered random variable $\rho$ that does not depend on the direction of the exit and deterministic constants $C^{\pm}$. In fact, this is also in contrast with the case of the leading eigenvalue of algebraic multiplicity $1$, where the leading deterministic term is simply $\frac{1}{\lambda}\log{\varepsilon}^{-1}$. We note that \eqref{eq:tau-D-e-asymp-in-intro} was first obtained in~\cite{Buterakos} for the case where the drift contains no other terms except for the linear one given by the Jordan block. The precise statements of our results are given in Section~\ref{sec:setting}. We remark that, according to \eqref{eq:main_expansion}, the leading contributions to the deviation from $q_{\pm}$ happen along $h_1^{\pm},h_2^{\pm}$. In fact, our proof also shows how to compute smaller contributions along other directions. We also note that it is easy to obtain a generalization of our result for the case where the linearization has other eigenvalues besides the leading $\lambda$. The paper is organized as follows. In Section~\ref{sec:setting}, we describe the setting and the main result. The proof of the main result in Section~\ref{sec:proof-of-main} is based on the analysis of the linearized system in Section~\ref{sec:proof-of-linear}. {\bf Acknowledgment.} We would like to thank the referee for valuable constructive remarks. They helped to improve the paper in various ways. Yuri Bakhtin gratefully acknowledges partial support from NSF via grant DMS-1460595. 
\section{Setting and main result}\label{sec:setting} We will consider the family of stochastic differential equations \begin{equation}\label{eq:SDE} dX_{{\varepsilon}}(t)=b\left(X_{{\varepsilon}}(t)\right)dt+{\varepsilon}\sigma\left(X_{{\varepsilon}}(t)\right)dW(t), \end{equation} on a bounded domain ${\mathfrak{D}}_0\subseteq \mathbb{R}^d$, $d\in{\mathbb N}$. Our results are most meaningful for $d\ge 2$, but we include $d=1$ for completeness. The drift is given by a vector field~$b\in\mathcal{C}^{2}({\mathfrak{D}}_0;\mathbb{R}^d)$. The random perturbation is given via a standard $d$-dimensional Brownian motion $W=(W_1,\ldots,W_d)$ defined on some probability space $(\Omega,\mathcal{F},\mathrm{P})$. The noise magnitude is given by a small parameter ${\varepsilon}>0$ in front of the diffusion coefficient $\sigma$, which is assumed to be a $\mathcal{C}^2$-smooth uniformly elliptic matrix-valued function, i.e., $\sigma\in\mathcal{C}^2\left({\mathfrak{D}}_0;M_d(\mathbb{R})\right)$, where $M_d(\mathbb{R})$ is the space of $d$-by-$d$ matrices with real entries, and there are positive constants $\sigma_{\min},\sigma_{\max}$ such that \[ \sigma_{\min}|\xi|^2\leq\langle \sigma(x)\xi,\xi\rangle \le \sigma_{\max}|\xi|^2 \qquad\forall \xi\in\mathbb{R}^d,\ x\in{\mathfrak{D}}_0. \] Here $\langle\cdot,\cdot\rangle$ is the standard inner product and $|\cdot|$ is the Euclidean norm in ${\mathbb R}^d$. We will also use $\mathop{\mathrm{dist}}(\cdot,\cdot)$ for the Euclidean point-to-point and point-to-set distances in ${\mathbb R}^d$. Standard results on stochastic differential equations (see, e.g., \cite{KS1991}) imply that for any starting location $X_{{\varepsilon}}(0)\in{\mathfrak{D}}_0$, the equation~\eqref{eq:SDE} has a unique strong solution up to \[ \tau_{{\mathfrak{D}}_0}^{{\varepsilon}}=\inf\{t\geq 0: X_{{\varepsilon}}(t)\in\partial {\mathfrak{D}}_0\}, \] the exit time from ${\mathfrak{D}}_0$.
Let $(S^t)$ be the flow generated by the vector field $b$, i.e., $x(t)=S^tx_0$ is the solution of the autonomous ordinary differential equation \begin{equation}\label{eq:deterministic-ODE} \dot{x}(t)=b(x(t)),\qquad x(0)=x_0\in{\mathfrak{D}}_0. \end{equation} This flow is defined forwards and backwards in time as long as the trajectory stays within ${\mathfrak{D}}_0$. In this paper, we are interested in the asymptotic behavior, as ${\varepsilon}\downarrow 0$, of the distribution of the exit location and the exit time \[ \tau_{{\mathfrak{D}}}^{{\varepsilon}}=\inf\{t>0: X_{{\varepsilon}}(t)\notin{\mathfrak{D}}\} \] from a subdomain ${\mathfrak{D}}$ compactly contained in ${\mathfrak{D}}_0$. We make the following assumptions on ${\mathfrak{D}}$ and the vector field $b$: \begin{enumerate}[(I)] \item The limit set of $S^{t}$ in ${\mathfrak{D}}$ consists of a single point assumed to be the origin~$0$ without loss of generality. \item For every $x\in\bar{{\mathfrak{D}}}\setminus\{0\}$, there is a time $T(x)$ such that $S^tx\notin {\mathfrak{D}}$ for $t>T(x)$ while $S^tx\in{\mathfrak{D}}$ for all $-\infty<t<T(x)$. Here $\bar{\mathfrak{D}}$ denotes the closure of ${\mathfrak{D}}$. 
We will denote the exit point associated with $x$ by $\pi(x)$: \begin{equation} \label{eq:deterministic_exit_points} \pi (x) = S^{T(x)}x,\quad x\in \bar{\mathfrak{D}}\setminus\{0\}. \end{equation} \item \label{cond:linear-part-Jordan} The vector field satisfies \[ b(x)=Ax+\psi(x)|x|^2, \qquad x\in {\mathfrak{D}}, \] where $\psi$ is a ${\mathcal C}^2$ vector-valued function on ${\mathfrak{D}}_0$, and $A=Db(0)$ ($D$ stands for the Jacobian matrix) is a $d$-by-$d$ matrix with one real eigenvalue $\lambda>0$ of geometric multiplicity $1$ but algebraic multiplicity $d$, i.e., it is similar to a single Jordan block \begin{equation}\label{eq:linearized-matrix} \begin{bmatrix} \lambda & 1 & 0 & 0 & \dots & 0 \\ 0 & \lambda & 1 & 0 &\dots & 0 \\ \vdots & \ddots & \ddots & \ddots & \ddots & \vdots\\ 0 & 0 & \dots & \lambda & 1 & 0\\ 0 & 0 & \dots & 0& \lambda & 1\\ 0 & 0 & \dots &0 & 0 & \lambda \end{bmatrix}. \end{equation} We assume without loss of generality that $A$ is already of this form, i.e., that the generalized eigenvector basis $\{e_1,\dots, e_d\}$ coincides with the canonical basis of $\mathbb{R}^d$. \end{enumerate} We define $q_-, q_+\in \partial{\mathfrak{D}}$ to be the points such that the curve \[ \gamma=\gamma_+\cup\gamma_-\cup\{0\},\qquad \gamma_\pm=\{S^{-t}q_\pm: t\in\mathbb{R}_+\}, \] is $\mathcal{C}^2$-smooth and tangent to the eigenvector $e_1$ at the origin. \begin{enumerate} \item[(IV)] \label{cond:transversality} We require $\partial{\mathfrak{D}}$ to be ${\mathcal C}^2$ in neighborhoods of $q_-$ and $q_+$ and transversal to $\gamma$ at these points. \end{enumerate} We assume without loss of generality that $e_1$ points in the direction of $q_+$. The importance of these boundary points comes from the fact that the distribution of $X_{{\varepsilon}}(\tau_{{\mathfrak{D}}}^{{\varepsilon}})$ is asymptotically concentrated on $\{q_-,q_+\}$.
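For orientation, here is a minimal example satisfying conditions (I)--(IV); it is purely illustrative and not used later. Take $d=2$, $\lambda>1/2$, and the linear drift
\[
b(x)=Ax,\qquad A=\begin{bmatrix} \lambda & 1\\ 0 & \lambda\end{bmatrix},
\]
with ${\mathfrak{D}}$ the open unit disk. Since
\[
\frac{d}{dt}|S^tx_0|^2=2\lambda|S^tx_0|^2+2(S^tx_0)^{(1)}(S^tx_0)^{(2)}\geq (2\lambda-1)|S^tx_0|^2>0,\qquad x_0\neq 0,
\]
the radius grows strictly along every nonzero trajectory, so (I) and (II) hold. The line $\mathrm{span}(e_1)$ is invariant under $e^{At}$, so $\gamma$ is the horizontal diameter, $q_\pm=(\pm 1,0)$, and the boundary circle is transversal to $\gamma$ at these points, which gives (IV).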
Our main result describes the joint fluctuations of the random exit location around this limit and the exit time $\tau_{{\mathfrak{D}}}^{{\varepsilon}}$. For a vector $y\in{\mathbb R}^d$, we denote by $y^{(i)}=\langle y, e_i \rangle$ its $i$-th component in the canonical basis. \begin{theorem}\label{thm:main-theorem} Assuming the setting described above, let $X_{{\varepsilon}}(0)={\varepsilon}\xi_{{\varepsilon}}$, where $\xi_{{\varepsilon}}$ is a family of $d$-dimensional random variables independent of $W$ and converging in probability to some random variable $\xi_0$. Then on the same probability space there are events $A^{\pm}$, $d$-dimensional random variables $(\mu^{\pm}_{\varepsilon})_{{\varepsilon}>0}$, $1$-dimensional random variables $\rho$, $\eta$, $(\theta_{\varepsilon}^{\pm})_{{\varepsilon}>0}$, deterministic vectors $h_1^{\pm},h_2^{\pm}\in{\mathbb R}^d$, and constants $C^{\pm}\in{\mathbb R}$ with the following properties:\\ on $A^\pm$, \begin{equation}\label{eq:tau-D-e-asymp-in-theorem} \tau_{{\mathfrak{D}}}^{\varepsilon}=\frac{1}{\lambda}\log{\varepsilon}^{-1}-\frac{d-1}{\lambda}\log\log{\varepsilon}^{-1}+\rho+C^{\pm}+\theta^{\pm}_{\varepsilon} \end{equation} and \begin{multline} \label{eq:asymptotics-of-global-exit-location} X_{\varepsilon}(\tau_{\mathfrak{D}}^{\varepsilon})=q_{\pm} + \left(\frac{1}{\log {\varepsilon}^{-1}} +\frac{(d-1) \log\log{\varepsilon}^{-1}}{\log^2{\varepsilon}^{-1}} +\frac{\eta}{\log^2{\varepsilon}^{-1}}\right)h^{\pm}_1\\ +\frac{1}{\log^2{\varepsilon}^{-1}}h^{\pm}_2+\frac{\mu_{\varepsilon}^{\pm}}{\log^2{\varepsilon}^{-1}}; \end{multline} \[ \theta^{\pm}_{\varepsilon}\stackrel{\mathrm{P}}{\to} 0,\quad \mu^{\pm}_{\varepsilon}\stackrel{\mathrm{P}}{\to}0,\quad {\varepsilon}\downarrow 0; \] if $d=1$, then $h_1^\pm=h_2^\pm=\mu_{\varepsilon}^{\pm}=0$; if $d\ge 2$, the vector $h_1^\pm$ is tangent to $\partial {\mathfrak{D}}$ at $q_\pm$.
If $\partial{\mathfrak{D}}$ is flat (coincides with a hyperplane of codimension~$1$) in a small neighborhood of~$q_\pm$, then $h_2^\pm$ is also tangent to~$\partial{\mathfrak{D}}$. Moreover, the escape trajectory converges to the curve $\gamma$, i.e., \begin{equation} \sup_{0\le t\le \tau_{{\mathfrak{D}}}^{\varepsilon}} \mathop{\mathrm{dist}}(X_{\varepsilon}(t),\gamma)\stackrel{\mathrm{P}}{\to}0,\quad {\varepsilon}\downarrow 0. \label{eq:exit_happens_along_gamma} \end{equation} \end{theorem} {\noindent \bf Remarks:} \begin{enumerate} \item Precise expressions for the random variables involved in the statement of this theorem will be given in the course of the proof and in the auxiliary statements that we will invoke. The events $A^\pm$ in this theorem are defined by $A^\pm=\{\mathop{\mathrm{sign}}\chi^{(d)}=\pm 1\}=\{\pm \chi^{(d)}>0\}$. Here~$\chi$ is a random vector responsible for the asymptotic direction of exit introduced in the main auxiliary Theorem~\ref{thm:linear-result}, see~\eqref{eq:eta-and-chi}. It is defined in \eqref{eq:introducing_chi} in terms of the ingredients of the variation of constants formula (the initial condition and the contribution from noise) for an auxiliary equation defined in \eqref{eq:definitions_of_N_and_D}. The random variable $\eta$ is also defined in~\eqref{eq:eta-and-chi}, in the statement of Theorem~\ref{thm:linear-result}. \item The random variable $\rho$ serves both directions of exit. The only difference between the two directions in the asymptotic behavior of the exit time is encoded in the constants $C^{\pm}$. In fact,~$\rho$~is defined only up to an additive shift that has to be compensated by adjusting~$C^{\pm}$. One can achieve uniqueness of $\rho$ and $C^{\pm}$ by requiring $\mathsf{E}\rho=0$.
\item As will be clear from the proof, the direction of exit and the scaling limit of the exit distribution are asymptotically determined by the noise picked up in an infinitesimal neighborhood of the origin in the directions of $e_{d-1}$ and $e_d$. \item It will become clear that in some situations we can, in fact, provide more detailed information than Theorem~\ref{thm:main-theorem}. A nice formulation is possible, for example, in the linear case, see \eqref{eq:precise-formula-in-all-directions}. \item Although it is possible to consider more general scalings $X_{\varepsilon}(0)={\varepsilon}^\alpha \xi_{\varepsilon}$ for a convergent family $(\xi_{\varepsilon})_{{\varepsilon}\geq 0}$ and an arbitrary scaling exponent $\alpha>0$, it will be clear from our analysis that the case $\alpha=1$ considered in Theorem~\ref{thm:main-theorem} is the most interesting one. In fact, if $\alpha<1$, then the noise is asymptotically negligible and the behavior is dominated by the deterministic dynamics, while the case $\alpha>1$ effectively reduces to $\alpha=1$ since ${\varepsilon}^\alpha \xi_{\varepsilon}={\varepsilon}\cdot{\varepsilon}^{\alpha-1}\xi_{\varepsilon}$ and ${\varepsilon}^{\alpha-1}\xi_{\varepsilon}\stackrel{\mathrm{P}}{\to}0$, so the influence of the initial condition asymptotically vanishes. \item In~\cite{Bak2010} and~\cite{Bak2011}, the results had to be stated in terms of convergence in distribution since the contributions from the stable directions were of the leading order of magnitude and converged only in distribution. In the setting of the present paper, in the absence of stable directions, we are able to state the results in terms of convergence in probability. However, our result still holds when nonleading eigenvalues, positive or negative, are present, as long as a smooth conjugation to linear dynamics exists.
In this case, the contributions from nonleading eigendirections are of smaller order than the scales relevant for the asymptotics in~\eqref{eq:asymptotics-of-global-exit-location}. \item One can restate the theorem for the situation where only convergence in distribution is required for the initial condition, and use Skorokhod's representation theorem on realization of weak convergence by almost sure convergence. \item Let us emphasize the connection to the existing results. It was shown for a more general setting in \cite{Eizenberg:MR749377} that the marginal distribution of $X_{{\varepsilon}}\left(\tau_{{\mathfrak{D}}}^{{\varepsilon}}\right)$ asymptotically concentrates on $\{q_+,q_-\}$. The precise asymptotics of the marginal limiting law of $\tau_{{\mathfrak{D}}}^{{\varepsilon}}$ was computed for linear drift in~\cite{Buterakos}. The main novelty in our result is the expansion \eqref{eq:asymptotics-of-global-exit-location} providing a precise asymptotic description of fluctuations of the random exit point around $q_\pm$, along with joint asymptotics for the fluctuations of the exit time. \end{enumerate} \section{Proof of Theorem \ref{thm:main-theorem}} \label{sec:proof-of-main} Our approach is based on two steps: (i) studying the system in a small neighborhood of the origin where a change of coordinates conjugates the dynamics to a linear system; (ii) describing the behavior of $X_{{\varepsilon}}$ as it follows the curve $\gamma$ between the linearizable neighborhood and the exit points~$q_\pm$. We start with the first part. 
It was demonstrated in \cite{Eizenberg:MR749377} that under condition \eqref{cond:linear-part-Jordan}, there is a neighborhood ${\mathfrak{U}}$ of the origin and a smooth diffeomorphism $f:{\mathfrak{U}}\to\mathbb{R}^d$ given by \begin{equation} \label{eq:def-of-f} f(x)=\lim_{t\to\infty} e^{At} S^{-t}x=x-\int_0^\infty e^{As}\psi(S^{-s}x)|S^{-s}x|^2 ds, \end{equation} with inverse $g$ that conjugates the linear and non-linear dynamics, i.e., \begin{equation}\label{eq:conjugation-relation} f(S^tx)=e^{At}f(x)\qquad\textrm{or}\qquad Df(x)b(x)=Af(x). \end{equation} The integral term in~\eqref{eq:def-of-f} is quadratic to the leading order in small $x$, which implies \begin{equation} f(0)=0,\qquad Df(0)=I, \label{eq:Df(0)} \end{equation} where $I$ is the identity matrix. When $X_{{\varepsilon}}(0)\in {\mathfrak{U}}$, let $\tau_{{\mathfrak{U}}}^{{\varepsilon}}$ be the first time when $X_{{\varepsilon}}(t)$ exits ${\mathfrak{U}}$. If we set $Y_{{\varepsilon}}(t)=f(X_{{\varepsilon}}(t))$, then It\^o's formula and \eqref{eq:conjugation-relation} imply that this process satisfies the stochastic differential equation \begin{equation}\label{eq:linear-SDE} dY_{{\varepsilon}}(t)=AY_{{\varepsilon}}(t)dt+{\varepsilon}\tilde{\sigma}\left(Y_{{\varepsilon}}(t)\right)dW(t)+\frac{{\varepsilon}^2}{2} L(Y_{{\varepsilon}}(t))dt, \end{equation} for $t<\tau_{{\mathfrak{U}}}^{{\varepsilon}}$, where \[ \tilde{\sigma}(y)=Df\left(g(y)\right)\sigma(g(y)),\qquad L_i(y)=\sum_{j,l=1}^d\partial_j\partial_l f_i(g(y))a_{jl}\left(g(y)\right),\quad i=1,\dots,d, \] and $a(x)=(\sigma\sigma^T)(x)$. We denote $\|y\|_{\infty}=\max\{|y^{(k)}|:\ k=1,\dots,d\}$. We are going to study the precise asymptotics of the exit time and location from the box \[ {\mathfrak{B}}=\{\|y\|_{\infty}\leq R\}, \] where $R$ is chosen small enough such that $g({\mathfrak{B}})\subset{\mathfrak{U}}$.
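As a simple illustration of the conjugation \eqref{eq:def-of-f} (the specific drift below is a hypothetical example and plays no role in the sequel), take $d=1$ and $b(x)=\lambda x+cx^2$, so that $\psi\equiv c$. Solving \eqref{eq:deterministic-ODE} backwards in time gives
\[
S^{-t}x=\frac{xe^{-\lambda t}}{1+\frac{c}{\lambda}x\left(1-e^{-\lambda t}\right)},
\]
so that, for $|x|$ small,
\[
f(x)=\lim_{t\to\infty}e^{\lambda t}S^{-t}x=\frac{\lambda x}{\lambda+cx},
\]
and one checks directly that $f(0)=0$, $f'(0)=1$, and $f(S^tx)=e^{\lambda t}f(x)$, in agreement with \eqref{eq:Df(0)} and \eqref{eq:conjugation-relation}.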
Namely, the following theorem, proved in Section \ref{sec:proof-of-linear}, characterizes the joint scaling behavior, as ${\varepsilon}\downarrow 0$, of the stopping time \[ \tau_{\mathfrak{B}}^{\varepsilon} = \inf\{t>0:\ \|Y_{\varepsilon}(t)\|_{\infty}=R\} \] and the exit point $Y_{{\varepsilon}}(\tau_{\mathfrak{B}}^{{\varepsilon}})$. \begin{theorem}\label{thm:linear-result} Let $Y_{{\varepsilon}}(0)={\varepsilon}\tilde{\xi}_{{\varepsilon}}$, where $\tilde{\xi}_{{\varepsilon}}$ is a family of $d$-dimensional random variables independent of $W$ and converging in probability to some $\tilde{\xi}_0$. Then, on the same probability space, there are $d$-dimensional random variables $N$, $\chi$, $(\zeta_{\varepsilon})_{{\varepsilon}>0}$, $1$-dimensional random variables $\rho, \eta$, $(\theta_{\varepsilon}^\pm)_{{\varepsilon}>0}$ with the following properties: \begin{equation}\label{eq:tau-B-e-asymp-in-theorem} \tau_{{\mathfrak{B}}}^{\varepsilon}=\frac{1}{\lambda}\log{\varepsilon}^{-1}-\frac{d-1}{\lambda}\log\log{\varepsilon}^{-1}+\rho+\theta_{\varepsilon}^{\pm}; \end{equation} \[ \rho=-\frac{1}{\lambda}\log\frac{|\chi^{(d)}|}{R(d-1)!\lambda^{d-1}}; \] on $A^\pm=\{\pm \chi^{(d)}>0\}$, \begin{multline} \label{eq:asymptotics-of-exit-from-small} Y_{\varepsilon}(\tau_{\mathfrak{B}}^{\varepsilon})=\pm R\Biggl[e_1+ \lambda(d-1)\left(\frac{1}{\log {\varepsilon}^{-1}} +\frac{(d-1) \log\log{\varepsilon}^{-1}}{\log^2{\varepsilon}^{-1}} +\frac{\eta}{\log^2{\varepsilon}^{-1}}\right)e_2 \\+\frac{\lambda^2(d-1)(d-2)}{\log^2{\varepsilon}^{-1}}e_3+\frac{\zeta_{\varepsilon}}{\log^2{\varepsilon}^{-1}}\Bigg]; \end{multline} \[ \theta^\pm_{\varepsilon}\stackrel{\mathrm{P}}{\to} 0,\quad \zeta_{\varepsilon}\stackrel{\mathrm{P}}{\to}0,\quad {\varepsilon}\downarrow 0; \] \[ \langle\zeta_{\varepsilon},e_1\rangle=0; \] \begin{equation} \label{eq:eta-and-chi} \chi=\tilde{\xi}_0+N,\quad\eta=-\lambda\frac{\chi^{(d-1)}}{\chi^{(d)}}+\log\frac{|\chi^{(d)}|}{R(d-1)!\lambda^{d-1}}; \end{equation} $N$ is 
independent of $\tilde \xi_0$, it is centered Gaussian, with covariance matrix given by \begin{equation}\label{eq:limit-cov-small-box} \mathrm{E}N^{(i)}N^{(j)}=\sum_{p=0}^{d-i}\sum_{q=0}^{d-j}\binom{p+q}{q} \frac{(-1)^{p+q}}{(2\lambda)^{p+q+1}}a_{p+i,q+j}(0); \end{equation} Also, \begin{equation} \label{eq:motion-is-close-to-axis} \sup_{t\leq \tau_{\mathfrak{B}}^{\varepsilon}}\mathrm{dist}\big(Y_{\varepsilon}(t),\mathrm{span}(e_1)\big)\stackrel{\mathrm{P}}{\to} 0,\quad {\varepsilon} \downarrow 0. \end{equation} \end{theorem} \begin{remark}\rm The term containing $e_3$ in~\eqref{eq:asymptotics-of-exit-from-small} is not present for $d=1,2$. The term containing~$e_2$ in~\eqref{eq:asymptotics-of-exit-from-small} does not appear for $d=1$. This may be formally achieved by setting $e_i=0$ for $i>d$ and also can be formally seen from the presence of factors $(d-1)$ and $(d-2)$ in front of these terms. In fact, in the case $d=1$, the identity~\eqref{eq:asymptotics-of-exit-from-small} is trivial, and the identity~\eqref{eq:tau-B-e-asymp-in-theorem} is contained in~\cite{Day95}. \end{remark} \begin{remark}\rm Only components $N^{(d-1)}$, $N^{(d)}$ are effectively used in the theorem, but it is convenient to introduce all $d$ coordinates to be used in the proof. \end{remark} \begin{remark}\rm The theorem implies that the asymptotic choice of the outgoing direction is described by \[ \mathrm{P}\left\{Y_{{\varepsilon}}^{(1)}(\tau_{{\mathfrak{B}}}^{{\varepsilon}})=\pm R\right\}\to \mathrm{P}\left\{\pm \chi^{(d)}>0\right\},\quad {\varepsilon} \downarrow 0. 
\] \end{remark} \begin{corollary} \label{cor:linear-result-to-X} There are deterministic vectors $u^{\pm}_1,u^{\pm}_2\in{\mathbb R}^d$, and a family of random vectors $(\beta^{\pm}_{\varepsilon})_{{\varepsilon}>0}$ such that $\beta_{\varepsilon}^{\pm}\stackrel{\mathrm{P}}{\to}0$ and, on the events $A^{\pm}$ introduced in Theorem~\ref{thm:linear-result}, \begin{multline} \label{eq:asymptotics-of-exit-from-small-transformed} X_{\varepsilon}(\tau_{\mathfrak{B}}^{\varepsilon})=g(Y_{\varepsilon}(\tau_{\mathfrak{B}}^{\varepsilon}))=g(\pm Re_1) + \left(\frac{1}{\log {\varepsilon}^{-1}} +\frac{(d-1) \log\log{\varepsilon}^{-1}}{\log^2{\varepsilon}^{-1}} +\frac{\eta}{\log^2{\varepsilon}^{-1}}\right)u^{\pm}_1\\ +\frac{1}{\log^2{\varepsilon}^{-1}}u^{\pm}_2+\frac{\beta^{\pm}_{\varepsilon}}{\log^2{\varepsilon}^{-1}}. \end{multline} If $d=1$, then $u^{\pm}_1=u^{\pm}_2=\beta_{\varepsilon}^\pm=0$. If $d=2$, then $u^{\pm}_1$ and $u^{\pm}_2$ are tangent to $g(\partial {\mathfrak{B}})$ at $g(\pm Re_1)$ and collinear to each other. Also, \begin{equation} \label{eq:tracking_gamma_in_small_neighb} \sup_{0\le t\le \tau_{{\mathfrak{B}}}^{\varepsilon}} \mathop{\mathrm{dist}}(X_{\varepsilon}(t),\gamma)\stackrel{\mathrm{P}}{\to}0,\quad {\varepsilon}\downarrow 0. \end{equation} \end{corollary} \begin{proof} The expansion \eqref{eq:asymptotics-of-exit-from-small-transformed} follows directly from~\eqref{eq:asymptotics-of-exit-from-small} and the Taylor expansion of $g$ near~$\pm Re_1$ if we take into account that the only nonnegligible nonlinear contributions come from quadratic terms and appear in the form of $1/\log^2{\varepsilon}^{-1}$ with coefficients given by second partial derivatives of $g$ at $\pm Re_1$.
\end{proof} \bigskip \noindent\textit{Proof of Theorem \ref{thm:main-theorem}.~} This proof is based on~Theorem~\ref{thm:linear-result}, its Corollary~\ref{cor:linear-result-to-X}, the strong Markov property, and a simple analysis of the evolution of $X_{{\varepsilon}}$ along $\gamma$ between leaving $g({\mathfrak{B}})$ and exiting~${\mathfrak{D}}$, which is dominated by the deterministic dynamics applied to the initial condition $X_{\varepsilon}(\tau_{{\mathfrak{B}}}^{{\varepsilon}})$ since the noise contributions are much smaller. The restriction of the map $\pi$ defined in~\eqref{eq:deterministic_exit_points} to a relative neighborhood $U\subset g(\partial {\mathfrak{B}})$ of $g(\pm Re_1)$ is the Poincar\'e map for the flow $(S^t)$ between the surfaces $U$ and $\partial{\mathfrak{D}}$. Due to our smoothness assumptions and the transversality condition~(IV), $\pi$ is ${\mathcal C}^2$ in~$U$. So if $X_{\varepsilon}(\tau_{\mathfrak{B}}^{\varepsilon})\in U$, then applying the second order Taylor expansion of $\pi$ near $g(\pm Re_1)$ to the expansion~\eqref{eq:asymptotics-of-exit-from-small-transformed} of $X_{\varepsilon}(\tau_{\mathfrak{B}}^{\varepsilon})$ gives \begin{multline} \label{eq:asymptotics-under-Poincare-map} \pi\left(X_{\varepsilon}(\tau_{\mathfrak{B}}^{\varepsilon})\right)=q_{\pm} + \left(\frac{1}{\log {\varepsilon}^{-1}} +\frac{(d-1) \log\log{\varepsilon}^{-1}}{\log^2{\varepsilon}^{-1}} +\frac{\eta}{\log^2{\varepsilon}^{-1}}\right)h^{\pm}_1 \\ +\frac{1}{\log^2{\varepsilon}^{-1}}h^{\pm}_2+\frac{\tilde \beta_{\varepsilon}^{\pm}}{\log^2{\varepsilon}^{-1}} \end{multline} for some $\tilde \beta_{\varepsilon}^{\pm}\stackrel{\mathrm{P}}{\to}0$.
Namely, $h_1^\pm$ is linear in $u^\pm_1$: \[h_1^\pm = D \pi (g(\pm Re_1))u^\pm_1,\] whereas $h_2^\pm$ is composed of the linear part $D \pi (g(\pm Re_1))u^\pm_2$ and a quadratic form in $u^\pm_1$: \[ {h_2^\pm}^{(k)}=\sum_{i} \frac{\partial}{\partial x^{(i)}} \pi^{(k)} (g(\pm Re_1)){u^\pm_2}^{(i)} +\frac{1}{2}\sum_{i,j} \frac{\partial^2}{\partial x^{(i)}\partial x^{(j)}} \pi^{(k)} (g(\pm Re_1)){u^\pm_1}^{(i)}{u^\pm_1}^{(j)}. \] The classical expansion of solutions in powers of small ${\varepsilon}$ on finite time intervals, see~\cite[Chapter 2]{FW2012}, implies that for any $T$, \begin{equation*} \mathrm{P}\left\{\sup_{t\in [0,T]} \left|X_{\varepsilon}(\tau^{\varepsilon}_{\mathfrak{B}}+t) - S^t g(X_{\varepsilon}(\tau^{\varepsilon}_{\mathfrak{B}}))\right|>{\varepsilon}^{1/2}\right\}\to 0,\quad {\varepsilon}\to 0. \end{equation*} Since $S^t g(X_{\varepsilon}(\tau^{\varepsilon}_{\mathfrak{B}}))$ is itself close to $\gamma$, we can use the transversality condition to obtain~\eqref{eq:exit_happens_along_gamma} from~\eqref{eq:tracking_gamma_in_small_neighb} and to derive~\eqref{eq:tau-D-e-asymp-in-theorem} (with $C^\pm=T(g(\pm Re_1))$) from \eqref{eq:tau-B-e-asymp-in-theorem}. We also get \[ \mathrm{P}\{|X_{\varepsilon}(\tau_{{\mathfrak{D}}}^{{\varepsilon}})-\pi(X_{\varepsilon}(\tau_{\mathfrak{B}}^{\varepsilon}))|>{\varepsilon}^{1/2}\}\to 0, \quad {\varepsilon}\to 0. \] Combining this with~\eqref{eq:asymptotics-under-Poincare-map}, we obtain~\eqref{eq:asymptotics-of-global-exit-location} and complete the proof of Theorem~\ref{thm:main-theorem}. \qed \section{The linear system}\label{sec:proof-of-linear} In this section, we prove Theorem \ref{thm:linear-result}. We will repeatedly make use of the elementary formulas \begin{equation}\label{eq:elementary-formula} (1+x)^p=1+px+\mathcal{O}(x^2),\qquad \log(1+x)=x+\mathcal{O}(x^2),\quad x\to 0. 
\end{equation} Duhamel's formula for the SDE \eqref{eq:linear-SDE} implies \begin{equation}\label{eq:Duhamel} Y_{{\varepsilon}}(t)={\varepsilon} e^{At}\left[\tilde\xi_{{\varepsilon}}+ N_{{\varepsilon}}(t)+{\varepsilon} D_{{\varepsilon}}(t)\right],\quad t\le \tau_{{\mathfrak{U}}}^{\varepsilon}, \end{equation} where \begin{equation} \label{eq:definitions_of_N_and_D} N_{{\varepsilon}}(t)=\int_0^t e^{-As}\tilde{\sigma}(Y_{{\varepsilon}}(s))dW(s),\qquad D_{{\varepsilon}}(t)=\frac{1}{2}\int_0^t e^{-As} L(Y_{{\varepsilon}}(s))ds. \end{equation} Recall that \[ e^{At}=e^{\lambda t}\begin{bmatrix} 1 & t & \frac{ t^2}{2!} & \frac{ t^3}{3!} & \dots & \frac{ t^{d-1}}{(d-1)!} \\ 0 & 1 & t & \frac{t^2}{2!} &\dots & \frac{ t^{d-2}}{(d-2)!} \\ \vdots & \ddots & \ddots & \ddots & \ddots & \vdots\\ 0 & 0 & \dots & 1 & t & \frac{t^2}{2!}\\ 0 & 0 & \dots & 0& 1 & t\\ 0 & 0 & \dots &0 & 0 & 1 \end{bmatrix},\qquad e^{-As}=e^{-\lambda s}\begin{bmatrix} 1 & - s & \frac{(-s)^2}{2!} & \dots & \frac{(- s)^{d-1}}{(d-1)!} \\ 0 & 1 & - s &\dots & \frac{(- s)^{d-2}}{(d-2)!} \\ \vdots & \ddots & \ddots & \ddots & \vdots\\ 0 & 0 & \dots & 1 & - s\\ 0 & 0 & \dots & 0 & 1 \end{bmatrix}, \] so for any vector $\xi\in\mathbb{R}^d$, we have \begin{equation}\label{eq:multipl-with-exp-coordinatewise} (e^{At}\xi)^{(i)}=e^{\lambda t}\sum_{j=0}^{d-i}\frac{ t^j}{j!}\xi^{(i+j)},\qquad (e^{-As}\xi)^{(i)}=e^{-\lambda s}\sum_{j=0}^{d-i}\frac{(- s)^j}{j!}\xi^{(i+j)}. \end{equation} In particular, \begin{equation}\label{eq:sum-form-N} N_{{\varepsilon}}^{(i)}(t)=\int_0^te^{-\lambda s}\sum_{k=1}^d\sum_{j=0}^{d-i}\frac{(-s)^j}{j!}\tilde{\sigma}_{i+j,k}(Y_{{\varepsilon}}(s))\,dW_k(s), \end{equation} and \begin{equation}\label{eq:sum-form-D} D_{{\varepsilon}}^{(i)}(t)=\frac{1}{2}\int_0^te^{-\lambda s}\sum_{j=0}^{d-i}\frac{(- s)^j}{j!}L_{i+j}(Y_{{\varepsilon}}(s))\,ds. \end{equation} The following lemma implies that $N_{{\varepsilon}}(t)$ is of the order of one for all times while $D_{{\varepsilon}}(t)$ is bounded. 
\begin{lemma}\label{lem:a-priori-N-D} There are constants $c,D_0>0$ such that \begin{equation}\label{eq:bounded-D} \sup_{{\varepsilon}\geq 0}\mathrm{P}\left\{\sup_{t\leq\tau_{{\mathfrak{B}}}^{{\varepsilon}}}\|N_{{\varepsilon}}(t)\|_{\infty}>z\right\}\leq\frac{c}{z^2} ,\qquad\sup_{{\varepsilon}>0,\ t\leq\tau_{{\mathfrak{B}}}^{{\varepsilon}}}\|D_{{\varepsilon}}(t)\|_{\infty}\leq D_0. \end{equation} \end{lemma} \begin{proof}The second claim follows from $\|e^{-As}\|_{\infty}\leq C(1+s^{d-1})e^{-\lambda s}$ and the boundedness of $L$ (which is due to the boundedness of $\sigma$, $g$, and $f$ together with their derivatives): \[ \sup_{t\leq\tau_{{\mathfrak{B}}}^{{\varepsilon}}}\|D_{{\varepsilon}}(t)\|_{\infty}\leq C\sup_{t\geq 0}\int_0^te^{-\lambda s}(1+s^{d-1})ds<\infty. \] To prove the first claim, observe (from, e.g., \eqref{eq:sum-form-N}) that each component $N_{{\varepsilon}}^{(i)}(t)$ is a martingale. Thus, using Chebyshev's inequality and the BDG inequality, we write \begin{multline*} \mathrm{P}\left\{\sup_{t\leq\tau_{{\mathfrak{B}}}^{{\varepsilon}}}\|N_{{\varepsilon}}(t)\|_{\infty}>z\right\}\leq d\max_{i=1,\dots, d}\mathrm{P}\left\{\sup_{t\leq\tau_{{\mathfrak{B}}}^{{\varepsilon}}}|N_{{\varepsilon}}^{(i)}(t)|>z\right\}\\ \leq \frac{d\cdot\max_{i=1,\dots,d}\mathrm{E}\sup_{t\leq \tau_{{\mathfrak{B}}}^{{\varepsilon}}}|N_{{\varepsilon}}^{(i)}(t)|^2}{z^2}\leq C\frac{\max_{i=1,\dots,d}\sup_{t\geq 0}\mathrm{E}\langle N_{{\varepsilon}}^{(i)}\rangle_{t\wedge\tau_{{\mathfrak{B}}}^{{\varepsilon}}}}{z^2}, \end{multline*} where $\langle\cdot\rangle$ is the quadratic variation process. The right-hand side is uniformly bounded due to the argument that we have used above for $D_{{\varepsilon}}$. \end{proof} We are going to study the precise asymptotics of the exit time and location from the box ${\mathfrak{B}}$.
We do this in two steps: (1) in a small ${\varepsilon}$-dependent neighborhood \[ {\mathfrak{B}}_{{\varepsilon}}=\{\|y\|_{\infty}\leq {\varepsilon}^{\alpha}\} \] of the origin, where $\alpha\in(0,1)$, so that ${\mathfrak{B}}_{{\varepsilon}}$ is still larger than the noise magnitude; (2) between exiting~${\mathfrak{B}}_{{\varepsilon}}$ and the final exit from ${\mathfrak{B}}$. In part (1), $Y_{{\varepsilon}}$ is close to the origin, which allows us to control the error of the linear approximation and to approximate $N_{{\varepsilon}}(t)$, which determines the exit direction, by a Gaussian random vector. In part (2), the deterministic dynamics dominates, and we can control the deviations of $Y_{{\varepsilon}}$ from the corresponding solution of \eqref{eq:deterministic-ODE}. \subsection{Exit from a small neighborhood of the origin} Let ${\tilde\tau_\eps}$ be the exit time of the process $Y_{\varepsilon}$ from ${\mathfrak{B}}_{{\varepsilon}}$. Our assumption on $X_{{\varepsilon}}(0)$ implies \[ \lim_{{\varepsilon}\downarrow 0}\mathrm{P}\{Y_{{\varepsilon}}(0)\in{\mathfrak{B}}_{{\varepsilon}}\}=1. \] \begin{lemma}\label{lem:small-box-exit-large} The exit time ${\tilde\tau_\eps}$ converges to infinity in probability, i.e., for all $T\ge 0$, \[ \lim_{{\varepsilon}\downarrow 0}\mathrm{P}\{{\tilde\tau_\eps}\leq T\}=0.
\] \end{lemma} \begin{proof} Using \eqref{eq:Duhamel}, we can write \begin{multline*} \mathrm{P}\{{\tilde\tau_\eps}\leq T\}\leq \mathrm{P}\left\{C {\varepsilon} e^{\lambda T} T^{d-1}\|\tilde\xi_{{\varepsilon}}\|_{\infty}\geq \frac{{\varepsilon}^\alpha}{4}\right\} + \mathrm{P}\left\{C {\varepsilon} e^{\lambda T} T^{d-1}\sup_{t\leq T\wedge\tau_{{\mathfrak{B}}}^{{\varepsilon}}}\|N_{{\varepsilon}}(t)\|_{\infty}\geq\frac{{\varepsilon}^{\alpha}}{4}\right\}\\ +\mathrm{P}\left\{C {\varepsilon}^2e^{\lambda T} T^{d-1}\sup_{t\leq T\wedge\tau_{{\mathfrak{B}}}^{{\varepsilon}}}\|D_{{\varepsilon}}(t)\|_{\infty}\geq\frac{{\varepsilon}^{\alpha}}{4}\right\}, \end{multline*} where we used $\sup_{t\in[0,T]}\|e^{At}\|_{\infty}\leq C e^{\lambda T} T^{d-1}$. The first term converges to zero by the tightness of $\tilde\xi_{{\varepsilon}}$, while the second and third terms do so by Lemma \ref{lem:a-priori-N-D}. \end{proof} \begin{lemma}\label{lem:N-limit-small-box} As ${\varepsilon}\downarrow 0$, $N_{{\varepsilon}}({\tilde\tau_\eps})$ converges in probability to a centered Gaussian vector $N$, independent of $\tilde\xi_0$, with the covariance matrix described in \eqref{eq:limit-cov-small-box}. \end{lemma} \begin{proof} Let us consider the Gaussian martingale \[ M(t)=\int_0^te^{-As}\sigma(0)dW(s), \] with quadratic variation matrix \[ \langle M\rangle_t=\int_0^te^{-As}a(0)e^{-A^Ts}ds. \] This matrix is uniformly bounded in $t$, and therefore the martingale convergence theorem implies the existence of the almost sure, componentwise limit \[ N=\int_0^\infty e^{-As}\sigma(0)dW(s)=\lim_{t\to\infty}M(t), \] a centered Gaussian vector with covariance matrix that can be computed using \eqref{eq:multipl-with-exp-coordinatewise}: \[ \mathrm{E}N^{(i)}N^{(j)}=\sum_{p=0}^{d-i}\sum_{q=0}^{d-j}\frac{(-1)^{p+q}}{p!q!}a_{p+i,q+j}(0)\int_0^\infty e^{-2\lambda s}s^{p+q}ds. \] Using this and $\int_0^{\infty}x^n e^{-ax}dx=n!/a^{n+1}$, we derive~\eqref{eq:limit-cov-small-box}.
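As a quick sanity check of \eqref{eq:limit-cov-small-box} (not needed in the sequel), in the case $d=2$ these formulas give
\[
\mathrm{E}\left(N^{(2)}\right)^2=\frac{a_{22}(0)}{2\lambda},\qquad
\mathrm{E}N^{(1)}N^{(2)}=\frac{a_{12}(0)}{2\lambda}-\frac{a_{22}(0)}{4\lambda^2},\qquad
\mathrm{E}\left(N^{(1)}\right)^2=\frac{a_{11}(0)}{2\lambda}-\frac{a_{12}(0)+a_{21}(0)}{4\lambda^2}+\frac{a_{22}(0)}{4\lambda^3},
\]
which is also what one obtains by integrating $e^{-As}a(0)e^{-A^Ts}$ directly with $e^{-As}=e^{-\lambda s}\begin{bmatrix}1 & -s\\ 0 & 1\end{bmatrix}$.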
A straightforward calculation based on the BDG inequality, the Lipschitzness of $\tilde{\sigma}(\cdot)$, the identity $Df(0)=I$, and the definition of ${\tilde\tau_\eps}$ shows that \[ \sup_{t\leq {\tilde\tau_\eps}}\|N_{{\varepsilon}}(t)-M(t)\|_{\infty}\stackrel{\mathrm{P}}{\to} 0. \] This, together with Lemma \ref{lem:small-box-exit-large}, finishes the proof. \end{proof} Let us now introduce the exit time in the individual directions \[ \tau_{i}^{{\varepsilon}}=\inf\{t>0: |Y_{{\varepsilon}}^{(i)}(t)|={\varepsilon}^{\alpha}\}, \] where we recall that $Y_{{\varepsilon}}^{(i)}$ is the $i$th component of $Y_{{\varepsilon}}$. Clearly, ${\tilde\tau_\eps}=\min_i \tau_{i}^{{\varepsilon}}$. \begin{lemma}\label{lem:typical-exit-side} The exit happens in the direction of $e_1$ with overwhelming probability, i.e., \[ \lim_{{\varepsilon}\downarrow 0}\mathrm{P}\left\{{\tilde\tau_\eps}=\tau_{1}^{{\varepsilon}}\right\}=1. \] \end{lemma} \begin{proof} Observe that \eqref{eq:Duhamel}, \eqref{eq:multipl-with-exp-coordinatewise}, and \eqref{eq:sum-form-N} combined with Lemmas \ref{lem:small-box-exit-large}--\ref{lem:N-limit-small-box} imply the first order approximation \begin{equation}\label{eq:Y-i-formulas-small-box} Y_{{\varepsilon}}^{(i)}({\tilde\tau_\eps})={\varepsilon} e^{\lambda {\tilde\tau_\eps}}\frac{{\tilde\tau_\eps}^{d-i}}{(d-i)!}\left[\tilde\xi_{{\varepsilon}}^{(d)}+N_{{\varepsilon}}^{(d)}({\tilde\tau_\eps})\right]\left(1+o_{\mathrm{P}}(1)\right),\quad i=1,\dots,d, \end{equation} where we write $A_{\varepsilon}=o_{\mathrm{P}}(B_{\varepsilon})$ if $A_{\varepsilon}/B_{\varepsilon} \stackrel{\mathrm{P}}{\to} 0$ as ${\varepsilon}\downarrow 0$. 
This means, by Lemma \ref{lem:small-box-exit-large}, that \[ \frac{Y_{{\varepsilon}}^{(i)}({\tilde\tau_\eps})}{Y_{{\varepsilon}}^{(1)}({\tilde\tau_\eps})}\stackrel{\mathrm{P}}{\to} 0, \quad {\varepsilon}\downarrow 0, \qquad i=2,\dots,d, \] which proves the claim since \[ \mathrm{P}\left\{{\tilde\tau_\eps}\neq\tau_1^{{\varepsilon}}\right\} \leq\sum_{i=1}^d\mathrm{P}\left\{\left|\frac{Y_{{\varepsilon}}^{(i)}({\tilde\tau_\eps})}{Y_{{\varepsilon}}^{(1)}({\tilde\tau_\eps})}\right|\geq 1\right\}\to 0. \] \end{proof} Let us introduce the abbreviations \begin{equation} \label{eq:introducing_chi} \chi_{{\varepsilon}}(t)=\tilde\xi_{{\varepsilon}}+N_{{\varepsilon}}(t)+{\varepsilon} D_{{\varepsilon}}(t),\qquad \chi_{{\varepsilon}}=\chi_{{\varepsilon}}({\tilde\tau_\eps}),\qquad \chi=\tilde\xi_0+N, \end{equation} and notice that \begin{equation} \label{eq:chi-converges} \chi_{{\varepsilon}}\stackrel{\mathrm{\mathrm{P}}}{\to}\chi \end{equation} due to Lemma \ref{lem:N-limit-small-box} and \eqref{eq:bounded-D}. We can now formulate the main result of the subsection providing several leading terms in an expansion for ${\tilde\tau_\eps}$. To avoid lengthy formulas, we introduce the random variables \[ G_{{\varepsilon}}(\alpha)=\log\left(|\chi_{{\varepsilon}}^{(d)}|\frac{(1-\alpha)^{d-1}}{(d-1)!\lambda^{d-1}}\right),\qquad \eta_{\varepsilon}(\alpha)=-\lambda\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{\varepsilon}^{(d)}}+G_{\varepsilon}(\alpha), \] where $\chi_{\varepsilon}^{(d-1)}=0$ for $d=1$. 
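Before stating the result, note that in the degenerate case $d=1$ (included only for completeness) these quantities are explicit: $G_{\varepsilon}(\alpha)=\log|\chi_{\varepsilon}^{(1)}|$ and $\eta_{\varepsilon}(\alpha)=G_{\varepsilon}(\alpha)$, while \eqref{eq:Duhamel} gives the exact exit relation ${\varepsilon}^{\alpha}={\varepsilon} e^{\lambda{\tilde\tau_\eps}}|\chi_{{\varepsilon}}^{(1)}|$, i.e.,
\[
{\tilde\tau_\eps}=\frac{1-\alpha}{\lambda}\log{\varepsilon}^{-1}-\frac{1}{\lambda}G_{\varepsilon}(\alpha),
\]
so that the expansion \eqref{eq:asymp-exit-time} below holds with $\tilde{K}({\varepsilon})\equiv 0$.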
\begin{proposition}\label{prop:tau-B-eps-asymp} The following representation holds: \begin{equation}\label{eq:asymp-exit-time} {\tilde\tau_\eps}=\frac{1-\alpha}{\lambda}\log{\varepsilon}^{-1}-\frac{d-1}{\lambda}\log\log{\varepsilon}^{-1}-\frac{1}{\lambda}G_{\varepsilon}(\alpha)+\frac{ \tilde{K}({\varepsilon})}{\lambda}, \end{equation} where \begin{equation}\label{eq:e-K-eps} \tilde{K}({\varepsilon})=\frac{(d-1)^2}{1-\alpha}\frac{\log\log{\varepsilon}^{-1}}{\log{\varepsilon}^{-1}}+ \frac{d-1}{(1-\alpha)\log{\varepsilon}^{-1}}\eta_{\varepsilon}(\alpha)+o_{\mathrm{P}}\left(\frac{1}{\log{\varepsilon}^{-1}}\right),\quad {\varepsilon}\downarrow 0. \end{equation} In particular, \begin{equation} \label{eq:K-eps-o1} \tilde{K}({\varepsilon})=o_{\mathrm{P}}(1). \end{equation} \end{proposition} \begin{proof} Using \eqref{eq:Duhamel}, \eqref{eq:multipl-with-exp-coordinatewise}, \eqref{eq:sum-form-N}, Lemmas \ref{lem:small-box-exit-large}--\ref{lem:N-limit-small-box}, and keeping one more term compared to~\eqref{eq:Y-i-formulas-small-box}, we obtain \[ Y_{{\varepsilon}}^{(1)}({\tilde\tau_\eps})={\varepsilon} e^{\lambda{\tilde\tau_\eps}}\frac{{\tilde\tau_\eps}^{d-1}}{(d-1)!}\chi_{{\varepsilon}}^{(d)}\left(1+\frac{(d-1)}{{\tilde\tau_\eps}}\frac{\chi_{{\varepsilon}}^{(d-1)}}{\chi_{{\varepsilon}}^{(d)}}+\mathcal{O}_{\mathrm{P}}\left({\tilde\tau_\eps}^{-2}\right)\right), \] where $A_{\varepsilon}=\mathcal{O}_{\mathrm{P}}(B_{\varepsilon})$ means that the distributions of $A_{\varepsilon}/B_{\varepsilon}$ form a tight family for small ${\varepsilon}$. 
On $\{{\tilde\tau_\eps}=\tau^{{\varepsilon}}_1\}$, which has probability converging to one by Lemma \ref{lem:typical-exit-side}, we have $|Y_{{\varepsilon}}^{(1)}({\tilde\tau_\eps})|={\varepsilon}^{\alpha}$ and consequently ${\tilde\tau_\eps}$ is a solution of the equation \begin{equation}\label{eq:exit-time-eq} {\varepsilon}^{\alpha}={\varepsilon} e^{\lambda{\tilde\tau_\eps}}\frac{{\tilde\tau_\eps}^{d-1}}{(d-1)!}|\chi_{{\varepsilon}}^{(d)}|\left(1+\frac{(d-1)}{{\tilde\tau_\eps}}\frac{\chi_{{\varepsilon}}^{(d-1)}}{\chi_{{\varepsilon}}^{(d)}}+\mathcal{O}_{\mathrm{P}}\left({\tilde\tau_\eps}^{-2}\right)\right). \end{equation} We now define $\tilde K({\varepsilon})$ by \eqref{eq:asymp-exit-time}, or, equivalently, by \begin{equation} \label{eq:exp-of-lambda-tau} e^{\lambda {\tilde\tau_\eps}}=\frac{{\varepsilon}^{-(1-\alpha)}e^{\tilde{K}({\varepsilon})}}{(\log{\varepsilon}^{-1})^{d-1}|\chi_{{\varepsilon}}^{(d)}|}\frac{(d-1)!\lambda^{d-1}}{(1-\alpha)^{d-1}}. \end{equation} Plugging this into \eqref{eq:exit-time-eq}, we obtain \begin{multline} \label{eq:solving-for-K-0} 1=e^{\tilde{K}({\varepsilon})} \left[1-\frac{d-1}{1-\alpha}\frac{\log\log{\varepsilon}^{-1}}{\log{\varepsilon}^{-1}}-\frac{G_{\varepsilon}(\alpha)}{(1-\alpha)\log{\varepsilon}^{-1}}+\frac{\tilde K({\varepsilon})}{(1-\alpha)\log{\varepsilon}^{-1}}\right]^{d-1} \times \\ \times \left[1+\frac{(d-1)}{{\tilde\tau_\eps}}\frac{\chi_{{\varepsilon}}^{(d-1)}}{\chi_{\varepsilon}^{(d)}}+o_{\mathrm{P}}\left(\frac{1}{\log{\varepsilon}^{-1}}\right)\right]. \end{multline} Due to~\eqref{eq:chi-converges}, this implies~\eqref{eq:K-eps-o1}. By~\eqref{eq:K-eps-o1} and~\eqref{eq:asymp-exit-time}, \begin{equation} \label{eq:asymp-for-tilde-tau} {\tilde\tau_\eps}=\frac{1-\alpha}{\lambda}\log{\varepsilon}^{-1}\left(1+o_{\mathrm{P}}(1)\right).
\end{equation} Substituting \eqref{eq:K-eps-o1} and~\eqref{eq:asymp-for-tilde-tau} into \eqref{eq:solving-for-K-0} and using~\eqref{eq:chi-converges}, we get \begin{multline} \label{eq:solving-for-K-1} 1=e^{\tilde{K}({\varepsilon})}\left[1-\frac{d-1}{1-\alpha}\frac{\log\log{\varepsilon}^{-1}}{\log{\varepsilon}^{-1}}-\frac{G_{\varepsilon}(\alpha)}{(1-\alpha)\log{\varepsilon}^{-1}}+o_{\mathrm{P}}\left(\frac{1}{\log{\varepsilon}^{-1}}\right)\right]^{d-1}\times \\ \times \left[1+\frac{(d-1)\lambda}{(1-\alpha)\log{\varepsilon}^{-1}}\frac{\chi_{{\varepsilon}}^{(d-1)}}{\chi_{\varepsilon}^{(d)}}+o_{\mathrm{P}}\left(\frac{1}{\log{\varepsilon}^{-1}}\right)\right], \end{multline} which can be written, using \eqref{eq:elementary-formula}, as \begin{equation} \label{eq:solving-for-K-2} 1=e^{\tilde{K}({\varepsilon})}\left[1-\frac{(d-1)^2}{1-\alpha}\frac{\log\log{\varepsilon}^{-1}}{\log{\varepsilon}^{-1}}-\frac{d-1}{(1-\alpha)\log{\varepsilon}^{-1}}\eta_{\varepsilon}(\alpha)+o_{\mathrm{P}}\left(\frac{1}{\log{\varepsilon}^{-1}}\right)\right]. \end{equation} Combining this with the other formula in \eqref{eq:elementary-formula} implies \eqref{eq:e-K-eps} thus completing the proof. \end{proof} Finally, we describe the asymptotic exit location from ${\mathfrak{B}}_{{\varepsilon}}$ in terms of ${\tilde\tau_\eps}$. Of course, we could use the asymptotics obtained in Proposition \ref{prop:tau-B-eps-asymp}. However, we are ultimately interested in the exit distribution from ${\mathfrak{B}}$ and our choice turns out to be convenient when we combine the following result with the dynamics between leaving ${\mathfrak{B}}_{\varepsilon}$ and leaving ${\mathfrak{B}}$ in the next subsection. 
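Before turning to the exit location, the deterministic content of Proposition \ref{prop:tau-B-eps-asymp} admits a quick numerical sanity check (ours, purely illustrative and not part of the proof): dropping the stochastic $\mathcal{O}_{\mathrm{P}}$ corrections and freezing $|\chi_{{\varepsilon}}^{(d)}|$ at a constant $c$, the exit-time equation \eqref{eq:exit-time-eq} reduces to ${\varepsilon}^{\alpha}={\varepsilon} e^{\lambda\tau}\tau^{d-1}c/(d-1)!$, whose root can be compared with the three leading terms of \eqref{eq:asymp-exit-time}. The following Python sketch (all function names are ours) solves the reduced equation by bisection in log-scale and evaluates the truncated expansion.

```python
import math

def exit_time_numeric(eps, alpha, lam, d, c):
    """Root of eps^alpha = eps * exp(lam*tau) * tau^(d-1) * c / (d-1)!,
    found by bisection in log-scale (the left-hand side is increasing in tau)."""
    def f(tau):
        return (math.log(eps) + lam * tau + (d - 1) * math.log(tau)
                + math.log(c) - math.lgamma(d)   # lgamma(d) = log((d-1)!)
                - alpha * math.log(eps))
    lo, hi = 1e-9, 500.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def exit_time_expansion(eps, alpha, lam, d, c):
    """Three leading terms of the proposition (the K-term is dropped)."""
    L = math.log(1.0 / eps)
    G = math.log(c * (1.0 - alpha) ** (d - 1)
                 / (math.factorial(d - 1) * lam ** (d - 1)))
    return ((1.0 - alpha) * L - (d - 1) * math.log(L) - G) / lam
```

For ${\varepsilon}=10^{-12}$, $\alpha=1/2$, $\lambda=1$, $d=2$, $c=1$, the two values agree to within a few percent, the residual being of order $\log\log{\varepsilon}^{-1}/\log{\varepsilon}^{-1}$, in line with \eqref{eq:e-K-eps}.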
\begin{proposition}\label{prop:Y-eps-asymp} For $i=1,\dots, d$, \[ Y_{{\varepsilon}}^{(i)}({\tilde\tau_\eps})=\frac{{\varepsilon}^{\alpha}\mathop{\mathrm{sign}}(\chi_{\varepsilon}^{(d)})}{{\tilde\tau_\eps}^{i-1}}\frac{(d-1)!}{(d-i)!}\left[1-\frac{i-1}{{\tilde\tau_\eps}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{\varepsilon}^{(d)}}+\mathcal{O}_{\mathrm{P}}\left({\tilde\tau_\eps}^{-2}\right)\right]. \] \end{proposition} \begin{proof} The claim is trivial for $i=1$. For $i\ge 2$, we first observe that \eqref{eq:Duhamel} and \eqref{eq:multipl-with-exp-coordinatewise} imply \begin{equation}\label{eq:Y-formula-with-dotdot} Y_{{\varepsilon}}^{(i)}(t)={\varepsilon} e^{\lambda t}\left[\frac{t^{d-i}}{(d-i)!}\chi_{{\varepsilon}}^{(d)}(t)+\frac{t^{d-i-1}}{(d-i-1)!}\chi_{{\varepsilon}}^{(d-1)}(t)+\dots\right]. \end{equation} Plugging in $t={\tilde\tau_\eps}$ and using \eqref{eq:exit-time-eq} to write \[ e^{\lambda {\tilde\tau_\eps}}=\frac{{\varepsilon}^{\alpha}(d-1)!}{{\varepsilon}|\chi_{\varepsilon}^{(d)}|\, {\tilde\tau_\eps}^{d-1}\, \left|1+\frac{d-1}{{\tilde\tau_\eps}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{{\varepsilon}}^{(d)}}+\mathcal{O}_{\mathrm{P}}\left({\tilde\tau_\eps}^{-2}\right)\right|}, \] we obtain \[ Y_{{\varepsilon}}^{(i)}({\tilde\tau_\eps})=\frac{{\varepsilon}^{\alpha}\mathop{\mathrm{sign}}(\chi_{\varepsilon}^{(d)})}{{\tilde\tau_\eps}^{i-1}}\frac{(d-1)!}{(d-i)!}\cdot\frac{1+\frac{d-i}{{\tilde\tau_\eps}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{\varepsilon}^{(d)}}+\mathcal{O}_{\mathrm{P}}\left({\tilde\tau_\eps}^{-2}\right)}{1+\frac{d-1}{{\tilde\tau_\eps}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{\varepsilon}^{(d)}}+\mathcal{O}_{\mathrm{P}}\left({\tilde\tau_\eps}^{-2}\right)},\quad i=2,\dots,d-1. \] Using \eqref{eq:elementary-formula} and collecting similar terms yields the desired formula. 
For $i=d$, there is only one term in \eqref{eq:Y-formula-with-dotdot}, so \[ Y_{\varepsilon}^{(d)}({\tilde\tau_\eps})={\varepsilon} e^{\lambda{\tilde\tau_\eps}}\chi_{{\varepsilon}}^{(d)}=\frac{{\varepsilon}^{\alpha}\mathop{\mathrm{sign}}(\chi_{\varepsilon}^{(d)})(d-1)!}{{\tilde\tau_\eps} ^{d-1}}\frac{1}{1+\frac{d-1}{ {\tilde\tau_\eps}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{\varepsilon}^{(d)}}+\mathcal{O}_{\mathrm{P}}\left({\tilde\tau_\eps}^{-2}\right)}, \] and the proof finishes by \eqref{eq:elementary-formula} in this case as well. \end{proof} \subsection{The exit from ${\mathfrak{B}}$}\label{sec:between-B-eps-and-B} We now study the process $Y_{{\varepsilon}}$ after the exit from ${\mathfrak{B}}_{{\varepsilon}}$. Namely, we consider $\bar{Y}_{\varepsilon}(t)=Y_{\varepsilon}(t+{\tilde\tau_\eps})$, which solves the SDE \eqref{eq:linear-SDE} with initial condition given by Proposition \ref{prop:Y-eps-asymp} \begin{equation}\label{eq:init-cond-bar} \bar{Y}^{(i)}_{{\varepsilon}}(0)=\frac{{\varepsilon}^{\alpha}\mathop{\mathrm{sign}}(\chi_{\varepsilon}^{(d)})}{{\tilde\tau_\eps}^{i-1}}\frac{(d-1)!}{(d-i)!}\left[1-\frac{i-1}{{\tilde\tau_\eps}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{\varepsilon}^{(d)}}+\mathcal{O}_{\mathrm{P}}\left({\tilde\tau_\eps}^{-2}\right)\right], \qquad i=2,\dots,d, \end{equation} while $\bar{Y}_{\varepsilon}^{(1)}(0)={\varepsilon}^{\alpha}\mathop{\mathrm{sign}}(\chi_{\varepsilon}^{(d)})$. Our goal is to describe the limiting distribution of the exit time $\bar{\tau}^{{\varepsilon}}$ and the exit location $\bar{Y}_{\varepsilon}(\bar{\tau}^{{\varepsilon}})$ from ${\mathfrak{B}}$. To this end, let us first prove a result essentially saying that the deterministic dynamics completely dominates the process in this regime. 
\begin{lemma}\label{lem:Duhamel-big-box} For $t\leq\bar{\tau}^{{\varepsilon}}$, we have \begin{equation} \label{eq:Y-bar-main-term-and-error} \bar{Y}_{{\varepsilon}}(t)=e^{At}\left[\bar{Y}_{\varepsilon}(0)+g_{\varepsilon}(t)\right], \end{equation} where $g_{\varepsilon}$ is a continuous process such that $\sup_{t\leq\bar{\tau}^{\varepsilon}}|g_{\varepsilon}(t)|=\mathcal{O}_{\mathrm{P}}({\varepsilon})$. \end{lemma} \begin{proof} Duhamel's formula implies \begin{equation} \label{eq:Duhamel-for-bar-process} \bar{Y}_{{\varepsilon}}(t)=e^{At}\left[\bar{Y}_{{\varepsilon}}(0)+{\varepsilon}\bar{N}_{{\varepsilon}}(t)+{\varepsilon}^2 \bar{D}_{{\varepsilon}}(t)\right], \quad t\le \bar{\tau}^{{\varepsilon}}, \end{equation} where \[ \bar{N}_{{\varepsilon}}(t)=\int_0^t e^{-As}\tilde{\sigma}(\bar{Y}_{{\varepsilon}}(s))dW(s),\qquad \bar{D}_{{\varepsilon}}(t)=\frac{1}{2}\int_0^t e^{-As}a(\bar{Y}_{{\varepsilon}}(s))ds. \] Repeating the proof of Lemma \ref{lem:a-priori-N-D}, one can show that \begin{equation} \sup_{{\varepsilon}>0}\mathrm{P}\left\{\sup_{t\leq\bar{\tau}^{\varepsilon}}\|\bar{N}_{{\varepsilon}}(t)\|_{\infty}>z\right\}\leq\frac{c}{z^2},\qquad\sup_{{\varepsilon}>0,\,t\leq\bar{\tau}^{\varepsilon}}\|\bar{D}_{{\varepsilon}}(t)\|_{\infty}\leq D_0, \end{equation} implying that $g_{\varepsilon}(t)={\varepsilon}\bar{N}_{{\varepsilon}}(t)+{\varepsilon}^2\bar{D}_{{\varepsilon}}(t)$ satisfies $\sup_{t\leq\bar{\tau}^{{\varepsilon}}}|g_{\varepsilon}(t)|=\mathcal{O}_{\mathrm{P}}({\varepsilon})$. \end{proof} By~\eqref{eq:init-cond-bar} and Proposition~\ref{prop:tau-B-eps-asymp}, we see that the error term $g_{{\varepsilon}}(t)$ in \eqref{eq:Y-bar-main-term-and-error} does not play any role in the scaling limit of $\bar Y_{\varepsilon}$.
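The computations here and below rest on the coordinatewise action of $e^{At}$ for $A=\lambda I+N$ with $N$ the nilpotent upper shift, which is the structure behind \eqref{eq:multipl-with-exp-coordinatewise}: $(e^{At}x)_i=e^{\lambda t}\sum_{j=0}^{d-i}\frac{t^j}{j!}x_{i+j}$. This closed form can be checked against a truncated power series for $e^{At}$; the small self-contained Python sketch below is our illustration (function names are ours).

```python
import math

def jordan_block(lam, d):
    """A = lam*I + N, with N the upper shift (ones on the superdiagonal)."""
    return [[lam if i == j else (1.0 if j == i + 1 else 0.0)
             for j in range(d)] for i in range(d)]

def expm_apply_series(A, t, x, terms=80):
    """e^{tA} x via the truncated power series sum_k (tA)^k x / k!."""
    d = len(x)
    y, p = x[:], x[:]          # p holds (tA)^k x / k!
    for k in range(1, terms):
        p = [t / k * sum(A[i][m] * p[m] for m in range(d)) for i in range(d)]
        y = [y[i] + p[i] for i in range(d)]
    return y

def expm_apply_closed(lam, t, x):
    """Coordinatewise closed form: (e^{At}x)_i = e^{lam*t} sum_j t^j/j! x_{i+j},
    using that N is nilpotent, so the sum over j is finite."""
    d = len(x)
    return [math.exp(lam * t) * sum(t ** j / math.factorial(j) * x[i + j]
                                    for j in range(d - i))
            for i in range(d)]
```

Both routes agree to machine precision for small $d$ and moderate $t$, since the nilpotency of $N$ makes the closed form exact.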
\begin{lemma}\label{lem:Y-bar-i-asymp-t} For $t\leq\bar \tau^{\varepsilon}$ and $i=1,\dots,d$, \begin{multline*} \bar{Y}_{\varepsilon}^{(i)}(t)=\frac{{\varepsilon}^{\alpha}e^{\lambda t}\mathop{\mathrm{sign}}(\chi_{\varepsilon}^{(d)})}{{\tilde\tau_\eps}^{i-1}}\left(1+\frac{t}{{\tilde\tau_\eps}}\right)^{d-i}\frac{(d-1)!}{(d-i)!}\times\\ \times \left[1-\frac{1}{{\tilde\tau_\eps}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{{\varepsilon}}^{(d)}}\frac{(i-1){\tilde\tau_\eps}+(d-1)t}{{\tilde\tau_\eps}+t}+\mathcal{O}_{\mathrm{P}}\left({\tilde\tau_\eps}^{-2}\right)\right]. \end{multline*} \end{lemma} \begin{proof} By \eqref{eq:multipl-with-exp-coordinatewise}, the $i$th coordinate of $e^{At}\bar{Y}(0)$ can be written as \begin{align}\label{eq:Y-bar-i-expand} e^{\lambda t}&\left[\sum_{j=0}^{d-i}\frac{t^j}{j!}\bar{Y}_{\varepsilon}^{(i+j)}(0)\right]\\ &\notag=\frac{{\varepsilon}^{\alpha}e^{\lambda t}\mathop{\mathrm{sign}}(\chi_{\varepsilon}^{(d)})}{{\tilde\tau_\eps}^{i-1}}\frac{(d-1)!}{(d-i)!}\left[\sum_{j=0}^{d-i}\left(\frac{t}{{\tilde\tau_\eps}}\right)^j\frac{(d-i)!}{j!(d-i-j)!}\left(1-\frac{i+j-1}{{\tilde\tau_\eps}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{{\varepsilon}}^{(d)}}\right)+\mathcal{O}_{\mathrm{P}}\left({\tilde\tau_\eps}^{-2}\right)\right]. 
\end{align} After a little manipulation, the sum in the bracket can be written as \begin{multline*} \left(1-\frac{(i-1)}{{\tilde\tau_\eps}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{{\varepsilon}}^{(d)}}\right)\sum_{j=0}^{d-i}\binom{d-i}{j}\left(\frac{t}{{\tilde\tau_\eps}}\right)^j-\frac{(d-i)}{{\tilde\tau_\eps}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{{\varepsilon}}^{(d)}}\frac{t}{{\tilde\tau_\eps}}\sum_{j=0}^{d-i-1}\binom{d-i-1}{j}\left(\frac{t}{{\tilde\tau_\eps}}\right)^j \\ = \left(1-\frac{(i-1)}{{\tilde\tau_\eps}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{{\varepsilon}}^{(d)}}\right)\left(1+\frac{t}{{\tilde\tau_\eps}}\right)^{d-i}-\frac{(d-i)}{{\tilde\tau_\eps}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{{\varepsilon}}^{(d)}}\frac{t}{{\tilde\tau_\eps}}\left(1+\frac{t}{{\tilde\tau_\eps}}\right)^{d-i-1} \\ = \left(1+\frac{t}{{\tilde\tau_\eps}}\right)^{d-i}\left[1-\frac{1}{{\tilde\tau_\eps}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{{\varepsilon}}^{(d)}}\frac{(i-1){\tilde\tau_\eps}+(d-1)t}{{\tilde\tau_\eps}+t}\right]. \end{multline*} The result now follows by plugging this back into \eqref{eq:Y-bar-i-expand} and using Lemma \ref{lem:Duhamel-big-box}. \end{proof} The next lemma shows that with probability close to one, the exit from ${\mathfrak{B}}$ happens close to $\pm R e_1$ and that up until this exit $Y_{\varepsilon}$ follows closely the corresponding deterministic orbit contained in the subspace generated by $e_1$. We introduce \[ \bar{\tau}_{i}^{{\varepsilon}}=\inf\{t>0: |\bar{Y}_{{\varepsilon}}^{(i)}(t)|=R\} \] and let $\bar{\tau}^{{\varepsilon}}=\min\{\bar{\tau}_i^{{\varepsilon}}\}$. 
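The ``little manipulation'' in the proof of Lemma \ref{lem:Y-bar-i-asymp-t} is a pure binomial identity: with $x=t/{\tilde\tau_\eps}$ and $c=\chi_{\varepsilon}^{(d-1)}/\chi_{\varepsilon}^{(d)}$ treated as fixed numbers, the split sum equals the closed form in the last display. The following Python sketch (ours; the parameter values are placeholders) confirms the identity numerically for every $i$.

```python
import math

def bracket_sum(i, d, tau, t, c):
    """The bracketed sum, split as in the first line of the display."""
    x = t / tau
    s1 = sum(math.comb(d - i, j) * x ** j for j in range(d - i + 1))
    s2 = sum(math.comb(d - i - 1, j) * x ** j for j in range(d - i))  # empty for i = d
    return (1 - (i - 1) * c / tau) * s1 - (d - i) * (c / tau) * x * s2

def bracket_closed(i, d, tau, t, c):
    """Closed form: (1+t/tau)^{d-i} [1 - (c/tau)((i-1)tau + (d-1)t)/(tau+t)]."""
    return ((1 + t / tau) ** (d - i)
            * (1 - (c / tau) * ((i - 1) * tau + (d - 1) * t) / (tau + t)))
```

The two expressions coincide up to floating-point round-off for all $1\le i\le d$.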
\begin{proposition}\label{prop:tau-bar-one-asymp} As ${\varepsilon}\downarrow 0$, we have \begin{align} &\mathrm{P}\{\bar{\tau}^{\varepsilon}\neq \bar{\tau}_1^{{\varepsilon}}\}\to 0, \label{eq:exit-in-the-right-direction} \\ \bar{\tau}^{{\varepsilon}}=\frac{\alpha}{\lambda}\log{\varepsilon}^{-1}&+\frac{d-1}{\lambda}\log(1-\alpha)+\frac{1}{\lambda}\log R+\frac{\bar{K}({\varepsilon})}{\lambda}, \label{eq:asympt_of_bar_tau} \end{align} where \begin{multline}\label{eq:final-expr-for-Kbar} \bar{K}({\varepsilon})=-\frac{\alpha(d-1)^2}{1-\alpha}\frac{\log\log{\varepsilon}^{-1}}{\log{\varepsilon}^{-1}}-\frac{(d-1)\log R}{\log{\varepsilon}^{-1}}-\frac{\alpha}{1-\alpha}\frac{(d-1)\log\frac{|\chi_{\varepsilon}^{(d)}|}{(d-1)!\lambda^{d-1}}}{\log{\varepsilon}^{-1}} \\ - \frac{(d-1)^2}{1-\alpha}\frac{\log(1-\alpha)}{\log{\varepsilon}^{-1}}+\frac{\alpha(d-1)\lambda}{(1-\alpha)\log{\varepsilon}^{-1}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{\varepsilon}^{(d)}} +o_{\mathrm{P}}\left(\frac{1}{\log{\varepsilon}^{-1}}\right). \end{multline} In particular, \begin{equation} \label{eq:barK-o1} \bar{K}({\varepsilon})=o_{\mathrm{P}}(1). \end{equation} Moreover, \begin{equation} \label{eq:following-deterministic-traj} \sup_{t\leq\bar{\tau}^{\varepsilon}}\left|\bar{Y}_{{\varepsilon}}(t)-{\varepsilon}^{\alpha}\mathop{\mathrm{sign}}(\chi_{\varepsilon}^{(d)})e^{\lambda t}\left(1+\frac{\lambda t}{(1-\alpha)\log{\varepsilon}^{-1}}\right)^{d-1}e_1\right|\stackrel{\mathrm{P}}{\to} 0,\quad {\varepsilon}\downarrow 0. \end{equation} \end{proposition} \begin{proof} It immediately follows from Lemma \ref{lem:Y-bar-i-asymp-t} that \[ \sup_{t\leq\bar{\tau}^{{\varepsilon}}}\left|\frac{\bar{Y}_{\varepsilon}^{(i)}(t)}{\bar{Y}_{\varepsilon}^{(1)}(t)}\right|\stackrel{\mathrm{P}}{\to} 0,\qquad i=2,\dots,d,\quad {\varepsilon}\downarrow 0, \] which proves~\eqref{eq:exit-in-the-right-direction}.
By the definition of $\bar{\tau}^{{\varepsilon}}$, we have $|\bar{Y}^{(1)}_{\varepsilon}(\bar{\tau}^{{\varepsilon}})|=R$ with probability close to one and thus we can use Lemma \ref{lem:Y-bar-i-asymp-t} for $i=1$ to obtain \begin{equation}\label{eq:tau-bar-eq} R={\varepsilon}^{\alpha}e^{\lambda\bar{\tau}^{\varepsilon}}\left(\frac{{\tilde\tau_\eps}+\bar{\tau}^{{\varepsilon}}}{{\tilde\tau_\eps}}\right)^{d-1} \left|1-\frac{d-1}{{\tilde\tau_\eps}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{{\varepsilon}}^{(d)}}\frac{\bar{\tau}^{{\varepsilon}}}{{\tilde\tau_\eps}+\bar{\tau}^{{\varepsilon}}}+\mathcal{O}_{\mathrm{P}}\left({\tilde\tau_\eps}^{-2}\right)\right|. \end{equation} We define $\bar K({\varepsilon})$ by \eqref{eq:asympt_of_bar_tau}, which is equivalent to \begin{equation} \label{eq:seek-solution-for-bar-tau} \lambda\bar{\tau}^{{\varepsilon}}=\log\frac{R}{{\varepsilon}^{\alpha}}+(d-1)\log(1-\alpha)+\bar{K}({\varepsilon})\quad\textrm{\ or}\quad e^{\lambda\bar{\tau}^{{\varepsilon}}}={\varepsilon}^{-\alpha}R(1-\alpha)^{d-1}e^{\bar{K}({\varepsilon})}. \end{equation} Using this along with \eqref{eq:asymp-exit-time} and \eqref{eq:K-eps-o1}, we obtain \begin{equation} \lambda({\tilde\tau_\eps}+\bar{\tau}^{\varepsilon})=\log{\varepsilon}^{-1}-(d-1)\log\log{\varepsilon}^{-1}-\log\frac{|\chi_{\varepsilon}^{(d)}|}{R(d-1)!\lambda^{d-1}}+o_{\mathrm{P}}(1)+\bar K({\varepsilon}). \label{eq:sum-of-taus} \end{equation} Plugging~\eqref{eq:seek-solution-for-bar-tau} and~\eqref{eq:sum-of-taus} into \eqref{eq:tau-bar-eq}, using $\chi_{{\varepsilon}}\stackrel{\mathrm{P}}{\to}\chi$, \eqref{eq:asymp-exit-time}, and \eqref{eq:K-eps-o1}, we obtain \begin{equation*} 1=e^{\bar{K}({\varepsilon})}\left(1+o_\mathrm{P}(1)+\frac{\bar K({\varepsilon})}{\log{\varepsilon}^{-1}}(1+o_{\mathrm{P}}(1))\right). \end{equation*} Therefore,~\eqref{eq:barK-o1} holds.
Using it in~\eqref{eq:sum-of-taus} gives \begin{equation} \label{eq:total-exit-time-representation} \lambda({\tilde\tau_\eps}+\bar{\tau}^{\varepsilon})=\log{\varepsilon}^{-1}-(d-1)\log\log{\varepsilon}^{-1}-\log\frac{|\chi_{\varepsilon}^{(d)}|}{R(d-1)!\lambda^{d-1}}+o_{\mathrm{P}}(1). \end{equation} Now a straightforward calculation based on \eqref{eq:asymp-exit-time}, \eqref{eq:tau-bar-eq}, \eqref{eq:seek-solution-for-bar-tau}, \eqref{eq:total-exit-time-representation}, and \eqref{eq:elementary-formula} reveals \begin{multline*} 1=e^{\bar{K}({\varepsilon})}\Bigg[1+\frac{\alpha}{1-\alpha}\frac{(d-1)^2\log\log{\varepsilon}^{-1}}{\log{\varepsilon}^{-1}}+(d-1)\frac{\log R}{\log{\varepsilon}^{-1}} +\frac{\alpha}{1-\alpha}\frac{(d-1)\log\left[\frac{|\chi_{\varepsilon}^{(d)}|}{(d-1)!\lambda^{d-1}}\right]}{\log{\varepsilon}^{-1}}\\ +\frac{(d-1)^2}{1-\alpha}\frac{\log(1-\alpha)}{\log{\varepsilon}^{-1}}-\frac{\alpha}{1-\alpha}\frac{(d-1)\lambda}{\log{\varepsilon}^{-1}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{\varepsilon}^{(d)}}+o_{\mathrm{P}}\left(\frac{1}{\log{\varepsilon}^{-1}}\right)\Bigg]. \end{multline*} Another application of \eqref{eq:elementary-formula} finishes the proof of~\eqref{eq:final-expr-for-Kbar}. To prove~\eqref{eq:following-deterministic-traj}, we note that Lemma \ref{lem:Y-bar-i-asymp-t} and the result on $\bar{\tau}^{{\varepsilon}}$ imply \[ \bar{Y}_{{\varepsilon}}^{(1)}(t)={\varepsilon}^{\alpha}e^{\lambda t}\mathop{\mathrm{sign}}(\chi_{\varepsilon}^{(d)}) \left(1+\frac{t}{{\tilde\tau_\eps}}\right)^{d-1}\left(1+o_{\mathrm{P}}(1)\right),\qquad\sup_{t\leq\bar{\tau}^{{\varepsilon}}}|\bar{Y}_{\varepsilon}^{(i)}(t)|=o_{\mathrm{P}}(1),\qquad i=2,\dots,d. \] Combining this with \eqref{eq:asymp-for-tilde-tau}, we complete the proof. \end{proof} We are ready to finish the proof of the result on the linear system exiting from the box ${\mathfrak{B}}$. \smallskip \noindent\textit{Proof of Theorem \ref{thm:linear-result}}.
Clearly, $\tau_{{\mathfrak{B}}}^{{\varepsilon}}={\tilde\tau_\eps}+\bar{\tau}^{{\varepsilon}}$ and as a simple consequence of Proposition \ref{prop:tau-B-eps-asymp} and Proposition \ref{prop:tau-bar-one-asymp}, we have \begin{equation}\label{eq:tau-B-e-full-asymp} \tau_{{\mathfrak{B}}}^{\varepsilon}=\frac{1}{\lambda}\log{\varepsilon}^{-1}-\frac{d-1}{\lambda}\log\log{\varepsilon}^{-1}-\frac{1}{\lambda}\log\frac{|\chi_{\varepsilon}^{(d)}|}{R(d-1)!\lambda^{d-1}}+\frac{K({\varepsilon})}{\lambda}, \end{equation} where \[ K({\varepsilon})=\frac{(d-1)^2\log\log{\varepsilon}^{-1}}{\log{\varepsilon}^{-1}}+\frac{d-1}{\log{\varepsilon}^{-1}}\eta_{{\varepsilon}}+o_{\mathrm{P}}\left(\frac{1}{\log{\varepsilon}^{-1}}\right),\quad{\ } \eta_{{\varepsilon}}=-\lambda\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{{\varepsilon}}^{(d)}}+\log\left[\frac{|\chi_{\varepsilon}^{(d)}|}{R(d-1)!\lambda^{d-1}}\right]. \] Combining \eqref{eq:tau-bar-eq} with Lemma \ref{lem:Y-bar-i-asymp-t}, we obtain \begin{align*} Y_{\varepsilon}^{(i)}(\tau_{\mathfrak{B}}^{\varepsilon})&=\frac{R\mathop{\mathrm{sign}}(\chi_{\varepsilon}^{(d)})}{{\tilde\tau_\eps}^{i-1}}\left(1+\frac{\bar{\tau}^{\varepsilon}}{{\tilde\tau_\eps}}\right)^{-(i-1)}\frac{(d-1)!}{(d-i)!}\frac{1-\frac{1}{{\tilde\tau_\eps}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{{\varepsilon}}^{(d)}}\frac{(i-1){\tilde\tau_\eps}+(d-1)\bar{\tau}^{\varepsilon}}{{\tilde\tau_\eps}+\bar{\tau}^{\varepsilon}}+\mathcal{O}_{\mathrm{P}}\left({\tilde\tau_\eps}^{-2}\right)}{1-\frac{1}{{\tilde\tau_\eps}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{{\varepsilon}}^{(d)}}\frac{(d-1)\bar{\tau}^{{\varepsilon}}}{{\tilde\tau_\eps}+\bar{\tau}^{{\varepsilon}}}+\mathcal{O}_{\mathrm{P}}\left({\tilde\tau_\eps}^{-2}\right)} \\ &= 
\frac{R\mathop{\mathrm{sign}}(\chi_{\varepsilon}^{(d)})}{\left[\tau_{\mathfrak{B}}^{\varepsilon}\right]^{i-1}}\frac{(d-1)!}{(d-i)!}\left[1-\frac{i-1}{\tau_{\mathfrak{B}}^{\varepsilon}}\frac{\chi_{\varepsilon}^{(d-1)}}{\chi_{{\varepsilon}}^{(d)}}+\mathcal{O}_{\mathrm{P}}\left({\tilde\tau_\eps}^{-2}\right)\right], \end{align*} where we used \eqref{eq:elementary-formula} in the second equality. A straightforward calculation using the asymptotics of $\tau_{{\mathfrak{B}}}^{\varepsilon}$ and \eqref{eq:elementary-formula} implies \begin{equation}\label{eq:precise-formula-in-all-directions} Y_{\varepsilon}^{(i)}(\tau_{\mathfrak{B}}^{\varepsilon})=\frac{\lambda^{i-1}R\mathop{\mathrm{sign}}(\chi_{\varepsilon}^{(d)})}{\log^{i-1}{\varepsilon}^{-1}}\frac{(d-1)!}{(d-i)!}\left[1+\frac{i-1}{\log{\varepsilon}^{-1}}\left((d-1)\log\log{\varepsilon}^{-1}+\eta_{\varepsilon}\right)+o_{\mathrm{P}}\left(\frac{1}{\log{\varepsilon}^{-1}}\right)\right]. \end{equation} Using this identity for $i=1,2,3$, we obtain \begin{multline} \label{eq:asymptotics-of-exit-from-small-in-proof} Y_{\varepsilon}(\tau_{\mathfrak{B}}^{\varepsilon})=R\mathop{\mathrm{sign}}(\chi_{\varepsilon}^{(d)})\Biggl[e_1+ \lambda(d-1)\left(\frac{1}{\log {\varepsilon}^{-1}} +\frac{(d-1) \log\log{\varepsilon}^{-1}}{\log^2{\varepsilon}^{-1}} -\frac{\eta_{\varepsilon}}{\log^2{\varepsilon}^{-1}}\right)e_2 \\+\frac{\lambda^2(d-1)(d-2)}{\log^2{\varepsilon}^{-1}}e_3+o_\mathrm{P}\left(\frac{1}{\log^2{\varepsilon}^{-1}}\right)\Bigg]. \end{multline} Due to~\eqref{eq:chi-converges} and $\mathrm{P}\{\chi^{(d)}=0\}=0$, we can conclude that $\mathrm{P}\{\mathop{\mathrm{sign}}(\chi_{\varepsilon}^{(d)})=\mathop{\mathrm{sign}}(\chi^{(d)})\}\to 1$ as ${\varepsilon}\to 0$. 
Using this, we see that on $A^{\pm}=\{\pm\chi^{(d)}>0\}$, the expansions~\eqref{eq:tau-B-e-asymp-in-theorem} and~\eqref{eq:asymptotics-of-exit-from-small} of the theorem follow from \eqref{eq:tau-B-e-full-asymp}, \eqref{eq:asymptotics-of-exit-from-small-in-proof}, and \begin{equation*} \left(\chi_{{\varepsilon}}^{(d-1)},\chi_{\varepsilon}^{(d)},\mathop{\mathrm{sign}}(\chi_{\varepsilon}^{(d)}), \eta_{{\varepsilon}}\right)\stackrel{\mathrm{P}}{\to}\left(\chi^{(d-1)},\chi^{(d)},\mathop{\mathrm{sign}}(\chi^{(d)}), \eta\right),\quad {\varepsilon}\downarrow 0. \end{equation*} Also,~\eqref{eq:motion-is-close-to-axis} follows from Proposition~\ref{prop:tau-bar-one-asymp}. \qed \bibliographystyle{Martin}
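To illustrate Theorem \ref{thm:linear-result} numerically in its simplest setting ($d=1$, $\tilde\sigma\equiv 1$, $a\equiv 0$, so $A=\lambda$, and with our initial condition $Y_{\varepsilon}(0)=0$), one can simulate $dY=\lambda Y\,dt+{\varepsilon}\,dW$ and record the first exit from $[-R,R]$; the theorem then reads $\lambda\tau_{\mathfrak{B}}^{\varepsilon}=\log{\varepsilon}^{-1}+\log R-\log|\chi|+o_{\mathrm{P}}(1)$, with $\chi\sim\mathcal{N}(0,1/(2\lambda))$ in this case. The following Euler--Maruyama sketch is our illustration only (parameters are placeholders, not from the paper) and checks the leading $\lambda^{-1}\log{\varepsilon}^{-1}$ behaviour.

```python
import math, random

def exit_time_em(eps, lam=1.0, R=1.0, dt=2e-3, t_max=60.0, rng=random):
    """One Euler-Maruyama path of dY = lam*Y dt + eps dW from Y(0)=0;
    returns the first time |Y| reaches R (capped at t_max)."""
    y, t, sqdt = 0.0, 0.0, math.sqrt(dt)
    while abs(y) < R and t < t_max:
        y += lam * y * dt + eps * sqdt * rng.gauss(0.0, 1.0)
        t += dt
    return t

def mean_exit_time(eps, n=200, seed=7):
    """Monte Carlo average of the exit time over n independent paths."""
    rng = random.Random(seed)
    return sum(exit_time_em(eps, rng=rng) for _ in range(n)) / n
```

With ${\varepsilon}=10^{-4}$ the sample mean of $\lambda\tau$ sits close to (slightly above) $\log{\varepsilon}^{-1}\approx 9.2$, the excess reflecting the $-\log|\chi|$ term.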
\section{\label{sec:Introduction}Introduction} In the foundations of quantum physics the notion of contextuality can be formulated in purely probabilistic terms within the framework of the Kolmogorovian probability theory \citep*{Larsson2002,Khrennikov2008_Bell-Boole,Khrennikov2008_EPR-Bohm,DK2013PLoS,DK2014LNCSQualified,DK2014PLOSconditionalization,DK2014FOOP,DK2014Advances,DK2014Scripta,DKL2015FooP}. The notion applies to any system of random variables recorded under different (mutually incompatible) conditions. Contextuality means that these random variables cannot be ``sewn together'' into a single system of jointly distributed random variables if one assumes that all or some of them preserve their identity across different conditions. Within the Kolmogorovian framework the existence of this single joint distribution is equivalent to the presentability of all random variables involved as functions of one and the same (``hidden'') random variable \citep{SuppesZanotti1981,Fine1982b,DzhafarovKujala2010}. In spite of its long history (dating from Specker's \citeyearpar{Specker1960} example with three boxes), contextuality does not have a standard definition \citep{KochenSpecker1967,Laudisa1997,Spekkens2008,Kirchmair2009,Badziag2009,Khrennikov2009,Cabello2013PRL}, and is often confounded with such notions as nonlocality and lack of realism (the notions we will not get into in this chapter). All authors who use this term in quantum theory, however, agree on the possibility of detecting contextuality in the spins of entangled particles by violations of Bell-type inequalities \citep{Fine1982b,Bell1964,ClauserHorneShimonyHolt1969}. Many other tests have been developed for systems of random variables in and outside quantum physics, notably in psychology \citep{KujalaDzhafarov2008b,DzhafarovKujala2012a,DzhafarovKujala2012b,DzhafarovKujala2013ProcAMS}.
All of these tests are necessary (sometimes also sufficient) conditions for non-contextuality, because of which all of them presuppose or are directly making use of the condition known in psychology as marginal selectivity \citep{TownsendSchweickert1989,Dzhafarov2003c} and in quantum physics as no-signaling \citep{Cereceda2000,Masanes2006,Oas2014}. In this chapter we use the first term, as more general and purely probabilistic (see Section \ref{sec:Consequences-of-the}).\footnote{Within our most recent publications developing the theory, this property is also referred to by the technical name \index{consistently connected}\emph{consistent connectedness}.} If marginal selectivity is violated, no ``sewing together'' of the kind mentioned above is possible. The problem associated with this fact is that in some cases (including all cases known to us in psychology) violations of marginal selectivity can be readily attributed to the lack of selectivity in the dependence of random variables on various components of the conditions under which they are recorded. If a person is asked to judge brightness and size of a visually presented object, it is not difficult to construct a model in which the judgment of brightness is directly influenced by physical intensity and also directly influenced by object's physical size. In the EPR/Bohm paradigm, if the two measurements of spins in entangled particles are separated by a time-like interval, the spatial axis chosen by Bob (for one of the particles) can in principle initiate a process that will directly influence the spin recorded by Alice (for another particle). We will refer to the dependence of an output distribution on the ``wrong'' input as a direct cross-influence. The Bell-type inequalities (e.g., in the CHSH form, \citealp{ClauserHorneShimonyHolt1969}) cannot be derived under direct cross-influences, and whether or not they are violated therefore becomes irrelevant. 
It seems strange and intellectually unsatisfying, however, that we can detect contextuality when marginal selectivity holds precisely, but we cannot speak of contextuality at all when it is violated, however slightly. In this chapter we review (in the context of systems with binary inputs and binary random variables as outputs) a recently proposed definition and measure of contextuality \citep{KujalaDzhafarovLarsson2015,DKL2015FooP} that overcome this difficulty: even in the presence of direct cross-influences (say, from Bob's setting to Alice's measurements and vice versa) we can detect the presence and compute the degree of contextual influences ``on top of'' the direct cross-influences. The theory can be generalized to arbitrary systems with deterministic inputs and random outputs, but we do not attempt to present it here. We have made an effort to keep the presentation on a very nontechnical level. This level would be difficult to maintain in a more systematic or more general presentation. \section{\label{sec:The-System-The}The System $\left(\alpha,\beta,A,B\right)$} Consider a system with two binary inputs, $\alpha,\beta$, and two outputs that are binary random variables, $A,B$. Alice chooses the value of $\alpha$ to be either $\alpha_{1}$ or $\alpha_{2}$, and she records the corresponding value of $A$ as either $+1$ or $-1$. Bob chooses the value of $\beta$ to be either $\beta_{1}$ or $\beta_{2}$, and he records the value of $B$ as either $+1$ or $-1$. Alice and Bob do this repeatedly in successive trials, so that each input choice and output recording by Alice is paired with an input choice and output recording by Bob.
They send their paired choices of inputs and recordings of the outputs to Charlie, who creates four tables of joint distributions: for every $i\in\left\{ 1,2\right\} $ and $j\in\left\{ 1,2\right\} $, the distribution is\begin{equation}% \begin{tabular}{|c|cc|c|} \cline{1-3} $\phi=(\alpha_{i},\beta_{j})$ & $B_{ij}=+1$ & $B_{ij}=-1$ & \multicolumn{1}{c}{}\tabularnewline \hline $A_{ij}=+1$ & $\Pr\left[A_{ij}=1,B_{ij}=1\right]$ & $\ldots$ & $\Pr\left[A_{ij}=1\right]$\tabularnewline $A_{ij}=-1$ & $\ldots$ & $\ldots$ & $\ldots$\tabularnewline \hline \multicolumn{1}{c|}{} & $\Pr\left[B_{ij}=1\right]$ & $\ldots$ & \multicolumn{1}{c}{}\tabularnewline \cline{2-3} \end{tabular}\end{equation}Charlie knows that the only variables that can possibly influence $A$ are $\alpha$ and $\beta$, so he labels $A$ recorded under conditions $\phi=\left(\alpha_{i},\beta_{j}\right)$ as $A_{ij}$, allowing thereby $A_{ij}$ to have up to four different distributions. Each of these distributions can be represented by $\Pr\left[A_{ij}=1\right]$, or equivalently by the expected value $\left\langle A_{ij}\right\rangle =2\Pr\left[A_{ij}=1\right]-1$. The notation $B_{ij}$ for Bob, and the values $\Pr\left[B_{ij}=1\right]$ and $\left\langle B_{ij}\right\rangle $ are analogous. Charlie thus deals with eight random variables, \begin{equation} A_{11},B_{11},A_{12},B_{12},A_{21},B_{21},A_{22},B_{22}.\label{eq:the 8} \end{equation} With respect to the joint distribution of $A_{ij}$ and $B_{ij}$, their individual distributions are referred to as marginal. The joint distribution for $\left(A_{ij},B_{ij}\right)$ is uniquely determined by the two marginal probabilities and the joint probability $\Pr\left[A_{ij}=+1\textnormal{ and }B_{ij}=+1\right]$. 
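To make Charlie's bookkeeping concrete, here is a small Python sketch (our illustration, not part of the chapter) that maps one of his $2\times2$ joint-distribution tables to the marginal and product expectations, using $\left\langle A\right\rangle =2\Pr\left[A=1\right]-1$ and its analogues.

```python
def expectations(p_pp, p_pm, p_mp, p_mm):
    """Given the joint table Pr[A=a, B=b] for a,b in {+1,-1}
    (p_pm = Pr[A=+1, B=-1], etc.), return (<A>, <B>, <AB>)."""
    assert abs(p_pp + p_pm + p_mp + p_mm - 1.0) < 1e-9, "probabilities must sum to 1"
    ea = (p_pp + p_pm) - (p_mp + p_mm)    # <A> = 2 Pr[A=+1] - 1
    eb = (p_pp + p_mp) - (p_pm + p_mm)    # <B> = 2 Pr[B=+1] - 1
    eab = (p_pp + p_mm) - (p_pm + p_mp)   # Pr[A=B] - Pr[A != B]
    return ea, eb, eab
```

For instance, a table with $0.4$ on the diagonal and $0.1$ off the diagonal has uniform marginals and a positive correlation between the two outputs.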
Equivalently, it is determined by the two expected values $\left\langle A_{ij}\right\rangle ,\left\langle B_{ij}\right\rangle $ and the product expected value \begin{equation} \left\langle A_{ij}B_{ij}\right\rangle =\Pr\left[A_{ij}=B_{ij}\right]-\Pr\left[A_{ij}\not=B_{ij}\right]. \end{equation} \section{\label{sec:Selectivity-of-influences}Selectivity of influences and marginal selectivity} \index{selective influence}Let us assume that Charlie, based on some theory, expects that the dependence of $A,B$ on $\alpha,\beta$ is selective: Bob's choice of $\beta$ value does not influence Alice's $A$ and vice versa: \begin{equation} \begin{array}{c} \xymatrix{\alpha\ar[d] & \beta\ar[d]\\ A & B } \end{array}\label{eq:diagram selective} \end{equation} This means that $A_{i1}$ and $A_{i2}$ are one and the same random variable for every $i\in\left\{ 1,2\right\} $, and so are $B_{1j}$ and $B_{2j}$ for every $j\in\left\{ 1,2\right\} $. Charlie can therefore relabel $A_{ij}$ into $A_{i}$ and $B_{ij}$ into $B_{j}$. But he can also approach this in a more cautious way. He can retain the double indexation and ask the following question: given the eight random variables in (\ref{eq:the 8}) of which we know the expectations \begin{equation} \left(\left\langle A_{ij}B_{ij}\right\rangle ,\left\langle A_{ij}\right\rangle ,\left\langle B_{ij}\right\rangle \right),\;i,j\in\left\{ 1,2\right\} ,\label{eq:expectations observed} \end{equation} can we impose a joint distribution on these eight random variables\footnote{To impose a joint distribution on (\ref{eq:the 8}) means to create a vector of jointly distributed $A'_{11}$, $B'_{11}$, $\ldots$, $A'_{22}$, $B'_{22}$ called a \index{coupling}coupling for (\ref{eq:the 8}), such that the pairs $\left(A'_{ij},B'_{ij}\right)$ have the same distributions as $\left(A_{ij},B_{ij}\right)$ for all $i,j\in\left\{ 1,2\right\} $. No other subset of (\ref{eq:the 8}) has a joint distribution. 
In this chapter we conveniently confuse random variables and their primed counterparts. See \citep{DK2013PLoS,DK2014LNCSQualified,DK2014PLOSconditionalization,DK2014FOOP,DK2014Advances,DzhafarovKujala2010,DzhafarovKujala_handbook} for detailed discussions. } such that \begin{equation} \begin{array}{cc} \Pr\left[A_{i1}\not=A_{i2}\right]=0 & \textnormal{ for }i\in\left\{ 1,2\right\} \\ \Pr\left[B_{1j}\not=B_{2j}\right]=0 & \textnormal{ for }j\in\left\{ 1,2\right\} \end{array}?\label{eq:identity connection} \end{equation} If the answer is affirmative, then the situation is equivalent to the existence of a joint distribution of the single-indexed $A_{1},B_{1},A_{2},B_{2}$ such that \begin{equation} \left(\left\langle A_{i}B_{j}\right\rangle ,\left\langle A_{i}\right\rangle ,\left\langle B_{j}\right\rangle \right)=\left(\left\langle A_{ij}B_{ij}\right\rangle ,\left\langle A_{ij}\right\rangle ,\left\langle B_{ij}\right\rangle \right),\;i,j\in\left\{ 1,2\right\} . \end{equation} However, and this is the reason we call Charlie's approach cautious, the answer does not have to be affirmative. One situation that precludes this is if the following equalities are violated at least for one $i$ or one $j$: \begin{equation} \left\langle A_{i1}\right\rangle =\left\langle A_{i2}\right\rangle ,\;\left\langle B_{1j}\right\rangle =\left\langle B_{2j}\right\rangle .\label{eq:marginal invariance} \end{equation}\index{marginal selectivity}% These equalities represent marginal selectivity of $A$ with respect to changes in $\beta$ and of $B$ with respect to changes in $\alpha$. This marginal selectivity is an obvious consequence of (\ref{eq:identity connection}). If, e.g., $\left\langle A_{11}\right\rangle $ were different from $\left\langle A_{12}\right\rangle $, then, as Bob changes the value of $\beta$ from $\beta_{1}$ to $\beta_{2}$, Alice's distribution of $A$ for one and the same choice of $\alpha=\alpha_{1}$ changes. 
$A_{11}$ and $A_{12}$ cannot therefore be always equal, contravening (\ref{eq:identity connection}). In situations like this, Charlie is then forced to revise his model (\ref{eq:diagram selective}) in favor of \begin{equation} \begin{array}{c} \xymatrix{\alpha\ar[d]\ar[dr] & \beta\ar[d]\ar[dl]\\ A & B } \end{array}.\label{eq:diagrams cross} \end{equation} This can be referred to as a model with direct cross-influences: the distribution (hence also identity) of the outputs is allowed to be influenced by ``wrong'' inputs (``wrong'' from the point of view of Charlie's original theory\footnote{This is the ``subjective'', or theory-laden, aspect of the notion of contextuality: this notion acquires its meaning only in relation to some model, in this case represented by (\ref{eq:diagram selective}), that describes the system the way it ``ought to be'' or is predicted to be by some theory. We will not elaborate, but this accords with our view \citep{DK2014Scripta} that while probabilities are objective, the identities of random variables are theory-laden.}). \section{\label{sec:Contextuality-under-marginal}Contextuality under marginal selectivity}\index{contextuality!---, under marginal selectivity} There is another possibility for Charlie's question to have a negative answer. The marginal selectivity requirement may very well be satisfied, but the observed expectations (\ref{eq:expectations observed}) may be incompatible with the hypothesis (\ref{eq:identity connection}). The incompatibility means that a joint distribution of the eight random variables (\ref{eq:the 8}) that accords with both (\ref{eq:expectations observed}) and (\ref{eq:identity connection}) does not exist. This understanding of contextuality was first utilized by \citet{Larsson2002}. It helps to understand the essence of all Bell-type theorems. 
Stated in a form convenient for our purposes, the theorem of \citet{Fine1982b}, which applies to all systems with two binary inputs and two binary random outputs, says: \begin{thm} [Fine, 1982]\label{thm:Fine}The observed expectations (\ref{eq:expectations observed}) are compatible with the identity connections (\ref{eq:identity connection}) if and only if marginal selectivity (\ref{eq:marginal invariance}) is satisfied for all $i,j\in\left\{ 1,2\right\} $, and \begin{equation} \max_{i,j\in\left\{ 1,2\right\} }\left|\left\langle A_{11}B_{11}\right\rangle +\left\langle A_{12}B_{12}\right\rangle +\left\langle A_{21}B_{21}\right\rangle +\left\langle A_{22}B_{22}\right\rangle -2\left\langle A_{ij}B_{ij}\right\rangle \right|\leq2.\label{eq:Bell/Fine/CHSH} \end{equation} \end{thm} The term \index{connection}``connections'' used in this formulation \citep{DK2013PLoS,DK2014LNCSQualified,DK2014PLOSconditionalization,DK2014FOOP} refers to the unobservable pairs \begin{equation} \left(A_{11},A_{12}\right),\left(A_{21},A_{22}\right),\left(B_{11},B_{21}\right),\left(B_{12},B_{22}\right). 
\end{equation} Their unobservable joint distributions are given by\begin{equation}% \begin{tabular}{c|cc|c} \cline{2-3} & $A_{i2}=+1$ & $A_{i2}=-1$ & \tabularnewline \hline \multicolumn{1}{|c|}{$A_{i1}=+1$} & $\Pr\left[A_{i1}=1,A_{i2}=1\right]$ & $\ldots$ & \multicolumn{1}{c|}{$\Pr\left[A_{i1}=1\right]$}\tabularnewline \multicolumn{1}{|c|}{$A_{i1}=-1$} & $\ldots$ & $\ldots$ & \multicolumn{1}{c|}{$\ldots$}\tabularnewline \hline & $\Pr\left[A_{i2}=1\right]$ & $\ldots$ & \tabularnewline \cline{2-3} \multicolumn{1}{c}{} & & \multicolumn{1}{c}{} & \tabularnewline \cline{2-3} & $B_{2j}=+1$ & $B_{2j}=-1$ & \tabularnewline \hline \multicolumn{1}{|c|}{$B_{1j}=+1$} & $\Pr\left[B_{1j}=1,B_{2j}=1\right]$ & $\ldots$ & \multicolumn{1}{c|}{$\Pr\left[B_{1j}=1\right]$}\tabularnewline \multicolumn{1}{|c|}{$B_{1j}=-1$} & $\ldots$ & $\ldots$ & \multicolumn{1}{c|}{$\ldots$}\tabularnewline \hline & $\Pr\left[B_{2j}=1\right]$ & $\ldots$ & \tabularnewline \cline{2-3} \end{tabular}\end{equation}for $i,j\in\left\{ 1,2\right\} $. If (\ref{eq:identity connection}) holds, i.e., the entries on the minor diagonals of the tables are zero, then the connections are called the identity ones. The compatibility of connections with the observed expectations (uniquely defining the observed distributions) means that each of the $2^{8}$ possible combinations \[ A_{11}=\pm1,B_{11}=\pm1,\ldots,A_{22}=\pm1,B_{22}=\pm1 \] is assigned a probability, so that the probabilities for all combinations containing, say, $A_{12}=1$ and $B_{12}=-1$ sum to the observed $\Pr\left[A_{12}=1,B_{12}=-1\right]$; and the probabilities for all combinations containing, say, $B_{12}=1$ and $B_{22}=1$ sum to the hypothetical (unobservable) connection probability $\Pr\left[B_{12}=1,B_{22}=1\right]$. 
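Theorem \ref{thm:Fine} reduces this search over the $2^{8}$ probabilities to finitely many checks on the observed expectations. A minimal Python sketch (our own illustration; the function name and data layout are ours):

```python
from itertools import product

def fine_compatible(e_ab, e_a, e_b, tol=1e-9):
    """Fine's criterion: the observed expectations are compatible with
    the identity connections iff marginal selectivity holds and every
    CHSH combination is bounded by 2.  The arguments are dicts mapping
    (i, j) in {1, 2} x {1, 2} to <A_ij B_ij>, <A_ij>, <B_ij>."""
    # Marginal selectivity: <A_i1> = <A_i2> and <B_1j> = <B_2j>.
    if any(abs(e_a[i, 1] - e_a[i, 2]) > tol for i in (1, 2)):
        return False
    if any(abs(e_b[1, j] - e_b[2, j]) > tol for j in (1, 2)):
        return False
    # CHSH: |sum of all four <A_ij B_ij> minus twice any one of them| <= 2.
    total = sum(e_ab.values())
    return max(abs(total - 2 * e_ab[i, j])
               for i, j in product((1, 2), repeat=2)) <= 2 + tol
```

A deterministic system with all four product expectations (and all marginals) equal to $1$ passes the test, while a Popescu--Rohrlich-type table, with $\left\langle A_{11}B_{11}\right\rangle =\left\langle A_{12}B_{12}\right\rangle =\left\langle A_{21}B_{21}\right\rangle =1$, $\left\langle A_{22}B_{22}\right\rangle =-1$, and zero marginals, fails it.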
The inequalities (\ref{eq:Bell/Fine/CHSH}), in physics referred to as CHSH inequalities, can be violated, and they are de facto violated if $A$ and $B$ are spins of two entangled particles under certain choices of spatial axes ($\alpha$ and $\beta$) along which they are measured \citep{AspectGrangierRoger1981,AspectGrangierRoger1982,Weihs1998}. When these inequalities are violated while marginal selectivity is satisfied, we speak of contextuality: Alice's output $A$ under her choice of $\alpha_{1}$ does not change its distribution depending on Bob's choice of $\beta_{1}$ or $\beta_{2}$, but $A_{11}$ and $A_{12}$ still cannot be considered one and the same random variable (it should not come as a surprise that different random variables can have the same distribution). In the diagram below the interrupted lines indicate contextual influences: the dependence of identities of identically distributed random variables on the ``wrong'' inputs: \begin{equation} \begin{array}{c} \xymatrix{\alpha\ar[d]\ar@{-->}[dr] & \beta\ar[d]\ar@{-->}[dl]\\ A & B } \end{array} \end{equation} When the inequalities (\ref{eq:Bell/Fine/CHSH}) are violated, a measure of contextuality can be easily designed as follows. If (\ref{eq:identity connection}) were compatible with the observed expectations (\ref{eq:expectations observed}), then (by definition) Charlie could construct a joint distribution of the random variables (\ref{eq:the 8}) in which \begin{equation} \Delta=\Pr\left[A_{11}\not=A_{12}\right]+\Pr\left[A_{21}\not=A_{22}\right]+\Pr\left[B_{11}\not=B_{21}\right]+\Pr\left[B_{12}\not=B_{22}\right]\label{eq:C under MS} \end{equation} equals zero. If (\ref{eq:identity connection}) is incompatible with (\ref{eq:expectations observed}), then this $\Delta$ cannot be zero in any joint distribution imposed on (\ref{eq:the 8}). 
It is natural therefore to adopt the following \begin{defn} \label{def:contextuality under MS}Under marginal selectivity, the degree of contextuality in a system with given observed expectations (\ref{eq:expectations observed}) is the minimal value of $\Delta$ in (\ref{eq:C under MS}) for which a joint distribution for (\ref{eq:the 8}) exists. \end{defn} As it turns out, this minimal value of $\Delta$ equals \begin{equation} \Delta_{\min}=\max\left\{ 0,\Delta_{\textnormal{CHSH}}\right\} ,\label{eq:C_min} \end{equation} where \begin{equation} \begin{split} &\Delta_{\textnormal{CHSH}}=\\ &{\textstyle\frac{1}{2}}\max_{i,j\in\left\{ 1,2\right\} }\left|\left\langle A_{11}B_{11}\right\rangle +\left\langle A_{12}B_{12}\right\rangle +\left\langle A_{21}B_{21}\right\rangle +\left\langle A_{22}B_{22}\right\rangle -2\left\langle A_{ij}B_{ij}\right\rangle \right|-1\label{eq:C^0 general} \end{split} \end{equation} is ($\nicefrac{1}{2}$ times) the violation of the CHSH inequalities. This is a special case of the formula derived later in Theorem \ref{thm:C_min} without the assumption of marginal selectivity. As an example, let the observed expectations be at the Tsirelson bounds \citep{Tsirelson1980,Landau1987}. Then $\Delta_{\min}$ is $\sqrt{2}-1$. The largest possible value of $\Delta_{\min}$ is 1. \section{\label{sec:Contextuality-on-top}Contextuality on top of direct cross-influences}\index{contextuality!---, on top of direct cross-influences} The definition of contextuality given above does not work for the situation depicted in (\ref{eq:diagrams cross}), where marginal selectivity is not satisfied. In this case we have direct cross-influences from ``wrong'' inputs, and this precludes the possibility that $\Delta$ in (\ref{eq:C under MS}) is zero. 
In fact, we have the simple \begin{thm} \label{thm:C_0}Given the observed expectations $\left(\left\langle A_{ij}\right\rangle ,\left\langle B_{ij}\right\rangle \right)_{i,j\in\left\{ 1,2\right\} }$, the minimum possible value for $\Delta$ in (\ref{eq:C under MS}) is \begin{equation} \begin{split} \Delta_{0}=\textstyle\frac{1}{2}(&\left|\left\langle A_{11}\right\rangle -\left\langle A_{12}\right\rangle \right|+\left|\left\langle A_{21}\right\rangle -\left\langle A_{22}\right\rangle \right|+\\ &\left|\left\langle B_{11}\right\rangle -\left\langle B_{21}\right\rangle \right|+\left|\left\langle B_{12}\right\rangle -\left\langle B_{22}\right\rangle \right|).\label{eq:C_0 general} \end{split} \end{equation} \end{thm} \begin{proof} We minimize $\Delta$ if we minimize separately $\Pr\left[A_{11}\not=A_{12}\right]$, $\Pr\left[A_{21}\not=A_{22}\right]$, $\Pr\left[B_{11}\not=B_{21}\right]$, and $\Pr\left[B_{12}\not=B_{22}\right]$. Consider, e.g., the distribution of the connection $\left(A_{11},A_{12}\right)$:\begin{equation}% \begin{tabular}{|c|ccc|} \cline{2-4} \multicolumn{1}{c|}{} & $A_{12}=+1$ && $A_{12}=-1$\tabularnewline \hline \!\!$A_{11}=+1$\!\! & \!\!$\Pr\left[A_{11}=1,A_{12}=1\right]$ & \multicolumn{2}{c|}{\quad $\Pr\left[A_{11}=1\right]-\Pr\left[A_{11}=1,A_{12}=1\right]$\!\!}\tabularnewline \!\!$A_{11}=-1$\!\! & \multicolumn{2}{c}{\!\!$\Pr\left[A_{12}=1\right]-\Pr\left[A_{11}=1,A_{12}=1\right]$\quad } & $\ldots$\!\!\tabularnewline \hline \end{tabular}\end{equation}The largest possible value for the probability $\Pr\left[A_{11}=1,A_{12}=1\right]$ is $$\min\left\{ \Pr\left[A_{11}=1\right],\Pr\left[A_{12}=1\right]\right\} ,$$ whence the minimum of $\Pr\left[A_{11}\not=A_{12}\right]$, which is the sum of the entries on the minor diagonal, is $\left|\Pr\left[A_{11}=1\right]-\Pr\left[A_{12}=1\right]\right|=\frac{1}{2}\left|\left\langle A_{11}\right\rangle -\left\langle A_{12}\right\rangle \right|$. 
\end{proof} Under marginal selectivity we have $\Delta_{0}=0$, and we speak of contextuality if the minimal value of $\Delta$ that is compatible with the observed expectations (\ref{eq:expectations observed}) is greater than $\Delta_{0}=0$. In the general case $\Delta_{0}>0$, and we need a more general definition of contextuality. The idea is simple. If $\Delta_{0}>0$, we have direct cross-influences (\ref{eq:diagrams cross}), and if $\Delta=\Delta_{0}$ is compatible with the observed expectations (\ref{eq:expectations observed}), then no contextuality is involved: direct cross-influences are all one needs to account for the system's behavior. If however $\Delta=\Delta_{0}$ is not compatible with the observed expectations (\ref{eq:expectations observed}), then we can speak of contextuality ``on top of'' the direct cross-influences. The natural measure of the degree of contextuality is then given by \begin{defn} \label{def:contextuality general}The degree of contextuality in a system with given observed expectations (\ref{eq:expectations observed}) is $\Delta_{\min}-\Delta_{0}$, where $\Delta_{\min}$ is the minimal value of $\Delta$ in (\ref{eq:C under MS}) for which a joint distribution for (\ref{eq:the 8}) exists. \end{defn} \section{\label{sec:General-formula-for}General formula for contextuality} We now need to derive a formula for $\Delta_{\min}$ of which (\ref{eq:C_min}) is a special case. 
\begin{thm} \label{thm:C_min}The minimum possible value $\Delta_{\min}$ for $\Delta$ that is compatible with the observed expectations (\ref{eq:expectations observed}) is \begin{equation} \Delta_{\min}=\max\left\{ \Delta_{0},\Delta_{\textnormal{CHSH}}\right\} ,\label{eq:C_min general} \end{equation} where $\Delta_{0}$ is given in (\ref{eq:C_0 general}) and $\Delta_{\textnormal{CHSH}}$ in (\ref{eq:C^0 general}).\end{thm} \begin{proof} By Lemma \ref{lem:2} (a computer-assisted result detailed in the next section), $\Delta$ is compatible with the observed $\left(\left\langle A_{ij}B_{ij}\right\rangle ,\left\langle A_{ij}\right\rangle ,\left\langle B_{ij}\right\rangle \right)_{i,j\in\left\{ 1,2\right\} }$ if and only if it satisfies \begin{align} \Delta &\textstyle \ge-1+\frac{1}{2}s_{1}\left(\left\langle A_{11}B_{11}\right\rangle ,\left\langle A_{12}B_{12}\right\rangle ,\left\langle A_{21}B_{21}\right\rangle ,\left\langle A_{22}B_{22}\right\rangle \right),\label{eq:S_lower_from_r}\\ \begin{split} \Delta &\textstyle \ge\frac{1}{2}( \left|\left\langle A_{11}\right\rangle -\left\langle A_{12}\right\rangle \right|+\left|\left\langle A_{21}\right\rangle -\left\langle A_{22}\right\rangle \right|+\\ &\textstyle\phantom{{}\ge2(}\left|\left\langle B_{11}\right\rangle -\left\langle B_{21}\right\rangle \right|+\left|\left\langle B_{12}\right\rangle -\left\langle B_{22}\right\rangle \right|), \end{split} \label{eq:S_lower_from_ab}\\ \Delta &\textstyle \le4-\left[-1+\frac{1}{2}s_{1}\left(\left\langle A_{11}B_{11}\right\rangle ,\left\langle A_{12}B_{12}\right\rangle ,\left\langle A_{21}B_{21}\right\rangle ,\left\langle A_{22}B_{22}\right\rangle \right)\right],\label{eq:S_upper_from_r}\\ \begin{split} \Delta &\textstyle \le4-\frac{1}{2}(\left|\left\langle A_{11}\right\rangle +\left\langle A_{12}\right\rangle \right|+\left|\left\langle A_{21}\right\rangle +\left\langle A_{22}\right\rangle \right|+\\ &\textstyle\phantom{{}\le4-2(}\left|\left\langle B_{11}\right\rangle +\left\langle 
B_{21}\right\rangle \right|+\left|\left\langle B_{12}\right\rangle +\left\langle B_{22}\right\rangle \right|),\label{eq:S_upper_from_ab} \end{split} \end{align} where $s_{1}(\cdots)$ is defined in \eqref{eq:s1} in Sec.~\ref{sec:techdetails} below and is equal to the $\max\left|\ldots\right|$-part of \eqref{eq:C^0 general}. These inequalities are always mutually compatible, whence $\Delta_{\min}$ is the larger of the two right-hand expressions in (\ref{eq:S_lower_from_r}) and (\ref{eq:S_lower_from_ab}). \end{proof} It follows that $\Delta_{\min}-\Delta_{0}$ is always nonnegative, and Definition \ref{def:contextuality general} is well-constructed: $\Delta_{\min}-\Delta_{0}=0$ indicates no contextuality, $\Delta_{\min}-\Delta_{0}>0$ indicates contextuality on top of the direct cross-influences. We can present the notion of (non-)contextuality in as close a form as possible to the traditional CHSH inequalities: \begin{thm} The system exhibits no contextuality if and only if \begin{equation} \begin{split} \left|\left\langle A_{11}B_{11}\right\rangle +\left\langle A_{12}B_{12}\right\rangle +\left\langle A_{21}B_{21}\right\rangle -\left\langle A_{22}B_{22}\right\rangle \right|&\leq2\left(1+\Delta_{0}\right),\\ \left|\left\langle A_{11}B_{11}\right\rangle +\left\langle A_{12}B_{12}\right\rangle -\left\langle A_{21}B_{21}\right\rangle +\left\langle A_{22}B_{22}\right\rangle \right|&\leq2\left(1+\Delta_{0}\right),\\ \left|\left\langle A_{11}B_{11}\right\rangle -\left\langle A_{12}B_{12}\right\rangle +\left\langle A_{21}B_{21}\right\rangle +\left\langle A_{22}B_{22}\right\rangle \right|&\leq2\left(1+\Delta_{0}\right),\\ \left|-\left\langle A_{11}B_{11}\right\rangle +\left\langle A_{12}B_{12}\right\rangle +\left\langle A_{21}B_{21}\right\rangle +\left\langle A_{22}B_{22}\right\rangle \right|&\leq2\left(1+\Delta_{0}\right), \end{split}\label{eq:familar} \end{equation} where $\Delta_{0}$ is the natural measure of violation of marginal selectivity, (\ref{eq:C_0 general}). 
If at least one of these inequalities is violated, then the largest difference between the left-hand side and $2\left(1+\Delta_{0}\right)$ is the degree of contextuality (after scaling by $\nicefrac{1}{2}$). \end{thm} The maximum value attainable by one of the linear combinations in (\ref{eq:familar}) is 4. It follows that the system exhibits no contextuality if the violation of marginal selectivity $\Delta_{0}$ in it is not less than 1. Put differently, if $\Delta_{0}\geq1$, any observed distributions of random variables can be accounted for in terms of direct cross-influences, with no contextuality involved. \section{\label{sec:Consequences-of-the}Consequences of the new definition of contextuality} The notion of contextuality was presented in the Introduction to mean that random variables recorded under mutually incompatible conditions cannot be ``sewn together'' into a single system of jointly distributed random variables, provided one assumes that all or some of them preserve their identity across different conditions. We should now relax the assumption clause: \begin{quote} contextuality means that random variables recorded under mutually incompatible conditions cannot be ``sewn together'' into a single system of jointly distributed random variables, provided one assumes that their identity across different conditions changes as little as is allowed by direct cross-influences (equivalently, by observed deviations from marginal selectivity). \end{quote} As mentioned in the Introduction, marginal selectivity is rarely satisfied outside quantum physics, and, in particular, is almost always violated in psychological experiments. Consider, e.g., a double-detection experiment, where a participant is presented with two side-by-side flashes of light (left and right) and asked to say ``Yes/No'' to the question ``Is there a flash on the left?'' and another ``Yes/No'' to the question ``Is there a flash on the right?''. 
Each flash can be presented at two intensity levels: zero (no flash) and some very small value $s>0$. We have therefore four conditions: $\left(0,0\right),\left(0,s\right),\left(s,0\right),\left(s,s\right)$. Denoting the response about the left stimulus by $A$ and the response about the right stimulus by $B$, we get the eight random variables $A_{00},B_{00},\ldots,A_{ss},B_{ss}$. The situation is formally identical to the Alice-Bob paradigm. The ``normative'' diagram (\ref{eq:diagram selective}), with $\alpha,\beta$ being the two flash intensities, is very likely to be violated on the level of marginal probabilities: the answer about the left flash will almost certainly be influenced by the intensity of the right flash, and vice versa. Our definition of contextuality, however, allows one to determine whether there is contextuality on top of these direct cross-influences. Another example is taken from the work by \citet{AertsGaboraSozzo2013}. They estimated the probabilities with which people chose one of two animal names and one of two animal sounds presented to them. 
The results were as follows:\medskip{} \fbox{\begin{minipage}[t]{0.9\columnwidth}% \begin{center} Probability estimates from Table 1 of \citep{AertsGaboraSozzo2013}.$^{\;\dagger}$ \end{center} \medskip \begin{center} \begin{small} \setlength{\tabcolsep}{2pt} \begin{tabular}{c|cc|cc@{\hspace{6pt}}c|cc|c} \cline{2-3} \cline{7-8} \multirow{2}{*}{$\phi=(\alpha_{1},\beta_{1})$} & $B_{11}=$ & $B_{11}=$ & & & \multirow{2}{*}{$\phi=(\alpha_{1},\beta_{2})$} & $B_{12}=$ & $B_{12}=$ & \tabularnewline & Growls & Whinnies & & & & Snorts & Meows & \tabularnewline \cline{1-4} \cline{6-9} \multicolumn{1}{|c|}{$A_{11}=\textnormal{Horse}$} & .049 & .630 & \multicolumn{1}{c|}{.679} & \multicolumn{1}{c|}{} & $A_{12}=\textnormal{Horse}$ & .593 & .025 & \multicolumn{1}{c|}{.618}\tabularnewline \multicolumn{1}{|c|}{$A_{11}=\textnormal{Bear}$} & .259 & .062 & \multicolumn{1}{c|}{.321} & \multicolumn{1}{c|}{} & $A_{12}=\textnormal{Bear}$ & .296 & .086 & \multicolumn{1}{c|}{.382}\tabularnewline \cline{1-4} \cline{6-9} & .308 & .692 & & & & .889 & .111 & \tabularnewline \cline{2-3} \cline{7-8} \multicolumn{1}{c}{} & & \multicolumn{1}{c}{} & & & \multicolumn{1}{c}{} & & \multicolumn{1}{c}{} & \tabularnewline \cline{2-3} \cline{7-8} \multirow{2}{*}{$\phi=(\alpha_{2},\beta_{1})$} & $B_{21}=$ & $B_{21}=$ & & & \multirow{2}{*}{$\phi=(\alpha_{2},\beta_{2})$} & $B_{22}=$ & $B_{22}=$ & \tabularnewline & Growls & Whinnies & & & & Snorts & Meows & \tabularnewline \cline{1-4} \cline{6-9} \multicolumn{1}{|c|}{$A_{21}=\textnormal{Tiger}$} & .778 & .086 & \multicolumn{1}{c|}{.864} & \multicolumn{1}{c|}{} & $A_{22}=\textnormal{Tiger}$ & .148 & .086 & \multicolumn{1}{c|}{.234}\tabularnewline \multicolumn{1}{|c|}{$A_{21}=\textnormal{Cat}$} & .086 & .049 & \multicolumn{1}{c|}{.135} & \multicolumn{1}{c|}{} & $A_{22}=\textnormal{Cat}$ & .099 & .667 & \multicolumn{1}{c|}{.766}\tabularnewline \cline{1-4} \cline{6-9} & .864 & .135 & & & & .247 & .753 & \tabularnewline \cline{2-3} \cline{7-8} \end{tabular} 
\setlength{\tabcolsep}{6pt} \end{small} \par \end{center} \medskip{} $^{\dagger\;}$Based on 81 respondents per table. \medskip{} \end{minipage}} \medskip{} Here, $\alpha$ indicates one of the two animal dichotomies offered ($\alpha_{1}=\textnormal{Horse or Bear}$, $\alpha_{2}=\textnormal{Tiger or Cat}$), and $\beta$ analogously indicates one of two animal sound dichotomies. The value of $\Delta_{\textnormal{CHSH}}$ given by (\ref{eq:C^0 general}) equals 0.210 here, and Aerts et al.\ report it as evidence in favor of contextuality (note that the CHSH bound of $2$ corresponds to $\Delta_{\textnormal{CHSH}}=0$). We criticized this conclusion \citep{DzhafarovKujala2013Topics} by pointing out that the derivation of the CHSH inequalities is not valid without marginal selectivity, and the latter is clearly violated in the data: e.g., $\Pr\left[B_{12}=\textnormal{Snorts}\right]=0.889$ while $\Pr\left[B_{22}=\textnormal{Snorts}\right]=0.247$. We can now amend our criticism: the computation of $\Delta_{\textnormal{CHSH}}$ is meaningful even if marginal selectivity is contravened. One has, however, to compare $\Delta_{\textnormal{CHSH}}$ to $\Delta_{0}$ of (\ref{eq:C_0 general}) rather than to zero, and to compute $\max\left\{ \Delta_{0},\Delta_{\textnormal{CHSH}}\right\} -\Delta_{0}$ as the measure of contextuality. Unfortunately for Aerts et al.'s conclusions, $\Delta_{0}$ in their data is too large (1.889) to allow for nonzero contextuality. In quantum physics, the no-signaling condition (a special case of marginal selectivity) can be ensured by separating the outputs from the ``wrong'' inputs by space-like intervals. There are, however, some indications that in the well-known experiments by \citet{Weihs1998}, where space-like separation is claimed to be the case, violations of marginal selectivity were observed \citep{AdenierKhrennikov2007}. 
If so, and whatever the physical cause of these violations, our new approach provides a way of testing whether contextuality is still present in the data. Signaling is natural to assume in Leggett--Garg-type systems \citep{LeggettGarg1985}, with three binary random variables $X,Y,Z$ tied to three successive moments of time, $t_{1}<t_{2}<t_{3}$. Any two of these three random variables can be measured together, in one experiment, but not all three of them. If $X$ and $Z$ are measured together, then (in accordance with our general approach, see \citealp{DK2014LNCSQualified,DK2014PLOSconditionalization,DK2014FOOP,DK2014Advances,DK2014Scripta}) the identity of $X$ as a random variable may be different from the identity of $X$ when measured together with $Y$. This means that $X$ in the two situations should be labelled differently, say, $X_{13}$ and $X_{12}$, respectively (based on the time moments involved). Analogously, we have $Y_{12}$ and $Y_{23}$ depending on whether $Y$ is measured together with $X$ or with $Z$; and we have $Z_{13}$ and $Z_{23}$. \citet{SuppesZanotti1981} have shown that, given uniform marginals, a necessary and sufficient condition for the existence of a joint distribution of \begin{equation} X_{12},X_{13},Y_{12},Y_{23},Z_{13},Z_{23}\label{eq:LG-obs-vars} \end{equation} under the constraint $X_{12}=X_{13}$, $Y_{12}=Y_{23}$, $Z_{13}=Z_{23}$ is \begin{equation} \begin{split} -1&\le\left\langle X_{12}Y_{12}\right\rangle +\left\langle Y_{23}Z_{23}\right\rangle +\left\langle X_{13}Z_{13}\right\rangle\\ &\leq1+2\min\left\{ \left\langle X_{12}Y_{12}\right\rangle ,\left\langle Y_{23}Z_{23}\right\rangle ,\left\langle X_{13}Z_{13}\right\rangle \right\} .\label{eq:LG-suppes-zanotti} \end{split} \end{equation} As a by-product of our analysis, we show that this inequality in fact holds for arbitrary marginals as well, and we generalize the inequalities to the signaling case. 
\begin{thm} The minimum possible value $\Delta'_{\min}$ for \begin{equation} \Delta'=\Pr\left[X_{12}\ne X_{13}\right]+\Pr\left[Y_{12}\ne Y_{23}\right]+\Pr\left[Z_{13}\ne Z_{23}\right]\label{eq:LG-Delta} \end{equation} that is compatible with the observed expectations \begin{equation} \left\langle X_{12}Y_{12}\right\rangle ,\,\left\langle X_{13}Z_{13}\right\rangle ,\,\left\langle Y_{23}Z_{23}\right\rangle ,\,\left\langle X_{12}\right\rangle ,\,\left\langle X_{13}\right\rangle ,\,\left\langle Y_{12}\right\rangle ,\,\left\langle Y_{23}\right\rangle ,\,\left\langle Z_{13}\right\rangle ,\,\left\langle Z_{23}\right\rangle \label{eq:LG-observable-expectations} \end{equation} is \begin{equation} \Delta'_{\min}=\max\left\{ \Delta'_{0},\Delta'_{\textnormal{SZ}}\right\} , \end{equation} where \begin{equation} \Delta'_{0}=\frac{1}{2}\left(\left|\left\langle X_{12}\right\rangle -\left\langle X_{13}\right\rangle \right|+\left|\left\langle Y_{12}\right\rangle -\left\langle Y_{23}\right\rangle \right|+\left|\left\langle Z_{13}\right\rangle -\left\langle Z_{23}\right\rangle \right|\right) \end{equation} is the natural measure of the violation of marginal selectivity and \begin{equation} \begin{array}{r@{}l} \Delta'_{\textnormal{SZ}}=-\frac{1}{2}+\frac{1}{2}\max\big\{\, & \left\langle X_{12}Y_{12}\right\rangle +\left\langle X_{13}Z_{13}\right\rangle -\left\langle Y_{23}Z_{23}\right\rangle ,\\ & \left\langle X_{12}Y_{12}\right\rangle -\left\langle X_{13}Z_{13}\right\rangle +\left\langle Y_{23}Z_{23}\right\rangle ,\\ -& \left\langle X_{12}Y_{12}\right\rangle +\left\langle X_{13}Z_{13}\right\rangle +\left\langle Y_{23}Z_{23}\right\rangle ,\\ -& \left\langle X_{12}Y_{12}\right\rangle -\left\langle X_{13}Z_{13}\right\rangle -\left\langle Y_{23}Z_{23}\right\rangle \big\} \end{array} \end{equation} is ($\nicefrac{1}{2}$ times) the maximum violation of the Suppes--Zanotti inequalities (\ref{eq:LG-suppes-zanotti}).\end{thm} \begin{proof} By Lemma~\ref{lem:2-1} of the next section, $\Delta'$ 
is compatible with the observed expectations (\ref{eq:LG-observable-expectations}) if and only if it satisfies \begin{align} \Delta' &\textstyle \ge-\frac{1}{2}+\frac{1}{2}s_{1}\left(\left\langle X_{12}Y_{12}\right\rangle ,\left\langle Y_{23}Z_{23}\right\rangle ,\left\langle X_{13}Z_{13}\right\rangle \right),\label{eq:S_lower_from_rxyz}\\ \Delta' &\textstyle \ge\frac{1}{2}\left(\left|\left\langle X_{12}\right\rangle -\left\langle X_{13}\right\rangle \right|+\left|\left\langle Y_{12}\right\rangle -\left\langle Y_{23}\right\rangle \right|+\left|\left\langle Z_{13}\right\rangle -\left\langle Z_{23}\right\rangle \right|\right),\label{eq:S_lower_from_xyz}\\ \Delta' &\textstyle \le3-\left[-\frac{1}{2}-\frac{1}{2}s_{1}\left(\left\langle X_{12}Y_{12}\right\rangle ,\left\langle Y_{23}Z_{23}\right\rangle ,\left\langle X_{13}Z_{13}\right\rangle \right)\right],\label{eq:S_upper_from_rxyz}\\ \Delta' &\textstyle \le3-\frac{1}{2}\left(\left|\left\langle X_{12}\right\rangle +\left\langle X_{13}\right\rangle \right|+\left|\left\langle Y_{12}\right\rangle +\left\langle Y_{23}\right\rangle \right|+\left|\left\langle Z_{13}\right\rangle +\left\langle Z_{23}\right\rangle \right|\right).\label{eq:S_upper_from_xyz} \end{align} These inequalities are always mutually compatible, whence $\Delta'_{\min}$ is the larger of the two right-hand expressions in (\ref{eq:S_lower_from_rxyz}) and (\ref{eq:S_lower_from_xyz}). \end{proof} \begin{defn} The degree of contextuality in a system with given observed expectations (\ref{eq:LG-observable-expectations}) is $\Delta'_{\min}-\Delta'_{0}$, where $\Delta'_{\min}$ is the minimal value of $\Delta'$ in (\ref{eq:LG-Delta}) for which a joint distribution for (\ref{eq:LG-obs-vars}) exists. 
\end{defn} Using essentially the same reasoning as for the EPR/Bohm paradigm, we come to the following \begin{thm} A Leggett--Garg-type system exhibits no contextuality if and only if \begin{equation} \begin{split} \left\langle X_{12}Y_{12}\right\rangle +\left\langle Y_{23}Z_{23}\right\rangle -\left\langle X_{13}Z_{13}\right\rangle &\leq1+2\Delta'_{0},\\ \left\langle X_{12}Y_{12}\right\rangle -\left\langle Y_{23}Z_{23}\right\rangle +\left\langle X_{13}Z_{13}\right\rangle &\leq1+2\Delta'_{0},\\ -\left\langle X_{12}Y_{12}\right\rangle +\left\langle Y_{23}Z_{23}\right\rangle +\left\langle X_{13}Z_{13}\right\rangle &\leq1+2\Delta'_{0},\\ -\left\langle X_{12}Y_{12}\right\rangle -\left\langle Y_{23}Z_{23}\right\rangle -\left\langle X_{13}Z_{13}\right\rangle &\leq1+2\Delta'_{0}. \end{split}\label{eq:Leggett-Garg-noncontextuality} \end{equation} The largest breach of one of these bounds can then be taken as a measure of contextuality. \end{thm} Inequalities (\ref{eq:Leggett-Garg-noncontextuality}) can also be equivalently rewritten closer to the Suppes--Zanotti \citeyearpar{SuppesZanotti1981} formulation: \begin{equation} \begin{split} -1-2\Delta'_{0}&\le\left\langle X_{12}Y_{12}\right\rangle +\left\langle Y_{23}Z_{23}\right\rangle +\left\langle X_{13}Z_{13}\right\rangle\\ &\leq1+2\Delta'_{0}+2\min\left\{ \left\langle X_{12}Y_{12}\right\rangle ,\left\langle Y_{23}Z_{23}\right\rangle ,\left\langle X_{13}Z_{13}\right\rangle \right\} .\label{eq:Leggett-Garg-non-contextuality-Suppes-Zanotti} \end{split} \end{equation} \section{Technical details}\label{sec:techdetails} In this section, we give the technical details of the computer-assisted results used above. 
Refer to Fig.~\ref{fig:A-representation-of} for a graphical representation of the connections and observed pairs of random variables in the system.\footnote{Based on our most recent theoretical results \citep{KujalaDzhafarovLarsson2015,KujalaDzhafarov2015}, the computer-assisted proofs for the systems considered here can in fact be obtained analytically as well. However, the principles of computer-assisted proof laid out here are applicable in systems that are not covered by the analytical results.} \begin{lem} \label{lem:1}The necessary and sufficient condition for the connection expectations $\left(\left\langle A_{i1}A_{i2}\right\rangle ,\left\langle B_{1j}B_{2j}\right\rangle \right)_{i,j\in\left\{ 1,2\right\} }$ to be compatible with the observed expectations $$\left(\left\langle A_{ij}B_{ij}\right\rangle ,\left\langle A_{ij}\right\rangle ,\left\langle B_{ij}\right\rangle \right)_{i,j\in\left\{ 1,2\right\} }$$ is \begin{equation} \begin{split} s_{0}&\left(\left\langle A_{11}B_{11}\right\rangle ,\left\langle A_{12}B_{12}\right\rangle ,\left\langle A_{21}B_{21}\right\rangle ,\left\langle A_{22}B_{22}\right\rangle \right)\\ &\le6-s_{1}\left(\left\langle A_{11}A_{12}\right\rangle ,\left\langle B_{11}B_{21}\right\rangle ,\left\langle A_{21}A_{22}\right\rangle ,\left\langle B_{12}B_{22}\right\rangle \right),\\ s_{1}&\left(\left\langle A_{11}B_{11}\right\rangle ,\left\langle A_{12}B_{12}\right\rangle ,\left\langle A_{21}B_{21}\right\rangle ,\left\langle A_{22}B_{22}\right\rangle \right)\\ &\le6-s_{0}\left(\left\langle A_{11}A_{12}\right\rangle ,\left\langle B_{11}B_{21}\right\rangle ,\left\langle A_{21}A_{22}\right\rangle ,\left\langle B_{12}B_{22}\right\rangle \right), \end{split}\label{eq:compatibility} \end{equation} where \begin{equation} \begin{split} s_{0}\left(a,b,c,d\right) & =\max\left\{ \left(\pm a\pm b\pm c\pm d\right):\textnormal{ the number of minuses is even}\right\},\\%\label{eq:s0}\\ s_{1}\left(a,b,c,d\right) & =\max\left\{ \left(\pm a\pm b\pm c\pm 
d\right):\textnormal{ the number of minuses is odd}\right\}.\label{eq:s1} \end{split} \end{equation} \end{lem} \begin{proof} The joint distribution of the eight random variables $$A_{11},B_{11},A_{12},B_{12},A_{21},B_{21},A_{22},B_{22}$$ is fully described by the vector $\mathbf{q}\in[0,1]^{n},$ $q_{1}+\dots+q_{n}=1$, consisting of the probabilities of the $n=2^{8}=256$ different combinations of the values of the eight random variables. We then define a vector $\mathbf{p}\in[0,1]^{m}$, $m=32$, consisting of the $16$ observable probabilities $\Pr[A_{ij}=a,\ B_{ij}=b]$ for $a,b\in\{-1,1\},$ $i,j\in\{1,2\}$ and the $16$ connection probabilities given by $\Pr[A_{i1}=a,\ A_{i2}=a']$ and $\Pr[B_{1j}=b,\ B_{2j}=b']$ for $a,a',b,b'\in\{-1,1\}$ and $i,j\in\{1,2\}$. As every element of $\mathbf{p}$ is a ($2$-)marginal probability of the joint represented by $\mathbf{q}$, there exists a binary matrix $M\in\{0,1\}^{m\times n}$ such that \begin{equation} \mathbf{p}=M\mathbf{q}.\label{eq:pMq} \end{equation} It follows that the observable probabilities $p_{1},\dots,p_{16}$ are compatible with the connection probabilities $p_{17},\dots,p_{32}$ if and only if there exists an $n$-vector $\mathbf{q}\ge0$ such that \eqref{eq:pMq} holds. As described in \citep[Text~S3]{DK2013PLoS}, the set of vectors $\mathbf{p}$ satisfying this constraint forms a polytope whose vertices are given by the columns of $M$ and whose half-space representation can be obtained by a facet enumeration algorithm. As also described in \citep{DK2013PLoS}, this half-space representation consists of $160$ inequalities and $16$ equations in $p_{1},\dots,p_{32}$. The $16$ equations correspond to the requirement that the $1$-marginals of the observable probabilities agree with those of the connections and that the observable probabilities are properly normalized.
Expressing the probabilities in the vector $\mathbf{p}$ in terms of the observable and connection expectations $\left(\left\langle A_{ij}B_{ij}\right\rangle ,\left\langle A_{ij}\right\rangle ,\left\langle B_{ij}\right\rangle ,\left\langle A_{i1}A_{i2}\right\rangle ,\left\langle B_{1j}B_{2j}\right\rangle \right)$, $i,j\in\{1,2\}$, the $16$ equations become identically true (the parameterization already guarantees them), and of the $160$ inequalities, $128$ turn into exactly those represented by \eqref{eq:compatibility} and the remaining $32$ are trivial constraints of the form \begin{equation} -1+|\left\langle A\right\rangle +\left\langle B\right\rangle |\le\left\langle AB\right\rangle \le1-|\left\langle A\right\rangle -\left\langle B\right\rangle |\label{eq:implicit} \end{equation} for the $8$ pairs of random variables involved in \eqref{eq:compatibility}. The trivial constraints correspond to the implicit requirement that the observable and connection probabilities are nonnegative and thus they need not be explicitly shown in the statement of the theorem. \end{proof} This proof is different from the similar result in \citep{DK2013PLoS} in that the parameterization for the probabilities in $\mathbf{p}$ is more general (allowing for arbitrary marginals of the eight random variables) and so we obtain a more general condition for the compatibility of observable and connection probabilities than before. It should be noted that although the expectations $\left\langle A_{ij}\right\rangle ,\left\langle B_{ij}\right\rangle $, $i,j\in\{1,2\}$ do not explicitly appear in \eqref{eq:compatibility}, they are still present in the $32$ implicit constraints. 
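As a sanity check, the closed-form condition \eqref{eq:compatibility} is straightforward to evaluate numerically. The following Python sketch (illustrative only; the function names are ours and not part of the computational pipeline described above) implements $s_{0}$, $s_{1}$, and the compatibility test of Lemma~\ref{lem:1}, assuming the implicit constraints \eqref{eq:implicit} hold:

```python
from itertools import product

def s_even(*vals):
    # s0: max of (±v1 ± v2 ± ... ± vn) over sign patterns with an even number of minuses
    return max(sum(s * v for s, v in zip(signs, vals))
               for signs in product((1, -1), repeat=len(vals))
               if signs.count(-1) % 2 == 0)

def s_odd(*vals):
    # s1: the same maximum, restricted to an odd number of minuses
    return max(sum(s * v for s, v in zip(signs, vals))
               for signs in product((1, -1), repeat=len(vals))
               if signs.count(-1) % 2 == 1)

def compatible(obs, conn):
    # obs  = (<A11 B11>, <A12 B12>, <A21 B21>, <A22 B22>)
    # conn = (<A11 A12>, <B11 B21>, <A21 A22>, <B12 B22>)
    return (s_even(*obs) <= 6 - s_odd(*conn)
            and s_odd(*obs) <= 6 - s_even(*conn))
```

For instance, perfectly correlated observables and connections, {\tt obs = conn = (1, 1, 1, 1)}, pass the test, whereas a PR-box-like pattern {\tt obs = (1, 1, 1, -1)} with the same connections fails it.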
\begin{lem} \label{lem:2}If the connection expectations $\left(\left\langle A_{i1}A_{i2}\right\rangle ,\left\langle B_{1j}B_{2j}\right\rangle \right)_{i,j\in\left\{ 1,2\right\} }$ are compatible with the observed expectations $\left(\left\langle A_{ij}B_{ij}\right\rangle ,\left\langle A_{ij}\right\rangle ,\left\langle B_{ij}\right\rangle \right)_{i,j\in\left\{ 1,2\right\} }$, then, with $\Delta$ defined as in \eqref{eq:C under MS}, \begin{equation} \begin{split} \Delta&\textstyle\ge-1+\frac{1}{2}s_{1}\left(\left\langle A_{11}B_{11}\right\rangle ,\left\langle A_{12}B_{12}\right\rangle ,\left\langle A_{21}B_{21}\right\rangle ,\left\langle A_{22}B_{22}\right\rangle \right),\\ \Delta&\textstyle\ge\frac{1}{2}(\left|\left\langle A_{11}\right\rangle -\left\langle A_{12}\right\rangle \right|+\left|\left\langle A_{21}\right\rangle -\left\langle A_{22}\right\rangle \right|+\\ &\phantom{{}\textstyle\ge\frac{1}{2}(}\left|\left\langle B_{11}\right\rangle -\left\langle B_{21}\right\rangle \right|+\left|\left\langle B_{12}\right\rangle -\left\langle B_{22}\right\rangle \right|),\\ \Delta&\textstyle\le4-\left[-1+\frac{1}{2}s_{1}\left(\left\langle A_{11}B_{11}\right\rangle ,\left\langle A_{12}B_{12}\right\rangle ,\left\langle A_{21}B_{21}\right\rangle ,\left\langle A_{22}B_{22}\right\rangle \right)\right],\\ \Delta&\textstyle\le4-\frac{1}{2}(\left|\left\langle A_{11}\right\rangle +\left\langle A_{12}\right\rangle \right|+\left|\left\langle A_{21}\right\rangle +\left\langle A_{22}\right\rangle \right|+\\ &\phantom{{}\textstyle\le4-\frac{1}{2}(}\left|\left\langle B_{11}\right\rangle +\left\langle B_{21}\right\rangle \right|+\left|\left\langle B_{12}\right\rangle +\left\langle B_{22}\right\rangle \right|).
\end{split}\label{eq:Delta_system} \end{equation} Conversely, if these inequalities are satisfied for a given value of $\Delta$, then the connection expectations $\left(\left\langle A_{i1}A_{i2}\right\rangle ,\left\langle B_{1j}B_{2j}\right\rangle \right)_{i,j\in\left\{ 1,2\right\} }$ can always be chosen so that they are compatible with the observable expectations $\left(\left\langle A_{ij}B_{ij}\right\rangle ,\left\langle A_{ij}\right\rangle ,\left\langle B_{ij}\right\rangle \right)_{i,j\in\left\{ 1,2\right\} }$ and yield the given value of $\Delta$ in \eqref{eq:C under MS}.\end{lem} \begin{proof} Given the 160 inequalities (including the 32 implicit inequalities) of Lemma~\ref{lem:1} characterizing the compatibility of the connection expectations with the observable expectations, we augment this linear system with the equation \eqref{eq:C under MS} defining $\Delta$ written in terms of the expectations $$\left(\left\langle A_{i1}A_{i2}\right\rangle ,\left\langle B_{1j}B_{2j}\right\rangle ,\left\langle A_{ij}\right\rangle ,\left\langle B_{ij}\right\rangle \right)_{i,j\in\{1,2\}}.$$ Then, we use this equation to eliminate one of the connection expectation variables $\left(\left\langle A_{i1}A_{i2}\right\rangle ,\left\langle B_{1j}B_{2j}\right\rangle \right)_{i,j\in\{1,2\}}$ from the system (by solving the equation for that variable and then substituting the solution everywhere else). After that, we eliminate the three remaining connection expectation variables one by one using the Fourier--Motzkin elimination algorithm (see Theorem~\ref{thm:Fourier-Motzkin} below). After the elimination of each variable, we remove any redundant inequalities from the system by linear programming, using the algorithm described in \citep[Text S3]{DK2013PLoS}. After having eliminated all connection expectation variables, we are left with the system \eqref{eq:Delta_system} (and implicit constraints of the form \eqref{eq:implicit} for the pairs $(A_{ij},B_{ij})$, $i,j\in\{1,2\}$).
The Fourier--Motzkin elimination algorithm guarantees that the resulting system has a solution precisely when the original system has a solution with \emph{some} values of the eliminated variables.\end{proof} \begin{thm} [Fourier--Motzkin elimination]\label{thm:Fourier-Motzkin}\index{Fourier--Motzkin elimination}Given a system of linear inequalities in the variables $x$ and $\mathbf{y}=y_{1},\dots,y_{n}$, the system can always be rearranged in the following form \begin{equation} \begin{array}{lll} x\ge\mathbf{l}_{i}\cdot\mathbf{y}, & & i=1,\dots,n_{\mathbf{l}},\\ x\le\mathbf{u}_{i}\cdot\mathbf{y}, & & i=1,\dots,n_{\mathbf{u}},\\ 0\le\mathbf{n}_{i}\cdot\mathbf{y}, & & i=1,\dots,n_{\mathbf{n}}, \end{array} \end{equation} where $\mathbf{l}_{1},\dots,\mathbf{l}_{n_{\mathbf{l}}},\mathbf{u}_{1},\dots,\mathbf{u}_{n_{\mathbf{u}}},\mathbf{n}_{1},\dots,\mathbf{n}_{n_{\mathbf{n}}}\in\mathbb{R}^{n}$. Furthermore, given $\mathbf{y}\in\mathbb{R}^{n}$, this system is solved by $\mathbf{y}$ and some $x\in\mathbb{R}$ if and only if the following system is solved by $\mathbf{y}$: \begin{equation} \begin{array}{rcl} \mathbf{l}_{i}\cdot\mathbf{y}\le\mathbf{u}_{j}\cdot\mathbf{y}, & & i=1,\dots,n_{\mathbf{l}},\ j=1,\dots,n_{\mathbf{u}},\\ 0\le\mathbf{n}_{i}\cdot\mathbf{y}, & & i=1,\dots,n_{\mathbf{n}}. \end{array} \end{equation} \end{thm} \begin{figure} \begin{center} \fbox{\begin{minipage}[c]{.9\textwidth} \[ \begin{array}{c} \xymatrix{A_{12}\ar@{.>}[d]\ar[r] & B_{12}\ar@{.>}[r]\ar[l] & B_{22}\ar@{.>}[l]\ar[r] & A_{22}\ar@{.>}[d]\ar[l]\\ A_{11}\ar@{.>}[u]\ar[r] & B_{11}\ar@{.>}[r]\ar[l] & B_{21}\ar@{.>}[l]\ar[r] & A_{21}\ar@{.>}[u]\ar[l] } \end{array}\tag{Bell-system} \] \[ \begin{array}{c} \xymatrix{Y_{12}\ar@{.>}[dr]\ar[r] & X_{12}\ar@{.>}[r]\ar[l] & X_{13}\ar@{.>}[l]\ar[r] & Z_{13}\ar@{.>}[dl]\ar[l]\\ & Y_{23}\ar@{.>}[ul]\ar[r] & Z_{23}\ar@{.>}[ur]\ar[l] } \end{array}\tag{LG-system} \] \end{minipage}} \end{center} \protect\caption[.]{Random variables involved in the Bell-system and LG-system.
The pairs of random variables whose joint distributions are empirically observed, e.g., $\left(A_{12},B_{12}\right)$ and $\left(X_{12},Y_{12}\right)$, are indicated by solid double-arrows. The pairs of random variables forming probabilistic connections (with unobservable joint distributions) are indicated by dotted double-arrows, e.g., $\left(A_{11},A_{12}\right)$ and $\left(X_{12},X_{13}\right)$. \label{fig:A-representation-of} } \end{figure} \begin{lem} \label{lem:1-1}The necessary and sufficient condition for the connection expectations $\left\langle X_{12}X_{13}\right\rangle $, $\left\langle Y_{12}Y_{23}\right\rangle $, $\left\langle Z_{13}Z_{23}\right\rangle $ to be compatible with the observed expectations $\left\langle X_{12}Y_{12}\right\rangle $, $\left\langle X_{13}Z_{13}\right\rangle $, $\left\langle Y_{23}Z_{23}\right\rangle $, $\left\langle X_{12}\right\rangle $, $\left\langle X_{13}\right\rangle $, $\left\langle Y_{12}\right\rangle $, $\left\langle Y_{23}\right\rangle $, $\left\langle Z_{13}\right\rangle $, $\left\langle Z_{23}\right\rangle $ is \begin{equation} s_{1}\left(\left\langle X_{12}Y_{12}\right\rangle ,\left\langle X_{13}Z_{13}\right\rangle ,\left\langle Y_{23}Z_{23}\right\rangle ,\left\langle X_{12}X_{13}\right\rangle ,\left\langle Y_{12}Y_{23}\right\rangle ,\left\langle Z_{13}Z_{23}\right\rangle \right)\le4,\label{eq:compatibility-1} \end{equation} where \begin{equation} \begin{split} s_{1}\left(a,b,c,d,e,f\right) = \max \{ &\left(\pm a\pm b\pm c\pm d\pm e\pm f\right):\\ &\textnormal{ the number of minuses is odd}\,\} .\label{eq:s1-1} \end{split} \end{equation} \end{lem} \begin{proof} The details are analogous to those of the proof of Lemma~\ref{lem:1}. The polytope in terms of probabilities is defined by 12 equations and 56 inequalities. The 12 equations correspond to the requirement that the 1-marginals of the observable probabilities agree with those of the connections and that the observable probabilities are properly normalized.
Expressing the probabilities in terms of the observable and connection expectations, the 12 equations become identically true and, of the 56 inequalities, 32 turn into those represented by (\ref{eq:compatibility-1}) and the remaining 24 correspond to the trivial constraints of the form (\ref{eq:implicit}) for the 6 pairs of random variables appearing in (\ref{eq:compatibility-1}).\end{proof} \begin{lem} \label{lem:2-1}If the connection expectations $\left\langle X_{12}X_{13}\right\rangle ,\left\langle Y_{12}Y_{23}\right\rangle ,\left\langle Z_{13}Z_{23}\right\rangle $ are compatible with the observed expectations $$\left\langle X_{12}Y_{12}\right\rangle ,\left\langle X_{13}Z_{13}\right\rangle ,\left\langle Y_{23}Z_{23}\right\rangle, \left\langle X_{12}\right\rangle ,\left\langle X_{13}\right\rangle ,\left\langle Y_{12}\right\rangle ,\left\langle Y_{23}\right\rangle ,\left\langle Z_{13}\right\rangle ,\left\langle Z_{23}\right\rangle ,$$ then, with $\Delta'$ defined as in \eqref{eq:LG-Delta}, \begin{equation} \begin{split} \Delta'&\textstyle\ge-\frac{1}{2}+\frac{1}{2}s_{1}\left(\left\langle X_{12}Y_{12}\right\rangle ,\left\langle X_{13}Z_{13}\right\rangle ,\left\langle Y_{23}Z_{23}\right\rangle \right),\\ \Delta'&\textstyle\ge\frac{1}{2}\left(\left|\left\langle X_{12}\right\rangle -\left\langle X_{13}\right\rangle \right|+\left|\left\langle Y_{12}\right\rangle -\left\langle Y_{23}\right\rangle \right|+\left|\left\langle Z_{13}\right\rangle -\left\langle Z_{23}\right\rangle \right|\right),\\ \Delta'&\textstyle\le3-\left[-\frac{1}{2}+\frac{1}{2}s_{0}\left(\left\langle X_{12}Y_{12}\right\rangle ,\left\langle X_{13}Z_{13}\right\rangle ,\left\langle Y_{23}Z_{23}\right\rangle \right)\right],\\ \Delta'&\textstyle\le3-\frac{1}{2}\left(\left|\left\langle X_{12}\right\rangle +\left\langle X_{13}\right\rangle \right|+\left|\left\langle Y_{12}\right\rangle +\left\langle Y_{23}\right\rangle \right|+\left|\left\langle Z_{13}\right\rangle +\left\langle Z_{23}\right\rangle \right|\right).
\end{split}\label{eq:Delta_system-1} \end{equation} Conversely, if these inequalities are satisfied for a given value of $\Delta'$, then the connection expectations $\left\langle X_{12}X_{13}\right\rangle ,\left\langle Y_{12}Y_{23}\right\rangle ,\left\langle Z_{13}Z_{23}\right\rangle $ can always be chosen so that they are compatible with the observable expectations $$\left\langle X_{12}Y_{12}\right\rangle ,\left\langle X_{13}Z_{13}\right\rangle ,\left\langle Y_{23}Z_{23}\right\rangle ,\left\langle X_{12}\right\rangle ,\left\langle X_{13}\right\rangle ,\left\langle Y_{12}\right\rangle ,\left\langle Y_{23}\right\rangle ,\left\langle Z_{13}\right\rangle ,\left\langle Z_{23}\right\rangle $$ and yield the given value of $\Delta'$ in \eqref{eq:LG-Delta}.\end{lem} \begin{proof} The details are analogous to those of the proof of Lemma~\ref{lem:2}. \end{proof} \paragraph*{Acknowledgements. } This work was supported by NSF grant SES-1155956 and AFOSR grant FA9550-14-1-0318. The authors are grateful to J. Acacio de Barros and Gary Oas for numerous discussions of issues related to contextuality. \bibliographystyle{apalike}
\section{Introduction} \label{sec:introduction} The current concordance cosmology predicts that smaller-scale structures form first and larger-scale structures assemble through hierarchical merging \citep[e.g.,][]{1985ApJ...292..371D}. In this scenario, the continuous merging of dark matter halos (and the galaxies within them) is the chief channel of galaxy formation and evolution, involving a variety of interactions of galaxies with each other and with their environments \citep{voit2005_review}. Among the many types of interactions, the interstellar medium (ISM) of galaxies moving through the intracluster medium (ICM) can be removed by the ICM's ram pressure \citep{1972ApJ...176....1G}. This process, called ram pressure stripping (RPS), has been extensively studied due in part to its distinctive observational signature: a disturbed gaseous medium alongside undisturbed stellar components \citep[see][for reviews]{boselli2014_review,boselli2021_review}. Moreover, RPS is known to alter the ISM content of galaxies significantly on a relatively short time scale \citep{abadi1999,boselli2009}, dramatically changing galaxy evolution in high-density environments. The observational signatures of RPS are traced across multiple wavelengths. The atomic hydrogen (\ion{H}{1}) 21~cm line has been a pioneering tool for identifying RPS galaxies since \ion{H}{1} gas is generally diffuse, extends well beyond the stellar disk, and is hence vulnerable to interaction with the surroundings. Early single-dish observations such as those of \citet{1984AJ.....89..758H} found cluster galaxies to be overall deficient in \ion{H}{1} compared to their field counterparts. More direct signatures of RPS, such as truncation of the gas disk within the stellar disk and/or gas tails, have been reported by many \ion{H}{1} imaging studies \citep[e.g.,][]{1990AJ....100..604C,2004AJ....127.3361K,2009AJ....138.1741C,2010MNRAS.403.1175S,2020A&A...640A..22R}.
RPS galaxies also show common signs including locally enhanced synchrotron radiation or ionized gas tails, which can be observed through radio continuum, optical lines, or X-ray emission \citep[e.g.,][]{gavazzi2001,2010A&A...512A..36V,2017ApJ...840L...7S,sun2010,poggianti2017_gasp_muse}. By contrast, the molecular gas content, which generally resides in the inner part of galaxies at higher density than the other ISM phases, does not show clear evidence of RPS \citep[e.g.,][]{1989ApJ...344..171K}. Depending on the sample selection and the observational strategy, only a handful of studies find molecular gas deficiencies \citep[e.g.,][]{2009ApJ...697.1811F,corbelli2012,boselli2014_molecule_def,2017ApJ...843...50C}, and the impact of ram pressure on the molecular gas disk has long been under debate. However, more recent high-resolution radio observations have begun to show that the morphological characteristics of RPS seen in \ion{H}{1} disks, such as asymmetry and compression, are shared by CO disks \citep[e.g.,][]{2017MNRAS.466.1382L,2018ApJ...866L..10L,2021ApJS..257...21B} and have revealed enhancements of molecular gas \citep[e.g.,][]{moretti2018_molecule,moretti2020,moretti2020b,cramer2021}. These data imply that the molecular ISM is affected by ram pressure in ways similar to the more diffuse components, although whether it is stripped along with the atomic gas remains unclear. All these observations indicate that RPS is a process involving the multiphase ISM, from cold molecular gas to hot ionized gas. This naturally raises the question of how RPS affects star formation: whether RPS only strips the diffuse gas, limiting the supply of gas for future star formation \citep[e.g.,][]{2004ApJ...613..851K,2008AJ....136.1623C}, or can even enhance star formation by compressing the gas further \citep[e.g.,][]{merluzzi2013_sfr,2014ApJ...780..119K,2018ApJ...866L..25V}.
There have been extensive observational studies showing that both scenarios are possible. A more complete understanding of the impact of ram pressure and its consequences for star formation and galaxy evolution therefore requires studying how the multiphase ISM responds to the ICM ram pressure. A large number of theoretical studies of RPS have been conducted, mainly using numerical simulations, in two contexts. One is self-consistent cosmological simulations within which galaxies experience RPS as they move through clusters \citep[e.g.,][]{2016A&A...591A..51S,2017MNRAS.468.4107R,2018ApJ...865..156J,yun2019_TNG}; the other is more controlled simulations of a single galaxy interacting with an inflowing ICM \citep[so-called wind-tunnel simulations, e.g.,][]{2006A&A...453..883V,2008A&A...481..337K,2009A&A...500..693J,2009ApJ...694..789T,2010ApJ...709.1203T,2012A&A...544A..54S,2014ApJ...795..148T,2014ApJ...784...75R,2018MNRAS.476.3781R,2020ApJ...905...31L,2021ApJ...911...68T}. Such simulations reproduce the long tails seen in observations and show overall agreement with the prediction that RPS is effective when the ICM ram pressure exceeds the ISM anchoring pressure \citep[e.g.,][]{2007MNRAS.380.1399R,2009ApJ...694..789T}. In both cases, the simulated domains had to be larger than a few tens of kpc, precluding the pc-scale resolution required for explicit modeling of the ISM physics. Instead, subgrid models of the multiphase ISM, star formation, and feedback \citep[e.g.,][]{2003MNRAS.339..289S} are often adopted. There exist a few RPS galaxy simulations that include gas cooling down to $\sim 100$~K, but most of them have not focused on properly modeling the full multiphase (cold, warm, and hot) ISM \citep[e.g.,][]{2010ApJ...709.1203T,2012MNRAS.422.1609T,2021ApJ...911...68T,2014MNRAS.438..444B}.
Radiative heating by the photoelectric effect of FUV radiation on small grains, the major heating source of the warm and cold ISM \citep{1995ApJ...443..152W,2003ApJ...587..278W}, is ignored. Simulations with insufficient resolution \citep{2012MNRAS.422.1609T,2014MNRAS.438..444B} cannot resolve the Sedov-Taylor stage of SNe, which is critical in driving turbulence and creating the hot gas \citep{2015ApJ...802...99K,2017ApJ...834...25K_KOR17,steinwandel2020}. Such simulations tend to overcool the gas and confine the ISM in very thin, unresolved disks. The mass and volume distributions of the multiphase ISM with which the ICM interacts are then severely compromised. To the best of our knowledge, only \citet{2020ApJ...905...31L} have employed a marginally sufficient resolution and physics set to treat the full range of the multiphase ISM and star formation and feedback explicitly. Nevertheless, \citet{2021ApJ...911...68T} conducted a single-galaxy RPS simulation with a full cooling function and claimed that RPS occurs via mixing between the ICM and ISM. This interesting result qualitatively agrees with recent in-depth studies of radiative mixing layers \citep{2020ApJ...894L..24F,2021MNRAS.502.3179T} in the context of shock/wind-cloud interaction simulations \citep{2018MNRAS.480L.111G,2020MNRAS.492.1970G,2021arXiv210713012G,2020MNRAS.492.1841L,2021MNRAS.501.1143K,2020MNRAS.499.4261S,2021arXiv210110344A} and starburst-driven galactic winds \citep{2020ApJ...895...43S}, which emphasize mixing-driven momentum transfer as the major acceleration mechanism for cooler, denser gas. To enable global galaxy simulations with a large dynamic range, the usual practice is to adaptively refine the resolution elements to achieve constant mass resolution \citep{2020ApJ...905...31L,2021ApJ...911...68T}. Given the typical $\sim2$-decade temperature contrast between cold, warm, and hot phases in pressure equilibrium, the spatial resolution of adjacent thermal phases then differs by a factor of $\sim5$.
The interaction between the hot and cooler phases, and the mixing layers produced by such interaction, can be severely altered by large differences in the spatial resolution of the interacting phases. Simulations with uniformly high resolution are thus necessary to model the different phases and their interactions more robustly \citep[e.g.,][]{2018ApJ...853..173K,2020ApJ...895...43S}. Since mixing-driven momentum transfer is a key physical process in multiphase hydrodynamical interactions in general, multiphase RPS deserves more careful study using self-consistent, multiphase ISM models that interact with the ICM. To this end, we conduct a new suite of numerical simulations focusing on a smaller section of a galactic disk with an inflowing ICM. Our numerical models build on the TIGRESS framework developed to model the star-forming ISM self-consistently \citep{2017ApJ...846..133K}. TIGRESS solves the ideal magnetohydrodynamics (MHD) equations in a shearing box with Athena \citep{2008ApJS..178..137S} and incorporates additional ISM physics, including optically-thin cooling over the full temperature range, self-gravity of gas and newly formed stars, star cluster formation in gravitationally bound objects using sink particles, and massive star feedback in the form of supernovae (SNe) and far-ultraviolet (FUV) radiative heating. The original closed-box model has been used to study the internal regulation of star formation rates (SFRs) and the driving of multiphase outflows \citep{2020ApJ...900...61K,2018ApJ...853..173K,2017ApJ...846..133K}, among other applications. In this paper, we choose a particular ISM model representing the solar neighborhood condition. We take a snapshot of the self-consistently modeled multiphase ISM in a quasi-steady state and conduct controlled numerical experiments with different ICM ram pressures, covering regimes that are relatively weak and strong compared to the ISM anchoring pressure provided by the stellar disk.
Our chosen parameters approximately represent the conditions of NGC~4522, a prototypical RPS galaxy in Virgo, at different radii \citep{2004AJ....127.3361K}. As the first study of its kind using local models, this paper focuses on fostering an in-depth understanding of the inner workings of \emph{multiphase} RPS. In addition, with the help of the self-consistent star formation and feedback models of TIGRESS, we investigate how RPS affects star formation in and out of the galactic disk. In future work, we will further study the role of magnetic fields in RPS, especially for the dense, molecular gas. The structure of this paper is as follows. In \autoref{sec:method}, we summarize the TIGRESS framework and introduce the ISM and ICM models. In \autoref{sec:overall}, we first give an overview of our simulations using the time evolution of horizontally-averaged and globally-integrated quantities. \autoref{sec:morphology} then delineates a variety of physical properties in two representative models. In \autoref{sec:stripping_transfer}, we analyze the mass, momentum, and energy transfers between thermal phases. We then test the prediction of mixing-driven momentum transfer in \autoref{sec:stripping_mixing}. \autoref{sec:sfr} presents the impact of RPS on SFRs and extraplanar star formation. \autoref{sec:discussion} discusses the main observational imprints of mixing-driven momentum transfer in RPS, places our results in context, and notes caveats. Finally, the main conclusions are summarized in \autoref{sec:conclusion}. \section{Numerical Methods and Models} \label{sec:method} In this section, we begin by summarizing, for completeness, the numerical methods employed in the TIGRESS framework to simulate the multiphase ISM with star formation and feedback (\autoref{sec:method_tigress}). We then explain the evolution of the ISM without ICM inflows in \autoref{sec:method_ism}.
Readers who are familiar with local ISM simulations and only interested in the ICM-ISM interaction can skip the first two subsections. \autoref{sec:method_icm} explains the ICM inflow setup. The tracer fields and gas phases used throughout the paper are defined in \autoref{sec:method_phase}. \subsection{TIGRESS framework}\label{sec:method_tigress} We use the TIGRESS framework developed by \citet[][]{2017ApJ...846..133K} to evolve the multiphase, turbulent, magnetized ISM with which the ICM interacts. We refer the reader to \citet{2017ApJ...846..133K} for full details of the methods and tests. TIGRESS solves the ideal MHD equations in a local shearing box representing a $\sim$kpc patch of a differentially rotating galactic disk using the Athena code \citep{2008ApJS..178..137S,2009NewA...14..139S}. The local Cartesian coordinates $x$ and $y$ correspond to the local radial and azimuthal directions of global galactocentric coordinates such that $(x,y)=(R-R_0, R_0[\phi - \Omega_0 t])$, while $z$ is the vertical coordinate. The simulation domain corotates with the galaxy at the angular speed evaluated at the domain center, $\Omega_0\equiv\Omega(R_0)$, giving rise to inertial forces, including the Coriolis force and the tidal potential, in the momentum equation. A flat rotation curve is assumed, $d\ln\Omega/d\ln R=-1$. We adopt shearing-periodic boundary conditions in the horizontal directions \citep{2010ApJS..189..142S} and outflow boundary conditions in the vertical directions. The bottom vertical boundary condition is modified for the ICM inflows (see \autoref{sec:method_icm}). We solve Poisson's equation to obtain the gravitational potential of gas and newly formed young stars using the FFT method with horizontally shearing-periodic and vertically open boundary conditions \citep{2001ApJ...553..174G,2009ApJ...693.1316K}. The gravitational potential of the old stellar disk and dark matter halo is held fixed and only exerts vertical gravity.
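For concreteness, the coordinate mapping and the equilibrium shear flow it implies can be written as a short sketch (a minimal illustration with hypothetical function names, not code from Athena):

```python
def local_coords(R, phi, t, R0, Omega0):
    """Map galactocentric (R, phi) at time t to local shearing-box (x, y):
    x = R - R0 (radial), y = R0*(phi - Omega0*t) (azimuthal, corotating frame)."""
    return R - R0, R0 * (phi - Omega0 * t)

def background_shear_velocity(x, Omega0, q=1.0):
    """Equilibrium azimuthal velocity in the local frame, v_y = -q*Omega0*x,
    with shear parameter q = -dln(Omega)/dln(R) = 1 for a flat rotation curve."""
    return -q * Omega0 * x
```

The shear parameter $q=1$ encodes the flat rotation curve assumed above; any consistent unit system may be used for $R$, $t$, and $\Omega_0$.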
We introduce a sink particle when a gas cell experiences unresolved self-gravitating collapse, as indicated by the Larson-Penston density threshold $\rho_{\rm LP} \equiv (8.86/\pi) c_s^2/(G\Delta x^2)$, where $c_s\equiv (P/\rho)^{1/2}$ is the local sound speed, and $\Delta x$ is the side length of a cubic grid cell used in the simulation \citep{2013ApJS..204....8G}. We adopt additional criteria for sink particle creation, including a converging-flow check (in all three directions) and a local potential minimum check. Typically, $\rho_{\rm LP}\sim 100\pcc$ for $8\pc$ resolution and $\sim 300\pcc$ for $4\pc$ resolution. Note that the typical mass of sink particles ranges from a few $10^3\Msun$ to $10^5\Msun$, representing star clusters rather than individual stars. We treat each particle as a population of stars with a fully-sampled initial mass function of \citet{2001MNRAS.322..231K}. We use the STARBURST99 stellar population synthesis model to obtain the SN rate and FUV luminosity for each star cluster \citep{1999ApJS..123....3L}. In addition to clustered SNe occurring at the position of the sink particle, for each SN event we produce, with a probability of 50\%, a massless particle that models a runaway star ejected from an OB binary \citep{2011MNRAS.414.3501E}. The total SN rate still matches that of the original STARBURST99 model. For each SN event, we first identify the cells with distances from the explosion center smaller than $R_\mathrm{SNR}=3\Delta x$ and calculate the total mass $M_{\rm SNR}$ and volume $V_{\rm SNR}$ of the feedback region (or the SN remnant). If $M_{\rm SNR}/M_{\rm sf} <1$, where $M_{\rm sf} = 1540\Msun (n_{\rm amb}/\pcc)^{-0.33}$ is the shell formation mass at a given ambient medium density $n_{\rm amb}=M_{\rm SNR}/V_{\rm SNR}$ \citep{2015ApJ...802...99K}, we inject $10^{51}\erg$, divided into thermal and kinetic energy with the Sedov-stage energy ratio of $0.72:0.28$.
Otherwise, we inject the terminal momentum of the SNR, $p_{\rm SNR}=2.8\times10^5\Msun\kms (n_{\rm amb}/\pcc)^{-0.17}$, as calibrated in \citet{2015ApJ...802...99K}. The total and metal mass of SN ejecta, $M_{\rm ej}= 10\Msun$ and $Z_{\rm SN}M_{\rm ej}=2\Msun$ with $Z_{\rm SN}=10Z_\odot$, are traced using passive scalars. See \autoref{sec:method_phase} for details. We use the total FUV luminosity from star clusters in the simulation domain to set the instantaneous photoelectric heating rate by the interstellar radiation field \citep{1994ApJ...427..822B,2001ApJS..134..263W}. We apply a mean attenuation factor using the plane-parallel approximation as in \citet{2020ApJ...900...61K}. As a result, the heating rate varies in time self-consistently but is spatially constant. Optically-thin cooling is included in the energy equation using a tabulated cooling rate coefficient $\Lambda(T)$ from \citet{2002ApJ...564L..97K} at $T<10^{4.2}\Kel$ and \citet{1993ApJS...88..253S} at $T>10^{4.2}\Kel$ (collisional ionization equilibrium at solar metallicity is adopted), depending only on temperature. Although we follow the metallicity of the gas in each cell (see \autoref{sec:method_phase}), we note that we do not use the metallicity information to set the cooling rate. A more self-consistent treatment of radiation and chemistry, and hence of cooling and heating rates, is being developed for the TIGRESS framework (J.-G. Kim et al. in prep.), which will enable further study of RPS with a more realistic ISM. This extension is particularly important for pursuing the extraplanar molecular gas in RPS galaxies. \subsection{ISM disk model}\label{sec:method_ism} In this work, we make use of the solar neighborhood model of the TIGRESS simulation suite, setting the parameters for the gravitational potential of stars and dark matter (see below; \autoref{eq:phi_ext}). We adopt the angular velocity of galactic rotation $\Omega_0=28\kms\kpc^{-1}$, giving rise to the orbit time $\torb = 2\pi/\Omega_0 = 224\Myr$.
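The sink-particle threshold and the SN feedback switch described above can be condensed into a short sketch (a simplified illustration, not the actual Athena implementation; the cold-gas sound speed in the usage note below is an assumed value):

```python
import math

G_CGS = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
PC_CGS = 3.086e18       # parsec [cm]
MU_H = 1.4 * 1.673e-24  # gas mass per hydrogen nucleus [g]
E_SN = 1.0e51           # energy per supernova [erg]

def larson_penston_density(c_s_kms, dx_pc):
    """rho_LP = (8.86/pi) c_s^2 / (G dx^2), returned as n_H [cm^-3]."""
    c_s = c_s_kms * 1.0e5   # km/s -> cm/s
    dx = dx_pc * PC_CGS     # pc -> cm
    rho = (8.86 / math.pi) * c_s**2 / (G_CGS * dx**2)
    return rho / MU_H

def sn_injection(M_SNR, n_amb):
    """SN feedback switch: M_SNR [Msun] is the gas mass in the feedback region,
    n_amb [cm^-3] the ambient density. Returns the injection prescription."""
    M_sf = 1540.0 * n_amb**-0.33  # shell-formation mass [Msun]
    if M_SNR < M_sf:
        # Sedov-Taylor stage resolved: inject 1e51 erg, split 0.72 thermal : 0.28 kinetic
        return ("energy", 0.72 * E_SN, 0.28 * E_SN)
    # otherwise inject the calibrated terminal radial momentum [Msun km/s]
    return ("momentum", 2.8e5 * n_amb**-0.17)
```

With an assumed cold-gas sound speed of $\sim0.6\kms$, {\tt larson\_penston\_density(0.6, 8)} gives $n_{\rm H}\approx100\pcc$, consistent with the typical threshold quoted above for $8\pc$ resolution.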
We use a vertically elongated rectangular box with outer dimensions of $(L_x, L_y, L_z)=(1024, 1024, 7168) \pc$. Uniform, cubic grid cells are used with a side length of $\Delta x=8\pc$, at which we achieve convergence of the overall properties of the ISM, SFRs, and outflows \citep[see][]{2017ApJ...846..133K,2018ApJ...853..173K,2020ApJ...900...61K}. This model is referred to as the {\tt noICM}{} model throughout the paper (identical to the solar neighborhood model, R8, presented in other works). Additional details of the solar neighborhood model can be found in \citet{2017ApJ...846..133K} for initial conditions, overall evolution, numerical convergence, and technical details, \citet{2018ApJ...853..173K,2020ApJ...894...12V} for galactic fountains and winds, and \citet{2020ApJ...898...52M} for the properties of gravitationally bound clouds and their connection with SFRs. The simulation starts from an idealized initial condition with horizontally uniform, vertically-stratified gas profiles with an initial gas surface density of $\Sigma_{\rm gas}=13\Surf$. We introduce initial velocity perturbations and set the thermal pressure to ensure that the disk is in rough hydrostatic equilibrium. Soon after the simulation begins, the initially imposed velocity perturbations dissipate, and the gas cools. An overall vertical contraction occurs owing to the reduction of turbulent and thermal pressure, leading to a burst of star formation. SNe and FUV heating from newly formed massive stars respectively offset turbulence dissipation and gas cooling, recovering vertical support against gravity. The disk expands vertically, reducing SFRs and hence feedback. The reduction of feedback causes another disk contraction, and the cycle repeats. Each cycle has a period similar to the vertical oscillation time scale of $\sim40$--$50\Myr$ \citep[see][]{2020ApJ...900...61K}.
Although the first burst is a consequence of the idealized initial setup, our simulations soon enter a self-consistently regulated state after a few star formation-feedback cycles ($t>100\Myr$ in this model). \begin{figure} \centering \includegraphics[width=\columnwidth]{noICM.png} \caption{Space-time diagrams of horizontally averaged (a) hydrogen number density $n_H$, (b) outgoing mass flux $\rho v_z {\rm sgn}(z)$, (c) turbulent pressure $\rho v_z^2$, (d) thermal pressure $P$, and (e) magnetic pressure $P_B\equiv B^2/(8\pi)$ for the {\tt noICM}{} model. We only show the evolution during a self-regulated state over $t\sim 250-500\Myr$ as a reference that can be directly compared with the models with the ICM. The horizontal dotted line marks the midplane ($z=0$).} \label{fig:noICM} \end{figure} To overview the evolution in a quasi-steady state far from the initial burst, \autoref{fig:noICM} shows the horizontally-averaged physical quantities in the space (vertical coordinate $z$) and time ($t$) plane, defined by \begin{equation}\label{eq:havg} \abrackets{q(z;t)}\equiv\frac{\int q(x,y,z;t) dxdy}{L_x L_y} \end{equation} for a physical quantity $q$ of interest. We only show a self-regulated state over $t\sim250-500\Myr$. From top to bottom, we show (a) hydrogen number density $n_H$, (b) outgoing mass flux $\rho v_{\rm out}$, (c) turbulent pressure (or, equivalently, vertical momentum flux) $\rho v_z^2$, (d) thermal pressure $P$, and (e) magnetic pressure $P_B\equiv B^2/(8\pi)$. Here, the outward vertical velocity is defined by $v_{\rm out} \equiv v_z {\rm sgn}(z)$ such that $v_{\rm out}$ is positive (red) for outflow and negative (blue) for inflow about the midplane. Note that the midplane ($z=0$) in our simulation defines the symmetric plane of the fixed gravitational potential of stars and dark matter. Even without ICM inflows, the gas distribution can be largely asymmetric, as stochastic SN explosions cannot be perfectly symmetric.
As a result, the total gravitational potential, including the gravitational potential of gas and young star clusters, can be asymmetric. Within the simulation duration shown in \autoref{fig:noICM}, we can visually identify four strong outflow launching epochs ($t\sim250$, 320, 380, and 420~Myr; see panel (b)). These epochs are associated with strong star formation events. The outflows in our simulations show a clear multiphase nature, consisting of fast, hot winds that escape the simulation domain and slow, warm fountains that fall back to the midplane \citep{2018ApJ...853..173K}. The hot winds ($T\simgt 10^{5-6}\Kel$) can be easily identified as the high thermal and turbulent pressure gas in the extraplanar region $z>1\kpc$ with a steeper slope in the $z$-$t$ plane (panels (c) and (d)). The outgoing mass flux in panel (b) associated with the hot winds is always red (only outflows). However, the warm fountains ($T\simlt 10^{4}\Kel$) are evident from the alternating colors in panel (b), implying that the warm outflows are always followed by inflows (see also panel (a)). Only a small fraction of the warm outflow reaches velocities high enough to escape the simulation domain (see \citealt{2018ApJ...853..173K,2020ApJ...894...12V}). The magnetic pressure in panel (e) is overall subdominant, especially in low-density gas far from the midplane. The magnetic field strength grows over time via the galactic dynamo and ranges from a few to ten $\mu{\rm G}$ in the warm and cold medium, with comparable turbulent and mean field strengths \citep[see][]{2019ApJ...880..106K}. This is consistent with the observed magnetic field strength of neutral hydrogen in the solar vicinity \citep{2005ApJ...624..773H}. The full complexity of star formation/feedback and multiphase outflow/inflow cycles in the TIGRESS simulation suite is extensively discussed in \citet{2020ApJ...900...61K}.
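The horizontally averaged profiles used throughout this subsection follow \autoref{eq:havg}: for uniform cells, the $x$-$y$ integral divided by $L_x L_y$ reduces to a plain mean at each height. A minimal numpy sketch (array shapes are hypothetical):

```python
import numpy as np

def horizontal_average(q):
    """Horizontal average <q>(z) of a 3D field indexed as (z, y, x):
    the mean over x and y at each height, equivalent to
    (integral over x, y) / (Lx * Ly) for uniform cells."""
    return q.mean(axis=(1, 2))

# A field that varies only with z averages to its own vertical profile.
nz, ny, nx = 8, 4, 4
z_profile = np.arange(nz, dtype=float)
q = np.broadcast_to(z_profile[:, None, None], (nz, ny, nx))
print(horizontal_average(q))  # equals z_profile
```

Stacking such 1D profiles from successive snapshots gives the $z$-$t$ space-time diagrams of \autoref{fig:noICM}.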
\subsection{ICM models}\label{sec:method_icm} \begin{deluxetable*}{lccccc} \tablecaption{ICM Model Parameters \label{tbl:models}} \tablehead{ \colhead{Model} & \colhead{$n_{\rm ICM}$} & \colhead{$v_{\rm ICM}$} & \colhead{$P_{\rm ICM}/k_{\rm B}$} & \colhead{$P_{\rm ICM}/{\mathcal{W}_{\rm GG}}$} & \colhead{$\Delta x$}\\ \colhead{} & \colhead{($10^{-4}\;\pcc$)} & \colhead{($10^3\;\kms$)} & \colhead{($10^4\;{\rm \, cm^{-3}\,K}$)} & \colhead{} & \colhead{(pc)} } \colnumbers \startdata {\tt ICM-P1} & 0.5 & 1 & 0.94 & 0.18 & 8 \\ {\tt ICM-P3}({\tt h}) & 1 & 1.4 & 3.6 & 0.69 & 8(4) \\ {\tt ICM-P7}({\tt h}) & 2 & 1.4 & 7.2 & 1.4 & 8(4)\\ {\tt ICM-P14} & 2 & 2 & 14 & 2.7 & 8 \enddata \tablecomments{ Column (1): model name. Column (2): hydrogen number density of the ICM. Column (3): relative velocity of the ICM and the ISM disk. Column (4): ICM pressure. Column (5): ratio of the ICM pressure to the ISM weight. $\mathcal{W}_{\rm GG}=5.27\times10^4k_{\rm B}\;{\rm \, cm^{-3}\,K}$ is an approximate ISM weight estimated by \autoref{eq:Wgg}. Column (6): spatial resolution. } \end{deluxetable*} We take the first snapshot shown in \autoref{fig:noICM} ($t\sim245\Myr$) as the initial condition and restart the simulation with an ICM inflow. Here and hereafter, we exclusively use the term ISM to denote the gas that was in the simulation domain before injecting the ICM. At this time, the gas surface density in the {\tt noICM}{} model has been reduced to $\Sgas= 9.5 \Surf$ as gas turns into stars and leaves the simulation domain as outflows through the vertical boundaries. We model the ICM inflow as a constant, unmagnetized, vertical inflow through the bottom boundary (i.e., a face-on interaction). We set the ICM metallicity to $Z_{\rm ICM} = 0.1Z_\odot$ \citep[e.g.,][]{2011MNRAS.414.2101U}, which serves as a tracer of the gas origin together with other passive scalars (see \autoref{sec:method_phase}).
The ICM inflows are characterized by two parameters: hydrogen number density of the ICM $n_{\rm ICM}=\rho_{\rm ICM}/(1.4271 m_H)$ and inflow velocity $v_{\rm ICM}$. We adopt the ICM sound speed $c_{s, {\rm ICM}}=300\kms$. The total pressure of the ICM at injection is \begin{align}\label{eq:PICM} P_{\rm ICM} \equiv& \rho_{\rm ICM}(v_{\rm ICM}^2 + c_{s, {\rm ICM}}^2) = \rho_{\rm ICM}v_{\rm ICM}^2 (1+\mathcal{M}_{\rm ICM}^{-2})\nonumber\\ \approx& 1.73 \times 10^4 (1+\mathcal{M}_{\rm ICM}^{-2}) k_{\rm B}{\rm \, cm^{-3}\,K}\nonumber\\ &\left(\frac{n_{\rm ICM}}{10^{-4}\pcc}\right) \left(\frac{v_{\rm ICM}}{10^3\kms}\right)^2, \end{align} which is dominated by the ram pressure for our chosen $v_{\rm ICM}\ge10^3\kms$ (or Mach number of the ICM $\mathcal{M}_{\rm ICM}>3.3$).\footnote{Note that the adopted ICM sound speed (or $T_{\rm ICM}\sim 4 \times10^6$\Kel) is about a factor of two smaller than that of the ICM in the Virgo cluster ($T_{\rm ICM}\sim 2\times10^7\Kel$; \citealt{shibata2001_virgo_xray}), which is still smaller than the inflow velocity $v_{\rm ICM}$ so that the results are expected to be qualitatively unchanged.} In the simulations, however, as soon as the ICM sweeps up the ISM, a reverse shock thermalizes the inflowing ICM, and it is the hot ICM with the total pressure $P_{\rm ICM}$ dominated by the thermal term that interacts with the ISM. While the ISM is pushed away from the galactic disk owing to the interaction with the ICM, the stellar and dark matter components are not immediately disturbed in RPS galaxies. This is particularly true for our simulations because we use a fixed analytic potential for stellar and dark matter gravity. The gas weight under the external gravity\footnote{Note that the term ``external'' here is used not for the gravity from external galaxies but for the gravity from non-gaseous components.} is \begin{align}\label{eq:Wext} \mathcal{W}_{\rm ext} &\equiv \int_0^\infty \! 
\rho\left|\frac{d\Phi_{\rm ext}}{dz}\right| \, dz, \end{align} where the functional form of the external gravitational potential is \begin{align}\label{eq:phi_ext} \Phi_{\rm ext}(z)\equiv &2\pi G \Sigma_* z_*\sbrackets{\rbrackets{1+\frac{z^2}{z_*^2}}^{1/2}-1} \nonumber\\ &+ 2\pi G \rho_{\rm dm} R_0^2\ln\rbrackets{1+\frac{z^2}{R_0^2}}. \end{align} We adopt parameters representing solar neighborhood conditions: galactocentric distance $R_0=8\kpc$, stellar surface density $\Sigma_*=42\Surf$, stellar scale height $z_*=245\pc$, and midplane dark matter density $\rho_{\rm dm}=6.4\times10^{-3}\rhounit$ \citep{2013ApJ...772..108Z,2015ApJ...814...13M}. When the gas is stripped far away from the stellar disk (i.e., the mean gas position is much larger than the stellar disk scale height, $z\gg z_*$), the stellar gravity (the first term in \autoref{eq:phi_ext}) becomes nearly constant such that $|d\Phi_*/dz| = 2\pi G \Sigma_*$. The ISM weight can then be well approximated by $\mathcal{W}_{\rm ext}\approx\mathcal{W}_{\rm GG}$, where \begin{align}\label{eq:Wgg} \mathcal{W}_{\rm GG} \equiv& 2\pi G \Sgas\Sigma_*\\ =&5.27\times10^4k_{\rm B}{\rm \, cm^{-3}\,K}\nonumber\\ &\rbrackets{\frac{\Sgas}{9.5\Surf}} \rbrackets{\frac{\Sigma_*}{42\Surf}}. \end{align} This ``restoring'' force per unit area (often called the ``anchoring'' pressure), originally presented in \citet{1972ApJ...176....1G}, has been conveniently compared with the ICM ram pressure to determine the stripping condition \citep[e.g.,][]{2004AJ....127.3361K,2006A&A...453..883V,2007ApJ...659L.115C,koppen2018,jaffe2018}. Note that, for our adopted gravitational potential, the dark matter contribution to the vertical gravity keeps increasing with $z$ and becomes comparable to that of the stars at the vertical boundaries, $z\sim 3.5\kpc$. Therefore, $\mathcal{W}_{\rm GG}$ slightly underestimates the maximum $\mathcal{W}_{\rm ext}$ in our simulations, by 25\%. \autoref{tbl:models} lists the ICM models.
Column (1) is the model name; we adopt a nomenclature encoding the strength of the ICM ram pressure presented in Column (4). The higher resolution models ($\Delta x=4\pc$) have names ending with `{\tt h}' (see Column (6)). For the higher resolution models, we refine the original data cube from the {\tt noICM}{} model using a zero-gradient prolongation (i.e., volume- and area-averaged quantities in finer grid cells are the same as their parent cell values). Therefore, the initial conditions are identical across all models. Columns (2) and (3) are the number density and inflow velocity of the ICM, which set the ICM pressure (Column (4); \autoref{eq:PICM}). Column (5) shows the ratio of the ICM pressure to the maximum ISM weight under the stellar gravity (\autoref{eq:Wgg}), which is a rough estimate of the relative strength of the ICM-ISM interaction. Finally, we list the spatial resolution in Column (6). We consider four different ICM conditions (with two additional higher resolution runs), covering $P_{\rm ICM}/k_{\rm B} \sim 1-14 \times 10^4{\rm \, cm^{-3}\,K}$. Since the ISM condition is fixed, the relative strength of the ICM-ISM interaction simply increases as $P_{\rm ICM}$ increases; {\tt ICM-P1}{} and {\tt ICM-P3}({\tt h}) have $P_{\rm ICM}/\mathcal{W}_{\rm GG}<1$, while {\tt ICM-P7}({\tt h}) and {\tt ICM-P14}{} have $P_{\rm ICM}/\mathcal{W}_{\rm GG}>1$. Throughout the paper, the former two models are referred to as \emph{the weak ICM models} and the latter two as \emph{the strong ICM models}. Our parameter choice brackets the relative strength of the ISM-ICM interaction seen in NGC 4522, a prototypical galaxy undergoing ram pressure stripping in the Virgo cluster \citep{2004AJ....127.3361K,2006A&A...453..883V}.
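The tabulated pressures and ratios follow directly from \autoref{eq:PICM} and \autoref{eq:Wgg}; a minimal sketch reproducing Columns (4) and (5) (Python; cgs constants and variable names are ours):

```python
M_H = 1.6726e-24   # hydrogen mass [g]
K_B = 1.3807e-16   # Boltzmann constant [erg/K]
MU = 1.4271        # mean mass per hydrogen nucleus, in units of m_H
C_S_ICM = 300e5    # adopted ICM sound speed [cm/s]

def p_icm_over_kb(n_icm_cc, v_icm_kms):
    """Total ICM pressure P/k_B [cm^-3 K]: ram plus thermal term,
    P = rho*(v^2 + c_s^2)."""
    rho = MU * M_H * n_icm_cc
    v = v_icm_kms * 1e5
    return rho * (v**2 + C_S_ICM**2) / K_B

W_GG = 5.27e4  # approximate ISM weight, in k_B cm^-3 K

# ICM-P7: n_ICM = 2e-4 cm^-3, v_ICM = 1400 km/s
p7 = p_icm_over_kb(2e-4, 1400.0)
print(p7 / 1e4, p7 / W_GG)  # ~7.1 and ~1.35; cf. 7.2 and 1.4 in the Table
```

The small differences from the tabulated values reflect rounding of the $1.73\times10^4$ coefficient in \autoref{eq:PICM}.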
In addition, the anchoring pressure of our simulation (\autoref{eq:Wgg}) is comparable to that near the truncation radius of NGC 4522 \citep[][]{2004AJ....127.3361K,2009AJ....138.1741C,2017MNRAS.466.1382L,2018ApJ...866L..10L}. Thus, our \emph{weak/strong} ICM models can represent the evolution inside/outside the truncation radius of NGC 4522. \subsection{Tracer Fields and Gas Phases}\label{sec:method_phase} In the TIGRESS framework, the gas is divided into five thermal phases based on its temperature, corresponding to typical discriminators of the three-phase ISM (but including thermally unstable phases; \citealt{mckee1977}). Each cell is exclusively assigned to the cold, unstable, warm, intermediate (warm-hot ionized medium), or hot phase following the temperature criteria in \autoref{tbl:phase}. We often combine the cold, unstable, and warm phases and call them the cool phase. \begin{deluxetable}{lC} \tablecaption{Definition of Thermal Phases \label{tbl:phase}} \tablehead{ \colhead{Phase} & \colhead{Condition} } \startdata cold & T<184\Kel \\ unstable & 184\Kel<T<5050\Kel \\ warm & 5050\Kel<T<2\times10^4\Kel \\ intermediate & 2\times10^4\Kel<T<5\times10^5\Kel \\ hot & T>5\times10^5\Kel \enddata \tablecomments{The cold, unstable, and warm phases are combined and referred to as the cool phase.} \end{deluxetable} The hot gas in the {\tt noICM}{} model is created by SN shocks, while the warm and cold phases are maintained via radiative heating due to FUV radiation. With ICM inflows, a significant amount of hot ICM is directly added. The hot gas can accelerate the cooler gas directly through its pressure gradient (both ram and thermal pressure), but another significant (likely dominant) acceleration mechanism, as we shall show, is momentum transfer through the mixing of the hot gas into the cooler gas (\autoref{sec:stripping}; see also \citealt{2022ApJ...924...82F}).
It is therefore critical to separate the origins of the gas and trace the fractions of gas of different origins in the different thermal phases. We utilize passive scalars to track the mass fractions of the initial ISM, SN ejecta, and ICM in each cell. Here, we use the term passive scalar to denote the product of the gas density and a tracer field (or specific scalar). Practically, we follow the total metallicity $Z$, SN ejecta mass fraction $\ssn{}$, and ICM mass fraction $\sicm{}$. The metallicity tracer field $Z$ is initialized with $Z_\odot=0.02$ at the beginning of the {\tt noICM}{} simulation, while the other two tracer fields are initialized to zero everywhere. In the code, additional continuity equations for $\rho Z$, $\rho \ssn$, and $\rho\sicm$ are solved with the gas velocity field. For each SN event, we add the total and metal density of SN ejecta, $\rho_{\rm ej}\equiv M_{\rm ej}/V_{\rm SNR}$ and $Z_{\rm SN}\rho_{\rm ej}$, respectively, to the passive scalars in the feedback region (of course, the SN ejecta density is added to the gas density as well). As the {\tt noICM}{} model has evolved for $\sim250\Myr$ before the restart with the ICM, the ISM disk's metallicity has been enriched by SN ejecta. When we restart the simulations with ICM inflows, we adopt the metallicity and SN ejecta fraction inherited from the {\tt noICM}{} model. The ICM inflow with the ICM tracer field $\sicm=1$ is then added and followed by another passive scalar. Also, the ICM metallicity is set to $Z_{\rm ICM}=0.1Z_\odot$ and mixed into the metal passive scalar $\rho Z$ as the ICM interacts with the existing gas. When sink particles are formed and accrete gas, the passive scalars are also locked into the particles, recording the metallicity of the star-forming gas.
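The passive-scalar bookkeeping for a single SN event can be sketched as a single-cell toy update (Python; the numbers are hypothetical and this is not the code's actual implementation):

```python
def inject_sn_ejecta(rho, rho_Z, rho_ssn, rho_ej, Z_sn):
    """Add SN ejecta of density rho_ej and metallicity Z_sn to one cell:
    the ejecta mass goes into the gas density, its metal mass into the
    metal scalar rho*Z, and all of it into the ejecta scalar rho*s_SN."""
    rho_new = rho + rho_ej
    rho_Z_new = rho_Z + Z_sn * rho_ej
    rho_ssn_new = rho_ssn + rho_ej
    return rho_new, rho_Z_new, rho_ssn_new

# Toy cell at solar metallicity (Z = 0.02) with no prior ejecta:
rho, rho_Z, rho_ssn = inject_sn_ejecta(1.0, 0.02, 0.0,
                                       rho_ej=0.1, Z_sn=0.2)
print(rho_Z / rho)    # mass-weighted metallicity: 0.04/1.1 ~ 0.036
print(rho_ssn / rho)  # SN ejecta mass fraction: 0.1/1.1 ~ 0.091
```

Because the scalars are advected with the same velocity field as the density, ratios like $Z=\rho Z/\rho$ and $\ssn=\rho\ssn/\rho$ remain meaningful after arbitrary mixing.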
As mentioned above, we adopt three distinct metallicities for gas of different origins: \begin{itemize} \item genuine ISM -- initial gas from the beginning of the {\tt noICM}{} model ($Z_{\rm ISM,0}=Z_\odot$), \item SN ejecta -- gas added in the feedback region by SNe ($Z_{\rm SN}=10Z_\odot$), and \item ICM -- gas added through the bottom boundary as the ICM inflow ($Z_{\rm ICM}=0.1Z_\odot$). \end{itemize} The metallicity is a good proxy for the composition of the gas in the simulations, providing potential observational imprints. In each cell, the metallicity is connected to the SN ejecta and ICM mass fractions as \begin{equation}\label{eq:Z} Z = Z_{\rm ISM,0} (1 - \ssn - \sicm) + Z_{\rm SN}\ssn + Z_{\rm ICM}\sicm. \end{equation} As presented in \citet{2020ApJ...895...43S}, the dominance of mixing-driven momentum transfer from the hot to the cool phase is simply evidenced by a linear relation between the source tracer field and the velocity of the cool phase. The predicted outflow velocity of the cool phase in the case of mixing-driven momentum transfer is \begin{equation}\label{eq:vzcool_icm} v_{\rm z}^{\rm cool} = v_{\rm ICM}\sicm[cool] \end{equation} for ICM-accelerated outflows. In our simulations, both SNe and the ICM create hot gas, so both the SN ejecta and ICM tracer fields can leave imprints on the accelerated cool outflows. However, as the relation only holds for the \emph{fresh} tracer field (the tracer field that is first transferred from the hot to the cool phase), the ICM mass fraction provides an ideal tracer for this purpose, especially for the first acceleration of the ISM. In contrast, as we restarted the simulations from the {\tt noICM}{} model after many feedback-star formation cycles, much of the SN ejecta is already mixed into the cool phase that has been accelerated, fallen back, and reaccelerated many times. The total SN ejecta mass fraction in the cool phase is no longer representative of the amount of hot gas that is currently mixed in.
Still, we can see signs of SN-accelerated gas from the relatively metal-enriched gas that is moving faster (\autoref{sec:stripping_mixing}). \section{Overall Evolution} \label{sec:overall} In this section, we provide an overview of the RPS process in our simulations and visual impressions using a variety of quantities at different times in two representative models. \subsection{Overview of RPS in Simulations}\label{sec:overview} \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{sicm_hot.png} \includegraphics[width=0.49\textwidth]{M3.png} \caption{Horizontally-averaged ICM mass fraction in the hot phase (left) and vertical momentum density (= upward mass flux; right) as a function of time for all ICM models. The symmetric plane of the external gravity ($z=0$) is indicated by the horizontal dotted lines. The orange dashed line denotes the ICM-ISM interface as defined by $\overline{\sicm[hot]}=0.5$. The left- and right-pointing triangles mark the beginning of the early and active stripping stages, as defined by the earliest time at which the ICM-ISM interface reaches $z=-500\pc$ and $500\pc$, respectively.} \label{fig:sicm} \end{figure*} To summarize the general response of the ISM as a whole to the ICM inflows, \autoref{fig:sicm} plots the vertical profiles of the ICM mass fraction in the hot phase (left) and the vertical momentum density (= upward mass flux; right) as a function of time for all ICM models. The former is defined by the mass-weighted horizontal average of $\sicm$ of the hot gas, $\overline{\sicm[hot]}\equiv\abrackets{\rho\sicm}^{\rm hot}/\abrackets{\rho}^{\rm hot}$. From top to bottom, we show all models in ascending order of the ICM ram pressure, including the two high-resolution models shown in the 3rd and 5th rows. We plot the reference line of $\overline{\sicm[hot]}=0.5$ that defines the mean ICM-ISM boundary.
Owing to the multiphase structure of the ISM, the actual boundaries between the ICM and ISM are much more complex (see \autoref{sec:morphology}). The positive (upward) mass flux below the interface simply represents the mass flux of the ICM inflows. As soon as the simulations are restarted with the ICM inflows, the ISM in the bottom half is quickly pushed up, and the ICM-ISM interface approaches the midplane in 10-50~Myr, depending on the ICM inflow strength. Then, the interface either remains near the midplane with a clear separation in the ICM fraction (weak ICM models) or continuously marches upward (strong ICM models). This dichotomy is in excellent agreement with the expectation based on the simple stripping condition listed in Column (5) of \autoref{tbl:models}. Using the position of the ICM-ISM interface, we divide the overall evolution into three stages: the compression stage, the early stripping stage, and the active stripping stage. The earliest time at which the ICM-ISM interface reaches $z=-500\pc$ and $z=500\pc$ respectively defines the beginning of the early and active stripping stages (marked by left- and right-pointing triangles in \autoref{fig:sicm}). Because the multiphase ISM is porous and has low-density channels through which the ICM can penetrate, the ICM gradually pollutes the ISM in the upper disk even when the interface stays near the disk midplane. The only exception is the {\tt ICM-P1}{} model, where the penetration of the ICM is not effective, and the ISM in $z>0$ remains unpolluted over the entire simulation duration ($\sicm<1\%$). As a result, the mass flux evolution in the upper disk of the {\tt ICM-P1}{} model is qualitatively similar to that in the {\tt noICM}{} model (\autoref{fig:noICM}(b)), indicating that the outflows are still driven by SN feedback. In the {\tt ICM-P3}{} model, the ICM mass fraction in the upper half increases quickly and becomes larger than 10\%.
The mass flux in this model is overall enhanced, while the fountain component (alternating positive and negative signs) still exists, implying insufficient acceleration of the cool phase by the marginally weak ICM. The high-resolution model ({\tt ICM-P3h}{}) behaves essentially the same, while its late-time evolution shows a strong outflowing epoch. Given the inherently stochastic nature of the evolution, this difference should not be interpreted as a systematic resolution dependence. Rather, the qualitative similarity of the early evolution ($t<350\Myr$) indicates convergence of the overall evolution. The ICM penetration in both the {\tt ICM-P7}{} and {\tt ICM-P14}{} models is highly efficient and leads to an immediate enhancement of the ICM mass fraction in the upper disk. The ICM-ISM interface continuously moves upward in the {\tt ICM-P14}{} model, but the {\tt ICM-P7}{} model spends quite a long time ($\sim 70\Myr$) in the early stripping stage, with a significantly larger ICM mass fraction ($>10\%$) in the upper disk than that in the {\tt ICM-P3}{} model. The net mass flux in the upper disk is always positive in the strong ICM models, demonstrating the dominant role of the ICM in driving outflows and implying RPS in action. Again, the high-resolution model ({\tt ICM-P7h}{}) shows a very similar evolution to its low-resolution counterpart. We emphasize that the multiphase RPS occurs continuously in both the early and active stripping stages for the strong ICM models. In the early stripping stage, the ICM finds low-density channels in the porous ISM to penetrate. In doing so, the ICM begins to shred the ISM and transfer mass, momentum, and energy as mixing occurs. In the active stripping stage, which only exists in the strong ICM models, the ICM fills the volume over a wide range of the disk. ICM mixing and momentum transfer occur throughout the simulation volume, and the ISM is effectively accelerated and removed from the simulation domain.
We will delineate the mixing-driven stripping in \autoref{sec:stripping}. \subsection{Time Evolution of Masses}\label{sec:tevol} \begin{figure} \centering \includegraphics[width=1\linewidth]{surf_tevol.png} \caption{Time evolution of (a) total (ICM+ISM) gas surface density, (b) stellar surface density of newly formed stars, and (c) surface density of outflowing gas. (b) and (c) are cumulative, calculated by counting the mass of all new stars and integrating the mass fluxes at both the upper and lower boundaries from the restart of the simulations. The colored solid lines correspond to the models with different ICM pressures, while the black dashed line is for the {\tt noICM}{} model.} \label{fig:surf} \end{figure} \autoref{fig:surf} shows the time evolution of (a) total (ICM+ISM) gas surface density, (b) surface density of new stars $\Sigma_{\rm new-star}$, and (c) surface density of gas passed through the vertical boundaries $\Sigma_{\rm out}$. $\Sigma_{\rm new-star}$ is defined by summing up the total mass of stars formed since the restart of the simulations, and $\Sigma_{\rm out}$ is calculated by integrating the net mass flux at the vertical boundaries (both escaped through the top and injected from the bottom) over time. A clear dichotomy between the weak and strong ICM models is visible. The strong ICM models lose gas and stop forming stars after $\sim 100\Myr$. The weak ICM models retain (or even gain) gas and form more stars than the {\tt noICM}{} model. Despite the highly complex interaction between the multiphase ISM and ICM revealed in our simulations (as discussed in \autoref{sec:morphology}), the simple stripping condition estimated by \autoref{eq:Wgg} provides a reliable prediction for the fate of the gas disk. This is in part because we only model the face-on, plane-parallel interaction, for which the simple criterion works best.
Even without the ICM, the {\tt noICM}{} model loses its gas through star formation and outflows powered by SN feedback \citep{2018ApJ...853..173K,2020ApJ...900...61K}. The mean SFR and mass outflow rate over the simulation duration of $250-500\Myr$ are $\Ssfr = 3.1\times10^{-3}\sfrunit$ and $\dot{\Sigma}_{\rm gas,out} = 7.7\times10^{-4} \sfrunit$, respectively. With the ICM inflows, the total gas mass within the domain can increase as the ICM is added to the system, unless the outflow and SFRs are greatly increased. The {\tt ICM-P1}{} model closely follows the evolution curve of the {\tt noICM}{} model, as the enhancement in SFRs is compensated by the decrease in outflow rates (panel (a)). In the {\tt ICM-P3}{} model, $\Sigma_{\rm out}$ becomes negative, implying a net inflow through the boundaries (panel (c)). The overall gas mass still decreases when taking into account the loss due to star formation. In the {\tt ICM-P7}{} and {\tt ICM-P14}{} models, the gas mass decreases quickly as the outward mass fluxes are greatly enhanced after the compression stage (see also \autoref{fig:sicm}). The gas compression by the ICM causes an enhancement of early star formation in all ICM models (see \autoref{sec:sfr} for an in-depth analysis). The half-mass stripping time, defined by the time interval between $\Sigma_{\rm gas}(t)=\Sigma_{\rm gas,max}$ and $\Sigma_{\rm gas,max}/2$, is $\sim 130\Myr$ and $60\Myr$ for the {\tt ICM-P7}{} and {\tt ICM-P14}{} models, respectively. On a similar time scale, star formation in the {\tt ICM-P7}{} and {\tt ICM-P14}{} models is completely quenched, showing a flattening in $\Sigma_{\rm new-star}$ (panel (b)). $\Sigma_{\rm out}$ flattens later, after complete stripping (when the ICM flows freely; panel (c)). The overall qualitative behaviors are converged with resolution, but the late-time evolution shows differences, mainly due to the stochasticity of the evolution.
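The half-mass stripping time defined above can be extracted from a surface-density time series as sketched below (Python; the synthetic exponential decay is purely illustrative):

```python
import numpy as np

def half_mass_stripping_time(t, sigma_gas):
    """Interval between the time of the maximum of Sigma_gas(t) and the
    first later time at which Sigma_gas drops to half that maximum."""
    i_max = int(np.argmax(sigma_gas))
    target = sigma_gas[i_max] / 2.0
    later = np.where(sigma_gas[i_max:] <= target)[0]
    if later.size == 0:
        return None  # never stripped to half within the series
    return t[i_max + later[0]] - t[i_max]

# Illustrative curve: mild growth, then an e-folding decay of ~90 Myr.
t = np.linspace(0.0, 500.0, 501)
sigma = np.where(t < 50.0, 9.0 + 0.02 * t,
                 10.0 * np.exp(-(t - 50.0) / 90.0))
print(half_mass_stripping_time(t, sigma))  # close to 90*ln(2), about 62 Myr
```

For an exponential decay, the half-mass time is $\tau\ln 2$ for e-folding time $\tau$, which is a convenient consistency check on the measured values.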
\subsection{Morphological Evolution} \label{sec:morphology} \begin{figure*} \centering \includegraphics[width=\textwidth]{{P3h.0031}.png} \includegraphics[width=\textwidth]{{P7h.0031}.png} \caption{Detailed visualization of the multiphase ISM interacting with the ICM at the end of the compression stage ($t=275\Myr$) for the {\tt ICM-P3h}{} (top) and {\tt ICM-P7h}{} (bottom) models. In (a), the column density integrated along the $y$-axis is shown with sink particles colored by their age. The other columns show physical quantities in a one-zone-thick slice centered at $x=0$. From left to right, we show (b) hydrogen number density ($n_H$), (c) temperature ($T$), (d) vertical velocity ($v_z$), (e) metallicity ($Z$), (f) ICM mass fraction ($\sicm$), (g) ram pressure ($\rho v_z^2$), (h) thermal pressure ($P$), and (i) magnetic pressure ($P_B = B^2/8\pi$). Only the $z>-0.5\kpc$ region is shown to focus on the upper disk. Animations of this figure for each model are available in the electronic journal. The video begins at $t=244\Myr$ and ends at $t=450\Myr$. The real-time duration of the video is 20~s.} \label{fig:slc_early} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{{P3h.0098}.png} \includegraphics[width=\textwidth]{{P7h.0098}.png} \caption{Same as \autoref{fig:slc_early}, but at $t=340\Myr$.} \label{fig:slc_late} \end{figure*} To delineate the interaction between the ICM inflows and the multiphase ISM, \autoref{fig:slc_early} shows snapshots at $t=275\Myr$ for two representative models, {\tt ICM-P3h}{} (top) and {\tt ICM-P7h}{} (bottom). At this epoch, the ISM in the bottom half has already been pushed up past the midplane in both models, so we only show the upper disk, $z>-0.5\kpc$. The visual impressions from the projected density and the slices are substantially different. The projected density shows an overall density distribution with mild fluctuations, with similar sharp cutoffs at around $z=0$ for both models.
The immediate impression might be that there is a well-defined ICM-ISM interface near the midplane. However, the slices unveil a highly-porous, multiphase structure with large density and temperature contrasts. Also, the significant penetration of the ICM (see (f)) reveals a significant difference between the two models. We emphasize that, compared to the projection maps, slices (or thin projections) of gas physical quantities are often more useful for delivering visual insights. This is particularly true when looking into the interaction between different phases with large contrasts in physical properties. The density (panel (b)) and temperature (panel (c)) slices of both models show a large, cool gas structure that begins to face the ICM near the midplane. This appears as a single, continuous structure in density and temperature, but $v_z$ (panel (d)) and hence $\rho v_z^2$ (panel (g)) show a sharp change between its left and right sides. The enhanced ICM mass fraction $\sicm$ (panel (f)) and the relatively large outward velocity (panel (d)) on the right side of this structure imply that this is the gas originally in the bottom half accelerated by the ICM. The accelerated cool gas from the lower disk remains intact and appears as a high ram pressure chunk at around $z=0.5\kpc$ in the {\tt ICM-P3h}{} model, but it is already substantially shredded and fragmented in the upper disk in the {\tt ICM-P7h}{} model. Another interesting cool gas structure that shows a difference between the two models is the one located at $z\sim 1\kpc$ near the left edge of the slice. In the {\tt ICM-P3h}{} model, this structure is still falling (panel (d)), while the same structure has already been significantly shredded and accelerated by the interaction with the ICM in the {\tt ICM-P7h}{} model. There are plenty of similar falling gas structures (fountain flows) in the {\tt ICM-P3h}{} model across a wide range of $z$.
In the {\tt ICM-P7h}{} model, such infalling fountain flows are no longer prevalent; those that remain are generally more compressed and even outflowing. Generally, the ICM in the {\tt ICM-P7h}{} model manages to intrude into almost the entire upper disk. This is visually evident from the enhanced $\sicm$ seen in most regions, which corresponds to enhanced ram and thermal pressures and outward velocities. The metallicity (panel (e)) contains more complex information due to the additional contribution of the high-metallicity SN ejecta. As noted in \autoref{sec:method_phase}, the ISM has been enriched in the {\tt noICM}{} model over $\sim250\Myr$. The mean ISM metallicity at the beginning of the ICM models (shown as a bright orange color in panel (e)) is thus larger than the initial metallicity $Z_\odot$. Without the ICM, it is the high metallicity gas injected by SNe that fills the low-density regions (this is still the case for $z>2\kpc$ in the {\tt ICM-P3h}{} model). With the ICM, it is now the low metallicity gas that fills the low-density regions (more evident in the {\tt ICM-P7h}{} model), which also show high pressures (panels (g) and (h)). As the high-pressure ICM compresses the ISM, the magnetic pressure is enhanced in the cooler, denser gas (panel (i)). In \autoref{fig:slc_late}, we select snapshots at $t=340\Myr$ for the {\tt ICM-P3h}{} (top) and {\tt ICM-P7h}{} (bottom) models. The late-time evolution of the two models differs more substantially and is evident even in the density projection (panel (a)). Since the ICM inflow alone cannot keep pushing the ISM away in the {\tt ICM-P3h}{} model, the bulk ISM is falling back as star formation has been suppressed. In contrast, the strong ICM inflow alone can continue to strip the ISM in the {\tt ICM-P7h}{} model (see also \autoref{fig:sicm}).
While significantly disturbed, the cool phase in the {\tt ICM-P3h}{} model still has an ICM fraction of less than a few percent, and the overall visual impression is not very different from \autoref{fig:slc_early}. The hot ICM keeps penetrating through the low-density channels, creating shearing interfaces between the cool and hot phases in which the majority of mixing occurs. The volume fractions of the hot and cool phases are comparable throughout the upper disk. We note that when the hot gas is created only by SNe, as in the {\tt noICM}{} model, the hot gas volume fraction is $\sim20-30\%$ near the midplane and increases to $\sim 50\%$ at $z\sim1\kpc$ and to $\sim 100\%$ at $z>2\kpc$ \citep{2020ApJ...897..143K}. Star formation continues in this model at slightly higher rates than in the {\tt noICM}{} model. Once stars form, they fall faster than the gas since stars do not feel the ICM pressure. This leads to SN feedback in the ICM-dominated regions below the midplane, sometimes creating metal-enriched hot bubbles between the hot ICM and cool ISM (panels (e) and (f)). At this time, the {\tt ICM-P7h}{} model has already lost about $\sim 30\%$ of its total mass, and almost all the ISM has been pushed above $z\sim0.5\kpc$ (panel (a)). The cool ISM is highly fragmented and confined to smaller volumes (panels (a) and (b)). The dense gas structure facing the ICM at $z\sim0.5\kpc$ is the leftover from the first major stripping of the main ISM disk and is now falling back. Stars have just formed in this strongly compressed structure, the final major star formation event before complete quenching (there will be additional extraplanar star formation in the structure far above the midplane later). Star clusters formed at high $z$ during the previous evolutionary stage have fallen below the ICM-ISM interface, some of which still host SNe and create metal-enriched bubbles at $z\sim0\kpc$.
Even in the cool gas above $z>2\kpc$, the ICM fraction is quite high ($\sim 10\%$), since the ICM is continuously mixed in, transferring momentum and maintaining the momentum flux against the weight at that height (\autoref{sec:stripping_mixing}). The acceleration was not sufficient to blow this structure away, though. Apart from these distinctive large structures, smaller clouds have already been ablated, as evidenced by many tadpole-shaped structures whose density and temperature are respectively higher and lower than those of the typical hot ICM. The intermediate-phase gas populating the wakes of the front clouds in part escapes the domain and in part condenses back to the cool phase, especially when the wakes meet the large cool structure behind them. From panels (d) to (g), it is evident that such acceleration is most efficient in the envelope of the large cool-phase structure, where both the ICM fraction and the vertical velocity are relatively high. \section{Stripping of the multiphase ISM}\label{sec:stripping} The acceleration and stripping of the ISM is a generic feature of a disk interacting with the ICM. To provide a more quantitative view, we first investigate when and where the hot ICM exchanges its mass, momentum, and energy with the cool ISM. We then seek evidence of mixing-driven momentum transfer. \subsection{Mass, Momentum, and Energy Transfers between Thermal Phases}\label{sec:stripping_transfer} In this subsection, we define physical quantities averaged over ranges of volume and time to understand how different gas phases exchange their mass, momentum, and energy at different locations and times.
We begin by integrating the conserved form of the MHD equations over the entire horizontal area $A=L_xL_y$ and a chosen vertical range ($z\in(z_{\rm min},z_{\rm max})$) to obtain a set of conservation equations \begin{equation}\label{eq:qdot} \dot {q} + [\mathcal{F}_q(z_{\rm max}) - \mathcal{F}_q(z_{\rm min})] A = \dot{q}_{\rm source} - \dot{q}_{\rm sink}, \end{equation} where $q=M$, $p$, and $E$ for mass, vertical momentum, and total energy, which are respectively defined as \begin{equation}\label{eq:mass_mom} M \equiv \int_{z_{\rm min}}^{z_{\rm max}} \rho dV, \textrm{and}\quad p \equiv \int_{z_{\rm min}}^{z_{\rm max}} \rho v_z dV, \end{equation} and \begin{equation}\label{eq:energy} E \equiv \int_{z_{\rm min}}^{z_{\rm max}} \rbrackets{\frac{1}{2}\rho v^2 + \frac{P}{\gamma-1} + P_B} dV. \end{equation} The horizontally-averaged fluxes (the square-bracket term on the left-hand side of \autoref{eq:qdot}) of mass, vertical momentum, and total energy are respectively defined as \begin{equation}\label{eq:massflux} \mathcal{F}_M(z) \equiv \frac{1}{A}\int \rho v_z dx dy, \end{equation} \begin{equation}\label{eq:momflux} \mathcal{F}_p(z) \equiv \frac{1}{A}\int \rbrackets{\rho v_z^2 + P + P_B- \frac{B_z^2}{4\pi}} dx dy, \end{equation} and \begin{equation}\label{eq:eneflux} \mathcal{F}_E(z) \equiv \frac{1}{A}\int \rbrackets{\rho v_z \rbrackets{\frac{v^2}{2} + \frac{\gamma}{\gamma-1}\frac{P}{\rho}} + \mathcal{S}_z} dx dy, \end{equation} where the adiabatic index is $\gamma=5/3$ and the vertical component of the Poynting flux is $\mathcal{S}_z\equiv (v_z B^2 - B_z \vel\cdot\mathbf{B})/(4\pi)$. Note that we do not include the gravitational potential term in the energy flux, so that the work done by gravity appears as a sink term on the right-hand side of \autoref{eq:qdot}, similarly to the momentum sink term from the weight.
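As an illustration, the horizontally averaged fluxes in \autoref{eq:massflux}--\autoref{eq:eneflux} reduce to simple array arithmetic on gridded snapshot data. The sketch below (a minimal Python version with illustrative array names, not the actual analysis pipeline of this work) assumes uniform cell sizes and internally consistent units:

```python
import numpy as np

def horizontal_fluxes(rho, v, B, P, dx, dy, gamma=5.0 / 3.0):
    """Horizontally averaged vertical fluxes of mass, momentum, and energy.

    rho, P : 3D arrays with axes (z, y, x); v, B : tuples (vx, vy, vz), (Bx, By, Bz).
    Returns F_M(z), F_p(z), F_E(z) following Eqs. (massflux)-(eneflux).
    """
    vx, vy, vz = v
    Bx, By, Bz = B
    A = rho.shape[1] * rho.shape[2] * dx * dy   # horizontal area
    dA = dx * dy                                # cell face area
    v2 = vx**2 + vy**2 + vz**2
    B2 = Bx**2 + By**2 + Bz**2
    P_B = B2 / (8.0 * np.pi)                    # magnetic pressure
    # mass flux: rho * v_z
    F_M = (rho * vz).sum(axis=(1, 2)) * dA / A
    # momentum flux: kinetic + thermal + magnetic (including the tension term)
    F_p = (rho * vz**2 + P + P_B - Bz**2 / (4.0 * np.pi)).sum(axis=(1, 2)) * dA / A
    # energy flux: advected kinetic energy + enthalpy, plus the Poynting flux S_z
    S_z = (vz * B2 - Bz * (vx * Bx + vy * By + vz * Bz)) / (4.0 * np.pi)
    F_E = (rho * vz * (0.5 * v2 + gamma / (gamma - 1.0) * P / rho)
           + S_z).sum(axis=(1, 2)) * dA / A
    return F_M, F_p, F_E
```

For a uniform, unmagnetized vertical flow this reproduces the expected values, e.g. $\mathcal{F}_E = \rho v_z (v^2/2 + \gamma P/[(\gamma-1)\rho])$.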
In the mass conservation equation, we have a mass sink due to star formation ($\dot M_*$) and a mass source due to SN ejecta ($\dot M_{\rm SN} = \dot{N}_{\rm SN}M_{\rm ej}$) if stars are born and SNe explode in the particular volume of interest. For our chosen stellar population synthesis model (STARBURST99 with the Kroupa IMF; \citealt{1999ApJS..123....3L}), one SN ejects $10\Msun$ per $\sim100\Msun$ of new stars formed, implying $\dot{M}_{\rm SN} \sim 0.1 \dot{M}_*$ on average. We note that we did not subtract the mass of stars exploding as SNe in the simulations. The gravitational weight acts as a sink of vertical momentum, $\dot{p}_{\rm sink} = \mathcal{W} A$, where \begin{equation}\label{eq:weight} \mathcal{W} \equiv \frac{1}{A} \int_{z_{\rm min}}^{z_{\rm max}} \rho\frac{d \Phi}{dz} dV, \end{equation} and $\Phi = \Phi_{\rm ext} + \Phi_{\rm sg} + \Phi_{\rm tidal}$ includes the external potential (\autoref{eq:phi_ext}), the self-gravity potential obtained from the solution of the Poisson equation for both gas and star cluster particles, and the tidal potential arising from the local rotating frame, $\Phi_{\rm tidal} = - q\Omega^2 x^2$. Finally, there are energy sources from SN energy injection and the shearing-box stress, $\dot{E}_{\rm source} = \dot{N}_{\rm SN} E_{\rm SN} + w_{\rm xy}$, where $w_{\rm xy} = (q\Omega L_x)\int [\rho v_x\delta v_y - B_x B_y/(4\pi)]dydz$ is integrated over either of the $x$ boundaries \citep[][]{1995ApJ...440..742H}, and energy sinks from net radiative cooling, $\dot{E}_{\rm cool} = \int (n_H^2\Lambda(T) - n_H\Gamma)dV$, and the work done by gravity, \begin{equation}\label{eq:grav_work} \dot{E}_\Phi \equiv \int_{z_{\rm min}}^{z_{\rm max}} \rho \vel\cdot\nabla \Phi dV. \end{equation} Since we assign every gas parcel exclusively to one thermal phase, all terms are separable by phase. Since stars form from the cool (more precisely, cold) gas, the mass sink by star formation is attributed to the cool phase.
We attribute the SN mass and energy sources to the hot phase; only a small fraction ($\lesssim 10\%$) of unresolved SNe injects mass and energy in the form of the cooler phases. By integrating \autoref{eq:qdot} over a time interval $\Delta t$, we obtain \begin{equation}\label{eq:qdot2} \sum_{\rm ph}\dot{q}_{\rm net}^{\rm ph} = 0, \end{equation} where \begin{equation}\label{eq:qnet} \dot{q}_{\rm net}^{\rm ph} \equiv \frac{\Delta q^{\rm ph}-\Delta q_{\rm source}^{\rm ph}+\Delta q_{\rm sink}^{\rm ph}}{\Delta t} + \sbrackets{F_{q, u}^{\rm ph} - F_{q, l}^{\rm ph}}A, \end{equation} with ph = cool, int, and hot (see \autoref{tbl:phase}). Here, $\Delta q$ denotes the change of mass, vertical momentum, or total energy over the interval. Similarly, the changes of the cumulative mass and energy injected by SNe and of the stellar mass formed define the SN source and star formation sink terms, while the weight and gravity work terms are obtained by time integration. The net cooling term is calculated by time integration of the instantaneous net cooling rate from simulation outputs. The time-averaged fluxes at the upper and lower faces are defined by \begin{equation}\label{eq:Favg} F_{q, u/l} \equiv \frac{1}{\Delta t}\int_{t}^{t+\Delta t} \mathcal{F}_q(z_{\rm max/min}) dt. \end{equation} By fully accounting for the temporal changes of each quantity, the fluxes through the vertical faces, and the sources/sinks in each phase, $\dot{q}_{\rm net}^{\rm ph}$ represents any loss/gain of mass, momentum, and energy through phase transitions within the space-time bins of interest. In practice, we measure each term using output snapshots dumped every $\sim1\Myr$.
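A minimal sketch of this bookkeeping, assuming the per-phase changes, sources/sinks, and time-averaged face fluxes have already been measured from snapshots (names are illustrative, not from the actual pipeline):

```python
def net_phase_rate(dq, dq_source, dq_sink, F_upper, F_lower, A, dt):
    """Net rate for one phase, Eq. (qnet): the residual after accounting
    for the explicit change, sources/sinks, and face fluxes. A nonzero
    value is attributed to phase transitions (plus finite-cadence errors)."""
    return (dq - dq_source + dq_sink) / dt + (F_upper - F_lower) * A

def total_residual(rates_by_phase):
    """Summed over phases, transfers cancel pairwise, so the total should
    vanish up to post-processing (non-conservation) errors, Eq. (qdot2)."""
    return sum(rates_by_phase.values())
```

For example, a bin in which the cool phase loses mass purely to the hot phase yields equal and opposite net rates that sum to zero.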
The cadence of snapshots was not fine enough to satisfy conservation (\autoref{eq:qdot2}) perfectly, as this \textit{post-processing} assumes that each variable involved in the time integration (e.g., cooling and flux terms) is constant over the snapshot interval of $1\Myr$.\footnote{One can instead use the \emph{instantaneous} conservation equations (\autoref{eq:qdot}). In this case, however, the time derivative terms can be noisy and inaccurate.} Bearing this caveat in mind, we analyze mass, momentum, and energy transfers between phases in space and time bins, and only consider them reliable when the net changes are distinct above the level of the non-conservation errors. \subsubsection{Mass Transfer}\label{sec:phase_transition} \begin{figure*} \centering \includegraphics[width=\textwidth]{Mdot_tz.png} \caption{Net mass change rates per unit area for each phase due to phase transition. The space-time bins have widths $\Delta t=10\Myr$ and $\Delta z=400\pc$. The mass sink (source) by star formation (SN ejecta) is calculated in each space-time bin and added to (subtracted from) the cool (hot) phase to isolate the gain and loss solely due to phase transition. The dashed lines in each panel show the ICM-ISM interface as defined in \autoref{fig:sicm}. The phase transition driven by the ICM shows a general trend (clearer with stronger ICM pressure): gain (loss) of the hot (cool) phase near the ICM-ISM interface, followed by loss (gain) of the hot (cool) phase in the extraplanar region. } \label{fig:mdot_tz} \end{figure*} We first consider mass conservation to understand phase transitions between thermal phases. \autoref{fig:mdot_tz} plots the net mass gain (red) and loss (blue) rates of the cool, intermediate, and hot phases, from left to right, within the space-time bins. Here, we consider $\Delta z = 400\pc$ thick slabs centered at $z = -3.6,\, -3.2,\, \cdots, -0.4,\, 0,\, 0.4\, \cdots,\, 3.2,\, 3.6\kpc$ and a time interval of $\Delta t =10\Myr$.
The mass sink by star formation is added back to the cool phase (left column) using the new star particles formed in the given space-time bins. Similarly, the mass source by SN ejecta is subtracted from the hot phase (right column). The positive (red) and negative (blue) values in \autoref{fig:mdot_tz} are thus solely due to phase transitions within the space-time bins. Without the ICM inflows ({\tt noICM}{}; row (a)), the net gain of the hot and intermediate phases at the midplane stands out as thin red strips. In this case, it is clustered SNe that create the hot phase via shock-heating of the ambient medium, producing the net loss (blue strip) in the cool phase. In our simulations, superbubbles expand into an inhomogeneous ambient medium, also creating much intermediate phase through the ablation of the cool phase, which appears as the net gain in the intermediate phase. Both hot and intermediate phases subsequently cool above and below the midplane slab; sometimes this ``cooling'' region extends even farther from the midplane. It can result from both direct cooling of the hot shocked gas (shell formation; e.g., \citealt{2015ApJ...802...99K}) and mixing of the hot and cool gas (interface mixing; e.g., \citealt{2017ApJ...834...25K,2019MNRAS.490.1961E}). If the hot bubbles cooled completely within the thickness of the midplane slab we consider ($z=\pm200\pc$), the net gain/loss in the different phases would not be visible; in other words, most superbubbles in our simulations expand to radii larger than $200\pc$ over $10\Myr$ before they cool. The upper half of the {\tt ICM-P1}{} model shows results overall similar to the {\tt noICM}{} model. As the ICM ram pressure gets stronger, noticeable differences begin to appear. In the {\tt ICM-P3h}{} model, the layer in which the hot and intermediate phases gain mass remains thin, while the cooling region extends toward higher $z$ and becomes more prominent.
The ICM-ISM interface stays near the midplane, and the effect of the ICM appears as an enhanced gain of the hot (and intermediate) phase in the midplane slab as the hot ICM shocks the cool ISM. At $t\sim 400\Myr$, the net gain in the hot phase is visible beyond the midplane, owing to the successful penetration of the hot ICM through the entire upper disk (see \autoref{fig:sicm}). This breakout causes a cool-to-hot phase transition followed by a hot-to-cool phase transition at $t\sim 420-450\Myr$. With even stronger ICM inflows, the mass-gaining layer of the hot and intermediate phases gets thicker (slightly thicker for the intermediate phase). Now, the cool-to-hot phase transition is dominated by the interaction between the hot ICM and the cool ISM rather than by SN feedback, especially at late times when star formation is nearly quenched. The large energy flux carried by the hot ICM can heat and ablate a significant amount of the cool phase, converting the cool ISM into the hot phase while populating the intermediate phase in mixing layers. Above the cool-to-hot phase transition layer, there is an extended region where the hot-to-cool phase transition occurs. In this region, best viewed in {\tt ICM-P7h}{}, the hot and intermediate phases cool back to the cool phase. What happens here is more like precipitation of the hot (and intermediate) phase onto the volume-filling cool phase that was pushed out by the earlier interaction. This is somewhat different from the cooling of the mixed gas itself, which drives the hot-to-cool phase transition in the cloud wakes seen in radiative cloud-crushing simulations with large cloud sizes \citep[e.g.,][]{2016MNRAS.462.4157A,2018MNRAS.480L.111G,2020MNRAS.492.1970G,2020MNRAS.499.4261S,2021MNRAS.501.1143K,2020MNRAS.492.1841L,2021arXiv210110344A}. As RPS is much more efficient in {\tt ICM-P14}{}, the hot-to-cool phase transition layer quickly moves outside the simulation domain.
At $t>350\Myr$, only the cool-to-hot phase transition occurs within our simulation domain, and the gas escapes mostly in the form of the hot phase (see also \autoref{fig:out_in_strong}). While not followed in our simulations, the hot-to-cool phase transition can still occur far above the disk midplane. \subsubsection{Momentum Transfer}\label{sec:mom_transfer} \begin{figure*} \centering \includegraphics[width=\textwidth]{pdot_tz.png} \caption{Net vertical momentum change rates per unit area for each phase due to phase transition. The space-time bins have widths $\Delta t=10\Myr$ and $\Delta z=400\pc$. The weight of the gas in each phase (only significant in the cool phase) is included as a sink. The dashed lines in each panel show the ICM-ISM interface as defined in \autoref{fig:sicm}. Noisy, checkerboard-like patterns below the ICM-ISM interface reflect the non-conservation errors of the analysis, mainly due to strongly time-varying fluxes. The hot ICM's momentum flux is transferred to the cool ISM.} \label{fig:pdot_tz} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{pdot_tz-P7h.png} \caption{Term-by-term decomposition of the vertical momentum change rates (see \autoref{eq:qnet}) for the {\tt ICM-P7h}{} model. We show (a) the net momentum change rates per unit area, identical to \autoref{fig:pdot_tz}(d). There is no explicit source term, while we show (b) the time-dependent term and (f) the sink term due to gravity. The flux term is further decomposed into the (c) kinetic, $\rho v_z^2$, (d) thermal, $P$, and (e) magnetic, $P_B-B_z^2/4\pi$, flux terms. The momentum transferred from the hot to the cool phase is mostly used to provide support against the increased weight. At later times (active stripping stage), the cool phase gains kinetic flux. } \label{fig:pdot_tz_P7h} \end{figure*} \autoref{fig:pdot_tz} shows the net vertical momentum gain/loss rates per unit area for each phase.
The weight of each phase is included as a sink, but only the weight of the cool phase is significant. To understand the plot, it is important to keep in mind that the vertical momentum $\rho v_z$ is signed: a positive value (red) means a gain of upward momentum as well as a loss of downward momentum. The {\tt noICM}{} model shows a change of sign in each phase across $z=0$. The hot phase loses its upward momentum flux in the upper disk (blue at $z>0$) and its downward momentum flux in the lower disk (red at $z<0$) -- in short, the hot phase loses the \emph{outward} momentum flux and the cool phase gains it. In other words, as SN-driven superbubbles expand, the cool phase is accelerated outward by receiving the outward momentum flux of the hot phase. Note that the weight term of the cool phase dominates the net gain (see also \autoref{fig:pdot_tz_P7h}). This means that the continuous momentum transfer from the hot phase (or SN momentum injection) enables the cool phase (the mass-dominating component) to remain vertically extended (more extended than thermal and magnetic support alone would allow). In the ICM models, as soon as the ICM-ISM interface reaches the midplane $z=0$, \autoref{fig:pdot_tz} reduces to upward momentum gain/loss in the upper disk. As the ICM pressure gets stronger, the hot ICM dominates the momentum transfer, which occurs over a larger region of the upper disk. In the weak ICM models, the momentum transfer is still limited to $z<1-2\kpc$. In these models, the vertical momentum flux gain in the cool phase is simply counterbalanced by the increased weight, confining the disk within the simulation domain (no significant stripping). In the {\tt ICM-P7h}{} model, significant hot-to-cool momentum transfers occur all over the upper disk as the ICM fills a larger volume in this region.
In contrast to the mass transfer, which shows a clear dichotomy of cool-to-hot and hot-to-cool phase transition layers at different heights (\autoref{fig:mdot_tz}(d)), the vertical momentum is always transferred from the hot phase to the cool phase. The momentum transfer to the intermediate phase, however, closely follows the mass transfer: it gains momentum when it gains mass. On the one hand, near the ICM-ISM interface, where the mass and momentum transfers have opposite signs in both the cool and hot phases, the cool phase is accelerated and gains vertical momentum while being shredded and losing mass through hydrodynamic instabilities. Here, the momentum transfer is mainly due to the drag force from the hot gas. The intermediate phase is newly populated by the accelerated and shredded cool phase, gaining both mass and momentum. On the other hand, in the upper region of the disk, the hot phase, together with the intermediate phase, is continuously mixed and cooled back into the cool phase, delivering both mass and momentum from the hot (and intermediate) to the cool phase. There, the momentum transfer is more mixing-dominated. To aid a detailed understanding of the vertical momentum transfer, \autoref{fig:pdot_tz_P7h} shows a decomposition of each term in \autoref{eq:qnet} for each phase. From top to bottom, each row shows the (a) net gain/loss (identical to \autoref{fig:pdot_tz}(d)), (b) time-dependent term, (c) kinetic, (d) thermal, and (e) magnetic flux terms, and (f) sink term due to the weight. In the third column (hot), the ICM shock front marching upward is visible at late times. The hot ICM is thermalized at the shock -- gaining thermal flux and losing kinetic flux. The thermalized hot ICM then loses its thermal flux (pressure) through the interaction with the ISM, which turns into kinetic flux gains of all phases.
The hot kinetic flux gain is due to mass loading from the cool phase (\autoref{fig:mdot_tz}), while the cool kinetic flux gain represents acceleration by the drag force. Note that the thermal flux gains in the cool and intermediate phases are minimal, as the majority of the thermal flux transferred to these phases is radiated away (see \autoref{sec:energy_transfer}). The magnetic flux term is subdominant, although it can be as important as the thermal term near the midplane in the weaker or no-ICM models. The weight term is dominated by the cool phase, as shown in (f), and is larger than the kinetic flux gain of the cool phase at early times. At late times, the weight term becomes comparable to the kinetic flux gain of the cool phase, implying effective stripping. In short, the hot ICM's momentum flux transferred to the cool phase has been used to extend the ISM vertically and support the increased weight, while the kinetic momentum flux of the cool phase shows consistent gains at later times (active stripping phase). In the {\tt ICM-P14}{} model, the ICM pressure is so strong that the momentum gain in the cool phase is dominated by the kinetic flux gain, resulting in actual acceleration of the entire cool phase to a velocity larger than the escape velocity of the simulation domain. The majority of the cool ISM is ablated while being accelerated, and is quickly stripped away from the simulation domain. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{Edot_tz-P7h.png} \caption{Term-by-term decomposition of the energy change rates (see \autoref{eq:qnet}) for the {\tt ICM-P7h}{} model. We show the decomposition of (a) the net energy change rates per unit area into (b) the time-dependent term, (c) flux term (\autoref{eq:eneflux}), (d) source term (only in the hot phase), (e) sink term by cooling, and (f) sink term by gravity.
Noisy, checkerboard-like patterns below the ICM-ISM interface reflect the non-conservation errors of the analysis, mainly due to strongly time-varying fluxes. The majority of the energy flux transferred from the hot to the intermediate and cool phases is lost through cooling.} \label{fig:Edot_tz_P7h} \end{figure*} \subsubsection{Energy Transfer}\label{sec:energy_transfer} In the simulations, in addition to the energy added by SNe and ICM inflows, there is radiative heating by FUV radiation in the cool phase. However, this radiative heating is balanced by cooling within the same phase. The remaining energy transfer across thermal phases is simple: always from the hot to the cooler phases, regardless of the source of energy in the hot phase. As an example, \autoref{fig:Edot_tz_P7h} shows the decomposition of the energy transfer terms for the {\tt ICM-P7h}{} model. The sink term due to radiative cooling in (e) is much larger than the residual energy flux, which is then shared between the actual kinetic flux gain in (c) and the work done against gravity in (f) by the cooler phases. As seen in \autoref{fig:pdot_tz_P7h}, little energy ends up as thermal energy/pressure. In this particular model, the source term due to SNe is only significant at early times. Immediately after the ICM-ISM interface reaches the midplane, the overall energy loss is much larger than the explicit source term by SNe (panels (a) and (d)), implying that the energy is mostly delivered by the ICM inflows (panel (c)). Compared to the {\tt noICM}{} model, the additional energy input from the ICM enhances radiative cooling in all phases, including limited cooling in the hot phase. We discuss the enhancement of the X-ray luminosity as a function of RPS strength in \autoref{sec:diss_phase}.
\subsection{Mixing Driven Acceleration and Stripping}\label{sec:stripping_mixing} \begin{figure*} \centering \includegraphics[width=\textwidth]{vel_scalar_metal_all.png} \caption{The mass distribution of the cool phase in the metallicity ($Z$) and vertical velocity ($v_z$) plane (1st and 3rd columns) and in the ICM mass fraction $\sicm$ and vertical velocity ($v_z$) plane (2nd and 4th columns) over $z=1-2\kpc$. We consider two epochs, at early times $t=260-280\Myr$ (left two columns) and late times $t=360-380\Myr$ (right two columns). From top to bottom, we show the models in ascending order of the ICM pressure: {\tt ICM-P1}{}, {\tt ICM-P3h}{}, {\tt ICM-P7h}{}, and {\tt ICM-P14}{}. As a reference, we show the mass distribution of the {\tt noICM}{} model in the first and third columns as contours. In the second and fourth columns, we plot the linear relation between $v_z^{\rm cool}$ and $\sicm[cool]$ (\autoref{eq:vzcool_icm}). This can be translated into a linear relation between $v_z^{\rm cool}$ and $Z$ using \autoref{eq:Z} (assuming $\ssn=0$), as shown in the first and third columns.} \label{fig:svz_cool} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{out_in_strong.png} \caption{Outward fluxes of each phase through $z=3\kpc$ for the strong ICM models (top: {\tt ICM-P7h}{}; bottom: {\tt ICM-P14}{}), normalized by the injected fluxes (by the ICM and SNe). Colors of lines and symbols denote the different thermal phases: blue for cool, orange for intermediate, and red for hot. The points denote the centers of the time bins over which the time-averaged outward flux (selecting only gas with $v_z>0$) is calculated using \autoref{eq:Favg} with \autoref{eq:massflux} for the mass flux, \autoref{eq:momflux} for the momentum flux, and \autoref{eq:eneflux} for the energy flux (neglecting the Poynting flux term). The injected ICM fluxes are similarly calculated at $z=-3\kpc$ for the gas with $v_z>0$. In the {\tt ICM-P7h}{} model (top), the outflow rates are more or less constant over time, with significant mass carried out by the cool phase while the hot phase dominates the energy flux. In contrast, the {\tt ICM-P14}{} model shows a gradual decrease of the outgoing fluxes of the cool phase as the hot phase continuously shreds and entrains the cool phase in the hot outflows.} \label{fig:out_in_strong} \end{figure*} Regarding the stripping of the ISM, the main question is how the cool ISM gets accelerated. If the cool ISM were accelerated as a whole by the drag force (or ram pressure force) from the ICM, the cool ISM would simply gain outward momentum as is. In reality, the hot ICM penetrates through low-density channels in the multiphase ISM. Large relative velocities between the hot ICM and the cool ISM cause hydrodynamical instabilities, shredding the cool ISM and creating turbulent mixing layers. At the same time, strong radiative cooling due to the high cooling rate of the mixed gas in the mixing layers results in a mass (and associated momentum and enthalpy) influx from the hot to the cool gas. The competition between shredding and hot gas cooling leads to either a net gain or loss of cool gas mass. As the velocity associated with the mass gain (hot gas velocity) is in general significantly larger than the velocity associated with the mass loss (cool gas velocity), the cool ISM is accelerated by the momentum transfer associated with the mixing, while direct acceleration is still in play. As we show in \autoref{sec:phase_transition}, significant phase transitions occur over large regions of the ISM disk, especially in the strong ICM models, with corresponding momentum transfers seen in \autoref{sec:mom_transfer}. It is thus clear that mixing must play a role. In this subsection, we seek evidence that mixing-driven momentum transfer is actually the dominant mechanism for the cool ISM acceleration and stripping.
\emph{Mixing-driven momentum transfer} simply means that the more the hot phase mixes in, the faster the cool phase moves. If this is the dominant mechanism, there must be a linear correlation between the velocity of the cool ISM and the mass fraction of the hot ICM (\autoref{eq:vzcool_icm}; see also \citealt{2020ApJ...895...43S,2021ApJ...911...68T}). The same is true for SN-driven outflows. In our simulations, however, the SN ejecta tracer field is a less sensitive probe of the mixing-driven momentum transfer, since it has been accumulated in all gas phases over many star formation-feedback cycles. The ICM mass fraction, on the other hand, provides a telltale sign of the acceleration of the cool phase by the mixing in of the hot ICM. Given the large difference between the metallicities of the ICM and ISM, this also imprints a noticeable difference in the metallicity of the fast-moving cool ISM. We choose two epochs ($t=260-280\Myr$ and $t=360-380\Myr$) and select the cool phase within $z=1-2\kpc$. \autoref{fig:svz_cool} plots the mass distribution in the metallicity $Z$ and vertical velocity $v_{z}^{\rm cool}$ plane (first and third columns) and in the ICM mass fraction $\sicm[cool]$ and vertical velocity $v_z^{\rm cool}$ plane (second and fourth columns). The solid lines in the second and fourth columns are the corresponding predictions from \autoref{eq:vzcool_icm}. The corresponding predictions for the metallicity, assuming $\ssn=0$ and using \autoref{eq:Z}, are shown in the first and third columns. For the fourth column, we apply an offset using the mean $\sicm[cool]$, as the baseline ICM fraction in the cool phase is nonzero and gradually increases. Except for the {\tt ICM-P1}{} model, all models show tight correlations between the ICM mass fraction in the cool phase and the outward vertical velocity of the cool phase in the early epoch.
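The qualitative content of the linear relation can be seen in a momentum-conserving mixing toy (a sketch under simplifying assumptions, not \autoref{eq:vzcool_icm} itself): if a cool parcel initially at rest absorbs ICM mass moving at velocity $v_{\rm ICM}$, its velocity is exactly proportional to its ICM mass fraction.

```python
def mix_parcel(m_cool, v_cool, m_icm, v_icm):
    """Momentum-conserving mixing: a cool parcel (mass m_cool, velocity
    v_cool) absorbs ICM mass m_icm moving at v_icm. Returns the velocity
    and ICM mass fraction of the mixed parcel. (Illustrative toy; the
    paper's relation also accounts for a nonzero baseline ICM fraction.)"""
    m = m_cool + m_icm
    v = (m_cool * v_cool + m_icm * v_icm) / m
    return v, m_icm / m

# Starting from rest, v = f_icm * v_icm: the more ICM mixed in, the
# faster the cool gas moves, i.e., a linear v_z--f_ICM correlation.
```

Drag-force acceleration adds momentum without adding ICM mass, which is why points lying above the linear relation signal a residual ram pressure contribution.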
The high-velocity component accelerated by the ICM appears as the low-metallicity (anti-correlated) component in the $Z$-$v_z^{\rm cool}$ panels. Similarly, the mixing in of hot gas created by SNe produces a correlation between the metallicity excess and the vertical velocity, but this correlation is less clear. The mixing-driven acceleration by the SN-origin hot gas is only visible in the early epoch of the {\tt ICM-P3h}{} model -- the high-velocity component correlated with metallicity $Z/Z_\odot\ge 1.2$. The contribution of the ICM mixing increases as the ICM pressure increases, dominating over the SN-origin mixing component. Generally, the outflow velocity lies above the simple linear prediction, indicating that acceleration by ram pressure still contributes on top of the mixing-driven momentum transfer. The direct ram pressure drag is more important in the earlier evolution and in the stronger ICM models, when the ISM reacts to the ICM as a whole. At late times, cool clouds are more fragmented, so the cross-section to the ICM inflows gets smaller while the surface area of the mixing layers gets larger. In the late epoch (we omit the {\tt ICM-P14}{} model since there is almost no cool phase gas left at this epoch), a significant fraction of the cool phase gas falls back in the weak ICM models. Since this gas has previously been pushed upward mostly by the ICM, $\sicm[cool]$ is generally enhanced. The correlation between $\sicm[cool]$ and $v_z^{\rm cool}$ becomes less clear in the {\tt ICM-P3h}{} model due to the lack of continuous acceleration by the ICM, but remains tight in the {\tt ICM-P7h}{} model. At this epoch, the mean metallicity of the cool phase in both the {\tt ICM-P3h}{} and {\tt ICM-P7h}{} models is greatly reduced compared to that of the {\tt noICM}{} model shown in contours; it decreases by at least 0.1 dex in both models due to the ICM mixing.
We now ask: when the ISM is stripped by mixing with phase transitions, which phase dominates the outflows, hot (by shredding and escape before cooling) or cool (by cooling of the stripped gas)? In \autoref{sec:phase_transition}, we showed that the mass-dominating cool phase is shredded near the ICM-ISM interface and first stripped in the form of the intermediate and hot phases. Significant cooling then occurs within the simulation domain ($z\sim 2-3\kpc$) for the marginally strong ICM model ({\tt ICM-P7h}{}), but the majority of the stripped gas escapes the simulation domain before cooling in the strongest pressure model ({\tt ICM-P14}{}). To provide a more quantitative view, in \autoref{fig:out_in_strong} we measure the outgoing fluxes ($F_{q,{\rm out}}$) at $z=3\kpc$, normalized by the injected fluxes, in the strong ICM models. The injected fluxes include the ICM inflow fluxes ($F_{q, {\rm in}}$) measured at $z=-3\kpc$ and those injected by SNe ($F_{q, {\rm SN}}$).\footnote{We calculate the mass, momentum, and energy fluxes from SNe using $\Delta N_{\rm SN}M_{\rm ej}/A\Delta t$, $\Delta N_{\rm SN} p_{\rm ref}/A\Delta t$, and $\Delta N_{\rm SN} E_{\rm SN}/A\Delta t$, respectively. For the momentum, we use the reference momentum $p_{\rm ref}=E_{\rm SN}/(2 v_{\rm cool})=1.25\times10^5\Msun\,{\rm km\,s^{-1}}$ of a SNR at the radiative stage \citep{2020ApJ...900...61K}, while the SN ejecta mass $M_{\rm ej}=10\Msun$ and SN explosion energy $E_{\rm SN}=10^{51}\erg$ are our input parameters for SN feedback.} Throughout the simulation, the contributions of SN feedback to the total injected mass, momentum, and energy fluxes are 5\%, 28\%, and 17\% for the {\tt ICM-P7h}{} model and 3\%, 14\%, and 6\% for the {\tt ICM-P14}{} model, respectively. In the top row ({\tt ICM-P7h}{}), the cool phase carries significant and continuous outflowing mass, momentum, and energy fluxes.
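The quoted reference momentum follows from a one-line unit conversion. The check below assumes $v_{\rm cool}\simeq200~{\rm km\,s^{-1}}$, the value implied by the quoted $p_{\rm ref}$ together with $E_{\rm SN}=10^{51}\erg$:

```python
# CGS unit check of the reference momentum p_ref = E_SN / (2 v_cool)
MSUN = 1.989e33          # g, solar mass
KMS = 1.0e5              # cm/s per km/s

E_SN = 1.0e51            # erg, SN energy (input parameter)
v_cool = 200.0 * KMS     # cm/s; assumed value implied by the quoted p_ref

p_ref = E_SN / (2.0 * v_cool)          # g cm/s
p_ref_msun_kms = p_ref / (MSUN * KMS)  # close to the quoted 1.25e5 Msun km/s
print(f"p_ref ~ {p_ref_msun_kms:.3g} Msun km/s")
```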
This is in stark contrast to the weak ICM models (and the {\tt noICM}{} model), where all outflow fluxes at $z=3\kpc$ are dominated by the hot phase at a level of a few \% of the injected fluxes \citep[e.g.,][]{2020ApJ...900...61K} and are frequently truncated by inflows. The cool outflows in the {\tt ICM-P7h}{} model consist of both the directly accelerated cool phase (mostly at early times) and additional cool phase created in the hot-to-cool phase transition layer (see \autoref{fig:mdot_tz}). Roughly 10-20\% of the injected momentum flux is transferred to the outflowing cool phase. The energy flux in the hot phase at this distance is about 10-20\% of the injected flux, as significant thermal energy is transferred to the cooler phases and then radiated away. The energy flux in the cool and intermediate phases is much lower still (a few percent), again because of large cooling losses (see \autoref{fig:Edot_tz_P7h}). In the bottom row, strong outgoing fluxes in the cool phase exist only at very early times. The hot phase soon dominates all fluxes as the cool gas is shredded and evaporated into the hot phase. This \emph{mass loading} (or entrainment) into the hot outflowing gas increases the hot gas mass flux by up to a factor of two compared to the injected mass flux. Similar behavior is seen in the {\tt ICM-P7h}{} model at $t>400\Myr$. The maximum momentum transfer efficiency to the cool phase reaches up to 50\%. To summarize, both the {\tt ICM-P7h}{} and {\tt ICM-P14}{} models show mixing-driven ISM stripping soon after the direct cool phase acceleration, and the cool phase dominates the outflows in the early epoch. Then, in {\tt ICM-P14}{}, the hot phase takes over the outflows, whereas the cool phase outflow is maintained to some extent in {\tt ICM-P7}{}. \begin{figure} \centering \includegraphics[width=\columnwidth]{Z.png} \caption{Outflow metallicity measured at $z=3\kpc$ for the (a) weak and (b) strong ICM models.
Colors of lines and symbols denote the different thermal phases: blue for cool, orange for intermediate, and red for hot. The same {\tt noICM}{} points are repeated in both panels.} \label{fig:Zout} \end{figure} Finally, we also make use of the metallicity of the outflowing gas to quantify the contribution of the ICM to accelerating the gas. \autoref{fig:Zout} plots the metallicity of the outflowing gas ($v_z>0$) at $z=3\kpc$ for the (a) weak and (b) strong ICM models, along with the {\tt noICM}{} model in both panels as a reference. In the {\tt ICM-P1}{} model, the metallicities of the outflow in all three phases are essentially unchanged from those in the {\tt noICM}{} model. This is expected, as the ICM cannot penetrate directly to the upper disk. The ICM is mixed into the ISM near the midplane, but the associated mass flux is insignificant. The outflowing gas is mostly driven out by SNe with enhanced metallicity. The ICM makes a noticeable difference in the {\tt ICM-P3h}{} model. At $t\sim300-330\Myr$, the cool outflow metallicity is clearly reduced, while the hotter phases still show metallicities similar to those in the {\tt noICM}{} and {\tt ICM-P1}{} models. The reduced metallicity in the cool outflow is a signature of ICM mixing-driven acceleration, as evidenced in \autoref{fig:svz_cool}. At later times ($t>350\Myr$), the outflow metallicity is significantly reduced, nearly equally in all phases, and then increases again. The decrease of metallicity indicates that the mixing of the ICM into the ISM is the main driver of the outflows, while the later increase of the metallicity signals that SN feedback again plays a major role in driving outflows. In the strong ICM models shown in \autoref{fig:Zout}(b), the metallicity of outflows is reduced at all times and keeps decreasing. This makes it plain that SNe in the strong ICM models are not a major driver of outflows, except at very early times in the {\tt ICM-P7h}{} model, as seen in \autoref{fig:svz_cool}.
In \autoref{fig:Zout}(b), we also find that the outflow metallicity in the cool and intermediate phases is very similar for both strong ICM models, while the hot outflow metallicity is reduced more for stronger ICM pressure. The distribution of $\sicm[cool]$ shown in \autoref{fig:svz_cool} shows $\sicm[cool]<0.1-0.2$. Given that mixing is the main mechanism driving outflows (or stripping), the limited range of $\sicm[cool]$ implies that the outflowing cool gas would be ablated and evaporated before it could mix in too much hot ICM. The maximum ICM fraction then sets the maximum velocity and the minimum metallicity difference of the cool gas accelerated by the ICM. \section{Impact of the ICM on Star formation} \label{sec:sfr} We take advantage of the self-consistent modeling of star formation and feedback implemented in the TIGRESS framework to study the impact of the ICM ram pressure on star formation in and out of the ISM disks. We first present the changes in overall SFRs and their links to dense gas in the simulations in the presence of the ICM inflows. We then take a detailed look at extraplanar star formation. \subsection{Enhancement and quenching of star formation}\label{sec:sfr_dichotomy} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{SFR_Mdense.png} \caption{\textbf{Left:} time evolution of (a) SFR surface density $\Sigma_{\rm SFR}$ (\autoref{eq:SFR}) and (c) dense gas surface density $\Sigma_{\rm dense}\equiv \Sigma_{\rm gas}(n_H>10\pcc)$. The colored solid lines correspond to the models with different ICM pressures, while the black dashed line is the {\tt noICM}{} model. \textbf{Right:} box and whisker plots of (b) $\Sigma_{\rm SFR}$ and (d) $\Sigma_{\rm dense}$ for the early ($t<330\Myr$) and late ($t>330\Myr$) periods. Boxes extend from the 25th to the 75th percentile, with the median (white dashed horizontal line) and mean (red circle) marked.
Whiskers represent the 5th to 95th percentiles, with outliers shown as diamonds.} \label{fig:sfr} \end{figure*} The general expectation is that strong ICM ram pressure, which can strip the gas from galaxies, will reduce their SFRs. At the same time, mild ICM ram pressure that compresses the gas in galaxies may enhance SFRs. Indeed, in our simulations, we find both effects depending on the ICM strength and evolutionary stage. In short, SFRs are enhanced locally (inside the truncation radii) and temporarily (before active stripping), but the gas stripping eventually quenches star formation. \autoref{fig:sfr}(a) plots the time evolution of the SFR surface density, defined by the total mass of young stars formed in the last $t_{\rm bin}=10\Myr$: \begin{equation}\label{eq:SFR} \Sigma_{\rm SFR}(\Delta t=t_{\rm bin}) \equiv \frac{\Sigma m_{\rm sp}(t_{\rm m} < t_{\rm bin})}{L_xL_yt_{\rm bin}}, \end{equation} where $m_{\rm sp}$ and $t_{\rm m}$ are the mass and mass-weighted mean age of the sink particles representing star clusters, respectively. This roughly corresponds to SFRs traced by H$\alpha$ \citep[e.g.,][]{2012ARA&A..50..531K}. \autoref{fig:sfr}(b) shows the box and whisker plots, presenting the distributions of $\Sigma_{\rm SFR}$ over two periods separated by $t=330$~Myr, before and after the quenching of star formation in the strong ICM models. The enhancement of SFRs compared to the {\tt noICM}{} model in the early epoch is common to all models with the ICM. At later times ($t>330$~Myr), such enhancement of SFRs persists in the weak ICM models, while the gas stripping quenches star formation in the strong ICM models. The enhancement levels in $\Sigma_{\rm SFR}$ are $\sim 30\%$ to $50\%$ in the weak ICM models over the more than 200 Myr explored in this paper. Also, the temporal modulation of $\Sigma_{\rm SFR}$ in these models gets stronger, with higher peaks.
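A minimal sketch of how \autoref{eq:SFR} can be evaluated from sink particle data (the arrays, box size, and units below are illustrative, not taken from the simulations):

```python
# Sketch of Eq. (eq:SFR): SFR surface density from sink particles.
import numpy as np

def sigma_sfr(m_sp, t_m, Lx, Ly, t_bin=10.0):
    """Sum the mass of sinks with mass-weighted mean age t_m < t_bin,
    divided by the horizontal area Lx*Ly and by t_bin
    (Msun pc^-2 Myr^-1 with the units used here)."""
    young = t_m < t_bin
    return m_sp[young].sum() / (Lx * Ly * t_bin)

# Illustrative inputs (not from the paper):
m_sp = np.array([1.0e4, 5.0e3, 2.0e4])   # Msun
t_m  = np.array([3.0, 12.0, 8.0])        # Myr
print(sigma_sfr(m_sp, t_m, Lx=1024.0, Ly=1024.0))  # only the 3 and 8 Myr sinks count
```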
The enhancement of SFRs in the early epoch is mainly due to the compression of the overall ISM disk in the vertical direction. The introduction of the ICM inflows simply pushes the ISM from the lower disk to the midplane, effectively supplying more gas for star formation. In the weak ICM models, this \emph{additional} gas remains near the midplane, where the majority of star formation takes place. However, strong ICM inflows can blow away the ISM altogether in $\sim 100\Myr$. \autoref{fig:sfr}(c) and (d) show the time evolution and the box and whisker plots of the dense gas surface density $\Sigma_{\rm dense}$, selected by $n_H>10\pcc$. The first compression increases the peak dense gas mass at $\sim 270\Myr$ by about a factor of two in all ICM models. The corresponding enhancement of SFRs is delayed by $\sim 10\Myr$, the free-fall time of gas at $n_H=10\pcc$ \citep{2020ApJ...898...52M}. The enhancement of $\Sigma_{\rm dense}$ persists in the weak ICM models. In the strong ICM models, however, the dense gas mass quickly decreases over time due to shredding and stripping by the ICM. In the {\tt ICM-P7}{} model, the dense gas persists longer than in the {\tt ICM-P14}{} model. Some of the dense gas pushed far above the disk manages to form stars at late times (see \autoref{sec:extra_sf}). \subsection{Extraplanar Star Formation}\label{sec:extra_sf} \begin{figure*}[!ht] \includegraphics[width=1\linewidth]{loc.png} \caption{{\bf Top:} vertical position at which new star clusters are born. The size of each symbol represents the mass of the star cluster. {\bf Bottom:} metallicity of new star clusters, colored by the star formation position. Star clusters with significantly low metallicity are marked by black dotted circles in both panels.} \label{fig:LOC} \end{figure*} One of the intriguing properties commonly found among RPS galaxies is star-forming patches outside the stellar disk, which remains intact.
In \autoref{fig:LOC}(a), we show the vertical distance of the newly formed star clusters (sink particles) from the midplane, $z_{\rm sf}$, over time. The size of the symbols represents the mass of the star clusters. The black star symbols are for the {\tt noICM}{} model. On the one hand, for the strong ICM models, the bulk ISM keeps moving away from the midplane. As a consequence, $z_{\rm sf}$ increases over time. This continues for the {\tt ICM-P14}{} model, while the {\tt ICM-P7}{} and {\tt ICM-P7h}{} models show a turnover. Although one may get the impression that the ISM continuously moves upward in these models (see \autoref{fig:sicm}), the main gas reservoir is fragmented, and a large chunk of dense gas falls back (\autoref{fig:slc_late}(b)). As a result, two star-forming sites, near and far from the midplane, are visible at late times in the {\tt ICM-P7}{} model. On the other hand, for the weak ICM models, as more and more gas moves into the upper disk, the ISM weight soon dominates the ICM pressure. The entire ISM disk falls back, and so does the star formation location. This introduces larger-amplitude vertical oscillations of $z_{\rm sf}$ in the {\tt ICM-P3}{} and {\tt ICM-P1}{} models than in the {\tt noICM}{} model, in which a small-amplitude vertical oscillation arises naturally from the asymmetry (\autoref{fig:noICM}). \autoref{fig:LOC}(b) plots the metallicity of the sink particles. Each symbol is now color-coded by the $z_{\rm sf}$ shown in \autoref{fig:LOC}(a). In the {\tt noICM}{} model, the metallicity of new star clusters increases over time as the star-forming gas is continuously metal-enriched by the mixing of high-metallicity SN ejecta. The injected SN ejecta first go into the hot phase and then quickly cool and mix into the cool phase (see the top row of \autoref{fig:mdot_tz}). We find that the metallicity within the cool phase is nearly homogeneous, implying efficient mixing of SN ejecta into the cold, star-forming gas.
The metallicity of new stars born within the main ISM disk follows a common enrichment trend even with the ICM inflows, implying that they are born in the genuine ISM. However, in the strong ICM models, star clusters formed in the stripped gas far from the midplane at late times (marked by black circles) show lower metallicities compared to the enrichment trend. These star clusters are born in gas that has experienced significant mixing with the low-metallicity ICM. The gradual mixing of the ICM in the {\tt ICM-P3}{} and {\tt ICM-P3h}{} models also reduces the metallicity of new stars at late times ($t>400\Myr$), while higher SFRs with an insignificant mass contribution from the ICM in the {\tt ICM-P1}{} model result in an even higher metallicity of new stars. While reduced, the metallicity is still much higher than the ICM metallicity, implying that the composition of the star-forming gas in the extraplanar region is dominated by the genuine ISM. In our simulations, there is no sign of complete shredding and recondensation of the star-forming cold gas in the stripped tails within the simulation domain $z<3.5\kpc$, which is likely generally true for extraplanar star formation within a few kpc of the disks of RPS galaxies.
\section{Discussion}\label{sec:discussion} \subsection{Ram Pressure Stripping as a Mixing-Driven Acceleration Process: Observational Imprints} The multiphase nature of the ICM-ISM interaction is often neglected when developing theoretical understanding based on simple analytic models, although multiwavelength observations have revealed the multiphase gas involved in RPS galaxies, such as cold molecular gas via CO \citep[e.g.,][]{2008A&A...491..455V,2015A&A...582A...6V,2018MNRAS.475.4055M,2017ApJ...839..114J,2019ApJ...883..145J}, cold and warm neutral gas via \ion{H}{1} \citep[e.g.,][]{1990AJ....100..604C,2009AJ....138.1741C,2010MNRAS.403.1175S}, warm ionized gas via H$\alpha$ \citep[e.g.,][]{fumagalli2014,boselli2016a}, and hot gas via X-rays \citep[e.g.,][]{sun2010,poggianti2019multiphase}. We show that the mass, momentum, and energy transfer from the hot ICM to the ISM via gas mixing is likely the dominant mechanism for stripping in our \emph{multiphase RPS} simulations. This is a qualitatively different process from a simple acceleration by ram pressure without phase transitions and mixing. A wealth of observational signatures will be imprinted on the different gas phases. \subsubsection{Imprints in Metallicity of Stripped Tails} The main observational imprint of the mixing-driven acceleration model is the anti-correlation between metallicity and cool gas velocity in the stripped tails (\autoref{fig:svz_cool}).
If the ICM has a distinctively lower metallicity than the ISM, as we assumed, the fast-moving part of the stripped gas should have lower metallicity than the genuine ISM.\footnote{When SNe are the major source of the hot gas that mixes into the cool ISM, the metallicity in the fast-moving cool gas is likely enhanced compared to the genuine ISM \citep[see][]{2020ApJ...900...61K,2020ApJ...895...43S,2022ApJ...924...82F}.} Assuming $\ssn=0$, \autoref{eq:Z} and \autoref{eq:vzcool_icm} give the slope of the metallicity-velocity correlation, $dZ/dv = (Z_{\rm ICM} - Z_{\rm ISM})/v_{\rm ICM}$. For $Z_{\rm ICM}/Z_{\rm ISM}=0.1$ and $v_{\rm ICM}=1000\kms$, this implies a metallicity reduction in the mixed cool gas of \begin{equation} \frac{Z_{\rm mix}}{Z_{\rm ISM}}= 1 - 0.09 \rbrackets{\frac{\Delta v}{100\kms}}, \end{equation} i.e., roughly a 10\% reduction for a 100~km/s difference in outflow velocity. The metallicity reduction decreases by a factor of two if $Z_{\rm ICM}/Z_{\rm ISM}=0.3$ and $v_{\rm ICM}=1400\kms$. It is also noteworthy that the mixed ICM fraction in the cool gas cannot be arbitrarily high. Although efficient cooling helps to keep the mixed gas cool, the cool ISM can evaporate if the energy flux from the hot ICM is too large to be radiated away in the mixing layer. In our models, the difference in $\sicm$ between high- and low-velocity cool gas is typically less than 0.1. This limits the dynamic range of outflow velocity to less than $200\kms$ even in {\tt ICM-P14}{} with $v_{\rm ICM}=2000\kms$. Similar results are also seen in numerical simulations of an isolated galaxy experiencing hot ICM inflows \citep{2021ApJ...911...68T}, in which accelerated clouds are followed in stripped tails on scales of tens to hundreds of kpc.
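The slope quoted above can be checked with a short numerical sketch (assuming $\ssn=0$; the function name and default values are ours):

```python
# Sketch of the metallicity-velocity anti-correlation (assuming s_sn = 0):
# dZ/dv = (Z_ICM - Z_ISM) / v_ICM, which rearranges to
# Z_mix/Z_ISM = 1 + (Z_ICM/Z_ISM - 1) * (dv / v_ICM).

def z_mix_ratio(dv, z_ratio=0.1, v_icm=1000.0):
    """Mixed-gas metallicity relative to the ISM for a velocity offset dv (km/s)."""
    return 1.0 + (z_ratio - 1.0) * dv / v_icm

print(z_mix_ratio(100.0))                             # 0.91 -> ~9% reduction per 100 km/s
print(z_mix_ratio(100.0, z_ratio=0.3, v_icm=1400.0))  # ~0.95 -> reduction halved
```

The second call reproduces the factor-of-two weakening of the signal for $Z_{\rm ICM}/Z_{\rm ISM}=0.3$ and $v_{\rm ICM}=1400\kms$.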
Although stripped tails that have traveled very far from the disk can have much larger cloud $\sicm$, up to 0.8, the range of $\sicm$ at a particular position is limited to $\sim0.1-0.2$, translating into a velocity difference $\simlt 200-400\kms$. In addition to the maximum ICM fraction, a significant fraction of the total gas must span a wide range of outflow velocities for such mixed gas to be visible. Since the gas mass fraction generally decreases sharply at high velocities \citep{2020ApJ...903L..34K}, the observable velocity ranges, and hence the metallicity differences, can be further limited. Finally, the signal is cleaner only if the stripping occurs more quickly than the enrichment by SNe; this condition favors galaxies undergoing strong ram pressure stripping. All of the above makes the signal we are searching for, the gas-phase metallicity difference at different outflow velocities, quite small (of order 10\% or less). If the mixing-driven stripping continues beyond the immediately stripped tails we model here, there must be a well-defined trend in the mixed gas fraction as a function of distance across the very long tails of RPS galaxies (often dubbed jellyfish galaxies). Indeed, global RPS galaxy simulations forming such long tails ($>100\kpc$) show a correlation between clouds' distance and ICM mass fraction \citep{2021ApJ...911...68T}. With the assumption that the mixing rate is constant over time, \citet{2021ApJ...911...68T} laid out a simple model for the ICM mass fraction as a function of distance, which qualitatively agrees with the increasing ICM mass fraction in clouds farther away in their simulations. A recent analysis of MUSE observations of RPS galaxies shows that warm ionized gas metallicities decrease as a function of distance from the stellar disks \citep{2021ApJ...922L...6F}.
The stripped gas, in reality, would experience much more complicated dynamical and thermal evolution, including deceleration by gravity, evaporation/fragmentation, and perhaps recondensation/growth by cooling in the mixing layers. A simple extrapolation of the clouds' velocity and ICM mass fraction to very large distances may therefore not work well in predicting velocity and metallicity correlations quantitatively. Still, a potentially illuminating result in \citet{2021ApJ...911...68T} (see their Figure 9) is that the correlation between the cold clouds' velocity and ICM fraction remains nearly linear over a large range of distances. Again, high-precision measurements of metallicity across velocity channels to measure the slope of the $v$-$Z$ correlation will be the most direct way to confirm whether mixing-driven acceleration is the dominant mechanism for ram pressure stripping. RPS galaxies often show star formation activity outside the main, old stellar disk \citep[e.g.,][]{1999AJ....117..181K,sun2007,poggianti2016}. The mixing of the ICM also creates an imprint on the metallicity of stars formed in the extraplanar region. In the strong ICM models, the extraplanar star formation in the stripped tails occurs 2-3~kpc above the stellar disk, creating star clusters with lower metallicity (by 0.05--0.1 dex) than those formed in the disk (see \autoref{fig:LOC}). Observationally, the relative difference in stellar metallicity between young stars in the unstripped inner disk and those in the stripped extraplanar region can be compared. If the intrinsic metallicity gradient of galaxies is subtracted, a stacking analysis may enhance a potential signal. \subsubsection{Imprints in Gas Phases}\label{sec:diss_phase} RPS not only strips the cool ISM as is, but also involves significant phase transitions from the cool to hotter phases (see \autoref{sec:stripping}).
In the regions that experience strong RPS, the shredded cool gas escapes the simulation domain (i.e., is stripped from the galaxy) before it cools back (e.g., {\tt ICM-P14}{}). This is manifested in the \ion{H}{1} deficiency of RPS galaxies \citep{2009AJ....138.1741C,2019MNRAS.487.4580R,2020A&A...640A..22R}. At the same time, the mass-loaded hot gas gets brighter in X-rays. The fate of such stripped gas is not traced in our simulations, but the stripped gas may cool back and form \ion{H}{1}/H$\alpha${} tails \citep[e.g.,][]{2019MNRAS.487.4580R,2020A&A...640A..22R}. In fact, large-scale simulations do show late-time cooling at more than tens of kpc away from the disk \citep{2012MNRAS.422.1609T,2021ApJ...911...68T,2022arXiv220101316L}, which might be responsible for the long, extended tails seen in H$\alpha${} and CO \citep[e.g.,][]{2008A&A...491..455V,2017MNRAS.466.1382L,2017ApJ...839..114J,2019ApJ...883..145J}. Recently, sensitive, high-resolution observations of molecular gas tracers have revealed the prevalence of extraplanar molecular gas in RPS galaxies \citep[e.g.,][]{2008A&A...491..455V,2018MNRAS.475.4055M,2017ApJ...839..114J,2019ApJ...883..145J}. It is unclear whether extraplanar molecular clouds are remnants of molecular gas directly stripped from the ISM disk or were destroyed and re-formed in the extraplanar region. \citet{2018ApJ...866L..10L} used high-resolution ALMA observations of NGC~4522 and detected ${}^{13}$CO in extraplanar molecular clumps at $\lesssim$ a few kpc above the stellar disk near the truncation radii. Given the relatively short formation time of molecular gas ($\sim10\Myr$) compared to the stripping time scale ($\sim100\Myr$), they concluded that both scenarios are feasible. Our simulations lack the resolution and physical processes needed to follow molecular species explicitly.
Instead, if we consider the dense gas ($n_H>10\pcc$; see \autoref{fig:sfr}) as a proxy for the molecular gas, we find that in the {\tt ICM-P7h}{} model, where late-time extraplanar star formation occurs, (1) the dense gas fraction is comparable to that in the {\tt noICM}{} model and (2) most of the cold gas is located at 1-3~kpc above the midplane, within which stars form at late times. This cold, dense gas is not completely shredded and recondensed, but a significant fraction of the ICM has been mixed in, up to $\sicm\sim 0.15$, as evidenced by the lower metallicity of star clusters formed in the extraplanar regions (\autoref{fig:LOC}). Our simulations suggest that the extraplanar molecular gas (not that in the stripped tails farther than a few tens of kpc from the disk) mostly originates from gas directly stripped from the disk. Still, the ICM mixing is important for accelerating the molecular gas (\autoref{fig:svz_cool}). Given that the {\tt ICM-P14}{} model quickly runs out of its dense gas, as the enthalpy flux remaining from the hot ICM after cooling is large enough to evaporate many small cold clouds, a marginally strong ICM condition can be optimal for pushing the molecular gas outward without destroying it. This translates into an expectation that the extraplanar molecular gas in active RPS galaxies should be most abundant near the truncation radii, which seems to be consistent with observations \citep{2018ApJ...866L..10L,2018MNRAS.475.4055M}. \begin{figure*} \centering \includegraphics[width=\textwidth]{Xray.png} \caption{Soft X-ray surface brightness, $\mathcal{S}_{X,{\rm 0.3-2\,keV}}$, as a function of the SFR surface density in the past 40~Myr, $\Sigma_{\rm SFR,40}$. The ICM models presented in this paper are shown as colored points, while the TIGRESS suite results (C.-G. Kim et al. in prep) are shown as gray points with their mean values as black circles for reference.
The two diagonal dashed lines denote SN to soft X-ray efficiencies of $\epsilon_{\rm SN\rightarrow X}=1$ and 0.1\%. The horizontal dotted lines correspond to an ICM to soft X-ray conversion efficiency $\epsilon_{\rm ICM\rightarrow X}$ of 0.05\% for the given ICM total energy flux.} \label{fig:Xray} \end{figure*} Another potentially strong observable signature of RPS can be an enhanced diffuse X-ray brightness \citep[e.g.,][]{2020MNRAS.494.5967K,2021ApJ...911..144C}. The diffuse thermal X-ray emission from the hot ISM is expected to correlate with SN rates and hence SFRs. From observations of star-forming galaxies, \citet{2012MNRAS.426.1870M} obtained a linear scaling relation between the diffuse (excluding resolved X-ray binaries) soft X-ray luminosity in $0.5-2$~keV ($L_{X,{\rm 0.5-2\,keV}}$) and the SFR ($\dot{M}_*$) of $L_{X,{\rm 0.5-2\,keV}}/\dot{M}_* = 8.3\times10^{38}\ergs(M_\odot\yr^{-1})^{-1}$ (see also \citealt{2003A&A...399...39R,2014MNRAS.437.1698M}). Assuming a canonical SN energy of $10^{51}\erg$, the SN energy injection rate is $\dot{E}_{\rm SN} = 3.3\times10^{41}\ergs (\dot{M}_*/(M_\odot\yr^{-1}))$ for the standard initial mass function (1 SN per 100 $M_\odot$; e.g., \citealt{2001MNRAS.322..231K}). Then, the observed relation corresponds to a SN energy to soft X-ray conversion efficiency (or ``soft X-ray efficiency'' in short) of $\epsilon_{\rm SN\rightarrow X}\equiv L_{X,{\rm 0.5-2\,keV}}/\dot{E}_{\rm SN}\sim0.25\%$.\footnote{\citet{2012MNRAS.426.1870M} calculated the intrinsic bolometric luminosity and derived a SN thermalization efficiency of $L_{\rm bol}/\dot{E}_{\rm SN}\sim 5\%$. However, the calculation of the intrinsic bolometric luminosity is largely model-dependent and requires many uncertain scaling factors.
Here, we simply stick with the direct measurements of the diffuse soft X-ray luminosity and compare them with the forward-modeling results of our simulations.} The analysis of the TIGRESS suite \citep{2020ApJ...900...61K} shows similar soft X-ray efficiencies of $\epsilon_{\rm SN\rightarrow X}\sim0.1-0.2\%$, with higher efficiency for higher SFR surface density (C.-G. Kim et al. in prep; see \autoref{fig:Xray}). Here, we calculate the X-ray surface brightness for the ICM models and compare them with the TIGRESS suite. We first obtain the X-ray emissivity in the soft X-ray band ($0.3-2$~keV) for each cell using the {\tt apec.v2} table (\url{http://www.atomdb.org}; \citealt{2012ApJ...756..128F}) adopted in {\tt yt} \citep{2011ApJS..192....9T}. The gas metallicity is fixed to the solar metallicity. We then integrate the total soft X-ray luminosity and divide it by the area to get the mean X-ray surface brightness. Figure~\ref{fig:Xray} plots as colored points the soft X-ray surface brightness $\mathcal{S}_{X,{\rm 0.3-2\,keV}}$ as a function of $\Sigma_{\rm SFR,40}$ (\autoref{eq:SFR} with $t_{\rm bin}=40\Myr$, the duration of SNe in each cluster). The weak ICM models are all consistent with the relation for a soft X-ray efficiency of $0.1\%$ (lower dashed line), as is the {\tt noICM}{} model in the TIGRESS suite. However, the strong ICM models show an enhancement of the X-ray surface brightness consistent with a lower X-ray efficiency for the ICM inflows of $\mathcal{S}_{X,{\rm 0.3-2\,keV}}/(\dot{E}_{\rm ICM}/A) \sim 0.05\%$ (horizontal dotted lines), where the ICM total energy flux is $\dot{E}_{\rm ICM}/A = 0.5\rho_{\rm ICM}v_{\rm ICM}(v_{\rm ICM}^2+5c_{s, {\rm ICM}}^2)$. We note that the X-ray surface brightness of the pure ICM is much lower than that from the ICM-ISM interaction, mainly due to the ICM's low density.
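The efficiency arithmetic above, and the ICM energy flux normalization, can be sketched as follows (constants rounded; the function and variable names are ours):

```python
# Sketch of the soft X-ray efficiency arithmetic (cgs units).
L_X_per_SFR = 8.3e38    # erg/s per (Msun/yr), Mineo et al. (2012)
E_SN        = 1.0e51    # erg per SN
SN_per_Msun = 1.0 / 100.0  # 1 SN per 100 Msun of stars formed
yr          = 3.156e7   # s

# SN energy injection rate per unit SFR:
Edot_SN_per_SFR = E_SN * SN_per_Msun / yr            # ~3.2e41 erg/s per (Msun/yr)
print(f"Edot_SN/SFR ~ {Edot_SN_per_SFR:.2g}")
print(f"eps_SN->X   ~ {L_X_per_SFR / Edot_SN_per_SFR:.2%}")  # ~0.26%

# ICM total energy flux per unit area (bulk kinetic + enthalpy), as in the text:
def edot_icm_per_area(rho_icm, v_icm, cs_icm):
    return 0.5 * rho_icm * v_icm * (v_icm**2 + 5.0 * cs_icm**2)
```

The resulting $\sim0.26\%$ matches the $\sim0.25\%$ quoted in the text to within rounding.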
The somewhat lower X-ray efficiency of the ICM can be understood as a consequence of the larger mixing area involved in the ICM-ISM interaction than in superbubbles driven by SNe. In RPS galaxies, the diffuse X-ray brightness can then be enhanced by an order of magnitude compared to that expected purely from SNe before the majority of the shock-heated gas is stripped away (shown as the decreasing X-ray brightness at lower SFRs in the strong RPS models). \subsection{Ram pressure stripping and shock/wind-cloud interactions}\label{sec:rps_in_cloud_crushing} A detailed look at the RPS process as a multiphase gas-dynamical interaction reveals it to be reminiscent of a collection of shock/wind-cloud interactions. The main question in shock/wind-cloud interaction studies is how cold gas can be accelerated before it is completely shredded. In adiabatic cases, the drag/acceleration time scale is always longer than the cloud crushing time scale \citep{1994ApJ...420..213K}. In other words, the energy transferred from the hot shock/wind to the clouds is fully retained to heat up the clouds while surface instabilities shred them \citep[e.g.,][]{1994ApJ...420..213K,1994ApJ...433..757M,2009ApJ...703..330C,2015ApJ...805..158S,2015MNRAS.449....2M}. In order to prolong the cloud lifetime, several mechanisms have been proposed, including radiative cooling \citep[e.g.,][]{2009ApJ...703..330C} and magnetic fields \citep[in both wind and cloud, e.g.,][]{2008ApJ...677..993D,2015MNRAS.449....2M,2020MNRAS.499.4261S}. Recently, it has been realized that when clouds are large enough and cooling is strong, all the enthalpy flux can be radiated away while significant mass and momentum from the hot phase are mixed into the cool phase without completely shredding the clouds.
This allows the clouds to keep growing even while they are being accelerated by shock/wind-cloud interactions \citep[e.g.,][]{2016MNRAS.462.4157A,2018MNRAS.480L.111G,2020MNRAS.492.1970G,2021MNRAS.501.1143K,2020MNRAS.492.1841L,2019MNRAS.482.5401S,2020MNRAS.499.4261S}. It is certainly true that in our simulations, the chunk of the ISM facing the ICM inflows is large (a few hundred pc). The critical size above which cool clouds can grow by cooling of the hot gas, proposed by \citet{2018MNRAS.480L.111G}, is \begin{equation} R_{\rm crit}\approx 2\pc \frac{T_{\rm cl,4}^{5/2} \mathcal{M}_{\rm wind}}{P_3\Lambda_{\rm mix,-21.4}}\frac{\chi}{100}, \end{equation} where $T_{\rm cl,4}\equiv T_{\rm cl}/10^4\Kel$ is the cloud temperature, $P_3 \equiv P/(10^3k_B\pcc\Kel)$ is the ambient pressure, $\mathcal{M}_{\rm wind}$ is the hot wind Mach number, $\Lambda_{\rm mix,-21.4}=\Lambda(T_{\rm mix})/(10^{-21.4}\ergs {\rm\,cm^{3}})$ is the cooling coefficient at the temperature of the mixed gas, and $\chi$ is the density contrast between wind and clouds. The critical size\footnote{The exact size criterion for cool cloud growth is still under debate \citep{2021MNRAS.501.1143K}. \citet[][see also \citealt{2020MNRAS.499.4261S}]{2020MNRAS.492.1841L} suggest another criterion based on the hot gas cooling time and the cloud lifetime predicted from their simulations.} is of order a few to tens of parsecs under the typical conditions of our simulations, with $P/k_B\sim 10^{4-5}\pcc\Kel$, $\chi=10^{2-3}$, and a trans-to-subsonic wind Mach number (note that the ICM Mach number at injection was supersonic, but the ICM is quickly thermalized and becomes subsonic by the time of interaction near the midplane). The bulk ISM from the first interaction cannot be completely shredded, while there is continuous shredding at the interfaces as the ICM penetrates through low-density channels (see \autoref{fig:slc_early}).
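A quick numerical evaluation of the critical size formula; the parameter combinations below are illustrative rather than fits to our simulations (note that lowering $\Lambda_{\rm mix,-21.4}$ raises $R_{\rm crit}$ correspondingly):

```python
# Sketch evaluating the Gronke & Oh (2018) critical cloud size from the text.
def r_crit_pc(T4, mach_wind, P3, lam_mix, chi):
    """R_crit in pc.  T4 = T_cl/1e4 K, P3 = P/(1e3 kB cm^-3 K),
    lam_mix = Lambda(T_mix)/10^-21.4 erg s^-1 cm^3, chi = density contrast."""
    return 2.0 * T4**2.5 * mach_wind / (P3 * lam_mix) * (chi / 100.0)

# Trans-sonic wind, P/kB = 1e4 cm^-3 K (P3 = 10), chi spanning 1e2-1e3:
for chi in (100.0, 1000.0):
    print(f"chi = {chi:>6.0f}: R_crit ~ {r_crit_pc(1.0, 1.0, 10.0, 1.0, chi):.1f} pc")
```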
In the later evolution, the strong ICM models successfully strip the majority of the cool ISM from the disk midplane. Smaller, fragmented cold cloudlets are embedded in the ICM inflows (see \autoref{fig:slc_late}), and these are vulnerable to shredding/evaporation. This results in the broad cold-to-hot phase transition layer seen in \autoref{fig:mdot_tz}(d) and (e). However, these clouds' wakes meet other cool gas and add their mass back to the cool phase. This evolution is more akin to that seen in shock-multicloud interactions \citep{2021MNRAS.506.5658B,2020MNRAS.499.2173B} than to the growth of cool clouds in idealized shock/wind-cloud interaction simulations, where mass is added from an infinite hot reservoir via cooling of the mixed gas. \subsection{Star formation in RPS galaxies} The compression by the ICM can enhance SFRs by 30--50\% for a short period (a few tens of Myr), while the enhanced star formation is sustained in the weak ICM models (or the inner part of an RPS galaxy) as the ISM remains compressed. In our strong models, representing the outer region of an RPS galaxy, star formation is quenched on time scales of $\sim100$ Myr. Many previous simulations of RPS galaxies including star formation \emph{recipes} commonly show enhanced star formation activity before quenching \citep[e.g.,][]{2008A&A...481..337K,2012A&A...544A..54S,2017MNRAS.468.4107R}. Despite the qualitative agreement on the roles of RPS in star formation, the global star formation enhancement found in many of these simulations is usually higher than ours by a factor of a few and persists longer \citep[][]{2008A&A...481..337K,2012A&A...544A..54S,2017MNRAS.468.4107R}.
Keep in mind that earlier simulations in this category adopt parameterized models for the ISM (which cannot directly follow gas cooler than $10^4\Kel$) and star formation \citep[e.g.,][]{1992ApJ...399L.113C,2003MNRAS.339..289S} to model an entire galaxy in a wind tunnel \citep[e.g.,][]{2008A&A...481..337K,2012A&A...544A..54S} or in a galaxy cluster \citep[e.g.,][]{2017MNRAS.468.4107R}. Therefore, the star formation rates obtained in previous global simulations can be sensitive to the adopted star formation recipes, although the global nature of such models (e.g., ICM wind inclination) can also contribute to the difference (see \autoref{sec:caveats}). Recently, \citet{2020ApJ...905...31L} presented simulations of an RPS galaxy with varying ICM inflow strengths and directions. Combined with higher resolution (adaptive mesh refinement down to $20\pc$) and explicit ISM cooling and heating treatments, star formation in that work occurs in cold, dense gas at number densities above $100\pcc$, representing self-gravitating clouds, as in our simulations. In their moderate ICM inflow model, star formation in the outer region is suppressed during the first 150 Myr, while the central region of the galaxy shows an enhancement of SFRs for several hundred Myr compared to their {\tt NoWind} case. The quantitative agreement with our models is encouraging and indicative of the importance of high-resolution modeling of star formation in the multiphase ISM. Star formation enhancement prior to quenching has been observed in RPS galaxies \citep[e.g.,][]{2006ApJ...649L..75C,2014ApJ...780..119K}. Recently, \citet{2018ApJ...866L..25V} reported a systematic enhancement of the SFR (0.2 dex) for 42 RPS galaxies compared to counterpart galaxies. Spatially resolved SFRs have been estimated for some of those galaxies, showing signs of central SFR enhancement before quenching \citep{vulcani2020_sfr_resolved}.
In addition, \citet{roberts2020_coma_rps_sfr} identified 41 RPS candidate galaxies in the Coma cluster and reported an enhanced SFR (0.3 dex) for them. Meanwhile, \citet[][]{2006ApJ...649L..75C, 2008AJ....136.1623C} measured the age of the youngest stellar population at the \ion{H}{1} truncation radii of RPS galaxies to estimate a quenching time scale -- how long ago star formation was quenched following the \ion{H}{1} gas stripping. \citet{2008AJ....136.1623C} derived quenching time scales of a few hundred Myr for the Virgo RPS galaxies that appear to be currently undergoing active RPS. These results are broadly consistent with ours, while more spatially resolved analyses in observations are warranted for more quantitative comparisons. \subsection{Caveats and Future Perspectives}\label{sec:caveats} In this work, we had to limit our simulation domain to a kpc-size box to achieve high resolution with explicit treatments of ISM physics \citep{2017ApJ...846..133K,2018ApJ...853..173K}. We thus cannot cover an entire galaxy nor model a galaxy orbiting within a realistic ICM. Consequently, we miss a few important physical processes involved in RPS. First, we fix the ICM inflow direction perpendicular to the disk, i.e., a face-on interaction. The ICM inflow inclination can be arbitrary for galaxies infalling/orbiting in a cluster. If the interaction is more edge-on, the ICM may preferentially compress the inflow-side ISM, while stripping the extraplanar gas more easily \citep{2006MNRAS.369..567R,2020ApJ...905...31L}. Second, the local model cannot capture global geometrical effects, which may be especially important in the stripping process at the truncation radius. After rapid stripping of gas outside the truncation radius, continuous stripping occurs through global hydrodynamical instabilities \citep[e.g.,][]{2005A&A...433..875R,2014ApJ...795..148T} in addition to the local instabilities introduced by the penetrating ICM.
Another interesting global effect is the inward radial migration of the stripped gas; the inner disk protects the tails from further interactions with the hot ICM. Such gas that is still bound to the galaxy will fall back \citep[e.g.,][]{2001MNRAS.328..185S,2009ApJ...694..789T,2014ApJ...795..148T}. In the future, more realistic, time-varying ICM inflows can be modeled, although global, cosmological models are needed to capture realistic variations of ICM ram pressure strengths and angles, including changes of the orbits \citep[e.g.,][]{2019ApJ...874..161T}. Third, although we vary the ICM ram pressure, we only consider a representative ISM disk condition similar to the solar neighborhood. It is generally expected that the ratio of the ICM ram pressure to the ISM anchoring pressure $P_{\rm ICM}/\mathcal{W}_{\rm GG}$ is the main control parameter that determines the dynamical impact of RPS. However, the microphysics of the ISM (e.g., chemistry and hence cooling and heating processes) will be particularly important for RPS in the multiphase ISM, as the volume filling factors of the different ISM phases vary across conditions. The cooling rate in the mixing layer is one of the main parameters that determine the properties of mixing \citep{2021ApJ...911...68T,2022ApJ...924...82F}. In this regard, even for the same relative ram pressure strength, metallicity can change the efficiency of cooling and hence the overall evolution of the multiphase RPS process. The major advantage of the local framework used in this study is the detailed modeling of ISM physics, which can be improved further by future extensions of the TIGRESS framework with radiation and chemistry (J.-G. Kim et al. in prep). These capabilities are critically important to understand the evolution of the cold molecular gas \citep{2021MNRAS.505.1083G}.
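To make the control parameter $P_{\rm ICM}/\mathcal{W}_{\rm GG}$ concrete, the sketch below evaluates it in cgs units. The surface densities and ICM properties are illustrative solar-neighborhood-like values assumed here for the example, not the actual simulation inputs:

```python
G = 6.674e-8         # gravitational constant, cm^3 g^-1 s^-2
K_B = 1.381e-16      # Boltzmann constant, erg K^-1
MSUN_PC2 = 1.989e33 / 3.086e18**2   # 1 Msun/pc^2 in g/cm^2
PI = 3.141592653589793

def w_gg(sigma_gas, sigma_star):
    """ISM anchoring pressure W_GG = 2 pi G Sigma_gas Sigma_* (dyn cm^-2);
    surface densities given in Msun/pc^2."""
    return 2 * PI * G * (sigma_gas * MSUN_PC2) * (sigma_star * MSUN_PC2)

def p_icm(rho_icm, v_icm):
    """ICM ram pressure rho v^2 (dyn cm^-2); rho in g/cm^3, v in cm/s."""
    return rho_icm * v_icm**2

w = w_gg(10.0, 35.0)        # assumed Sigma_gas and Sigma_* (Msun/pc^2)
p = p_icm(1e-28, 1.0e8)     # assumed ICM density and a 1000 km/s inflow
print(w / K_B)              # anchoring pressure, ~5e4 cm^-3 K
print(p / w)                # ~0.16: "weak" in the sense used in this paper
```

With these assumed numbers the inflow falls in the weak regime ($P_{\rm ICM}/\mathcal{W}_{\rm GG}<1$); raising the ICM density or velocity by a factor of a few pushes the ratio above unity.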
Although the current model follows gas at cold temperatures $T<100\Kel$, within which star formation is modeled, questions about molecular cloud stripping remain unanswered. Are the clouds intact during the journey to the far extraplanar region? How long do they survive? Can molecular clouds form again in the stripped tails? Future work using the new TIGRESS framework will shed light on these questions. Finally, we point out the importance of thermal conduction, which is currently missing. As RPS should be viewed as a hydrodynamical interaction between gas phases with large temperature differences, conductive heat flux can be the dominant energy flux from the hot ICM to the ISM. The thermal conductivity of ionized plasma increases steeply with temperature \citep{1962pfig.book.....S}, but at the same time the conductive heat flux can be limited by the magnetic fields in the ISM that may wrap around the cool clouds as the ICM inflow sweeps past \citep[e.g.,][]{2008ApJ...678..274O}. Direct numerical simulations including anisotropic conduction within the TIGRESS framework, in which the self-consistent magnetic field structure of the turbulent ISM is modeled, are vital to understanding the role of thermal conduction in RPS galaxies. \section{Conclusions}\label{sec:conclusion} We conduct high-resolution MHD simulations of the ICM-ISM interaction to understand how the ICM strips the multiphase ISM from the galactic disk and how star formation changes in and out of the disk. We model the star-forming ISM using the TIGRESS numerical framework. We solve the ideal MHD equations in a local shearing-box with gas and (fixed) stellar gravity, optically thin cooling, star formation, and massive star feedback in the form of SNe and FUV radiative heating \citep{2017ApJ...846..133K}.
We take a snapshot of a fully developed ISM from the solar-neighborhood model and simulate it with hot ICM inflows from the bottom boundaries to model face-on interactions of the disk ISM moving in a cluster. We adopt four different strengths of the ICM ram pressure $P_{\rm ICM} = \rho_{\rm ICM} v_{\rm ICM}^2$ while the ISM condition is fixed. The relative strength of the ICM ram pressure to the ISM anchoring pressure $\mathcal{W}_{\rm GG} = 2\pi G \Sigma_{\rm gas} \Sigma_*$ covers a range of conditions representing radii inside and outside the truncation radius of a galaxy experiencing ram pressure stripping. Our main findings are as follows: \begin{enumerate} \item We find that the simple RPS condition comparing $P_{\rm ICM}$ and $\mathcal{W}_{\rm GG}$ \citep{1972ApJ...176....1G} works well to predict the overall stripping of the ISM disk even in our simulations with the multiphase ISM. Although the porous multiphase ISM structure allows the ICM to penetrate the disk through low-density channels and pollute the upper region of the disk regardless of the ICM strength, the effect of the ICM in accelerating the bulk ISM remains insignificant in the weak ICM models with $P_{\rm ICM}/\mathcal{W}_{\rm GG}<1$. In this case, the majority of the ICM stays below the disk midplane (\autoref{fig:sicm}), and the gas remains within the ISM disk over the entire simulation duration ($\sim 250\Myr$). However, the ICM-ISM interface marches toward the other side of the ISM disk in the strong ICM models with $P_{\rm ICM}/\mathcal{W}_{\rm GG}>1$. The ICM quickly strips the ISM, with a half-mass stripping time scale of 60--130 Myr (\autoref{fig:surf}). \item In the strong ICM models, the mixing-driven momentum transfer from the ICM to the ISM plays an essential role in RPS (\autoref{sec:stripping}). At the ICM-ISM interface, the hot ICM inflow first shreds the cool ISM, adding mass into the hotter phases, while all phases gain kinetic energy from ram pressure.
In the stripped tails, the hot and intermediate phases (genuine ICM and shredded ISM) mix into the cool gas continuously. Most of the hot gas energy is radiated away, while mass and momentum are transferred to the cool phase. These hydrodynamical interactions between the hot ICM (energy reservoir) and the cool ISM (mass reservoir) result in accelerated cool gas after significant mixing with the hot ICM. \item The same momentum transfer process also occurs in the weak ICM models. However, the momentum transferred to the ISM, together with the SN-injected momentum fluxes, simply supports the deformed, one-sided disk with increased weight. There is not enough excess momentum and energy to drive strong, continuous outflows (or RPS) as in the strong ICM models. \item RPS via the mixing-driven momentum transfer leaves an imprint on the metallicity of the stripped tails. We find that star clusters formed in the stripped gas ($z>1\kpc$ from the midplane) show a metallicity lower than that of the new stars in the disk by $\sim 0.1$ dex (\autoref{fig:LOC}). Furthermore, we find a linear relationship between velocity and ICM mass fraction in the stripped cool gas, as expected for mixing-driven momentum transfer, giving rise to an equivalent anti-correlation between velocity and metallicity. \item Star formation is enhanced (30--50\%) in all ICM models at the early epoch of the simulation compared to the {\tt noICM}{} model. This enhancement persists in the weak ICM models for the entire simulation time ($\sim 250\Myr$), while the SFR is greatly reduced after $\sim$ 100 Myr in the strong ICM models. \end{enumerate} As the first results from novel RPS simulations using the local TIGRESS framework, we focus on the general response of the ISM disk to varying ICM inflow strengths. In a forthcoming paper, we will delve into the role of magnetic fields in the marginally strong model and the differential stripping of the cold and warm ISM.
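The linear velocity--ICM-fraction relation invoked above follows from simple momentum conservation when hot ICM moving at the wind speed mixes into initially static cool gas. A minimal, idealized sketch (ignoring the detailed cooling-layer dynamics):

```python
def mixed_velocity(m_hot, m_cool, v_wind):
    """Momentum-conserving mixing of hot gas (moving at v_wind) into static
    cool gas: v = m_hot * v_wind / (m_hot + m_cool) = f_ICM * v_wind,
    i.e., linear in the ICM mass fraction."""
    return m_hot * v_wind / (m_hot + m_cool)

def icm_fraction(m_hot, m_cool):
    return m_hot / (m_hot + m_cool)

# Doubling the ICM mass fraction doubles the cool-gas velocity:
print(mixed_velocity(1.0, 9.0, 1000.0))   # f_ICM = 0.1 -> 100 km/s
print(mixed_velocity(2.0, 8.0, 1000.0))   # f_ICM = 0.2 -> 200 km/s
```

Since the mixed-in ICM is metal-poor relative to the disk gas, the same mass bookkeeping produces the anti-correlation between velocity and metallicity noted in item 4.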
\acknowledgements We acknowledge the anonymous referee for comments and suggestions that improved the clarity and quality of this paper. AC and WC acknowledge support by the National Research Foundation of Korea (NRF), Grant Nos. 2018R1D1A1B07048314, 2022R1A2C100298211, and 2022R1A6A1A03053472. C.-G.K. was supported by the National Aeronautics and Space Administration (NASA) through ATP Grant Number NNX17AG26G and Chandra Award Number TM0-21009X. Resources supporting this work were provided in part by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center and in part by the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology’s High Performance Computing Center. \newpage{} \software{{\tt Athena} \citep{2008ApJS..178..137S,2009NewA...14..139S}, {\tt astropy} \citep{2013A&A...558A..33A,2018AJ....156..123T}, {\tt scipy} \citep{2020SciPy-NMeth}, {\tt numpy} \citep{vanderWalt2011}, {\tt IPython} \citep{Perez2007}, {\tt matplotlib} \citep{Hunter:2007}, {\tt xarray} \citep{hoyer2017xarray}, {\tt pandas} \citep{mckinney-proc-scipy-2010}, {\tt CMasher} \citep{CMasher}, {\tt adstex} (\url{https://github.com/yymao/adstex}) }
\section{Introduction} The new generation of $B$ factories soon to come on line will open up an additional window on potential physics beyond the Standard Model(SM). Unlike the situation at new higher energy colliders, the physics beyond the SM will appear indirectly, {\it e.g.}, as deviations from SM expectations in decay rates, distributions and/or asymmetries obtained through precision measurements. Such measurements may be as important in probing the SM as are those currently performed by LEP/SLC and the Tevatron at higher energies and will rival others associated with the observation of CP violation in the $B$ system. Perhaps the most fundamental of all quantities associated with $b$ quark decay is the chirality of its charged current coupling. The possibility that the $b\rightarrow c$ charged current(CC) may have a sizeable right-handed(RH) component has been the subject of speculation for some time. Early on, Gronau and Wakaizumi, as well as a number of other authors{\cite {gro}}, speculated that the $b\rightarrow c$ coupling might be almost, if not {\it purely}, RH. Thanks to measurements performed by both the CLEO{\cite {cleo}} and L3{\cite {l3}} Collaborations, to which we return at some length below, we now know that this hypothesis cannot be true. The relative strength of the RH $b\rightarrow c$ coupling in comparison to the corresponding SM left-handed(LH) coupling must be somewhat less than unity; the leptonic current in the decay is highly constrained to be LH. It is important to observe that the results of these experiments cannot exclude a RH coupling of modest strength. Other data, such as the apparently small value of the $\Lambda_b$ polarization observed in $Z$ decay{\cite {lambda}} by ALEPH, qualitatively support the hypothesis of potentially sizeable RH couplings unless some exotic depolarization mechanisms are at work. 
Thus the current experimental situation remains unsatisfying and is far from resolving the issue of the presence of RH couplings in $b\rightarrow c$ transitions. On the theoretical side, in a completely different context, Voloshin{\cite {vol}} has recently suggested that a RH $b\rightarrow c$ coupling of modest strength may help to resolve the well-known{\cite {blok}} $B$ semileptonic branching fraction($B_\ell$) and charm counting($n_c$) problems{\cite {prob}}. Thus we are left with the questions: given the present data, is it possible that such a RH coupling actually exists, and can it assist with the $B_\ell$ and $n_c$ problems? How can future $B$ factory data help clarify this situation? In this paper we will examine the simultaneous compatibility of the CLEO and, to a lesser extent, the L3 constraints on the RH $b\rightarrow c$ coupling and the desire to address the charm counting/branching fraction problem along the lines suggested by Voloshin. The important role played by the ALEPH $\Lambda_b$ polarization measurement is also examined. This discussion stresses both what we can learn from the current data about possible $b\rightarrow c$ RH couplings and what can be learned through future precision measurements at $B$ factories to resolve the present ambiguous situation. Most of this analysis, associated with the constraints from the CLEO, L3 and ALEPH data, can be performed in a completely {\it model-independent} fashion without any reference as to the possible origin of the $b\rightarrow c$ RH coupling. However, in order to subsequently approach the $n_c-B_\ell$ problem a more rigid theoretical framework, such as the Left-Right Symmetric Model(LRM){\cite {lrm}}, needs to be invoked. Other more general frameworks are possible but are beyond the scope of the present paper. This paper is organized as follows.
In Section 2, we re-examine and update the constraints imposed on the relative strength of the RH to LH $b\rightarrow c$ coupling due to current experimental data from CLEO, ALEPH and L3 using a model-independent approach and relying on Heavy Quark Effective Theory. As we will see there is a tantalizing, though still not compelling, hint of RH interactions in the CLEO data. The importance of the small observed $\Lambda_b$ polarization as a possible, though still ambiguous, signature of RH currents is discussed. We also examine how this scenario may be distinguished from the SM with exotic depolarization mechanisms. The present ALEPH data is shown to be consistent with a moderately strong RH current coupling. In examining how we can extract further information from the present data, and with an eye towards future measurements, we discuss several new observables and their potential usefulness in probing for RH couplings. Many of these observables have either not been measured or have yet to be examined with any degree of precision. Several of these quantities can be probed at the $Z$ or during the first year of running at the new $B$ factories. In Section 3, we present an overview of the LRM and discuss the meaning of the current experimental constraints within this specific context, keeping detailed discussion of the required LRM particle content to a minimum. As we do not want to restrict or constrain ourselves to a specific version of this model, we tacitly avoid at this point any discussion of loop processes which may involve the full particle spectrum of a realistic, probably supersymmetric LRM. In Section 4 we describe the nonleptonic $b\rightarrow c$ decays in the presence of RH currents and their associated decay widths, including LO results and estimates of the NLO QCD corrections based upon what is currently known in the case of the SM.
In Section 5, we use the LRM as input to scan the full model parameter space allowed by the CLEO data to discover and identify sub-regions that will lead to a simultaneous decrease in the values of both $B_\ell$ and $n_c$ in comparison to the SM expectations. While such regions are shown to exist, they occur in only a small, fine-tuned fraction of the entire parameter space volume. Our summary and conclusions can be found in Section 6. In the Appendix we speculate on the possible forms of $V_R$ and point out how certain penguin processes can lead to difficulties with the solutions to the $n_c-B_\ell$ problem that we've obtained. \section{Constraints on Right-Handed $b\rightarrow c$ Couplings} \subsection{Model-Independent Notation} Allowing for both LH and RH $b\rightarrow c$ couplings while, following Voloshin, maintaining the leptonic current as purely LH to satisfy the well-known $\mu$ decay constraints{\cite {muon}} without fine-tuning neutrino masses, the general four-fermion interaction describing $B$ semileptonic decay can be written as \begin{equation} {\cal H}_{sl}= {4G_F\over {\sqrt 2}}V^L_{cb}[(\bar c_L \gamma_\mu b_L)+ \xi (\bar c_R \gamma_\mu b_R)](\bar \ell_L \gamma^\mu \nu_L)\,, \end{equation} where here we will treat $\xi$ as a {\it complex} parameter, $\xi=|\xi|e^{i\Delta}$, though CP violation will be ignored in the discussion that follows{\cite {prep}}. As we will see, the additional phase degree of freedom will play a very important role in obtaining signals and constraints on RH currents. We recall that in the original Gronau and Wakaizumi scenario the leptonic current in $B$ decays was also RH{\cite {gro}} and neutrino masses were tuned to allow for the semileptonic decay process. How do we ascertain the allowed range of $\xi$? Again following Voloshin, the first place to obtain a constraint is inclusive semileptonic $b$ decay at the quark level.
The most obvious observable is the inclusive decay partial width that can be written as \begin{equation} \Gamma(b\rightarrow c\ell \nu)\sim |V^L_{cb}|^2 f(x)\eta_L \Bigg[1+|\xi|^2+2Re(\xi) {g(x)\over {f(x)}} {\eta_R\over \eta_L}\Bigg]\,. \end{equation} For zero mass leptons, $f,g$ are the well-known kinematic phase space functions{\cite {vol,fuji,bsgme}} of the ratio $x=m_c/m_b \simeq 0.29$: \begin{eqnarray} f & = & (1-x^4)(1-8x^2+x^4)-24x^4\ln x \,, \\ g & = & -2x[(1-x^2)(1+10x^2+x^4)+12x^2(1+x^2)\ln x] \,, \nonumber \end{eqnarray} and for $x=0.29$ we find $f\simeq 0.542$ and $g\simeq -0.196$. For semileptonic decay to $\tau$'s, the corresponding phase space suppression factors can be decomposed as $f_\tau=(I_1+I_2)/2$ and $g_\tau=(I_1-I_2)/2$ in terms of the integrals \begin{eqnarray} I_1&=&\int_{y^2}^\Delta ds~Z\Bigg[\Delta(4s-y^2)+2\Delta\Sigma(1+2y^2/s)-( \Sigma+2s)(2s+y^2)\Bigg]\,, \\ I_2&=&\int_{y^2}^\Delta ds~Z\Bigg[\Sigma(4s-y^2)+2\Delta\Sigma(1+2y^2/s)-( \Delta+2s)(2s+y^2)\Bigg]\,, \nonumber \end{eqnarray} where $y=m_\tau/m_b$, $\Sigma=(1+x)^2$ and $\Delta=(1-x)^2$ with \begin{equation} Z=\Bigg[1-{y^2\over {s}}\Bigg]^2\Bigg[(\Sigma-s)(\Delta-s)\Bigg]^{1/2}\,. \end{equation} Numerically, one finds $f_\tau \simeq 0.122$ and $g_\tau \simeq -0.0490$ for $x=0.29$ and $y\simeq 0.372$. Note that when RH currents are present, the ratio of the $b$ semileptonic decay width to $\tau$'s to that for massless leptons becomes a weak function of $\xi$ with overall variations of order a few per cent; this dependence occurs due to a mismatch in the phase space ratios: $g_\tau/f_\tau \neq g/f$. This effect is most likely too small to be observed experimentally but should be kept in mind. The parameters $\eta_{L,R}$ represent both perturbative and non-perturbative strong interaction corrections which also depend on $x$ as well as the lepton mass and the relevant strong interaction scale $\mu$.
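The phase space functions above are straightforward to evaluate numerically; the short check below reproduces the quoted values at $x=0.29$:

```python
import math

def f_ps(x):
    # f(x) = (1 - x^4)(1 - 8x^2 + x^4) - 24 x^4 ln x
    return (1 - x**4) * (1 - 8*x**2 + x**4) - 24 * x**4 * math.log(x)

def g_ps(x):
    # g(x) = -2x [ (1 - x^2)(1 + 10x^2 + x^4) + 12 x^2 (1 + x^2) ln x ]
    return -2 * x * ((1 - x**2) * (1 + 10*x**2 + x**4)
                     + 12 * x**2 * (1 + x**2) * math.log(x))

x = 0.29  # m_c / m_b
print(f_ps(x), g_ps(x))  # ~0.542 and ~-0.196, as quoted in the text
```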
When needed in the numerical discussion below we will {\it assume} that the effects of all strong interaction corrections in the $b\rightarrow c$ semileptonic decay are at least approximately insensitive to the chirality of the charged current coupling, {\it i.e.}, $\eta_L=\eta_R=\eta$, as was done by Voloshin. (Certainly, an explicit calculation needs to be performed to verify this assumption.) To leading order in QCD for massless leptons and with $x=0.29$ one has the perturbative contributions $\eta=1-{2\alpha_s \over {3\pi}}(2.53)+{\cal O}(\alpha_s^2)$ whereas for $\tau$'s one obtains $\eta=1-{2\alpha_s \over {3\pi}}(2.11)+{\cal O}(\alpha_s^2)${\cite {hokim}}. The complete NLO expressions are not yet available in either case. Only the terms of order $\alpha_s^2\beta_0$, with $\beta_0$ being the one-loop QCD beta function, are known at present{\cite {luke}}, so for now we will truncate these corrections at this order but include them in our detailed numerical analysis below. It is amusing to note that the existence of a RH coupling means that a measurement of the partial width $\Gamma$ yields not the true but an {\it effective} value of $V^L_{cb}$ from inclusive data when the result is interpreted in terms of the SM, {\it i.e.}, \begin{equation} |V^L_{cb}|^{inc}_{eff}=|V^L_{cb}|\cdot \Bigg[1+|\xi|^2+2Re(\xi){g\over {f}} \Bigg]^{1/2}\,. \end{equation} under the assumption that $\eta_L=\eta_R$. This result will have important consequences for us below. To obtain more information from this inclusive decay, additional observables are required. Indeed, many authors have speculated about how one can experimentally extract information about potential RH couplings in inclusive semileptonic $b$ decay. Dittmar and Was{\cite {dw}} suggested examining simultaneously both the charged lepton and neutrino, {\it i.e.}, missing energy, spectra arising from $b$ semileptonic decay at the $Z$ peak.
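The effective-$V^L_{cb}$ formula above translates into a simple numerical shift. The sketch below evaluates the ratio of effective to true $|V^L_{cb}|$ for a real $\xi$ at Voloshin's central value, assuming (as in the text) $\eta_L=\eta_R$ and using the quoted phase space factors:

```python
F_PS, G_PS = 0.542, -0.196  # phase space factors f, g at x = 0.29

def vcb_inc_ratio(xi_abs, cos_delta):
    """|V_cb|_eff^inc / |V_cb| with Re(xi) = |xi| cos(Delta)."""
    re_xi = xi_abs * cos_delta
    return (1 + xi_abs**2 + 2 * re_xi * G_PS / F_PS) ** 0.5

print(vcb_inc_ratio(0.0, 1.0))    # 1.0: SM limit
print(vcb_inc_ratio(0.14, 1.0))   # ~0.958: a ~4% downward shift
```

Since $g/f<0$, a real positive $\xi$ of modest size pulls the apparent inclusive $|V^L_{cb}|$ a few per cent below its true value.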
When one looks at the squared matrix element for this process in the free-quark limit, after all traces are performed, the sensitivity to $\xi$ becomes immediately transparent: \begin{equation} |{\cal M}|^2 \simeq p_\ell\cdot p_c p_\nu \cdot p_b+|\xi|^2 p_\ell \cdot p_b p_\nu \cdot p_c -m_bm_cp_\ell \cdot p_\nu Re(\xi)\,, \end{equation} with the $p_i$ labelling the corresponding particle four-momentum. The $\xi$ sensitivity is seen to be particularly enhanced due to the large value of the ratio $m_c/m_b \simeq 0.29$, with the phase of $\xi$ playing a very important role. (For completeness we note that the full expression for the resulting unpolarized charged lepton spectrum in the $Z$ rest frame at leading order is given by Fujikawa and Kawamoto{\cite {fuji}}. The corresponding neutrino spectrum can be trivially obtained through the interchange of the LH and RH couplings.) Interestingly, as mentioned earlier, L3{\cite {l3}} performed a simultaneous measurement of both the charged lepton and missing energy spectra in $b$ decay and excluded very large values of $\xi$, {\it i.e.}, purely RH couplings, by more than $6\sigma$ and $\xi \simeq 1$, {\it i.e.}, purely vector couplings, by more than $3\sigma$. They did not, however, attempt a fit to $\xi$ as the required sensitivity to $\xi \ll 1$ was not available once detector cuts and hadronic as well as other systematic uncertainties were taken into account. However, values of $|\xi|\ll 1$ were certainly not excluded and we will attempt to further quantify these results below. We now turn to each of the three experiments CLEO, ALEPH and L3 and survey the constraints that they are presently imposing on $\xi$ and what can be learned from comparable measurements at future $B$ factories even if relatively low integrated luminosities are available.
\subsection{CLEO} In addition to inclusive semileptonic decay one may hope to obtain information on possible RH couplings through exclusive decay measurements due to the enriched nature of the accessible final states. In this regard CLEO{\cite {cleo}} has performed a detailed examination of both the $B\rightarrow D$ and $B\rightarrow D^*$ exclusive semileptonic modes. In the $B \rightarrow D$ case the impact of RH currents is well known to be rather minimal for massless leptons since the final state and the corresponding hadronic matrix element are rather simple. In this case, their only effect is to scale the anticipated partial rate by an overall factor, $|1+\xi|^2$, to which we will return below. A more complex and interesting pattern occurs for the $B\rightarrow D^*$ case. The CLEO analysis{\cite {cleo}} that examined the exclusive decay $B^0\rightarrow D^{*-}(\rightarrow D\pi)\ell \nu$ sought to extract form factor information and, in particular, to measure the forward-backward asymmetry of the charged lepton, $A_{FB}$, the average $D^*$ polarization, $\Gamma_L/\Gamma_T$, as well as $V^L_{cb}$. The data sample of $\sim 780$ events employed in their analysis resulted from an initial set of $2.6\cdot 10^6$ $B\bar B$'s corresponding to an integrated luminosity of $\simeq 2.4~{\rm fb}^{-1}$ at the $\Upsilon(4S)$. Following the general analysis as presented in Ref.{\cite {cleo,vold}}, one begins with an initially four-fold differential distribution, but this is a bit unwieldy.
Integration over two of the three decay angles (to which we return below) leads to the following double differential decay distribution for this process in the massless lepton limit: \begin{equation} {d^2\Gamma\over {dq^2dz}} \sim |V^L_{cb}|^2 Pq^2\Bigg[(1-z)^2|H_+|^2+(1+z)^2 |H_-|^2+2(1-z^2)|H_0|^2\Bigg]\,, \end{equation} where $P$ is the $D^*$ momentum in the $B$ frame, $q^2$ is the four-momentum transfer from the $B$ to the $D^*$ and $z=\cos \theta_\ell$ with $\theta_\ell$ being the decay angle of the $\ell$ in the virtual $W$ rest frame. $P$ is given by \begin{equation} P={1\over {2M}} \Bigg[(M^2-m^2-q^2)^2-4m^2q^2\Bigg]^{1/2}\,, \end{equation} and the helicity amplitudes $H_{\pm,0}$ are functions of $q^2$ which are generally expressed in terms of the conventional form factors $A_{1,2}$ and $V$ as \begin{eqnarray} H_\pm(q^2) &=& (M+m)A_1(q^2)\mp{2MP\over {(M+m)}}V(q^2)\,, \nonumber \\ H_0(q^2) &=& [2m\sqrt {q^2}]^{-1} \Bigg[(M^2-m^2-q^2)(M+m)A_1(q^2)-{4M^2P^2 \over {(M+m)}}A_2(q^2)\Bigg]\,, \end{eqnarray} where $M(m)$ is the mass of the $B(D^*)$. Following Neubert{\cite {adn}}, one may use versions of the above form factors that have very well defined limits when Heavy Quark Effective Theory(HQET) becomes exact: \begin{eqnarray} A_1(q^2) &=& {(M+m)\over {2\sqrt{Mm}}}\Bigg[1-{q^2\over {(M+m)^2}}\Bigg]h(w) \,, \nonumber \\ A_2(q^2) &=& {(M+m)\over {2\sqrt{Mm}}}R_2(w)h(w)\,, \nonumber \\ V(q^2) &=& {(M+m)\over {2\sqrt{Mm}}}R_1(w)h(w)\,. \end{eqnarray} Here we define as usual $w=(M^2+m^2-q^2)/(2Mm)$. In the exact HQET limit both $R_{1,2}\rightarrow 1$ and $h(w)$ becomes the Isgur-Wise function, so that the $R_i$ can be considered as representing small corrections in both $\alpha_s$ and $1/m$ to the case of pure leading order HQET. Generically, $h$ has a linear form, $h(w)=h(1)[1-\rho^2(w-1)]$, although other structures are possible.
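The helicity amplitudes above can be assembled numerically. The sketch below uses the CW form factor parameters quoted later in the text, PDG-like masses $M=5.2795$ and $m=2.0103$ GeV, and an arbitrary normalization $h(1)=1$ (all illustrative assumptions, not fit results). At zero recoil ($w=1$, $P=0$) all three amplitudes reduce to $(M+m)A_1$, a useful consistency check:

```python
M, MD = 5.2795, 2.0103   # assumed B^0 and D^* masses (GeV)

def helicity_amps(q2, h1=1.0, rho2=0.91):
    w = (M**2 + MD**2 - q2) / (2 * M * MD)
    h = h1 * (1 - rho2 * (w - 1))        # linear Isgur-Wise-like form
    r1 = 1.15 * (1 - 0.06 * (w - 1))     # CW R_1(w)
    r2 = 0.91 * (1 + 0.04 * (w - 1))     # CW R_2(w)
    pref = (M + MD) / (2 * (M * MD) ** 0.5)
    a1 = pref * (1 - q2 / (M + MD)**2) * h
    a2 = pref * r2 * h
    v = pref * r1 * h
    # D* momentum in the B frame; max(0, .) guards roundoff at zero recoil
    p = max(0.0, (M**2 - MD**2 - q2)**2 - 4 * MD**2 * q2) ** 0.5 / (2 * M)
    hp = (M + MD) * a1 - 2 * M * p / (M + MD) * v
    hm = (M + MD) * a1 + 2 * M * p / (M + MD) * v
    h0 = ((M**2 - MD**2 - q2) * (M + MD) * a1
          - 4 * M**2 * p**2 / (M + MD) * a2) / (2 * MD * q2 ** 0.5)
    return hp, hm, h0

q2max = (M - MD)**2
print(helicity_amps(q2max))  # all three equal (M+m)A_1 at zero recoil
```

Away from zero recoil the $V$ term splits $H_+$ from $H_-$, which is what ultimately feeds the forward-backward asymmetry; the replacements $V\rightarrow V(1+\xi)$, $A_{1,2}\rightarrow A_{1,2}(1-\xi)$ described below are one-line modifications of this sketch.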
While the forward-backward asymmetry can be obtained by integration of the expressions above, the ratio $\Gamma_L/ \Gamma_T$ can be determined from the decay angular distribution of the $D$ in the $D^*$ frame when $D^*\rightarrow D\pi$ ($\cos \theta_V$, in the notation of Ref.{\cite {cleo}}). Following Ref.{\cite {cleo,vold}} we can write the relevant double-differential distribution in this case as \begin{equation} {d^2\Gamma\over {dq^2d\cos \theta_V}} \sim Pq^2\Bigg[(|H_+|^2+|H_-|^2)(1- \cos^2 \theta_V)+2|H_0|^2 \cos^2\theta_V \Bigg]\,. \end{equation} $\Gamma_L/ \Gamma_T$ essentially probes the relative weights of the $H_0$ and $H_\pm$ helicity amplitudes as we will see shortly. So far this discussion has been quite general. To include the effects of $\xi \neq 0$ in comparison to SM expectations we simply make the replacements $V\rightarrow V(1+\xi)$ and $A_{1,2}\rightarrow A_{1,2}(1-\xi)$ in the expressions for the helicity amplitudes above and recall that $\xi$ is complex. This follows directly from the rescaling of the LH and RH current amplitudes as seen in Eq.(1). Once particular expressions for $R_{1,2}$ and $h$ are assumed we may directly calculate $A_{FB}$, $\Gamma_L/\Gamma_T$, as well as the total decay rate, which then gives us $V_{cb}^{L~exc}(D^*)$. We obtain \begin{eqnarray} A_{FB}&=&{\int dq^2[\int_0^{z_0}-\int_{-z_0}^0]dz ~{d^2\Gamma\over {dq^2dz}} \over {\int dq^2\int_{-z_0}^{z_0}dz ~{d^2\Gamma\over {dq^2dz}}}}\,, \nonumber \\ {\Gamma_L\over {\Gamma_T}}&=&{\int dq^2 ~2z_0(1-z_0^2/3)Pq^2 H_0^2\over {\int dq^2 ~z_0(1+z_0^2/3)Pq^2 (H_+^2+H_-^2)}}\,, \end{eqnarray} where $z_0(q^2)$ expresses a potential minimum lepton momentum cut used to identify the event: \begin{equation} z_0=min\Bigg[1, -{4Mp_\ell^{cut}-M^2-q^2-m^2\over {2PM}}\Bigg]\,. \end{equation} CLEO, for example, employs a typical lepton momentum cut of $\simeq 1$ GeV. 
These expressions can be re-written to clearly display their $\xi$ dependence as \begin{eqnarray} A_{FB} &=& {(1-|\xi|^2)C\over {(1+|\xi|^2)A-2BRe(\xi)}}\,, \nonumber \\ {\Gamma_L\over {\Gamma_T}} &=& {4\over {3}} {[1+|\xi|^2-2Re(\xi)]D\over {(1+|\xi|^2)E+2FRe(\xi)}}\,. \end{eqnarray} Experimentally, CLEO{\cite {cleo}} finds $A_{FB}=0.197\pm 0.037$ and $\Gamma_L/\Gamma_T=1.55\pm 0.29$, which are of course both consistent with SM/HQET expectations. Here $A$--$F$ are numbers that result from performing the double integration over the relevant kinematics. For a fixed set of $R_{1,2}$ and $h$, the values of $A$--$F$ are completely determined subject to experimental cuts, and these results can be combined to constrain $\xi$. In addition, from the expression for the overall partial width we also obtain \begin{equation} |V^L_{cb}|^{exc}_{eff}(D^*)=|V^L_{cb}|\cdot \Bigg[1+|\xi|^2-2Re(\xi){B\over {A}} \Bigg]^{1/2}\,, \end{equation} when the value is again interpreted in the SM; this result should then be compared with Eq.(6). Note that since one finds that $-B/A \neq g/f$, the apparent values of $V^L_{cb}$ extracted from exclusive $B\rightarrow D^*$ and inclusive measurements will be {\it different} when $Re(\xi)\neq 0$. Demanding that the {\it true} $V^L_{cb}$ take on the same value in both cases imposes an extra constraint on $\xi$. In order to employ this additional constraint we use the specific numerical results as provided in the recent review of both inclusive and exclusive semileptonic decay data by Buras{\cite {buras}} to obtain $V_{cb}^{L~exc}(D^*)/V_{cb}^{L~inc}=0.967\pm 0.105$. This value is completely consistent with unity, as anticipated, but still provides an additional requirement on $\xi$.
A similar situation, as mentioned above, occurs in the case of $B\rightarrow D$ semileptonic decays where we now would find simply \begin{equation} |V^L_{cb}|^{exc}_{eff}(D)=|V^L_{cb}|\cdot \Bigg[1+|\xi|^2+2Re(\xi) \Bigg]^{1/2}\,, \end{equation} which should be compared with that from the $D^*$ mode above. Given the present experimental situation{\cite {cleo}} adding this additional constraint will not influence the results of the fit obtained below. However, future measurements may make this an important input into analyses of RH currents. Our procedure is the following: for a fixed set of $R_{1,2}(w)$ and $h(w)$ we calculate the integrals $A-F$ and then perform a simultaneous $\chi^2$ fit to the CLEO results on $A_{FB}$ and $\Gamma_L/\Gamma_T$ as well as to the ratio $V_{cb}^{L~exc}(D^*)/V_{cb}^{L~inc}$ treating $|\xi|$ and $c_\Delta=\cos \Delta$ as free parameters (recall, $\Delta$ is the phase of $\xi$). Possible correlations are ignored. We then choose another set of $R_{1,2}$ and $h$ and repeat the process. Each repetition will thus generate a $95\%$ CL allowed region in the $c_\Delta-|\xi|$ plane. To be specific we employ forms of $R_{1,2}(w)$ and $h(w)$ suggested by Neubert{\cite {adn}} and by Close and Wambach(CW){\cite {cw}} as well as several other sets suggested by the first paper in Ref.{\cite {cleo}}. As a typical example, with $R_1^{CW}=1.15[1-0.06(w-1)]$, $R_2^{CW}=0.91[1+0.04(w-1)]$ and $\rho^2=0.91$ we obtain $A\simeq 0.116$, $B\simeq 0.105$, $C\simeq 0.024$, $D\simeq 0.060$, $E\simeq 0.056$, and $F\simeq 0.044$. These values do indeed reproduce the well known SM expectations{\cite {adn}} in the $\xi \rightarrow 0$ limit. 
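Since, for fixed HQET inputs, the constants $A$-$F$ completely determine the $\xi$ dependence, the expressions above are straightforward to evaluate. A minimal numeric sketch (Python; the function names are ours), using the Close-Wambach values quoted above:

```python
# Evaluate A_FB and Gamma_L/Gamma_T from the xi-dependent formulas above,
# using the Close-Wambach integration constants quoted in the text.
A, B, C = 0.116, 0.105, 0.024
D, E, F = 0.060, 0.056, 0.044

def a_fb(xi_abs, c_delta):
    re_xi = xi_abs * c_delta  # Re(xi) = |xi| cos(Delta)
    return (1 - xi_abs**2) * C / ((1 + xi_abs**2) * A - 2 * B * re_xi)

def gl_over_gt(xi_abs, c_delta):
    re_xi = xi_abs * c_delta
    return (4.0 / 3.0) * (1 + xi_abs**2 - 2 * re_xi) * D / (
        (1 + xi_abs**2) * E + 2 * F * re_xi)

# SM limit xi -> 0: A_FB = C/A ~ 0.207 and Gamma_L/Gamma_T = 4D/(3E) ~ 1.43,
# both consistent with the CLEO values 0.197 +- 0.037 and 1.55 +- 0.29.
```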
\vspace*{-0.5cm} \noindent \begin{figure}[htbp] \centerline{ \psfig{figure=hqetfit.ps,height=14cm,width=16cm,angle=-90}} \vspace*{-1cm} \caption{$95\%$ CL allowed region (below the curves) in the $|\xi|-c_\Delta$ plane obtained from a fit to CLEO data as well as the experimental value of the ratio $V_{cb}^{L~exc}(D^*)/V_{cb}^{L~inc}$. Each of the six curves corresponds to a unique choice of $R_{1,2}$ and $h$. The SM limit lies along the horizontal axis at $|\xi|=0$. The locations of the six $\chi^2$ minima are also shown for completeness and are seen to be reasonably clustered.} \label{hqet} \end{figure} \vspace*{0.4mm} The results of this fit are shown in Fig.~\ref{hqet} which displays the $95\%$ CL upper bound on $|\xi|$ as a function of $c_\Delta$ for several different choices of $R_{1,2}$ and $h$. The most important features of these results to notice are: ($i$) the bounds we obtain are not very sensitive to the exact choice of these HQET functions and ($ii$) the constraints on $|\xi|$ are strongest when $\xi$ is real. We note that Voloshin's preferred range of values of $\xi=0.14\pm 0.18$ lies mostly inside the allowed region. It is clear that at the moment the existing constraints on $\xi$ are quite poor and that values of $|\xi|$ of order 0.25 are certainly allowed by current data. We note that for the six sets of HQET functions used in this analysis the resulting best fit values for $\xi$ are reasonably clustered and indicate a magnitude $\simeq 0.20-0.35$ and a sizeable phase. With the far larger data sets soon to be available from $B$ factories it is quite important for this analysis to be revisited and refined in the not too distant future. We note that a somewhat smaller allowed region results if the unpublished results from CLEO that now include the charged $B$ decay modes are used{\cite {unp1}}.
\vspace*{-0.5cm} \noindent \begin{figure}[htbp] \centerline{ \psfig{figure=btodq2dis.ps,height=14cm,width=16cm,angle=-90}} \vspace*{-1cm} \caption{Normalized $q^2$ distributions for the process $B\rightarrow D^*\ell \nu$. Here $x=q^2/M^2$ and the curves correspond to the SM(solid) and $\lambda=0.5(-0.5)$(dotted and dashed, respectively). A possible $p_t$ cut on the charged lepton momenta has been ignored. Results are shown for both the Neubert as well as the Close and Wambach HQET functions corresponding to the pair of curves for each case.} \label{q2dis} \end{figure} \vspace*{0.4mm} One might ask if there are other observables associated with this exclusive decay that could allow for some additional sensitivity to RH current interactions{\cite {vold}}. To this end we briefly examine both the $q^2$ and $\chi$ distributions which can be measured using the recoil momentum of the $D^*$ and identifying the angle between the $W$ and $D^*$ event planes, respectively. Once integrated over all other variables, the deviations in both these distributions from the SM expectations are found to be totally controlled by the value of the ratio $\lambda=2|\xi|c_\Delta/(1+|\xi|^2)$. The form of the $q^2$ distribution can be obtained immediately from the double differential expression above. Fig.~\ref{q2dis}, where we have used the HQET functions, $R_i$, of Neubert{\cite {adn}} and those of Close and Wambach{\cite {cw}}, shows that the normalized distribution is only very weakly dependent on the existence of RH currents. Specifically, we see a direct comparison of the SM distribution, $\lambda=0$, with that expected for the cases of $\lambda=\pm 0.5$. From the figure it appears unlikely that the shape of the $q^2$ distribution will yield any useful information on RH currents unless very high precision data is obtainable. Note there is little difference between the curves generated with the two different sets of HQET functions. 
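The shape parameter defined above is a simple function of the RH-current parameters; a one-line sketch (Python, our naming):

```python
def lam(xi_abs, c_delta):
    """Shape parameter lambda = 2|xi| c_Delta / (1 + |xi|^2) from the text."""
    return 2 * xi_abs * c_delta / (1 + xi_abs**2)

# Since 2|xi| <= 1 + |xi|^2 and |c_Delta| <= 1, one has |lambda| <= 1 for
# any physical xi; lambda = +-0.5 is reached e.g. at |xi| = 2 - sqrt(3)
# ~ 0.27 with c_Delta = +-1.
```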
In the case of the normalized $\chi$ distribution, the shape is controlled by a single parameter {\it if} all the other variables have been completely integrated over, {\it i.e.}, \begin{equation} {dN\over {d\chi}}={1\over {\pi}} (1-\Omega \cos 2\chi) \,, \end{equation} where \begin{equation} \Omega={\int dq^2 ~Pq^2 Re(H_+^*H_-)\over {\int dq^2 ~Pq^2 (H_+^2+H_-^2+H_0^2)}}\,, \end{equation} which we may rewrite to show the $\lambda$ dependence explicitly as \begin{equation} \Omega=-{(T_2+T_1\lambda)\over {(2T_1+T_3)+(2T_2-T_3)\lambda}}\,, \end{equation} with the $T_i$ being a set of kinematic integrals. In the SM one finds that $\Omega \simeq 0.175(0.192)$ using Neubert(CW) HQET functions. Note that if instead only the positive(negative) values of $\cos \theta_V$ are integrated over, the normalized $\chi$ distribution picks up an additional term of the form \begin{equation} {dN\over {d\chi}}\rightarrow {dN\over {d\chi}}\mp {3\over {8}}\Sigma \cos \chi \,, \end{equation} where \begin{equation} \Sigma={\int dq^2 ~Pq^2 Re~H_0^*(H_+-H_-)\over {\int dq^2 ~Pq^2 (H_+^2+H_-^2+H_0^2)}}\,. \end{equation} For this variable the $\xi$ and $c_\Delta$ dependencies are somewhat more complex and cannot be expressed simply through the parameter $\lambda$; $\Sigma$ is expressible as \begin{equation} \Sigma={T_4\,(1-|\xi|^2)/(1+|\xi|^2)\over {(2T_1+T_3)+(2T_2-T_3)\lambda}}\,, \end{equation} with $T_4$ being another kinematic integral. In the SM one finds that $\Sigma \simeq -0.25(-0.22)$ for Neubert(CW) HQET functions. \vspace*{-0.5cm} \noindent \begin{figure}[htbp] \centerline{ \psfig{figure=omega.ps,height=14cm,width=16cm,angle=-90}} \vspace*{-1cm} \caption{Value of the $\Omega$ parameter which controls the shape of the $\chi$ distribution as a function of $\lambda$.
The results are shown for both the Neubert(solid) as well as the Close and Wambach(dashed) HQET functions which are seen to yield quite similar results.} \label{omega} \end{figure} \vspace*{0.4mm} Fig.\ref{omega} shows that $\Omega$ is quite sensitive to positive values of $\lambda$ so that one may hope to get a reasonable sensitivity to RH interactions if $\Omega$ could be precisely measured. Present data from CLEO is found to be consistent{\cite {cleo}} with the expectations of the SM for $\Omega$ but the statistics are still rather poor. To get an idea of the potential sensitivity we have performed a straightforward two-parameter (normalization and $\Omega$) fit to the existing binned data as presented in Ref.{\cite {cleo}}. To obtain improved statistics in this first fit we have combined the data in both the $\cos \theta_V >0$ and $\cos \theta_V <0$ regions. Unfortunately, the resulting distribution of the data shows little sensitivity to $\Omega$. After background subtraction this fit yields $\Omega=0.126\pm 0.120$ at $95\%$ CL which is certainly consistent with the SM. This constraint subsequently implies that $\lambda$ lies in the $95\%$ CL range $-3.3(-2.8)\leq \lambda \leq 0.71(0.75)$ for Neubert(CW) HQET functions using the results in Fig.\ref{omega}. As one would expect from the low sensitivity to negative $\lambda$, our bound in this case is rather poor. A second more hopeful possibility is to fit the shape of the $\chi$ distribution for {\it both} $\Omega$ and $\Sigma$ by treating the $\cos \theta_V>0$ and $\cos \theta_V<0$ regions independently; here there is a loss of statistics but a dramatic increase in sensitivity to RH couplings. Following the same analysis as above we arrive at the results presented in Fig.\ref{fun} for the allowed region in the $c_\Delta-|\xi|$ plane for Neubert, CW as well as Isgur and Wise{\cite {isgw}} HQET functions.
Note that the allowed region resulting from this fit is somewhat sensitive to the HQET $R_i$ choice, quite unlike the other observables that we have examined up to this point. Although this result seems to support the possibility that RH currents may indeed be present, one must be hesitant to form such a hasty conclusion without further analysis. First, the only believable fit of this kind must be performed by the CLEO Collaboration and we note the apparent strong sensitivity of our result to the choice of the $R_i$ HQET functions. However, it is certainly most clear that our understanding of potential RH currents in $b$ decay would very much profit from higher precision measurements of the $\chi$ distribution. This seems possible during the first year of $\Upsilon(4S)$ running of BABAR and BELLE since the CLEO data sample used in this analysis corresponded to only 2.6 million $B\bar B$ pairs. We note in passing that using the still unpublished CLEO data from the charged $B$ decay mode{\cite {unp1}} already strengthens the case for right-handed couplings based on the fit to the $\chi$ distribution. Another question one might ask is what the allowed ranges for the parameters $\Omega$ and $\Sigma$ are, given the CLEO constraints we have extracted from the earlier fit. To obtain such results we need to scan the $|\xi|-c_\Delta$ region below the envelope of curves shown in Fig.1 to get the extrema. We find $0.053\leq \Omega \leq 0.207$ and $-0.345 \leq \Sigma \leq -0.115$ for the Neubert HQET functions; correspondingly, for the CW HQET functions we obtain $0.089\leq \Omega \leq 0.218$ and $-0.310 \leq \Sigma \leq -0.106$. \vspace*{-0.5cm} \noindent \begin{figure}[htbp] \centerline{ \psfig{figure=omsigfit.ps,height=14cm,width=16cm,angle=-90}} \vspace*{-1cm} \caption{$95\%$ CL fit to the shape of the CLEO $\chi$ distribution assuming Neubert(dotted), CW(dashed) or ISGW(solid) HQET functions.
The allowed region is either below the dotted line or within the dashed or solid enclosure. As before the diamonds locate the $\chi^2$ minima for the three sets of $R_i$.} \label{fun} \end{figure} \vspace*{0.4mm} As a final note, Wakaizumi has shown{\cite {wak}} that, if the $\tau$ polarization in the decay $B\rightarrow D^*\tau \nu$ can be measured, it provides yet another quantity which is fairly sensitive to $\xi \neq 0$. This decay mode will thus yield even more observables which can be used to probe for $b\rightarrow c$ RH currents due to the addition of finite mass terms associated with the $\tau$. Of course, for this mode there is a loss in statistics due to the additional phase space suppression from the $\tau$ mass, as well as the associated $\tau$ reconstruction efficiency to be dealt with. An analysis of these prospects is, however, beyond the scope of the present work{\cite {prep}}. \subsection{ALEPH} Unfortunately, other data cannot at present improve significantly upon the CLEO bounds without further employing some rather strong assumptions. For example, in principle the low $\Lambda_b$ polarization observed in $Z$ decay{\cite {lambda}} by ALEPH can be used to obtain such a constraint. We recall that a $b$ quark produced at the $Z$ in the SM is highly polarized, {\it i.e.}, $P=-0.935$, and radiative effects have been shown to reduce this value only slightly{\cite {slight}}. During the hadronization process some of the memory of the original $b$ polarization is lost but it had been anticipated that in the $b\rightarrow \Lambda_b$ process a large part of the original polarization would be kept{\cite {keep}}. Falk and Peskin{\cite {keep}} estimated on the basis of HQET that the resulting $\Lambda_b$ polarization would be $P=-(0.69\pm 0.06)$. The ALEPH analysis is based on $\sim 3\cdot 10^6$ hadronic $Z$ decays which yielded a sample of $462\pm 31$ $\Lambda_b$ candidates.
The method used by ALEPH to extract the value of $P$ for the $\Lambda_b$ was first suggested by Bonvicini and Randall{\cite {rand}} who noted that the ratio of the average values of the neutrino and lepton energies in semileptonic $B$ decay, $y=<E_\ell>/<E_\nu>$, was particularly sensitive to the polarization of the $b$ quark. This variable, being an energy ratio, is quite insensitive to $b$ fragmentation, detector acceptance and reconstruction effects as well as the uncertainties in the ratio $m_c/m_b$. We note that the direct comparison of the average charged lepton and neutrino energies from $b$ quark decays with theoretical expectations, as was done by L3{\cite{l3}}, was found to lead to substantial fragmentation uncertainties although values of $\xi$ of order unity were clearly excluded; we will return to the L3 data below. It has also been found that $\alpha_s$ and $1/m_b^2$ corrections{\cite {corr}} to the parton level expectations for $y$ are quite small (and, hence, will subsequently be ignored). Of course these results have only been explicitly demonstrated in the case of purely LH couplings. In our analysis we will make the reasonable assumption that they remain true when both LH and RH couplings are present. The averages of $E_\ell$ and $E_\nu$ can be calculated directly from the decay at rest spectra through the boost relations $E_\ell=\gamma(E_\ell^*+\beta p_L^*)$, ~{\it etc}., with $\beta \simeq 1$ and $p_L^*$ being the lepton's momentum in the boost direction. In order to remove selection cut and energy reconstruction errors which produce a bias in $y$, ALEPH instead determined the ratio of ratios $R_y=y_{data}/y_{MC}(0)$ where $y_{MC}(0)$ is the $y$ value obtained from a Monte Carlo simulation employing the SM in the limit of zero polarization. The value of $R_y$ was then compared with the Monte Carlo-corrected SM theory prediction to extract the value of $P$. 
What ALEPH found was $R_y=1.10\pm 0.13$ (including systematic errors in quadrature), which then yielded the intriguingly small value $P=-0.23^{+0.26}_{-0.23}$, which is significantly smaller in magnitude, by $\simeq 2\sigma$, than the expectations of Falk and Peskin. To investigate the double ratio $R_y$ in the case when RH currents are present, we must return to the normalized double-differential charged lepton decay distribution. In the $b$ rest frame to leading order and neglecting the lepton mass we find \begin{equation} {dN\over {dzd\cos \theta}}=\Bigg[{R(x,z)+P\cos \theta ~Q(x,z) \over {(1+|\xi|^2)f(x)+2Re(\xi)g(x)}}\Bigg]\,, \end{equation} where $z=2E_\ell/m_b$ and $\theta$ is the angle between the $b$ and $\ell$ momenta with $f(x)$ and $g(x)$ given above. Explicitly, we find that $R=R_{LL}+R_{RR}|\xi|^2+2Re(\xi)R_{LR}$ and $Q=Q_{LL}+Q_{RR}|\xi|^2+ 2Re(\xi)Q_{LR}$ with \begin{eqnarray} R_{LL}&=&{z^2(1-x^2-z)^2\over {(1-z)^3}}\Bigg[(1-z)(3-2z+x^2)+2x^2\Bigg] \,, \nonumber \\ R_{RR}&=&{6z^2(1-x^2-z)^2\over {(1-z)}}\,, \\ R_{LR}&=&-{6xz^2(1-x^2-z)^2\over {2(1-z)^2}}\,, \nonumber \end{eqnarray} and \begin{eqnarray} Q_{LL}&=&{z^2(1-x^2-z)^2\over {(1-z)^3}}\Bigg[(1-z)(1-2z+x^2)-2x^2\Bigg] \,, \nonumber \\ Q_{RR}&=&{6z^2(1-x^2-z)^2\over {(1-z)}}\,, \\ Q_{LR}&=&-{6xz^2(1-x^2-z)^2\over {2(1-z)^2}}\,. \nonumber \end{eqnarray} These results confirm those obtained by Tsai{\cite {tsai}} long ago in a different form and context. The corresponding expressions for the neutrino spectrum can be obtained from the explicit relations above by interchanging the role of the left- and right-handed labels. Using these results we can calculate $y$ following Bonvicini and Randall{\cite {rand}}, rescale this value by the SM result assuming $P=0$, and include the Monte Carlo corrections of ALEPH. Given an assumed value for $P$ we can then fit to the ALEPH data to obtain an allowed region in the $|\xi|-c_\Delta$ plane.
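As a sanity check of the spectrum functions above, $R_{LL}$ can be integrated numerically; a small Python sketch (ours; the midpoint-rule integrator and the sample values of $x$ are illustrative). In the $x\rightarrow 0$ limit $R_{LL}$ reduces to the familiar $z^2(3-2z)$ shape, whose mean scaled energy is $\langle z\rangle = 0.7$:

```python
# Numerical sketch of the LL charged-lepton spectrum quoted above,
# b -> c l nu at parton level (z = 2 E_l / m_b, x = m_c / m_b).
def R_LL(x, z):
    return (z**2 * (1 - x**2 - z)**2 / (1 - z)**3
            * ((1 - z) * (3 - 2 * z + x**2) + 2 * x**2))

def mean_z(x, n=20000):
    """Average scaled lepton energy <z> in the SM limit (xi = 0, P = 0)."""
    zmax = 1 - x**2  # kinematic endpoint of the spectrum
    h = zmax / n
    num = den = 0.0
    for i in range(n):
        z = (i + 0.5) * h  # midpoint rule avoids the z = zmax endpoint
        w = R_LL(x, z)
        num += z * w
        den += w
    return num / den

# x -> 0 reproduces the classic muon-decay result <z> = 0.7; the physical
# point x ~ 0.29 gives a somewhat softer spectrum.
```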
The result of this analysis is shown in Fig.~\ref{lambda} and is compared to the CLEO allowed region obtained above {\it assuming} the estimate of the polarization retention of Falk and Peskin, $P=-(0.69\pm 0.06)$, is correct. Here we see that at the $95\%$ CL almost the entire plane is allowed except for a possible small region on the lower right which only appears in the case of $P=-0.75$. As $P$ increases in magnitude, we note that the allowed parameter space region shrinks somewhat in size. We also see from the figure that the location of the best fit is quite sensitive to the assumed true value of the polarization. We note that if there are additional dynamical mechanisms{\cite {sal}} which could lead to a further reduction in the expected value of $P$ in the SM {\it and} they could be reliably trusted quantitatively, then the limits we would obtain on RH $b\rightarrow c$ couplings might be improved. In the future, if the central value obtained by ALEPH were verified and the errors were reduced by a factor of two, the size of the allowed region would shrink substantially and form a band approximately $\delta |\xi|\simeq \pm 0.25$ wide on either side of the best fit points shown in the Figure. \vspace*{-0.5cm} \noindent \begin{figure}[htbp] \centerline{ \psfig{figure=lambdab.ps,height=14cm,width=16cm,angle=-90}} \vspace*{-1cm} \caption{Comparison of the envelope of the $95\%$ CL allowed regions obtained with CLEO data (solid curve) with those obtainable from the ALEPH $\Lambda_b$ polarization results assuming $P=-(0.69\pm 0.06)$ corresponding to the dotted, dashed and dash-dotted curves. The regions below the slightly tilted horizontal curves and outside the `nose' on the lower right-hand side for the case of $P=-0.75$ are allowed.
The locations of the corresponding $\chi^2$ minima for $P$=$-0.63$, $-0.69$, and $-0.75$, respectively, are also displayed from right to left.} \label{lambda} \end{figure} \vspace*{0.4mm} If the apparent low value of the polarization as measured by ALEPH is verified by future experiments then there are only two possible conclusions. Within the SM framework there must be some new source of depolarization and indeed $P\simeq -0.23$. Alternatively, right-handed currents are present and the true value of $P$ is closer to the HQET expectations of $P\simeq -0.69$ but {\it appears} low when interpreted in terms of the SM. As discussed above, a reduction in the ALEPH error by a factor of two, assuming the same central value, would clearly define a small allowed region when combined with the results from CLEO. Unfortunately, a measurement of $R_y$ {\it alone}, no matter how precise, will not be able to eliminate the possibility of some exotic depolarization mechanism and allow us to conclude that RH couplings exist. However, an analysis of the higher $y$ moments or other possible distributions may be most helpful as suggested by Diaconu {\it et al.} {\cite {rand}} in an important paper. For simplicity we first consider only the ratio of the second moments of the decay distributions here. (We have not examined moments higher than second.) Within the SM if $x=0.29$ and $P=-0.23$ we can uniquely predict the value of the quantity $R_{2y}=y_2/y_2(0)=1.181$, where $y_2=<E^2_\ell>/<E^2_\nu>$, although $\alpha_s$ and $1/m_b^2$ corrections are somewhat larger here. In the case where RH currents are present and $P=-0.69$, we can invert the $R_y$ relation and find $|\xi|$ as a function of $c_\Delta$ and then calculate the corresponding value of $R_{2y}$. We find in this case that for the central value $R_y=1.10$ one obtains $1.198 \leq R_{2y} \leq 1.227$ apart from the above corrections; note that this range does not overlap with the SM expectation but the separation between the two is quite small.
It would thus appear that simultaneous very high precision measurements of both $R_y$ and $R_{2y}$, as well as possible higher moments, are required in order to be able to resolve the ambiguity and determine if RH currents are indeed present in semileptonic $b$ decays. Given the current and anticipated sizes of the errors, a determination of at least these first two moments alone will not necessarily prove useful. As a second possibility we note that Diaconu {\it et al.} ~also suggest a number of other variables which can be used to probe the $\Lambda_b$ polarization. One of these is the difference in the charged lepton and neutrino rapidities, $\Delta \eta=\eta_\ell-\eta_\nu$, where these rapidities are measured with respect to the boost direction. This quantity is directly proportional to the polarization and, being a rapidity difference, is fortunately insensitive to fragmentation uncertainties. We find that with RH currents contributing one obtains \begin{equation} \Delta \eta=P\int_{-1}^{1}d\cos \theta ~\eta (1-|\xi|^2){\int ~dz (Q_{LL}- Q_{RR})\over {(1+|\xi|^2)f(x)+2Re(\xi)g(x)}}\,, \end{equation} where $\eta={1\over {2}} \ln{(1+\cos \theta)\over {(1-\cos \theta)}}$. Numerically we confirm the SM result and more generally obtain \begin{equation} \Delta \eta \simeq {-0.632P(1-|\xi|^2)\over {(1+|\xi|^2)+2|\xi| c_\Delta(-0.362) }}\,, \end{equation} so that in the SM for $P=-0.69(-0.23)$ we would obtain $\Delta \eta=0.436(0.145)$. In the case of RH currents, repeating the above procedure to find $|\xi|$ as a function of $c_\Delta$ from the data on $R_y$ we are led to the prediction that $\Delta \eta=0.238-0.257$, assuming that $P=-0.69$, which is quite different from either the SM expectation with a low value of $P$ or the HQET SM prediction. Again it appears that the RH current and exotic depolarization mechanism possibilities may be separable using precision measurements.
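The closed-form expression for $\Delta \eta$ is simple enough to tabulate directly; a minimal sketch (Python, our naming) that reproduces the quoted SM values:

```python
# Rapidity-difference observable from the closed-form expression above.
def delta_eta(P, xi_abs=0.0, c_delta=0.0):
    return (-0.632 * P * (1 - xi_abs**2)
            / (1 + xi_abs**2 + 2 * xi_abs * c_delta * (-0.362)))

# SM limit (xi = 0): delta_eta(-0.69) ~ 0.436 and delta_eta(-0.23) ~ 0.145,
# matching the two values quoted in the text.
```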
However, in this case we note that the required level of precision for this variable is far less than that for $R_{2y}$, giving us some hope that such a separation may indeed be possible at future $B$ factories{\cite {unp2}}. \subsection{L3} Following the same approach as above one might attempt to further quantify the L3{\cite {l3}} constraints on $\xi$ by constructing the $y$ values using the results presented in their Table 2 and including some corrections associated with their Monte Carlo. This would then be similar to the ALEPH analysis, but now one is actually probing the initial $b$ quark polarization, about which there is far less uncertainty. Of course, in principle, only L3 can perform this procedure but our rudimentary study will provide an indication for the location and size of the allowed region associated with their data. If we simply double their errors but then ignore both the $\alpha_s$ and $1/m_b^2$ corrections as well as fragmentation and energy scale uncertainties and neglect any correlations, we can obtain an estimate for the associated allowed region in the $c_\Delta-|\xi|$ plane. This most likely substantially underestimates the present experimental and theoretical uncertainties. Here we also need to input the parton-level polarization, $P=-0.935$. The results of these questionable considerations are shown in Fig.~\ref{l3stuff} and are compared to the CLEO analysis constraints. From this figure we see that the crude estimate of the L3 constraints and those obtained above from CLEO are not in conflict and even tend to prefer similar regions of the parameter space. The sizes of the two allowed regions are rather comparable and substantially overlap. It is also clear from the figure that the L3 data certainly excludes both a $(V+A)\times (V-A)$ and a $V\times (V-A)$ interaction at several $\sigma$, as claimed.
Before we can draw any stronger conclusions, however, this analysis needs to be repeated by L3 themselves with the additional $\alpha_s$ and $1/m_b^2$ corrections included. We can conclude that future spectra determinations from inclusive decays will indeed be useful in probing for RH currents provided high statistics are available and systematic experimental uncertainties are under control. Since the L3 analysis is based on a sample of only $10^6$ $Z$'s it is clear that a higher statistical study can be performed. \vspace*{-0.5cm} \noindent \begin{figure}[htbp] \centerline{ \psfig{figure=l3res.ps,height=14cm,width=16cm,angle=-90}} \vspace*{-1cm} \caption{Comparison of the envelope of the $95\%$ CL allowed regions obtained with CLEO data(solid curve) with an estimate of the upper bound obtainable from the analysis of the L3 charged lepton and neutrino data(dotted curve). The location of the $\chi^2$ minima from the L3 analysis is also shown.} \label{l3stuff} \end{figure} \vspace*{0.4mm} \section{The Left-Right Model and $\xi$} If RH currents do exist, given the CLEO allowed region in the $|\xi|-c_\Delta$ plane shown in Fig.1, we want to know if there are any sub-regions of this allowed space that can yield a simultaneous lowering of the SM predictions for $B_\ell$ and $n_c$. To address this question we will need to go beyond the physics described by the effective Hamiltonian in Eq.(1) since, {\it e.g.}, we need to discuss non-leptonic decay modes such as $b\rightarrow c\bar ud(s)$ and $b\rightarrow c\bar cs(d)$ and the role RH currents may play in these channels. To do this we need to incorporate the physics of ${\cal H}_{sl}$ into a larger framework, {\it e.g.}, the LRM{\cite {lrm}}. We remind the reader that other frameworks, such as SUSY loops, compositeness or $R-$parity violation schemes, are also possible{\cite {alex2}} sources of effective RH currents. 
In order to be self-contained let us briefly review the relevant parts of the LRM we need for our discussion below; for details of the model the reader is referred to {\cite {lrm}}. The LRM is based on the extended gauge group $SU(2)_L \times SU(2)_R \times U(1)$. Due to this extension there are both new neutral and charged gauge bosons, $Z', W^{\pm}_R$, in addition to those present in the Standard Model. In this scenario the left-(right-)handed fermions of the SM are assigned to doublets under the $SU(2)_{L(R)}$ group and a RH neutrino is introduced. The Higgs fields which can directly generate SM fermion masses are thus in {\it bi-doublet} representations, {\it i.e.}, they transform as doublets under both $SU(2)$ groups. The LRM is quite robust and possesses a large number of free parameters which play an interdependent role in the calculation of observables and in the existing constraints on the model resulting from various experiments. As far as $B$ physics and the subsequent discussion are concerned there are several parameters of direct interest. The most obvious free parameter is the ratio of the $SU(2)_R$ and $SU(2)_L$ gauge couplings ~$0.55<\kappa=g_R/g_L\leq 2$; the lower limit is a model constraint while the upper one is simply a naturalness assumption. GUT embedding scenarios {\it generally} suggest that $\kappa \leq 1${\cite {desh}}. For simplicity we will assume that $\kappa=1$ in almost all of our discussion below. The extended gauge symmetry is broken in two stages. First the $SU(2)_L \times SU(2)_R\times U(1)$ symmetry is broken down to the SM via the action of Higgs fields that transform either as doublets or triplets under $SU(2)_R$. 
This choice of Higgs representation determines both the mass relationship between the $Z'$ and $W_R$ (analogous to the condition that $\rho=1$ in the SM with only Higgs doublets and singlets) as well as the nature of neutrino masses; in particular, the Higgs triplet choice which we employ here allows for the implementation of the see-saw mechanism and yields a heavy RH neutrino. After complete symmetry breaking the resulting $W_L-W_R$ mixing is described by two parameters, a real mixing angle, $\phi$, and a phase, $\omega$. Note that it is usually $t=\tan \phi$ which appears in expressions directly related to observables. The additional phase, as always, can be a new source of CP violation. The mixing between $W_L$ and $W_R$ results in the mass eigenstates $W_{1,2}$, with a ratio of squared masses, $r=M_1^2/M_2^2$ (with $M_2 \simeq M_R$). In most models $t$ is then naturally of order a few times $r$ or less in the large $M_2$ limit. Of course, $W_1$ is the state directly being produced at both the Tevatron and LEPII and is identical to the SM $W$ in the $\phi \rightarrow 0$ limit. We note that when $\phi$ is non-zero, $W_1$ no longer couples to a purely LH current. Of course if a heavy RH neutrino is indeed realized then the effective {\it leptonic} current coupling to $W_1$ remains purely LH as far as all low energy experiments are concerned. As is well-known, one of the strongest sets of `classical' constraints on this model arises from polarized $\mu$ decay{\cite {muon}}; these constraints are trivial to satisfy in the case of a heavy RH neutrino, which justifies the appearance of only LH leptonic couplings in Eq.(1). Removal of these constraints provides significantly more freedom in the remaining LRM parameter space.
Thus the tree-level $\mu$ decay Hamiltonian is just \begin{equation} {\cal H}_\mu={g_L^2c_\phi^2(1+rt^2)\over {2M_1^2}}(\bar \nu_{\mu_L} \gamma_\lambda \mu_L)(\bar e_L \gamma^\lambda \nu_{e_L})\,, \end{equation} so that the tree-level definition of $G_F$ is simply \begin{equation} {G_F\over {\sqrt 2}}={g_L^2c_\phi^2(1+rt^2)\over {8M_1^2}}\,. \end{equation} We see that if $r$ and $t$ are of order $\simeq 10^{-2}$ or less the numerical influence of mixing in this relationship will be quite small. It is important to remember that the extended Higgs sector associated with both the breaking of the LRM group down to $U(1)_{em}$ and the complete generation of fermion masses may also have an important role to play in low energy physics through the existence of complex Yukawa and/or flavor-changing neutral current type couplings. However, this sector of the LRM is highly model dependent and is of course quite sensitive to the detailed nature of the fermion mass generation problem. For brevity and simplicity, and because tree-level neutral Higgs exchange has little influence on the decay processes we are interested in, these contributions will be ignored in the following discussion and we will focus solely on the effects associated with $W_{1,2}$ exchange. We do note that these additional Higgs fields can potentially play a very important role in loop processes as will be briefly discussed later. Additional parameters arise in the quark sector; in principle the effective mass matrices for the SM fermions may be non-hermitian, implying that the two matrices involved in the bi-unitary transformation needed to diagonalize them will be unrelated. This means that the elements of the mixing matrix, $V_R$, appearing in the RH charged current for quarks will be {\it unrelated} to the corresponding elements of $V_L=V_{CKM}$. $V_R$ will then involve 3 new angles as well as 6 additional phases, all of which are {\it a priori} unknown parameters{\cite {herczeg}}.
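The size of the mixing correction to the $G_F$ relation above can be made explicit; a small sketch (Python, our naming) of the fractional shift relative to the unmixed case:

```python
# Size of the W_L - W_R mixing correction to the tree-level G_F relation,
# G_F / sqrt(2) = g_L^2 c_phi^2 (1 + r t^2) / (8 M_1^2).
def gf_shift(r, t):
    """Fractional shift of G_F relative to the unmixed (r = t = 0) case."""
    c_phi2 = 1.0 / (1.0 + t**2)  # cos^2(phi), with t = tan(phi)
    return c_phi2 * (1 + r * t**2) - 1.0

# For r and t of order 1e-2 the shift is ~ -1e-4, i.e. numerically
# negligible, consistent with the statement in the text.
```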
The possibility that $V_L$ and $V_R$ may be unrelated is sometimes overlooked when considering the potential impact of the LRM on low energy physics and there has been very little detailed exploration of this more general situation due to the rather large parameter space. Certainly as the elements of $V_R$ are allowed to vary the impact of the extended gauge sector on $B$ physics in general will be greatly affected. Some well-known constraints on the LRM, such as Tevatron direct $W'$ searches{\cite {cdfd0}}, are quite sensitive to variations in $V_R${\cite {oldt}} as well as to the properties of the RH neutrino, and $W_2$ masses as low as $450-500$ GeV can very easily be accommodated by the present data. To be conservative, and with future Tevatron searches in mind, however, we will assume below that $M_2 \geq 720$ GeV{\cite {cdfd0}}, {\it i.e.}, $r\leq 0.012$, for any $V_R$, implying that the magnitude of $t$ is also less than $\sim 0.012$. Other constraints on the LRM parameter space involve loop processes such as the $K_L-K_S$ mass difference{\cite {bbs,lang}} and $b\rightarrow s\gamma${\cite {old}}. Clearly the bounds obtained from these processes depend not only on the gauge sector but also on {\it all} the particles that can participate in the loops such as SUSY partners, extra Higgs fields, additional heavy fermions, {\it etc}.,~whose existence is sensitive to the finer details of the model. These possibilities are beyond the necessities of the current discussion where we are solely interested in tree-level $B$ decays. Our philosophy as outlined in the introduction will be to leave for now all discussions of loop graphs which display any sensitivity to the details of the LRM spectrum and take these issues up briefly later. Using the definitions above for the LRM parameters we can now express $\xi$ in terms of these more fundamental quantities; we find \begin{equation} \xi={\kappa t(1-r)\over {(1+rt^2)}}~{e^{i\omega}V_{cb}^R\over {V_{cb}^L}}\,.
\end{equation} Note that we can absorb the sign of $t$ into the phase $\omega$ here so that $t$ can be treated as a positive parameter in our discussion below. As mentioned above, we will take $\kappa=1$ for simplicity in our numerical analysis below; to first order it simply rescales the value of $t$. Employing the results from Buras{\cite {buras}} for the value of $V_{cb}^{L~inc}(D^*)$ from inclusive $B$ semileptonic decays, we can invert the expression above to obtain \begin{equation} |V_{cb}^R|=(39.9\pm 2.2)\cdot 10^{-3}~{|\xi|\over {[1+|\xi|^2+2\xi c_\Delta {g\over {f}}]^{1/2}}}~{(1+rt^2)\over {\kappa t(1-r)}}\,, \end{equation} so that for typical values such as $|\xi|=0.2$, $c_\Delta$=0, and $x=0.29$, we obtain \begin{equation} |V_{cb}^R|=(0.782\pm 0.042)~\Bigg[{10^{-2}\over {\kappa t}}\Bigg]~[1 +{\cal O}(r,rt^2)]\,, \end{equation} which suggests that $|V_{cb}^R|$ is reasonably large and perhaps of order unity over most of the allowed parameter space shown in Fig.~\ref{hqet}. From these considerations we learn several things which follow immediately from the unitarity of $V_R$: ($i$) A large value for $|V_{cb}^R|$ implies that the sum $|V_{cd}^R|^2+|V_{cs}^R|^2$ is small, thus somewhat suppressing potential RH contributions to the decays $b\rightarrow c\bar cs(d)$, which is fortunate for charm counting purposes. If either of these elements were large one might expect a significant increase in $n_c$ due to RH current contributions. As we will see below, just the opposite occurs. As will be noted, this also assists in suppressing RH contributions to $K_L-K_S$ mixing. ($ii$) Since unitarity requires $|V_{ud}^R|^2+|V_{us}^R|^2+|V_{ub}^R|^2= |V_{ub}^R|^2+|V_{cb}^R|^2+|V_{tb}^R|^2$, it follows immediately that $|V_{ud}^R|^2+|V_{us}^R|^2 \geq |V_{cb}^R|^2$. Since $|V_{cb}^R|^2$ is apparently large, this inequality implies that the sum $|V_{ud}^R|^2+|V_{us}^R|^2$ is larger still.
This would mean that decay modes such as $b\rightarrow c\bar ud(s)$ may receive large RH contributions. We note that if we further assume that $|V_{ud}^R|^2 \ll |V_{us}^R|^2$, these RH contributions may also lead to an increase in $K$ production in $B$ decays{\cite {alex}}, which, it has been argued, is a signal for enhanced $b \rightarrow sg$. Also, if $|V_{ud}^R|^2 \ll |V_{us}^R|^2$, one finds that the Tevatron search reach{\cite {cdfd0}} for $W_2$ would be seriously degraded, by about a factor of 2{\cite {oldt}} in mass. ($iii$) It would appear that $|V_{ub}^R|$ will be too small to significantly influence $b\rightarrow u$ processes, though this needs further examination. ($iv$) A large $V_{cb}^R$ implies that the sum $|V_{td}^R|^2 +|V_{ts}^R|^2$ is also large{\cite {alex2}}, with implications for the complete structure of $V_R$ that we will ignore for now but will return to haunt us in our discussion below. ($v$) The fact that unitarity requires $|V_{cb}^R| \leq 1$ itself provides an additional constraint on the remaining LRM parameters. \section{Non-leptonic $b\rightarrow c$ Decays with RH Currents} As a final step in our analysis we need the complete non-leptonic Hamiltonian; at tree level this can now be written down immediately. For the sample case of $b\rightarrow c\bar u d$ we can write, following the notation of Refs.{\cite {bsgme,new}}, \begin{equation} {\cal H}_{nl}={4G_F\over {\sqrt 2}}\Bigg[C_{2L}O_{2L}+C_{12L}O_{12L}+ L\rightarrow R ~\Bigg]\,, \end{equation} where $O_{2L}=(\bar c\gamma_\mu P_L b)(\bar d\gamma^\mu P_L u)$, $O_{12L}=(\bar c\gamma_\mu P_R b)(\bar d \gamma^\mu P_L u)$, {\it etc}., and where $P_{L,R}$ are helicity projection operators.
At the {\it weak} scale the operator coefficients are given by \begin{eqnarray} C_{2L} &=& (V_{cb}^L)(V_{ud}^{L*}) \,, \nonumber \\ C_{12L} &=& \Bigg[{\kappa t(1-r)\over {(1+rt^2)}}\Bigg](V_{cb}^R)(V_{ud}^{L*}) \,, \nonumber \\ C_{12R} &=& \Bigg[{\kappa t(1-r)\over {(1+rt^2)}}\Bigg](V_{cb}^L)(V_{ud}^{R*}) \,, \\ C_{2R} &=& \Bigg[{\kappa^2 (r+t^2)\over {(1+rt^2)^2}}\Bigg](V_{cb}^R) (V_{ud}^{R*}) \,. \nonumber \end{eqnarray} Note that if we neglect the light quark masses, the appropriate phase space functions for this particular decay mode will be given by $f$ and $g$. The modifications necessary for the study of the decay $b\rightarrow c\bar u s$ are obvious. Similarly, for the corresponding decays $b\rightarrow c\bar cs(d)$ we simply change the appropriate CKM factors in the above and employ the appropriate phase space functions, $f_c$ and $g_c$, which are given by the phase space integrals $I_{1,2}$ in Section 2 with the replacement $y\rightarrow x$. For $x=0.29$ these are found numerically to be $f_c \simeq 0.222$ and $g_c \simeq -0.086$. The neglect of the strange quark mass, $m_s \simeq 100-150$ MeV, is found to be an excellent approximation here. To proceed with this calculation we need to compute the QCD corrections associated with the renormalization group running from the weak matching scale down to $\mu \sim m_b$. To this end we follow the analysis of Bagan {\it et al.}{\cite {bagan}}, which allows us to write the partial width for this process as \begin{equation} \Gamma(b\rightarrow c\bar ud(s))= \Gamma_{SM}\Bigg[1+\eta_1+\eta_2+\eta_3\Bigg]\,, \end{equation} where $\Gamma_{SM}=3X_1\Gamma_0f|V_{cb}^L|^2(|V_{ud}^L|^2+|V_{us}^L|^2)$, $\Gamma_0$ is the canonical $\mu$ decay width with the replacement $\mu \rightarrow m_b$, and $X_1$ represents the results of SM QCD corrections (to which we will return below).
The $\eta_i$ are LRM contributions which are given by \begin{eqnarray} \eta_1 &=& \Bigg[{\kappa(r+t^2)\over {t(1-r)}}\Bigg]^2|\xi|^2y\,, \nonumber \\ \eta_2 &=& {X_2\over {X_1}}\Bigg[|\xi|^2+{\kappa^2 t^2(1-r)^2\over {(1+rt^2)^2} }y\Bigg]\,, \\ \eta_3 &=& 2{g\over {f}}{X_3\over {X_1}}Re(\xi)\Bigg[1+{\kappa^2(r+t^2)\over { (1+rt^2)}}y\Bigg]\,, \nonumber \end{eqnarray} with $y=(|V_{ud}^R|^2+|V_{us}^R|^2)/(|V_{ud}^L|^2+|V_{us}^L|^2)\simeq ~(|V_{ud}^R|^2+|V_{us}^R|^2)$. As pointed out in the discussion above, if $|V_{cb}^R|$ is large we anticipate that $y$ is near unity. For the decays $b\rightarrow c\bar cs(d)$ we make the obvious CKM replacements, change $X_i \rightarrow X_i'$ and $f,g\rightarrow f_c,g_c$, and let $y\rightarrow y_c$, where $y_c=(|V_{cd}^R|^2+|V_{cs}^R|^2)/(|V_{cd}^L|^2+|V_{cs}^L|^2)\simeq ~1-|V_{cb}^R|^2$, with the last near equality resulting from unitarity and the fact that $|V_{ub}^L|^2$ is very small. If $|V_{cb}^R|^2$ is large then clearly $y_c$ must be small. At leading order (LO) in QCD the $X_i=X_i'$ are completely calculable and are simple polynomials in the parameter \begin{equation} z=\Bigg[{\alpha_s(M_W)\over {\alpha_s(\mu)}}\Bigg]^{3/23}\,, \end{equation} and its inverse; here we will assume that $\alpha_s(M_Z)=0.118$ and $\mu \sim m_b$. Explicitly, we obtain \begin{eqnarray} X_1 &=& {1\over {3}}\Bigg[2z^4+z^{-8}\Bigg]\,, \nonumber \\ X_2 &=& {1\over {9}}\Bigg[8z^2+z^{-16}\Bigg]\,, \\ X_3 &=& {1\over {9}}\Bigg[4z^3+4z^{-3}+2z^{-6}-z^{-12}\Bigg]\,, \nonumber \end{eqnarray} where we have made use of the results of Altarelli and Maiani{\cite {hokim}} as well as Cho and Misiak{\cite {old}}. Note all $X_i\rightarrow 1$ as $z\rightarrow 1$ and the QCD corrections vanish.
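The LO factors above are straightforward to evaluate. The sketch below does so in Python, using simple one-loop running of $\alpha_s$ with $n_f=5$ (an assumption for illustration only; the NLO analyses cited in the text use higher-order running), and also checks that all $X_i \rightarrow 1$ at $z=1$:

```python
import math

def alpha_s(mu, alpha_mz=0.118, mz=91.19, nf=5):
    """One-loop running of the strong coupling (an approximation;
    the cited analyses use higher orders)."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha_mz / (1.0 + b0 * alpha_mz / (2.0 * math.pi) * math.log(mu / mz))

def X_factors(z):
    """LO QCD factors X_1, X_2, X_3 as given in the text."""
    X1 = (2 * z**4 + z**-8) / 3.0
    X2 = (8 * z**2 + z**-16) / 9.0
    X3 = (4 * z**3 + 4 * z**-3 + 2 * z**-6 - z**-12) / 9.0
    return X1, X2, X3

# All QCD corrections switch off at z = 1.
assert X_factors(1.0) == (1.0, 1.0, 1.0)

mw, mb = 80.4, 4.8
z = (alpha_s(mw) / alpha_s(mb)) ** (3.0 / 23.0)
X1, X2, X3 = X_factors(z)
print(f"z = {z:.3f}, X1 = {X1:.3f}, X2 = {X2:.3f}, X3 = {X3:.3f}")
```

With these inputs one finds $z$ slightly below unity and $X_1$ enhanced above 1 by several percent, in line with the size of the NLO corrections quoted next.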
In the SM, NLO multiplicative corrections to the LO values of $X_1$ and $X_1'$ are now known{\cite {bagan}} to be $\simeq 1.061$ and $\simeq 1.29$, respectively, for $\mu=m_b$, $x=0.29$, and using pole quark masses ($m_b=4.8$ GeV), choices which we adopt in the numerical analysis below. Unfortunately, the corresponding NLO corrections to $X_{2,3}$ and $X_{2,3}'$ are not yet known. The best we can do until such calculations are performed is to follow Voloshin's philosophy and assume that the multiplicative corrections in these cases are essentially the same as those for $X_1$ and $X_1'$. Since, as we will see below, we will be more interested in the {\it shifts} in the values of $n_c$ and $B_\ell$ due to RH currents than in the values themselves, we anticipate that this assumption may be a fair approximation. We note that in making this assumption we are also ignoring the possibility that the detailed LRM particle spectrum may lead to substantial modifications in these SM values, in particular, those contributions arising from penguins. These assumptions need to be verified by future direct calculations. \section{$\delta B_\ell$ and $\delta n_c$} From the discussion in the previous section we are ready to calculate both $\delta B_\ell=B_\ell(LRM)-B_\ell(SM)$ and $\delta n_c=n_c(LRM)-n_c(SM)$, where the SM results are given by the above expressions in the limit where all RH couplings are turned off. As is well known, the combined experimental and theoretical situation is quite puzzling. From the reviews of both Drell and Sachrajda{\cite {prob}} we see that $B_\ell=0.1018\pm 0.0040$ on the $\Upsilon (4S)$ while $B_\ell=0.1095\pm 0.0032$ at the $Z$. Similarly, $n_c=1.119\pm 0.053$ and $1.202\pm 0.067$ on the $\Upsilon (4S)$ and $Z$, respectively.
Numerically, in the SM limit our calculations essentially reproduce the earlier results of Bagan {\it et al.}{\cite {bagan}}, which we have closely followed; in this limit we obtain $B_\ell=0.123$ and $n_c=1.24$ for the SM predictions assuming $x=0.29$ and $\mu=m_b$. We will implicitly assume that there are no new charmless final states, such as $b\rightarrow sg$, which are enhanced due to RH currents. It is clear that if we take these experimental numbers at face value we would like to decrease the theoretical predictions for $B_\ell$ by $0.015-0.020$ and $n_c$ by at least 0.03. Our analysis consists of an extensive scan of the model parameter space spanned by $r$, $t$, $|\xi|$, $c_\Delta$ and $y$, demanding that a number of requirements be satisfied simultaneously. Our input parameters are chosen as follows. We begin by picking a `point' inside of the CLEO allowed region in the $c_\Delta-|\xi|$ plane so that this constraint is already satisfied. We assume the scale size of the $c_\Delta-|\xi|$ grid to be $0.01\times 0.01$, so that there are approximately $1.5\cdot 10^4$ points in this sample. Next, we choose a value for the two LRM parameters $r$ and $t$; for simplicity $\kappa$ is set to unity. Keeping in mind the CDF/D0{\cite {cdfd0}} bounds and the strong suggestion that $t$ cannot be much larger than $r$, we let $r=0.0025$, 0.005, 0.0075, 0.010 or 0.012 and allow $t$ to vary over the range 0 to 0.012 in steps of 0.0005. (Remember that due to the phase freedom in angle $\Delta$ we can treat $t \geq 0$ in this discussion.) Clearly, if the $W_2$ mass is too large and/or the mixing angle is too small, the effects of RH currents will not be of a noticeable magnitude. This restricts our attention to $W_2$ masses in the approximate range $730-1600$ GeV. Thus we see that for every choice of $c_\Delta$ and $|\xi|$ there are 120 pairs of $(r,t)$ values, giving us a total of $\simeq 1.8\cdot 10^6$ points to examine in the $r-t-|\xi|-c_\Delta$ parameter subspace.
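The bookkeeping of this grid can be reproduced in a few lines. Note one interpretive assumption: the quoted count of 120 $(r,t)$ pairs comes out only if the $t=0$ point (which switches the RH effects off entirely) is dropped from the grid:

```python
# Reproduce the parameter-grid counting quoted in the text.
r_values = [0.0025, 0.005, 0.0075, 0.010, 0.012]

# t from 0 to 0.012 in steps of 0.0005; we assume t = 0 itself is excluded,
# since it switches off the RH-current effects entirely (an assumption
# needed to reproduce the quoted count of 120 pairs).
t_values = [n * 0.0005 for n in range(1, 25)]   # 0.0005 ... 0.012

rt_pairs = len(r_values) * len(t_values)
cleo_grid_points = 15_000    # ~1.5e4 points in the c_Delta-|xi| plane

print(rt_pairs)                         # 120 (r, t) pairs
print(rt_pairs * cleo_grid_points)      # 1,800,000 points in the subspace

assert rt_pairs == 120
assert rt_pairs * cleo_grid_points == 1_800_000
```

Multiplying by the 100 grid steps in $y$ used below gives the $\simeq 1.8\cdot 10^8$ points of the full five-dimensional scan.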
The first constraint we impose is the requirement that $|V_{cb}^R|$ be less than unity, by using Eq. (32). Of course, if this constraint is not satisfied for any of the $r$ or $t$ values, this point on the $c_\Delta-|\xi|$ grid is removed from any further consideration. If satisfied, the result fixes the value of $y_c$ in the subsequent calculations. To proceed we must choose a value of $y$ in the range $0<y<1$, which we do in grid steps of 0.01. We then impose our second constraint, that $y\geq |V_{cb}^R|^2$, so that only the larger of the $y$ values survive. Out of the original $\simeq 1.8\cdot 10^8$ points in the five-dimensional $r-t-|\xi|-c_\Delta-y$ parameter space being scanned, only $\simeq 27.5\cdot 10^6$ survive these first two constraints. For these remaining points we next calculate $\delta B_\ell$ and $\delta n_c$ for each particular choice of input parameters and impose our final loose requirement that $\delta B_\ell \leq -0.01$ and $\delta n_c \leq -0.025$. Again, if these constraints cannot be met at a particular point on the $c_\Delta-|\xi|$ grid, independently of the chosen values of $r,t$ and $y$, it is removed. Only 6284 points in the $r-t-|\xi|-c_\Delta-y$ five-dimensional parameter space now remain; this number is further reduced to 972 if we strengthen our requirement on $\delta n_c$ to be $\leq -0.03$. It is clear from these numbers that a rather high degree of fine-tuning is required to push $B_\ell$ and $n_c$ in the proper direction and to produce shifts with interesting magnitudes. For most values of the parameters the resulting shifts in $B_\ell$ and/or $n_c$ are much too small to be of interest. The combination of these requirements is found to be extremely demanding on the model parameter space, yet two distinct sub-regions do survive.
\vspace*{-0.5cm} \noindent \begin{figure}[htbp] \centerline{ \psfig{figure=voloshin2_res.ps,height=14cm,width=16cm,angle=-90}} \vspace*{-1cm} \caption{Location of surviving points in the $\delta B_\ell-\delta n_c$ plane. The 972 survivors of the $\delta n_c <-0.03$ cut are shown explicitly. The lines represent smoothed versions of the actual locations. The solid (dash-dotted) line corresponds to the solutions with $c_\Delta >(<) 0$. } \label{res} \end{figure} \vspace*{0.4mm} Plotting the values of $\delta B_\ell$ and $\delta n_c$ for the survivors, we find that they essentially lie only along two straight lines in the $\delta B_\ell-\delta n_c$ plane, with the choice of line depending upon the sign of $c_\Delta$, as shown in Fig.~\ref{res}. The corresponding locations of these same points with $\delta n_c \leq -0.03$, projected onto the $c_\Delta-|\xi|$ plane, are shown in Fig.~\ref{res2}. It is amusing to note that the points with $c_\Delta>0$ lie within the region associated with the fit to CLEO's data on the $\chi$ distribution in $B\rightarrow D^*(\rightarrow D\pi)\ell \nu$ obtained above. Assuming $\delta n_c \leq -0.025(-0.03)$, approximately $92.5(75.1)\%$ of the survivors are found to lie in the $c_\Delta >0$ region. The fractional volume of the $\delta n_c \leq -0.025$ parameter space which also allows $\delta n_c \leq -0.03$ is $\simeq 15.5\%$. While the $c_\Delta<0$ parameter space is only reduced to $51.4\%$ of its previous size by strengthening this $\delta n_c$ cut, the $c_\Delta>0$ subspace is drastically reduced to only $12.6\%$ of its previous population by this same cut. What are some of the various properties of the parameter space points that satisfy all our requirements? Mostly they are exactly what one would naively expect. First, all of the 972 survivors have $t\geq 0.0095$, since larger mixing angles are required to enhance the contributions of the RH currents.
Second, in all cases $|V_{cb}^R|\geq 0.908$ and there is a significant preference for larger values of $r$, {\it i.e.}, there are only 4(37) cases with $r=0.0025(0.005)$. \vspace*{-0.5cm} \noindent \begin{figure}[htbp] \centerline{ \psfig{figure=voloshin2_res2.ps,height=14cm,width=16cm,angle=-90}} \vspace*{-1cm} \caption{Locations of the zones in the $c_\Delta-|\xi|$ plane containing the 972 surviving points, which simultaneously satisfy $\delta n_c \leq -0.03$ and $\delta B_\ell \leq -0.01$, in comparison to the envelope of the region allowed by CLEO at $95\%$ CL.} \label{res2} \end{figure} \vspace*{0.4mm} \section{Discussion and Conclusions} The chirality of the $b\rightarrow c$ coupling is one of the most important quantities in $B$ physics. The original work of Gronau and Wakaizumi demonstrated just how little was actually known about this coupling at that time. Since then, after extensive theoretical and experimental effort, the situation remains far from being completely clarified. While CLEO and ALEPH have certainly demonstrated that the $b\rightarrow c$ coupling is dominantly LH, in agreement with the SM, their results remain consistent with the possibility of a sizeable RH coupling. Furthermore, the interpretation of the low value of the $\Lambda_b$ polarization obtained by L3 remains ambiguous and could either be a first signal for RH currents or simply a sign of our ignorance of the strong interactions. All of these experimental analyses have been based on relatively small sample sizes and need to be repeated and improved upon. As we saw in the analysis above, the exclusive $B\rightarrow D^*$ semileptonic decay provides us with a large number of observables that can be used to probe for RH couplings of reasonably small strength.
In addition to the overall partial width, expressible in terms of $V_{cb}^{L~exc}$, the measurements of the $\cos \theta_\ell$ and $\cos \theta_V$ distributions lead directly to the quantities $A_{FB}$ and $\Gamma_L/\Gamma_T$, respectively. By using HQET we performed a fit to the present CLEO results for these quantities and demonstrated that the current bound on the RH coupling strength still remains rather poor, especially if the coupling is allowed to be complex. Improved statistics available at upcoming $B$ factories will help tremendously here. Furthermore, while we showed that the $q^2$ distribution was not very sensitive to RH couplings, the $\chi$ distribution was found to be particularly so and yielded tantalizing indications for the existence of RH currents. Present CLEO data was shown to indicate that future measurements of this distribution will be extremely useful in either constraining or discovering RH couplings. More recent, as yet unpublished, CLEO results{\cite {unp1}} seem to strengthen the case for the existence of right-handed currents based on the $\chi$ distribution. The low $\Lambda_b$ polarization result obtained by ALEPH also remains tantalizing and certainly needs updating. Unfortunately, given our incomplete knowledge of QCD, any interpretation of the result in terms of RH currents cannot be made at present. However, if high precision measurements of the lepton and missing energy spectra become available with only a factor of a few increase in statistics, we saw in the analysis above that sufficient observables do exist to separate the two possible explanations. Further observables may be found to strengthen any conclusions one may draw from future data. The low $\Lambda_b$ measurement seems to be confirmed by as yet unpublished data from both ALEPH and DELPHI{\cite {unp2}}.
Under the assumption that $b\rightarrow c$ RH currents {\it do} exist consistent with the bounds from CLEO, we have tried to address the question raised by Voloshin as to whether such new interactions could assist in solving the long standing problem associated with $B_\ell$ and $n_c$. To address this point we needed to go beyond the model independent results of the previous section and incorporate our $b\rightarrow c$ RH coupling scenario into a larger framework, the most natural one being the LRM. Within this scheme, making a number of assumptions about both the detailed particle spectrum of the model and the nature of the NLO QCD corrections to the RH pieces of the nonleptonic Hamiltonian operator coefficients, we were able to demonstrate that two small regions of the LRM parameter space do exist that push both $B_\ell$ and $n_c$ in the right directions with sufficient magnitudes to be phenomenologically interesting. These small parameter space regions result from a reasonably highly tuned set of LRM parameters, and in all cases $V_{cb}^R$ was found to have a magnitude of order unity. Hopefully, measurements at the new $B$ factories which are soon to turn on will yield signals for physics beyond the Standard Model. Perhaps right-handed currents will be among them. \noindent{\Large\bf Acknowledgements} The author would like to thank J.L. Hewett, A. Kagan, C. Diaconu, S. Stone, Y. Grossman, M. Dittmar, A. Ryd, K. Kiers, J. Wells and M. Worah for discussions related to this work. \newpage \vspace{1in} \centerline{\bf APPENDIX} \vspace{0.1in} In this Appendix, we outline some of the implications of the scenarios discussed above that lead to lower values of both $B_\ell$ and $n_c$ while satisfying the CLEO constraints. Such results, for example, may lead one to speculate on just what form the matrix $V_R$ might take if this type of solution to the $B_\ell-n_c$ problem were to be realized.
This will directly lead to a number of wide ranging implications in all low energy sectors of the theory and not just in $B$ physics. (In fact, there are too many for us to comment upon here with any depth of discussion.) Unfortunately, we do not yet have available a global analysis of RH current phenomenology for arbitrary forms of $V_R$ with a left-right mixing at the per cent level. Such an analysis would be extremely useful for our discussion but is far beyond the scope of the present paper. If we hypothesize{\cite {lang,new}} that in each row or column there is a single element with a magnitude near unity, as is true for the conventional CKM matrix, then there are only two RH mixing matrices which allow for a large $V_{cb}^R$. Following the notation employed in our earlier work{\cite {new}}, we can write these `large element' forms symbolically, neglecting any phases, as \begin{equation} {\cal M}_C = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right)\,,\quad\quad {\cal M}_D = \left( \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{array}\right)\,, \end{equation} with the true $V_R$ being a perturbation about one of these skeletons, just as the CKM matrix is a perturbation about the diagonal unit matrix. As noted elsewhere{\cite {new}}, the structure of these matrices combined with small values of $t$ allows us to easily circumvent the traditional constraints on the LRM from the magnitude of $K_L-K_S$ mixing. However, we also observe here the necessity that at least one of $V_{td}^R$ or $V_{ts}^R$ is large, which can lead to potential problems with the observed rate for, and limits upon, the processes $b\rightarrow d,s\gamma${\cite {alex2}}.
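The key properties of these two skeletons are easy to verify directly. The sketch below (plain Python, with rows labeled $(u,c,t)$ and columns $(d,s,b)$) checks that both are unitary and that both indeed carry $|V_{cb}^R|=1$:

```python
# The two 'large element' skeletons for V_R (phases neglected);
# rows are (u, c, t), columns are (d, s, b).
M_C = [[1, 0, 0],
       [0, 0, 1],
       [0, 1, 0]]
M_D = [[0, 1, 0],
       [0, 0, 1],
       [1, 0, 0]]

def is_unitary(M):
    """Check M M^T = 1 for a real matrix (sufficient for these
    permutation-matrix skeletons)."""
    n = len(M)
    prod = [[sum(M[i][k] * M[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]
    return prod == [[1 if i == j else 0 for j in range(n)] for i in range(n)]

for M in (M_C, M_D):
    assert is_unitary(M)
    assert M[1][2] == 1  # V_cb^R of order unity, as the analysis requires
```

Both forms also make the (t,d) or (t,s) element large, which is exactly the feature that raises the $b\rightarrow d,s\gamma$ issue discussed next.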
As has been discussed in previous analyses of the $b\rightarrow s\gamma$ process within the context of the LRM{\cite {old,bsgme,new}}, interference terms between the RH and LH $W_1-W_2$ contributions obtain enhancements by helicity-flip factors of $m_t/m_b$, though they are also simultaneously suppressed by a factor of $t$. While the pure SM piece is proportional to the product $V_{tb}^LV_{td,s}^L$, these new interference terms are correspondingly proportional to either $V_{tb}^LV_{td,s}^R$ or $V_{tb}^RV_{td,s}^L$; the latter being quite small in our case. However, here we have seen that at least one of the products $V_{tb}^LV_{td,s}^R$ is of order unity. This implies that in the models we have found here such LH-RH interference terms arising from $W_{1,2}-t$ quark penguins may be dangerously large, by factors of order 10, in at least one of the $b\rightarrow s\gamma$ or $b\rightarrow d\gamma$ modes. Of course, one may argue that we are most likely quite ignorant of the true LRM model spectrum and that loop contributions from the non-$W_i-t$ diagrams may eliminate this problem. As is well known, SUSY and charged Higgs exchanges can, for example, yield significant contributions to these penguins, possibly leading to a fine-tuned cancellation amongst the various pieces. This is an unnatural, yet potentially possible, solution. Another potentially important constraint arises from the determination of the relative branching fractions for the $B\rightarrow \psi \pi$ and $B\rightarrow \psi K$ modes by CLEO{\cite {sheldon}} to be $4.3\pm 2.3\%$. This measurement was inconsistent with the predictions of the Gronau and Wakaizumi model, which predicted a ratio of $\sim 10^{-7}$. For the class of models presented here, this result implies only that $V_{cd}^R$ is probably somewhat smaller than $V_{cs}^R$, which is not an unexpected result. As is well known, there are many other non-$B$ physics constraints on the form of $V_R$ which need to be examined.
These are mostly concerned with the specific elements $V_{ud,s}^R$; some of these have a rather long, even controversial, history. Many of these constraints have been extensively reviewed in detail some time ago by Langacker and Uma Sankar{\cite {lang}}, and as stated above it is beyond the scope of the present paper to discuss them at any length except for several comments. These low-energy constraints include, amongst others, potential violations of CKM universality, as suggested by Wolfenstein{\cite {wolf}}, and/or violations of PCAC relations, as suggested by Donoghue and Holstein{\cite {don}}. For the universality constraint, we note that Buras{\cite{buras}} reports $\sum_i|V_{ui}^{L~eff}|^2=0.9972\pm 0.0013$, which is more than $2\sigma$ below the SM expectation, perhaps hinting at new physics. {\it If} no other new physics sources enter other than the existence of $V_R$, we can use the result of this sum to constrain both $Re(V_{ud}^R)$ and $Re(V_{us}^R)$ even when $\kappa t \sim 0.01$. For example, using $|V_{ud}^L|\sim 0.98$ and $|V_{us}^L/V_{ud}^L|\sim 0.22$, this constraint implies \begin{equation} -0.133\pm 0.066 \simeq \Bigg[{{\kappa t}\over {10^{-2}}}\Bigg]~[|V_{ud}^R| \cos \Delta_d +0.22|V_{us}^R|\cos \Delta_s]\,, \end{equation} where $\Delta_i$ is the sum of $\omega$ plus the phase of $V_{ui}^R$. This constraint is easily satisfied for either of the two forms of $V_R$ suggested above assuming reasonable $\Delta_i$. Some possibilities, suggested by an earlier analysis of Matrix D{\cite {new}}, are to either have $V_{us}^R$ of essentially unit magnitude but with a rather large phase, together with $|V_{ud}^R|\sim \lambda^2 \simeq 0.05$ with arbitrary phase, or to have instead $|V_{ud}^R|\sim \lambda \simeq 0.2$ with both $V_{ud,s}^R$ having sizeable phases.
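A rough rederivation of the bracketed combination is possible (a sketch only: we assume the quoted equation comes from linearizing the first-row unitarity sum in the small RH admixture $\kappa t$, and we use the same illustrative inputs $|V_{ud}^L|\sim 0.98$ and $|V_{us}^L/V_{ud}^L|\sim 0.22$):

```python
# Linearized first-row sum, under the assumption
#   sum_i |V_ui^eff|^2 ~ 1 + 2*kappa*t*|V_ud^L|*(|V_ud^R|cos(D_d) + 0.22|V_us^R|cos(D_s)),
# so the measured deficit fixes the bracketed combination.
deficit = 0.9972 - 1.0   # central value reported by Buras
err = 0.0013
Vud_L = 0.98             # illustrative input from the text
kappa_t = 1e-2

bracket = deficit / (2.0 * Vud_L * kappa_t)
bracket_err = err / (2.0 * Vud_L * kappa_t)
print(f"bracket = {bracket:.3f} +- {bracket_err:.3f}")
```

This gives roughly $-0.14\pm 0.066$, consistent with the quoted $-0.133\pm 0.066$ up to rounding of the inputs actually used in the text.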
(At this point we remind the reader, however, that in extended gauge theories such as the LRM there can be other potentially significant contributions to universality violation, {\it e.g.}, $Z'$ exchange, as has been discussed by Marciano and Sirlin{\cite {bill}}.) Interestingly, such possible solutions are also found to easily satisfy a number of additional constraints, including those from PCAC{\cite {don}} (though these need to be updated), those from muon capture on $^3$He{\cite {gova}}, and those on the phase of $g_A/g_V$ in neutron beta decay{\cite {pdg}}. Similarly, the scaling of the strengths of the $V,A$ currents implies that the extracted value of the ratio $(g_A/g_V)/f_\pi$ is exactly the same as in the SM, with no violation of the Goldberger-Treiman relation{\cite {gold}} occurring in the presence of RH currents. While safely avoiding all these bounds, however, these solutions do not help in explaining a possible disparity between the values of $V_{ud}^L$ extracted from neutron decay and those obtained from $0^+\rightarrow 0^+$ and $^{19}$Ne beta decay{\cite {hag}}. This at the very least would require a quite sizeable $V_{ud}^R$. In these same scenarios one might expect somewhat larger effects due to RH currents to now appear in the strange quark sector. Perhaps one of the most significant effects of RH currents here, apart from overall changes in normalizations of constants, is in the $F$ and $D$ parameters describing hyperon decay. The values extracted for these parameters from $\Delta S=0$ and $\Delta S=1$ transitions, corrected for the not yet completely understood $SU(3)$ breaking effects, would appear somewhat different. The reason here is clear: the ratios of the axial-vector to vector coupling constants in the two cases are shifted away from their SM values by different amounts, depending on the form of $V_R$. Although the data remains rather poor, this possibility is not unsupported by the recent analysis of Ratcliffe{\cite {pgr}}.
The implications are, of course, far reaching and extend as far as tests of the Bjorken Sum Rule{\cite {bj}}. We also remind the reader of the well known{\cite {pdg}} potential discrepancy between the value of $V_{us}$ extracted from the vector current coupling in $K_{e3}$ decays and that from hyperon decay data, which probes both axial-vector as well as vector couplings. Clearly, if a possible signature of RH currents arises in $B$ decays, the search for their influence elsewhere becomes ever more important. \newpage \def\MPL #1 #2 #3 {Mod. Phys. Lett. {\bf#1},\ #2 (#3)} \def\NPB #1 #2 #3 {Nucl. Phys. {\bf#1},\ #2 (#3)} \def\PLB #1 #2 #3 {Phys. Lett. {\bf#1},\ #2 (#3)} \def\PR #1 #2 #3 {Phys. Rep. {\bf#1},\ #2 (#3)} \def\PRD #1 #2 #3 {Phys. Rev. {\bf#1},\ #2 (#3)} \def\PRL #1 #2 #3 {Phys. Rev. Lett. {\bf#1},\ #2 (#3)} \def\RMP #1 #2 #3 {Rev. Mod. Phys. {\bf#1},\ #2 (#3)} \def\ZPC #1 #2 #3 {Z. Phys. {\bf#1},\ #2 (#3)} \def\IJMP #1 #2 #3 {Int. J. Mod. Phys. {\bf#1},\ #2 (#3)}
\chapter{On the Wave Function Spread and Localization} \quad In nuclear physics, we often use the three-dimensional harmonic-oscillator (3D HO) potential as a zeroth order approximation to the nuclear mean-field potential. This is usually done in the center-of-mass coordinate system, assuming that all the nucleons experience the same attractive potential, $ H_{0}=\sum_i \left( \frac{\vec{p}_{i}^{2}}{2m}+\frac{1}{2}m\Omega ^{2}\vec{x}_{i}^{2}\right) $\cite{MoshinskyBookOnHO}. If we assume the same localization for the nucleons, there seems to be a localization paradox, since we are dealing with fermions that must obey the Pauli exclusion principle. However, this apparent paradox is resolved by using many-particle Slater determinant wavefunctions, constructed by filling the single-particle levels of the three-dimensional harmonic-oscillator potential. The Slater determinant form satisfies the Pauli principle requirements and yields a different localization structure for each nucleon. A harmonic-oscillator potential is appropriate near stable equilibrium, where the interaction potential should have a local minimum. If rotational invariance applies, then the potential near stable equilibrium should actually be a three-dimensional harmonic-oscillator potential. However, because of the Pauli principle, only the closed-shell nuclei have a spherical shape; other nuclei have non-spherical ground state distributions that can be characterized as oblate, prolate or tri-axial. As a consequence, for non-closed shell nuclei, a deformed three-dimensional harmonic-oscillator potential is a more appropriate ``mean field''. This idea is incorporated in the deformed Nilsson model \cite{Nilsson model}: \[ H=\frac{\vec{p}^{2}}{2m}+\frac{1}{2}m\Omega ^{2}\vec{x}^{2}+\frac{1}{2}\varepsilon m\Omega ^{2}x_{3}^{2}+v_{ll}\vec{l}^{2}+v_{ls}\vec{l}\cdot \vec{s}. \] Here, $\varepsilon $ is a measure of the deformation, while $\vec{l}^{2}$ and $\vec{l}\cdot \vec{s}$ provide for the correct shell closures and magic numbers in nuclei.
Large scale numerical calculations usually use basis functions of the three-dimensional harmonic oscillator (3D HO). In general, the wave-function parameter $\omega $ will then not match the corresponding parameter $\Omega $ in the Hamiltonian. For example, $\Omega _{z}=\Omega \sqrt{1+\varepsilon}$ may be very different from $\omega =\Omega $ in super-deformed nuclei. It is therefore interesting to look at the behavior of the fixed-basis calculations with respect to localization and energy scale for the one-dimensional harmonic oscillator (1D HO). Here we fix the parameters of the 1D HO Hamiltonian to be $m=\Omega =\hbar =1$. Thus, its spectrum is simple: $E_{n}=n+\frac{1}{2}.$ The basis consists of displaced and scaled harmonic-oscillator wave functions, $\Psi_{n}((q+\xi )/\sigma);$ the $\xi =0$ states are squeezed/stretched states, and $\sigma =1$ states are coherent states \cite{Tapia-1993}. All the calculations are done using the default settings for Mathematica 4.1 \cite {Mathematica 4.1}. \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{1DHOScaleConvergence}} \end{center} \caption{Ground-state convergence for the harmonic-oscillator problem with $\Omega =1$ using squeezed basis states. The number of basis states needed for $10^{-4}$ convergence accuracy of the ground-state eigenvalue is shown in green squares. The red circles are for the ground-state eigenfunction.} \label{1DHOScaleConvergence} \end{figure} Fig. \ref{1DHOScaleConvergence} shows the number of fixed-basis states needed to achieve $10^{-4}$ convergence accuracy in the ground-state eigenvalue and eigenvector as a function of the parameter $\omega$. The convergence criterion for the eigenvalues requires two successive eigenvalues to be less than the accuracy limit ($10^{-4}$) apart. The convergence criterion for the eigenvectors is $\left| \left| H\Psi -E\Psi \right| \right| $ $<$ the accuracy limit.
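This kind of convergence study can be reproduced in a few lines (the sketch below uses Python with NumPy rather than the Mathematica employed in the text). Expressing $q$ and $p$ through the ladder operators of the frequency-$\omega$ oscillator gives analytic matrix elements of $H=p^2/2+q^2/2$ in the squeezed/stretched basis, with only $n\leftrightarrow n\pm 2$ couplings:

```python
import numpy as np

def ho_matrix(omega, nmax):
    """Matrix of H = p^2/2 + q^2/2 (m = Omega = hbar = 1) in the lowest nmax
    states of a harmonic oscillator of frequency omega (a squeezed/stretched
    basis).  Only two kinds of matrix elements are nonzero:
      <n|H|n>   = (omega + 1/omega)(2n + 1)/4
      <n|H|n+2> = (1/omega - omega) sqrt((n+1)(n+2))/4
    """
    H = np.zeros((nmax, nmax))
    n = np.arange(nmax)
    H[n, n] = (omega + 1.0 / omega) * (2 * n + 1) / 4.0
    m = np.arange(nmax - 2)
    H[m, m + 2] = H[m + 2, m] = (1.0 / omega - omega) * np.sqrt((m + 1) * (m + 2)) / 4.0
    return H

# With omega = Omega = 1 the basis is exact: a single state gives E_0 = 1/2.
assert abs(ho_matrix(1.0, 1)[0, 0] - 0.5) < 1e-12

# A badly mismatched (stretched) basis with omega = 0.2 still converges:
for nmax in (4, 10, 40):
    e0 = np.linalg.eigvalsh(ho_matrix(0.2, nmax))[0]
    print(nmax, e0)  # approaches the exact value 1/2 from above

assert abs(np.linalg.eigvalsh(ho_matrix(0.2, 40))[0] - 0.5) < 1e-4
```

The ground-state eigenvalue decreases monotonically toward $1/2$ as the basis grows, in line with the variational principle and with the convergence behavior shown in the figures.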
As expected, the eigenvalues converge much earlier than the eigenvectors. \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{1DHOWFSpead}} \end{center} \caption{Role of the wave function spread. (a) Stretched basis states with $\omega=0.2$ within the harmonic-oscillator potential $\Omega=1$. (b) Consecutive approximations of the harmonic-oscillator ground state (red) using the stretched basis states. (c) Squeezed basis states with $\omega=4$ within the harmonic-oscillator potential $\Omega=1$. (d) Consecutive approximations of the harmonic-oscillator ground state (red) using the squeezed basis states.} \label{1DHOWFSpead} \end{figure} Fig. \ref{1DHOWFSpead} (a) shows the first few basis states ($\omega =0.2$), the harmonic-oscillator potential ($\Omega =1$), and the true ground-state wave function. Fig. \ref{1DHOWFSpead} (b) shows the calculated ground-state wave functions at different dimensions of the basis states with $\omega =0.2$. From these graphs, it is clear that when $\omega <\Omega =1$ (Fig. \ref{1DHOWFSpead} (a) and (b)), more and more basis states are needed to produce the correct wave-function behavior within the classically forbidden region. When $\omega >\Omega =1$ (Fig. \ref{1DHOWFSpead} (c) and (d)), the effort is instead concentrated on reproducing the correct shape of the wave function within the potential well. \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{1DHOLocalizationConvergence}} \end{center} \caption{Ground-state convergence for the harmonic-oscillator problem with $\Omega =1$ using coherent basis states. The number of basis states needed for $10^{-4}$ convergence accuracy of the ground-state eigenvalue is shown as green squares. The red circles are for the ground-state eigenfunction.} \label{1DHOLocalizationConvergence} \end{figure} Fig. \ref{1DHOLocalizationConvergence} is similar to Fig.
\ref{1DHOScaleConvergence} but shows the convergence within the displaced (coherent-state) harmonic-oscillator wave function basis ($\Psi _{n}(q+\xi ) $). Due to parity, the results are symmetric under the $\xi \rightarrow -\xi $ transformation. An example of the basis structure and of the convergence path, similar to Fig. \ref{1DHOWFSpead}, is shown in Fig. \ref{1DHOWFLocalization} for $\xi =-2.$ \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{1DHOWFLocalization}} \end{center} \caption{Role of the wave function localization. (a) Coherent basis states with displacement $\xi =-2$ within the harmonic-oscillator potential $\Omega =1$. (b) Consecutive approximations of the harmonic-oscillator ground state (red) using the coherent basis states.} \label{1DHOWFLocalization} \end{figure} Although one should be able to solve any problem in an arbitrarily chosen orthonormal basis, the considerations presented here point to the need for properly modified basis states in order to reduce the model-space dimension and thus avoid problems due to numerical noise. In the process of optimizing the basis set for a particular Hamiltonian, the orthogonality of the basis will in general be lost. In the toy model of a two-mode system, a few possible types of basis-state refinements were considered. Originally, the oblique-basis method was concerned with two or more basis sets, as described in the toy model and in the nuclear physics applications. However, the idea of basis refinement can be extended in quite a general way, as described in the next section. \chapter{Variationally-Improved Basis Method} \quad The usual fixed-basis method can be derived from the Rayleigh-Ritz variation principle.
One considers the minimization of $E\left[\vec{c}\right] $ with respect to $\vec{c}$ for a Hamiltonian $H$, using basis states $\phi _{n}\left(x;\omega \right) $ that are eigenfunctions of a Hermitian operator $Y\left(\omega \right) $, \[ Y\left(\omega \right) \phi _{n}\left(x;\omega \right) =Y_{n}\phi _{n}\left(x;\omega \right), \] with \[ E\left[ \vec{c}\right] =\left\langle \Psi \left| H\right| \Psi \right\rangle-\lambda (\vec{c}\cdot \vec{c}-1),\quad \Psi \left(x\right) =\sum_{n}c_{n}\phi _{n}\left(x;\omega \right). \] Then $\delta E[\vec{c}]/\delta c_{n}^{*}=0$ is equivalent to solving the matrix eigenvalue problem $\sum_{m}H_{nm}c_{m}=\lambda c_{n},$ where $ H_{nm}=\left\langle \phi _{n}\left| H\right| \phi _{m}\right\rangle,$ and thus the set of $\lambda $s provides approximations to the eigenvalues of $H$. If the set $\left\{\phi _{n}\right\} $ is taken to be non-orthogonal, then we have a generalized eigenvalue problem $\sum_{m}\left(H_{nm}-\lambda \mu _{nm}\right) c_{m}=0$, where $\mu _{nm}=\left\langle \phi _{n}|\phi _{m}\right\rangle$. Notice that there is a freedom we have not yet specified: the choice of $\omega $ and $Y\left( \omega \right)$. Here, $\omega $ is the set of parameters characterizing the Hermitian operator $Y\left( \omega \right) .$ Usually one fixes $\omega $ from experience, or by simply applying the Rayleigh-Ritz variation with respect to $\omega $ in $E\left[ \omega \right] =\left\langle \phi _{0}\left( \omega \right) \left| H\right| \phi _{0}\left( \omega \right) \right\rangle $. Thus, we have an orthonormal basis $\phi _{n}$ with the same $\omega $ for every $n.$ One can try other procedures for fixing $\omega $ and $Y\left( \omega \right) $ as well \cite{Skyrme-1957 CinQM}. All this seems fine as long as the spectrum of $H$ is expected to be similar to the spectrum of $Y\left( \omega \right) ,$ but what if the potential for $Y\left( \omega \right) $ does not match the ``landscape'' of the potential for $H$?
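The generalized eigenvalue problem above can be illustrated with a deliberately non-orthogonal basis of two Gaussians for the $\Omega =1$ oscillator; the widths below are arbitrary illustrative choices, and the matrix elements are the standard Gaussian integrals:

```python
import numpy as np
from scipy.linalg import eigh

# Normalized Gaussians phi_a(x) = (a/pi)^(1/4) exp(-a x^2/2); neither width
# equals the exact one (a = 1), so the result is purely variational.
widths = [0.5, 2.0]

def overlap(a, b):
    # mu_ab = <phi_a | phi_b>
    return np.sqrt(2.0) * (a * b) ** 0.25 / np.sqrt(a + b)

def hamiltonian(a, b):
    # H_ab = <phi_a | p^2/2 + x^2/2 | phi_b> with hbar = m = Omega = 1
    return overlap(a, b) * (a * b + 1.0) / (2.0 * (a + b))

S = np.array([[overlap(a, b) for b in widths] for a in widths])
H = np.array([[hamiltonian(a, b) for b in widths] for a in widths])

# Generalized eigenvalue problem: sum_m (H_nm - lambda * mu_nm) c_m = 0
evals, evecs = eigh(H, S)
print(evals[0])  # variational ground-state estimate, bounded below by 1/2
```

Each single Gaussian gives $0.625$ for the ground-state energy, while the non-orthogonal pair already improves this to about $0.519$, above the exact value $1/2$ as the variational principle requires.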
For example, would the harmonic-oscillator wave functions be appropriate for solving an anharmonic-potential problem or a double-well potential? In principle, one should be able to use any basis, but it may be at the expense of long and tedious calculations. Therefore, we may try to let $\omega $ be a free parameter, different for the different basis functions $\phi _{n}$. Then we can find $\omega _{n}$ by variation of $E\left[ \omega \right] =\left\langle \phi _{n}\left( \omega \right) \left| H\right| \phi _{n}\left( \omega \right) \right\rangle $. Often $\omega $ is related to the relevant energy scale. If we start with the correct wave function, but with the `wrong' parameters, then clearly a variational approach on the parameters will give us the right answer immediately. In the case of the harmonic-oscillator wave functions, one can argue that a calculation with a varying $\omega $ is equivalent to a multi-shell calculation with a fixed $\omega $ parameter. Since the harmonic-oscillator basis is complete, each function $\phi _{n}\left(x,\omega _{n}\right) $ can be expanded in the basis associated with $\Omega $, for example: \[ \phi _{n}\left(x,\omega _{n}\right) =\sum_{k}c_{n}^{k}\phi _{k}\left(x, \Omega \right). \] Therefore, $\phi _{n}\left(x,\omega _{n}\right) $ can be viewed as the result of a multi-shell calculation with the harmonic-oscillator parameter $\Omega$. Next, we discuss how, in general, one can refine any initial basis set so that each basis vector in the new and improved basis is optimal with respect to the Hamiltonian under consideration. Then, instead of refining the basis vectors, one can effectively renormalize the parameters in the Hamiltonian.
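The variation of $\omega$ can be sketched for a hypothetical anharmonic test case, $H=p^{2}/2+x^{4}/4$, with a Gaussian (oscillator ground-state) trial of frequency $\omega$, for which $\langle p^{2}\rangle /2=\omega /4$ and $\langle x^{4}\rangle /4=3/(16\omega ^{2})$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical test case: H = p^2/2 + x^4/4 with a Gaussian trial state of
# frequency omega (hbar = m = 1); the bounds below are arbitrary.
def energy(omega):
    return omega / 4.0 + 3.0 / (16.0 * omega ** 2)

res = minimize_scalar(energy, bounds=(0.1, 10.0), method="bounded")

# Stationarity dE/domega = 0 gives omega^3 = 3/2, hence
# E_min = (3/8) * (3/2)^(1/3), which the numerical minimum reproduces.
print(res.x, res.fun)
```

This single-parameter optimization fixes the energy scale of the trial state; repeating it for each $\phi_n$ gives the state-dependent parameters $\omega_n$ discussed above.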
The main idea is to optimize each trial vector by applying the Rayleigh-Ritz variational principle to $E\left[ \Psi ,\emph{A}\right] $: \[ E\left[ \Psi ,\emph{A}\right] =\left\langle A\Psi \left| H\right| A\Psi \right\rangle ,\quad \delta E\left[ \Psi ,\emph{A}\right] =0, \] where $\emph{A}$ is the affine group in $\mathbf{R}^{n}$. An element $a$ of $\emph{A}$ has a rotational component $r$ and a translational component $t,$ so that $(ax)_{j}=r_{j}^{\;i}x_{i}+t_{j}$ (summation over the repeated index $i$ is implied). If $G$ is the symmetry group of $H$, then: \[ g^{-1}Hg=H,\quad g\in G. \] Therefore, the Rayleigh-Ritz variational principle should be applied with respect to the homogeneous space $M=\emph{A/G}$ that excludes the symmetry transformations $G$. For example, a translational symmetry of $H$ means that scaling and rotation are the relevant transformations. Since physical systems usually have rotational and translational symmetry, only scaling is left as a relevant operation for constructing a variationally-improved basis: \[ \Psi \left( x_{1},...,x_{n}\right) \rightarrow \Psi \left( sx_{1},...,sx_{n}\right) . \] The transformation $\left| \Psi \right\rangle \rightarrow \left| \emph{A} \Psi \right\rangle $ can be defined to maintain the normalization of the states: \[ \Psi \left( x\right) \rightarrow A\Psi \left( x\right) =\sqrt{\det \left( r\right) }\Psi \left( rx+t\right) . \] However, this transformation is not a unitary transformation in general, and therefore it will not map orthonormal states into a new set of orthogonal states. Using scaling as a variational parameter has been done previously; specifically, in the context of confined systems it was used by Mar\'{\i}n and Cruz to study hydrogen and helium enclosed in a spherical shell \cite{Marin and Cruz-1991 AJP}, \cite{Marin and Cruz-1991 JPB}.
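As a numerical sketch of this scale variation for the one-dimensional oscillator $H=(P^{2}+Q^{2})/2$ (the trial state below is an arbitrary even function, deliberately not the exact ground state, and $\hbar =m=1$ is assumed):

```python
import numpy as np

# Scale variation Psi(x) -> sqrt(s) Psi(s x); the even trial state ensures
# <Q> = <P> = 0, so only the widths enter the energy functional.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi = (1.0 + x ** 2) * np.exp(-x ** 2 / 2.0)
psi /= np.sqrt(np.sum(psi ** 2) * dx)

q2 = np.sum(x ** 2 * psi ** 2) * dx          # <Q^2>
p2 = np.sum(np.gradient(psi, dx) ** 2) * dx  # <P^2> = int |psi'|^2 dx

def energy(s):
    # E[Psi, s] = s^2 <P^2>/2 + <Q^2>/(2 s^2)
    return s ** 2 * p2 / 2.0 + q2 / (2.0 * s ** 2)

s = np.linspace(0.3, 3.0, 2701)
e_min = energy(s).min()
print(e_min, np.sqrt(q2 * p2))  # minimum equals Delta q * Delta p >= 1/2
```

The minimum over the scale $s$ equals the uncertainty product $\Delta q\,\Delta p$ of the trial state, bounded below by the zero-point energy $1/2$.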
If $\int dy=\int \det \left( r\right) dx$ is used with $y=rx+t$, then the variationally-improved basis can be treated as a renormalization problem for the Hamiltonian $H$: \begin{eqnarray*} E\left[ \Psi ,\emph{A}\right] &=&\left\langle A\Psi \left| H\right| A\Psi \right\rangle =\int \det \left( r\right) \Psi ^{*}\left( rx+t\right) H\left( p,x\right) \Psi \left( rx+t\right) dx= \\ &=&\int \Psi ^{*}\left( y\right) H\left( rp,r^{-1}\left( y-t\right) \right) \Psi \left( y\right) dy. \end{eqnarray*} Rescaling in this way is reminiscent of the scaling methods used in condensed-matter physics. For example, consider the one-dimensional harmonic oscillator: \[ H=\frac{1}{2}P^{2}+\frac{1}{2}Q^{2}. \] Then, the equation for the scale parameter $s$, obtained from $E\left[ \Psi ,\emph{s}\right] $ with $\Psi \left( x\right) \rightarrow \sqrt{s}\Psi \left( sx\right) $, is: \begin{eqnarray*} E\left[ \Psi ,\emph{s}\right] &=&\frac{1}{2}s^{2}\left\langle P^{2}\right\rangle +\frac{1}{2}\frac{1}{s^{2}}\left\langle Q^{2}\right\rangle , \\ \frac{\partial E\left[ \Psi ,\emph{s}\right] }{\partial s^{2}} &=&0\Rightarrow s^{2}=\sqrt{\frac{\left\langle Q^{2}\right\rangle }{ \left\langle P^{2}\right\rangle }}=\frac{\Delta q}{\Delta p}. \end{eqnarray*} Here, $\left\langle Q^{2}\right\rangle =\left\langle \Psi \left| Q^{2}\right| \Psi \right\rangle $ and $\Delta q=\sqrt{\left\langle Q^{2}\right\rangle }$. It is assumed that $\Psi $ is such that $\left\langle Q\right\rangle =\left\langle P\right\rangle =0$, which means that the localization of the wave function has already been fixed. Evaluating $E\left[ \Psi ,\emph{s}\right] $ at the extremum $s^{2}=\Delta q/\Delta p$ gives: \[ E\left[ \Psi \right] =\Delta q\Delta p.
\] Finally, using $\left[ p,q\right] =-i\Rightarrow \Delta q\Delta p\geq \frac{1 }{2}$, we find that the minimum of the energy is exactly the zero-point energy of the harmonic oscillator, $E_{0}=\frac{1}{2}.$ Notice that quantum mechanics was only used to provide us with a constraint on the fluctuations of the observables $q$ and $p$; other than that, we can consider the system as purely statistical. Thus, different $\Delta q\Delta p$ will give different values of $E\left[ \Psi \right] $. Turning this argument around, we would expect $\Delta q\Delta p\geq \left( n+\frac{1}{2}\right) $ when $ \Delta q$ and $\Delta p$ are evaluated in the space of wave functions with $n$ nodes. Another interesting way to obtain the same result is to use $H$ expressed in terms of the operators $a^{+}$ and $a$. Then, by using coherent states as trial wave functions, $a\left| z\right\rangle =z\left| z\right\rangle $, we have $E\left[ z\right] =\left| z\right| ^{2}+\frac{1}{2}$ and thus $E_{0}= \frac{1}{2}$. \chapter{On the Loss of Hermiticity} \quad When the choice of the basis is not made with appropriate attention, an operator that is supposedly Hermitian may acquire a non-Hermitian matrix realization within this basis. For example, a wrong basis may produce a non-Hermitian matrix for the Hamiltonian under consideration. Although this is unlikely to be encountered within finite shell-model calculations using an occupation-number representation, it is an obstacle when one wishes to use a hard-core potential and a harmonic-oscillator basis \cite{MoshinskyBookOnHO}. Here we discuss the problem of a free particle in a one-dimensional box in the harmonic-oscillator basis. In order to proceed, we notice that the Hilbert space for the harmonic oscillator is not quite the same as that for a free particle in a one-dimensional box. This is clear from the domains of the wave functions.
The harmonic-oscillator wave functions are defined on the whole real axis $\mathbf{R}^{1},$ while the wave functions for a free particle in a box are defined on a finite interval $[-L,L].$ This discrepancy is easily fixed by projecting the harmonic-oscillator wave functions onto the interval $[-L,L]$, which changes the inner product for the wave functions: \[ \left( f,g\right) =\int_{-\infty }^{\infty }f^{*}\left( x\right) g\left( x\right) dx\rightarrow \int_{-L}^{L}f^{*}\left( x\right) g\left( x\right) dx. \] However, in this basis the matrix corresponding to $H$ will in general be non-Hermitian. To understand the loss of hermiticity, we look at the off-diagonal matrix elements of the momentum operator ($P=-i\hbar \frac{\partial }{\partial q}$): \begin{eqnarray*} (\Psi _{m},P\Psi _{n}) &=&\int\limits_{-L}^{L}\Psi _{m}^{*}(q)(P\Psi _{n}(q))dq=\int\limits_{-L}^{L}\Psi _{m}^{*}(q)(-i\hbar \frac{\partial \Psi _{n}(q)}{\partial q})dq= \\ &=&-i\hbar \int\limits_{-L}^{L}\frac{\partial (\Psi _{m}^{*}(q)\Psi _{n}(q))}{\partial q}dq+i\hbar \int\limits_{-L}^{L}\frac{\partial \Psi _{m}^{*}(q)}{ \partial q}\Psi _{n}(q)dq= \\ &=&-i\hbar \left. (\Psi _{m}^{*}(q)\Psi _{n}(q))\right| _{-L}^{L}+\int\limits_{-L}^{L}\left( -i\hbar \frac{\partial }{\partial q} \Psi _{m}(q)\right) ^{*}\Psi _{n}(q)dq= \\ &=&-i\hbar \left. (\Psi _{m}^{*}(q)\Psi _{n}(q))\right| _{-L}^{L}+(P\Psi _{m},\Psi _{n}). \end{eqnarray*} It is clear from the above expression that the \textit{hermiticity will be maintained only when all of the basis functions are zero\footnote{Wave functions having the same value at $\pm L$ is a necessary condition; the wave functions must be zero at $\pm L$ only for an infinite potential.} at the boundary of the interval} $[-L,L]$. This condition is essential for solving exactly the quantization of a free particle in a one-dimensional box.
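A small numerical sketch makes the loss of hermiticity explicit (assuming $\hbar =1$, the first two oscillator eigenfunctions, and an arbitrary choice $L=2$): the anti-Hermitian defect of the momentum matrix equals the boundary term derived above.

```python
import numpy as np

L = 2.0
x = np.linspace(-L, L, 4001)
dx = x[1] - x[0]

# First two harmonic-oscillator eigenfunctions (hbar = m = Omega = 1),
# truncated to the interval [-L, L]
psi0 = np.pi ** -0.25 * np.exp(-x ** 2 / 2.0)
psi1 = np.sqrt(2.0) * np.pi ** -0.25 * x * np.exp(-x ** 2 / 2.0)

def p_element(f, g):
    # (f, P g) = int_{-L}^{L} f * (-i g') dx  with hbar = 1
    return np.sum(f * (-1j) * np.gradient(g, dx)) * dx

# Hermiticity defect (Psi_0, P Psi_1) - (P Psi_0, Psi_1) and the
# corresponding boundary term -i [psi0 psi1] evaluated at +-L
defect = p_element(psi0, psi1) - np.conj(p_element(psi1, psi0))
boundary = -1j * (psi0[-1] * psi1[-1] - psi0[0] * psi1[0])
print(defect, boundary)  # equal and nonzero: hermiticity is lost
```

For states of opposite parity the product $\Psi_{0}\Psi_{1}$ does not vanish at $\pm L$, so the defect is nonzero; it disappears only when the basis functions vanish at the boundary.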
\chapter{Coherent Behavior, Quasi-Symmetries and Quasi-Labels} \quad Recently, the notions of a quasi-symmetry and of adiabatic mixing have been introduced in nuclear physics \cite{Adiabatic mixing}. The toy model of a harmonic oscillator in a one-dimensional box can be used to introduce and illustrate one possible definition of a quasi-symmetry, an asymptotic label (quasi-label), and a coherent behavior associated with a quasi-symmetry. First, we define a similarity relation between two states $\left| \Phi \right\rangle $ and $\left| \Psi \right\rangle $ with respect to some Hermitian operator $\mathcal{H}$, and denote it as: \[ \left| \Phi \right\rangle \stackrel{\mathcal{H}}{\sim }\left| \Psi \right\rangle . \] In this approach, the operational definition of such a similarity relation uses the eigenvectors $\left| \mathcal{H};\Lambda \right\rangle $ and the eigenvalues $\Lambda $ of $\mathcal{H}$: \[ \mathcal{H}\left| \mathcal{H};\Lambda \right\rangle =\Lambda \left| \mathcal{ \ H};\Lambda \right\rangle . \] We say that $\left| \Phi \right\rangle $ and $\left| \Psi \right\rangle $ are $\mathcal{H}$ similar ($\left| \Phi \right\rangle \stackrel{\mathcal{H}}{\sim }\left| \Psi \right\rangle $) if there is a function $f$, preferably monotonic, that maps the distribution $\rho _{\Phi }\left( \Lambda \right) =\left| \left\langle \mathcal{H};\Lambda |\Phi \right\rangle \right| ^{2}$ to $\rho _{\Psi }\left( \Lambda \right) =\left| \left\langle \mathcal{H};\Lambda |\Psi \right\rangle \right| ^{2}$ so that: \[ \rho _{\Phi }\left( \Lambda \right) \approx \rho _{\Psi }\left( f\left( \Lambda \right) \right) . \] In simple words, this means that the shape of the probability distribution $\rho _{\Phi}$ is similar to the shape of the probability distribution $\rho_{\Psi}$.
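As an illustration of $\mathcal{H}$-similar states (a sketch; $\mathcal{H}$ is taken to be the 1D harmonic-oscillator Hamiltonian and the states are displaced ground states, i.e., coherent states, with arbitrary displacement):

```python
import numpy as np
from math import factorial

x = np.linspace(-12.0, 12.0, 6001)
dx = x[1] - x[0]

def ho_state(n):
    # Harmonic-oscillator eigenfunction via the Hermite recurrence
    # H_{k+1} = 2 x H_k - 2 k H_{k-1}  (hbar = m = Omega = 1)
    h_prev, h = np.ones_like(x), 2.0 * x
    if n == 0:
        h = h_prev
    else:
        for k in range(1, n):
            h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    norm = (2.0 ** n * factorial(n) * np.sqrt(np.pi)) ** -0.5
    return norm * h * np.exp(-x ** 2 / 2.0)

def distribution(q0, nmax=20):
    # rho(n) = |<H; n | Phi>|^2 for the displaced ground state psi_0(x - q0)
    phi = np.pi ** -0.25 * np.exp(-(x - q0) ** 2 / 2.0)
    return np.array([(np.sum(ho_state(n) * phi) * dx) ** 2
                     for n in range(nmax)])

rho = distribution(2.0)
mu = 2.0 ** 2 / 2.0  # Poisson mean |z|^2 = q0^2 / 2
poisson = np.array([np.exp(-mu) * mu ** n / factorial(n) for n in range(20)])
print(np.abs(rho - poisson).max())
```

Every coherent state has a Poisson-shaped distribution over the oscillator label $n$, so any two of them are $\mathcal{H}$ similar: the map $f$ essentially shifts the mean of the distribution.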
If $G$ is a symmetry group of the operator $\mathcal{H}$, then the eigenspace for a given $\Lambda $ may be degenerate, and any state $\left| \Phi \right\rangle $ obtained from $\left| \Psi \right\rangle $ by a unitary transformation $U\in G$ ($\left| \Phi \right\rangle =U\left| \Psi \right\rangle $) will be $\mathcal{H}$ similar to $\left| \Psi \right\rangle $. Thus, $G$ defines an intrinsic symmetry for the wave functions that are similar to $\left| \Psi \right\rangle $. Therefore, $\left| \Psi \right\rangle $ can be viewed as an ``intrinsic state'' with respect to the symmetry of $\mathcal{H}$. In the case when $\mathcal{H}$ is one of the exact limits of a Hamiltonian $H$, i.e. $H=\mathcal{H}+\lambda ^{-1}V$, one can define an adiabatic mixing of the states $\left| \mathcal{H};\Lambda \right\rangle $ due to the interaction $V$. An asymptotic label (quasi-label) $\Lambda $ can be assigned to each eigenvector $\left| \Psi ;\lambda \right\rangle $ of $H=\mathcal{H}+\lambda ^{-1}V$ in the limit $\lambda \rightarrow \infty$: \[ \mathcal{H}\left| \Psi ;\Lambda ,\lambda \rightarrow \infty \right\rangle =\Lambda \left| \Psi ;\Lambda ,\lambda \rightarrow \infty \right\rangle; \] thus, \[ \left| \Psi ;\Lambda ,\lambda \right\rangle \rightarrow \left| \mathcal{H} ;\Lambda \right\rangle \quad \mbox{when}\quad \lambda \rightarrow \infty . \] Assigning $\Lambda $ by using the natural order of the levels must be done carefully, by tracking each level crossing. Once the asymptotic label $\Lambda $ has been assigned to a state $\left| \Psi \right\rangle $, a coherent behavior with respect to an observable can be defined as well.
There is a quasi-symmetry $\mathcal{H}$ for the observable $O:\left| \Psi \right\rangle \rightarrow \mathbf{R}$ if its value $O\left[ \left| \Psi ;\Lambda ,\lambda \right\rangle \right] $ does not depend much on the parameter $\lambda $: \[ O\left[ \left| \Psi ;\Lambda ,\lambda \right\rangle \right] \approx O\left[ \left| \mathcal{H};\Lambda \right\rangle \right] . \] Some common choices for $O\left[ \left| \Psi ;\Lambda ,\lambda \right\rangle \right] $ are related to the expectation values of $O$ at finite $\lambda$: \[ \left\langle \Psi ;\Lambda ,\lambda \right| O\left| \Psi ;\Lambda ,\lambda \right\rangle \approx \left\langle \Lambda \right| O\left| \Lambda \right\rangle . \] In general, a coherent behavior with respect to other quantities can be defined as well. For example, a relative transition rate from a state $ \left| \Psi ;J\right\rangle $ to a state $\left| \Psi ;J+2\right\rangle$ due to a transition operator such as $E2$ can be defined, e.g. by using $B\left( E2,\Psi ,J\right) $, where $\Psi $ is the ``intrinsic state'' upon which the band is built. All the states within the band should actually be within the class of $\mathcal{H}$ equivalent states. In particular, the asymptotic label $\Lambda $ can be used as a band label. Notice that, as in the toy model studied, it may happen that at finite $\lambda$ the wave function $\left| \Psi \right\rangle $ has been assigned the label $\Lambda $ while its components along the space $\left| \mathcal{H} ;\Lambda \right\rangle $ are practically missing. Following the results from the toy model, we can define some possible types of spectral structures that may exhibit such coherent behavior with respect to a Hamiltonian $H=\mathcal{ H}+V,$ and thus specify a quasi-symmetry.
Specifically, setting $\lambda =1,$ $H=\mathcal{H}+V$ has a quasi-symmetry if: \begin{eqnarray*} H\left| \Psi ;\Lambda _{n}\right\rangle &=&E\left( \Lambda _{n}\right) \left| \Psi ;\Lambda _{n}\right\rangle , \\ \mathcal{H}\left| \Lambda _{n}\right\rangle &=&\Lambda _{n}\left| \Lambda _{n}\right\rangle ,\quad \Lambda _{n+1}>\Lambda _{n}, \\ \Lambda _{n} &>&\left\langle \Lambda _{n}\left| V\right| \Lambda _{n}\right\rangle >\Lambda _{n+1}-\Lambda _{n}, \\ E\left( \Lambda _{n}\right) &\approx &\left\langle \Lambda _{n}\left| H\right| \Lambda _{n}\right\rangle . \end{eqnarray*} Here, $\Lambda _{n}$ is the corresponding asymptotic label of the state $\left| \Psi ;\Lambda _{n}\right\rangle .$ The condition $\left\langle \Lambda _{n}\left| V\right| \Lambda _{n}\right\rangle >\Lambda _{n+1}-\Lambda _{n}$ means that $V$ strongly mixes different $\left| \Lambda _{n}\right\rangle$ states. Therefore, \textit{perturbation theory cannot be applied in the usual small-perturbation regime}. However, $\Lambda _{n}>\left\langle \Lambda _{n}\left| V\right| \Lambda _{n}\right\rangle $, together with $ E\left( \Lambda _{n}\right) \approx \left\langle \Lambda _{n}\left| H\right| \Lambda _{n}\right\rangle =\Lambda _{n}+\left\langle \Lambda _{n}\left| V\right| \Lambda _{n}\right\rangle $, means that the spectral structure of $H$ in this region is similar to the spectral structure of $\mathcal{H}$ to within a few percent: \[ \frac{E\left( \Lambda _{n}\right) -\Lambda _{n}}{E\left( \Lambda _{n}\right) }\approx \frac{\left\langle \Lambda _{n}\left| V\right| \Lambda _{n}\right\rangle }{\Lambda _{n}+\left\langle \Lambda _{n}\left| V\right| \Lambda _{n}\right\rangle }\approx \frac{\left\langle \Lambda _{n}\left| V\right| \Lambda _{n}\right\rangle }{\Lambda _{n}}<1. \] This seems to be the situation discussed in the toy-model case.
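A numerical sketch of these spectral conditions for the toy model of a harmonic oscillator in a one-dimensional box ($\mathcal{H}$ is the box kinetic term on $[-L,L]$, $V=x^{2}/2$; the values of $L$, the basis size, and the grid are arbitrary choices):

```python
import numpy as np

L, nmax = 4.0, 60
x = np.linspace(-L, L, 4001)
dx = x[1] - x[0]

# Box eigenbasis and its eigenvalues Lambda_n (hbar = m = 1)
basis = np.array([np.sqrt(1.0 / L) * np.sin(n * np.pi * (x + L) / (2.0 * L))
                  for n in range(1, nmax + 1)])
lam = np.array([(n * np.pi / (2.0 * L)) ** 2 / 2.0
                for n in range(1, nmax + 1)])

V = basis @ ((x ** 2 / 2.0) * basis).T * dx  # V_mn = <Lambda_m|x^2/2|Lambda_n>
H = np.diag(lam) + V

evals = np.linalg.eigvalsh(H)
diag = np.sort(np.diag(H))

# Low end: strong mixing (the ground state is oscillator-like, E ~ 1/2,
# far from the diagonal value).  High end: E(Lambda_n) ~ <Lambda_n|H|Lambda_n>.
rel = np.abs(evals[30:40] - diag[30:40]) / evals[30:40]
print(evals[0], diag[0], rel.max())
```

At the low end the off-diagonal elements of $V$ exceed the level spacings and the diagonal approximation fails badly, while for the higher levels the eigenvalues track $\Lambda _{n}+\left\langle \Lambda _{n}\left| V\right| \Lambda _{n}\right\rangle $ to within a few percent, as claimed above.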
Since $\Lambda _{n}\sim n^{2},$ the condition $2\Lambda _{n}>\Lambda _{n+1}$ gives $(n-1)^{2}>2$, which is satisfied for $n\geq 3.$ \chapter{Guide to the Oblique-Basis Package} \quad Although most of the routines\footnote{The \textit{Oblique-Basis Package 2002} is available from the author upon request.} in the \textit{Oblique-Basis Package 2002} can be compiled by using the \textit{makefile} routines provided, one needs to follow a few simple steps in order to carry out oblique-basis calculations. Here we describe some of the technical problems one may face in using the \textit{Oblique-Basis Package 2002}, and their solutions. The process of running an oblique-basis calculation consists of four main steps, which are described below: \begin{itemize} \item[(1)] selecting and generating basis states ($nuke$, $PNGGMJ$), \item[(2)] preparing interaction file(s) ($IsoInt2pn,$ $MakeInteractions$), \item[(3)] evaluating matrix elements for the chosen interactions ($su3pn$), \item[(4)] solving the generalized eigenvalue problem, which includes: obtaining the eigenvectors of the Hamiltonian, calculating expectation values and transition probabilities for some desired operators ($GLanczos$). \end{itemize} The names of the relevant routines are given in parentheses. Some current limitations of the \textit{Oblique-Basis Package 2002} are related to the single-particle basis employed in the computations. In its present form, the main routine $su3pn$, which is used to evaluate the matrix elements of the operators (step 3 above), is set to operate on spherical and cylindrical single-particle states. This clearly restricts the basis-generation process (step 1 above) to the same type of states (spherical and cylindrical).
Even though the spherical and cylindrical single-particle states are of special interest, the code could, at least in principle, be changed to operate on other desirable single-particle states, such as those obtained via a Hartree-Fock procedure. \section{Generating Basis States} \quad Since the oblique-basis idea is to combine two or more basis sets, the oblique-code package has been designed to use basis states generated and used by other shell-model packages. The two main codes we have in mind are: a version of the Glasgow code, which performs calculations in the spherical shell-model basis, and the SU(3) RME code, which performs calculations in the SU(3) shell-model basis using Cartesian (cylindrical) single-particle states. \paragraph{Spherical Shell-Model States.} To generate spherical shell-model basis states, one needs to run the Glasgow code ($nuke$, located in the folder $Glasgow-code$) with the desired configuration limits and with $IBASIS=-1$ in the appropriate interaction file ($*.int$). For more details, see the example file $BasisOnly.int$ and the files $Glasgow.int-instructions$ or \textit{Instructions Glasgow test9.doc}. There is a script file, $RunGlasgow.sc$, which one may find useful when running $nuke$. This script file uses head files (*.head) and an interaction file (*.int) to construct different input files (*.inp) for $nuke.$ The wave functions from $nuke$ are stored in the file $fort.60$ (see $Basis-states.info$ for its structure). This file is used by $GlsgwBasis2Redstick$, located in the folder $Oblique-Glasgow,$ to produce the file $fort.35$, which contains the spherical shell-model basis states and the single-particle states data. If $nuke$ (Glasgow) is used to calculate the eigenvectors of a Hamiltonian, then these eigenvectors are extracted from file $fort.60$ into file $fort.36$. \textbf{Important note}: File $fort.11$ is a very important pre-generated file. This file ($fort.11$) is used by both $nuke$ and $GlsgwBasis2Redstick$.
\textbf{Do not lose the file }$fort.11$! \textbf{Tip}: In oblique runs of the $su3pn$ code, consider using sorted spherical shell-model states. Such states are produced by the $EpsSorting$ code located in the folder $Oblique/SphToCyl$. \paragraph{SU(3) Shell-Model States.} Even though C. Bahri's SU(3) RME code may provide the SU(3) highest-weight states in the near future, the \textit{Oblique-Basis Package 2002} has its own highest-weight state (HWS) generator for some of the most important SU(3) irreps. The SU(3)-related Fortran codes are in the folder $SU3Generator$. The current highest-weight state generator $SU3\_HWS\_GEN$ is located in a folder with the same name. Before using $SU3\_HWS\_GEN,$ one may need to run the auxiliary code $SU3Lister$, which helps in the HWS selection process. The $SU3Lister$ code also generates the necessary cylindrical single-particle states (file $cylin.sps$) that are needed in the evaluation of matrix elements of operators through the $su3pn$ code. \textbf{Important note}: Before doing any runs, compile file $fort.4$ using $SU3GENBK$, located in the folder $ProjectLibrary$. \textbf{Tip}: By comparing the output of $SU3Lister$ with the output of $genwsirl$, one can determine the irreps that are not generated by the generator $SU3\_HWS\_GEN$. Once the highest-weight states are created, they are stored in a file with extension $*.hws.$ This file is used as an input file to the SU(3) code $PNGGMJ$, located in the sub-folder $PNGGMJ.$ The code $PNGGMJ$ generates two files containing basis states, $*.su3$ and $*.bas$. It also generates a file $PNGGMJ\_Brief\_Info.log$, which contains information that may be used to set the parameters \textbf{max\_nbas\_su3} and \textbf{maxmpc} for the $su3pn$ code. The $*.su3$ file contains some of the SU(3)-related information and is usually used by the testing tools located in the sub-folder $ProjectTools$. The $*.bas$ file is mainly for use by the $su3pn$ code.
\section{Generating Interaction Files} \quad Even though there are a few commonly used realistic interactions (KB3, Wildenthal, ...), their file formats may differ significantly. Interactions given in the isospin format, as used by the Glasgow code, can be transformed into the proton-neutron format by using the code $IsoInt2pn$, located in the folder $Oblique-Glasgow$. Often-used schematic interactions are also available through a package called $MakeInteractions$, written by Dr. C. Johnson. $MakeInteractions$ allows one to generate combinations of frequently studied nuclear interactions. The menu of the currently available interactions is: \begin{itemize} \item[(0)] Random noise (TBRE, two-body random ensemble), \item[(1)] Pairing, \item[(2)] Multipole-multipole (you choose L), \item[(3)] S\symbol{94}2 (total spin), \item[(4)] L\symbol{94}2 (total orbital angular momentum), \item[(5)] J\symbol{94}2 (total angular momentum), \item[(6)] L*S (spin-orbit) 1+2 body. \end{itemize} \section{Running the Main Routines} \paragraph{Evaluating Matrix Elements of Operators.} Runs of the main programs usually take considerable time; thus, it is better to use a script file for such runs.
Some example script files are $RunSu3pn+GLDriver.sc$ and $RunSu3pnGLanzos.sc.$ Basically, the input of the $su3pn$ code requires the following entries to be specified: \begin{itemize} \item[$>$] name of the file containing the single-particle levels, with extension *.sps, \item[$>$] scaling of the two-body matrix elements, $\left( A/B\right) \symbol{94}X$, \item[$>$] interaction file (*.int), \item[$>$] name of the file containing the cylindrical single-particle levels (*.sps), \item[$>$] name of the file containing the SU(3) basis states (*.bas), \item[$>$] name of the file containing the spherical shell-model basis states (*.bas), \item[$>$] desired name for the output files (*.ham and *.ovr). \end{itemize} \paragraph{Eigenvectors, Expectation Values and Transition Matrix Elements.} After the generation of the Hamiltonian and operator matrices, and of the overlap matrices (if needed), which are stored in files with extensions *.ham and *.ovr, one has to run the generalized Lanczos code ($GLanczos$) to obtain the eigenvalues, eigenvectors, expectation values, and transition matrix elements. There is a script file, $RunGLDriver.sc$, which may be used. \chapter{Introduction} \quad Selecting the right basis for performing calculations is an essential step in analyzing any eigenvalue problem; this is especially true for many-body quantum-mechanical problems. When performing calculations, symmetries are also very important. Each of the fundamental quantities, such as energy ($E$), linear momentum ($p$), and angular momentum ($L$), is conserved due to an exact symmetry of the physical space. It is well known that energy conservation is due to time-translational symmetry, linear momentum conservation is due to space-translational symmetry, and angular momentum conservation is related to rotational symmetry. Any mathematical description used in physics takes advantage of these symmetries and incorporates them explicitly.
For example, in the Lagrangian (Hamiltonian) formalism, the Lagrangian (Hamiltonian) of the system is explicitly invariant with respect to the fundamental symmetries, such as time translation, space translation and space rotation. Most examples of exactly solvable problems come from systems with some type of symmetry \cite{Elliott-Symmetries}. The notion of a stable equilibrium state, which is often related to an energy minimum, is another very important concept in physics. If $\vec{x}_{0} $ is an equilibrium point of the Hamiltonian function ($H$) for a classical particle, then $H$ can conveniently be expanded in a Taylor series around $\vec{x}_{0}$: \[ H=H\left( \vec{x}_{0}\right) +\frac{1}{2m}\vec{p}^{2}+\frac{1}{2}k(\vec{x}-\vec{x}_{0})^{2}+\mathcal{O} \left( \Delta x^{3}\right). \] In this way, the harmonic oscillator described by the Hamiltonian $H=\frac{1}{2m} \vec{p}^{2}+\frac{1}{2}k\vec{x}^{2}$ turns out to be one of the most important model systems in physics, with many applications \cite{MoshinskyBookOnHO}. $SU\left( n\right) $ is the symmetry group of the $n$-dimensional harmonic oscillator, while $Sp(2n,R)$ is the corresponding dynamical group. Thus, the $SU(3)$ symmetry of the three-dimensional harmonic oscillator is a very important approximate symmetry of a system near equilibrium. Symmetries are very useful in the construction of shell-model structures in nuclear physics as well as in atomic physics. A shell-model structure is based on some exactly solvable limit of an effective interaction potential. An exactly solvable system allows for a well-defined set of basis states. In particular, if bound states exist, they can be considered as single-particle levels of the system. Usually, a shell model assumes a mean field with which the particles of the system interact. For example, in atomic physics the shell structure is mainly due to the Coulomb field of the nucleus, while in nuclear physics the mean field is often taken to be the Hartree-Fock mean field.
In this approach, the particle-particle interaction is assumed to be incorporated as much as possible in the average mean field. In particular, the nuclear spherical shell model is very successful in the description of nuclei \cite{Heyde's-shell model}. Despite the enormous success of the spherical shell model, it is generally difficult to deal with nuclei in the middle of the shell (mid-shell nuclei) using this model. For such nuclei the collective degrees of freedom are essential and the shell-model configuration space is very large. Therefore, for these nuclei, a shell model based on the collective degrees of freedom is more appropriate. Elliott's $SU(3)$ model is useful for understanding the collectivity in light nuclei, up to $A\approx 28$ ($sd$-shell) \cite{Elliott's SU(3) model}. For heavier nuclei with $A>80$, the pseudo-$SU(3)$ version of Elliott's model is very successful in the description of the collective modes \cite{pseudo SU(3) symmetry}. For these nuclei ($A>80$), the deformed Nilsson model is more accurate in the description of the single-particle levels than the simple spherical shell model \cite{Nilsson model}. At least in principle, collective phenomena, such as rotational spectra with strong $B(E2)$ transitions, should be reproduced by the microscopic models. However, to do so using the spherical shell model, one needs sufficiently many particle configurations. Unfortunately, the dimensionality of the space grows combinatorially with the number of particles placed in the allocated levels. This combinatorial growth is a major computational problem. On the other hand, the $SU(3)$ model allows for a good understanding of the collective nuclear properties in light and heavy mid-shell nuclei. However, for nuclei near closed shells, the spherical shell model is more favorable due to the dominance of the single-particle phenomena in these nuclei \cite{VGG SU(3)andLSinPF-ShellNuclei}.
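To make the combinatorial growth of the shell-model space concrete, the following short Python sketch (illustrative only, and not part of any of the codes discussed in this work) counts the Slater determinants available to the valence nucleons; the shell sizes are the usual sums of the $2j+1$ degeneracies for the $sd$- and $pf$-shells.

```python
from math import comb

def mscheme_dim(n_sp, n_protons, n_neutrons):
    """Number of many-body basis states (Slater determinants) obtained by
    distributing valence protons and neutrons over n_sp single-particle
    m-states each, before any M_J or J restriction is imposed."""
    return comb(n_sp, n_protons) * comb(n_sp, n_neutrons)

# sd-shell (0d5/2, 1s1/2, 0d3/2): 6 + 2 + 4 = 12 m-states per nucleon type
print(mscheme_dim(12, 4, 4))   # ^24Mg: 4 valence protons + 4 valence neutrons
# pf-shell (0f7/2, 1p3/2, 0f5/2, 1p1/2): 8 + 4 + 6 + 4 = 20 m-states
print(mscheme_dim(20, 4, 4))   # ^48Cr
print(mscheme_dim(20, 8, 8))   # toward mid-shell: combinatorial growth
```

Going from the $sd$-shell to the $pf$-shell, and then toward mid-shell occupancies, the unrestricted determinant count grows by several orders of magnitude, which is the computational problem referred to above.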
Therefore, it seems plausible to consider a hybrid-type calculation that uses these two models. In general, the two bases, the spherical shell-model basis and the $SU(3)$ shell-model basis, will not be orthogonal to each other. Such a calculation can be considered as an ``\textbf{oblique}'' basis shell-model calculation \cite{VGG 24MgObliqueCalculations}. The oblique-basis calculation for nuclei is the subject of the research presented here. Oblique-basis calculations are expected to be of practical value in systems with competing degrees of freedom. For example, our study shows the relevance of the oblique calculation in the case of $^{24}$Mg. For this nucleus, both the single-particle excitations described by the spherical shell model and the collective excitations described by the $SU(3)$ shell model are important. When we combine the two bases, we obtain a significant gain in the convergence of the low-energy spectra towards the full-space result. In particular, the addition of the leading-$SU(3)$ irreducible representations (irreps) yields the right placement of the $K=2$ band and the correct order for most of the low-lying levels. Indeed, an even more detailed analysis shows that the structure of the low-lying states is significantly improved through the addition of a few $SU(3)$ irreps. The oblique-basis calculation would be an unnecessary numerical complication for systems where one of the excitation modes is dominant. For example, in the lower $pf$-shell nuclei $^{44}$Ti and $^{48}$Cr, the spherical shell model captures a significant part of the low-energy wave functions within a few spherical shell-model configurations, while in the $SU(3)$ shell-model basis one would need more than a few $SU(3)$ irreps. This fact is mainly due to the strong breaking of $SU(3)$ symmetry in the lower $pf$-shell induced by the spin-orbit interaction \cite{VGG SU(3)andLSinPF-ShellNuclei}.
In spite of the results in the lower $pf$-shell, it is expected that in the mid-shell region some sort of $SU(3)$ collective structure will gain importance \footnote{It was pointed out by Chairul Bahri that the deformed Nilsson diagram for the $pf$-shell suggests a pseudo-$SU(3)$ symmetry. Another alternative could be a quasi-$SU(3)$ symmetry.}. If this is to happen, then the oblique-basis calculation will be an important alternative for calculating the structure of nuclei such as $^{56}$Fe and $^{56}$Ni. Results of the shell-model calculations for lower $pf$-shell nuclei show that $SU(3)$ symmetry breaking in this region is driven by the single-particle spin-orbit splitting. However, even though states of the yrast band exhibit $SU(3)$ symmetry breaking, the results also show that the yrast-band $B(E2)$ values are insensitive to this fragmentation of the $SU(3)$ symmetry; specifically, the quadrupole collectivity as measured by $B(E2)$ transition strengths between low-lying members of the yrast band remains high even though $SU(3)$ appears to be broken. Results for $^{44,46,48}$Ti and $^{48}$Cr using the Kuo-Brown-3 two-body interaction \cite{KB3 interaction} are given to illustrate these observations. \chapter{The Nuclear Shell Model} \quad In some sense, the shell structure of nuclei is more complicated than the shell structure of atoms. The shell structure of atoms is due to the Coulomb force between the nucleus and the electrons. It may be a nice coincidence, but it is a fact that the Coulomb potential problem in quantum mechanics is an exactly solvable problem \cite{MoshinskyBookOnHO}. In the case of nuclei, the situation is more complicated. The reason is that there is no single source of a central potential. Instead, all nucleons are considered to act together, generating a mean field. Within this mean field, the problem is more tractable \cite{Heyde's-shell model}. Here, we do not consider the problem of how to obtain the mean-field potential.
Instead, we just use some general symmetry properties that a phenomenological potential and a realistic effective interaction should obey. These symmetry properties provide insight about the relevant single-particle basis within which one can consider the problem. \section{Magic Numbers in Nuclei} \quad Maria G. Mayer's discussion of the magic numbers in nuclei clearly demonstrated the nuclear shell structure associated with the independent-particle model for nuclei \cite{Mayer-1948 Magic Numbers}. In this model, each closed-shell configuration provides a convenient first approximation. In this approximation, one can assume that the system under consideration consists of a closed-shell core plus valence particles in a valence shell. This approach very successfully explains the ground-state properties of nuclei \cite{Mayer-1950-I IPNSM}. In order to understand and obtain qualitatively good results for the structure of the excited states, one has to consider configuration mixing in the valence space. This usually leads to a very large model space. Therefore, a further truncation scheme is required. In this chapter, we will discuss the two main approaches used in the nuclear shell model, namely the spherical shell-model truncation scheme and the $SU(3)$ shell-model truncation scheme. \section{The Nuclear Interaction} \quad From a fundamental point of view, the problem of the relevant nucleon-nucleon interaction is very important. However, it is outside the scope of the research presented here. Even when one is provided with a good phenomenological nucleon-nucleon interaction, there is a lot of hard work to be done before one can finally set things up and calculate some experimentally meaningful results. Usually, a Hartree-Fock procedure is employed to reduce the many-particle Schr\"{o}dinger equation to a single-particle Schr\"{o}dinger equation with a self-consistent mean field.
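The oscillator part of the shell structure discussed in the Magic Numbers section above is easy to reproduce by direct counting. The Python sketch below (illustrative only) accumulates the three-dimensional oscillator shell degeneracies $(N+1)(N+2)$; it recovers the lower magic numbers 2, 8, and 20, while the higher observed closures (28, 50, 82, 126) require the spin-orbit interaction.

```python
def oscillator_magic_numbers(n_shells):
    """Cumulative occupancies of the 3D harmonic-oscillator shells.
    Shell N holds (N+1)(N+2) nucleons of one type: orbital degeneracy
    (N+1)(N+2)/2 times 2 for spin."""
    magic, total = [], 0
    for N in range(n_shells):
        total += (N + 1) * (N + 2)
        magic.append(total)
    return magic

print(oscillator_magic_numbers(6))  # [2, 8, 20, 40, 70, 112]
```

The pure oscillator thus predicts closures at 40, 70, and 112 instead of the empirical 28, 50, 82, and 126; splitting each oscillator shell with the spin-orbit term is what repairs the sequence.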
Once the single-particle states and energies are defined, then the $n$-particle configurations are formed using Slater determinants. Finally, a configuration mixing is used to take into account some of the residual interaction. This process may be simplified by using a phenomenological single-particle potential and a realistic interaction with a set of parameters adjusted to fit the experimental data. In this section, we consider a phenomenological interaction that contains some effective one-body and two-body potentials that are obtained from the original two-body nucleon-nucleon interaction: \[ H=\sum_{i=1}^{A}T_{i}+\frac{1}{2}\sum_{i\neq j}^{A}V\left( \left| r_{i}-r_{j}\right| \right) \rightarrow \sum_{s\in \left\{ {valence\quad particles}\right\} }\left( t_{s}+U_{s}\right) +V_{res}. \] $T_{i}$ is the kinetic energy of the $i$-th nucleon, $V\left( \left| r_{i}-r_{j}\right| \right) $ is the two-body nucleon-nucleon interaction, $ t_{s}$ is an effective one-body kinetic energy of the valence particles, $ U_{s}$ is the effective mean-field potential, and $V_{res}$ is the effective residual two-body interaction between the valence particles \cite {Heyde's-shell model}. The effective one-body interaction $H^{1b}=t+U$ provides a set of single-particle states: \[ H^{1b}\phi _{i}\left( x\right) =\left( t+U\right) \phi _{i}\left( x\right) =\varepsilon _{i}\phi _{i}\left( x\right). \] The many-body wave function for a fermion system has to obey the Pauli principle. 
Thus, a fully antisymmetric combination, a Slater determinant, has to be constructed: \[ \Psi \left( \vec{x}_{1},\ldots,\vec{x}_{n}\right) =\det \left| \begin{array}{cccc} \phi _{1}\left( \vec{x}_{1}\right) & \phi _{1}\left( \vec{x}_{2}\right) & \cdots & \phi _{1}\left( \vec{x}_{n}\right) \\ \phi _{2}\left( \vec{x}_{1}\right) & \phi _{2}\left( \vec{x}_{2}\right) & \cdots & \phi _{2}\left( \vec{x}_{n}\right) \\ \vdots & \vdots & \vdots & \vdots \\ \phi _{n}\left( \vec{x}_{1}\right) & \phi _{n}\left( \vec{x}_{2}\right) & \cdots & \phi _{n}\left( \vec{x}_{n}\right) \end{array} \right|. \] Here, the single-particle wave function $\phi _{m}\left( \vec{x}_{s}\right) $ corresponds to the $s$-th particle in the $m$-th single-particle state, with quantum numbers depending on the exact symmetries of the single-particle problem. Usually, these quantum numbers include angular momentum ($j$) and parity ($\pi $). \section{Hamiltonian in Second Quantized Form} \quad Given the single-particle levels, one can simplify the notation by going from the coordinate representation of the single-particle levels to an occupation representation. This process is often called second quantization, since the wave functions are constructed from appropriate creation/annihilation tensor operators acting on a vacuum state: \[ \phi ^{\alpha jm}(x)\rightarrow \left| \alpha jm\right\rangle =a_{\alpha jm}^{+}\left| 0\right\rangle. \] Here, $\alpha $ stands for other quantum numbers, such as harmonic-oscillator shell numbers, spin and isospin labels. The vacuum state $\left| 0\right\rangle $ is a reference state on which everything else is built. The vacuum state $\left| 0\right\rangle $ may have a different meaning depending on the quantum labels of the annihilation operators. The annihilation operators usually define the vacuum as follows: \[ a_{\alpha jm}\left| 0\right\rangle =0.
\] For example, if $a_{\alpha jm}^{+}$ and $a_{\alpha jm}$ represent some real particles, such as fermions, then clearly the vacuum state is a state of no particles at all. If $a_{\alpha jm}^{+}$ and $a_{\alpha jm}$ represent the valence nucleons, then the vacuum state $\left| 0\right\rangle $ would represent the closed-shell core. In the forthcoming chapters, we consider $\left| 0\right\rangle $ to represent closed-shell nuclei. For example, $^{16}$O is the closed-shell nucleus when we study nuclei in the valence $sd$-shell; $^{40}$Ca is the closed-shell nucleus when we study nuclei in the valence $pf$-shell. In this second quantized form, the effective Hamiltonian is: \begin{equation} H=\sum_{i}\varepsilon _{i}a_{i}^{+}a_{i}+\frac{1}{4} \sum_{i,j,k,l}V_{ij,kl}a_{i}^{+}a_{j}^{+}a_{k}a_{l}. \label{H=aa+aaaa} \end{equation} Here, $\varepsilon _{i}$ are single-particle energies derived from the excitation spectrum of a one-valence-particle system, i.e. $^{17}$O in the case of the $sd$-shell. The $V_{ij,kl}$ are two-body matrix elements derived from an initial approximation and improved by fitting data across the range of nuclei under consideration. For example, in the case of the $sd$-shell we would use the $63$ two-body matrix elements obtained by Wildenthal \cite{Wildenthal}. \section{Spherical Shell Model for Nuclei} \quad We have already mentioned the independent-particle model \cite{Mayer-1950-I IPNSM}. This model uses the harmonic-oscillator potential as an effective single-particle potential for nucleons \cite{Mayer-1950-II-IPNSM} plus a spin-orbit interaction that provides for the correct shell closure \cite{Haxel-1949 IPNSM}. In addition, there is a strong pairing part in the two-body interaction. The pairing interaction and the quadrupole-quadrupole interaction \cite{Haxel-1949 IPNSM} are essential parts of the two-body interaction.
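The action of the operators in a Hamiltonian such as (\ref{H=aa+aaaa}) on Slater determinants is commonly implemented with bit strings, one bit per single-particle state. The following Python sketch (a minimal illustration, not the actual shell-model codes discussed in this work) shows the fermionic phase bookkeeping for $a_{i}$, $a_{i}^{+}$, and the one-body product $a_{i}^{+}a_{j}$:

```python
def annihilate(state, i):
    """Apply a_i to a Slater determinant stored as a bit mask.
    Returns (phase, new_state), or None if orbital i is empty."""
    if not (state >> i) & 1:
        return None
    # Fermionic phase: (-1)**(number of occupied orbitals below i)
    phase = (-1) ** bin(state & ((1 << i) - 1)).count("1")
    return phase, state & ~(1 << i)

def create(state, i):
    """Apply a_i^+; returns None if orbital i is occupied (Pauli principle)."""
    if (state >> i) & 1:
        return None
    phase = (-1) ** bin(state & ((1 << i) - 1)).count("1")
    return phase, state | (1 << i)

def one_body(state, i, j):
    """Apply a_i^+ a_j, returning (phase, new_state) or None."""
    r = annihilate(state, j)
    if r is None:
        return None
    p1, s1 = r
    r = create(s1, i)
    if r is None:
        return None
    p2, s2 = r
    return p1 * p2, s2

# Determinant with orbitals 0 and 2 occupied; move a particle from 0 to 1
print(one_body(0b0101, 1, 0))
```

Matrix elements of (\ref{H=aa+aaaa}) between two determinants are then accumulated by looping such operator applications over the stored single-particle indices; this is the sense in which each determinant is a single machine word.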
\subsection{Single-Particle Basis} \quad In computations based on the independent-particle basis, we use a phenomenological Hamiltonian (\ref{H=aa+aaaa}) with single-particle levels labeled by the harmonic-oscillator quantum numbers $nljm$ as follows: \begin{itemize} \item $n$ is the harmonic-oscillator shell, \item $l$ is the angular momentum quantum number, \item $j=l\pm \frac{1}{2}$ is the total angular momentum of the nucleon with spin $1/2,$ \item $m$ is the projection of the total angular momentum $\vec{j}$ along the $z$-axis. \end{itemize} Within the above labeling scheme, the single-particle wave functions in the coordinate representation have the form: \begin{equation} \phi _{nlsjm}\left( x\right) =\left\langle x|nlsjm\right\rangle =\sum_{m_{l}+m_{s}=m}\left\langle lm_{l},sm_{s}|jm\right\rangle R_{nl}\left( r\right) Y_{lm_{l}}\left( \theta,\varphi \right) \chi _{m_{s}}. \label{spherical wave functions} \end{equation} Here, $\left\langle lm_{l},sm_{s}|jm\right\rangle $ stand for the Clebsch-Gordan coefficients of $SU\left( 2\right)$, $R_{nl}\left( r\right) $ are the radial wave functions, $Y_{lm_{l}}\left( \theta,\varphi \right) $ are the spherical harmonics, and $\chi _{m_{s}}$ are the internal spin-$\frac{1}{2}$ wave functions for nucleons. \subsection{Many-Particle Basis} \quad In the occupation number representation, Slater determinant states are constructed from $n_{1},\ldots,n_{k}$ nucleons by means of the fermion particle creation operators $a_{i}^{+}:$ \begin{equation} \left| n_{1}...n_{k}\right\rangle =\prod\limits_{s=1}^{k}\left( a_{s}^{+}\right) ^{n_{s}}\left| 0\right\rangle, \label{SSM-ManyParticleBasis} \end{equation} where the operators $a_{i}^{+}$ and $a_{i}$ obey a Fermi algebra: \begin{eqnarray*} a_{i}^{+}a_{j}^{+}+a_{j}^{+}a_{i}^{+} &=&0, \\ a_{i}a_{j}+a_{j}a_{i} &=&0, \\ a_{i}^{+}a_{j}+a_{j}a_{i}^{+} &=&\delta _{ij}.
\end{eqnarray*} Here, the labels of the operators $a_{i}^{+}$ and $a_{i}$ correspond to some specific quantum labels $nlsjm$ of the spherical single-particle wave functions (\ref{spherical wave functions}). \subsection{Configuration Truncation and the M-scheme Basis} \quad Based on the independent-particle model, one can make an initial approximation to the wave functions of nuclei. This approximation uses the lowest energy configuration $\left[ n_{1},...,n_{k}\right]$, where $n_{i}$ is the number of identical particles placed in the $i$-th orbital subject to the condition $0\leq n_{i}\leq 2j_{i}+1$. The energy of such a configuration is given by the expression $E_{\left[ n_{1},...,n_{k}\right] }=\sum_{i}\varepsilon _{i}n_{i}.$ It is immediately clear that in general there would be some degeneracy. Thus, the proper description of the excitation spectrum would need the two-body part of the interaction to lift this degeneracy. However, even then, using only the few lowest energy configurations is not sufficient to properly describe collective excitations in the mid-shell nuclei. For heavy mid-shell nuclei, one needs to include a significant number of configurations. One way to proceed and include many configurations is to consider many-particle states with good $J$ and $M_{J}$ via $SU(2)$ coupling within each configuration. Codes based on this approach usually rely heavily on $3j$, $6j,$ and higher $SU(2)$ symbols \cite{French's Oak-Ridge Code, NATHAN}. Since these $j$-symbols are calculated repeatedly, an efficient $SU(2)$ package and a smart way to store often-used coefficients are essential. Recently, an $SU(3)$ code using the same strategy has been successfully developed \cite{Bahri-RME}. This code relies on a very efficient data storage technique \cite{Park-WST}. An alternative computational method is the $M$-scheme approach \cite{the M-scheme approach}.
In this approach, instead of using states with good $J$ and $M_{J},$ one uses only states with good $M_{J}$ and lets the Hamiltonian select the states of good $J$. Diagonalizing the Hamiltonian in such a basis yields a few of the lowest energy eigenstates. The $M$-scheme set of states is convenient since $M_{J}$ is an additive quantum number. In order to provide for good total angular momentum ($J$)$,$ one has to include all states of fixed $M_{J}$ within a given configuration. This method relies heavily on large matrix diagonalization algorithms. One such algorithm is the Lanczos algorithm, which is very fast and efficient \cite{Van Loan Cullum-Lanczos}. The Lanczos algorithm is a cornerstone of the modern $M$-scheme shell-model codes \cite{Whitehead-shell model}. To illustrate the spherical shell-model truncation scheme, we consider $^{24}$Mg. For this nucleus, the lowest configuration providing the initial approximation to the ground state is $0s^{4}0p^{12}0d_{5/2}^{8}1s_{1/2}^{0}0d_{3/2}^{0}$. Here, $0s^{4}0p^{12}$ is the core nucleus $^{16}$O; the valence space is $0d_{5/2}$ $1s_{1/2}$ $0d_{3/2}$ with the lowest configuration of $8$ particles, $4$ protons + $4$ neutrons, in the $0d_{5/2}$ level. If we explicitly write down a $jj$-coupled state with good $J$ and $M_{J}$ within the $0d_{5/2}^{8}$ configuration, then we would see that all the states with a fixed total $M_{J}$ within the $0d_{5/2}^{8}$ configuration contribute to this state with good $J$ and $M_{J}.$ Since the Hamiltonian ($H$) respects the rotational symmetry, its eigenvectors must have good $J$ and $M_{J}$ values. Therefore, diagonalizing $H$ in the space of all the states with fixed $M_{J}$ within the $0d_{5/2}^{8}$ configuration will automatically produce eigenstates with different $J$ values and the same $M_{J}$ value. Usually, one has to include many configurations by using some selection principle. Often, the selection scheme uses the energy of the configurations.
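The $J$-content bookkeeping behind the $M$-scheme can be illustrated with a short Python sketch (illustrative only): enumerating the Slater determinants of $n$ identical particles in a single $j$-shell and histogramming the total $M_{J}$, the number of $J$ multiplets follows from $N(J)=N(M_{J}=J)-N(M_{J}=J+1)$. For $(0d_{5/2})^{4}$ this recovers the well-known content $J=0,2,4$.

```python
from itertools import combinations
from collections import Counter

def j_content(twoj, n):
    """J content of n identical fermions in a single j-shell (twoj = 2j).
    Enumerate Slater determinants, histogram total 2M, then use
    N(J) = count(M = J) - count(M = J + 1)."""
    twom_values = range(-twoj, twoj + 1, 2)          # 2m = -2j, ..., +2j
    counts = Counter(sum(c) for c in combinations(twom_values, n))
    content = {}
    twoj_max = max(counts)
    for twoJ in range(twoj_max % 2, twoj_max + 1, 2):
        n_mult = counts[twoJ] - counts.get(twoJ + 2, 0)
        if n_mult:
            content[twoJ / 2] = n_mult
    return content

print(j_content(5, 4))  # (0d5/2)^4: one multiplet each of J = 0, 2, 4
```

This is exactly why a fixed-$M_{J}$ diagonalization suffices: every $J\geq M_{J}$ multiplet contributes one state to the fixed-$M_{J}$ space, so no $J$ value is lost.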
In this scheme, one includes only configurations that are within some range $\Delta E$ relative to the lowest energy configuration. Another selection scheme, which we use for the present study, considers the number of particles excited out of the lowest energy configuration into the full harmonic-oscillator shell. This selection scheme takes into account possible collective pair excitations when applied with two- and four-particle excitations outside of the lowest energy configuration.\footnote{Recently, it has been shown that one can successfully extrapolate some observables, such as energy eigenvalues, quadrupole moments, B(E2) transition strengths and Gamow-Teller transition strengths, using successively bigger truncation spaces. For more details see nucl-th/0203012 by Mizusaki and Imada and nucl-th/0112014 by Zelevinsky and Volya.} \section{The SU(3) Shell Model for Nuclei} \quad If one considers a system near equilibrium, then it is possible to approximate its potential with a harmonic-oscillator potential. Since the symmetry group of the three-dimensional harmonic oscillator is $SU\left( 3\right) $, it is plausible to use $SU(3)$ basis states. In this section we discuss the $SU(3)$ shell model. We begin with a review of Elliott's $SU(3)$ model \cite{Elliott's SU(3) model}. In particular, we present two single-particle labeling schemes, the spherical and the cylindrical. Then, the structure of a general $SU(3)$ irrep in the cylindrical labeling scheme is given. Next, we describe the $SU(3)$ truncation scheme, which is based on $SU(3)$-invariant two-body interactions. We conclude the section with a brief discussion of the $SU(3)$-breaking interactions. \subsection{Labeling of the States in Elliott's SU(3) Model} \quad In this section we review group theoretical concepts that are important to the development of the theory and introduce $SU(3)$ conventions adopted in our discussion.
We consider the physical reduction, $SU(3)\supset SO(3),$ and the canonical group reduction, $SU(3)\supset U(1)\otimes SU(2),$ with their respective labels. First we consider the physical group reduction $SU(3)\supset SO(3)$. This reduction yields a convenient labeling scheme for the generators of $SU(3)$ in terms of $SO(3)$ tensor operators. The commutation relations for these $ SU(3)\supset SO(3)$ tensor operators are given in terms of ordinary $SO(3)$ Clebsch-Gordan coefficients (CGC) $(jm,j^{\prime }m^{\prime }|j^{\prime \prime }m^{\prime \prime })$ \cite{Elliott's SU(3) model}: \begin{eqnarray} \lbrack L_{m},L_{m^{\prime }}] &=&-\sqrt{2}(1m,1m^{\prime }|1m+m^{\prime })L_{m+m^{\prime }}, \nonumber \\ \lbrack Q_{m},L_{m^{\prime }}] &=&-\sqrt{6}(2m,1m^{\prime }|2m+m^{\prime })Q_{m+m^{\prime }}, \label{LQ - Elliott I} \\ \lbrack Q_{m},Q_{m^{\prime }}] &=&3\sqrt{10}(2m,2m^{\prime }|1m+m^{\prime })L_{m+m^{\prime }}. \nonumber \end{eqnarray} Here, $L_{m}$ are generators of the angular momentum and $Q_{m}$ is an algebraic quadrupole operator. Within this reduction scheme, states of an $SU(3)$ irrep $(\lambda,\mu )$ have the following labels: \begin{itemize} \item $(\lambda,\mu )$ -- $SU(3)$ irrep labels, \item $l$ -- total orbital angular momentum, which corresponds to the second order Casimir operator of $SO(3)$, \item $m_{l}$ -- projection of the angular momentum along the laboratory $z$-axis, \item $k$ -- projection of the angular momentum in a body-fixed frame, which is related to multiple occurrences of $SO(3)$ irreps with angular momentum $l$ in the $(\lambda,\mu )$ irrep. \end{itemize} \noindent Unfortunately, this scheme has only one additive label, namely $ m_{l}$, and in addition, there are technical difficulties associated with handling the $k$ label. The labeling scheme for our study is the canonical group reduction, $ SU(3)\supset U(1)\otimes SU(2)$ \cite{VGG-1998 su3 good M}. 
In this scheme $ Q_{0}$ is the $U(1)$ generator and the $SU(2)$ generators are proportional to $L_{0}$, $Q_{+2}$, and $Q_{-2}$ \cite{Elliott's SU(3) model}. Under the action of the generators of these $U(1)$ and $SU(2)$ groups, the remaining four generators of $SU(3)$ transform like two conjugate spin $[\frac{1}{2}]$ $SU(2)$ tensors with $\varepsilon =\pm 3$ values for $Q_{0}$. In this scheme, states of a given $SU(3)$ irrep $(\lambda,\mu )$ have the following labels: \begin{itemize} \item $(\lambda,\mu )$ -- $SU(3)$ irrep labels, \item $\varepsilon $ -- eigenvalue of the quadrupole moment ($Q_{0}$), \item $m_{l}$ -- projection of the orbital angular momentum along the $z$-axis ($L_{0}$), \item $n_{\rho }$ -- related to the second order Casimir operator of $SU(2)$, which for symmetric $(\lambda,0)$ irreps is simply the number of oscillator quanta in the $(x,y)$ plane. \end{itemize} \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = 4.2in \centerline {\includegraphics[width= 4.2in]{3DviewOfSU3}} \end{center} \caption{Three-dimensional view of the $(\lambda,\mu )$ $SU(3)$ irrep.} \label{3D-view of SU(3) irrep} \end{figure} This canonical reduction, $SU(3)\supset U(1)\otimes SU(2)$, has two additive labels, $\varepsilon $ ($Q_{0}$) and $m_{l}$ ($L_{0}$) and the allowed values of these labels for fixed $SU(3)$ irrep $(\lambda,\mu )$ are given by \cite{Hecht}: \begin{eqnarray} \varepsilon &=&2\lambda +\mu -3(p+q) \label{pqm-parametriztion} \\ n_{\rho } &=&\mu +(p-q) \nonumber \\ m_{l} &=&n_{\rho }-2m \nonumber \end{eqnarray} where $0\leq p\leq \lambda $, $0\leq q\leq \mu $, and $0\leq m\leq n_{\rho}$. 
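As a consistency check of the parametrization (\ref{pqm-parametriztion}), the following Python sketch (illustrative only) enumerates the canonical labels $(\varepsilon, n_{\rho}, m_{l})$ of an irrep $(\lambda,\mu)$ and verifies that their number equals the irrep dimension $\frac{1}{2}(\lambda +1)(\mu +1)(\lambda +\mu +2)$:

```python
def canonical_labels(lam, mu):
    """Enumerate the (epsilon, n_rho, m_l) labels of the SU(3) irrep
    (lam, mu) via the (p, q, m) parametrization:
    eps = 2*lam + mu - 3(p+q), n_rho = mu + p - q, m_l = n_rho - 2m."""
    labels = []
    for p in range(lam + 1):
        for q in range(mu + 1):
            eps = 2 * lam + mu - 3 * (p + q)
            n_rho = mu + p - q
            for m in range(n_rho + 1):
                labels.append((eps, n_rho, n_rho - 2 * m))
    return labels

def dim(lam, mu):
    """Dimension of the SU(3) irrep (lam, mu)."""
    return (lam + 1) * (mu + 1) * (lam + mu + 2) // 2

# The label count matches the irrep dimension, e.g. dim(8,4) = 315
for lam, mu in [(1, 0), (1, 1), (8, 4)]:
    assert len(canonical_labels(lam, mu)) == dim(lam, mu)
print(canonical_labels(1, 0))
```

For the fundamental irrep $(1,0)$ the three labels correspond to the three Cartesian oscillator quanta: one state with $\varepsilon =2$ (a $z$ quantum) and two with $\varepsilon =-1$ (quanta in the $(x,y)$ plane).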
\subsection{SU(3) Truncation Scheme} \quad It should be pointed out that the quadrupole operator $Q$ used in (\ref{LQ - Elliott I}) is actually an algebraic quadrupole operator \[ Q_{2\mu }^{a}=\sqrt{\frac{4\pi }{5}}\sum_{i}\left( \frac{r_{i}^{2}}{b^{2}} Y_{2\mu }\left( \hat{r}_{i}\right) +b^{2}p_{i}^{2}Y_{2\mu }\left( \hat{p} _{i}\right) \right), \] with $b^{2}=\frac{\hbar }{m\omega }.$ However, the matrix elements of $Q^{a}$ reduce to the matrix elements of the physical collective quadrupole operator $Q^{c}$ within a single harmonic-oscillator shell: \[ Q_{2\mu }^{c}=\sqrt{\frac{16\pi }{5}}\sum_{i}\frac{r_{i}^{2}}{b^{2}}Y_{2\mu }\left( \hat{r}_{i}\right). \] In general, the operators $Q^{c}$ and $L$ are part of an $Sp\left( 6,R\right)$ Lie algebra. Within the $Sp\left( 6,R\right) $ model, the $Q^{c}$ operators connect neighboring harmonic-oscillator shells of the same parity \cite{Juta Escher's thesis}. The algebraic realization of the $SU(3)$ model has the advantage that one can easily connect the important collective operators with the algebraically significant $SU(3)$ operators. An example of a significant $SU(3)$ operator is the second order Casimir operator of $SU\left( 3\right)$: \begin{equation} C_{2}^{SU(3)}=\frac{1}{4}(3L^{2}+Q\cdot Q). \label{C2su3=LL+QQ} \end{equation} By using the generators of $SU(3)$ as labeled by the physical reduction $SU(3)\supset SO(3),$ we can easily write a general algebraic $SU(3)$ Hamiltonian: \begin{equation} H=H_{osc}+\chi Q\cdot Q+\frac{1}{2\mathcal{J}}L\cdot L+aC_{3}^{SU(3)}+bL\cdot Q\cdot L+c(L\cdot Q)\cdot (Q\cdot L)+dL\cdot S. \label{Hsu3} \end{equation} Here, $H_{osc}$ is the harmonic-oscillator Hamiltonian with single-particle energies $\varepsilon _{n}=\hbar \omega \left( n+\frac{3}{2}\right)$. The strength of the quadrupole-quadrupole interaction is $\chi $. The `bare' classical moment of inertia is $\mathcal{J}$, while the effective moment of inertia depends on $a$, $b$, $c$, and $d$.
The parameter $a$ is related to the third order Casimir operator $C_{3}^{SU(3)}$ of $SU(3).$ The strengths of the other $SO(3)$-invariant interactions, denoted by $b$ and $c,$ multiply third and fourth order products of the $SU(3)$ generators relevant to the multiplicity of the $SO(3)$ irreps within the physical reduction $SU(3)\supset SO(3)$ \cite{Thomas Beuschel's thesis}$.$ If $b$ and $c$ are such that $bL\cdot Q\cdot L+c(L\cdot Q)\cdot (Q\cdot L) \sim \gamma K^{2}+L^{2}$, where $\gamma $ is the strength of the $K$-band splitting, then the collective states in the $SU(3)\supset SO(3)$ chain labeled by $\left| N[f](\lambda \mu )\kappa LSJM_{J}\right\rangle $ would provide a basis in which $H$ is diagonal. The main advantage of using an algebraic Hamiltonian, such as (\ref{Hsu3}), is its $SU(3)$ symmetry. An $SU(3)$-invariant Hamiltonian ($H$) does not connect states from different $SU(3)$ irreps. Since the $Q\cdot Q$ interaction is, up to an $L^{2}$ term, proportional to the $C_{2}$ of $SU(3),$ it can serve as the basis of an $SU(3)$ truncation scheme. This scheme prescribes a $C_{2}$-ordered importance of the $SU\left( 3\right) $ irreps. In this scheme, one selects $SU(3)$ irreps $(\lambda,\mu )$ with $C_{2}=\lambda ^{2}+\mu ^{2}+\lambda \mu +3(\lambda +\mu )$ values close to the biggest possible $C_{2}$ value. The irrep with the biggest possible $C_{2}$ value is called the leading $SU(3)$ irrep. The leading irrep often corresponds to a total spin $S=0$ configuration. In this way, the leading irrep also becomes the dominant irrep for the low-lying energy states, because the strength of the $L\cdot S$ interaction is usually expected to be small. This is due to the strong spin-pairing, which tends to bring $S=0$ lower in energy. However, the one-body part of the $\sum_{i}l_{i}\cdot s_{i}$ interaction can cause significant deviation from the dominance of the leading irrep.
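The $C_{2}$-ordered selection is simple to state algorithmically. In the Python sketch below (illustrative only; the listed irreps are examples, with $(8,4)$ the leading irrep for the $^{24}$Mg valence space), candidate irreps are sorted by their $C_{2}$ eigenvalue and the most collective ones are kept:

```python
def casimir2(lam, mu):
    """Second-order SU(3) Casimir eigenvalue C2(lam, mu)."""
    return lam**2 + mu**2 + lam * mu + 3 * (lam + mu)

def leading_irreps(irreps, n_keep):
    """Keep the n_keep irreps with the largest C2, i.e. the most
    deformed, most collective ones favored by the -Q.Q interaction."""
    return sorted(irreps, key=lambda lm: casimir2(*lm), reverse=True)[:n_keep]

# A few example sd-shell irreps; (8,4) is the leading irrep for ^24Mg
irreps = [(8, 4), (9, 2), (6, 5), (0, 0), (4, 6), (10, 0)]
print(leading_irreps(irreps, 3))
```

With an attractive $-\chi Q\cdot Q$ interaction, the largest-$C_{2}$ irreps are pushed lowest in energy, which is precisely why this ordering defines the truncation.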
Expressing the $SU(3)$ Hamiltonian (\ref{Hsu3}) in second quantized form (\ref{H=aa+aaaa}) gives: \begin{equation} H=\hbar \omega \left( n+\frac{3}{2}\right) \sum_{i}a_{i}^{+}a_{i}+\frac{1}{4} \sum_{i,j,k,l}(\chi \left\langle ij\left| Q\cdot Q\right| kl\right\rangle +...)a_{i}^{+}a_{j}^{+}a_{k}a_{l}. \label{Hsu3N+aaaa} \end{equation} Here, the labels $i$ are shorthand notation for the single-particle labels in the $SU(3)$ shell-model scheme, that is, $i\rightarrow \tau \tau _{0}(\eta,0)\kappa lsjm_{j}$ in the $SU(3)\supset SO(3)$ chain or, respectively, $i\rightarrow \tau \tau _{0}(\eta,0)n_{\rho }\varepsilon m_{l}sm_{s}$ in the $SU(3)\supset SU(2)$ chain. As usual, $\tau =1/2$ is the isospin quantum number with $\tau _{0}=\pm 1/2$ for protons/neutrons respectively, and $(\eta,0)$ is the $SU(3)$ irrep corresponding to a given harmonic-oscillator shell $n$ ($\eta =n$). The remaining labels were discussed in the previous section on the $SU(3)$ shell model. \subsection{Interactions that Break the SU(3) Symmetry} \quad Degenerate single-particle energies are an essential ingredient for good $SU(3)$ symmetry; this is clear from our discussion of the general algebraic $SU(3)$ Hamiltonian (\ref{Hsu3}) and its second quantized form (\ref{Hsu3N+aaaa}). However, as we already discussed, the breaking of the single-particle degeneracy by the spin-orbit interaction is essential for the description of the correct nuclear shell closures in terms of the independent-particle model. Therefore, in the case of a significant single-particle splitting, which is due to the orbit-orbit interaction $\sum_{i}l_{i}^{2}$ and the spin-orbit interaction $\sum_{i}l_{i}\cdot s_{i}$, there would be a significant disturbance of the $SU(3)$ truncation scheme. In this case, the spherical shell model described earlier would work and its truncation scheme could be used. Another $SU(3)$-breaking factor is the pairing interaction.
This interaction is an essential short-range two-body nuclear interaction that can have a significant impact on any $SU(3)$-based calculation as well as on spherical shell-model type calculations. Although we have studied some effects of the pairing interaction in the $sd$-shell as well as in the $pf$-shell, we do not pursue this matter here. We only mention that effects of the pairing in the context of the pseudo-$SU(3)$ model have been studied before by C. Bahri \cite{Chairul Bahri's Thesis}, and currently we are considering incorporating the pairing effects within an oblique-basis type calculation via the broken pair model \cite{Heyde's-shell model}. \chapter{Toy Model of a Two-Mode System} \quad The study of $^{24}$Mg, which will be discussed later in more detail, has successfully demonstrated the oblique-basis concept \cite{VGG 24MgObliqueCalculations}. The quality of the results for $^{24}$Mg is due to the near equal importance of the two basis sets used. On the one hand, the spherical shell-model basis is well suited to the description of the single-particle excitations; on the other hand, the $SU(3)$ shell model puts an emphasis on the collective excitations in nuclei. These two modes are crucial for the $^{24}$Mg example. In general, determining the relevant excitations is a cornerstone in the study of any system; in some sense this is the art of physics. Usually one basis works well for one system, but fails for another system. The reason is that in any general method, such as the variational method, perturbation theory, or fixed-basis matrix diagonalization, one needs to start with a good guess about the Hamiltonian and the states that describe the relevant excitation modes \cite{Skyrme-1957 CinQM}. When applying perturbation theory, one is often concerned with a small perturbation of an exactly solvable limit of the full Hamiltonian \cite{Fernandez-2000,Arteca-1990}.
However, there are many examples when the relevant Hamiltonian has more than one exactly solvable limit \cite{Rau-1987,Rau-2002}. This is a common situation when a dynamical symmetry group is used in the construction of the Hamiltonian \cite{Iachello-1987}, \cite{Arima and Iachello}, \cite{Cheng-Li Wu et al}. \textit{What shall we do if the system described by such a Hamiltonian is nowhere near any of the exact limits?} In these situations, the problem may be better approached by using states associated with both limits. This set of states forms an oblique, or mixed-mode, basis for the calculation. Taking into account the importance of the relevant energy scale of a problem and the wave function localization with respect to the range of the potential, the oblique-basis method can be taken beyond the idea of using two orthonormal basis sets. Specifically, one can consider a variationally-improved basis set starting with some initially guessed basis states. In the occupation number representation for the nuclear shell model, this variationally-improved basis method seems inapplicable.\footnote{In the occupation-number representation one assumes a fixed single-particle structure and then expands the states in the Slater determinants provided by this basis. From this point of view, there is no room for variationally-improved basis states, since each Slater determinant is stored as a single-integer machine word.} However, the method seems interesting because of its possible relevance to multi-shell \textit{ab-initio} nuclear and atomic physics calculations. The method may also be related to some renormalization-type techniques. Therefore, a brief discussion of the variationally-improved basis and its possible applications is given in the Appendix. In this chapter some relevant mathematical notation and concepts used in the oblique-basis method are introduced.
Specifically, we demonstrate the concept of the oblique basis on a simple two-mode system, the one-dimensional harmonic oscillator in a box. First, we discuss the concept and then the two exactly solvable limits of our toy model are briefly summarized. A qualitative discussion of the expected spectrum of the one-dimensional harmonic oscillator in a box is given. This is followed by an example spectrum and quantitative estimates. Some specific problems related to the structure of the Hilbert space will be addressed. Finally, the main results will be discussed, especially a quasi-perturbative behavior and a coherent structure within the strong mixing region. \section{Harmonic Oscillator in a One-Dimensional Box} \quad Let us start with an abstract two-mode system. For simplicity, we assume that the Hamiltonian for the system under investigation has two exactly solvable limits, for example: \begin{equation} H=(1-\lambda )H_{0}+\lambda H_{1}+\lambda (1-\lambda )H_{2}. \label{2-mode system H} \end{equation} Here, $H_{0}$ and $H_{1}$ are two exactly solvable Hamiltonians. This way, we have $H$ $\rightarrow H_{0}$ in the limit $\lambda \rightarrow 0$ and $H$ $\rightarrow H_{1}$ when $\lambda \rightarrow 1$. In the vicinity of these two limits we can approach the problem using standard perturbation theory. However, for $\lambda \approx \frac{1}{2}$ we have a very mixed system with unclear behavior which could be complicated further by an interaction $H_{2}$ between the natural modes of $H_{0}$ and $H_{1}.$ In the expression (\ref{2-mode system H}), $\lambda $ is introduced to simplify the discussion. In general, we have more than one parameter in the Hamiltonian. Often the exactly solvable limits are described as hypersurfaces in the full parameter space. It could even be that there are three or more exactly solvable limits. For example, the Interacting Boson Model (IBM) has three exactly solvable limits \cite{MoshinskyBookOnHO}. 
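The generic behavior of the interpolating Hamiltonian (\ref{2-mode system H}) can be illustrated with a minimal numerical sketch. The matrices below are arbitrary illustrative choices, not tied to any physical system; the point is only that the spectrum of $H(\lambda )$ interpolates between the two exactly solvable limits as $\lambda $ runs from $0$ to $1$:

```python
import numpy as np

# Sketch of the interpolating Hamiltonian of the text,
#   H(l) = (1-l) H0 + l H1 + l(1-l) H2,
# with arbitrary illustrative 3x3 symmetric matrices (not a physical model).
H0 = np.diag([0.0, 1.0, 2.0])                 # first exactly solvable limit
H1 = np.array([[1.0, 0.5, 0.0],
               [0.5, 1.0, 0.5],
               [0.0, 0.5, 1.0]])              # second exactly solvable limit
H2 = 0.3 * np.ones((3, 3))                    # coupling between the two modes

def H(lam):
    """Full Hamiltonian; reduces to H0 at lam = 0 and to H1 at lam = 1."""
    return (1.0 - lam) * H0 + lam * H1 + lam * (1.0 - lam) * H2

for lam in (0.0, 0.5, 1.0):
    print(f"lambda = {lam}:", np.round(np.linalg.eigvalsh(H(lam)), 4))
```

At $\lambda =0$ and $\lambda =1$ the exact limiting spectra are recovered; near $\lambda \approx 1/2$ the spectrum belongs to neither limit, which is precisely the regime the oblique basis targets.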
Another example with three exactly solvable limits is the commonly used nuclear schematic interaction. It has nondegenerate single-particle energies ($\varepsilon _{i}$), a pairing ($P^{+}P$) two-body interaction, and a quadrupole-quadrupole ($Q\cdot Q$) two-body interaction: \[ H=\varepsilon _{i}N_{i}+GP^{+}P-\chi Q\cdot Q. \] Here, we consider the simplest two-mode system that is sufficiently close to the problem we have to solve for nuclei. The system under consideration consists of a one-dimensional harmonic oscillator in a one-dimensional box of size $2L$ \cite{Armen and Rau}: \begin{equation} H=\frac{1}{2m}p^{2}+V_{L}(q)+\frac{m\omega ^{2}}{2}q^{2}, \label{H-ho-1Dbox} \end{equation} where $V_{L}(q)$ is the confining potential, which is zero for $\left| q\right| <L$ and $\infty $ for $\left| q\right| \geq L$. This system has two exactly solvable limits. A more realistic model might consist of a three-dimensional harmonic oscillator and a square-well potential since these two potentials are known to be good starting points in the nuclear shell model \cite{Heyde-1994}. The one-dimensional harmonic oscillator in a one-dimensional box has been used as an example by Barton, Bray, and McKane in their discussion of the effects of distant boundaries on the energy levels of a one-dimensional quantum system \cite{Barton-Bray-Mckane-1990}. Also, some studies have already been done for the cylindrically symmetric system of a three-dimensional harmonic oscillator between two impenetrable walls \cite {Marin and Cruz-1988}. However, the bi-modal structure of these problems has not been discussed in those studies. The essential two-mode regime of such problems has been studied in the context of a two-dimensional confinement of a particle in an external magnetic field by Rosas \textit{et al.} \cite{Rosas et al-2000}. 
Some authors have generalized the one-dimensional harmonic oscillator by introducing time dependent parameters in the Hamiltonian \cite {Lejarreta-1999} and have recognized the two limiting cases of a free particle and harmonic oscillator. The infinite square well and the harmonic oscillator have been considered as the two limiting cases of a power-law potential within the context of wave packet collapses and revivals \cite {Robinett-2000 AJP,Robinett-2000 JMP}. Here, we focus our study on the bi-modal structure of the one-dimensional harmonic oscillator in a one-dimensional box. The first limit of the toy model (\ref{H-ho-1Dbox}) is $\omega =0.$ This is a free particle in a one-dimensional box with size $2L:$ \begin{equation} H_{0}=\frac{1}{2m}p^{2}+V_{L}(q). \label{1D box Hamiltonian} \end{equation} The eigenvectors and energies are labeled by $n=0,1,...$ and are given by the expressions: \begin{eqnarray} \Phi _{n}(q) &=&\left\{ \begin{tabular}{lll} $\sqrt{\frac{1}{L}}\cos \left( (n+1)\frac{\pi }{2}\frac{q}{L}\right) $ & if & n is even \\ $\sqrt{\frac{1}{L}}\sin \left( (n+1)\frac{\pi }{2}\frac{q}{L}\right) $ & if & n is odd \end{tabular} \right. , \label{1D box FW and En} \\ E_{n} &=&\frac{1}{2m}\left( (n+1)\frac{\pi }{2}\right) ^{2}\left( \frac{ \hbar }{L}\right) ^{2}. \nonumber \end{eqnarray} This limit corresponds to extreme nuclear matter when the short range nuclear force produces an effective interaction well represented by a square-well potential \cite{Heyde-1994}. We can think of this limit as a one-dimensional equivalent of a three-dimensional model where nucleons are confined within a finite volume of space representing the nucleus. The other exactly solvable limit of the toy model (\ref{H-ho-1Dbox}) is the harmonic oscillator in one dimension: \begin{equation} H_{1}=\frac{1}{2m}p^{2}+\frac{m\omega ^{2}}{2}q^{2}. 
\label{Harmonic oscillator Hamiltonian} \end{equation} In dimensionless coordinates \[ q\rightarrow \tilde{q}\sqrt{\frac{\hbar }{m\omega }},\quad p\rightarrow \tilde{p}\sqrt{m\hbar \omega }, \] we have: \[ H_{1}=\hbar \omega \frac{1}{2}\left( \tilde{p}^{2}+\tilde{q}^{2}\right) . \] Thus the eigenvectors and energies are labeled by $n=0,1,...$ and are given by the expressions: \begin{eqnarray} \Psi _{n}(q) &=&\sqrt{\frac{1}{bn!2^{n}\sqrt{\pi }}}H_{n}\left( \frac{q}{b} \right) \exp \left( -\frac{1}{2}\frac{q^{2}}{b^{2}}\right),\quad b=\sqrt{ \frac{\hbar }{m\omega }} \label{Harmonic oscillator WF and En} \\ E_{n} &=&\hbar \omega \left( n+\frac{1}{2}\right) , \nonumber \end{eqnarray} where $H_{n}$ are the Hermite polynomials. This limit corresponds to the three-dimensional harmonic oscillator model for nuclei. In a one-dimensional toy model, the anharmonic oscillator with a quartic anharmonicity would be the appropriate counterpart of the $Sp(6,R)$ shell model since the quadrupole-quadrupole interaction $Q\cdot Q$ goes as $\sim r^{4}$ and $Q$ connects same-parity harmonic oscillator shells. If we restrict the model space to only one harmonic oscillator shell, then we can use the algebraic quadrupole moment $\tilde{Q}$ of Elliott \cite{Elliott's SU(3) model} because within a single shell $\tilde{Q}$ is the same as $Q$ \cite{MoshinskyBookOnHO}. Thus for our study it is appropriate to consider the one-dimensional harmonic oscillator to correspond to the $SU\left( 3\right) $ shell model for nuclei. \section{Spectral Structure at Different Energy Scales} \quad Often in physics the spectrum of a system is different for different energy scales. This usually reflects the existence of different excitation modes of the system. 
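The two limiting spectra (\ref{1D box FW and En}) and (\ref{Harmonic oscillator WF and En}) are easy to tabulate; for the parameter choice $m=\hbar =1$, $L=\pi /2$, $\omega =4$ used repeatedly below, a short script makes the $n^{2}$ versus $n$ growth explicit:

```python
import numpy as np

# Exact spectra of the two limits for m = hbar = 1, L = pi/2, omega = 4.
m, hbar, L, omega = 1.0, 1.0, np.pi / 2, 4.0
n = np.arange(6)

E_box = ((n + 1) * np.pi / 2) ** 2 * (hbar / L) ** 2 / (2 * m)  # grows as n^2
E_ho = hbar * omega * (n + 0.5)                                 # grows as n

print("1D box:", np.round(E_box, 6))   # 0.5, 2.0, 4.5, 8.0, 12.5, 18.0
print("HO    :", np.round(E_ho, 6))    # 2.0, 6.0, 10.0, ... equidistant
```

For this parameter choice the box levels are simply $E_{n}=(n+1)^{2}/2$, so the quadratic versus linear growth of the two limits is immediate.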
For the toy model Hamiltonian (\ref{H-ho-1Dbox}) we can clearly define three spectral types: \begin{itemize} \item Spectrum of a particle in a one-dimensional box (\ref{1D box FW and En}) with quadratic dependence on $n$ ($E_{n}$ $\sim $ $n^{2}$), \item Spectrum of the one-dimensional harmonic oscillator (\ref{Harmonic oscillator WF and En}) with linear dependence on $n$ ($E_{n}\sim $ $n$), \item Intermediate spectrum that is neither of the above two types. \end{itemize} \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{1D+HO_potential}} \end{center} \caption{Two--mode toy system. The structure of the interaction potential of a particle in a one-dimensional box subject to a harmonic oscillator restoring force towards the center of the box.} \label{1D+HO-potental} \end{figure} From Fig. \ref{1D+HO-potental} we expect that the particle in a box spectrum should be operative at high energies. These are energies at which the box boundaries dominate over the harmonic oscillator potential. In this regime one can use standard perturbation theory to calculate the energy for a particle in a box perturbed by a harmonic oscillator potential. It can be shown that perturbation theory gives better results for higher energy levels. For $n\rightarrow \infty $ the first correction ($\delta E_{n}^{1}$) approaches the constant value of $m\omega ^{2}L^{2}/6.$ An estimate of when perturbation calculations become feasible, using $E_{n+1}^{0}-E_{n}^{0}>>\left\langle n\left| V\right| n\right\rangle $, gives: \begin{equation} n>>2m^{2}\omega ^{2}L^{4}/(3\hbar ^{2}\pi ^{2}). \label{1D box spectrum begins} \end{equation} This analysis is confirmed by the numerical calculations shown in Fig. 
\ref {w4SpectralStructure} where the perturbed particle in a box spectrum is indeed operative for $n>3$ in the case of $m=\hbar =2L/\pi =1$ and $\omega =4.$ \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{w4Spectrum}} \end{center} \caption{Spectral structure of the two--mode system for $m=\hbar =2L/\pi =1$ and $\omega =4.$} \label{w4SpectralStructure} \end{figure} The intermediate spectrum should be observed when the harmonic oscillator turning points coincide with the walls of the box. Therefore, the critical energy scale that separates the two extreme spectral structures is given by: \begin{equation} E_{c}=\frac{m\omega ^{2}}{2}L^{2}. \label{Ec for 1D box and HO } \end{equation} Notice that the constant energy shift $m\omega ^{2}L^{2}/6$ in the energy of the high energy levels $\delta E_{n>>1}^{1}$ is one-third of the critical energy ($E_{c}/3$). At low energies, where the one-dimensional harmonic oscillator determines the classical turning points to be far from the boundaries, we expect to see the harmonic oscillator spectrum as shown in Fig. \ref{w4SpectralStructure}. The number of harmonic oscillator states that will be observed is easily estimated using: \begin{equation} E_{c}>E_{n}^{ho}\Rightarrow n_{\max }^{ho}=\frac{1}{2}\frac{m\omega L^{2}}{ \hbar }-\frac{1}{2}. \label{HO spectrum ends} \end{equation} It should be pointed out that there is a comparable number of levels, usually larger than $n_{\max }^{ho}$, below the $E_{c}$ corresponding to a free particle in a box: \begin{equation} E_{c}>E_{n}^{1D}\Rightarrow n_{\max }^{1D}=\frac{2}{\pi }\frac{m\omega L^{2} }{\hbar }-1. \label{1Dbox spectrum ends} \end{equation} However, these states are mixed by the harmonic oscillator potential toward the corresponding harmonic oscillator wave functions. 
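These estimates are easy to verify by direct numerical diagonalization. The sketch below (a grid-based finite-difference discretization; the grid size is an arbitrary choice) imposes the box walls as Dirichlet boundary conditions and reproduces the equidistant low-lying levels together with the scales $E_{c}$ and $n_{\max }^{ho}$ for $m=\hbar =1$, $L=\pi /2$, $\omega =4$:

```python
import numpy as np

# Finite-difference diagonalization of H = p^2/2m + V_L(q) + m w^2 q^2/2.
# The walls of the box enter through the Dirichlet condition psi(+-L) = 0.
m, hbar, omega, L = 1.0, 1.0, 4.0, np.pi / 2
N = 800                                       # number of interior grid points
q = np.linspace(-L, L, N + 2)[1:-1]
dx = q[1] - q[0]

kinetic = hbar**2 / (2 * m * dx**2) * (
    2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))
H = kinetic + np.diag(0.5 * m * omega**2 * q**2)
E = np.linalg.eigvalsh(H)

E_c = 0.5 * m * omega**2 * L**2               # critical energy scale
n_max_ho = 0.5 * m * omega * L**2 / hbar - 0.5
print("lowest levels:", np.round(E[:5], 3))   # first few are HO-like (spacing ~ 4)
print("E_c = %.3f, n_max^ho = %.4f" % (E_c, n_max_ho))
```

The lowest levels come out equidistant with spacing $\hbar \omega $, while well above $E_{c}$ the levels approach the box spectrum shifted by roughly $E_{c}/3$, in line with the discussion above.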
Using the ratio of the ground state energies, $E_{g.s.}^{HO}/E_{g.s.}^{1D}=4m\omega L^{2}/(\hbar \pi ^{2})$, together with (\ref{HO spectrum ends}) and (\ref{1Dbox spectrum ends}), the following spectral situations apply: \begin{itemize} \item For $\frac{m\omega L^{2}}{\hbar }>\left( \frac{\pi }{2}\right) ^{2}$ there are levels below $E_{c}$ corresponding to the harmonic oscillator and the free particle in a box such that $E_{g.s.}^{HO}>E_{g.s.}^{1D}.$ However, only the harmonic oscillator levels are seen in the low energy spectrum. \item For $\left( \frac{\pi }{2}\right) ^{2}>\frac{m\omega L^{2}}{\hbar }> \frac{\pi }{2}$ there are only the ground states $E_{g.s.}^{1D}$ and $E_{g.s.}^{HO}$ below $E_{c}$ and $E_{g.s.}^{1D}>E_{g.s.}^{HO}.$ \item For $\frac{\pi }{2}>\frac{m\omega L^{2}}{\hbar }>1$ there is only the ground state of the harmonic oscillator $E_{g.s.}^{HO}$ below $E_{c}$. \end{itemize} Therefore, the case with the smallest number of states\footnote{For simplicity we usually fix the parameters as follows: $m=\hbar =1$, $L=\pi /2$.} that still illustrates the two-mode spectra is $m=\hbar =1$, $L=\pi /2$, and $\omega =4.$ With these parameters, formula (\ref{HO spectrum ends}) gives $n_{\max }^{HO}=4.4348.$ Thus one should see no more than $4$ equidistant states as shown in Fig. \ref{w4SpectralStructure}. In Fig. \ref {w4SpectralStructure} there are three clear equidistant energy levels that correspond to a harmonic oscillator spectrum. With respect to the critical energy $E_{c},$ there is a more explicit classification of the spectral structure: \begin{itemize} \item Perturbed particle in a one-dimensional box spectrum for energies $E>>E_{c}$ such that (\ref{1D box spectrum begins}) holds, \item One-dimensional harmonic oscillator spectrum (\ref{Harmonic oscillator WF and En}) for energies $E_{c}>>E$ such that (\ref{HO spectrum ends}) holds, \item Intermediate spectrum for energies $E\approx E_{c}$. 
\end{itemize} \section{Toy Model Calculations and Results} \quad Despite the simplicity of the toy model (\ref{H-ho-1Dbox}), the harmonic oscillator in a box exhibits some of the essential characteristics of a more complex system. Our main interest is in problems associated with the use of fixed-basis calculations. In particular, one such problem is the slow convergence of the calculations \cite{Armen and Rau}. If one can implement exact arithmetic, one need not worry too much about the slow convergence when enough time, storage, and other resources are provided. However, numerical calculations are plagued with numerical errors that may grow significantly and render the results meaningless. From this point of view, a calculation that converges slowly may be compromised by accumulated numerical error. \subsection{On the Hilbert Space of the Basis Wave Functions} \quad Before discussing the toy model using an oblique basis, it is instructive to discuss briefly the harmonic oscillator problem (\ref{Harmonic oscillator Hamiltonian}) using the wave functions for a free particle in a one-dimensional box (\ref{1D box FW and En}); and vice versa, solving the problem of a free particle in a one-dimensional box (\ref{1D box Hamiltonian}) using the wave functions for a particle in the harmonic oscillator potential (\ref{Harmonic oscillator WF and En}). Due to the structure of the wave functions, there are some specific problems that need to be addressed. For example, using wave functions for a free particle in a one-dimensional box to solve the harmonic oscillator problem may not be appropriate, especially for high-energy states $E>>$ $E_{c}$. The problem is that any linear combination of wave functions with the same localized support (in our case, the wave functions are localized within the box) will still be a function with the same localized support (see Fig. \ref{wf-spread}). 
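This support argument can be made quantitative. The sketch below (parameters as in the text; the grid is an arbitrary choice) integrates $\left| \Psi _{n}\right| ^{2}$ for a high harmonic-oscillator state and shows that most of its norm lies outside $[-L,L]$, where every box-basis function vanishes identically:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

# Fraction of the norm of a high HO state inside the box [-L, L].
# Any finite combination of box states vanishes outside the box, so this
# missing norm can never be represented in the box basis.
m, hbar, omega, L = 1.0, 1.0, 4.0, np.pi / 2
b = math.sqrt(hbar / (m * omega))             # oscillator length

def psi(n, q):
    """Normalized 1D harmonic-oscillator eigenfunction."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = 1.0 / math.sqrt(b * math.factorial(n) * 2.0**n * math.sqrt(math.pi))
    return norm * hermval(q / b, c) * np.exp(-q**2 / (2 * b**2))

n = 20
q = np.linspace(-6.0, 6.0, 4001)
dq = q[1] - q[0]
rho = psi(n, q) ** 2
total = rho.sum() * dq                        # normalization check (~1)
inside = rho[np.abs(q) <= L].sum() * dq       # norm captured inside the box
print(f"norm = {total:.4f}, fraction inside the box = {inside / total:.2f}")
```

For $n=20$ only about a third of the norm survives inside the box, close to the classical estimate $(2/\pi )\arcsin (L/q_{t})$ with $q_{t}$ the turning point; the remainder is simply invisible to the box basis.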
That is, any linear combination of wave functions that are zero outside of the box is a function that is zero outside of the box too. Because the harmonic oscillator potential gets wider for higher and higher energies, each higher-energy wave function must spread more than the previous one. Similarly, the spreading of the harmonic oscillator wave functions is responsible for the troubles that arise in solving the problem of a free particle in a one-dimensional box using the harmonic oscillator wave functions. The essence of these problems is in the structure of the corresponding Hilbert spaces. \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{wfspread}} \end{center} \caption{Spreading of the wave functions for the harmonic oscillator (blue) and particle in a box (red).} \label{wf-spread} \end{figure} The influence of the boundary conditions on the properties of a quantum mechanical system has been recognized since the dawn of quantum mechanics. It is well known that some separable problems may re-couple due to the boundary conditions \cite{Tanner-1991}. Some recent studies on the problem of confined one-dimensional systems using equations for relevant cut-off functions have been pioneered by Barton, Bray, and McKane \cite {Barton-Bray-Mckane-1990}. Their method has been further developed in a more general setting by Berman \cite{Berman-1991}. Other authors aim at variational procedures using simple cut-off functions \cite{Marin and Cruz-1991 AJP, Marin and Cruz-1991 JPB} or derive asymptotic estimates for multi-particle systems using the Kirkwood-Buckingham variational method \cite {Pupyshev and Scherbinin-1999}. Somewhat different approaches focus on shape-invariant potentials and use supersymmetric partner potentials to derive energy shifts and wave function approximations \cite {Dutt-Mukherjee-Varshni-1995}, as well as sample-size dependence of the ground-state energy \cite{Monthus et al. 
-1996}. In the next few paragraphs we discuss the structure of the relevant Hilbert spaces when confinement is present. \subsubsection{$\bullet$ Harmonic Oscillator in the One-Dimensional Box Basis} \quad Now we consider the harmonic oscillator problem (\ref{Harmonic oscillator Hamiltonian}) using the wave functions for a free particle in a one-dimensional box (\ref{1D box FW and En}). There are no difficulties for energies $E<<$ $E_{c}$ (\ref{Ec for 1D box and HO }) where the harmonic oscillator potential is still within the box. However, for energies $E>>$ $E_{c}$ the basis wave functions are localized only on the interval $[-L,L]$. Thus they cannot provide the necessary spread over the potential width (Fig. \ref{wf-spread}). This situation would be appropriate for the toy model (\ref {H-ho-1Dbox}) but not for the pure harmonic oscillator problem (\ref {Harmonic oscillator Hamiltonian}). One simple solution of the spreading problem is to continue the basis wave functions by periodicity. This way the necessary spread of the basis wave functions can be achieved and the new basis will stay orthogonal but must be re-normalized.\footnote{If one continues the wave functions to infinity, then there is a normalization problem. However, if the continuation is on a finite interval, then the functions can still be normalized.} However, these basis wave functions do not decay to zero in the classically forbidden zone. This means that some significant number of basis wave functions will be needed to account for the necessary behavior within the classically forbidden zone. Another alternative is to change the support domain corresponding to non-zero values of the function by stretching or squeezing it through a scaling of the argument of the basis wave functions, $x\rightarrow x\alpha _{n}/L$. This way the support becomes $[-L,L]$ $\rightarrow $ $[-\alpha _{n},\alpha _{n}]$. 
Here, $\alpha _{n}$ is a scale factor for the $n$-th basis wave function (\ref{1D box FW and En}) estimated either from the width of the harmonic oscillator potential\footnote{The initial idea is to use basis states that have a spread compatible with the width of the potential in the energy region of interest, thus resolving the spectra only within that energy scale without calculating the lower energy states. Unfortunately, it does not seem to work since interference causes reduction of the wave spread and therefore drives the solutions towards the lowest eigenstate.}, or determined by variational minimization. Either way, the new set of basis functions will be non-orthogonal. In general, there may even be a linear dependence. However, for the basis functions discussed here, linear dependence should not appear, due to the different number of nodes of each wave function. The number of nodes (zeros) is not changed under the re-scaling procedure. While the potential width scaling is simpler, its applicability is more limited than that of the variationally determined scaling. In general, the variational approach can be extended to much more general situations as discussed in the Appendix. \subsubsection{$\bullet$ Particle in a Box in the Harmonic-Oscillator Basis} \quad Next, suppose we want to solve the problem of a free particle in a one-dimensional box $[-L,L]$ (\ref{1D box Hamiltonian}) using the harmonic oscillator wave functions (\ref{Harmonic oscillator WF and En}). The first thing to do is to change the inner product of the wave functions: $\left( f,g\right) =\int_{-\infty }^{\infty }f^{*}\left( x\right) g\left( x\right) dx\rightarrow \int_{-L}^{L}f^{*}\left( x\right) g\left( x\right) dx. 
$ Then, it is immediately clear that the set of orthonormal harmonic oscillator wave functions $\Psi _{n}(q)$ (\ref{Harmonic oscillator WF and En}) will lose its orthonormality and even its linear independence.\footnote{The set of functions $\Psi _{n}\left( q\right)$ with support domain restricted to $[-L,L]$ and denoted by $\Psi _{n}(q;[-L,L])$ may become linearly dependent if $L$ is so small that there are more than one $\Psi _{n}(q;[-L,L])$ with the same number of nodes within $[-L,L]$.} However, this is not the actual trouble in such an approach.\footnote{The oblique basis type calculations described later can successfully remove the linearly dependent basis states in the process of handling the non-orthogonality of the basis.} Neither the variational nor the potential-width wave function scaling will help to cure the loss of hermiticity of the physically significant differential operators, such as the momentum operator ($p=-i\hbar \frac{\partial }{\partial x}$) and the Hamiltonian operator ($H=\frac{1}{2m}p^{2}$). This non-hermiticity is due to the behavior of the basis states at the boundary, mainly the non-vanishing of the wave functions at $-L$ and $L.$ For detailed analysis on the loss of hermiticity, we refer the reader to the Appendix. 
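The boundary term responsible for the loss of hermiticity is easy to exhibit numerically. The sketch below works in dimensionless oscillator units ($m=\hbar =\omega =1$, so that $\Psi _{n}''=(x^{2}-2n-1)\Psi _{n}$ holds exactly on the full line) and compares the two matrix elements of the kinetic operator $T=-\frac{1}{2}\frac{d^{2}}{dx^{2}}$ between $\Psi _{0}$ and $\Psi _{2}$ over the truncated interval $[-L,L]$:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

# Non-hermiticity of T = -(1/2) d^2/dx^2 on [-L, L] in a basis whose
# functions do not vanish at the walls (dimensionless HO units, where
# psi_n'' = (x^2 - 2n - 1) psi_n is exact).
L = np.pi / 2
x = np.linspace(-L, L, 20001)
dx = x[1] - x[0]

def psi(n):
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = 1.0 / math.sqrt(math.factorial(n) * 2.0**n * math.sqrt(math.pi))
    return norm * hermval(x, c) * np.exp(-x**2 / 2)

p0, p2 = psi(0), psi(2)
T02 = (p0 * (-0.5) * (x**2 - 5.0) * p2).sum() * dx  # <psi0 | T psi2>
T20 = (p2 * (-0.5) * (x**2 - 1.0) * p0).sum() * dx  # <T psi0 | psi2>
print(f"T02 = {T02:.4f}  T20 = {T20:.4f}  difference = {T02 - T20:.4f}")
```

On the full line the two numbers would coincide; on $[-L,L]$ their difference equals $-\frac{1}{2}\left[ \Psi _{0}\Psi _{2}'-\Psi _{0}'\Psi _{2}\right] _{-L}^{L}$, a surface term that vanishes only if the basis functions (or suitable combinations of them) vanish at the walls.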
In order to recover the hermiticity of the differential operator $i\frac{\partial }{ \partial x}$, it is sufficient\footnote{Equal values of the wave functions at $\pm L$ is the necessary condition; the wave functions need to vanish only for an infinite potential at $\pm L$.} to make sure that our basis wave functions vanish at the boundary points $-L$ and $L.$ For this purpose one can look at the nodes of each basis wave function and scale it so that its outer nodes are at the boundary points.\footnote{From the nodal structure of the harmonic oscillator wave functions, given by the Hermite polynomials, it is clear that the first two wave functions ($\Psi _{0}$ and $\Psi _{1}$) cannot be used since they have fewer than two nodes.} Since the physical requirement that the wave functions have to be zero at the boundary is the cornerstone in quantizing the free particle in a one-dimensional box (\ref{1D box FW and En}), it is not surprising that the nodally adjusted harmonic oscillator wave functions are very close to the exact wave functions for the free particle in a one-dimensional box as shown in Fig. \ref{regularized-fw}. \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{regularized-fw}} \end{center} \caption{Harmonic-oscillator trial wave functions adjusted with respect to the one-dimensional box problem: (a) adjusted according to the potential width $E_{n}^{1Dbox}=\omega _{n}^{2}L^{2}/2\Rightarrow \omega _{n}=\frac{ \hbar }{L^{2}}\left( 1+2n\right) $, (b) nodally adjusted, (c) boundary adjusted using $\Psi (q)\rightarrow \Psi (q)-\Psi \left( L\right) (1+q/L)/2-\Psi \left( -L\right) (1-q/L)/2$.} \label{regularized-fw} \end{figure} In general, calculating the nodes of a function may become very complicated. To avoid problems with finding the roots, one can use the following technique\footnote{ This technique has been suggested by Professor A. R. P. 
Rau (private communications).}: the idea is to evaluate the value of the wave function at the boundary points, then shift the wave function by a constant to get zeros at the boundary, $\Psi (q)\rightarrow \Psi (q)-\Psi \left( L\right) $. This idea works well for even wave functions, but has to be generalized for odd wave functions by adding a linear term, $\Psi (q)\rightarrow \Psi (q)-(\Psi\left( L\right) /L)q$. Thus for a general function we can have: $\Psi (q)\rightarrow \Psi (q)-(1+q/L)\Psi \left( L\right) /2-(1-q/L)\Psi \left( -L\right) /2$. In Fig. \ref{regularized-fw} we have shown some of the resulting wave functions. Notice that this procedure gives a new wave function $\Psi $ that is well behaved inside the interval $[-L,L]$ and grows linearly with $q$ outside the interval $[-L,L]$. This is in contrast to the behavior of the cut-off function $f(q)$ obtained by Barton et al \cite {Barton-Bray-Mckane-1990}. The function $f(q)$ has $L/q$ singularity at the origin ($q=0$). The use of a cut-off function to enforce boundary conditions has been developed by Barton et al \cite {Barton-Bray-Mckane-1990} and Berman \cite{Berman-1991} and provides an interesting integral equation for the cut-off function. On the other hand, a simple cut-off function supplemented by a variational method seems to be very effective \cite{Marin and Cruz-1991 AJP,Marin and Cruz-1991 JPB,Pupyshev and Scherbinin-1999}. It should be pointed out that by using the above process one can set up and successfully run a modification of the usual Lanczos algorithm, to be discussed later, to solve for the few lowest eigenvectors of the free particle in a one-dimensional box through an arbitrarily chosen initial wave function. The major modification is to project every new function, $\Psi _{n+1}=$ $H\Psi _{n}$, into the appropriate Hilbert space and subtract the components along any previous basis vectors. 
Only then should one attempt to evaluate the matrix elements of $H$ related to the new basis vector that is clearly within the correct Hilbert space. This way, one has to double the number of scalar product operations compared to the usual algorithm where the matrix elements of $H$ are calculated along with the complete re-orthogonalization of the basis vectors. \subsection{Discussion of the Toy Model Results} \quad Having considered the main problems one may face in studying the simple toy model (\ref{H-ho-1Dbox}), \[ H=\frac{1}{2m}p^{2}+V_{L}(q)+\frac{m\omega ^{2}}{2}q^{2}, \] we close the discussion with a sample spectrum for the case of $m=\hbar =2L/\pi =1$ and $\omega =4$. As one can see in Fig. \ref{w4SpectralStructure}, the first three energy levels are indeed equally spaced and coincide with the harmonic oscillator levels as expected from (\ref{HO spectrum ends}). For these states, the wave functions are also the harmonic oscillator wave functions. The intermediate spectrum is almost missing. Above $E_{c}$, the spectrum is that of a free particle in a 1D box perturbed by the harmonic oscillator potential. The oblique-basis type calculation reproduces the first eight low energy states within a 14-dimensional calculation (seven nodally adjusted harmonic oscillator states and seven states of a free particle in a box), while the fixed-basis calculation, using only the wave functions of a free particle in a one-dimensional box, requires 18 basis states. Due to the simplicity of the toy model, one does not find any significant numerical advantage of the oblique-basis calculation compared to the calculations using the fixed basis of the 1D box wave functions. There are two main reasons for this: (1) there is a sharp energy scale $E_{c}$ that separates the two modes, and (2) the spectrum above the energy $E_{c}$ has a nice regular structure. 
The nice regular structure above the energy $E_{c}$ results in a very favorable situation for the usual fixed-basis calculations since the dimension of the space needed to obtain the $n$-th eigenvalue grows as $n+\alpha$. The parameter $\alpha$ is relatively small and does not change much in a particular region of interest. For example, the $\omega =16$ calculations need only $\alpha =15$ extra basis vectors when calculating any of the eigenvectors up to the hundredth vector. The relatively constant value of $\alpha $ can be understood by considering the harmonic oscillator potential as an interaction that creates excitations out of the $n$-th unperturbed 1D box state. Therefore, $\alpha $ is the number of 1D box states with energies in the interval between $E_{n}^{0}$ and $E_{n}^{0}+\omega ^{2}/2\left\langle \Phi _{n}\right| x^{2}\left| \Phi _{n}\right\rangle $ where $E_{n}^{0}$ is the $n$-th unperturbed 1D box state energy. There is a fast de-coupling of the higher energy states from any finite excitation process that starts out of the $n$-th state. The fast de-coupling is due to the increasing energy spacing of the 1D box spectrum. This results in a finite number of states mixed by the presence of the harmonic oscillator potential. Using the upper limit $E_{c}/3$ on $\delta E_{n}^{1}$, one can easily estimate $\alpha$: \[ \alpha \approx \frac{1}{3}n_{\max }^{1D}. \] The sharp separation of the two modes allows for a safe use of the harmonic oscillator states without any rescaling. This is especially true when $\omega$ is very large since then the low energy states are naturally localized within the box. Therefore, there is a clear shortcut: instead of diagonalizing the Hamiltonian in some 1D box wave-function basis, one can just use the harmonic oscillator wave functions. 
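This shortcut is easy to verify numerically. The sketch below (a finite-difference diagonalization on a grid; the grid size is an arbitrary choice) shows that for $\omega =16$, $L=\pi /2$, $\hbar =m=1$ the lowest ten eigenvalues of the full Hamiltonian agree with $\hbar \omega (n+1/2)$ to high accuracy:

```python
import numpy as np

# Check of the "shortcut": for large omega the low-lying eigenvalues of
# H = p^2/2m + V_L(q) + m w^2 q^2/2 are pure harmonic-oscillator values.
m, hbar, omega, L = 1.0, 1.0, 16.0, np.pi / 2
N = 1200                                      # interior grid points (arbitrary)
q = np.linspace(-L, L, N + 2)[1:-1]
dx = q[1] - q[0]

H = (hbar**2 / (2 * m * dx**2)
     * (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))
     + np.diag(0.5 * m * omega**2 * q**2))
E = np.linalg.eigvalsh(H)[:10]
E_ho = hbar * omega * (np.arange(10) + 0.5)

print("max |E - E_ho| over the first ten levels:",
      float(np.max(np.abs(E - E_ho))))
```

The deviation is at the level of the discretization error, i.e., for large $\omega $ the box walls are invisible to the low-lying states.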
\begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = 5.2 in \centerline {\includegraphics[width= 5.2 in]{DeltaEn}} \end{center} \caption{Absolute deviations from the exact energy eigenvalues for $\omega =16$, $L=\pi /2$, $\hbar =m=1$ as a function of $n$. Blue circles represent deviation of the exact energy eigenvalue from the corresponding harmonic oscillator eigenvalue ($\Delta E=E^{exact}_n-E^{HO}_n$), the red diamonds are the corresponding deviation from the energy spectrum of a particle in a 1D box ($\Delta E=E^{exact}_n-E^{1D}_n$), and the green squares are the first-order perturbation theory results.} \label{DeltaEn} \end{figure} Fig. \ref{DeltaEn} shows the absolute deviation ($\Delta E=E_{n}^{exact}-E_{n}^{estimate}$) of the exact energy spectrum for the case of $\omega =16$, $L=\pi /2$, $\hbar =m=1$. Here, $E_{n}^{estimate}$ refers to the three energy estimates one can make: the harmonic oscillator $E^{HO}_n$, particle in a 1D box $E^{1D}_n$, and the first-order perturbation theory estimate considering the harmonic oscillator potential as a perturbation ($E_{n}^{1D}+\omega ^{2}/2\left\langle \Phi _{n}\right| x^{2}\left| \Phi _{n}\right\rangle $). There are about 19 states that match a harmonic oscillator spectrum, which is consistent with the expected value from (\ref{HO spectrum ends}). After the $n=20$ level, the perturbation theory gives increasingly better results for the energy eigenvalues. Fig. \ref{DeltaEoverE} shows the relative deviation ($1-E_{n}^{estimate}/E_{n}^{exact}$) of the exact energy spectrum for the case of $\omega =16$, $L=\pi /2$, $\hbar =m=1.$ \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = 5.2 in \centerline {\includegraphics[width= 5.2 in]{DeltaEoverE}} \end{center} \caption{Relative deviations from the exact energy eigenvalues for $\omega =16$, $L=\pi /2$, $\hbar =m=1$ as a function of $n$. 
The blue circles represent deviation of the exact energy eigenvalue from the corresponding harmonic oscillator eigenvalue (${\Delta E}/E=1-E^{HO}_n/E^{exact}_n$), the red diamonds are the corresponding relative deviation from the energy spectrum of a particle in a 1D box (${\Delta E}/E=1-E^{1D}_n/E^{exact}_n$), and the green squares are the first-order perturbation theory results.} \label{DeltaEoverE} \end{figure} From these graphs, it seems that the transition region is somewhat absent since the first-order perturbation theory takes over immediately after the breakdown of the harmonic oscillator spectrum. Even though the first-order perturbation theory gives good estimates for the energy levels in this transition region, this is not a manifestation of proper perturbative behavior. Rather, it is a manifestation of coherent behavior \cite{Adiabatic mixing}. What actually happens in this region is a coherent mixing of 1D box states by the harmonic oscillator potential in the sense of a quasi-symmetry discussed in the Appendix. Notice that perturbation theory is valid, as expected, for the high-energy states determined by the expression (\ref{1D box spectrum begins}). For the high energy spectrum the harmonic oscillator potential acts as a small perturbation. Thus the first-order corrections in the energy and the wave function are small. Fig. \ref{State105} shows that the main component of the 105th exact wave function comes from the 105th 1D box wave function, as it should for small perturbations. \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{105th1DHOin1DboxW16}} \end{center} \caption{Non-zero components of the 105th exact eigenvector in the basis of a free particle in a one-dimensional box. 
Parameters of the Hamiltonian are $\omega =16$, $L=\pi /2$, $\hbar =m=1$.} \label{State105} \end{figure} For low energy states, perturbation theory around the 1D box states is not appropriate, since the harmonic oscillator states are the true states in this region. Specifically, for $m=\hbar =2L/\pi =1$ and $\omega =16,$ the first ten states are the harmonic oscillator states to very high accuracy. The next ten states have large overlaps with the corresponding harmonic oscillator wave functions. For example, starting from 0.999999 at the tenth state, the overlaps go down to 0.880755 at the twentieth state; after that the overlaps decrease very quickly. Fig. \ref{State3} shows the structure of the third exact eigenvector when expanded in the 1D box basis. Notice that the third 1D box wave function is almost missing from the structure of the third harmonic oscillator wave function. Such a small overlap can occur at particular values of the parameter $\omega L^{2}$ relevant to the problem at hand. \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{ThirdHOStateW16}} \end{center} \caption{Non-zero components of the third harmonic oscillator eigenvector as expanded in the basis of a free particle in a one-dimensional box. Parameters of the Hamiltonian are $\omega =16$, $L=\pi /2$, $\hbar =m=1$.} \label{State3} \end{figure} This pattern of a small component of the exact wave function along the corresponding 1D box wave function persists into the transition region. This is unexpected, considering that the first-order estimates of the energy levels are relatively good.
Thus we are confronted with a situation where perturbation theory is not appropriate, since the level spacing is smaller than the magnitude of the ``perturbing potential'', yet the expectation values of the full Hamiltonian are relatively close to the exact eigenvalues\footnote{A simple explanation of this effect is that the unperturbed energies $E^0_n$ are such that $E^0_n > \delta E^1_n$.}, even though the corresponding 1D box wave functions are not at all present in the exact wave function, as shown in Fig. \ref{States25to29}. \begin{figure}[h] \begin{center} \leavevmode \epsfxsize = 5in \centerline {\includegraphics[width= 5in]{1DHO1DboxCoherentStructure}} \end{center} \caption{Coherent structure of the non-zero components of the 25th, 27th, and 29th exact eigenvectors in the basis of a free particle in a one-dimensional box. Parameters of the Hamiltonian are $\omega =16$, $L=\pi/2$, $\hbar =m=1$.} \label{States25to29} \end{figure} In conclusion, the oblique-basis idea offers a clear shortcut: it allows one to use the correct wave functions in the relevant low and high energy regimes relative to $E_c$. There is a clear coherent mixing in the transition region. Such a phenomenon has also been observed in the lower $pf$-shell nuclei $^{44-48}$Ti and $^{48}$Cr, which will be discussed in the next chapters. Due to the simplicity of the model, there is only a small numerical gain in using oblique-basis calculations here. However, there could be other cases with a significant gain from oblique-basis type calculations. Nuclear physics provides one such example, as demonstrated in our study of $^{24}$Mg \cite{VGG 24MgObliqueCalculations}. Another two-mode system of interest is a particle confined in two dimensions by an external magnetic field. This system is interesting because the confinement lifts the infinite degeneracy of the Landau-like states \cite{Rosas et al-2000}.
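The toy-model spectrum discussed above is straightforward to reproduce numerically. The following is a minimal sketch (not the code used in this work): it diagonalizes the Hamiltonian of a particle in a box of width $2L=\pi$ with a centered harmonic potential, expanded in the 1D-box basis, using the parameter values quoted in the text ($\omega=16$, $\hbar=m=1$); the threshold used to count ``HO-like'' states is an illustrative assumption.

```python
import numpy as np

# Toy model: particle in a 1D box of width 2L = pi (hbar = m = 1) with a
# harmonic oscillator potential V = (omega^2/2)(x - L)^2 centered in the box.
omega, a = 16.0, np.pi          # a = 2L is the box width
nbasis = 150                    # number of 1D-box basis states kept
x = np.linspace(0.0, a, 4001)   # quadrature grid
dx = x[1] - x[0]

# Box eigenfunctions phi_n(x) = sqrt(2/a) sin(n pi x / a), energies n^2/2
n = np.arange(1, nbasis + 1)
phi = np.sqrt(2.0 / a) * np.sin(np.outer(n, x) * np.pi / a)

# Potential matrix elements by simple quadrature (the integrand vanishes
# at both endpoints, so a plain Riemann sum is effectively trapezoidal)
V = 0.5 * omega**2 * (x - a / 2) ** 2
H = np.diag(n**2 / 2.0) + (phi * V) @ phi.T * dx

E = np.linalg.eigvalsh(H)
E_HO = omega * (np.arange(nbasis) + 0.5)   # harmonic oscillator ladder

# Count low-lying states that still follow the HO ladder
# (0.5 is an illustrative tolerance; the text quotes about 19 such states)
n_ho_like = int(np.sum(np.abs(E[:30] - E_HO[:30]) < 0.5))
print(E[:3])        # close to the HO values 8, 24, 40
print(n_ho_like)
```

The lowest levels agree with the oscillator ladder to many digits, while levels whose classical turning points approach the walls cross over to the box spectrum, as in Fig. \ref{DeltaEn}.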
\chapter{Oblique Shell-Model Basics} \quad Some modern shell-model codes are based on the so-called $m$-scheme logic, namely, the model space is spanned by many-particle configurations (Slater determinants) with \textbf{good third component of the total angular momentum ($M_{J}$)} \cite{ANTOINE,OXBASH}. A good total angular momentum $J$, which is a conserved quantity due to the isotropy of space, is obtained by angular momentum projection before, after, or as a consequence of the diagonalization of the Hamiltonian. Codes of this type normally achieve a good description of nuclear phenomena dominated by single-particle effects. In these codes the basis consists of single machine words representing the many-particle configurations $\left| n_{1}...n_{k}\right\rangle =\prod\limits_{s=1}^{k}\left(a_{s}^{+}\right) ^{n_{s}}\left| 0\right\rangle $. Unfortunately, an equally good description of collective phenomena within this framework is difficult to obtain due to the computational problems associated with the size of the needed model space. On the other hand, the $SU(3)$-based shell-model scheme is designed to give a simple interpretation of collective nuclear phenomena. An ideal scenario would incorporate both the single-particle and the collective degrees of freedom, allowing the Hamiltonian of the system to ``choose'' the admixture that is most appropriate. In this chapter, we discuss some of the computational methods and techniques used in our calculations. \section{Generalized Eigenvalue Problem} \quad The usual procedure for solving an eigenvalue problem $\hat{H}\vec{v} =\lambda \vec{v}$ is to cast it into a matrix equation. In a non-orthogonal basis \cite{Fox Lin. Alg. -nonortogonal}, this matrix equation includes an overlap matrix ($\Theta _{ij}=\langle i|j\rangle $) and has the form \begin{equation} \sum_{j}\left(H_{ij}v_{j}-\lambda \Theta _{ij}v_{j}\right) =0.
\label{generalized eigenvalue problem - matrix eq.} \end{equation} For an orthonormal basis the overlap matrix becomes the identity matrix ($ \Theta _{ij}\rightarrow \delta _{ij}$), and the matrix form of the eigenvalue problem is \begin{equation} \sum_{j}H_{ij}v_{j}=\lambda v_{i}. \label{standart eigenvalue matrix equation} \end{equation} When the overlap matrix $\Theta $ is positive-definite, the Cholesky algorithm \cite{Press -NR book Cholesky}, which decomposes $\Theta $ into the product of an upper triangular matrix ($U$) and its transpose ($U^{T}$), $ \Theta \rightarrow UU^{T}$, can be used to cast the generalized eigenvalue problem (\ref{generalized eigenvalue problem - matrix eq.}) back into the standard matrix equation (\ref{standart eigenvalue matrix equation}): \begin{equation} H^{\prime}\vec{v}^{\prime}=\lambda \vec{v}^{\prime},\quad H^{\prime} =U^{-1}H\left(U^{-1}\right) ^{T},\quad \vec{v}^{\prime}=U^{T}\vec{v}. \label{effective eigenvalue problem} \end{equation} The use of the Cholesky algorithm is essential for identifying linearly dependent vectors within the oblique basis. For large spaces, the effective eigenvalue problem (\ref{effective eigenvalue problem}) can be solved efficiently by using an appropriately modified Lanczos algorithm, which we will discuss in a later section. For the calculations that will be discussed later, we use two basis sets. The first set consists of spherical shell-model states (ssm-states) expressed in spherical single-particle coordinates ($nlj$). The second set has a good SU(3) structure (su3-states) which tracks nuclear deformation \cite{Draayer SU3-(beta-gamma)}; this basis set is given in cylindrical single-particle coordinates. By construction, both sets have the third projection $M_{J}$ of the total angular momentum $J$ as a good quantum number \cite{VGG-1998 su3 good M, the M-scheme approach}.
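As an illustration, here is a small numerical sketch of this Cholesky reduction (with a random symmetric stand-in for the Hamiltonian; numpy's \texttt{cholesky} returns a lower-triangular factor $C$, so $\Theta = CC^{T}$ plays the role of $UU^{T}$ in the notation above, with the algebra unchanged):

```python
import numpy as np

# Random symmetric "Hamiltonian" and positive-definite overlap matrix
rng = np.random.default_rng(0)
dim = 6
A = rng.standard_normal((dim, dim))
H = A + A.T
B = rng.standard_normal((dim, dim))
Theta = B @ B.T + dim * np.eye(dim)

C = np.linalg.cholesky(Theta)         # lower triangular, Theta = C C^T
Cinv = np.linalg.inv(C)
Hp = Cinv @ H @ Cinv.T                # H' = C^{-1} H C^{-T}
lam, vp = np.linalg.eigh(Hp)          # standard symmetric eigenproblem
v = Cinv.T @ vp                       # back-transform: v = C^{-T} v'

# The back-transformed vectors solve the generalized problem H v = lam Theta v
resid = H @ v - Theta @ v * lam
print(np.max(np.abs(resid)))          # numerically zero
```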
Schematically, these basis vectors and their overlap matrix can be represented in the following way: \begin{eqnarray} \mathrm{basis\quad vectors} &:&\mathrm{\quad}\left(\begin{array}{l} e_{\alpha} :\mathrm{ssm\ - \ basis} \\ E_{i} :\mathrm{su3\ - \ basis} \end{array} \right), \label{Basis vectors} \\ \mathrm{overlap\quad matrix} &:&\mathrm{\quad}\Theta =\left(\begin{array}{ll} \mathbf{1} & \Omega \\ \Omega ^{+} & \mathbf{1} \end{array} \right),\qquad \Omega _{\alpha i}=e_{\alpha}\cdot E_{i}, \label{Overlap matrix} \\ \mathrm{Hamiltonian\quad matrix} &:&\mathrm{\quad}H=\left(\begin{array}{ll} H_{ssm \times ssm} & H_{ssm \times su3} \\ H_{su3 \times ssm} & H_{su3 \times su3} \end{array} \right) =\left(\begin{array}{ll} H_{\alpha \beta} & H_{\alpha j} \\ H_{i\beta} & H_{ij} \end{array} \right). \label{hamiltonian Matrix} \end{eqnarray} In the above, $\alpha$ and $i$ span the following ranges: $\alpha = 1$,..., dim(ssm-basis) and $i = 1$,..., dim(su3-basis). Calculations in a nonorthogonal oblique basis require an evaluation of the matrix elements of physical operators plus a knowledge of the scalar products ($e_{\alpha}\cdot E_{i}$) that enter the overlap matrix. While it may be desirable to have an analytical expression for the overlap matrix, as we have for the single-particle overlap matrix \cite{Chacon overlap}, for practical purposes it suffices either to represent each basis state in a common set that spans the full space (which runs counter to the overall objective of reducing the number of basis states to a manageable subset) or to expand one set in terms of the other. For the present work, the $e_{\alpha}$, each of which can be represented by a single machine word in a spherical single-particle scheme, were expanded in the cylindrical basis, which is the representation used for our collective SU(3) basis vectors.
This transformation is handled by an efficient routine that exploits two computational aids: bit manipulation via logical operations and a weighted search tree for fast data storage and retrieval \cite{Park-WST}. A transformation of this type has to be done at least once per ssm basis state ($e_{\alpha}$). We transform the ssm-basis states since the result is usually a vector with fewer components than a typical SU(3) basis state. There is a simple way to calculate the overlap between states in different single-particle bases \cite{Lang dot(a.b)=Det(A'B)}. However, for the calculation of matrix elements of the Hamiltonian, it is better to transform each $e_{\alpha}$ vector into the basis used by the SU(3) states. Matrix elements of the one-body and two-body Hamiltonian \[ H=\sum_{i}\varepsilon _{i}a_{i}^{+}a_{i}+\frac{1}{4} \sum_{i,j,k,l}V_{kl,ij}a_{i}^{+}a_{j}^{+}a_{k}a_{l} \] have to be evaluated in each subspace ($H_{\alpha \beta}$ and $H_{ji}$), as well as between the two spaces ($H_{\alpha i}$ and $H_{j\beta}$), see (\ref {hamiltonian Matrix}). The $H_{\alpha \beta}$ part is normally given and evaluated in a spherical single-particle basis. By transforming the Hamiltonian to a cylindrical single-particle basis one can obtain the $H_{ji}$ part of $H$. In order to compute the off-diagonal blocks ($H_{\alpha i}$, $H_{j\beta}$, and the overlap matrix elements between SU(3) and ssm-basis states), both basis sets are expanded in a basis of Slater determinants built from cylindrical single-particle states. For example, any vector within the two irreps (8,4) and (9,2) of $^{24}$Mg has at most 2120 cylindrical Slater determinants; each ssm state, which itself is a single spherical Slater determinant, typically expands into fewer than 1296 cylindrical Slater determinants. We do not expand the SU(3) states into spherical-basis Slater determinants because that would require a significant fraction of the entire spherical shell-model space, defeating the rationale of our approach.
Taking into account the significant number of Hamiltonian matrix elements ($H_{ij}$ and $H_{i\beta}$) between multi-component states, it should be clear that this is the most time-consuming part of the calculation. The extra label associated with the intrinsic quadrupole moment $\varepsilon $ of each basis state is used to produce well-structured band-like matrices and to speed up the calculation. Specifically, basis states are pre-ordered according to their deformation as reflected by $\varepsilon,$ and during the evaluation of $H$ a $\Delta \varepsilon $ selection rule is applied. It is important to point out that knowledge of the overlap matrix $\Theta $ and the matrix elements of $H$ in the two spaces ($H_{\alpha \beta}$, $ H_{ij}$) is not enough to obtain the correct off-diagonal block $H_{\alpha i}.$ This is clear from the following explicit expression for $H_{\alpha i}$, which contains a summation (over $\bar{\beta}$) that lies outside of the oblique model space ($\beta,i$): \[ H_{\alpha i}=\sum_{\beta}H_{\alpha \beta}\Theta _{\beta i}+ \sum_{\bar{\beta}}H_{\alpha \bar{\beta}}\Theta _{\bar{\beta}i}. \] Thus a direct evaluation of $H_{\alpha i}$ is required. \section{Geometrical Visualization of the Oblique Basis} \quad It is instructive to consider a geometrical visualization of the oblique-basis concept. Since a set of vectors defines a hyperplane, it is natural to ask: ``What is the angle between the hyperplanes defined by the bases under consideration?'' To answer this question, first consider the angle $\theta $ between a normalized SU(3) basis vector $\vec{v}$ and the subspace $V$ spanned by the spherical shell-model basis vectors. The length of the projected vector $\vec{v}_{V}\in V$ is given by $\cos(\vec{v},V)=\cos \theta =\left| \vec{v}_{V}\right| $. The space $V$ of the spherical shell-model basis vectors induces a natural basis $\vec{n}_{\varepsilon}$ in the SU(3) space ($\vec{n}_{\varepsilon}=n_{\varepsilon}^{i}\vec{E}_{i}$).
The cosine of the angle between each new basis vector $\vec{n}_{\varepsilon}$ and the space $V$ will again be the length of its projection into the space $V,$ and this set of orthogonal basis vectors has the nice property that it stays orthogonal after the projection into the space $V$: \begin{eqnarray*} \cos \theta _{\varepsilon} &=&\cos (\vec{n}_{\varepsilon},V)=\left| \vec{n} _{\varepsilon V}\right|, \\ \vec{n}_{\varepsilon V} &=&\sum_{i,\alpha}n_{\varepsilon}^{i}(\vec{E}_{i} \cdot \vec{e}_{\alpha})\vec{e}_{\alpha}= \sum_{i,\alpha}n_{\varepsilon}^{i}\Theta _{i\alpha}\vec{e}_{\alpha}, \\ \left| \vec{n}_{\varepsilon V}\right| ^{2} &= &\sum_{\alpha}(\sum_{i}n_{\varepsilon}^{i}\Theta _{i\alpha})^{2}=\sum_{\alpha,i,j}n_{\varepsilon}^{i}\Theta _{i\alpha}n_{\varepsilon}^{j}\Theta _{j\alpha}. \end{eqnarray*} In matrix notation this reads \[ \left| \vec{n}_{\varepsilon V}\right| ^{2}=\vec{n}_{\varepsilon}\cdot \hat{\Theta}\cdot \hat{\Theta}^{T}\cdot \vec{n}_{\varepsilon}, \] where the natural basis vectors $\vec{n}_{\varepsilon}$ are eigenvectors of the symmetric matrix $\hat{\Theta}\cdot \hat{\Theta}^{T}$ \begin{equation} \hat{\Theta}\cdot \hat{\Theta}^{T}\cdot \vec{n}_{\varepsilon}=\varepsilon ^{2}\vec{n}_{\varepsilon}. \label{natural basis} \end{equation} It follows that $\left| \vec{n}_{\varepsilon V}\right| ^{2}=\vec{n} _{\varepsilon}\cdot \hat{\Theta}\cdot \hat{\Theta}^{T}\cdot \vec{n} _{\varepsilon}=\varepsilon ^{2}\vec{n}_{\varepsilon}\cdot \vec{n} _{\varepsilon}=\varepsilon ^{2}$, and thus the matrix $\hat{\Theta}\cdot \hat{\Theta}^{T}$ is positive semi-definite ($\left| \vec{n}_{\varepsilon V}\right| ^{2}=\varepsilon ^{2}\geq 0$), with eigenvalues $\varepsilon ^{2}=\cos ^{2}\theta _{\varepsilon}$.
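The natural-basis construction can be checked numerically. The sketch below uses random orthonormal stand-ins for the two basis sets (not the actual shell-model vectors): it diagonalizes $\hat{\Theta}\hat{\Theta}^{T}$ and verifies that the eigenvalues are the squared lengths $\cos ^{2}\theta _{\varepsilon}$ of the projections onto $V$, and that the projected natural vectors remain mutually orthogonal.

```python
import numpy as np

rng = np.random.default_rng(1)
full_dim, dim_ssm, dim_su3 = 20, 8, 5

# Orthonormal columns spanning two subspaces of a common full space
e, _ = np.linalg.qr(rng.standard_normal((full_dim, dim_ssm)))   # e_alpha
E, _ = np.linalg.qr(rng.standard_normal((full_dim, dim_su3)))   # E_i

Theta = E.T @ e                            # overlap block Theta_{i alpha}
eps2, n_nat = np.linalg.eigh(Theta @ Theta.T)
cos_theta = np.sqrt(np.clip(eps2, 0.0, None))

# Cross-check: project each natural basis vector onto V explicitly
nat_vectors = E @ n_nat                    # natural vectors in the full space
proj = e @ (e.T @ nat_vectors)             # projector onto V applied to them
lengths = np.linalg.norm(proj, axis=0)     # these are the cos(theta_eps)
gram = proj.T @ proj                       # diagonal: projections orthogonal
print(np.max(np.abs(lengths - cos_theta)))             # numerically zero
print(np.max(np.abs(gram - np.diag(lengths**2))))      # numerically zero
```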
This construction allows for a simple visualization of the space spanned by the oblique basis: choose the $x$-axis to correspond to the space $V$ of all the spherical shell-model basis vectors, and represent the SU(3) space as a collection of unit vectors, each at an angle $\theta$, with $\cos \theta =\varepsilon $, with respect to the $x$-axis. This construction will be applied later to the geometry of oblique-basis space calculations to demonstrate the relative orthogonality of the two vector sets, $e_{\alpha}$ and $E_{i}$. \section{The Lanczos Algorithm} \quad The Lanczos algorithm is an essential scheme for obtaining a small number of eigenvectors corresponding to the lowest or highest eigenvalues \cite{Van Loan Cullum-Lanczos, Lanczos-1950}. It has been applied successfully to model spaces with dimensions on the order of $10^{6}$ and has even been pushed up to $10^{8}$ \cite{Ur et al}. The algorithm is a simple and very efficient method to build a basis of the Hilbert space associated with an eigenvalue problem for an operator $H$. In its simplest form, one starts with a trial state and applies $H$ over and over to generate new states, orthogonalizing each new state against the previous ones; the process can be repeated as many times as desired. In this way one generates an orthonormal basis in which the corresponding matrix of the operator $H$ is tri-diagonal. The method is recursive and can be used in numerical as well as analytic calculations \cite{Kaluza - Analytical Lanczos}. For our toy model described earlier, we used an analytic realization (coordinate representation) of the algorithm, while for the calculations in nuclei a numerical matrix realization was more suitable due to the Fock representation of the states. In brief, the algorithm starts with the choice of a first normalized vector $ \vec{v}_{1}$ ($\left\langle \vec{v}_{1}|\vec{v}_{1}\right\rangle =1$). In matrix calculations this vector is often chosen randomly.
Then $H$ is applied to $\vec{v}_{1}$ and a new vector orthogonal to $\vec{v}_{1}$ is constructed, $\vec{v}_{2}=H\vec{v}_{1}-\left\langle \vec{v}_{1}|H\vec{v}_{1}\right\rangle \vec{v}_{1}$. Next $\vec{v}_{2}$ is normalized and used to generate a new vector, and so on. It can be shown that the basis $\{\vec{v}_{n}\}$ generated this way is orthonormal, and that $H$ is tri-diagonal in this basis. In practice, however, numerical noise destroys the orthogonality and requires a full reorthogonalization of each newly generated vector with respect to all previous vectors. One important feature of the Lanczos algorithm is that at each new iteration it provides the vector with the next most important contribution within the model space. However, this is true only if the first vector is a good trial guess for an exact state; a trial vector with some bad components can cause problems. Another feature of the Lanczos algorithm is that it preserves any symmetry of the initial vector that is also a symmetry of the Hamiltonian.\footnote{The Lanczos algorithm should in principle conserve symmetries; however, machine round-off error often mixes in different states. The round-off error is also the reason for performing a complete re-orthogonalization for each newly generated state.} For example, if $H$ is invariant under parity transformations and the initial vector has even parity, then the algorithm will produce only even parity states. Similarly, if we start with a state with good $J$ and $M_{J}$, then applications of $H$ will only produce states of the same symmetry. While this can be viewed as an advantage, it can sometimes be a problem, especially when the symmetry has a finite irreducible sub-space. For example, if we start with a vector from a finite irreducible sub-space, then after a finite number of iterations the algorithm will exhaust the sub-space and any new vector will be linearly dependent on the previously generated vectors.
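A minimal sketch of the algorithm as just described, including the full reorthogonalization step and the breakdown check (a dense random matrix stands in for the shell-model Hamiltonian):

```python
import numpy as np

def lanczos(H, v1, niter):
    """Lanczos with full reorthogonalization; returns Ritz values and basis."""
    vs = [v1 / np.linalg.norm(v1)]
    alphas, betas = [], []
    for _ in range(niter):
        w = H @ vs[-1]
        alphas.append(vs[-1] @ w)
        # full reorthogonalization against ALL previous vectors,
        # needed because round-off destroys orthogonality
        for v in vs:
            w = w - (v @ w) * v
        beta = np.linalg.norm(w)
        if beta < 1e-12:          # sub-space exhausted (breakdown)
            break
        betas.append(beta)
        vs.append(w / beta)
    k = len(alphas)
    # H is tri-diagonal in the generated basis
    T = np.diag(alphas) + np.diag(betas[:k-1], 1) + np.diag(betas[:k-1], -1)
    return np.linalg.eigvalsh(T), np.array(vs)

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 200))
H = A + A.T
ritz, basis = lanczos(H, rng.standard_normal(200), 60)
exact = np.linalg.eigvalsh(H)
# The extremal (lowest/highest) eigenvalues converge first
print(abs(ritz[0] - exact[0]), abs(ritz[-1] - exact[-1]))
```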
This breakdown of the algorithm is generally overcome by introducing a new guess vector. In large matrix diagonalizations, the new vector can be a random Gaussian vector, or it can be ``the next vector'' from a prior given set. In our toy model, we carried out Lanczos-type calculations using the harmonic-oscillator basis as a prior given set. An interesting variation of the Lanczos algorithm, aimed at a particular $k$-th eigenvector, has been suggested by Davidson \cite{Davidson}. In the Davidson algorithm, one tries to increase the speed of convergence by modifying the way a new vector is generated. For example, the Lanczos algorithm uses $\vec{w}_{n}=H\vec{v}_{n}$ as a seed for a new vector $\vec{v}_{n+1}$, while the Davidson algorithm uses the vector $\vec{w}_{n}= \left(\lambda _{k}I-\mathrm{diag}\left(H\right) \right) ^{-1}\left(He_{k}-\lambda _{k}e_{k}\right) $ with enhanced components along the $k$-th eigenvector. In the Davidson expression for $\vec{w}_{n}$, $e_{k}$ and $\lambda _{k}$ are the approximate $k$-th eigenvector and eigenvalue after the $n$-th iteration, and $\mathrm{diag}\left(H\right) $ is the diagonal part of $H$. To solve the generalized eigenvalue problem (\ref{effective eigenvalue problem}), we have to modify the Lanczos algorithm. In doing so, it is more efficient to apply the matrices $\left(U^{-1}\right)^{T}$, $H$, and $U^{-1}$ consecutively to the relevant vectors. The computational time in this case grows as the second power of the dimensionality of the space ($n^{2}$). This is to be compared with first fully multiplying the three matrices $(U^{-1})^{T},$ $H,$ and $U^{-1},$ which grows as the third power of $n$ ($n^{3}$), and only then acting on vectors. In closing this section, we would like to point out some possible future applications of the Lanczos algorithm and its modifications.
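The consecutive application of the three factors described above can be sketched as follows. Note that scipy's Cholesky convention is $\Theta = U^{T}U$ for an upper-triangular $U$, so the roles of $U$ and $U^{T}$ are swapped relative to the text's $\Theta = UU^{T}$; the algebra and the $n^{2}$-versus-$n^{3}$ cost argument are identical.

```python
import numpy as np
from scipy.linalg import cholesky, eigh, solve_triangular

rng = np.random.default_rng(3)
n_dim = 50
A = rng.standard_normal((n_dim, n_dim))
H = A + A.T                                  # symmetric "Hamiltonian"
B = rng.standard_normal((n_dim, n_dim))
Theta = B @ B.T + n_dim * np.eye(n_dim)      # positive-definite overlap

U = cholesky(Theta)                          # upper triangular, Theta = U^T U

def Hprime_matvec(v):
    """Apply H' = U^{-T} H U^{-1} with two O(n^2) triangular solves."""
    z = solve_triangular(U, v)               # z = U^{-1} v
    z = H @ z                                # O(n^2) matrix-vector product
    return solve_triangular(U, z, trans='T') # U^{-T} (H z)

# Compare with the explicitly assembled H' (an O(n^3) construction)
Uinv = np.linalg.inv(U)
Hp = Uinv.T @ H @ Uinv
v = rng.standard_normal(n_dim)
diff = np.max(np.abs(Hprime_matvec(v) - Hp @ v))

# Eigenvalues of H' solve the original generalized problem H v = lam Theta v
lam_std = np.linalg.eigvalsh(Hp)
lam_gen = eigh(H, Theta, eigvals_only=True)
print(diff, np.max(np.abs(lam_std - lam_gen)))   # both numerically zero
```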
Although the algorithm is nowadays mostly used in huge matrix diagonalizations \cite {Van Loan Cullum-Lanczos, Hausman - Cornelius and Vladimir}, it can also be applied to constrained problems, as mentioned in our toy-model discussion of a particle in a box in the harmonic-oscillator basis. For such problems, one would have to project $\vec{w}_{n}=H\vec{v}_{n}$ into the space determined by the constraints, and only after that proceed with the calculation. Another interesting application is related to the long-standing problem of performing \textit{ab-initio} calculations in nuclear physics with effective interactions derived from a NN-interaction. Some current advances in this field which take advantage of the Lanczos algorithm have been reported by Haxton \textit{et al.} \cite{Haxton and Song, Haxton and Luu}. A link between the Lanczos method and space projection techniques, such as the Brueckner, Feshbach, and Bloch-Horowitz projection treatments \cite{Armen and Rau, Feshbach-1962, Bloch and Horowitz}, also seems intriguing \cite{Haxton and Song}. \section{Mixed-Symmetry Basis for the Nuclear Shell Model} \quad Even though the purpose of this work has been reiterated several times in different contexts, we feel strongly motivated to state it again, this time in the context of purely practical curiosity. Specifically, can we do calculations using two or more important basis sets as usually employed in different shell models? In particular, can we use spherical shell-model states, which are related to the single-particle $j$-shell symmetry ($ \otimes ^{2j+1}U(1)=U(1)\otimes...\otimes U(1)\otimes U(1)$), together with $SU(3)$ shell-model states, which are related to the $Q\cdot Q$ interaction? With these questions in mind (they will be answered in the affirmative in the next chapter), we continue our examination of the oblique-basis concept by focusing on the structure of the two basis sets used in our mixed-symmetry shell-model calculations.
In the rest of the chapter, we briefly go over the spherical single-particle basis, followed by a discussion of the $SU(3)$ symmetry-adapted basis. \subsection{Spherical Basis for Single-Particle Excitations} \quad We have already introduced the main concepts and notations in a previous chapter, as well as the basic idea of configuration truncation in the $m$-scheme basis. Here, we would like to touch upon some details related to the specifics of our calculations. First of all, the spherical basis states are taken from an older $m$-scheme code, \textit{NUCK} (\textit{GLASGOW}) \cite {Whitehead-shell model}. This code has been used as a benchmark and testing ground for our oblique-code results. The output from NUCK containing the spherical shell-model states (ssm-states) is used as input for \textit{GlsgwBasis2Redstick}, which produces a binary form of the proton-neutron basis states for use by the oblique code \textit{su3pn}. In order to speed up the calculations when $SU(3)$ states are also included, there is an option to use $\varepsilon$--sorted ssm--states (sorting is provided by \textit{EpsSorting}). \subsection{SU(3) Basis for Collective Excitations} \quad We have already reviewed Elliott's $SU(3)$ shell model \cite{Elliott's SU(3) model} and some of the single-particle labeling schemes in a previous chapter. In this section, we briefly discuss the $SU(3)$ package used to generate $SU(3)$ symmetry-adapted states with good third component of the total angular momentum. Next, the $SU(3)$ single-particle shell-model basis states are discussed and the action of the $SU(3)$ generators is explained. This is followed by a discussion of the structure of the \textit{Extreme Weight State(s)} (EWS) of an $SU(3)$ irrep for protons (neutrons), and especially the \textit{Highest Weight State(s)} (HWS) of the so-called leading $SU(3)$ irreps. Once a HWS is known, all states of the corresponding irrep can be constructed using $SU(3)$ step operators \cite{Hecht}.
Proton (neutron) states with good third component of the angular momentum ($M_{J}$) are obtained by considering the direct product $SU(3)\otimes SU_{S}(2)$ using spin highest weight states, as well as spin \textit{Lowest Weight State(s)} (LWS) \cite {VGG-1998 su3 good M}. Once the proton and neutron highest weight states are constructed, they can be coupled to the different possible proton-neutron highest weight states: \[ (SU(3)\otimes SU_{S}(2))_{p}\otimes (SU(3)\otimes SU_{S}(2))_{n}\rightarrow (SU(3)\otimes SU_{S}(2))_{pn}. \] This is the so-called strong coupling scheme, which is used to couple the proton and neutron irreps to the final proton-neutron irreps. Each extreme weight state can be used for the generation of all states of good $M_{J}$ within a given $SU(3)$ irrep. Another possible coupling scheme extends $SU_{S}(2)$ to $SU(4)\supset SU_{S}(2) \otimes SU_{T}(2)$. This is the supermultiplet scheme, which is a good symmetry for light nuclei. Since the goodness of the supermultiplet scheme does not extend to heavy nuclei, we will not consider it further in this study. \subsubsection{$\bullet$ The SU(3) Basis Generator} \quad The package for the generation of $SU(3)$ symmetry-adapted states with good third component of the total angular momentum consists of two major codes: (1) the $SU(3)$ proton-neutron HWS generator (\textit{SU3\_HWS\_GEN}) and (2) the proton-neutron generator of good $M_{J}$ (\textit{PNGGMJ}). The \textit{SU3\_HWS\_GEN} routine provides the input for \textit{PNGGMJ}, which generates the basis states needed for our oblique calculations.
The overall algorithm has four basic components: 1) definition of the single-particle levels and the matrix elements of the $SU(3)$ generators for a given proton (neutron) shell; 2) generation of the HWS of $SU(3)\otimes SU_{S}(2)$ for a given spin $S$ and number of protons (neutrons) $N$; 3) coupling of the proton HWS and neutron HWS to the desired proton-neutron $SU(3)$ HWS and $SU_{S}(2)$ LWS; 4) generation of all proton-neutron $SU(3)\otimes SU_{S}(2)$ states with good third component of the total angular momentum, $M_{J}$. The \textit{PNGGMJ} code accepts any pn-HWS and generates all the states with a given $M_{J}$ value. However, the \textit{SU3\_HWS\_GEN} code does not generate all possible pn-HWS, since one purpose of the current project is to include a few essential $SU(3)$ basis states in an $m$-scheme type calculation. Thus the \textit{SU3\_HWS\_GEN} code is set to generate only the leading proton and neutron configurations and their possible couplings. Therefore, an additional code (\textit{SU3Lister}) is needed to allow for a quick look at all the proton-neutron $SU(3)$ irreps that can be generated by the current version of the \textit{SU3\_HWS\_GEN} code. As a reference for a complete list of proton-neutron $SU(3)$ irreps, one can use the $SU(3)$ reduced matrix elements package \cite{Bahri-RME}. If other HWS are desired, the \textit{SU3\_HWS\_GEN} code can be modified to generate non-leading HWS. This could be done either by using Bahri's method \cite{Bahri-RME} or perhaps by using another, more direct approach tailored to the particular application at hand. The generation of non-leading HWS is important when one wishes to include states that are not maximally deformed in their intrinsic configuration. For example, non-leading HWS will be important if the non-$Q\cdot Q$ parts of the interaction play a significant role.
\subsubsection{$\bullet$ Single-Particle Levels and the SU(3) Matrix Elements} \quad The foundation of a microscopic symmetry-adapted shell-model calculation is the structure of the single-particle levels (SPL). The single-particle levels should be related to a representation of the symmetry group, which is $SU(3)$ in our case. Therefore, a discussion focused on the $SU(3)$ single-particle levels and the matrix elements of the $SU(3)$ generators is desirable and is given in the following few paragraphs. \paragraph{Single-particle Levels - Ordering Scheme:} \begin{figure}[tbp] \begin{center} \leavevmode \centerline {\includegraphics[width= \textwidth]{StepOperators}} \end{center} \caption{Three-dimensional view of the $(\lambda, \mu)$ $SU(3)$ irrep together with the action of the $SU(3)$ step operators.} \label{SU(3) Step Operators} \end{figure} Single-particle levels of the $\eta =0,1,2,...$ $(s,p,sd,...)$ harmonic oscillator shell belong to the symmetric $(\eta,0)$ irrep of $SU(3)$. Because $\mu =0$, the typical three-dimensional representation of $SU(3)$ basis states, Fig. \ref{SU(3) Step Operators}, reduces to a special two-dimensional triangular shape ($\varepsilon $ and $n_{\rho}$ become linearly dependent), Fig. \ref{Single Particle Levels}. Also, because $SU(3)$ is a compact group, its irreps are finite dimensional, and many-particle (fermion) configurations can be conveniently represented as binary strings, with a $1$ or $0$ symbolizing the presence or absence of a particle in the corresponding single-particle level. (The latter, together with a ``sign rule'' to accommodate fermion statistics, is a convenient computer implementation of a Slater determinant representation of the basis states.)
Recall that the canonical reduction, $SU(3)\supset U(1)\otimes SU(2)$, has two additive labels, $\varepsilon $ ($Q_{0}$) and $m_{l}$ ($L_{0}$), and the allowed values of these labels for a fixed $SU(3)$ irrep $(\lambda,\mu)$ are given by \cite{Hecht}: \begin{equation} \varepsilon =2\lambda +\mu -3(p+q),\quad n_{\rho}=\mu +(p-q),\quad m_{l}=n_{\rho}-2m \label{pqm-parametriztion again} \end{equation} where $0\leq p\leq \lambda $, $0\leq q\leq \mu $, and $0\leq m\leq n_{\rho}$. \begin{figure}[tbp] \begin{center} \leavevmode \centerline {\includegraphics[width= \textwidth]{SingleParticleLevels}} \end{center} \caption{Ordering scheme of the single-particle levels.} \label{Single Particle Levels} \end{figure} A convenient ordering scheme (which tracks the arrows in Fig. \ref{Single Particle Levels}) is set by requiring a simple representation of many-particle configurations with maximum quadrupole deformation. This objective is achieved if the states are ordered first by $\varepsilon $ (quadrupole moment) and then by $m_{l}$ (third component of the angular momentum). \paragraph{Action of SU(3) Generators on Single-Particle States:} To apply the $SU(3)$ generators to many-particle configurations, it suffices to know the action of these generators on the single-particle states. The eight generators of $SU(3)$ belong to the self-adjoint $(1,1)$ irrep of $SU(3)$. The operator structure should be chosen in the form most convenient for the application under consideration. For the present application, this choice is the same as that used for the basis states, namely, the $SU(3)\supset SU(2)\otimes U(1)$ reduction. The matrix elements of the $SU(3)$ generators can be obtained either by an application of the appropriate Wigner-Eckart theorem or by using explicit expressions \cite{Hecht} for the action of the operators on the basis states. For computational purposes, it is better to adopt the direct approach.
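The parameterization above is easy to enumerate in code. The sketch below lists the $(\varepsilon, n_{\rho}, m_{l})$ labels of an irrep and checks the count against the standard $SU(3)$ dimension formula $\dim(\lambda,\mu)=(\lambda +1)(\mu +1)(\lambda +\mu +2)/2$ (the dimension formula itself is a standard result, not quoted in the text):

```python
def su3_labels(lam, mu):
    """All (eps, n_rho, m_l) labels of the SU(3) irrep (lam, mu),
    one label per (p, q, m) triple of the canonical parameterization."""
    labels = []
    for p in range(lam + 1):
        for q in range(mu + 1):
            eps = 2 * lam + mu - 3 * (p + q)
            n_rho = mu + p - q
            for m in range(n_rho + 1):
                labels.append((eps, n_rho, n_rho - 2 * m))
    return labels

def su3_dim(lam, mu):
    """Standard SU(3) dimension formula."""
    return (lam + 1) * (mu + 1) * (lam + mu + 2) // 2

# sd-shell: the (2, 0) irrep has six single-particle levels
sd_levels = su3_labels(2, 0)
print(len(sd_levels))                           # 6
print(sorted(sd_levels, reverse=True))

# The label count reproduces the dimension for a generic irrep too
print(len(su3_labels(8, 4)) == su3_dim(8, 4))   # True
```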
We use the fact that the action of the operators is on a product of single-particle levels, each of which belongs to a symmetric $(\eta,0)$ irrep of $SU(3)$. This allows the matrix elements of the $SU(3)$ generators to be calculated using properties of $SU(2)$ only (Fig. \ref{SU(3) Matrix Elements}). \begin{figure}[tbp] \begin{center} \leavevmode \centerline {\includegraphics[width= \textwidth]{MatrixElements}} \end{center} \caption{Action of the $SU(3)$ generators on the single-particle levels. The diagram on the left shows that applying an $SU(3)$ generator to a single-particle state results in another single-particle state. The diagram on the right shows the action of the six non-diagonal generators of $SU(3)$. The vertical solid lines represent the action of the $SU(2)$ subgroup that enters the $SU(3)\supset SU(2)\otimes U(1)$ chain.} \label{SU(3) Matrix Elements} \end{figure} A key feature is the fact that the six non-diagonal generators of $SU(3)$ (recall that $L_{0}$ and $Q_{0}$ are diagonal) are raising or lowering generators of $SU(2)$ subgroups of $SU(3)$. The three $SU(2)$ subgroups and their respective actions are shown in Fig. \ref{SU(3) Matrix Elements}. States that are collinear with one of the sides of the triangular shape shown on the right in the figure form an irrep of the corresponding $SU(2)$ subgroup. \subsubsection{$\bullet$ Action of SU(3) Generators on Many-Particle States} \quad Having been supplied with the single-particle levels and the action of the $SU(3)$ generators on them, we can construct many-particle states and extend the action of the $SU(3)$ generators to these many-particle states as well. Since one goal of the oblique-basis project is to include essential $SU(3)$ basis states in an $m$-scheme type calculation, we briefly discuss the maximally deformed HWS for protons (neutrons).
These HWS belong to the leading proton (neutron) irreps and can be coupled easily to the leading and other proton-neutron irreps for a given nucleus. Once we have an $SU(3)$ HWS $\otimes SU_{S}(2)$ LWS state, we can easily generate all the states with good $M_{J}$ within this $SU(3)\otimes SU_{S}(2)$ irrep. \paragraph{Highest Weight States of SU(3)$\otimes$SU$_{S}$(2) for Leading Irreps:} So far we have constructed single-particle states and evaluated matrix elements of the generators of $SU(3)$ when they act on these states. The next step is to construct many-particle HWS of $SU(3)\otimes SU_{S}(2)$. In the chosen $SU(3)$ labelling scheme, there are seven extreme states which correspond to the vertex points of the three-dimensional diagram (Fig. \ref{SU(3) Step Operators}) of a general $(\lambda,\mu)$ irrep. We are particularly interested in the vertex that has the maximum value for the quadrupole moment of the system (Fig. \ref{SU(3) Step Operators}). Our HWS is the state with $\varepsilon =2\lambda +\mu $, $n_{\rho}=\mu $, and $m_{l}=\mu $. This HWS (maximum value of $m_{l}$ for maximum $\varepsilon $) can be easily constructed by ensuring that the action of the $SU(3)$ raising generators annihilates it. Indeed, for such a HWS, the values of $\lambda $ and $\mu $ can be determined from its $\varepsilon$ and $m_{l}$ labels. Selecting the leading $(\lambda,\mu)$ irrep (HWS with maximum overall value of $\varepsilon $) out of all possible irreps of an $N$ fermion system with total system spin $S$ is very simple within the chosen scheme. This is because the number of particles with spin up $n_{\uparrow}$ and spin down $n_{\downarrow}$ is uniquely determined by the solution of two linear equations: \[ N=n_{\uparrow}+n_{\downarrow},\quad 2S=n_{\uparrow}-n_{\downarrow}. \] The second of these two equations expresses the fact that we also require the state to be highest weight with respect to $SU_{S}(2)$.
Further, maximizing the value of $Q_{0}$ is achieved by filling the single-particle states of the $ (\eta,0)$ irrep (Fig. \ref{Single Particle Levels}) from bottom to top. The chosen scheme ensures that this simple procedure gives maximum values for $ \varepsilon $ and $m_{l}$. The $SU(3)$ irrep labels $(\lambda,\mu)$ are obtained by evaluating the quadrupole moment ($Q_{0}$) and angular momentum projection ($L_{0}$) which are additive quantum numbers. For example, in the $sd$-shell, there are six single-particle levels corresponding to the $(2,0)$ irrep of $SU(3)$. The HWS of the leading irrep for $N=6$ particles and total system spin $S=1$ is $(3,3)$ (Fig. \ref {N=6&S=1}), whereas for $S=0$ the leading $SU(3)$ irrep is $(6,0)$ (Fig. \ref{N=6&S=0}). These many-particle configurations are HWS with respect to $SU(3) $ and $SU_{S}(2)$. \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = 5in \centerline {\includegraphics[width= 5in]{N6S1Example}} \end{center} \caption{Highest weight state of the leading $(3,3)$ irrep for $N=6$ and $ S=1 $ in the $sd$-shell.} \label{N=6&S=1} \end{figure} \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = 5in \centerline {\includegraphics[width= 5in]{N6S0Example}} \end{center} \caption{Highest weight state of the leading $(6,0)$ irrep for $N=6$ and $ S=0 $ in the $sd$-shell.} \label{N=6&S=0} \end{figure} \paragraph{Generating SU(3) States with Step Operators:} Once we have the HWS of $SU(3)$ $\otimes $ $SU_{S}(2)$, we can generate any other state of the $SU(3)$ irrep $(\lambda,\mu)$ by applying step operators similar to those given by Hecht. This process is needed when we produce proton-neutron coupled $SU(3)$ irreps. By using the parameterization (\ref{pqm-parametriztion again}) \cite{Hecht}, we can identify the corresponding step operators, Fig. \ref{SU(3) Step Operators}. 
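The full procedure just described --- solving the two linear equations for $n_{\uparrow}$ and $n_{\downarrow}$, filling the $(\eta,0)$ single-particle levels from the bottom up, and reading off $(\lambda,\mu)$ from the additive labels via $\varepsilon =2\lambda +\mu $ and $m_{l}=\mu $ --- fits in a short sketch (names and data layout are ours):

```python
def leading_irrep(N, twice_S, eta):
    """Leading SU(3) irrep (lam, mu) for N identical fermions with total
    spin S (passed as twice_S = 2S) in the (eta, 0) oscillator shell,
    obtained by bottom-up filling of the single-particle levels."""
    # single-particle labels (eps, m_l) of the (eta, 0) irrep, ordered by
    # eps first and then by m_l, as in the single-particle level figure
    levels = []
    for p in range(eta + 1):
        eps, n_rho = 2 * eta - 3 * p, p
        levels += [(eps, n_rho - 2 * m) for m in range(n_rho + 1)]
    levels.sort(key=lambda lv: (-lv[0], -lv[1]))
    n_up = (N + twice_S) // 2        # from N = n_up + n_down
    n_down = (N - twice_S) // 2      # and 2S = n_up - n_down
    occupied = levels[:n_up] + levels[:n_down]   # Pauli acts per spin species
    eps_tot = sum(e for e, _ in occupied)        # eps and m_l are additive
    ml_tot = sum(ml for _, ml in occupied)
    mu = ml_tot                      # HWS: m_l = mu
    lam = (eps_tot - mu) // 2        # HWS: eps = 2*lam + mu
    return lam, mu
```

For the $sd$-shell ($\eta=2$) this reproduces the examples in the figures: $N=6$, $S=1$ gives $(3,3)$ and $N=6$, $S=0$ gives $(6,0)$.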
It is important to note that applying $p$-move or $q$-move step operators to states on the top surface yields other states (or zero) on that same surface. Since the states on the top surface are HWS with respect to $SU(2)$ in the $SU(3)$ $\supset $ $SU(2)\otimes U(1)$ reduction, the $m$-move step operator is an $SU(2)$ lowering operator which changes the third component of the angular momentum ($m_{l}$). The $p$-move and $q$-move step operators can be obtained by imposing the restriction that they generate only transformations within the $SU(2)$ HWS space. From an algebraic perspective, the $p$-move and the $m$-move operators are linear in $SU(3)$ generators while the $q$-move operator is quadratic. Nevertheless, the state generation process can be written in such a way that the $q$-move operator effectively reduces to a linear action. These step operators can also be obtained by a projection operator technique \cite{Tolstoy}. \paragraph{Generating Proton-Neutron SU(3) Highest Weight States:} In brief, the generation of the proton-neutron $SU(3)$ HWS is just a matter of $SU(3)$ and $SU_{S}(2)$ couplings. However, the actual algorithm runs in the reverse order from the way one would naturally describe the process: given the numbers of protons and neutrons together with their harmonic oscillator shells, the algorithm loops over all possible total proton-neutron spins ($S_{pn}$). For each total spin ($S_{pn}$), loops are made over the possible proton spins ($S_{p}$) and neutron spins ($S_{n}$) that can couple to $S_{pn}$. In this way the proton HWS can be constructed, as described in the previous sections, by using the proton number ($N_{p}$) and spin ($S_{p}$) to get the $(SU(3)\otimes SU_{S}(2))_{p}$ HWS. The same is done for the neutron HWS.
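At the level of label bookkeeping the three moves are simple. The following sketch (ours) tracks only how $(\varepsilon,n_{\rho},m_{l})$ change under each move, with the $p$-move and $q$-move entries valid on the top ($SU(2)$ HWS, $m=0$) surface where $m_{l}=n_{\rho}$; the operator matrix elements themselves follow Hecht and are not reproduced:

```python
def step_label(label, move):
    """Label bookkeeping for the step operators, read off from the
    (p, q, m) parametrization: p -> p+1 and q -> q+1 each lower eps by 3,
    while m -> m+1 is the SU(2) lowering that drops m_l by 2."""
    eps, n_rho, m_l = label
    if move == "p":   # p-move on the top surface (m = 0, so m_l = n_rho)
        return (eps - 3, n_rho + 1, m_l + 1)
    if move == "q":   # q-move on the top surface
        return (eps - 3, n_rho - 1, m_l - 1)
    if move == "m":   # m-move: SU(2) lowering within a fixed (eps, n_rho)
        return (eps, n_rho, m_l - 2)
    raise ValueError(move)
```

Starting from the HWS of $(3,3)$, with labels $(\varepsilon,n_{\rho},m_{l})=(9,3,3)$, the three moves give $(6,4,4)$, $(6,2,2)$, and $(9,3,1)$, respectively.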
This way all the major labels, $\left| N(\lambda,\mu)\varepsilon,n_{\rho},m_{l}\right\rangle _{p}$ $\otimes \left| SM_{S}\right\rangle _{p}$ for protons and $\left| N(\lambda,\mu)\varepsilon,n_{\rho},m_{l}\right\rangle _{n}$ $\otimes \left| SM_{S}\right\rangle _{n}$ for neutrons, have been determined and one can proceed with the details of the coupling to $\left| N(\lambda,\mu)\varepsilon,n_{\rho},m_{l}\right\rangle _{pn}$ $\otimes \left| SM_{S}\right\rangle _{pn}$. \[ (SU(3)\otimes SU_{S}(2))_{p}\otimes (SU(3)\otimes SU_{S}(2))_{n}\rightarrow (SU(3)\otimes SU_{S}(2))_{pn}. \] One final detail on the generation process is that the proton HWS state is actually an $SU(3)$ HWS but a spin LWS, while for the neutrons it is an $SU(3)$ and a spin HWS. They are then coupled to a proton-neutron $SU(3)$ HWS with a spin LWS structure. The reason behind this is that in the $SU(2)$ type coupling one does a loop such that $m_{p}+m_{n}=m_{pn}.$ Hence, if $m_{p}$ is the minimal $m$-value of the proton spin such that $m_{p}+m_{n}=m_{pn}$ for a fixed $m_{pn}$, then $m_{n}$ must be the maximal $m$-value of the neutron spin. Therefore, the loop which satisfies $m_{p}+m_{n}=m_{pn}$ increases $m_{p}$ while simultaneously decreasing $m_{n}$ so that $m_{pn}$ stays fixed. This is also the reason why the total proton-neutron state is a LWS ($M_{S}=-S_{pn}$), so that the coupling to the good $M_{J}=M_{l}+M_{S}$ is done in the same way since $M_{l}$ is at maximum in the proton-neutron $SU(3)$ HWS. \paragraph{Generating States of Good M$_{J}$:} Since the action of $SU(3)$ commutes with that of the spin group ($SU_{S}(2)$), it is not difficult to achieve the final goal of states with good third component of the total angular momentum ($M_{J}$). Recall that we have just generated proton-neutron extreme weight states ($SU(3)^{HWS}$ $\otimes $ $SU_{S}(2)_{LWS}$). We also introduced the procedure to generate other states of an $SU(3)$ irrep by applying step operators.
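The coupling loop just described, with $m_{p}$ increasing while $m_{n}$ decreases at fixed $m_{pn}$, can be sketched as follows (arguments are twice the spin projections so that half-integer spins stay exact; names are ours):

```python
def coupled_m_pairs(two_s_p, two_s_n, two_m_pn):
    """All pairs (2*m_p, 2*m_n) with m_p + m_n = m_pn: m_p runs upward
    from its minimal value while m_n correspondingly runs downward,
    keeping the total projection m_pn fixed."""
    pairs = []
    for two_m_p in range(-two_s_p, two_s_p + 1, 2):
        two_m_n = two_m_pn - two_m_p
        if -two_s_n <= two_m_n <= two_s_n:
            pairs.append((two_m_p, two_m_n))
    return pairs
```

For two spin-1/2 systems coupled to $m_{pn}=0$, the loop visits $(m_{p},m_{n})=(-\tfrac12,+\tfrac12)$ and then $(+\tfrac12,-\tfrac12)$, exactly the rising/falling pattern described above.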
Each of these states remains a LWS of $SU_{S}(2)$. Hence, after each move on the top surface (see Fig. \ref{SU(3) Step Operators}), using $p$-move or $q$-move step operators, we can apply spin raising and angular-momentum lowering operators to bring the corresponding state to the desired $M_{J}=M_{l}+M_{S}$. This way, we can generate all proton-neutron states with labels: $N$, $S$, $(\lambda,\mu)$, $\varepsilon $, $n_{\rho}$, $M_{l}$, and $M_{J}$. \chapter{$^{24}$Mg Mixed-Symmetry Calculations} \quad The success and applicability of the oblique-basis approach to $^{24}$Mg, which will be demonstrated in the following sections, can be related to the fact that the spherical shell-model states are eigenstates of the one-body Hamiltonian $(\sum \varepsilon _{i}a_{i}^{+}a_{i})$ while the two-body part of the Hamiltonian ($\sum_{i,j}V_{kl,ij}a_{i}^{+}a_{j}^{+}a_{k}a_{l}$) is strongly correlated with the quadrupole-quadrupole interaction ($Q\cdot Q$) which is diagonal in the SU(3) basis \cite{QQ in sd-shell}. By combining spherical shell-model states and SU(3) states, one accommodates, from the outset, the dominant modes of the system. In this chapter we discuss the oblique-basis technique as applied to $^{24}$Mg \cite{VGG 24MgObliqueCalculations}. This is a strongly deformed nucleus with well-known collective properties and is one of the best manifestations of Elliott's SU(3) symmetry \cite{Elliott's SU(3) model}. In terms of dimensionality of the model space, adding a few leading SU(3) irreps to a highly truncated spherical shell-model basis results in significant gains in the convergence of the low-energy spectra towards the full space result. In particular, the addition of leading SU(3) irreps yields the right placement of the K=2 band and the correct order for most of the low-lying levels. Indeed, an even more detailed analysis shows that the structure of the low-lying states is significantly improved through the addition of a few SU(3) irreps.
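The reason $Q\cdot Q$ is diagonal in the SU(3) basis is the Elliott relation $Q\cdot Q=4C_{2}-3L^{2}$ (in one common normalization), where the second-order Casimir invariant $C_{2}$ depends only on $(\lambda,\mu)$; a one-line sketch:

```python
def su3_casimir2(lam, mu):
    """Eigenvalue of the second-order SU(3) Casimir operator,
        <C2> = lam^2 + mu^2 + lam*mu + 3*(lam + mu),
    in the normalization for which Q.Q = 4*C2 - 3*L(L+1) (Elliott).
    Since <C2> depends only on (lam, mu), Q.Q is diagonal in any basis
    of good (lam, mu) and L."""
    return lam**2 + mu**2 + lam * mu + 3 * (lam + mu)
```

Thus, within a fixed irrep such as the leading $(8,4)$ of $^{24}$Mg, $Q\cdot Q$ contributes a constant shifted only by the rotational $L(L+1)$ term.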
The Hamiltonian used in our analysis is the Wildenthal interaction \cite{Wildenthal}. In the following sections we summarize some of the important features of the spherical, SU(3), and mixed-symmetry type shell-model calculations. First we discuss the dimensions of each model space. Then we consider the ground-state energy as a function of the model space used. Next we focus on the structure of the low-energy spectrum, and finally we discuss the structure of the states as compared to the exact $sd$-shell results. \section{Structure of the Model Spaces} \quad One important question in any computational study concerns the dimensions of the matrices involved, as well as the structure of the model space used. In this section, we address this question by briefly summarizing the space structure and dimensions for the spherical, SU(3), and oblique shell-model calculations. \subsection{Model Space Dimensions} \quad Our model space for $^{24}$Mg consists of 4 valence protons and 4 valence neutrons in the $0\hbar \omega $ $sd$-shell. The $m$-scheme dimensionality ($M_{J}=0$) of this space is 28503. This space is denoted as FULL in the figures that follow. To test the effects of truncations, calculations were also carried out permitting $n$ particles to be excited out of the lowest $d_{5/2}$ orbit, i.e. $d_{5/2}^{8-n}(d_{3/2}s_{1/2})^{n}$, and are denoted as SM(n). The SM(2) approximation is of particular interest since it allows one to take into account the effect of pairing correlations (one pair maximum) in the `secondary levels' ($s_{1/2}$ and $d_{3/2}$ for $^{24}$Mg) with a minimum expansion of the model space. The SU(3) part of the basis includes two scenarios: one with only the leading representation of SU(3), which for $^{24}$Mg is the (8,4) irrep, with dimensionality 23 for the $M_{J}=0$ space and denoted in what follows by (8,4); and another with the (8,4) irrep plus the next most important representation of SU(3), namely the (9,2).
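The $m$-scheme dimensionalities quoted in this chapter can be verified by direct counting over Slater determinants; a minimal sketch (the single-particle lists and names are ours):

```python
from itertools import combinations
from collections import Counter

# twice the m_j values of the valence single-particle states, per species
SD_SHELL = [5, 3, 1, -1, -3, -5] + [1, -1] + [3, 1, -1, -3]   # d5/2, s1/2, d3/2
PF_SHELL = ([7, 5, 3, 1, -1, -3, -5, -7] + [5, 3, 1, -1, -3, -5]
            + [3, 1, -1, -3] + [1, -1])                       # f7/2, f5/2, p3/2, p1/2

def mscheme_dim(shell, n_p, n_n, twice_mj=0):
    """Number of proton x neutron m-scheme Slater determinants with fixed
    total 2*M_J, from the distributions of proton and neutron M values."""
    def dist(n):
        return Counter(sum(shell[i] for i in c)
                       for c in combinations(range(len(shell)), n))
    dp, dn = dist(n_p), dist(n_n)
    return sum(cnt * dn.get(twice_mj - m, 0) for m, cnt in dp.items())
```

`mscheme_dim(SD_SHELL, 4, 4)` reproduces the 28503 quoted above; the same count over the $pf$-shell reproduces the $^{44}$Ti dimensionalities discussed in the next chapter.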
The (9,2) irrep occurs three times, once with $S=0$ ($M_{J}=0$ dimensionality 15) and twice with $S=1$ ($M_{J}=0$ dimensionality $2\times 45=90$). All three (9,2) irreps have total $M_{J}=0$ dimensionality of 15+90=105. The (8,4)\&(9,2) case has total $M_{J}=0$ dimensionality of 23+105=128 and is denoted by (8,4)\&(9,2). In Table \ref{Table1} we summarize the dimensionalities involved. \begin{table}[tbp] \caption{Labels and $M_{J}=0$ dimensions for various $^{24}$Mg calculations. The leading SU(3) irrep is denoted by (8,4) while (8,4)\&(9,2) implies that (9,2) irreps have also been included. The SM(n) spaces correspond to spherical shell-model partitions with $n$ valence particles excited out of the $d_{5/2}$ shell into the $s_{1/2}$ and $d_{3/2}$ levels.} \label{Table1} \vskip 0.25cm \begin{center} \begin{tabular}{rrrrrrrr} \hline Model space & $(8,4)$ & $(8,4)\&(9,2)$ & $SM(0)$ & $SM(1)$ & $SM(2)$ & $ SM(4) $ & $FULL$ \\ \hline space dimension & $23$ & $128$ & $29$ & $449$ & $2829$ & $18290$ & $28503$ \\ \% of the full space & $0.08$ & $0.45$ & $0.10$ & $1.57$ & $9.92$ & $64.17$ & $100$ \end{tabular} \end{center} \vskip 0.25cm \end{table} \subsection{Visualizing the Oblique Basis for $^{24}$Mg} \quad After obtaining an idea of the space dimensions involved, we now try to visualize the oblique basis. The method described in the previous chapter can be used to visualize the structure of the oblique basis. First, consider the SM(2) space enhanced by the SU(3) irreps (8,4)\&(9,2). Since the SM(2) and (8,4)\&(9,2) spaces are both relatively small (see Table \ref{Table1}), we expect the basis vectors of these spaces to be nearly orthogonal. This orthogonality is clearly seen from inset (a) in Fig. \ref{SU3+ Relative To SM(2) and SM(4)}. Inset (b) in Fig. \ref{SU3+ Relative To SM(2) and SM(4)} shows a loss of orthogonality between the SM(4) and the (8,4)\&(9,2) basis vectors. This is due to the fact that SM(4) space is about 64\% of the full $sd$-space. 
Therefore, there is a relatively high probability that some linear combinations of SU(3) basis vectors lie in the SM(4) space. Indeed, it can be shown that there are five vectors from (8,4)\&(9,2) that lie within the SM(4) space. Such redundant vectors are identified and excluded from the calculation within the Cholesky algorithm when it is applied to the overlap matrix ($\Theta \rightarrow UU^{T}$). \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{SU3SM2SM4}} \end{center} \caption{Orthogonality of the basis vectors in the oblique geometry. The SU(3) space consists of (8,4)\&(9,2) basis vectors with the shell-model spaces (SM(n) with n=2 and 4) indicated by a horizontal line. (a) SM(2) and the natural SU(3) basis vectors and (b) SM(4) and the natural SU(3) basis vectors. In the latter case (b), there are five SU(3) vectors that lie in the SM(4) space.} \label{SU3+ Relative To SM(2) and SM(4)} \end{figure} \section{Spectral Characteristics} \quad Reproducing the correct energy spectra of a nucleus is one of the goals of any nuclear-structure study. Since we are trying to develop a new concept for nuclear structure studies, the mixed-symmetry approach, we are currently comparing our results only with full shell-model calculations. Therefore, a computation-to-computation comparison is our reality check. In the next sub-sections we compare the ground-state energy and energy spectrum for $^{24}$Mg as calculated with the Wildenthal interaction \cite{Wildenthal} using spherical, SU(3), and mixed-symmetry shell-model bases. \subsection{Ground-State Energy} \quad We now turn to the consideration of the main results of the oblique-basis calculation, starting with ground-state convergence issues. The results shown in Fig. 
\ref{Mg24DimConv} illustrate that the oblique-basis calculation gives good dimensional convergence in the sense that the calculated ground-state energy for the SM(2)+(8,4)\&(9,2) calculation is 3.3 MeV below the calculated energy for the SM(2) space alone. Adding the SU(3) irreps only increases the size of the space from 9.9\% to 10.4\% of the full space. Compare this 0.5\% increase in the size of the space with the huge (54\%) increase in going from SM(2) to an SM(4) calculation. For the latter, the ground-state energy is 4.2 MeV lower than the SM(2) result, somewhat better than for the SM(2)+(8,4)\&(9,2) calculation but in 64.2\% rather than 10.4\% of the full model space. \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{Mg24DimConv}} \end{center} \caption{Calculated ground-state energy for $^{24}$Mg. Ground-state energy as a function of the various model spaces. Note the dramatic increase in binding (3.3 MeV) in going from SM(2) to SM(2)+two SU(3) irreps, (8,4)\&(9,2), (a 0.5\% increase in the dimensionality of the model space). Enlarging the space from SM(2) to SM(4) (a 54\% increase in the dimensionality of the model space) adds 4.2 MeV in binding energy.} \label{Mg24DimConv} \end{figure} The exponential fall-off of the ground-state energy in Fig. \ref{Mg24DimConv} is striking. It has been observed many times, and has recently been suggested as the basis of an extrapolation procedure for obtaining the ground-state energy \cite{Zelevinsky and Volya}. An even more rigorous extrapolation procedure has been suggested by Mizusaki and Imada \cite{Mizusaki and Imada}. Within this procedure, one can also estimate the error of a given calculation. Their procedure is applicable to other observables as well. \subsection{Energy and Angular Momentum of the Low Lying States} \quad Fig. \ref{LevelStructure} and Fig.
\ref{RightLevelStructure} show that the oblique-basis calculation positions the K=2 band head correctly. Furthermore, most of the other low-energy levels are also positioned correctly. The results for pure spherical and pure SU(3) calculations are shown in Fig. \ref{LevelStructure}. As can be seen from the results in Fig. \ref{LevelStructure}, an SM(4) calculation (64\% of the full model space) is needed to get the ordering of the lowest angular momentum states correct. Also, notice that in this case the third and fourth energy levels are practically degenerate. On the other hand, it only takes 0.5\% of the full space to achieve comparable success with SU(3). In particular, Fig. \ref{LevelStructure} shows that an SU(3) calculation using only the (8,4) and (9,2) irreps gives the right ordering of the lowest levels. Note that the first few low-energy levels for SM(2) are close in energy to the corresponding low-energy levels for the (8,4)\&(9,2) result. Since these two spaces are nearly orthogonal (see Fig. \ref{SU3+ Relative To SM(2) and SM(4)}), these two sets of levels mix strongly in an oblique calculation and yield excellent results. The comparable ground-state energies of the SM(2) and (8,4)\&(9,2) configurations can also be seen in Fig. \ref{Mg24DimConv}. \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{LevelStructure}} \end{center} \caption{Structure of the energy levels for $^{24}$Mg for different calculations. Pure $m$-scheme spherical basis calculations are on the left-hand side of the graph; pure SU(3) basis calculations are on the right-hand side; the spectrum from the FULL space calculation is in the center.} \label{LevelStructure} \end{figure} \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{RightLevelStructure}} \end{center} \caption{Energy levels for $^{24}$Mg as calculated for different oblique bases.
The SM(4) basis calculation is included for comparison.} \label{RightLevelStructure} \end{figure} Compare the spectra shown in Fig. \ref{LevelStructure} with the results from the oblique-basis calculations shown in Fig. \ref{RightLevelStructure}. From this comparison one can see that the correct level structure can be achieved by using 1.6\% (SM(1)+(8,4)) of the full $sd$-space. However, one should also notice that for the SM(0)+(8,4) space, which is only 0.2\% of the full space and the minimum oblique-basis calculation, the results are quite close to the correct level structure. Despite the fact that the ground-state energy of the oblique-basis calculation is higher than the ground-state energy for the SM(4)-type calculation, the oblique calculations are favorable in terms of dimensionality considerations and correctness of the level structure. \section{Overlaps with the Full $sd$-shell Calculation} \quad Figs. \ref{UsualCalculationOverlaps}--\ref{SelectedOverlaps} focus on the actual structure of the states by showing overlaps of eigenstates calculated in the SM(n), SU(3), and oblique bases with the corresponding states of the full space calculation. Specifically, in Fig. \ref{UsualCalculationOverlaps}, overlaps of states for pure SM(n) and pure SU(3)-type calculations are given. Note that the SM(4) states have big overlap (90\%) for the first few eigenstates. This should not be too surprising since SM(4) covers 64\% of the full space. \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{UsualCalcOverlaps}} \end{center} \caption{Overlaps of the pure spherical and SU(3) with the FULL states. The first four bars represent the SM(0), SM(1), SM(2), and SM(4) calculations, the next two bars represent SU(3) calculations, etc.} \label{UsualCalculationOverlaps} \end{figure} The results in Fig. 
\ref{UsualCalculationOverlaps} show that in general SU(3)-based calculations give much better results than the low-dimensional SM(n)-type calculations. The SM(n)-based calculations have irregular overlaps across the low-lying states and require SM(4), which is 64\% of the full space, to get relatively well-behaved overlaps. This can be seen most clearly from the inset labeled SM in Fig. \ref{MixedOverlaps}. Note that the SM(0) contributions to the third, fifth, and sixth states are very low while SM(1) and SM(2) have varying contributions. The structure of the SU(3)-type states leads to a stable picture for the oblique calculations as shown in the insets SM(n)+(8,4) and SM(n)+(8,4)\&(9,2) in Fig. \ref{MixedOverlaps}. \begin{figure}[tbp] \begin{center} \leavevmode \centerline {\includegraphics[width= 5in ]{MixedOverlaps}} \end{center} \caption{Overlaps of the oblique-basis states with the exact states (set I). Inset SM contains the overlaps for the pure spherical shell-model basis states only. Inset SM+(8,4) contains the overlaps of the SM basis enhanced by the leading SU(3) irrep (8,4). Inset SM+(8,4)\&(9,2) has the (9,2) irreps included as well.} \label{MixedOverlaps} \end{figure} In Fig. \ref{MixedCalculationOverlaps}, the improvement in the structure of the calculated states is followed as the SU(3) states are added to the SM(n) basis. From this graph, one can see that the improvement to the SM(0)- and SM(1)-type calculation is due mainly to the goodness of SU(3) itself. The improvement obtained in the oblique calculation is due to the SU(3) enhancement of the SM(2) space. From this graph, one can also conclude that there is only a small gain in going to the SM(4)-based oblique calculation. However, this improvement cannot be achieved by any other means with such a small increase in the model space. This is clear from a careful examination of Fig.
\ref{Mg24DimConv} where one can see that the SM(5) result, which has 25142 basis vectors (88\% of the full $sd$-space), gives the same ground-state energy as the SM(4)+(8,4)\&(9,2) result (64.6\% of the full $sd$-space). \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{MixedCalcOverlaps}} \end{center} \caption{Overlaps of the oblique-basis states with the exact states (set II). Each inset represents a particular SM(n)-type calculation, showing how the overlaps change along the corresponding oblique-basis calculation.} \label{MixedCalculationOverlaps} \end{figure} Finally, to compare the three schemes -- SU(3), SM(n), and the various oblique-basis combinations -- representative overlaps are shown in Fig. \ref{SelectedOverlaps}. From these results, it is very clear that SU(3)-type basis states yield the right structure in a very low order. In particular, in Fig. \ref{SelectedOverlaps}, it can be seen that a 90\% overlap with the exact eigenvectors can be achieved by using only 10\% of the total space, SM(2)+(8,4)\&(9,2). Furthermore, Fig. \ref{SelectedOverlaps} also shows that SU(3) enhances the SM(4) results yielding eigenstates with overlaps that are very close ($\approx$ 98\%) to the exact results. \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{SelectedOverlaps}} \end{center} \caption{Representative overlaps of pure SM(n), pure SU(3), and oblique-basis results with the exact full $sd$-shell eigenstates. A number within a bar denotes the state with the overlap shown by the bar if it is different from the number for the exact full-space calculation shown on the abscissa. 
For example, for SM(2) the third eigenvector overlaps the most with the fourth exact eigenstate, not the third, while the fifth SM(2) eigenvector has the overlap shown with the third exact eigenstate.} \label{SelectedOverlaps} \end{figure} \chapter{Study of Lower $pf$-shell Nuclei} \quad For $^{24}$Mg, the single-particle excitations, described by the spherical shell model, and the collective excitations, described by the $SU(3)$ shell model, are of comparable importance. In the previous chapter, we showed the relevance of the oblique-basis calculation for $^{24}$Mg. It is, therefore, natural to seek other nuclear systems to apply the mixed-symmetry method to. The even-even nuclei in the $sd$-shell are one place to start. For the $sd$-shell nuclei, however, one can perform full $sd$-shell calculations with modern computer codes. The $pf$-shell nuclei are another option. For these nuclei, full $pf$-shell calculations have just recently been achieved \cite{Ur et al, Caurier -full pf shell}. However, adding only the leading and next-to-leading irreps, as was done for $^{24}$Mg, is not sufficient for the lower $pf$-shell nuclei, Ti and Cr, to obtain results as good as those for $^{24}$Mg. This is because the spherical shell model provides a significant part of the low-energy wave functions of these nuclei within a few spherical shell-model configurations, while in the $SU(3)$ shell-model basis one needs more than a few $SU(3)$ irreps. This is due mainly to the strong breaking of $SU(3)$ in the lower $pf$-shell induced by the spin-orbit interaction \cite{VGG SU(3)andLSinPF-ShellNuclei}. When the spin-orbit splitting is removed, the importance of the $SU(3)$ basis is restored. Although the usual $SU(3)$ structure of the states is lost, there is an adiabatic $SU(3)$ mixing which gives rise to the coherent structure of the yrast states. We have already seen this coherent mixing phenomenon in our toy model.
In nuclei, however, this coherent mixing can be interpreted as an illustration of the intrinsic state idea where all the states within a given band can be projected out from a particular intrinsic state \cite{Elliott's SU(3) model}. Moreover, in nuclei this coherent structure is assumed to be a result of an adiabatic $SU(3)$ mixing, which means that some observables stay close to the $SU(3)$ limit, that is, as if there were a pure $SU(3)$ symmetry. Specifically, the $B(E2)$ values remain strongly enhanced with values close to the $SU(3)$ symmetry limit. It is important to point out that there is a coherent mixing of the spherical shell-model states as well. In this chapter, we will discuss our study of the even-even lower $pf$-shell nuclei $^{44-48}$Ti and $^{48}$Cr. First, we show what a few $SU(3)$ irreps can do for us within a mixed-symmetry calculation for the $^{44}$Ti nucleus. Then, we focus on the spin-orbit interaction that strongly breaks the $SU(3)$ symmetry. We conclude this chapter by discussing the coherent structure of the states and the adiabatic $SU(3)$ mixing which produces enhanced $B(E2)$ values. \section{$^{44}$Ti Oblique-Basis Results} \quad The simplest even-even nucleus in the $pf$-shell, from a computational point of view, is $^{44}$Ti. In this section we discuss our oblique-basis calculations for $^{44}$Ti. If one compares the spherical shell-model results with the $SU(3)$ shell-model results within the framework of a realistic interaction, such as the KB3 interaction \cite{KB3 interaction}, then $SU(3)$ seems to be badly broken. Specifically, the ground-state energy and wave function are poorly reproduced. This seems to be a common trend in the even-even $sd$-shell nuclei as well \cite{PHF and SU(3)}. Even the ground-state energies within the oblique-basis calculations do not look promising. However, on closer examination one finds that the oblique-basis idea still works.
The results may not be as good as in $^{24}$Mg, but there are some close analogies. For example, the SM(1) space in $^{44}$Ti seems to be what SM(2) is for $^{24}$Mg, while the SM(2) space in $^{44}$Ti seems to be what SM(4) is for $^{24}$Mg. By that we mean that the $SU(3)$ enhanced SM(1) basis in $^{44}$Ti gives overlaps that are comparable to the overlaps of the pure SM(2) calculation. In the next few sub-sections we briefly illustrate these findings. \subsection{Model Space Dimensions} \quad The model space for $^{44}$Ti consists of 2 valence protons and 2 valence neutrons in the $pf$-shell. We use the same notation for the $m$-scheme spherical bases as in Table \ref{Table1}. The SU(3) part of the basis includes the leading irrep of SU(3), which for $^{44}$Ti is (12,0) with $M_{J}=0$ dimensionality 7, and the next-to-leading irrep, namely the (10,1). The (10,1) irrep occurs three times, once with $S=0$ ($M_{J}=0$ dimensionality 11) and twice with $S=1$ ($M_{J}=0$ dimensionality $2\times 33=66$). All three (10,1) irreps have total $M_{J}=0$ dimensionality of 11+66=77. The (12,0)\&(10,1) case has total $M_{J}=0$ dimensionality of 7+77=84 and is denoted by (12,0)\&(10,1). In Table \ref{TableTi44} we summarize the dimensionalities involved. \begin{table}[tbp] \caption{Labels and $M_{J}=0$ dimensions for various $^{44}$Ti oblique calculations.} \label{TableTi44}\vskip 0.25cm \begin{center} \begin{tabular}{rrrrrrrr} \hline Model space & $(12,0)$ & $(12,0)\&(10,1)$ & $SM(0)$ & $SM(1)$ & $SM(2)$ & $SM(3)$ & $FULL$ \\ \hline space dimension & $7$ & $84$ & $72$ & $580$ & $1908$ & $3360$ & $4000$ \\ \% of the full space & $0.18$ & $2.1$ & $1.8$ & $14.5$ & $47.7$ & $84$ & $100$ \end{tabular} \end{center} \vskip 0.25cm \end{table} As in the case of $^{24}$Mg, there are linearly dependent vectors within some of the $^{44}$Ti calculations.
For example, there is one such vector in the SM(2)+(12,0) space, two in the SM(3)+(12,0), two in the SM(1)+(12,0)\&(10,1), twelve in the SM(2)+(12,0)\&(10,1), and thirty-three in the SM(3)+(12,0)\&(10,1). Each linearly dependent vector is handled as discussed in the chapter devoted to $^{24}$Mg. \subsection{Ground-State Energy} \quad Fig. \ref{Ti44DimConv} shows that the oblique-basis calculation of the ground-state energy for $^{44}$Ti does not give results as good as for $^{24}$Mg. The calculated ground-state energy for the SM(1)+(12,0)\&(10,1) calculation is 0.85 MeV below the calculated energy for the SM(1) space alone. Adding the two SU(3) irreps to the SM(1) space increases the size of the space from 14.5\% to 16.6\% of the full space. This is a 2.1\% increase, while going from the SM(1) to SM(2) involves an increase of 33.2\% in the model space. For the latter, the ground-state energy is 2.2 MeV lower than the SM(1) result. \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{Ti44DimConv}} \end{center} \caption{Ground-state energy for $^{44}$Ti as a function of the various model spaces. The SU(3) irreps used are (12,0) and (10,1).} \label{Ti44DimConv} \end{figure} \subsection{Energy Spectrum of the Low Lying States} \quad We have seen that the position of the K=2 band head for $^{24}$Mg is correct for the SU(3)-type calculations but not for the low-dimensional SM(n) calculations. In $^{44}$Ti this seems to be the opposite: the SM(n)-type calculations reproduce the position of the K=2 band head while the SU(3) calculations do not, as shown in the upper graph in Fig. \ref{Ti44LevelStructure}. Furthermore, most of the low-energy levels are much higher for the pure SU(3) limit than for the pure SM(n) case. Thus, one may expect that these two sets of levels (the SM(n) and the SU(3)) may not mix as strongly in an oblique calculation as for the $^{24}$Mg case.
Surprisingly, the oblique-basis calculations seem to produce a good spectral structure, as shown in the lower graph in Fig. \ref{Ti44LevelStructure}. Notice that the SM(2)+(12,0)\&(10,1) spectrum is very good and compatible with the SM(3) spectrum; that is, a space covering about 50\% of the full space does as well as one covering 84\%. \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = 5in \centerline {\includegraphics[width= 5in]{Ti44LevelStructure}} \epsfxsize = 5in \centerline {\includegraphics[width= 5in]{Ti44SMSU3Spectrum}} \end{center} \caption[Structure of the energy levels for $^{44}$Ti for different calculations.]{Structure of the energy levels for $^{44}$Ti for different calculations. Pure $m$-scheme spherical-basis calculations are on the left-hand side of the upper graph; pure SU(3)-basis calculations are on the right-hand side; the spectrum from the FULL space calculation is in the center. The spectra from oblique-basis calculations are in the lower graph.} \label{Ti44LevelStructure} \end{figure} \subsection{Overlaps with the Exact Calculation} \quad The top graph in Fig. \ref{Ti44Overlaps} shows overlaps of states for pure SM(n) and pure SU(3)-type calculations, while the lower part shows some selected overlaps from the oblique calculations. Notice that the overlaps of the pure SU(3)-type calculations are very small, often less than 40\%, while the SM(n) results are far better, with the SM(2)-type calculations having about 80\% overlap with the exact states. Note that the SM(3) states have large overlaps ($>$97\%) for the first few eigenstates. This is not surprising since SM(3) covers 84\% of the full space. What is surprising is that the SM(2)+(12,0)\&(10,1)-type calculation is as good as the SM(3). Moreover, the SM(1)+(12,0)\&(10,1) overlaps are often bigger than the SM(2) overlaps.
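The dimension counting behind these comparisons can be checked directly. A small sketch (the numbers are the $M_{J}=0$ dimensions from Table \ref{TableTi44} together with the counts of linearly dependent vectors quoted earlier):

```python
# Bookkeeping for the ^44Ti oblique model spaces (M_J = 0 dimensions
# from Table "TableTi44"; the full pf-shell space has 4000 such states).
full = 4000
sm1, sm2, sm4 = 580, 1908, 3360
su3_pair = 84                      # the (12,0)&(10,1) block

# Percentages quoted in the table:
assert 100.0 * sm1 / full == 14.5
assert 100.0 * sm2 / full == 47.7
assert 100.0 * sm4 / full == 84.0
assert 100.0 * su3_pair / full == 2.1

# Oblique space SM(1)+(12,0)&(10,1): the two linearly dependent vectors
# noted in the text must be removed, so its dimension is 580+84-2 = 662,
# i.e. 16.55% of the full space (quoted as 16.6%).
assert 100.0 * (sm1 + su3_pair - 2) / full == 16.55

# Enlarging SM(1) to SM(2) instead adds 33.2% of the full space.
assert 100.0 * (sm2 - sm1) / full == 33.2
```

All the quoted percentages are thus mutually consistent with the tabulated dimensions.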
\begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = 5in \centerline {\includegraphics[width= 5in]{Ti44UsualCalcOverlaps}} \epsfxsize = 5in \centerline {\includegraphics[width= 5in]{Ti44SelectedOverlaps}} \end{center} \caption{$^{44}$Ti wave function overlaps of pure spherical, SU(3), and oblique states with the FULL states. The first four bars in the upper graph represent the SM(0), SM(1), SM(2), and SM(3) calculations, the next two bars represent the SU(3) calculations, etc. Representative overlaps of pure SM(n), pure SU(3), and oblique-basis results with the exact full $pf$-shell eigenstates are shown in the lower graph.} \label{Ti44Overlaps} \end{figure} \section{Set Up for the Study of the $SU(3)$ Breaking} \quad To better understand the results of the mixed-mode calculations described in the previous section, we need to recall that the oblique-basis method is expected to work well when we are dealing with two or more competing and compatible modes. Therefore, if the Hamiltonian of the system is dominated by its one-body term, then the effect of the two-body part will be suppressed. However, if the single-particle energies are degenerate, the importance of $SU(3)$ should reappear. In the next subsections, we discuss the structure of the Hamiltonian, as well as some of the computational methods used in our calculations. \subsection{Interaction Hamiltonian} \quad To retain clarity of the discussion, we recall the structure and notation of the one- plus two-body Hamiltonian: \[ H=\sum_{i}\varepsilon _{i}a_{i}^{+}a_{i}+ \frac{1}{4} \sum_{i,j,k,l}V_{kl,ij}a_{i}^{+}a_{j}^{+}a_{k}a_{l}. \] The summation indices range over the single-particle levels included in the model space. We only consider levels of the $pf$-shell, which have the following radial $(n)$, orbital $(l)$, and total angular momentum $(j)$ quantum numbers: $nl_{j}=\left\{ 0f_{7/2},0f_{5/2},1p_{3/2},1p_{1/2}\right\} $.
In what follows, the radial quantum number $(n)$ is dropped since the $l_{j}$ labels provide a unique labelling scheme for single-shell applications. It is common practice to replace the four single-particle energies $\varepsilon _{i}$ by the $l^{2}$ and $l\cdot s$ interactions: $\sum_{i}\varepsilon _{i}a_{i}^{+}a_{i}\rightarrow \epsilon \sum_{i}(n_{i}-\alpha _{i}\,l_{i}\cdot s_{i}-\beta _{i}\,l_{i}^{2})$, where $\epsilon $ is the average binding energy per valence particle, $n_{i}$ counts the number of valence particles, and $\alpha $ and $\beta $ are dimensionless parameters giving the interaction strengths of the $l\cdot s$ and $l^{2}$ terms. For the realistic single-particle energies used in the KB3 interaction (\ref{KB3 spe}), these parameters are $\epsilon =2.6$ MeV, $\beta =0.0096$, $\alpha _{p}=1.3333$, and $\alpha _{f}=1.7143$. The small value of $\beta $ signals a small $l^{2}$ splitting (\ref{KB3p-f spe}), while the values of $\alpha $ demonstrate the presence of a strong spin-orbit splitting. A significant part of the two-body interaction, $V_{kl,ij}$, maps onto the quadrupole-quadrupole ($Q\cdot Q$) and the pairing ($P$) interactions. Since $Q\cdot Q$ can be written in terms of $SU(3)$ generators, it induces no $SU(3)$ breaking, as has been discussed in the first chapters. Hence $Q\cdot Q$ serves to reinforce the importance of the Elliott model \cite{Elliott's SU(3) model}. The pairing interaction, on the other hand, mixes different $SU(3)$ irreps, but in our study it does not seem to cause any strong $SU(3)$ breaking. In this analysis the two-body part of the Hamiltonian ($V_{kl,ij}$) is fixed by the Kuo-Brown-3 (KB3) interaction matrix elements, and the single-particle energies, $\varepsilon _{i}$, are changed as described below.
The following single-particle energies are normally used with the KB3 interaction \cite{KB3 interaction}: \begin{equation} \mathrm{KB3\quad [MeV]}:\varepsilon _{p_{\frac{1}{2}}}=4,\quad \varepsilon _{p_{\frac{3}{2}}}=2,\quad \varepsilon _{f_{_{\frac{5}{2}}}}=6,\quad \varepsilon _{f_{\frac{7}{2}}}=0. \label{KB3 spe} \end{equation} For the purposes of the current study, it is important to know the centroids of the $p$- and $f$-shells. For example, the energy centroid of the $p$-shell is given by: \[ \varepsilon _{p}= \frac{ \varepsilon _{p_{\frac{1}{2}}} \dim(p_{\frac{1}{2}})+ \varepsilon _{p_{\frac{3}{2}}} \dim (p_{\frac{3}{2}})} {\dim(p_{\frac{1}{2}})+ \dim (p_{\frac{3}{2}})}. \] In what follows, we label by $KB3p\_f$ the Hamiltonian which uses the KB3 two-body interaction with single-particle $p$- and $f$-shell energies set to their centroid values: \begin{equation} \mathrm{KB3}p\_f\quad [MeV]:\varepsilon _{p_{\frac{1}{2}}}=\varepsilon _{p_{\frac{3}{2}}}=2.6670,\quad \varepsilon _{f_{_{\frac{5}{2}}}}=\varepsilon _{f_{\frac{7}{2}}}=2.5710. \label{KB3p-f spe} \end{equation} We use $KB3pf$ for the case when the single-particle energies are set to their overall average: \begin{equation} \mathrm{KB3}pf\quad [MeV]:\quad \varepsilon _{p}=\varepsilon _{f}=2.6. \label{KB3pf spe} \end{equation} Due to the near degeneracy of the single-particle energies of the $KB3p\_f$ interaction (\ref{KB3p-f spe}), the results for the $KB3pf$ case are very similar to those for $KB3p\_f$. \subsection{Computational Procedures} \quad In our study, we have focused on $^{44}$Ti, $^{46}$Ti, $^{48}$Ti, and $^{48}$Cr because these are $pf$-shell equivalents of $^{20}$Ne, $^{22}$Ne, $^{24}$Ne, and $^{24}$Mg, respectively, which are known to be good $SU(3)$ $sd$-shell nuclei. Furthermore, data on these nuclei are readily available from the National Nuclear Data Center (NNDC) \cite{NNDC} and full $pf$-shell calculations are feasible \cite{Caurier -full pf shell}.
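The centroid values in Eqs. (\ref{KB3p-f spe}) and (\ref{KB3pf spe}) follow directly from the KB3 energies weighted by the $2j+1$ degeneracies of each orbit; a quick numerical check (a sketch, not part of the shell-model codes used here):

```python
# Centroids of the KB3 single-particle energies (MeV), weighted by the
# 2j+1 degeneracies, as used for the KB3p_f and KB3pf parameter sets.
eps = {"p1/2": 4.0, "p3/2": 2.0, "f5/2": 6.0, "f7/2": 0.0}   # KB3 values
dim = {"p1/2": 2,   "p3/2": 4,   "f5/2": 6,   "f7/2": 8}     # 2j+1

def centroid(orbits):
    """Degeneracy-weighted average single-particle energy."""
    weight = sum(dim[o] for o in orbits)
    return sum(eps[o] * dim[o] for o in orbits) / weight

eps_p = centroid(["p1/2", "p3/2"])     # 16/6  = 2.667 MeV (p-shell)
eps_f = centroid(["f5/2", "f7/2"])     # 36/14 = 2.571 MeV (f-shell)
eps_all = centroid(eps)                # 52/20 = 2.600 MeV (overall average)
assert abs(eps_p - 2.667) < 1e-3
assert abs(eps_f - 2.571) < 1e-3
assert abs(eps_all - 2.6) < 1e-12
```

The computed centroids agree with the quoted values to about three decimal places.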
The model dimensionalities for full-space calculations increase very rapidly when approaching the mid-shell region; those for the cases considered here are given in Table \ref{pf space dimensions}. \begin{table}[tbp] \caption{Space dimensions for the $m$-scheme calculations in the full $pf$-shell model space. We have used even parity and even isospin basis states with no restrictions on the total angular momentum $J$ except for the $ M_{J}=0$ case where only states with even $J$ values have been selected.} \label{pf space dimensions} \begin{center} \begin{tabular}{rrrrr} \hline Nucleus & $M_{J}=0$ & $M_{J}=6$ & $M_{J}=10$ & $M_{J}=14$ \\ \hline $^{44}$Ti & 1080 & 514 & 30 & --- \\ $^{46}$Ti & 43630 & 32297 & 4693 & 134 \\ $^{48}$Ti & 317972 & 278610 & 57876 & 3846 \\ $^{48}$Cr & 492724 & 451857 & 104658 & 8997 \end{tabular} \end{center} \end{table} The computational procedures and tools used in the analysis of the $SU(3)$ symmetry breaking are described in this section. In brief, the Hamiltonian and other matrices are calculated using an $m$-scheme shell-model code \cite {the M-scheme approach} while the eigenvectors and eigenvalues are obtained by means of the Lanczos algorithm \cite{Whitehead-shell model}. All the calculations are done in the full $pf$-shell model space. First, the Hamiltonian $H$ for each interaction ($KB3$ (\ref{KB3 spe}), $ KB3p\_f$ (\ref{KB3p-f spe}), and $KB3pf$ (\ref{KB3pf spe})) is generated. Then the eigenvalues and eigenvectors are calculated and the yrast states identified. Next, the matrix for the second-order Casimir operator of $SU(3)$ , namely $C_{2}=(3L^{2}+Q\cdot Q)/4$, is generated using the shell-model code, and a moments method \cite{Moments method} is used to diagonalize the $ C_{2}$ matrix by starting the Lanczos procedure with specific eigenvectors of $H$ for which an $SU(3)$ decomposition is desired. 
Finally, $B(E2)$ values in $e^{2}fm^{4}$ units are calculated from one-body densities using Siegert's theorem with a typical value for the effective charge \cite{Ur et al, effective charges}, $q_{eff}=0.5$, so $e_{p}=(1+q_{eff})e=1.5e$ and $e_{n}=(q_{eff})e=0.5e$. Even though the procedure can generate the spectral decomposition of a state in terms of the eigenvectors of $C_{2}$ of $SU(3)$, this alone is not sufficient to determine uniquely all irrep labels $\lambda $ and $\mu $ of $SU(3)$. For example, $C_{2}$ has the same eigenvalue for the $(\lambda ,\mu )$ and $(\mu ,\lambda )$ irreps. Nevertheless, since for the first few leading irreps (largest $C_{2}$ values) the $\lambda $ and $\mu $ values can be uniquely determined \cite{Tabels of SU(N) to SU(3)}, this procedure suffices for our study. Usually, when considering full-space calculations, a balance between computer time and accuracy has to be struck. While the Lanczos algorithm \cite{Whitehead-shell model} is known to yield a good approximation for the lowest or highest eigenvalues and eigenvectors, it normally does a relatively poor job for intermediate states. This means, for example, that higher states, in particular high total angular momentum states, may be poorly represented or, in a worst-case scenario, not show up at all when these states are close to or beyond the truncation edge of the chosen submatrix. An obvious way to maintain a good approximation is to run the code for each $M_{J}$ value, that is, $M_{J}=0,2,4,6,\ldots$ This can be a very time-consuming process, but one whose cost can be reduced significantly if only a few $M_{J}$ values are used for each run. For the calculations of this study, we have used $M_{J}=0,6,10,$ and $14$. To maintain high confidence in the approximation of the intermediate states, which have $J=2,4,8,12,\ldots$, we required that they be within the first half of all the states produced. The code output was set for $29$ states.
A further verification of the accuracy of the procedure is whether the energies of the same state calculated in different $M_{J}$ runs are close to one another. For example, as a consistency check the energy of the lowest $J=6$ state in the $M_{J}=0$ run was compared to the energy of the same state obtained from the $M_{J}=6$ run. \section{Measuring Symmetry Breaking Using $C_{2}$ of $SU(3)$} \quad In this section, we discuss the results of our study of $SU(3)$ symmetry breaking in the $pf$-shell. In order to identify the $SU(3)$ structure of a yrast state, we calculate the spectral distribution of the state along the second Casimir operator ($C_{2}$) of $SU(3)$, as described in the previous section. From the spectral distribution, we can clearly determine whether the $SU(3)$ symmetry is broken or not. However, a graphical or tabular representation of the data becomes unwieldy with growing space dimensions. Thus, we also use average quantities, such as the centroid, width, and skewness of the distributions, to illustrate the main points one can deduce from a complicated spectral distribution. \subsection{Spectral Distribution} \quad The first set, Figs. \ref{C2 of SU(3) for KB3 in Ti44} and \ref{C2 of SU(3) for KB3p_f in Ti44}, demonstrates the recovery of the $SU(3)$ symmetry as the single-particle spin-orbit interaction is turned off, that is, in going from the $KB3$ to the $KB3p\_f$ interaction. Corresponding results for the $KB3pf$ interaction are similar to the $KB3p\_f$ results. In each graph, $C_{2}$ values of $SU(3)$ are given on the horizontal axis with the contribution of each $SU(3)$ state on the vertical axis. The bars within each cluster are contributions to the yrast states, starting with the ground state ($J=0$) on the left. Hence the second bar in each cluster is for the $J=2$ yrast state, etc.
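For reference, in the normalization $C_{2}=(3L^{2}+Q\cdot Q)/4$ adopted here, the $C_{2}$ eigenvalue in an $SU(3)$ irrep $(\lambda ,\mu )$ is $\lambda ^{2}+\mu ^{2}+\lambda \mu +3(\lambda +\mu )$, which reproduces all the $C_{2}$ values quoted in this section:

```python
def c2(lam, mu):
    """Eigenvalue of C2 = (3*L^2 + Q.Q)/4 in the SU(3) irrep (lam, mu)."""
    return lam**2 + mu**2 + lam * mu + 3 * (lam + mu)

# Leading and near-leading pf-shell irreps quoted for ^44Ti:
assert c2(12, 0) == 180   # leading irrep
assert c2(10, 1) == 144   # next-to-leading irrep
assert c2(8, 2) == 114
assert c2(4, 4) == 72
# C2 alone cannot distinguish (lam, mu) from (mu, lam):
assert c2(8, 2) == c2(2, 8)
```

The last line makes explicit the degeneracy noted earlier, which is why additional information is needed to fix the irrep labels.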
\begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{Ti44C2ofSU3forKB3}} \end{center} \caption{Strength distribution of $C_{2}$ of $SU(3)$ in yrast states of $^{44}$Ti for realistic single-particle energies with Kuo-Brown-3 two-body interaction ($KB3$).} \label{C2 of SU(3) for KB3 in Ti44} \end{figure} \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{Ti44C2ofSU3forKB3pf}} \end{center} \caption{Strength distribution of $C_{2}$ of $SU(3)$ in yrast states of $^{44}$Ti for degenerate single-particle energies with Kuo-Brown-3 two-body interaction ($KB3p\_f$).} \label{C2 of SU(3) for KB3p_f in Ti44} \end{figure} We have chosen $^{44}$Ti for an in-depth consideration of the fragmentation of the $C_{2}$ strength in yrast states. The results for the nondegenerate $KB3$ interaction are shown in Fig. \ref{C2 of SU(3) for KB3 in Ti44}. In this case the highest contribution (biggest bar) is more than $50\%$, corresponding to a $C_{2}$ value of 114 for the $J=12$ state. The $C_{2}=114$ value is for $(\lambda ,\mu)=(8,2)$, which is two $SU(3)$ irreps down from the leading one, $(\lambda ,\mu)=(12,0)$ with $C_{2}=180$. The leading irrep contributes only about 10$\%$ to the $J=12$ yrast state. The contribution of the next-to-leading irrep, $C_{2}=144$ for $(\lambda ,\mu)=(10,1)$, is slightly less than 40$\%$. Thus, for all practical purposes, the first three irreps determine the structure of the $J=12$ yrast state. This illustrates that the high total angular momentum $J$ states are composed of only the first few $SU(3)$ irreps. This is easily understood because high $J$ values require high orbital angular momentum ($L$) states, which are only present in $SU(3)$ irreps with large $C_{2}$ values. The high $J$ states may therefore be considered to be states with good $SU(3)$ symmetry. However, this is not the case for the ground state of $^{44}$Ti, which has very important contributions from states with $C_{2}$ values 60, 72, 90, 114, 144, and 180, with respective percentages 7.5, 25, 10, 21, 8, and 21$\%$. This shows that the leading irrep is not the biggest contributor to the $J=0$ ground state; there are two other contributors with about $20\%$, the third ($C_{2}=114$) and seventh ($C_{2}=72$) $SU(3)$ irreps. When the spin-orbit interaction is turned off, which yields nearly degenerate single-particle energies since the single-particle orbit-orbit splitting is small, one has the $KB3p\_f$ interaction, and in this case the structure of the yrast states changes dramatically, as shown in Fig. \ref{C2 of SU(3) for KB3p_f in Ti44}. There one can see that the leading irrep plays a dominant role, as its contribution is now more than $50\%$ of every yrast state. As in the previous case, the high total angular momentum $J$ states have the biggest contributions from the leading irrep: more than 97$\%$ for $J=12$, 91$\%$ for $J=10$, and 80$\%$ for $J=8$. The ground state is composed of a few irreps with $C_{2}$ values 72, 114, and 180, but in this case the leading irrep with $C_{2}=180$ makes up more than 52$\%$ of the total, with the other two most important irreps contributing 21$\%$ [$C_{2}=72$, $(\lambda ,\mu )=(4,4)$] and 23$\%$ [$C_{2}=114$, $(\lambda ,\mu )=(8,2)$]. \subsection{Moments of the Spectral Distributions} \quad An alternative way to show the recovery of the $SU(3)$ symmetry is given in Fig. \ref{<C2> for KB3 and KB3p_f in Ti44} and Fig. \ref{<C2> for KB3 and KB3p_f in Ti48}. These figures show the centroid, width, and skewness of the $C_{2}$ distributions. The $J$ values are plotted on the horizontal axis with the centroids given on the vertical axis.
The width of the distribution is indicated by the length of the error bars, which is just the rms deviation, $\Delta C_{2}=\sqrt{\left\langle \left(C_{2}-\left\langle C_{2}\right\rangle \right) ^{2}\right\rangle }$, from the average value of the second-order Casimir operator $\left\langle C_{2}\right\rangle $. The third central moment, $\delta C_{2}=\sqrt[3]{\left\langle \left(C_{2}-\left\langle C_{2}\right\rangle \right) ^{3}\right\rangle }$, which measures the asymmetry, is indicated by the length of the error bar above, $\Delta C_{2}+\frac{\delta C_{2}}{2}$, and below, $\Delta C_{2}-\frac{\delta C_{2}}{2}$, the average value. \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{Ti44C2forKB3andKB3pf}} \end{center} \caption{Average $C_{2}$ values for $KB3$ and $KB3p\_f$ interactions in $^{44}$Ti.} \label{<C2> for KB3 and KB3p_f in Ti44} \end{figure} \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{Ti48C2forKB3andKB3pf}} \end{center} \caption{Average $C_{2}$ values for $KB3$ and $KB3p\_f$ interactions in $^{48}$Ti.} \label{<C2> for KB3 and KB3p_f in Ti48} \end{figure} Note that the recovery of the leading irrep when the spin-orbit interaction is turned off is clearly signaled not only through an increase in the centroid $\left\langle C_{2}\right\rangle $ but also through the skewness $\delta C_{2}$. For example, in $^{44}$Ti with the $KB3$ interaction (spin-orbit interaction turned on) the ground state $J=0$ has $\left\langle C_{2}\right\rangle =110$ and skewness $\delta C_{2}=33$. This changes for the $KB3p\_f$ interaction to $\left\langle C_{2}\right\rangle =139$ and a skewness of $\delta C_{2}=-37$, as shown in Fig. \ref{<C2> for KB3 and KB3p_f in Ti44}. The equivalent of the $^{44}$Ti graph for the $^{48}$Ti case is shown in Fig. \ref{<C2> for KB3 and KB3p_f in Ti48}.
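The moments just defined can be sketched in a few lines. The second example below uses weights loosely modeled on the $KB3p\_f$ ground-state fractions quoted above (52\%, 21\%, and 23\% for $C_{2}=180$, 72, and 114); it is illustrative only:

```python
import math

def moments(values, weights):
    """Centroid, rms width, and signed cube-root skewness of a distribution."""
    norm = sum(weights)
    mean = sum(w * v for v, w in zip(values, weights)) / norm
    m2 = sum(w * (v - mean) ** 2 for v, w in zip(values, weights)) / norm
    m3 = sum(w * (v - mean) ** 3 for v, w in zip(values, weights)) / norm
    width = math.sqrt(m2)
    skew = math.copysign(abs(m3) ** (1.0 / 3.0), m3)  # signed cube root
    return mean, width, skew

# A symmetric toy distribution has zero skewness ...
c, w, s = moments([0.0, 1.0, 2.0], [0.25, 0.5, 0.25])
assert (c, s) == (1.0, 0.0) and abs(w - math.sqrt(0.5)) < 1e-12

# ... while piling strength onto the largest C2 value (irrep recovery)
# pushes the centroid up and drives the skewness negative, the signature
# seen for the KB3p_f ground state.
c, w, s = moments([72.0, 114.0, 180.0], [0.21, 0.23, 0.52])
assert c > 114 and s < 0
```

The signed cube root keeps $\delta C_{2}$ well defined for negative third moments, as required by the $\delta C_{2}=-37$ value quoted above.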
As for the $^{44}$Ti case, the results show the recovery of the $SU(3)$ symmetry in $^{48}$Ti when the single-particle spin-orbit interaction is turned off. \subsection{Coherent Spectral Structure} \quad We now turn to a discussion of the coherent nature of the yrast states. First, notice that the widths of the distributions, as defined by $\Delta C_{2}=\sqrt{\left\langle \left( C_{2}-\left\langle C_{2}\right\rangle \right) ^{2}\right\rangle }$, are surprisingly unaffected (Fig. \ref{<C2> for KB3 and KB3p_f in Ti44} and Fig. \ref{<C2> for KB3 and KB3p_f in Ti48}) by turning the spin-orbit interaction on and off. This effect occurs in all cases studied: $^{44}$Ti, $^{46}$Ti, $^{48}$Ti, and $^{48}$Cr. The more detailed graphs, Fig. \ref{C2 of SU(3) for KB3 in Ti44} and Fig. \ref{C2 of SU(3) for KB3p_f in Ti44}, offer an explanation in terms of the fragmentation of the $C_{2}$ distribution. As can be seen from these graphs, the irreps that are present in the structure of a given yrast state in the presence of the spin-orbit interaction (Fig. \ref{C2 of SU(3) for KB3 in Ti44}) remain present, though with reduced strength, in the structure of the state when the spin-orbit interaction is turned off (Fig. \ref{C2 of SU(3) for KB3p_f in Ti44}). As a consequence, $\Delta C_{2}$, which measures the overall spread of contributing irreps, is more or less independent of the spin-orbit interaction. One can see a sharp decrease in the width of the distribution only for high spin states like $J=12$ in the graph for $^{44}$Ti in Fig. \ref{<C2> for KB3 and KB3p_f in Ti44}. Fig. \ref{Coherence in Cr48 yrast band} demonstrates the coherent nature of the states within the yrast band. The three graphs shown give the spectrum of the second-order Casimir operator $C_{2}$ of $SU(3)$ for the $J=0$, 2 and 4 yrast states in $^{48}$Cr. The axes are labelled the same way as in Figs.
\ref{C2 of SU(3) for KB3 in Ti44} and \ref{C2 of SU(3) for KB3p_f in Ti44}, but in this case all bars are for a single yrast state. In this figure there are three peaks surrounded by smaller bars that yield a very similar enveloping shape for the given yrast states. The fragmentation and spread of $C_{2}$ values is nearly identical for these states, with no dominant irrep, indicative of severe $SU(3)$ symmetry breaking. \begin{figure}[tbp] \begin{center} \leavevmode \centerline {\includegraphics[width= 5in]{Cr48KB3J0to4}} \end{center} \caption{Coherent structure of the first three yrast states in $^{48}$Cr calculated using realistic single-particle energies with Kuo-Brown-3 two-body interaction ($KB3$). On the horizontal axis is $C_{2}$ of $SU(3)$, with the contribution of each $SU(3)$ state to the corresponding yrast state on the vertical axis.} \label{Coherence in Cr48 yrast band} \end{figure} Graphs for the $KB3p\_f$ case, when the spin-orbit interaction is turned off, are not shown since the results are similar to the results for $^{44}$Ti shown in Fig. \ref{C2 of SU(3) for KB3p_f in Ti44}. For example, when the spin-orbit interaction is on (KB3), the leading irrep for $^{48}$Cr has a $C_{2}$ value of 396 and accounts for only around 10$\%$ of the total strength distribution (see Fig. \ref{Coherence in Cr48 yrast band}), but when the spin-orbit interaction is off (KB3$p\_f$), the leading irrep is dominant, with more than 55$\%$ of the total strength. We conclude this discussion of the coherent structure of the yrast states by illustrating the coherent structure of the $^{48}$Cr states within the spherical shell-model basis. The inset (a) of Fig. \ref{Coherent mixing in Cr48} shows the spectral structure of the lowest yrast states ($J=0,2,4,$ and $6$), as calculated with the KB3 interaction, with respect to the spherical configuration basis. Notice the common spectral distribution of these states.
The distribution over the energy configurations, which involves excitation energies smaller than the harmonic-oscillator spacing ($<1\hbar \omega =10$ MeV), illustrates and supports the energy-based configuration truncation scheme. The bump at 12 MeV is probably related to the fact that this is only a $pf$-shell calculation ($0\hbar \omega $), which does not include the multi-shell excitations that lie at energies above $1\hbar \omega =10$ MeV. \begin{figure}[tbp] \begin{center} \leavevmode \centerline {\includegraphics[width= 5in]{CoherentMixingCr48}} \end{center} \caption{Coherent mixing and SU(3) breaking and recovery in $^{48}$Cr. Inset (a) demonstrates the coherent structure of the yrast states with respect to the spherical shell-model configuration basis (KB3); (b) coherent structure of the yrast states with respect to the SU(3) basis (KB3); (c) recovery of the SU(3) symmetry within the $KB3p\_f$ interaction.} \label{Coherent mixing in Cr48} \end{figure} \subsection{Enhanced Electromagnetic Transitions} \quad Our results on the lower $pf$-shell nuclei have so far shown that $SU(3)$ symmetry breaking in this region is driven by the single-particle spin-orbit splitting. However, even though states of the yrast band exhibit $SU(3)$ symmetry breaking, the yrast band $B(E2)$ values are insensitive to this fragmentation of the $SU(3)$ symmetry; specifically, the quadrupole collectivity as measured by $B(E2)$ transition strengths between low-lying members of the yrast band remains high even though $SU(3)$ appears to be broken. Relative $B(E2)$ values, that is, $B(E2)$ strengths normalized to the $B(E2:2^{+}\rightarrow 0^{+})$ value, are shown in Figs. \ref{Relative B(E2) in Ti44}, \ref{Relative B(E2) in Ti46}, and \ref{Relative B(E2) in Ti48}.
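The robustness of such ratios to the choice of effective charge (for isoscalar transitions, where the proton and neutron matrix elements are equal) can be seen in a minimal sketch; the matrix-element values 3.0 and 5.0 are hypothetical, chosen only for illustration:

```python
def bE2(mp, mn, q_eff=0.5):
    """B(E2) proportional to (e_p*Mp + e_n*Mn)^2, with e_p=(1+q)e, e_n=q*e."""
    return ((1.0 + q_eff) * mp + q_eff * mn) ** 2

# Hypothetical isoscalar matrix elements (Mp = Mn) for two transitions.
m_42, m_20 = 3.0, 5.0
for q in (0.0, 0.3, 0.5):
    ratio = bE2(m_42, m_42, q) / bE2(m_20, m_20, q)
    # e_p + e_n = (1 + 2q)e factors out of each amplitude, so the ratio
    # is (3/5)^2 = 0.36 for any q_eff: relative B(E2) values do not
    # depend on the effective charge, unlike absolute values.
    assert abs(ratio - (m_42 / m_20) ** 2) < 1e-12
```

Absolute $B(E2)$ values, by contrast, scale with $(1+2q_{eff})^{2}$ and therefore depend directly on the adopted effective charge.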
For isoscalar transitions, the relative $B(E2)$ strengths are insensitive to the chosen effective charges, which may be used to bring the theoretical $B(E2:2^{+}\rightarrow 0^{+})$ numbers into agreement with the experimental values. Whenever absolute $B(E2:2^{+}\rightarrow 0^{+})$ values are given, they are in $e^{2}fm^{4}$ units and the effective charges are $1.5e$ for protons and $0.5e$ for neutrons ($q_{eff}=0.5$). The first graph on relative $B(E2)$ values (Fig. \ref{Relative B(E2) in Ti44}) recaps our results for $^{44}$Ti. Calculated relative $B(E2)$ values for $^{44}$Ti with the spin-orbit interaction turned on (KB3) and with the spin-orbit interaction off (KB3$p\_f$) are very close to the pure $SU(3)$ limit. The agreement with experiment is very satisfactory except for the $4^{+}\rightarrow 2^{+}$ and $8^{+}\rightarrow 6^{+}$ transitions. However, the experimental data \cite{NNDC} on the $8^{+}\rightarrow 6^{+}$ transition give only an upper limit of 0.5 picoseconds on the half-life. We have used the worst case, namely a half-life of 0.5 ps, as a smaller value would increase the relative $B(E2)$. For example, a half-life of 0.05 ps would agree well with the relative $B(E2)$ value for the $KB3p\_f$ interaction. This example supports the adiabatic mixing which seems to be present for all the yrast states of $^{44}$Ti. \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{RelativeBE2ValuesTi44}} \end{center} \caption{Relative $B(E2)$ values $\left(\frac{B(E2:J_{i}\rightarrow J_{f})}{B(E2:2^{+}\rightarrow 0^{+})}\right) $ for $^{44}$Ti. The $B(E2:2^{+}\rightarrow 0^{+})$ transition values are 122.69$e^{2}fm^{4}$ for the experiment, 104.82$e^{2}fm^{4}$ for the KB3 interaction, and 138.58$e^{2}fm^{4}$ for the $KB3p\_f$ case.} \label{Relative B(E2) in Ti44} \end{figure} Fig. \ref{Relative B(E2) in Ti46} shows $B(E2)$ values for $^{46}$Ti.
In this case there are deviations from adiabatic mixing for the $ 6^{+}\rightarrow 4^{+}$, $10^{+}\rightarrow 8^{+}$, and higher transitions. Two experimental data sets are shown in Fig. \ref{Relative B(E2) in Ti46}: data from the NNDC is denoted as Exp\_(NNDC), and updated data on $ 2^{+}\rightarrow 0^{+}$ and $4^{+}\rightarrow 2^{+}$ transitions from \cite {Recent Data on B(E2)} is denoted as Exp\_(Updated). For $^{46}$Ti the agreement with the experiment is not as good as for $^{44}$Ti. However, the experimental situation is also less certain. The adiabatic behavior is well demonstrated for the first three yrast states $0^{+},$ $2^{+}$, and $4^{+}$ via relative $B(E2)$ values for the KB3 and $KB3p\_f$ interactions which are very close to the $SU\left( 3\right) $ limit. \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{RelativeBE2ValuesTi46}} \end{center} \caption{Relative $B(E2)$ values $\left(\frac{B(E2:J_{i}\rightarrow J_{f})}{ B(E2:2^{+}\rightarrow 0^{+})}\right) $ for $^{46}$Ti. The $ B(E2:2^{+}\rightarrow 0^{+})$ transition values are 199.82$e^{2}fm^{4}$ for the experimental data, 181.79$e^{2}fm^{4}$ for the updated experimental data, 208$e^{2}fm^{4}$ for KB3 interaction, and 299.83$e^{2}fm^{4}$ for $ KB3p\_f.$} \label{Relative B(E2) in Ti46} \end{figure} \begin{figure}[tbp] \begin{center} \leavevmode \epsfxsize = \textwidth \centerline {\includegraphics[width= \textwidth]{RelativeBE2ValuesTi48}} \end{center} \caption{Relative $B(E2)$ values $\left(\frac{B(E2:J_{i}\rightarrow J_{f})}{ B(E2:2^{+}\rightarrow 0^{+})}\right) $ for $^{48}$Ti. The $ B(E2:2^{+}\rightarrow 0^{+})$ transition values are 144.23$e^{2}fm^{4}$ for the experimental data, 155.5$e^{2}fm^{4}$ for the updated experimental data, 202.4$e^{2}fm^{4}$ for KB3 interaction, and 445.32$e^{2}fm^{4}$ for $ KB3p\_f. 
$} \label{Relative B(E2) in Ti48} \end{figure} We conclude this section by showing the recovery of the $SU(3)$ symmetry, this time via relative $B(E2)$ values, as shown for $^{48}$Ti in Fig. \ref{Relative B(E2) in Ti48}. There we see that for the degenerate single-particle case (KB3$p\_f$) the first few transitions have relative $B(E2)$ values which follow the $SU(3)$ limit very closely. On the other hand, the interaction involving spin-orbit splitting (KB3) is far from the $SU(3)$ limit. The $B(E2:4^{+}\rightarrow 2^{+})$ transition is strongly enhanced due to the adiabatic mixing, which is missing in the yrast states higher than $J=4$. \chapter{Summary and Discussions} \quad The primary goal of the current work has been to study and apply a new method, the mixed-symmetry approach, for large shell-model calculations. Our aim was to combine two very successful computational methods: the $m$-scheme spherical shell model and the $SU(3)$ shell model. In the process of this study, we have realized a new computational paradigm: an oblique-basis calculation that can be used to capture the mixed-mode structure of complex systems, such as atomic nuclei. The two methods, the $m$-scheme and $SU(3)$, are closely connected to the two dominant but often competing modes that characterize the structure of atomic nuclei: the single-particle shell structure, underpinned by the validity of the mean-field concept, and the many-particle collective behavior manifested through the nuclear deformation. This is reflected in two dominant elements of the nuclear Hamiltonian: the single-particle term, $H_{1}=\sum_{i}\varepsilon _{i}n_{i}$, and a collective two-body term $H_{2}$. The collective term $H_{2}$ is dominated by the quadrupole-quadrupole interaction, $H_{QQ}=Q\cdot Q$, which has good $SU(3)$ symmetry.
It follows that the simplified Hamiltonian $H=\sum_{i}\varepsilon _{i}n_{i}-\chi Q\cdot Q$ has two exactly solvable limits and thus can be considered to be a two-mode system. To probe the nature of such a system, we have considered a simple toy model: the one-dimensional harmonic oscillator in a box. As for real nuclei, this system has a finite volume and a restoring force whose potential is of harmonic oscillator type. For this model, there is a well-defined energy scale which measures the strength of the potential at the boundary of the box, $E_{c}=\omega ^{2}L^{2}/2$. For this system, the use of two sets of basis vectors, one for each of the two limits, has physical appeal, especially at energies near $E_{c}$. One basis set consists of the harmonic oscillator states; the other set consists of basis states of a particle in a box. In the regime of strong mixing of the two modes, at an energy scale compatible with $E_{c}$, there is a coherent structure expressed through a quasi-perturbative behavior of the system. Specifically, in this energy region first-order perturbation theory is not appropriate, since the zeroth-order approximation to the wave function is very poor; nevertheless, the first-order estimates of the energies are very close to the actual results. Moreover, the structure of the exact wave functions exhibits a coherent mixing (Fig. \ref{States25to29}) similar to the one observed in nuclei (Fig. \ref{Coherence in Cr48 yrast band}). An application of the mixed-symmetry basis calculations to $^{24}$Mg, using the realistic USD interaction of Wildenthal, has served to demonstrate the validity of the mixed-mode shell-model scheme. In this case, the oblique basis consists of the traditional spherical states, which yield a diagonal representation of the single-particle interaction, together with collective SU(3) configurations, which yield a diagonal quadrupole-quadrupole interaction.
The results, obtained in a space that spans less than 10\% of the full space, reproduce the correct binding energy, within 2\% of the full-space result, as well as the low-energy spectrum and the structure of the states, with a 90\% overlap with the exact states. In contrast, for an $m$-scheme spherical shell-model calculation, one needs about 60\% of the full space to obtain results comparable with the oblique-basis results. Calculations for $^{44}$Ti also support the mixed-mode shell-model scheme, even though calculations using a few $SU(3)$ irreps are not as good as the standard spherical shell-model calculations. As the results confirmed, the combined basis yields smaller enhancements in this case. For example, an oblique-basis calculation in $50\%$ of the full $pf$-shell space is as good as a usual $m$-scheme calculation in $80\%$ of the space. These results show very clearly that if the important modes can be isolated, then one can build an oblique theory that incorporates leading configurations of each mode and achieves good convergence in a limited model space. The study of the lower $pf$-shell nuclei $^{44-48}$Ti and $^{48}$Cr, using the realistic Kuo-Brown-3 (KB3) interaction, has shown strong SU(3) symmetry breaking due mainly to the single-particle spin-orbit splitting. When the spin-orbit splitting is reduced, the importance of $SU(3)$, as seen through the growing dominance of the leading irrep, is restored. Thus the KB3 Hamiltonian is at least a two-mode system. This is further supported by the behavior of the yrast-band B(E2) values, which seem to be insensitive to the fragmentation of the SU(3) symmetry. Specifically, the quadrupole collectivity as measured by the B(E2) strengths remains high even though the SU(3) symmetry is rather badly broken. This has been attributed to a quasi-SU(3) symmetry, where the observables behave like those of a pure SU(3) symmetry while the true eigenvectors exhibit a strong coherent structure with respect to each of the two bases.
This has been observed in all yrast states for the $^{44}$Ti case, while for the other nuclei studied, this coherence breaks down after the first few yrast states. In particular, even though the yrast states are not dominated by a single $SU(3)$ irrep, the $B(E2:4^{+}\rightarrow 2^{+})$ values remain strongly enhanced with values close (usually within 10-20\%) to the $SU(3)$ symmetry limit. From a technical point of view, there are some other possible basis sets to be studied.\footnote{In our study, SU(3) is shown to be good due to the 3D harmonic oscillator and the dominance of the $Q\cdot Q$ interaction in nuclei. The cylindrical basis is just the easiest way to construct the SU(3) states and seems to be most economical in terms of components. From a computational point of view, a good total angular momentum ($J$) and its third component ($M_J$) for the SU(3) states are essential. However, if one can find any other basis set, besides the SU(3)-based one, with good $J$ and $M_J$, then things may be as good, or even better.} For example, one can try to use deformed Nilsson basis states, or a basis set generated from a Hartree-Fock type procedure \cite{PHF and SU(3)}. One can even try simple cylindrical basis states with an appropriate procedure to maintain a complete set for good spin quantum numbers. If good rotational symmetry is to be sacrificed, then one can try a Lanczos algorithm which keeps only the big components during the iteration process. Another further development of the theory and its application is a study of other $sd$-shell nuclei as well as $pf$-shell nuclei. Such studies will further test the theory and the codes that have been developed. In spite of the results in the lower $pf$-shell, it is expected that in the mid-shell region some sort of $SU(3)$ collective structure is important.
Thus, the oblique-basis calculation may be an important alternative for calculating the structure of nuclei such as $^{56}$Fe and $^{56}$Ni. Another possibility is to integrate the oblique-basis concept into no-core calculations of the type developed in \cite{Navratil'00}. Such an extension would involve the symplectic group for multi-shell correlations rather than just SU(3) \cite{Sp(6)models}. An extension of the theory to a multi-mode oblique shell-model calculation is also a possibility. An immediate extension of the current scheme might use the eigenvectors of the pairing interaction \cite{Dukelsky et al-Pairing} within the Sp(4) algebraic approach to nuclear structure \cite{Sviratcheva-sp(4)}, together with the collective SU(3) states and spherical shell-model states. Even the three exact limits of the IBM \cite{MoshinskyBookOnHO} can be considered to comprise a three-mode system. Further, an even broader extension of the theory would involve a general procedure for the identification of dominant modes from any one- and two-body Hamiltonian, along with a complementary partitioning of the model space into physically relevant subspaces with small overlaps. One can then start with eigenstates for an arbitrary subspace and constructively improve the results by including corrections from the remaining subspaces. It should be possible to do this by keeping only a small set of the calculated lowest-energy states at each iteration. Hamiltonian-driven basis sets can also be considered. In particular, the method may use eigenstates of near-closed-shell nuclei obtained from a full shell-model calculation to form Hamiltonian-driven $J$-pair states for mid-shell nuclei \cite{Heyde's-shell model}. This type of extension would mimic the Interacting Boson Model (IBM) \cite{Iachello-1987} and the so-called broken-pair theory \cite{Heyde's-shell model}.
Nonetheless, the real benefit of this approach is expected when the system is far away from any exactly solvable limit of the Hamiltonian and the spaces encountered are too large to allow for exact calculations. In summary, we have studied a new computational method, the oblique-basis method. The concept has been applied to a toy model, as well as to some realistic nuclear systems. For realistic nuclei, we used spherical and cylindrical single-particle states to perform our mixed-symmetry calculations. We have studied $^{24}$Mg in the $sd$-shell and $^{44}$Ti in the $pf$-shell in a mixed-symmetry basis. For $^{24}$Mg, we have seen very promising results with respect to the energy spectra and the structures of the wave functions. When these results are translated into model-space dimensions, we see that an oblique-basis calculation in $10\%$ of the full $sd$-shell space is as good as a usual $m$-scheme calculation in $60\%$ of the full $sd$-shell space. For $^{44}$Ti, the results are less pronounced due to the dominance of the one-body over the two-body part of the Hamiltonian. However, in model-space dimensions, the results for $^{44}$Ti show that an oblique-basis calculation in $50\%$ of the full $pf$-shell space is as good as a usual $m$-scheme calculation in $80\%$ of the full $pf$-shell space. Through a detailed study of $^{44}$Ti, $^{46}$Ti, $^{48}$Ti, and $^{48}$Cr in the full $pf$-shell, we have confirmed the effect of the one-body part of the Hamiltonian, that is, that the strong $SU(3)$ symmetry breaking is due to the spin-orbit interaction, which splits the single-particle energies. For degenerate single-particle energies, we have seen that one recovers the dominance of the leading $SU(3)$ irrep, which is consistent with a two-body interaction dominated by the quadrupole-quadrupole interaction.
Throughout our study, we have seen some interesting coherent structures, such as coherent mixing of basis states, quasi-perturbative behavior in the toy model, and an enhanced $B(E2)$ strength toward the $SU(3)$ limit in nuclei. In short, the main positive outcome of this work is a proof-of-principle of the mixed-mode concept. We have shown that such calculations are doable and may yield better results and lead to a clearer understanding of complex systems. Problems yet to be solved are related mainly to the software package and its development. First of all, a routine for the complete generation of the SU(3) shell-model basis is needed. Basis sets other than the spherical shell-model and SU(3) shell-model basis sets are also desirable; some possible basis sets have been discussed. Another important software component is a set of commonly used physical observables and their matrix elements. The most important improvement, however, is to implement an error estimate of the final results and a possible extrapolation procedure for estimating the exact energy eigenvalues. Immediate further work should include a concentrated study of other $sd$-shell nuclei, $pf$-shell nuclei, and multi-shell calculations. Applications to atomic and molecular physics are also possible. \chapter*{Dedication} \quad At the end of my Ph.D. study program, I look back in time and think of the events and people that have taught, encouraged and supported me in my study of physics. I will never forget the events of the summer of 1982 that set the direction of my profession. That year I finished middle school and had to make a decision on a high school. Since I had shown interest in mathematics, physics, and technology, my mother recommended that I apply to the National Natural Science High School in Sofia, Bulgaria. I spent the whole summer reviewing my school books in mathematics and physics.
That was the first time in my life that I had to concentrate on a broad range of information, extract the essential elements, and commit them to memory. I discovered the joy and satisfaction of learning, problem solving, and overcoming obstacles through hard work. Due to the quality of the National Natural Science High School and my superior performance, I was accepted in the Sofia University as a physics student where I continued to acquire knowledge in physics and mathematics. I am thankful to my school teachers, many of whom were professors in physics at the Sofia University, for keeping me interested in physics and the natural sciences. My interest in theoretical physics jelled during my final years at the Sofia University where I attended many lectures on various subjects in mathematical and theoretical physics. It was then that I became interested in symmetries and group theory, and especially the newly emerging concept of quantum (deformed) Lie algebras. The next important event in my life was the choice of a professor for my master thesis. I still remember going from one professor to another looking for someone who was working on quantum algebras. Finally, I met my M.S. advisor, Professor R. P. Roussev at the Institute of Nuclear Research and Nuclear Energy of the Bulgarian Academy of Sciences, Sofia, Bulgaria. After a few short meetings with Professor Roussev and his coworkers, Professors P. P. Raychev and A. I. Gueorguieva, I was given a paper, one of the fundamental papers on the topic, to read and explain. Back then, I did not see this as another test that I had to pass, rather I thought of it as an opportunity to show what I had learned and what I was capable of doing. Not until many years later did I appreciate that this was a defining time for me, one that enabled me to continue working with and learning from Professor Roussev and his colleagues, with all of whom I became a good friend. 
Working with them was one of the best research experiences in my life. I am very thankful to them for their time, help, interesting conversations, and long hours spent in lengthy calculations and hard work. I am also grateful to many other colleagues at the Institute of Nuclear Research and Nuclear Energy of the Bulgarian Academy of Sciences that I had a chance to meet and work with. My next opportunity came as a surprise to me. I was not planning on going abroad as I was married to a wonderful wife, I had great colleagues, and I found my work to be very rewarding. But the challenge I faced was not mine alone, it was a problem for Bulgaria as it was for most other eastern European countries -- limited opportunities due to political upheaval and difficult economic times. In the spring of 1994 I met Professor Jerry P. Draayer at the Annual Bulgarian International Workshop on Nuclear Theory, Rila Mountains, Bulgaria. As a result, I guess, of a conversation between Professors J. P. Draayer and A. I. Gueorguieva -- a conversation that I know very little about -- I was offered the chance to come to Louisiana State University as a Ph.D. student in Professor J. P. Draayer's group. This was an honor I could not turn down. I am very grateful to my advisor Professor J. P. Draayer who gave me the opportunity to learn and work in a different international culture and environment, and to experience and enjoy interactions with teachers, students, and participants at many workshops and conferences in the United States and abroad. I am also very grateful to Professor J. P. Draayer and his wife Lois for their hospitality and the valuable and pleasant time spent in their home. I also would like to thank the International Hospitality Foundation, and especially my host family, Professor R. Imlay and his wife Dena Imlay, for their warm hospitality and valuable introduction to the American Culture. 
I am very grateful to Thomas Beuschel and Kenneth Bernstein, who helped me out in my first days in Baton Rouge; to Jutta Escher, Gabriela Popa and Ivan Chompalov for their continuing support through the years; as well as to the other recent and former graduate students in the Nuclear Theory group at Louisiana State University who contributed to a stimulating work environment. I am also grateful to Professor A. I. Gueorguieva, Dr. Ulrich Eichmann, and Kristina Sviratcheva for their friendship and support. At last, but not least, I cannot find words and space to write my extreme gratefulness to my beloved wife Petia and our precious children Anna and Alex for their support and understanding, and to my mother, father, and sister, and to my relatives for the support they have provided through the years. \addcontentsline{toc}{chapter}{Acknowledgments} \chapter*{Acknowledgments} \quad First of all, I wish to express my sincere gratitude to my advisor, Professor Jerry P. Draayer, who suggested my Ph. D. project and provided the necessary environment for its realization through his guidance, patience, and understanding. I would like also to acknowledge the help of Dr. W. E. Ormand and C. Johnson with whom two major topics were studied, the structure of the $^{24}$Mg nucleus, and the SU(3) symmetry breaking in the lower $pf$-shell; discussions with Professor A. Rau with whom the toy model was worked out; and interesting conversations and discussions with Dr. C. Bahri. I would like to thank Professors E. Zganjar, R. Haymaker, A. Rau, and C. Johnson from the Department of Physics and Astronomy, and A. Raman, from the Department of Mechanical Engineering, for their comments and suggestions, and for serving on my dissertation committee. I am grateful to the professors and administrative personnel in the Department of Physics and Astronomy for educational, technical, and administrative help, as well as for their friendly and kindly attitude. 
My thanks go also to my recent and former colleagues and friends in the Nuclear Theory group and in the Department of Physics and Astronomy. I am grateful to the LSU Writing Center for providing help in the preparation of my dissertation manuscript and especially to Dr. Joe Abraham and Lauren Moise. I wish to acknowledge the financial support from the Department of Physics and Astronomy, a dissertation fellowship from the Louisiana State University Graduate School, a travel grant from the Charles E. Coates Memorial Fund, and the U. S. National Science Foundation support under Grant No. PHY-9970769 and Cooperative Agreement No. EPS-9720652 that includes matching from the Louisiana Board of Regents Support Fund. \vfill \pagebreak \tableofcontents \pagebreak \addcontentsline{toc}{chapter}{List of Tables} \listoftables \pagebreak \addcontentsline{toc}{chapter}{List of Figures} \listoffigures \pagebreak \addcontentsline{toc}{chapter}{Abstract} \chapter*{Abstract} \quad Advances in computer technologies allow calculations in ever larger model spaces. To keep our understanding growing along with this growth in computational power, we consider a novel approach to the nuclear shell model. The one-dimensional harmonic oscillator in a box is used to introduce the concept of an oblique-basis shell-model theory. By implementing the Lanczos method for diagonalization of large matrices, and the Cholesky algorithm for solving generalized eigenvalue problems, the method is applied to nuclei. The mixed-symmetry basis combines traditional spherical shell-model states with SU(3) collective configurations. We test the validity of this mixed-symmetry scheme on $^{24}$Mg and $^{44}$Ti. Results for $^{24}$Mg, obtained using the Wildenthal USD interaction in a space that spans less than 10\% of the full space, reproduce the binding energy within 2\%, as well as the low-energy spectrum and the structure of the states -- a 90\% overlap with the exact eigenstates.
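Because the oblique basis is non-orthogonal, the eigenvalue problem takes the generalized form $Hc = E\,Nc$, with $N$ the overlap matrix, which the Cholesky algorithm reduces to a standard eigenproblem. A minimal sketch of this reduction, using random toy matrices rather than actual shell-model ones:

```python
import numpy as np

# Illustration (not the thesis code) of the Cholesky reduction used for
# the generalized eigenvalue problem H c = E N c that arises when the
# oblique basis is non-orthogonal (N is the overlap matrix).
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
H = A + A.T                          # toy symmetric Hamiltonian matrix
B = rng.standard_normal((n, n))
N = B @ B.T + n * np.eye(n)          # symmetric positive-definite overlap

Lc = np.linalg.cholesky(N)           # N = Lc Lc^T
# Transform to a standard problem: (Lc^-1 H Lc^-T) y = E y, with c = Lc^-T y
Hs = np.linalg.solve(Lc, np.linalg.solve(Lc, H).T).T
E, Y = np.linalg.eigh(Hs)
C = np.linalg.solve(Lc.T, Y)         # eigenvectors in the oblique basis

# Check H c = E N c for the lowest state; the residual is at round-off level
resid = H @ C[:, 0] - E[0] * (N @ C[:, 0])
print(np.max(np.abs(resid)))
```

In the actual calculations the standard problem is then handed to the Lanczos algorithm, which only needs matrix-vector products and so scales to the large dimensions of the shell-model spaces.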
In contrast, for an $m$-scheme calculation, one needs about 60\% of the full space to obtain comparable results. Calculations for $^{44}$Ti support the mixed-mode scheme, although the pure SU(3) calculations with a few irreps are not as good as the standard $m$-scheme calculations. The strong breaking of the SU(3) symmetry results in relatively small enhancements within the combined basis. However, an oblique-basis calculation in 50\% of the full $pf$-shell space is as good as a usual $m$-scheme calculation in 80\% of the space. Results for the lower $pf$-shell nuclei $^{44-48}$Ti and $^{48}$Cr, using the Kuo-Brown-3 interaction, show that SU(3) symmetry breaking in this region is driven by the single-particle spin-orbit splitting. In our study we observe some interesting coherent structures, such as coherent mixing of basis states, quasi-perturbative behavior in the toy model, and enhanced B(E2) strengths close to the SU(3) limit even though SU(3) appears to be rather badly broken. The results suggest that a mixed-mode shell-model theory may be useful in situations where competing degrees of freedom dominate the dynamics and full-space calculations are not feasible. \pagebreak \pagenumbering{arabic} \input{VGGPhDThesisCh1and2.tex} \input{VGGPhDThesisCh3.tex} \input{VGGPhDThesisCh4.tex} \input{VGGPhDThesisCh5.tex} \input{VGGPhDThesisCh6.tex} \input{VGGPhDThesisCh7.tex} \input{VGGPhDThesisReferences.tex} \input{VGGPhDThesisAppendix.tex} \chapter*{Vita} \addcontentsline{toc}{chapter}{Vita} \quad Vesselin Gueorguiev was born on May 27, 1967, in Sofia, Bulgaria. He started his physics career at age 15 while attending the National Natural Science High School in Sofia, Bulgaria, where he received a diploma with a physics major as a Semiconductor Production Operator. He continued his studies in physics at the University of Sofia, St.
Kliment Ohridski in Sofia, Bulgaria, where he majored in nuclear and elementary particle physics and received a master of science degree in 1992. He was then employed as a research-assistant at the Department of Theoretical Physics in the Institute of Nuclear Research and Nuclear Energy of the Bulgarian Academy of Sciences, Sofia, Bulgaria. His Doctor of Philosophy degree in nuclear physics will be awarded by Louisiana State University in December, 2002. During his graduate school years, he attended many workshops and conferences and presented his research results at the 1998 and 2000 International Symposia in Nuclear Physics at Oaxtepec, M\'{e}xico, the April 2000 and 2001 annual meetings of the American Physical Society, the 2000 and 2002 International Workshops on Nuclear Theory in Bulgaria, the 2001 annual meeting of the Division of Computational Physics of the American Physical Society, the 2002 Nuclear Structure Conference: Mapping the Triangle in Wyoming, and the 2002 International Colloquium on Group Theoretical Methods in Physics, Paris, France. He was a visiting scientist at the Institute for Nuclear Theory at the University of Washington in Seattle in October of 2000. He is the recipient of several awards from Louisiana State University, including a Graduate School Dissertation Fellowship, Coates Travel Award, and Graduate School Tuition Waiver. He is the author of 14 publications, including four abstracts, and three submitted papers. \end{document}
\section{Introduction} The Standard Model (SM) is a very successful theory of elementary particle physics. It is, however, known to have several essential problems. Primarily, it fails to explain observed phenomena such as the neutrino masses, the matter-antimatter asymmetry, and the origin of dark matter. Therefore, it is necessary to search for New Physics that will help to complete the theory, solve its problems, and account for the missing details. Recently a Higgs boson with a mass of 125 GeV has been discovered at the Large Hadron Collider (LHC)~\cite{Aad:2012tfa, Chatrchyan:2012xdj} that behaves like the Higgs boson of the SM. Whether it is indeed the SM Higgs boson or a Higgs boson of New Physics beyond the SM is presently one of the most important issues in particle physics. A detailed study of the properties of the Higgs boson can provide a crucial clue in the search for the ultimate New Physics theory. The theory of Supersymmetry (SUSY) is the most prominent candidate for a New Physics theory solving the SM problems. In this paper we study the possibility that the discovered Higgs boson is the lightest CP-even neutral Higgs boson $h^0$ of the Minimal Supersymmetric Standard Model (MSSM) \cite{LHCcrosssecs, Djouadi:2005gi}. In the phenomenological analysis of the MSSM, quark flavour conservation (QFC) is usually assumed, apart from the quark flavour violation (QFV) induced by the Cabibbo-Kobayashi-Maskawa matrix. However, SUSY QFV terms could be present in the mass matrix of the squarks. Especially important can be the mixing terms between the 2nd and the 3rd squark generations, such as the $\ti{c}_{L,R}-\ti t_{L,R}$ mixing terms, where $\ti{c}$ and $\ti t$ are the charm- and top-squark, respectively. In~\cite{Bartl:2014bka} we pointed out the importance of the SUSY QFV effects due to squark loop contributions in the decays of the MSSM Higgs boson $h^0$.
We showed that the QFV effect due to $\ti{c}_{L,R}-\ti t_{L,R}$ mixing can have a major impact on the decay $h^0 \to c \, \bar{c}$, strongly enhancing the deviation of the MSSM Higgs boson decay rate $\Gamma(h^0 \to c \, \bar{c})$ from the SM Higgs boson decay rate $\Gamma(H_{SM} \to c \, \bar{c})$, where c is the charm-quark. In \cite{Eberl:h2bb} we also showed that the QFV due to $\ti{c}_{L,R}-\ti t_{L,R}$ mixing can significantly enhance the difference between $\Gamma(h^0 \to b \, \bar{b})$ and $\Gamma(H_{SM} \to b \, \bar{b})$, where b is the bottom-quark. The loop-induced decays $h^0 \to \gamma \, \gamma$ and $h^0 \to g \, g$ are very sensitive to New Physics since loops of New Physics particles can appear at the lowest order of perturbative expansion of the decay amplitudes. The rates of these loop-induced decays were already calculated including gluonic QCD \cite{QCD_corr} and electroweak \cite{EW_corr} radiative corrections in the SM and also partly in the MSSM with QFC (except \cite{Brignole:2015kva} mentioned below). In this paper we study the influence of the SUSY QFV due to $\ti{c}_{L,R}-\ti t_{L,R}$ mixing on $h^0 \to \gamma \, \gamma$ and $h^0 \to g \, g$, including the gluonic two-loop QCD corrections \cite{Spira}. (We also studied $\tilde{s}_{L,R}-\tilde{b}_{L,R}$ mixing, with $\tilde{s}$ and $\tilde{b}$ the strange- and bottom-squark, respectively, but the effects turned out to be very small.) For this purpose, we perform a MSSM parameter scan respecting theoretical constraints from vacuum stability conditions and experimental constraints, such as those from B meson data and electroweak precision data, as well as recent limits on SUSY particle masses from LHC experiments. In \cite{Brignole:2015kva} these loop-induced decays were studied in the MSSM with QFV in an effective field theory approach based on dim-6 operators in a so-called $\kappa$-framework. 
However, that paper does not take into account the radiative corrections and the constraints mentioned above, except those from the electroweak precision data. Moreover, it does not include the $\ti c_{R}-\ti t_{R}$ mixing effect. As we will point out later, this mixing effect can also play an important role in the considered loop-induced decays. Although the $h^0$ decay widths of the $\gamma \gamma$ and $g g$ modes have been studied in the SM and the MSSM in many articles \cite{QCD_corr} - \cite{Nojiri_Boselli}, a systematic numerical study of the deviations of the MSSM widths from the SM values, taking into account the SUSY QFV effect and the constraints, is still missing. In this article we thoroughly perform such a study with special emphasis on the importance of SUSY QFV. Furthermore, we elucidate the sensitivities of measurements at the LHC and at future lepton colliders, such as the ILC, to the deviations. As the lepton-flavour violation effect has turned out to be very small in our analysis, we assume lepton flavour conservation. We also assume that the lightest neutralino is the lightest SUSY particle (LSP). In the following section we introduce the SUSY QFV parameters originating from the squark mass matrices. Details about our parameter scan are given in Section~\ref{sec:full scan}. In Section~\ref{sec:correlation} we define the deviations of the widths $h^0 \to \gamma \, \gamma$ and $h^0 \to g \, g$ from the SM and analyse their behaviour in the studied SUSY QFV scenarios. The paper closes with conclusions in Section~\ref{sec:concl} and a short Appendix, where all relevant constraints are listed.
\section{Squark mass matrices in the MSSM with flavour violation} \label{sec:sq.matrix} In the super-CKM basis of $\ti q_{0 \gamma} = (\ti q_{1 {\rm L}}, \ti q_{2 {\rm L}}, \ti q_{3 {\rm L}}$, $\ti q_{1 {\rm R}}, \ti q_{2 {\rm R}}, \ti q_{3 {\rm R}}),~\gamma = 1,\ldots,6,$ with $(q_1, q_2, q_3)=(u, c, t),$ $(d, s, b)$, the up-type and down-type squark mass matrices ${\cal M}^2_{\tilde{q}},~\tilde{q}=\tilde{u},\tilde{d}$, at the SUSY scale have the following most general $3\times3$ block form~\cite{Allanach:2008qq}: \begin{equation} {\cal M}^2_{\tilde{q}} = \left( \begin{array}{cc} {\cal M}^2_{\tilde{q},LL} & {\cal M}^2_{\tilde{q},LR} \\[2mm] {\cal M}^2_{\tilde{q},RL} & {\cal M}^2_{\tilde{q},RR} \end{array} \right), \quad \tilde{q}=\tilde{u},\tilde{d}\,. \label{EqMassMatrix1} \end{equation} Non-zero off-diagonal terms of the $3\times3$ blocks ${\cal M}^2_{\tilde{q},LL},~{\cal M}^2_{\tilde{q},RR},~{\cal M}^2_{\tilde{q},LR}$ and ${\cal M}^2_{\tilde{q},RL}$ in Eq.~(\ref{EqMassMatrix1}) explicitly break quark-flavour conservation in the squark sector of the MSSM. The left-left and right-right blocks in Eq.~(\ref{EqMassMatrix1}) are given by \begin{eqnarray} & &{\cal M}^2_{\tilde{u}(\tilde{d}),LL} = M_{Q_{u(d)}}^2 + D_{\tilde{u}(\tilde{d}),LL}{\bf 1} + \hat{m}^2_{u(d)}, \nonumber \\ & &{\cal M}^2_{\tilde{u}(\tilde{d}),RR} = M_{U(D)}^2 + D_{\tilde{u}(\tilde{d}),RR}{\bf 1} + \hat{m}^2_{u(d)}, \label{EqM2LLRR} \end{eqnarray} where $M_{Q_{u}}^2=V_{\rm CKM} M_Q^2 V_{\rm CKM}^{\dag}$, $M_{Q_{d}}^2 \equiv M_Q^2$, $M_{Q,U,D}$ are the Hermitian soft SUSY-breaking mass matrices of the squarks, $D_{\tilde{u}(\tilde{d}),LL}$, $D_{\tilde{u}(\tilde{d}),RR}$ are the $D$-terms, and $\hat{m}_{u(d)}$ are the diagonal mass matrices of the up(down)-type quarks. $M_{Q_{u}}^2$ is related to $M_{Q_{d}}^2$ by the CKM matrix $V_{\rm CKM}$ due to the $SU(2)_{\rm L}$ symmetry.
The left-right and right-left blocks of Eq.~(\ref{EqMassMatrix1}) are given by \begin{eqnarray} {\cal M}^2_{\tilde{u}(\tilde{d}),RL} = {\cal M}^{2\dag}_{\tilde{u}(\tilde{d}),LR} &=& \frac{v_2(v_1)}{\sqrt{2}} T_{U(D)} - \mu^* \hat{m}_{u(d)}\cot\beta(\tan\beta), \label{M2sqdef} \end{eqnarray} where $T_{U,D}$ are the soft SUSY-breaking trilinear coupling matrices of the up-type and down-type squarks entering the Lagrangian ${\cal L}_{int} \supset -(T_{U\alpha \beta} \tilde{u}^\dagger _{R\alpha}\tilde{u}_{L\beta}H^0_2 $ $+ T_{D\alpha \beta} \tilde{d}^\dagger _{R\alpha}\tilde{d}_{L\beta}H^0_1)$, $\mu$ is the higgsino mass parameter, and $\tan\beta = v_2/v_1$ with $v_{1,2}=\sqrt{2} \left\langle H^0_{1,2} \right\rangle$. The squark mass matrices are diagonalized by the $6\times6$ unitary matrices $U^{\tilde{q}}$, $\tilde{q}=\tilde{u},\tilde{d}$, such that \begin{eqnarray} &&U^{\tilde{q}} {\cal M}^2_{\tilde{q}} (U^{\tilde{q} })^{\dag} = {\rm diag}(m_{\tilde{q}_1}^2,\dots,m_{\tilde{q}_6}^2)\,, \label{Umatr} \end{eqnarray} with $m_{\tilde{q}_1} < \dots < m_{\tilde{q}_6}$. The physical mass eigenstates $\ti q_i, i=1,...,6$ are given by $\ti q_i = U^{\ti q}_{i \alpha} \ti q_{0\alpha} $. In this paper we focus on the $\tilde{c}_L - \tilde{t}_L$, $\tilde{c}_R - \tilde{t}_R$, $\tilde{c}_R - \tilde{t}_L$, and $\tilde{c}_L - \tilde{t}_R$ mixing which is described by the QFV parameters $M^2_{Q23}$, $M^2_{U23}$, $T_{U23}$ and $T_{U32}$, respectively. We will also often refer to the QFC parameter $T_{U33}$ which induces the $\tilde{t}_L - \tilde{t}_R$ mixing and plays an important role in this study.\\ The slepton parameters are defined analogously to the squark ones. All the parameters in this study are assumed to be real, except the CKM matrix $V_{CKM}$. 
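As a purely numerical illustration of this diagonalization (the entries below are toy GeV$^2$ values chosen for clarity, not a point of our parameter scan), one can build a real symmetric $6\times6$ up-squark mass matrix with a single $\ti c_R - \ti t_R$ mixing entry and diagonalize it:

```python
import numpy as np

# Toy sketch (illustrative numbers, not a scan point): a real symmetric
# 6x6 up-squark mass matrix, in GeV^2, with a single ~c_R - ~t_R mixing
# entry M^2_U23 in the right-right block.
M2 = np.diag([3000.0**2, 3000.0**2, 3000.0**2,   # ~u_L, ~c_L, ~t_L
              3000.0**2, 1500.0**2, 1000.0**2])  # ~u_R, ~c_R, ~t_R
M2[4, 5] = M2[5, 4] = 800.0**2                   # M^2_U23 mixing entry

m2, U = np.linalg.eigh(M2)      # eigenvalues returned in ascending order
masses = np.sqrt(m2)            # physical squark masses in GeV
# The 23-mixing pushes the lightest eigenvalue below the unmixed 1000 GeV,
# and the lightest mass eigenstate becomes a ~c_R / ~t_R admixture.
print(masses[0])
```

The ascending eigenvalue ordering of the numerical routine matches the convention $m_{\tilde{q}_1} < \dots < m_{\tilde{q}_6}$ used in the text.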
\section{Parameter scan} \label{sec:full scan} We perform an MSSM parameter scan taking into account theoretical constraints from vacuum stability conditions and experimental constraints from K- and B-meson data, the $h^0$ mass and coupling data and electroweak precision data, as well as limits on SUSY particle masses from recent LHC experiments (see Appendix A). As for the squark generation mixings, we only consider the mixing between the second and third generation of squarks. The mixing between the first and the second generation squarks is very strongly constrained by the K and D meson data~\cite{Gabbiani:1996hi, PDG2016}. The experimental constraints on the mixing of first and third generation squarks are not so strong~\cite{Dedes}, but we do not consider this mixing since its effect is essentially similar to that of the mixing of second and third generation squarks. The parameter points are generated by using random numbers in the ranges shown in Table~\ref{table1}; some parameters are fixed at the values given in the last box. All parameters are defined at the scale $Q = 1$~TeV, except $m_A(pole)$, which is the pole mass of the CP-odd Higgs boson $A^0$. The parameters that are not shown explicitly are taken to be zero. The entire scan lies in the decoupling Higgs limit, i.e. in the scenarios with large $\tan\beta \geq 10$ and large $m_A \geq 800$ GeV (see Table~\ref{table1}), respecting the fact that the discovered Higgs boson is SM-like. It is well known that the lightest MSSM Higgs boson $h^0$ is SM-like (including its couplings) in this limit. Note that we do not assume the GUT relation for the gaugino masses $M_1$, $M_2$, $M_3$. \begin{table}[h!] \footnotesize{ \caption{ Scanned ranges and fixed values of the MSSM parameters (in units of GeV or GeV$^2$, except for $\tan\beta$).
$M_{1,2,3}$ are the U(1), SU(2), SU(3) gaugino mass parameters.} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \vspace*{-0.3cm} & & & & &\\ \vspace*{-0.3cm} $\tan\beta$ & $M_1$ & $M_2$ & $M_3$ & $\mu$ & $m_A(pole)$\\ & & & & &\\ \hline \vspace*{-0.3cm} & & & & &\\ \vspace*{-0.3cm} 10 $\div$ 30 & $100 \div 2500$ & $100 \div 2500$ & $2500 \div 5000$ & $100 \div 2500$ & $800 \div 3000$\\ & & & & &\\ \hline \hline \vspace*{-0.3cm} & & & & &\\ \vspace*{-0.3cm} $ M^2_{Q 22}$ & $ M^2_{Q 33}$ & $|M^2_{Q 23}| $ & $ M^2_{U 22} $ & $ M^2_{U 33} $ & $|M^2_{U 23}| $\\ & & & & &\\ \hline \vspace*{-0.3cm} & & & & &\\ \vspace*{-0.3cm} $2500^2 \div 4000^2$ & $2500^2 \div 4000^2$ & $< 1000^2$ & $1000^2 \div 4000^2$ & $600^2 \div 3000^2$& $ < 1200^2$\\ & & & & &\\ \hline \hline \vspace*{-0.3cm} & & & & &\\ \vspace*{-0.3cm} $ M^2_{D 22} $ & $ M^2_{D 33}$ & $ |M^2_{D 23}|$ & $|T_{U 23}| $ & $|T_{U 32}| $ & $|T_{U 33}|$\\ & & & & &\\ \hline \vspace*{-0.3cm} & & & & &\\ \vspace*{-0.3cm} $ 2500^2 \div 4000^2$ & $1000^2 \div 3000^2 $ & $ < 1000^2$ & $< 4000 $ & $ < 4000$& $< 4000 $\\ & & & & &\\ \hline \multicolumn{6}{c}{}\\[-3.6mm] \cline{1-4} \vspace*{-0.3cm} & & & \\ \vspace*{-0.3cm} $ |T_{D 23}| $ & $|T_{D 32}| $ & $|T_{D 33}|$ &$|T_{E 33}| $\\ & & & \\ \cline{1-4} \vspace*{-0.3cm} & & & \\ \vspace*{-0.3cm} $< 1000 $ & $< 1000 $& $ < 1000$& $ < 500$\\ & & & \\ \cline{1-4} \end{tabular}\\[3mm] \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \vspace*{-0.3cm} & & & & & & & &\\ \vspace*{-0.3cm} $M^2_{Q 11}$ & $M^2_{U 11} $ & $M^2_{D 11} $ & $M^2_{L 11}$ & $M^2_{L 22} $ & $M^2_{L 33}$ & $M^2_{E 11}$&$M^2_{E 22}$ & $M^2_{E 33} $\\ & & & & & & & &\\ \hline \vspace*{-0.3cm} & & & & & & & &\\ \vspace*{-0.3cm} $4500^2$ & $4500^2$ & $4500^2$ & $1500^2$ & $1500^2$ & $1500^2$& $1500^2$& $1500^2$&$1500^2$\\ & & & & & & & &\\ \hline \end{tabular} \end{center} \label{table1} } \end{table} The decay widths $\Gamma(h^0 \to \gamma \gamma)_{MSSM}$ and $\Gamma(h^0 \to g g)_{MSSM}$ are calculated with 
our own code based on the public code {\tt SPheno}~\cite{SPheno1,SPheno2}. For the calculation of the MSSM spectrum we use the version {\tt SPheno-v3.3.8}. The computation includes the lowest order 1-loop contributions and the gluonic 2-loop QCD corrections (i.e. NLO QCD corrections) to the quark loops \cite{Spira}~\footnote{ The gluonic 2-loop QCD corrections to the squark loops are negligibly small, since the squark-loop contributions to the widths are rather small due to the large squark masses implied by the LHC limits (see Appendix A); corrections to such small contributions can safely be neglected. We can also neglect SUSY-QCD corrections to the quark/squark loops, since the gluino and squarks are required to be so heavy by the LHC limits (see Appendix A) that the gluino/squark-loop corrections (i.e. SUSY-QCD corrections) to the widths are very small. Moreover, the NNLO QCD corrections \cite{QCD_corr} and the NLO electroweak (EW) corrections \cite{EW_corr} to the widths are found to be much smaller than the NLO QCD corrections. Therefore, we take into account only the gluonic 2-loop QCD corrections (i.e. NLO QCD corrections) to the quark-loop contributions to $\Gamma(h^0 \to \gamma \gamma / g g)_{MSSM}$. }. The lowest order 1-loop contributions to $\Gamma(h^0 \to \gamma \gamma)_{MSSM}$ stem from loops with the SM particles, namely quarks (t, b, ...), charged leptons ($\tau^-$, ...) and the $W^\pm$ boson, and with the SUSY particles, namely squarks ($\ti{u}$, $\ti{d}$), charged sleptons ($\ti \tau^-$, ...), charginos $\ti \x^\pm$ and charged Higgs bosons $H^\pm$. The lowest order 1-loop contributions to $\Gamma(h^0 \to g g)_{MSSM}$ stem from loops with quarks (t, b, ...) and squarks ($\ti{u}$, $\ti{d}$). In order to stay consistent we also use our own code for the SM decay widths $\Gamma(h^0 \to \gamma \gamma)_{SM} \equiv \Gamma(H_{SM} \to \gamma \gamma)$ and $\Gamma(h^0 \to g g)_{SM} \equiv \Gamma(H_{SM} \to g g)$, including the gluonic 2-loop QCD corrections \cite{Spira}.
We have cross-checked them numerically with the decoupling limit of the MSSM results. The Higgs mass in the kinematic factors of the widths is fixed to the mass measured at the LHC, $m_{h^0} = 125.09$~GeV, to avoid an artificially large dependence stemming from the kinematic factor in $\Gamma(h^0 \to \gamma \gamma/g g)_{MSSM}$, which is proportional to $m^3_{h^0}$. All MSSM input parameters are taken as ${\overline{\rm DR}}$ parameters at the scale $Q = 1$~TeV and then transformed by RGEs to those at the scale $Q = m_{h^0} = 125.09$~GeV. The masses and rotation matrices of the sfermions are renormalized at the one-loop level within SPheno, based on the technique given in \cite{Pierce}. Of the 2850000 input points generated in the scan, about 285500 (roughly 10\%) survived all constraints. These surviving points are shown in all scatter plots in this article. \section{Deviation of the MSSM widths from the SM} \label{sec:correlation} We define the relative deviation of the MSSM width from the SM width as\footnote{For reference, the SM predictions (at 68\% CL) of \cite{Almeida:2013jfa} are $\Gamma(\gamma)_{SM} = (1.08 ^{+0.03}_{-0.02}) \cdot 10^{-5}$ GeV and $\Gamma(g)_{SM} = (3.61 \pm 0.06) \cdot 10^{-4}$ GeV, and those of \cite{CERN_YR4} are $\Gamma(\gamma)_{SM} = (9.31 \pm 0.09) \cdot 10^{-6}$ GeV and $\Gamma(g)_{SM} = (3.35 \pm 0.21) \cdot 10^{-4}$ GeV.} \begin{equation} DEV(X) = \Gamma(h^0 \to X X)_{MSSM}/\Gamma(h^0 \to X X)_{SM} - 1\, , \, \mbox{with } X = \gamma, g\, , \end{equation} where we identify $h^0$ with the Higgs boson with a mass of 125.09 GeV. \noindent The relative deviation of the width ratio from the SM prediction is defined as \begin{equation} DEV(\gamma/g) = [\Gamma(\gamma)/\Gamma(g)]_{MSSM}/[\Gamma(\gamma)/\Gamma(g)]_{SM} - 1 \label{eq_DEV(ga/g)} \end{equation} with \begin{equation} \Gamma(X) = \Gamma(h^0 \to X X)\, , \, \mbox{where } X = \gamma, g.
\end{equation} \noindent Note that $DEV(\gamma/g)$ in Eq.~(\ref{eq_DEV(ga/g)}) can also be written directly in terms of $DEV(\gamma)$ and $DEV(g)$, \begin{equation} DEV(\gamma/g) = {DEV(\gamma) + 1 \over DEV(g) + 1} - 1\, . \label{eq_DEV(X/Y)_1} \end{equation} Before we show the results of the full parameter scan, we briefly comment on the expected qualitative behaviour of $DEV(g)$. One can approximate $DEV(g)$ in an effective field theory approach based on dim-6 operators parametrized in a so-called $\kappa$-framework~\cite{Brignole:2015kva}, assuming that the SM contribution stems only from the top loop and neglecting the Higgs mass in the amplitude. Based on the result for $\delta \kappa_g$ given in~\cite{Brignole:2015kva}, we can write the approximation for $DEV(g) \sim 2 \delta \kappa_g = DEV(g)^{approx}$ in our convention (see Section~\ref{sec:sq.matrix}), \begin{equation} DEV(g)^{approx} = {v^2 \over 4} \Bigg[ {1 \over m^2_{\ti t_L} } \left( y_t^2 - {|T_{U23}|^2 \over m^2_{\ti c_R}} \right) + {1 \over m^2_{\ti t_R} } \ \left( y_t^2 - {|T_{U32}|^2 \over m^2_{\ti c_L}} \right) - {|T_{U33}|^2 \over m^2_{\ti t_L} m^2_{\ti t_R}} \Bigg] \, , \label{DEV(g)_approx} \end{equation} where $y_t = \sqrt{2}\, m_t/v_2 = g\, m_t/(\sqrt{2}\, m_W \sin\beta)$ is the top-quark Yukawa coupling, $v = \sqrt{v_1^2 + v_2^2} = 2\, m_W/g = 242$~GeV is the vacuum expectation value, $m_t$ is the top-quark mass, $m_W$ is the W-boson mass, and $g$ is the SU(2) gauge coupling constant. In Eq.~(\ref{DEV(g)_approx}) we have neglected terms $\propto \mu/\tan\beta$, because we use large values of $\tan\beta$ ($\ge 10$) in this numerical study, see Eq.~(\ref{M2sqdef}). Note that Eq.~(\ref{DEV(g)_approx}) does not depend on $M^2_{U23}$ and $M^2_{Q23}$. The terms $m^2_{\ti c_{L,R}}$ and $m^2_{\ti t_{L,R}}$ are diagonal entries of the mass matrix ${\cal M}^2_{\tilde{q}}$, Eq.~(\ref{EqMassMatrix1}).
For values much larger than $v$ we can approximate them by $m^2_{\ti c_L} \simeq M^2_{Q 22}$, $m^2_{\ti c_R} \simeq M^2_{U 22}$, $m^2_{\ti t_L} \simeq M^2_{Q 33}$, and $m^2_{\ti t_R} \simeq M^2_{U 33}$. \\ From Eq.~(\ref{DEV(g)_approx}) we see that $DEV(g)^{approx}$ depends only on the squared absolute values of $T_{U23}, T_{U32}$, and $T_{U33}$. When all three of these parameters go to zero, $DEV(g)^{approx}$ is small and positive. For large values of $|T_{U23}|$, $|T_{U32}|$, and $|T_{U33}|$, $DEV(g)^{approx}$ becomes large and negative. Furthermore, $DEV(g)^{approx}$ also grows in magnitude when $m^2_{\ti c_{L,R}}$ and/or $m^2_{\ti t_{L,R}}$ decrease. \begin{figure*}[t!] \centering \subfigure[]{ { \mbox{\hspace*{-1cm} \resizebox{7.5cm}{!}{\includegraphics{fig1a.pdf}} \hspace*{0cm}}} \label{fig1a}} \subfigure[]{ { \mbox{\hspace*{+0.cm} \resizebox{7.5cm}{!}{\includegraphics{fig1b.pdf}} \hspace*{-1cm}}} \label{fig1b}}\\ \caption{ The scatter plot of the scanned parameter points within the ranges given in Table \ref{table1} in the DEV($\gamma$) - DEV($g$) plane. (a): The expected 1$\sigma$ errors at ILC250/500 + HL-LHC [ILC250 + HL-LHC]; the black cross at (DEV($\gamma$), DEV($g$))=(0.025, -0.102) shows a possibly measured point of Eq.~(\ref{DEV_cdef}) and the orange and purple boxes indicate expected 1$\sigma$ errors of Eqs.~(\ref{DDEV_A}) and~(\ref{DDEV_B}), respectively. (b): The 68\% and 95\% CL contours of the recent ATLAS/CMS data~\cite{ATLAS_kappa_plot,CMS_kappa_plot}. } \label{fig1} \end{figure*} In the following we show the results of a full parameter scan without using this effective field theory approximation. In Fig.~\ref{fig1} we show the scatter plot of the scanned parameter points within the ranges given in Table \ref{table1} in the $DEV(\gamma) - DEV(g)$ plane. We see that $DEV(g)$ is mostly negative and goes down to below $-10\%$, and that there is a strong correlation between $DEV(\gamma)$ and $DEV(g)$, \begin{equation} DEV(\gamma) \simeq - {1 \over 4} DEV(g) \, .
\end{equation} Thus we also have $DEV(\gamma)^{approx} \simeq - {1 \over 4} DEV(g)^{approx}$. This feature is due to the fact that the amplitude for $h^0 \to \gamma \gamma$ is dominated by the W-boson loop contribution. The second important contribution to $h^0 \to \gamma \gamma$ stems from the top-quark loop. The decay $h^0 \to g g$ is dominated by the top-quark loop contribution. In the scenarios we are interested in, the up-type squark loop contributions to $h^0 \to \gamma \gamma / g g$ can be large. All other SUSY contributions are relatively small, giving together less than 0.5\% in our study. Hence both $DEV(\gamma)$ and $DEV(g)$ are dominated by the same common source (i.e. the $\ti{u}_{1,2}$ loops), which together with the W-loop dominance leads to the strong correlation. Qualitatively our results are consistent with $DEV(g)^{approx}$ and $DEV(\gamma)^{approx}$, but a direct numerical comparison is difficult because of the different usage of the MSSM input parameters; see the description at the end of Section~\ref{sec:full scan}.\\ The large deviations shown in Fig.~\ref{fig1} can be experimentally observed at a future $e^+ e^-$ collider such as the ILC~\cite{ILC250} and/or CLIC~\cite{CLICref}. The abbreviations ``ILC250/500 + HL-LHC'' and ``ILC250 + HL-LHC'' are explained below. In Fig.~\ref{fig1}(b) we show the recent LHC data on the coupling modifiers ($\kappa_\gamma$, $\kappa_g$), transformed into the $(DEV(\gamma), DEV(g))$ plane by using the relation $DEV(X) = \kappa_X^2 -1$, where $\kappa_X = C(h^0XX)/C(h^0XX)_{SM}$ with $C(h^0XX)$ being the $h^0XX$ coupling. It is seen that the errors of the LHC data are very large and that both the SM and the MSSM are allowed by the ATLAS/CMS data on the $h^0$ couplings $C(h^0\gamma\gamma)$ and $C(h^0gg)$.\\ If the measured point at ILC + HL-LHC were around ($DEV(\gamma)$, $DEV(g)$) = (0.025, -0.10) as shown in Fig.~\ref{fig1}(a), then the data would disfavour the SM and favour the MSSM.
If the measured point were around ($DEV(\gamma)$, $DEV(g)$) = (-0.05, -0.10), then we could say that the data disfavours both the SM and the MSSM. \begin{figure*}[h!] \centering \subfigure[]{ { \mbox{\hspace*{0cm} \resizebox{7.5cm}{!}{\includegraphics{fig2a.pdf}} \hspace*{-0.5cm}}} \label{fig2a}} \hfill \subfigure[]{ { \mbox{\hspace*{0cm} \resizebox{7.5cm}{!}{\includegraphics{fig2b.pdf}} \hspace*{-0.5cm}}} \label{fig2b}}\\ \subfigure[]{ { \mbox{\hspace*{0.cm} \resizebox{7.5cm}{!}{\includegraphics{fig2c.pdf}} \hspace*{0cm}}} \label{fig2c}}\\ \caption{ The scatter plots of the scanned parameter points within the ranges given in Table \ref{table1} in (a): $T_{U 33}$ - DEV($\gamma$); (b): $T_{U 33}$ - DEV($g$); (c): $T_{U 33}$ - DEV($\gamma/g$) planes. The expected 1$\sigma$ errors at ILC250/500 + HL-LHC [ILC250 + HL-LHC] are also shown. The black horizontal solid lines at (DEV($\gamma$), DEV($g$), DEV($\gamma$/$g$))=(0.025, -0.102, 0.141) show possibly measured values of Eq.~(\ref{DEV_cdef}) and the orange and purple dashed lines indicate expected 1$\sigma$ errors of Eqs.~(\ref{DDEV_A}) and~(\ref{DDEV_B}), respectively.} \label{fig2} \end{figure*} In Fig.~\ref{fig2} we show the scatter plots of the scanned parameter points within the ranges given in Table~\ref{table1} in the $T_{U 33}$ - DEV($\gamma$) (a), $T_{U 33}$ - DEV($g$) (b), and $T_{U 33}$ - DEV($\gamma/g$) (c) planes. We see that DEV($g$) and DEV($\gamma/g$) can be large in the scanned parameter ranges for large values of $|T_{U 33}|$. This means that the $\ti{u}_{1,2}$-loop ($\sim$ stop/scharm loop) contributions to these loop-induced decays are quite important. As in Fig.~\ref{fig1}, the deviations shown can be observed at a future $e^+ e^-$ collider (ILC/CLIC).\\ In all three plots of Fig.~\ref{fig2} we see the parabolic increase of the $DEV$'s for increasing $|T_{U33}|$, as discussed after Eq.~(\ref{DEV(g)_approx}).
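The parabolic $|T_U|$ dependence can be made explicit by evaluating Eq.~(\ref{DEV(g)_approx}) directly. The following Python sketch is not part of the analysis code; the default soft masses and all other inputs are illustrative values within the scanned ranges of Table~\ref{table1}, chosen only to reproduce the qualitative behaviour discussed above:

```python
# Numerical sketch of Eq. (DEV(g)_approx).  All parameter values below are
# illustrative choices within the scanned ranges of Table 1, not actual scan
# points; the default soft masses play the role of the diagonal entries
# m^2_{stop_L,R}, m^2_{scharm_L,R} discussed in the text.
import math

v = 242.0            # vacuum expectation value in GeV (value used in the text)
m_t = 173.0          # on-shell top mass in GeV
tan_beta = 16.0      # illustrative value in the scanned range 10-30
sin_beta = tan_beta / math.sqrt(1.0 + tan_beta**2)
y_t = math.sqrt(2.0) * m_t / (v * sin_beta)   # top-quark Yukawa coupling

def dev_g_approx(TU23, TU32, TU33,
                 m2_stL=2520.0**2, m2_stR=1435.0**2,
                 m2_scL=3660.0**2, m2_scR=3710.0**2):
    """DEV(g)^approx of Eq. (DEV(g)_approx) for trilinear couplings in GeV."""
    return (v**2 / 4.0) * (
        (y_t**2 - TU23**2 / m2_scR) / m2_stL
        + (y_t**2 - TU32**2 / m2_scL) / m2_stR
        - TU33**2 / (m2_stL * m2_stR)
    )

print(dev_g_approx(0.0, 0.0, 0.0))        # all T_U -> 0: small and positive
print(dev_g_approx(0.0, 0.0, 4000.0))     # large |T_U33|: driven negative
print(dev_g_approx(0.0, 0.0, -4000.0))    # same value: only |T_U33|^2 enters
```

The sign flip and the insensitivity to the sign of the trilinear couplings illustrate the parabolic shape seen in Figs.~\ref{fig2} and \ref{fig3}.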
The less populated region around $T_{U33} = 3$~TeV stems from the fact that the upper limit of the $m_{h^0}$ constraint is often violated there. In order to show the importance of the QFV effect, in Fig.~\ref{fig3} we show the scatter plot in the $T_{U 32} - DEV(\gamma)$~(a), $T_{U 32} - DEV(g)$~(b), and $T_{U 32} - DEV(\gamma/g)$~(c) planes. In Fig.~\ref{fig3} we have a similar pattern as in Fig.~\ref{fig2}, but with the maximal deviations at slightly smaller values, $|T_{U32}| \sim 2.5$~TeV. Again the parabolic shape is seen, and we see that in order to obtain large deviations the absolute values of both the QFC parameter $T_{U33}$ and the QFV parameter $T_{U32}$ must be large. We have obtained a similar dependence on $T_{U23}$ to that on $T_{U32}$; hence we do not show the analogous plots for $T_{U23}$. In the parameter scan the average value of $M^2_{U22}$ is 1.8 times larger than that of $M^2_{U33}$. Therefore, the prefactor of $|T_{U23}|^2$ in Eq.~(\ref{DEV(g)_approx}) is on average 1.8 times smaller than that of $|T_{U32}|^2$, leading to a somewhat milder $|T_{U23}|^2$ dependence of the deviations than the $|T_{U32}|^2$ one. However, this choice of different mass ranges is made only for a good efficiency (a good survival probability) of the parameter scan in the search for large deviations. The deviations can be enhanced by relatively light stop/scharm masses (see Eq.~(\ref{DEV(g)_approx})). Hence, relatively light mass ranges are taken for $M^2_{U22}$ and $M^2_{U33}$ in Table~\ref{table1}. In order to confirm that this choice does not essentially affect our final conclusion, we have performed the same parameter scan taking common mass ranges [(0.6 TeV)$^2$, (4.0 TeV)$^2$] for \{$M^2_{Q22}, M^2_{Q33}, M^2_{U22}, M^2_{U33}, M^2_{D22}, M^2_{D33}$\}. We have obtained very similar scan results, with a slightly enhanced $T_{U23}$ dependence and a much smaller survival probability of the scan.
The common feature of the scan results is that the DEV's are significantly enhanced by large values of the trilinear couplings $T_{U23}, T_{U32}, T_{U33}$. This can be explained as follows: \begin{itemize} \item The $ \ti c_{R/L} - \ti t_{R/L}$ mixings can be large for large QFV parameters $M^2_{Q 23}, M^2_{U 23}, T_{U 23}$, and $T_{U 32}$, for which the lighter up-type squarks $\ti{u}_{1,2}$ can be strong mixtures of $ \ti c_{R/L} - \ti t_{R/L}$. \item In our decoupling Higgs scenario (with large $m_A$ and large $\tan\beta$), $h^0 \simeq {\rm Re}(H_2^0)$, and hence $T_{U 23}$, $T_{U 32}$, $T_{U 33}$ are approximately the $h^0 \ti t_L \ti c_R$, $h^0 \ti t_R \ti c_L$, $h^0 \ti t_L \ti t_R$ couplings, respectively. \end{itemize} Thus, the $h^0 \ti{u}_{1,2} \ti{u}_{1,2}$ couplings, and therefore also the $\ti{u}_{1,2}$-loop contributions to $\Gamma(h^0 \to \gamma \gamma, g g)$, can be enhanced by large $T_{U 23}, T_{U 32}, T_{U 33}$, which results in the significant correlations between $T_{U 23}, T_{U 32}, T_{U 33}$ and $DEV(\gamma), DEV(g), DEV(\gamma/g)$. This explains the appearance of these $T_U$'s in Eq.~(\ref{DEV(g)_approx}). \begin{figure*}[h!] \centering \subfigure[]{ { \mbox{\hspace*{0cm} \resizebox{7.5cm}{!}{\includegraphics{fig3a.pdf}} \hspace*{-0.5cm}}} \label{fig3a}} \hfill \subfigure[]{ { \mbox{\hspace*{0cm} \resizebox{7.5cm}{!}{\includegraphics{fig3b.pdf}} \hspace*{-0.5cm}}} \label{fig3b}}\\ \subfigure[]{ { \mbox{\hspace*{0.cm} \resizebox{7.5cm}{!}{\includegraphics{fig3c.pdf}} \hspace*{0cm}}} \label{fig3c}}\\ \caption{ The scatter plot in the $T_{U 32}$ - DEV($\gamma$) (a), $T_{U 32}$ - DEV($g$) (b), and $T_{U 32}$ - DEV($\gamma/g$) (c) planes. The expected 1$\sigma$ errors at ILC250/500 + HL-LHC [ILC250 + HL-LHC] are also shown as in Fig.~\ref{fig2}.
} \label{fig3} \end{figure*} Our analysis has shown that the correlations between the deviations DEV($\gamma$), DEV($g$), DEV($\gamma/g$) and all the FV/FC parameters other than those of the $\ti{u}$ sector (such as $T_{U23}$, $T_{U32}$, $T_{U33}$ and the stop/scharm masses) are very weak (see Eq.~(\ref{DEV(g)_approx})). This means that the deviations DEV($\gamma$), DEV($g$), DEV($\gamma/g$) are quite insensitive to the parameters other than those of the up-type squark sector. The latter is due to the fact that in our decoupling Higgs scenario $h^0 \simeq {\rm Re}(H_2^0)$. Hence, the contributions of the down-type squark loops and the charged slepton loops to the decay widths $\Gamma(h^0 \to \gamma \gamma)$ and $\Gamma(h^0 \to g g)$ are very small. Note that $H_2^0$ couples to $\ti t_L / \ti{c}_L$ - $\ti t_R / \ti{c}_R$ but not to $\ti b_L / \tilde{s}_L$ - $\ti b_R / \tilde{s}_R$. Furthermore, for $DEV(\gamma)$, the charged Higgs and the chargino contributions always remain in the few-per-mille range. It is important to discuss the expected experimental errors. We use two supposed data sets: \mbox{data set A: ILC250/500 + HL-LHC}, and, for the case of data taking without a 500 GeV ILC, \mbox{data set B: ILC250 + HL-LHC}. What ``ILC250 + HL-LHC'' and ``ILC250/500 + HL-LHC'' stand for is explained in detail in the caption of Table~1 of \cite{ILC250}, where they are named ``ILC250'' and ``ILC500''. In order to discuss the experimental and theoretical errors we fix a possibly measured point, \begin{equation} \{DEV(\gamma)_c, DEV(g)_c, DEV(\gamma/g)_c\} = \{ 2.5\%, -10.2\%, 14.1\%\} \, . \label{DEV_cdef} \end{equation} This point is shown in Fig.~\ref{fig1}(a) by a black cross and in Figs.~\ref{fig2}-\ref{fig3} by solid horizontal lines.
We use the relative estimated experimental 1$\sigma$~errors on the couplings $h \gamma\gamma$ and $h g g$ and their ratio in the EFT fit framework, \begin{eqnarray} {\rm data set~A}: && \{ \delta^r g_\gamma, \delta^r g_g, \delta^r g_{\gamma/g} \} = \{1\%, 0.95\%, 1.3\%\} \, ,\\ {\rm data set~B}: && \{ \delta^r g_\gamma, \delta^r g_g, \delta^r g_{\gamma/g}\} = \{1.2\%, 1.7\%, 1.8\%\}\, , \end{eqnarray} where $\delta^r y$ is defined as the relative error $\Delta y/y$ of the parameter $y$. The values for $\delta^r g_\gamma$ and $\delta^r g_g$ are taken from Table~1 in \cite{ILC250}, and the value for $\delta^r g_{\gamma/g}$ we obtained from \cite{dga/g_error} using the same EFT fit program as in \cite{ILC250}. Using \begin{equation} \Delta DEV(X) = 2 (DEV(X)_c + 1) \delta^r g_X\, , \quad X = \gamma, g, \gamma/g\, , \end{equation} we get the 1$\sigma$ errors for our $DEV$'s, \begin{eqnarray} {\rm data set~A}: && \{ \Delta DEV(\gamma), \Delta DEV(g), \Delta DEV(\gamma/g) \} = \{2.1\%, 1.7\%, 3.0\%\}\, , \label{DDEV_A}\\ {\rm data set~B}: && \{ \Delta DEV(\gamma), \Delta DEV(g), \Delta DEV(\gamma/g) \} = \{2.5\%, 3.1\%, 4.1\%\} \label{DDEV_B}\, . \end{eqnarray} \noindent The 1$\sigma$ error bands are $DEV(X)_c \pm \Delta DEV(X)$, shown by boxes in Fig.~\ref{fig1}(a) and by dashed and dotted lines in Fig.~\ref{fig2} and Fig.~\ref{fig3}. In all three figures, Figs.~\ref{fig1}-\ref{fig3}, we see that there are only a few dozen points with really large deviations from the SM expectation values. This is just a matter of statistics, because we perform a scan in a 22-dimensional parameter space. Thus we choose a reference scenario, the point P1, with large $DEV$'s, and then vary the most interesting parameters around it. All MSSM input parameters for P1 are shown in Table~\ref{table2}; they give the $DEV$'s of Eq.~(\ref{DEV_cdef}). This scenario P1 satisfies all present experimental and theoretical constraints, see Appendix A.
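The numbers in Eqs.~(\ref{DDEV_A}) and (\ref{DDEV_B}) follow directly from the error-propagation relation $\Delta DEV(X) = 2\,(DEV(X)_c + 1)\,\delta^r g_X$ above; the following short numerical cross-check is a sketch, independent of the actual EFT fit code of \cite{ILC250}:

```python
# Cross-check of Delta DEV(X) = 2 (DEV(X)_c + 1) delta^r g_X for the possibly
# measured point of Eq. (DEV_cdef) and the two supposed data sets A and B.
dev_c = {"gamma": 0.025, "g": -0.102, "gamma/g": 0.141}

# relative 1-sigma coupling errors delta^r g_X for the two data sets
data_set_A = {"gamma": 0.010, "g": 0.0095, "gamma/g": 0.013}
data_set_B = {"gamma": 0.012, "g": 0.017,  "gamma/g": 0.018}

def delta_dev(dev_central, rel_coupling_error):
    """1-sigma error on DEV(X), using DEV(X) = kappa_X^2 - 1."""
    return 2.0 * (dev_central + 1.0) * rel_coupling_error

for label, errors in (("A", data_set_A), ("B", data_set_B)):
    for X in ("gamma", "g", "gamma/g"):
        print(label, X, round(delta_dev(dev_c[X], errors[X]), 4))

# Consistency of the measured point itself with Eq. (eq_DEV(X/Y)_1):
print((dev_c["gamma"] + 1.0) / (dev_c["g"] + 1.0) - 1.0)   # close to 0.141
```

Rounding to two significant figures reproduces the percentages of Eqs.~(\ref{DDEV_A}) and (\ref{DDEV_B}), and the last line shows that the point of Eq.~(\ref{DEV_cdef}) is internally consistent with Eq.~(\ref{eq_DEV(X/Y)_1}).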
The resulting physical masses of the particles are shown in Table~\ref{physmasses}. For the calculation of the masses and the mixing, as well as for the low-energy observables, especially those in the B and K meson sectors (see Table~\ref{TabConstraints}), we use the public code {\tt SPheno} v3.3.8~\cite{SPheno1, SPheno2}. For the calculation of the coupling modifier $\kappa_b = C(h^0 b \bar{b})/C(h^0 b \bar{b})_{SM}$ (or equivalently the deviation $DEV(b)(= \kappa_b^2 -1)$ of the width $\Gamma(h^0 \to b \bar{b})$ from its SM value) we compute the width $\Gamma(h^0 \to b \bar{b})$ at the full one-loop level in the MSSM with QFV by using the code developed by us \cite{Eberl:h2bb}. We obtain $\kappa_b = 0.927$ (or $DEV(b) = -0.141$), which satisfies the LHC data in Table~\ref{TabConstraints}. For the B and K meson observables we get: $B(b \to s \gamma) = 3.177 \cdot 10^{-4}$, $B(b \to s \ l^+ l^-) = 1.588 \cdot 10^{-6}$, $B(B_s \to \mu^+ \mu^-) = 3.065 \cdot 10^{-9}$, $B(B^+ \to \tau^+ \nu) = 9.956 \cdot 10^{-5}$, $\Delta M_{B_s} = 19.606~{\rm ps}^{-1}$, $|\epsilon_K| = 2.205 \cdot 10^{-3}$, $\Delta M_K = 2.322 \cdot 10^{-15}~{\rm GeV}$, $B(K^0_L \to \pi^0 \nu \bar{\nu}) = 2.307 \cdot 10^{-11}$, and $B(K^+ \to \pi^+ \nu \bar{\nu}) = 7.734 \cdot 10^{-11}$, all of which satisfy the constraints of Table~\ref{TabConstraints}. \begin{table}[h!]
\footnotesize{ \caption{The MSSM parameters for the reference point P1 (in units of GeV or GeV$^2$, except for $\tan\beta$).} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \vspace*{-0.3cm} & & & & &\\ \vspace*{-0.3cm} $\tan\beta$ & $M_1$ & $M_2$ & $M_3$ & $\mu$ & $m_A(pole)$\\ & & & & &\\ \hline \vspace*{-0.3cm} & & & & &\\ \vspace*{-0.3cm} 16 & 1270 & 500 & 4800 & 1260 & 1960\\ & & & & &\\ \hline \hline \vspace*{-0.3cm} & & & & &\\ \vspace*{-0.3cm} $M^2_{Q 22}$ & $ M^2_{Q 33}$ & $M^2_{Q 23}$ & $ M^2_{U 22} $ & $ M^2_{U 33} $ & $M^2_{U 23} $\\ & & & & &\\ \hline \vspace*{-0.3cm} & & & & &\\ \vspace*{-0.3cm} 3660$^2$ & 2520$^2$ & 550$^2$ & 3710$^2$ & 1435$^2$ & 875$^2$\\ & & & & &\\ \hline \hline \vspace*{-0.3cm} & & & & &\\ \vspace*{-0.3cm} $ M^2_{D 22} $ & $ M^2_{D 33}$ & $ M^2_{D 23}$ & $T_{U 23} $ & $T_{U 32} $ & $T_{U 33}$\\ & & & & &\\ \hline \vspace*{-0.3cm} & & & & &\\ \vspace*{-0.3cm} 3620$^2$ & 2720$^2$ & 925$^2$ & 760 & 1560 & - 4200\\ & & & & &\\ \hline \multicolumn{6}{c}{}\\[-3.6mm] \cline{1-4} \vspace*{-0.3cm} & & & \\ \vspace*{-0.3cm} $ T_{D 23} $ & $T_{D 32} $ & $ T_{D 33}$ &$T_{E 33} $\\ & & & \\ \cline{1-4} \vspace*{-0.3cm} & & & \\ \vspace*{-0.3cm} -565 & 690 & 270 & - 470\\ & & & \\ \cline{1-4} \end{tabular}\\[3mm] \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \vspace*{-0.3cm} & & & & & & & &\\ \vspace*{-0.3cm} $M^2_{Q 11}$ & $M^2_{U 11} $ & $M^2_{D 11} $ & $M^2_{L 11}$ & $M^2_{L 22} $ & $M^2_{L 33}$ & $M^2_{E 11}$&$M^2_{E 22}$ & $M^2_{E 33} $\\ & & & & & & & &\\ \hline \vspace*{-0.3cm} & & & & & & & &\\ \vspace*{-0.3cm} $4500^2$ & $4500^2$ & $4500^2$ & $1500^2$ & $1500^2$ & $1500^2$& $1500^2$& $1500^2$&$1500^2$\\ & & & & & & & &\\ \hline \end{tabular} \end{center} \label{table2} } \end{table} \begin{table} \caption{Physical masses in GeV of the particles for the scenario of Table~\ref{table2}.} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $\mnt{1}$ & $\mnt{2}$ & $\mnt{3}$ & $\mnt{4}$ & $\mch{1}$ & $\mch{2}$ \\ \hline \hline $532.1$ &
$1242$ & $1271$ & $1310$ & $532.3$ & $1275$ \\ \hline \end{tabular} \vskip 0.4cm \begin{tabular}{|c|c|c|c|c|} \hline $m_{h^0}$ & $m_{H^0}$ & $m_{A^0}$ & $m_{H^+}$ \\ \hline \hline $125.5$ & $1960$ & $1960$ & $1962$ \\ \hline \end{tabular} \vskip 0.4cm \begin{tabular}{|c|c|c|c|c|c|c|} \hline $\msg$ & $\msu{1}$ & $\msu{2}$ & $\msu{3}$ & $\msu{4}$ & $\msu{5}$ & $\msu{6}$ \\ \hline \hline $4562$ & $725$ & $2204$ & $3497$ & $3551$ & $4380$ & $4386$ \\ \hline \end{tabular} \vskip 0.4cm \begin{tabular}{|c|c|c|c|c|c|} \hline $\msd{1}$ & $\msd{2}$ & $\msd{3}$ & $\msd{4}$ & $\msd{5}$ & $\msd{6}$ \\ \hline \hline $2173$ & $2421$ & $3467$ & $3497$ & $4380$ & $4386$ \\ \hline \end{tabular} \end{center} \label{physmasses} \end{table} In Fig.~\ref{fig4} we show the contour plots of DEV($\gamma/g$) in the QFV/QFC parameter planes around P1. The reference point is marked by a green ``x''. We see that $DEV(\gamma/g)$ is large in a wide region of the parameter planes and that the effect of the QFV parameters $M^2_{U23}, T_{U23}, T_{U32}$ (and also of the QFC parameter $T_{U 33}$) on $DEV(\gamma/g)$ is very important. We again see the parabolic behaviour in all the $T_U$ parameters. For this parameter point the dependences on $T_{U 32}$ and $T_{U 23}$ are of similar size, and along the $T_{U 33}$ direction DEV($\gamma/g$) varies from $-3\%$ up to 16\% in the allowed region. Fig.~\ref{fig4}(c) shows a strong dependence on the $\ti c_R$-$\ti t_R$ mixing parameter $M^2_{U23}$, which means that for large $M^2_{U23}$ the ``linearized'' approximation Eq.~(\ref{DEV(g)_approx}) is no longer good. There one should add to Eq.~(\ref{DEV(g)_approx}) higher-order terms, which include $M^2_{U23}$. \begin{figure*}[h!]
\centering \subfigure[]{ { \mbox{\hspace*{-0.5cm} \resizebox{7.5cm}{!}{\includegraphics{contour_bench1_TU23_TU32.pdf}} \hspace*{0cm}}} \label{fig4a}} \subfigure[]{ { \mbox{\hspace*{0cm} \resizebox{7.1cm}{!}{\includegraphics{contour_bench1_TU32_TU33.pdf}} \hspace*{-0.5cm}}} \label{fig4b}}\\ \subfigure[]{ { \mbox{\hspace*{0cm} \resizebox{7.5cm}{!}{\includegraphics{contour_bench1_TU32_M2U23.pdf}} \hspace*{0cm}}} \label{fig4c}}\\ \caption{Contour plots of DEV($\gamma/g$) in the $T_{U 32}$ - $T_{U 23}$ (a), $T_{U 32}$ - $T_{U 33}$ (b), $T_{U 32}$ - $M^2_{U 23}$ (c) planes. The parameters other than the shown ones in each plane are fixed as in Table~\ref{table2}. The ``X'' marks P1 in the plots. The shown forbidden areas are due to the constraints: $A \equiv m_{h^0}$, $B \equiv {\rm B}(B_s \to \mu^+ \mu^-)$, $C \equiv$ vacuum stability condition, $D \equiv m_{\ti{u}_1}$. The dashed lines are the contours of $m_{h^0} = 125.09$~GeV. } \label{fig4} \end{figure*} Finally, we also discuss the theoretical errors. The theoretical uncertainties of the MSSM predictions are twofold: for a fixed MSSM parameter point, the total theoretical error can be split into the uncertainty due to unknown (higher-order) loop contributions and the uncertainty due to the errors of the SM input parameters. We call the former the scale uncertainty and the latter the parametric uncertainty. The scale uncertainty can be estimated by varying the renormalization scale $Q$ from $Q = m_{h^0}/2$ up to $Q = 2 m_{h^0}$. We can write the relative parametric uncertainty as \begin{equation} \delta^{r,P} DEV(X) = \bigg|{ m_t \over DEV(X)} {\partial DEV(X) \over \partial m_t}\bigg|\delta^r m_t \oplus \bigg|{ \alpha_s \over DEV(X)} {\partial DEV(X) \over \partial \alpha_s}\bigg| \delta^r \alpha_s \, , \end{equation} with $X = \gamma, g, \gamma/g$, where $\oplus$ denotes addition in quadrature.
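Anticipating the sensitivity coefficients, input-parameter errors and scale variations quoted below for the reference point P1, the error combination (the parametric pieces in quadrature, the scale uncertainty then added linearly) can be sketched numerically; the channel labels and dictionary layout are illustrative choices, not part of the analysis code:

```python
# Sketch of the total theoretical error for P1: the two parametric pieces are
# combined in quadrature and the scale uncertainty is then added linearly.
# The |sensitivity coefficients|, input errors and scale variations are the
# values quoted in the text for P1.
import math

rel_err_mt = 0.0023       # delta^r m_t
rel_err_alphas = 0.0093   # delta^r alpha_s

channels = {
    # X: (|c_mt|, |c_alphas|, scale uncertainty, |DEV(X)_c|)
    "gamma":   (1.7, 3.0, 0.023, 0.025),
    "g":       (0.2, 2.8, 0.029, 0.102),
    "gamma/g": (0.5, 3.1, 0.032, 0.141),
}

results = {}
for X, (c_mt, c_as, scale, dev_c) in channels.items():
    parametric = math.hypot(c_mt * rel_err_mt, c_as * rel_err_alphas)
    rel_total = parametric + scale        # quadrature, then linear addition
    abs_total = rel_total * dev_c         # Delta DEV(X) = delta^r DEV(X) |DEV(X)_c|
    results[X] = (rel_total, abs_total)
    print(X, round(rel_total, 4), round(abs_total, 5))
# roughly reproduces delta^r DEV = {5.1%, 5.5%, 6.1%} and
# Delta DEV = {0.13%, 0.55%, 0.85%} quoted below (up to rounding)
```

The small differences with respect to the quoted numbers come only from rounding of the intermediate percentages.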
We have found that we can neglect the parametric uncertainties due to all the other SM parameters such as $m_b$, $\alpha_{EM}$, $m_Z$, etc. We use as input the on-shell top mass, $m_t = 173$~GeV with $\delta^r m_t = 0.23$\%, and $\alpha_s \equiv \alpha_s(m_Z)_{{\overline{\rm MS}}} = 0.1181$ with $\delta^r \alpha_s = 0.93$\% \cite{PDG2018}. We get for the reference point P1 at $1\sigma$ \begin{eqnarray*} \delta^{r,P} DEV(\gamma) & = & |- 1.7| \delta^r m_t \oplus | 3.0| \delta^r \alpha_s = \hphantom{0}0.4\% \oplus 2.8\%\, ,\\ \delta^{r,P} DEV(g) & = & |- 0.2| \delta^r m_t \oplus | 2.8| \delta^r \alpha_s = 0.05\% \oplus 2.6\%\, ,\\ \delta^{r,P} DEV(\gamma/g) & = & |-0.5| \delta^r m_t \oplus | 3.1| \delta^r \alpha_s = \hphantom{0}0.1\% \oplus 2.9\%\, . \end{eqnarray*} One would guess that for $DEV(\gamma)$ there should be a small coefficient in front of $\delta^r \alpha_s$. This is not the case, because $\alpha_s$ has a strong influence on the calculation of the running top Yukawa coupling at $Q = m_{h^0}$ and on that of the $\ti{u}$~parameters entering the $h^0 \ti{u} \ti{u}^*$~couplings.\\ From the scale variation we get \begin{eqnarray*} \delta^{r,Q} DEV(\gamma) & = & \begin{array}{c} \hphantom{-}2.3 \% \\ -2.1\% \end{array} \simeq 2.3\%\, ,\\ \delta^{r,Q} DEV(g) & = &\begin{array}{c} \hphantom{-}2.9\% \\ -2.6\% \end{array} \simeq 2.9\%\, ,\\ \delta^{r,Q} DEV(\gamma/g) & = & \begin{array}{c} \hphantom{-}3.2\% \\ -2.8\% \end{array} \simeq 3.2\%\, . \end{eqnarray*} The upper value is for $Q = m_{h^0}/2$ and the lower one for $Q = 2 m_{h^0}$. Thus we estimate the total theoretical relative and absolute errors, $\Delta DEV(X) = \delta^r DEV(X)\, DEV(X)_c$, at $1\sigma$ for the point P1, \begin{eqnarray*} \delta^{r} DEV(\gamma) = 5.1\% \, , & \quad \Delta DEV(\gamma) & = 0.13\%\,, \\ \delta^{r} DEV(g) = 5.5\% \, , & \quad \Delta DEV(g) & = 0.55\%\,,\\ \delta^{r} DEV(\gamma/g) = 6.1\% \, , & \!\!\!\!\!
\quad \Delta DEV(\gamma/g) & = 0.85\%\, , \end{eqnarray*} where the parametric uncertainties are added quadratically and the scale uncertainty is then added to them linearly. Comparing this result with Eqs.~(\ref{DDEV_A}) and (\ref{DDEV_B}), we see that the theoretical errors are one order of magnitude smaller than the experimental ones at P1. From Eqs.~(\ref{DEV_cdef}), (\ref{DDEV_A}), (\ref{DDEV_B}) and the theoretical errors, we see that the ILC cannot miss this SUSY signal in case scenario P1 (or a similar one) is realized in Nature. Using the LO (lowest order) results instead of the NLO results at P1, the relative shifts of the DEV's are found to be very small (less than 1\%). This is due to the fact that in our computation the NLO QCD corrections are included only in the SM parts, which dominate the MSSM widths. One might think that the experimental and theoretical improvements expected in the low-energy observables could exclude the flavour-violating squark scenarios in the first place, well before the start of the HL-LHC or the ILC. On the other hand, the low-energy observables have both experimental and theoretical errors, and currently the latter tend to be comparable to (or larger than) the former, as shown in Table~\ref{TabConstraints}. The theoretical improvement expected in the low-energy observables is rather unclear. Only if the observed values were to agree perfectly with the SM predictions, both with almost vanishing errors, would the flavour-violating squark scenarios be excluded. \section{Conclusions} \label{sec:concl} We have studied the correlation between the loop-induced decays $h^0 \to \gamma \gamma$ and $h^0 \to g g$ in the MSSM with QFV.
From a full parameter scan and a detailed analysis around a fixed reference point, respecting all the relevant theoretical and experimental constraints, we have found that \begin{itemize} \item the relative deviation of the MSSM decay width $\Gamma(h^0 \to g \, g)$ from the Standard Model value, $DEV(g)$, can be large and negative, down to $\sim -15\%$, in the studied parameter ranges, \item there is a strong correlation between $DEV(\gamma)$ and $DEV(g)$, \item the relative deviation of the width ratio $DEV(\gamma/g)$ from the SM value can be large (up to $\sim 20\%$) in the studied parameter ranges, \item both SUSY QFV and QFC up-type squark parameters can have a strong influence on these deviations, and their contributions add up. \end{itemize} Such large deviations can be observed at a future $e^+ e^-$ collider such as the ILC and CLIC. Observation of the deviation patterns shown in this study would favour the MSSM with flavour-violating squark mixings and encourage further studies in this model. \section*{Acknowledgments} We would like to thank W. Porod for helpful discussions, especially for the permanent support concerning SPheno. We also thank J.~Tian for sharing his expertise on ILC physics with us. We also thank Prof. A. Bartl for useful discussions at the early stage of this work.\\ VRVis is funded by BMVIT, BMDW, Styria, SFG and Vienna Business Agency in the scope of COMET - Competence Centers for Excellent Technologies (854174), which is managed by FFG. \begin{appendix} \section{Theoretical and experimental constraints} \label{sec:constr} The experimental and theoretical constraints taken into account in the present work are discussed in detail in~\cite{Eberl_17}. Here we only list the updated constraints from K- and B-physics and those on the Higgs boson mass and coupling in Table~\ref{TabConstraints}.
The $h^0$ couplings that receive significant SUSY QFV effects are $C(hbb)$~\cite{Eberl:h2bb}, $C(hcc)$~\cite{Bartl:2014bka}, $C(hgg)$ and $C(h\gamma\gamma)$ \footnote{ Strictly speaking, in principle the $C(htt)$ coupling could also receive significant SUSY QFV effects. However, predicting the (effective) coupling $C(htt)$ at loop levels in the MSSM is very difficult, since its theoretical definition in the context of tth production at the LHC is unclear~\cite{tth@LHC}. }. The measurement of $C(hcc)$ is very difficult due to huge QCD backgrounds at the LHC; there is no significant experimental data on $C(hcc)$ at present. Hence, the relevant $h^0$ couplings to be compared with the LHC observations are $C(hbb)$, $C(hgg)$ and $C(h\gamma\gamma)$. The MSSM predictions for the couplings $C(hgg)$ and $C(h\gamma\gamma)$ are allowed by the current LHC data, as shown in Fig.~\ref{fig1}(b). Therefore, we list the LHC data on $C(hbb)$ ($\kappa_b$) in Table~\ref{TabConstraints}. In \cite{Dedes} the QFV decays $t \to q h$ with $q = u, c$ have been studied in the general MSSM with QFV. It is found that these decays cannot be observed at the current and high-luminosity LHC runs due to the very small branching ratios B($t \to q h$). \noindent In addition to these, we also require our scenarios to be consistent with the following updated experimental constraints: \begin{table*}[h!] \footnotesize{ \caption{ Constraints on the MSSM parameters from the K- and B-meson data relevant mainly for the mixing between the second and the third generations of squarks, and from the data on the $h^0$ mass and the coupling $\kappa_b$. The fourth column shows constraints at $95 \%$ CL obtained by combining the experimental error quadratically with the theoretical uncertainty, except for $B(K^0_L \to \pi^0 \nu \bar{\nu})$, $m_{h^0}$ and $\kappa_b$.
} \begin{center} \begin{tabular}{|c|c|c|c|} \hline Observable & Exp.\ data & Theor.\ uncertainty & \ Constr.\ (95$\%$CL) \\ \hline\hline &&&\\ $10^3\times|\epsilon_K|$ & $2.228 \pm 0.011$ (68$\%$ CL)~\cite{PDG2019} & $\pm 0.28$ (68$\%$ CL)~\cite{epsK_DMK_SM} & $2.228 \pm 0.549$\\ $10^{15}\times\Delta M_K$ [GeV] & $3.484\pm 0.006$ (68$\%$ CL)~\cite{PDG2019} & $\pm 1.2 $ (68$\%$ CL)~\cite{epsK_DMK_SM} & $3.484 \pm 2.352$\\ $10^{9}\times$B($K^0_L \to \pi^0 \nu \bar{\nu}$) & $< 3.0$ (90$\%$ CL)~\cite{PDG2019} & $\pm 0.002 $ (68$\%$ CL)~\cite{PDG2019} & $< 3.0$ (90$\%$ CL)\\ $10^{10}\times$B($K^+ \to \pi^+ \nu \bar{\nu}$) & $1.7 \pm 1.1$ (68$\%$ CL)~\cite{PDG2019} & $\pm 0.04 $ (68$\%$ CL)~\cite{PDG2019} & $1.7^{+2.16}_{-1.70}$\\ $\Delta M_{B_s}$ [ps$^{-1}$] & $17.757 \pm 0.021$ (68$\%$ CL)~\cite{HFAG2016} & $\pm 2.7$ (68$\%$ CL)~\cite{DeltaMBs_SM} & $17.757 \pm 5.29$\\ $10^4\times$B($b \to s \gamma)$ & $3.49 \pm 0.19$ (68$\%$ CL)~\cite{HFAG2016, PDG2016} & $\pm 0.23$ (68$\%$ CL)~\cite{Misiak_2015} & $3.49\pm 0.58$\\ $10^6\times$B($b \to s~l^+ l^-$)& $1.60 ~ ^{+0.48}_{-0.45}$ (68$\%$ CL)~\cite{bsll_BABAR_2014} & $\pm 0.11$ (68$\%$ CL)~\cite{Huber_2008} & $1.60 ~ ^{+0.97}_{-0.91}$\\ $(l=e~{\rm or}~\mu)$ &&&\\ $10^9\times$B($B_s\to \mu^+\mu^-$) & $2.8~^{+0.7}_{-0.6}$ (68$\%$CL)~\cite{Bsmumu_LHCb_CMS} & $\pm0.23$ (68$\%$ CL)~\cite{Bsmumu_SM_Bobeth_2014} & $2.80~^{+1.44}_{-1.26}$ \\ $10^4\times$B($B^+ \to \tau^+ \nu $) & $1.14 \pm 0.27$ (68$\%$CL) ~\cite{Trabelsi_EPS-HEP2015, Hamer_EPS-HEP2015} &$\pm0.29$ (68$\%$ CL)~\cite{Btotaunu_LP2013} & $1.14 \pm 0.78$\\ $ m_{h^0}$ [GeV] & $125.09 \pm 0.24~(68\%~ \rm{CL})$ \cite{Higgs_mass_ATLAS_CMS} & $\pm 3$~\cite{Higgs_mass_Heinemeyer} & $125.09 \pm 3.48$ \\ $\kappa_b$ & $1.06^{+0.37}_{-0.35}~(95\%~ \rm{CL})$ \cite{kappa_b_ATLAS} & & $1.06^{+0.37}_{-0.35}$ (ATLAS)\\ & $1.17^{+0.53}_{-0.61}~(95\%~ \rm{CL})$ \cite{kappa_b_CMS} & & $1.17^{+0.53}_{-0.61}$ (CMS)\\ &&&\\ \hline \end{tabular} \end{center} \label{TabConstraints}} 
\end{table*} \begin{itemize} \item The LHC limits on sparticle masses (at 95\% CL)~\cite{SUSY@EPS-HEP2017}-\cite{Strandberg18}: In the context of simplified models, gluino masses $\msg \lesssim 2.1~{\rm TeV}$ are excluded at 95\% CL. The mass limit varies in the range 1800-2100~GeV depending on assumptions. First- and second-generation squark masses are excluded below 1500~GeV. Bottom-squark masses are excluded below 1250~GeV. A typical top-squark mass lower limit is $\sim$ 1100~GeV for $m_{\ti \x^0_1} < 500$ GeV; there is no top-squark mass limit for $m_{\ti \x^0_1} > 500$ GeV. For sleptons heavier than the lighter chargino $\ti \x^\pm_1$ and the second neutralino $\ti \x^0_2$, the mass limits are $m_{\ti \x^\pm_1}, m_{\ti \x^0_2} > 650$ GeV for $m_{\ti \x^0_1} \lesssim 300$ GeV, and there are no $m_{\ti \x^\pm_1}$, $m_{\ti \x^0_2}$ limits for $m_{\ti \x^0_1} > 300$ GeV; for sleptons lighter than $\ti \x^\pm_1$ and $\ti \x^0_2$, the mass limits are $m_{\ti \x^\pm_1}, m_{\ti \x^0_2} > 1150$ GeV for $m_{\ti \x^0_1} \lesssim 700$ GeV, and there are no $m_{\ti \x^\pm_1}$, $m_{\ti \x^0_2}$ limits for $m_{\ti \x^0_1} > 700$ GeV. \item The constraint on ($m_{A^0, H^+}, \tan\beta$) (at 95\% CL) from searches for the MSSM Higgs bosons $H^0$, $A^0$ and $H^+$ at the LHC~\cite{ICHEP2016_ATLAS,Charged_Higgs@ATLAS,SUSY@EPS-HEP2017,MSSM_Higgs@CMS}, where $H^0$ is the heavier CP-even Higgs boson. \end{itemize} \end{appendix}
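The 95\%~CL intervals in the fourth column of Table~\ref{TabConstraints} follow from adding the 68\%~CL experimental and theoretical errors in quadrature and rescaling by 1.96 (Gaussian errors assumed). A minimal sketch of this combination, checked against two entries of the table:

```python
import math

def combined_95cl(exp_err_68, th_err_68):
    """Combine experimental and theoretical 68% CL errors in
    quadrature and rescale to 95% CL (factor 1.96, Gaussian)."""
    return 1.96 * math.sqrt(exp_err_68**2 + th_err_68**2)

# |eps_K| x 10^3: exp +-0.011, theory +-0.28  ->  +-0.549 in the table
print(round(combined_95cl(0.011, 0.28), 3))
# Delta M_Bs [ps^-1]: exp +-0.021, theory +-2.7  ->  +-5.29 in the table
print(round(combined_95cl(0.021, 2.7), 2))
```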
\section{Dataset Analysis} In this section, we analyse the captions collected in the High-Level dataset. To provide insights into the kind of captions collected, we analyse the distribution of the captions across the different axes, also comparing them with the object-centric COCO captions.\footnote{The analysis is performed using the spaCy v3 English pipeline with the {\tt en\_core\_web\_md} model to analyse the parts of speech of the texts.} \subsection{High-Level descriptions} We collected 3 annotations per axis over a set of 14,997 images, for a total of 134,973 captions. An example of high-level descriptions aligned with the original object-centric caption from COCO is shown in Table~\ref{tab:hl-example}. We expect to observe shorter texts in the high-level captions, as the annotators use more abstract words instead of the descriptive details typical of object-centric captions. This is visible in Figure~\ref{fig:cap_len}, which shows that the length of the high-level captions is roughly half that of the object-centric COCO captions. Though shorter, they have a comparable number of unique tokens across all the axes (as reported in Table~\ref{tab:data_stats}); this suggests that the high-level captions are not repetitive and contain a fair amount of lexical variability. A more detailed comparison of the statistics is reported in Table~\ref{tab:data_stats}. \input{tables/data_stats} \begin{figure}[h] \centering \includegraphics[scale=0.49]{img/plots/cap_len.pdf} \caption{Caption length of the HL captions divided per axis (action, scene, rationale) in comparison to the object-centric COCO captions (object). } \label{fig:cap_len} \end{figure} Moreover, as already mentioned, the COCO captions are object-centric, that is, they are collected to objectively represent the visual content. 
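Per-axis statistics like those in Table~\ref{tab:data_stats} (mean caption length, number of unique tokens) can be computed along these lines; a minimal sketch in which a whitespace tokenizer stands in for the spaCy {\tt en\_core\_web\_md} pipeline used in the paper, and the toy captions are invented:

```python
from collections import Counter

def axis_stats(captions):
    """Per-axis statistics: mean caption length (in tokens) and
    number of unique tokens. A whitespace tokenizer stands in for
    the spaCy pipeline used in the actual analysis."""
    tokens = [c.lower().split() for c in captions]
    mean_len = sum(len(t) for t in tokens) / len(tokens)
    vocab = Counter(tok for t in tokens for tok in t)
    return {"mean_len": mean_len, "unique_tokens": len(vocab)}

# Toy scene captions; the real dataset has 3 captions per axis per image.
scenes = ["in a ski resort", "on a snowy mountain", "in the mountains"]
print(axis_stats(scenes))
```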
Although this is convenient for recognition-oriented tasks, these captions lack the situational knowledge required to contextualize the scene; knowledge that is instead an essential part of the cognitive processes underlying the grounding of language in vision. Indeed, as shown in Figure \ref{fig:lemmas_coco}, the most frequent lemmas in the COCO captions for the images used in the HL dataset mostly denote objects visible in the picture. The high-level captions represent the same visual content with the addition of situational knowledge coming from the three axes, and this is also visible in the different lexico-semantic choices in the texts. Because we align them to the same images, the dataset gives us a clean way to explore the relationship between objects and these high-level axes. \paragraph{Disentangling the content across the axes} Asking the same three questions about the same subject for each image allows us to consistently compare the content of our captions across three well-defined axes. We analyse the most frequent nouns in the \textit{scene} axis in order to characterize the kind of scenes mentioned in the captions collected. The most frequent scenes include \textit{street, room} and \textit{road}. These are scene types which can encompass a very broad variety of objects. However, we can also identify scenes for which a narrower range of objects would be diagnostic, for example those related to sport activities like \textit{baseball, tennis, ski, ground} and \textit{court}, or domestic environments like \textit{house, kitchen} and \textit{living} (referring to ``living rooms''). For a more complete view, see Figure~\ref{fig:lemmas_scene}, where we report the top 20 most frequent scenes in the HL dataset. 
\noindent \begin{figure} \centering \includegraphics[width=0.45\textwidth]{img/plots/freq_lemmas_scenes.pdf} \caption{The most frequent lemmas of the captions in the \textit{scene} axis of the HL dataset.} \label{fig:lemmas_scene} \end{figure} \noindent \begin{figure} \centering \includegraphics[width=0.46\textwidth]{img/plots/freq_lemmas_coco_NN.pdf} \caption{The most frequent nouns in the COCO captions of the set of images shared with the HL dataset. The majority of the terms correspond to physical objects visible in the image.} \label{fig:lemmas_coco} \end{figure} Similarly, we can also characterize the \textit{action} and \textit{rationale} axes. We identify the \textit{action} distribution by analysing the verbs contained in the captions. From Figure~\ref{fig:lemmas_action} we observe that the most frequent actions are related to sports activities, consistent with what was observed in the \textit{scene} axis distribution. The most frequent verbs are \textit{play, ski, surf, skateboard}, but we can also find generic actions like \textit{hold, walk, sit} and \textit{eat}. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{img/plots/freq_lemmas_action.pdf} \caption{The most frequent lemmas of the captions in the \textit{action} axis of the HL dataset.} \label{fig:lemmas_action} \end{figure} In the \textit{rationale} axis we analyse both nouns and verbs. This axis is also very interesting because we expect to observe more subjectivity and content variability, with more lemmas denoting intents, mental states and events, including psych verbs. Our hypothesis is that the annotators leverage their personal experience to infer these answers to a greater extent than they do for scene descriptions. The majority of the rationales express intentions; in fact, \textit{want} is by far the most frequent term in the lemma distribution. 
As observed for the other two axes, terms related to sport activities are the most frequent (\textit{play, game, tennis, practice}), but we also find terms related to leisure (\textit{enjoy, fun, vacation, love, family}) along with generic activities (\textit{work, wait, try, eat}). For more details see Figure~\ref{fig:lemmas_rati}. We can combine the information coming from the three axes to perform detailed inter-axis analyses. For example, the most frequent action performed in scenes described as \textit{outside} is \textit{walking}, as shown in Figure~\ref{fig:action_outside}. The systematic disentanglement of the content along three axes can serve as a filter to identify or analyse sub-samples of the data with specific characteristics. For instance, as observed so far, we can confidently say that sports-related activities are predominant in the dataset. \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{img/plots/freq_lemmas_ratio.pdf} \caption{The most frequent lemmas of the captions in the \textit{rationale} axis of the HL dataset.} \label{fig:lemmas_rati} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{img/plots/action_outside.pdf} \caption{Most frequent actions for scenes taken \textit{outside}.} \label{fig:action_outside} \end{figure} \paragraph{Connecting high- and low-level concepts} Enabling the discovery of connections between high- and low-level concepts, namely two very different levels of abstraction, is one of the main goals of this resource. By construction, the alignment provided by the HL dataset allows us to identify concrete objects in images which provide `support' to infer high-level concepts such as actions and rationales. We dive deeper into our analysis and study the connection between high-level concepts related to scene, action and rationale, and the low-level objects present in the aligned COCO captions. We ask: ``What are the most informative objects for a high-level concept (e.g.\ \textit{enjoy}) found in a specific axis (e.g.\ \textit{rationale})?'' We leverage Point-wise Mutual Information (PMI) \cite{church-hanks-1990-word} to find the most informative objects linked to a high-level concept. This is helpful to discover connections between concepts across different levels of abstraction, but also gives clues about the content distributions within the axes. We filter out object mentions with a frequency of less than 100 in the low-level captions. This leaves 475 object-denoting lemmas. Then, we compute the PMI between content words in the high-level captions and all these objects. For example, Figure \ref{fig:pmi_enjoy} shows the nouns in the object-centric captions which have the strongest PMI with the verb `enjoy' in the rationale axis. \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{img/pmi_rationale_enjoy.pdf} \caption{Most informative objects for the word \textit{enjoy} in the \textit{rationale} axis. Font size is proportional to PMI.} \label{fig:pmi_enjoy} \end{figure} We can observe that high-level captions can express different nuances of the same abstract concept. To take another example, \textit{love} (in Figure~\ref{fig:pmi_love}) can refer to the love between an animal and its owner, between two partners (e.g. \textit{wedding}), or the love for sports (e.g. \textit{skate, snowboard}). In the same way, as shown in Figure~\ref{fig:pmi_enjoy}, a general concept like \textit{enjoy} can be characterized by object-level concepts leaning toward a specific nuance of meaning, like sports activities (e.g. \textit{kite, snowboarder, skier}) or places (e.g. \textit{sandy shore, ocean, lake}). More examples are shown in Appendix~\ref{app:pmi}. \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{img/pmi_rationale_love.pdf} \caption{Most informative objects for the word \textit{love} in the \textit{rationale} axis. 
Font size is proportional to PMI.} \label{fig:pmi_love} \end{figure} \subsection{Confidence scores analysis} \label{sec:confidence} Our confidence scores are similar in spirit to the \textit{self-confidence} scores collected in the VQA dataset \cite{antol2015vqa}. However, they differ insofar as our scores are not self-reported by the authors of the captions, but collected from independent annotators. The inclusion of an external judgment plays an important role in determining the reliability of the interpretations made by the annotators during caption collection and, therefore, in shedding light on the extent to which an annotator's interpretation of a scene relies on `shared' or `commonsense' knowledge, or is entirely idiosyncratic. We observe an average confidence score of 4.47 on a Likert scale from 1 to 5 (with a standard deviation of 0.78 and a median of 5) over all the axes. This suggests that, overall, according to independent judges, our high-level captions succeed in capturing shared or `commonsense' high-level interpretations of the scene. Furthermore, the confidence scores provide an additional perspective under which our data can be characterized: by performing an axis-wise analysis of the confidence score distribution (see Figure~\ref{fig:conf_scores}), we observe that the \textit{scene} and \textit{action} captions feature the highest overall confidence, while the \textit{rationale} axis lags behind by a small margin. We expect such differences, since determining the rationale of an action depicted in a static image is challenging: annotators can leverage significant visual cues, but they have access neither to temporal information nor to the subject's stated intentions. Therefore, they need to resort to their own priors and expectations, which can also lead to idiosyncratic interpretations that independent judges -- as in our confidence score analysis -- would find relatively unlikely. 
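Coming back to the PMI analysis of the previous subsection: linking a high-level word $w$ to an object mention $o$ uses the standard $\mathrm{PMI}(w,o)=\log_2\frac{p(w,o)}{p(w)\,p(o)}$, with probabilities estimated from image-aligned co-occurrence counts. A minimal sketch over invented co-occurrence pairs:

```python
import math
from collections import Counter

def pmi_table(pairs):
    """PMI between high-level words and object mentions.
    `pairs` is a list of (hl_word, object) co-occurrences, one per
    image where both appear in the aligned captions."""
    joint = Counter(pairs)
    w_cnt = Counter(w for w, _ in pairs)
    o_cnt = Counter(o for _, o in pairs)
    n = len(pairs)
    return {
        (w, o): math.log2((c / n) / ((w_cnt[w] / n) * (o_cnt[o] / n)))
        for (w, o), c in joint.items()
    }

# Invented counts: 'kite' is informative for 'enjoy' (positive PMI),
# while 'ocean' co-occurs with both words (PMI near zero).
pairs = [("enjoy", "kite"), ("enjoy", "kite"), ("enjoy", "ocean"),
         ("work", "laptop"), ("work", "laptop"), ("work", "ocean")]
print(pmi_table(pairs))
```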
\begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{img/plots/conf_scores.pdf} \caption{Axis-wise confidence score distribution of the high-level captions.} \label{fig:conf_scores} \end{figure} In this context, the confidence scores can provide a measure of the uncertainty of the data, which can be used, for instance, to identify hard samples; an example is shown in Figure~\ref{fig:hard_example}. The scene is hard to interpret even for humans, and the scene captions display more variability and have low confidence scores. The confidence scores could thus be used as an additional signal for training or for evaluation purposes. \noindent \begin{minipage}{\linewidth} \vspace{1em} \begin{minipage}{\linewidth} \centering \includegraphics[width=\textwidth]{img/hard_sample_407740.png} \end{minipage} \vspace{1em} \begin{minipage}{\linewidth} \centering \small \begin{tabular}{c|c|c} \hline Idx & Scene caption & Confidence \\ \hline 1 & in the restaurant & 1 \\ 2 & in the entrance of the library & 1 \\ 3 & the picture is taken outside a library & 3 \\ \bottomrule \end{tabular} \end{minipage} \captionof{figure}{Example of a `hard' sample in the HL dataset, where the scene captions have low confidence scores. \label{fig:hard_example}} \end{minipage} \subsection{Quantifying Lexical and Semantic Diversity} In Section~\ref{sec:confidence}, we showed that in the presence of low confidence there can be variation or disagreement among the high-level captions given by different annotators for the same axis. In such cases, the captions focus on different aspects or refer to different interpretations. Although this phenomenon has been observed for captions with a low confidence score, it is conceivable that it might also happen with high-confidence captions: two captions produced by different annotators, while differing in their interpretation of an image, could nevertheless both be considered highly likely. 
To quantify this phenomenon, in this section we further expand our analysis by studying the lexical and semantic diversity of our captions. \paragraph{Purity score} \label{sec:purity} We leverage the BLEURT score \cite{sellam2020bleurt}, a trainable metric used to evaluate semantic differences in Natural Language Generation, to compute a score measuring the semantic diversity among the high-level captions associated with an image. To do so, we first compute such scores across each axis, and then we combine them to obtain a final score for the item. In this way, we can unpack the semantic diversity item-wise and axis-wise. Let $C$ be the set of high-level captions of a given axis (e.g. scenes) for a given image. For simplicity, we do not report the index of the image and the axis in the following notation. We compute the BLEURT score of a caption as follows: \begin{equation} \label{eq:score} s_i = \mathrm{BLEURT}(c_i, ref) \end{equation} where $s_i$ is the resulting BLEURT score, $c_i$ is a high-level caption, and $ref$ is the set of reference captions defined as follows: \begin{equation} ref := \{c_j \; \vert \; c_j \in C \;\mathrm{and}\; j \neq i \} \end{equation} In other words, $ref$ is the set of remaining captions along the axis; therefore, $s_i$ measures the semantic diversity of the caption with respect to the other captions along the same axis. By averaging the caption-wise scores across a single axis and across all the axes, we obtain a \textit{purity score} measuring the semantic consistency both axis-wise and item-wise. \paragraph{Diversity score} \label{sec:diversity} Along the same lines, we propose the \textit{diversity score} to measure the lexical diversity of the captions. 
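The leave-one-out scoring scheme of Eq.~\ref{eq:score}, which also underlies the \textit{diversity score}, can be sketched as follows. Here a simple token-overlap similarity stands in for the BLEURT (or normalized BLEU) scorer, and the toy captions are invented:

```python
def leave_one_out_scores(captions, score_fn):
    """For each caption, score it against the set of remaining captions
    of the same image and axis (the role of `ref` in Eq. s_i);
    `score_fn` stands in for BLEURT (purity) or normalized BLEU
    (diversity)."""
    return [score_fn(c, [r for j, r in enumerate(captions) if j != i])
            for i, c in enumerate(captions)]

def axis_purity(captions, score_fn):
    """Average the caption-wise scores over one axis of one item."""
    scores = leave_one_out_scores(captions, score_fn)
    return sum(scores) / len(scores)

# Toy stand-in scorer: best Jaccard token overlap with any reference.
def overlap(cand, refs):
    c = set(cand.split())
    return max(len(c & set(r.split())) / len(c | set(r.split())) for r in refs)

caps = ["he is skiing", "he is on skis", "he skis downhill"]
print(round(axis_purity(caps, overlap), 2))
```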
The \textit{diversity score} follows the same logic used to compute the \textit{purity score} introduced in the previous paragraph, but the BLEURT score in Eq.~\ref{eq:score} is replaced by the BLEU score \cite{papineni2002bleu}, normalized between 0 (similar) and 1 (very different). Our score is similar in spirit to self-BLEU \cite{zhu2018texygen}, as it measures the similarity of the captions within their own distribution. However, it is computed only over the captions of the same item and axis. \subsection{Results and discussion} As shown in Figure~\ref{fig:purity_dis}, the purity scores obtained are mostly negative; this is due to lexical variations, to which the BLEURT score is known to be sensitive \cite{sellam2020bleurt}. However, BLEURT is not bounded to any specific interval and is thus usually hard to interpret \cite{sellam2020bleurt} unless considered in relative terms. \begin{figure}[h] \centering \includegraphics[scale=0.48]{img/plots/bleurt_dis.pdf} \caption{Axis-wise purity score distribution.} \label{fig:purity_dis} \end{figure} We therefore use it to compare the semantic purity across items and axes within our dataset. As shown in Figure~\ref{fig:purity_dis}, \textit{action} and \textit{scene} share similar purity score distributions, whereas the \textit{rationale} distribution is more skewed to the left. This shows that the rationales feature a higher semantic diversity (lower overall BLEURT) than the other axes. \begin{figure}[h] \centering \includegraphics[scale=0.54]{img/plots/self-bleu_dis.pdf} \caption{Axis-wise diversity score distribution. The scores have been normalized between 0 and 1.} \label{fig:diversity_dis} \end{figure} The \textit{rationale} axis is also the one featuring the highest lexical diversity, whereas the \textit{scene} and \textit{action} axes have similar distributions. 
This is shown in Figure~\ref{fig:diversity_dis}, where the \textit{rationale} density estimate (in green) has a higher peak, skewed towards the right-hand side, than the \textit{scene} and \textit{action} density estimates (in orange and blue, respectively). We make similar observations for both the \textit{purity} and the \textit{diversity} scores, and this confirms what was observed in the confidence score analysis in Section~\ref{sec:confidence}, namely that the task of determining the rationale of an action from a static image produces more variation and divergent interpretations, leading to higher semantic and lexical diversity. Moreover, we find that both the \textit{diversity} and the \textit{purity} scores positively correlate with the confidence scores (see Figure~\ref{fig:corr}). \begin{figure} \centering \includegraphics[scale=0.55]{img/plots/corr.pdf} \caption{Pearson correlation between confidence, diversity and purity scores.} \label{fig:corr} \end{figure} For more details on the item-based analysis see Appendix~\ref{app:item-based}. \section*{Appendix} \section{Annotation Costs} \label{app:costs} In this section, we report the costs related to the data collection. \paragraph{High-level caption collection} Overall, 1033 participants took part in the caption data collection; they were paid \$ 0.04 per item, corresponding to the hourly minimum wage in the United Kingdom. In total, the data collection cost \$ 1938. \paragraph{Confidence score collection} The qualification task for the confidence scores led to the recruitment of 53 annotators. We found that this task was harder than the high-level caption annotation in terms of complexity, but not in terms of execution time, which was indeed shorter. Therefore, in order to encourage good-quality annotations, we paid \$ 0.04 per item. Considering the time needed to perform the task, this corresponds to 4 times the hourly rate of the minimum wage in the United Kingdom. 
The qualification task and the data collection cost \$ 93 and \$ 1938, respectively. \section{Item-based analysis} \label{app:item-based} An item in the HL dataset is an image along with all the high-level captions of all the axes. Figures~\ref{fig:bleurt_dis_item} and \ref{fig:bleurt_purity_item} show the item-wise \textit{diversity score} and \textit{purity score} distributions, respectively, along with their average value across the whole dataset. An item on the right-hand side of the distribution is systematically more consistent across its axes with respect to the measure considered (\textit{purity} or \textit{diversity}). This information can be combined with the confidence scores to perform a more fine-grained sample selection. For example, in zero-shot testing we might want to test a model on hard samples; we can select items with similar lexicons, low semantic purity, and low confidence scores. \begin{figure}[t] \centering \includegraphics[scale=0.5]{img/plots/diversity_dis_item.pdf} \caption{Item-wise diversity score distribution.} \label{fig:bleurt_dis_item} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.5]{img/plots/bleurt_dis_item.pdf} \caption{Item-wise purity score distribution.} \label{fig:bleurt_purity_item} \end{figure} \section{PMI analysis examples} \label{app:pmi} The PMI analysis can provide interesting insights into the connection between object-level and high-level captions on all three axes. On the \textit{scene} axis, for instance, the PMI gives clues about the extent to which an object can be considered diagnostic for a scene. Two semantically similar scenes like \textit{restaurant} (see Figure~\ref{fig:pmi_restaurant}) and \textit{kitchen} (see Figure~\ref{fig:pmi_kitchen}) share several diagnostic objects, as we would expect. However, we can identify important semantic nuances: the scene \textit{restaurant} contains objects related to food (i.e. 
\textit{pizza, cheese, wine, sandwich}), whereas \textit{kitchen} contains objects related to the preparation of food (i.e. \textit{stove, oven, tray, refrigerator}). Another example is shown in Figure~\ref{fig:pmi_look}, where the most relevant objects for the action \textit{look} encompass a wide variety of contexts, like looking at a screen or a device (e.g. \textit{device, screen, cellphone}) or entertainment (e.g. \textit{zoo, zebra, giraffe}). For more examples see Table~\ref{tab:pmi_top}, which shows the most relevant objects for the top three lemmas in the \textit{scene, action} and \textit{rationale} axes. These semantic differences, while quite easy for humans to interpret, are not usually present in object-centric V\&L datasets. They are made explicit and easy to identify in the HL dataset, where captions with different levels of abstraction are aligned with the same image. \begin{figure}[H] \centering \includegraphics[width=0.45\textwidth]{img/pmi_scene_restaurant.pdf} \caption{Most informative objects for the word \textit{restaurant} in the \textit{scene} axis. Font size is proportional to PMI.} \label{fig:pmi_restaurant} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.45\textwidth]{img/pmi_action_look.pdf} \caption{Most informative objects for the word \textit{look} in the \textit{action} axis. Font size is proportional to PMI.} \label{fig:pmi_look} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.45\textwidth]{img/plots/pmi_scene_kitchen.pdf} \caption{Most informative objects for the word \textit{kitchen} in the \textit{scene} axis. Font size is proportional to PMI.} \label{fig:pmi_kitchen} \end{figure} \input{tables/pmi_examples.tex} \section{Annotation Details} \label{app:ann_details} We reproduce in Figure~\ref{fig:hl_form} the annotation form used for the HL caption collection. Note that the instructions in Figure~\ref{fig:hl_instruct} are always visible to the workers. 
\label{app:ann-details} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{img/annotation_example.png} \caption{Annotation form presented to the worker during the high-level captions collection. The instructions are always visible to the annotator.} \label{fig:hl_form} \end{figure*} Figure~\ref{fig:conf_example} shows the annotation form used for the confidence score collection. Also in this case, the instructions are always visible to the worker and each image is presented along with the original question and the answer. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{img/confidence_example.png} \caption{The confidence scores annotation form. We show the instructions, the image, the question, and the corresponding answer.} \label{fig:conf_example} \end{figure*} \section{Examples} In Table~\ref{tab:more-hl-example} we reproduce further examples of images and their corresponding captions in the HL Dataset. \begin{table*}[h] \begin{tabularx}{\linewidth}{XX|X} \centering \small \textbf{Image} & \textbf{Axis} & \textbf{Caption} \\ \cmidrule{2-3} \multirow{4}{*}{\includegraphics[width=\linewidth]{img/examples/COCO_train2014_000000031057.jpg}} & scene & the picture is taken in a construction site \\ & action & he is operating machinery \\ & rationale & he is clearing up debris with the machine. \\ \cmidrule{2-3} & object-centric (COCO) & A blue flatbed truck with a yellow backhoe behind on a residential street. \\ \midrule \multirow{4}{*}{\includegraphics[width=0.5\linewidth]{img/examples/COCO_train2014_000000002703.jpg}} & scene & The photo is taken in a toilet \\ & action & the subject is sitting on the toilet seat.\\ & rationale & doing it just for fun \\ \cmidrule{2-3} & object-centric (COCO) & A man in blue shirt sitting on toilet next to sink and mirror. 
\\ \midrule \multirow{4}{*}{\includegraphics[width=0.85\linewidth]{img/examples/COCO_train2014_000000001924.jpg}} & scene & the picture is taken at old town street \\ & action & one car is in the picture to turn to old town \\ & rationale & they are coming to old town \\ \cmidrule{2-3} & object-centric (COCO) & A car driving on a street in the town center \\ \midrule \multirow{4}{*}{\includegraphics[width=0.4\linewidth]{img/examples/COCO_train2014_000000293070.jpg}} & scene & in the restaurant. \\ & action & they are having their snacks. \\ & rationale & to taste it. \\ \cmidrule{2-3} & object-centric (COCO) & A dad and his daughter eating a meal at a small table. \\ \midrule \multirow{4}{*}{\includegraphics[width=0.4\linewidth]{img/examples/COCO_train2014_000000543058.jpg}} & scene & this is inside a garage \\ & action & the bike is just standing alone. \\ & rationale & no one is working on or trying to ride the bike. \\ \cmidrule{2-3} & object-centric (COCO) & Custom motorcycle has a wooden barrel as a sidecar \\ \end{tabularx} \caption{More examples of instances of the High-Level Dataset. We show one of the three captions available for each of the three axes collected (\textit{scene, action, rationale}), combined with the object-centric caption from COCO.} \label{tab:more-hl-example} \end{table*} \section{Conclusions} In this paper, we introduce the High-Level Dataset (HL). We extend 14,997 images from the popular COCO dataset with 134,973 human-annotated high-level descriptions, systematically collected over three axes: \textit{scene}, \textit{action}, and \textit{rationale}. We align the high-level captions with the object-centric captions, and we provide human-collected confidence scores to measure the degree of commonsense expressed in the high-level captions. Differently from current V\&L captioning datasets, the high-level captions capture the human interpretation of the scene, allowing for inference and expectations. 
We discuss how they can also be used in combination with low-level captions to improve research in visual commonsense reasoning and multimodal grounding of visual concepts into linguistic expressions, hoping to foster future research in this direction. \section{Data} \label{sec:data} In this section, we describe the protocol used to collect annotations for \textit{scenes, actions} and \textit{rationales}, and the subsequent collection of confidence scores through crowdsourcing. Differently from previous works, such as COCO, where human annotators are instructed to be objective and to mention only the objects clearly visible in the picture, we elicit high-level concepts in the form of captions by encouraging the annotators to rely on their subjective interpretation of the image. \subsection{Data collection} The task of collecting high-level descriptions is by nature hard to define and requires a clear and careful formulation. \paragraph{Pilot} We run a pilot study with the double goal of collecting feedback and defining the task instructions. The pilot is run with 9 participants who were trained on the task, with high proficiency in English and a background in computer science and linguistics. With the results from the pilot, we design a beta version of the task and run a small batch of cases on the crowd-sourcing platform. We manually inspect the results and further refine the instructions and the formulation of the task before finally proceeding with the annotation in bulk. The final annotation form is shown in Appendix~\ref{app:ann-details}. \paragraph{Procedure} The participants are shown an image containing at least one human subject and three questions regarding three aspects or axes, \textit{scene}, \textit{actions} and \textit{rationales}, i.e. 
\textit{Where is the picture taken?}, \textit{What is the subject doing?}, \textit{Why is the subject doing it?} We explicitly ask the participants to use their personal interpretation of the scene and add examples and suggestions in the instructions to further guide the annotators. Moreover, differently from other VQA datasets like \cite{antol2015vqa} and \cite{zhu2016visual7w}, where each question can refer to different entities in the image, we systematically ask the same three questions about the same subject for each image. The full instructions are reported in Figure~\ref{fig:hl_instruct}. For details regarding the annotation costs see Appendix~\ref{app:costs}. \begin{figure*}[h] \small \begin{framed} \textbf{Instructions}: \\ You are going to see some pictures. Each picture involves one or more people ('the subject'). You will be asked some questions about the picture \\ Don't think too much, feel free to give your personal interpretation using your knowledge or common sense. \\ Try to answer using full English sentences. \textbf{If you're not sure what the answer could be, give your best guess.} \\ \underline{Avoid using expressions like "I think" or "I suppose" or "Maybe.} \\ \textbf{Do not propose options or possibilities} saying for instance: something \underline{"or"} something else. \textbf{Make your best guess} and state the one you choose.\\ Write a statement, \underline{\textbf{don't write a one-word answer}}, avoid acronyms or slangs and write a \underline{\textbf{full sentence}}. \begin{enumerate} \item \textbf{Where is the picture taken}: give your best guess about the type of place where the action is happening (for example, "in a ski resort"); \item \textbf{What is the subject doing}: Try to describe what the people are doing as concisely as possible. 
\\ If there is more than one person, try to choose a description that captures what all of them are doing (for example, "They are skiing") \item \textbf{Why is the subject doing it}: here, write your best guess about why the person or persons are doing the action (for example, "They are on a family holiday") \end{enumerate} \underline{The \textbf{What} question and the \textbf{Why} question \textbf{cannot have the same} answer.} \\ \underline{The answers must be \textbf{written correctly in English}, check the spelling and, most importantly, \textbf{don't forget the subject of the}} \\ \underline{\textbf{sentence in your answer} (HE, SHE, IT, THEY).} \end{framed} \caption{Final version of the instructions presented to the workers during the collection of the high-level captions. These instructions are always visible to the worker.} \label{fig:hl_instruct} \end{figure*} \paragraph{Images} As mentioned in Section~\ref{sec:intro}, the COCO dataset has a very explicit object-centric orientation; it therefore provides a good starting point for selecting images, such that we can couple object-centric and high-level captions in a resource-lean approach. Moreover, the alignment of object-centric and high-level captions permits an investigation of the relationship between them. We randomly select 14997 images from the COCO 2014 train-val split. In order to answer questions related to \textit{actions} and \textit{rationales}, we need to ensure the presence of a subject in the image. Therefore, we leverage the entity annotations provided in COCO to select images containing at least one person. The whole annotation is conducted on Amazon Mechanical Turk (AMT). We split the workload into batches in order to ease the monitoring of the quality of the collected data. Each image is annotated by three different annotators; we therefore collect three annotations per axis.
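The person-based image selection described above can be sketched with the \texttt{pycocotools} COCO API. This is a minimal sketch, not the authors' actual code: the annotation file path and the \texttt{select\_person\_images} helper are illustrative, and the function accepts any object exposing the relevant API so it can be tried without the full dataset.

```python
import random

def select_person_images(coco, n_images, seed=0):
    """Pick n_images ids of images containing at least one 'person' instance.

    `coco` is any object exposing the pycocotools COCO API
    (getCatIds / getImgIds), so the real dataset or a stub works.
    """
    person_cat_ids = coco.getCatIds(catNms=["person"])
    candidate_ids = coco.getImgIds(catIds=person_cat_ids)
    rng = random.Random(seed)  # fixed seed for a reproducible selection
    return rng.sample(candidate_ids, n_images)

# With the real COCO 2014 annotations this could be used as (illustrative path):
#   from pycocotools.coco import COCO
#   coco = COCO("annotations/instances_train2014.json")
#   image_ids = select_person_images(coco, 14997)
```

Filtering by the `person` category before sampling guarantees every selected image can support the \textit{action} and \textit{rationale} questions.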
\subsection{Confidence Scores} The high-level descriptions are collected by asking the participants to interpret the scene leveraging their personal experience. Thus, we expect more variation due to the subjectivity of such interpretations, some of which might deviate from general or `commonsense' interpretations of the meaning of visual stimuli. In order to distinguish what can confidently be considered commonsense from mere subjective interpretations, we conduct a separate study where we crowd-source \textit{confidence scores} for each high-level caption. We ask an independent participant to score the likelihood of a high-level description, given the image and the corresponding question, on a Likert scale from 1 to 5. For a detailed example of the form, see Figure~\ref{fig:conf_example} in Appendix~\ref{app:ann-details}. \paragraph{Agreement-based worker selection} The confidence scores are collected following the same protocol used to collect the high-level descriptions. In the pilot study, we found it necessary, given the difficulty of the task, to run a qualification task in which we employed an \textit{automatic worker selection method} to hire qualified annotators from the crowd-sourcing platform. We consider the participants of the pilot as gold annotators (as they were trained on the task) and their annotations as reference annotations. The inter-annotator agreement computed on the reference annotations can be considered the gold inter-annotator agreement $\alpha_{gold}$ of the task. We run the qualification task using the same set of items used in the pilot; then, for each worker $w$, we re-compute the inter-annotator agreement combining the worker's annotations with the reference annotations, obtaining $\alpha_{w}$. We compute an agreement ratio \begin{equation} r = \frac{\alpha_{w}}{\alpha_{gold}} \end{equation} Then, we select the worker $w$ if $r > t$, where $t$ is a threshold empirically set to $0.5$.
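A minimal sketch of this selection rule follows. Note the assumptions: \texttt{agreement} here is a simple mean pairwise exact-match score standing in for the Krippendorff's $\alpha$ used in the paper, and all function and variable names are illustrative rather than taken from the authors' pipeline.

```python
from itertools import combinations

def agreement(annotations):
    """Mean pairwise exact-match agreement across annotators.

    `annotations` is a list of equal-length label sequences, one per
    annotator. A simple stand-in for Krippendorff's alpha.
    """
    pairs = list(combinations(annotations, 2))
    total = sum(
        sum(a == b for a, b in zip(x, y)) / len(x) for x, y in pairs
    )
    return total / len(pairs)

def keep_worker(worker_labels, gold_annotations, t=0.5):
    """Apply the selection rule r = alpha_w / alpha_gold > t."""
    alpha_gold = agreement(gold_annotations)
    alpha_w = agreement(gold_annotations + [worker_labels])
    return (alpha_w / alpha_gold) > t
```

A worker whose labels agree with the references keeps the combined agreement close to $\alpha_{gold}$ (ratio near 1), while a worker who labels at odds with the references drags the ratio below the threshold and is rejected.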
This is equivalent to choosing workers such that their contribution does not negatively affect $\alpha_{gold}$ by a factor greater than $t$. In other words, workers are selected if they are reasonably compliant with the gold annotators. \paragraph{Quantifying grammatical errors} We ask two postgraduate students with expertise in linguistics to correct grammatical errors in a sample of 9900 captions, 900 of which are shared between the two experts. They are shown the image-caption pairs and are asked to edit the caption whenever they identify a grammatical error. The most common errors reported by the annotators are: \begin{itemize} \item Misuse of prepositions; \item Wrong verb conjugation; \item Pronoun omissions. \end{itemize} In order to quantify the extent to which the corrected captions differ from the original ones, we compute the Levenshtein distance \cite{1966SPhDL} between them. We observe that 22.5\% of the sampled captions were edited, and only 5\% have a Levenshtein distance greater than 10. This suggests a reasonable level of grammatical quality overall, with no substantial grammatical issues. This can also be observed from the Levenshtein distance distribution reported in Figure~\ref{fig:lev_dist}. Moreover, the human evaluation is quite reliable, as we observe a moderate inter-annotator agreement ($\alpha = 0.507$, \cite{krippendorff2018content}) computed over the shared sample. \begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{img/plots/lev_dist.pdf} \caption{Distribution of the Levenshtein distance computed between the original and the corrected high-level captions in a sample of 9900 captions. } \label{fig:lev_dist} \end{figure} \section{How to use this data} In light of what we have observed so far, we envision a wide set of use cases and tasks enabled by the HL dataset.
We group them by defining two main directions: \begin{enumerate} \item Image-to-text generation tasks \item Multimodal grounding analyses \end{enumerate} \paragraph{V\&L generative tasks} Under the generative perspective, our captions enable image captioning tasks that encompass a broader range of visually grounded linguistic descriptions than the highly object-centric, `conceptual' descriptions which dominate the captioning literature, as emphasized by \citet{hodosh2013framing}. The high-level captions contain human interpretations which can be exploited to generate more natural captions. Moreover, the decomposition along three axes can be exploited to compose narratives of the image, as in image paragraph generation \cite{wang2019convolutional} and visual storytelling \cite{huang2016visual, hu2020makes}. The captions can also be used in combination with the question each axis corresponds to, in order to generate micro-dialog scenarios. The high-level captions are arguably also more natural and human-like, since they were collected without enforcing any restriction on the content to be described. This property could be exploited to generate captions containing abstract concepts which could, in turn, be justified by the generator with reference to the object-level information that supports a particular interpretation. In this way, the dataset can be leveraged to provide both captions and explanations. Furthermore, the confidence scores can be used to identify hard samples in the data or even to generate confidence scores reflecting human confidence. This information can serve both for evaluation purposes and for designing a custom training strategy, e.g. curriculum learning \cite{bengio2009curriculum}, which could take into account the difficulty of the labels, e.g. soft-label generation \cite{thiel2008classification}.
\paragraph{Multimodal Grounding} The lack of abstract linguistic concepts in common object-centric V\&L datasets makes this resource a useful tool to benchmark the grounding capabilities of large pre-trained V\&L models, which are rarely exposed to this kind of data. Along these lines, \citet{cafagna2021vision} use a sub-sample of the scene descriptions in the HL dataset to study the capability of V\&L models to understand scene descriptions in zero-shot settings, finding that only large-scale pre-trained V\&L models have enough generalization capability to handle unseen high-level scene descriptions. In the same direction, \citet{cafagna2022understanding} analyse the impact of exposure to high-level scene descriptions on multimodal representations pre-trained on object-centric captions. They show that exposure to high-level concepts mainly affects the model's attentional resource allocation over the visual input, even though the low-level concepts learned during pre-training provide enough signal to support and easily adapt to scene descriptions during fine-tuning. This is also supported by \citet{wang2022understanding}, who find that low-level concepts are needed to learn higher-level concepts, though this does not hold in the other direction. \section{Introduction} \label{sec:intro} Conceptual grounding broadly refers to the idea that language is grounded in perception \cite{barsalou2008grounded}. We process perceptual signals under the assumption that we all see and hear the same things. Although interpretations of scenes and events are to some extent shared across individuals, such interpretations can license subjective inferences which inform not just what we express through language, but also what we choose to assume and leave unexpressed \cite{bisk2020experience}.
Among the many modalities available in the perceptual spectrum, visual grounding has always been of primary interest, as it provides a relatively straightforward way to link linguistic expressions to physical objects. Consistent with this claim, a glance at many widely used datasets and models in image captioning reveals a bias towards `object-centric' descriptions, whereby models are trained on image-text pairs where the text consists of explicit mentions of objects visible in the scene. However, experience and perception also motivate other, non-object-centric ways of talking about the world, for example, when we talk about scenes, or when we infer a person's action and its underlying rationale. In such cases, as we move away from object-centric perspectives, grounding becomes a more difficult enterprise, because we no longer have a simple correspondence between objects and expressions; instead, world knowledge and subjective experience play an important role. Language, in this context, embeds world knowledge and subjective experience together, connecting to high-level concepts, namely abstract concepts, which are not necessarily directly linked to physical concepts or objects, but are rather the by-product of human assumptions and interpretations. For example, the object-centric description in Table~\ref{tab:hl-example} certainly describes the visual content, though it is based mainly on the recognition of objects in the scene. The three high-level captions (\textit{scene, action, rationale}), instead, provide three different perspectives on the scene among the many possible ones, perspectives triggered by expectations and assumptions based on subjective experience and world knowledge.
In this work, we tackle the issue of grounding high-level linguistic concepts in the visual modality, proposing the High-Level (HL) Dataset: a resource for Vision and Language (V\&L) modelling which aligns existing object-centric captions with human-collected high-level descriptions of images along three different axes: \textit{scenes, actions} and \textit{rationales}. The high-level captions capture the human interpretation of the scene, providing abstract linguistic concepts complementary to the object-centric captions used in current V\&L datasets, e.g. in COCO \cite{lin2014microsoft}. We take a step further and collect \textit{confidence scores} from independent annotators, which serve to shed light on the extent to which the high-level captions in the dataset correspond to widely-shared assumptions or to idiosyncratic interpretations. Our contributions are: \begin{itemize} \item We present and release the HL dataset, a new V\&L resource, grounding high-level captions into images along three different axes and aligned with existing object-centric captions; \item We describe the collection protocol and provide an in-depth analysis of the data; \item We show how our data can flexibly be used to explore the linguistic grounding capabilities of pre-trained multimodal models and to enable new or extend existing downstream tasks. \end{itemize} \begin{table*}[tp!] \begin{tabularx}{\linewidth}{XX|X} \centering \small \textbf{Image} & \textbf{Axis} & \textbf{Caption} \\ \cmidrule{2-3} \multirow{4}{*}{\includegraphics[width=\linewidth]{img/main_example_3338041245.jpg}} & scene & the picture is shot in a ski resort \\ & action & they are just relaxing after a round of skiing \\ & rationale & they want to have a good time together \\ \cmidrule{2-3} & object-centric (COCO) & a woman and a boy sitting in the snow outside of a cabin. \\ \end{tabularx} \caption{Example of High-Level captions.
For each of the three axes collected (\textit{scene, action, rationale}), one of the three available captions is shown, together with an object-centric caption from COCO.} \label{tab:hl-example} \end{table*} \section{Related work} \citet{hodosh2013framing}, in their influential work, argue that image captioning is mostly interested in concrete descriptions of the depicted scene, entities, their attributes, and relations, as well as the events they participate in. These kinds of descriptions are also called conceptual descriptions, because they focus on what is actually in the image, and they differ from the so-called non-visual descriptions, which also provide additional background information. This line of thought has been broadly followed in the field, resulting in datasets emphasizing object-centric content in V\&L tasks involving text generation, such as image captioning \cite{lin2014microsoft,sharma-etal-2018-conceptual,agrawal2019nocaps} and visual question answering \cite{antol2015vqa, zhu2016visual7w}. For instance, in the instructions used to collect COCO \cite{lin2014microsoft}, the annotators are explicitly asked to mention, in the caption, entities visible in the image. This is beneficial for enhancing cross-modal interactions: \citet{zhang2021vinvl} show that improving the visual backbone on object recognition tasks improves the performance of visio-linguistic models in downstream tasks. \citet{li2020oscar} show that using object labels to bridge the two modalities improves the grounding of low-level concrete concepts. Object-centricity is also a feature of widely-used web-scraped datasets: in the Conceptual Captions dataset, for instance, \citet{sharma-etal-2018-conceptual} filtered out all image-caption pairs which did not contain, in the caption, a set of object labels automatically identified by a computer vision model. Some efforts have been made to understand how low-level concepts improve generalization capabilities and connect to high-level concepts.
Object-centric captions help to improve generalization over unseen objects \cite{hu2021vivo} and play a role in the model's understanding of abstract concepts \cite{cafagna2022understanding, wang2022understanding}. In our work, we are interested in the relations between what \citet{hodosh2013framing} refer to as `conceptual' and `non-visual' descriptions, which we reframe as a distinction between low-level (object-centric) and high-level (abstract) concepts in multimodal learning. We release a novel dataset to foster research in this direction. Nowadays, non-visual aspects such as inferences and temporal and causal relationships are becoming of interest \cite[e.g.,][]{park2020visualcomet}. In order to perform cognitive-oriented tasks in more realistic settings, deeper integration between the two modalities is required; this is also a major motivation for our work. In visual storytelling, for instance, the model has to understand actions and interactions among the entities \cite{huang2016visual, hu2020makes, lukin2018pipeline}. Action grounding is also important for predicting motivations in visually-grounded recognition tasks \cite{vondrick2016predicting} and for explaining automatically generated descriptions of images \cite{hendricks2018generating}. Actions and intentions are paramount to performing commonsense and temporal reasoning on visual inputs. Along these lines, \citet{park2020visualcomet} build a dataset of dynamic stories in the shape of graphs, on top of a static image, where the model has to predict prior and subsequent events along with rationales for the actions. Our work follows this direction in spirit, as we align high-level \textit{actions} and \textit{rationales} with low-level descriptions along with static images. Some work has also been done to test multimodal models' grounding capabilities from a more linguistic perspective.
\citet{parcalabescu2021valse} build a benchmark to test models on a variety of linguistic phenomena, such as spatial relations, counting, and existence. \citet{pezzelle2020different} assess the integration of complementary information across modalities in V\&L models, and \citet{thrush2022winoground} test multimodal models on compositional reasoning. In this context, the HL dataset proposed here offers another benchmark for V\&L models' understanding of high-level concepts that relate to objects visible in the image but are never explicitly mentioned in the text, where this information is left implicit.
\section{Introduction} Unmanned Aerial Vehicles (UAVs) are increasingly being deployed in wireless communication networks, largely due to their low cost and unrestricted mobility~\cite{tutorialsofUAV}. Notable usage examples include the Google Loon project~\cite{googleloon} and the Facebook Aquila project~\cite{facebookAquila}. In these examples, UAVs serve as mobile Base Stations (BS) directly providing wireless communication for users, or as relays between devices and fixed base stations. UAVs-assisted networks have also found applications in fields that require reliable communication or assured identity~\cite{huaweiwhitepaper}, such as precision agriculture, search and rescue, and parcel delivery, as discussed next. With the recent boom in the number of mobile and Internet-of-Things (IoT) devices, swarms of UAVs may be needed to assist in establishing communication networks~\cite{huaweiwhitepaper,huaweivideo}. For example, in precision agriculture, multiple UAVs are deployed to assist in irrigation management, crop health monitoring, and cattle herding. These are labor-intensive tasks due to the dense distribution of crops and the continuous mobility of animals. The advantages of swarm UAVs in such cases include time savings and cost reduction~\cite{swarm_UAVs_review}. In search and rescue tasks, a swarm of UAVs is able to work cooperatively in extremely harsh disaster environments. UAVs are able to quickly and efficiently search an area, identifying victims and their status, then communicating such information to ground assets~\cite{SARescue_UAV}. Autonomous driving is also benefiting from advances in UAVs-assisted Wireless Communication Networks (U-WCNs). As an example, Vehicle-to-Everything (V2X) communication systems will allow vehicles to connect to everything using UAVs, which can act either as a medium of data transmission between vehicles and base stations or as security enhancers~\cite{V2X}. Finally, one of the most immediate applications of swarm UAVs is for delivery service~\cite{AmazonDelivery}.
In this scenario, the UAVs help deliver packages to customers' backyards and rendezvous with delivery trucks. All these applications rely not only on the safe flight control of each UAV but also on their ability to communicate wirelessly and reliably. Traditionally, deploying UAVs in wireless communications systems faced challenges such as complicated channel models~\cite{surveychannel,zhanghanbook}, dynamic cell association~\cite{zhanghanbook}, energy constraints~\cite{zhanghanbook} and legislative regulations~\cite{Khamvilai2021}. With the continuing increase in the number of deployed UAVs, new challenges associated with multi-agent decision making also arise. Such challenges include multi-agent trajectory planning~\cite{MFG_NN}, multi-agent resource allocation~\cite{jointaccessselect,DDPG-RA, MFQ} and user association~\cite{MILP_cluster}. Game theory provides tools to solve multi-agent decision problems and to analyze the interactions among various agents in a communication network. Game theoretic concepts such as Nash or correlated equilibria are well suited for U-WCNs~\cite{owen:Game-Theory}. With the increasing number of UAVs required to accomplish complex tasks, however, traditional game theoretic algorithms may become intractable. One possible approach to tackling this challenge is to leverage machine learning techniques such as function approximation~\cite{neuralDynamicProgramming, Mnih2015_DQN}, policy gradient~\cite{DDPG}, and multi-agent actor critic~\cite{RL_sutton, lowe2017multi}. While abundant literature exists for game theory~\cite{survey-UAV-2,GT_survey} and machine learning~\cite{DRL_wireless,ML-UAV-survey} approaches to U-WCN problems, few, if any, offer a unified treatment of the two areas. This survey attempts to fill this void by first reviewing the existing literature, then providing linkages between game theoretic and machine learning techniques for UAVs-assisted wireless communication systems.
\subsection{Prior surveys} There are many surveys of UAVs-assisted wireless communication networks~\cite{tutorialsofUAV,civil,router, fanet,fotouhi2018survey,surveychannel,LAP_survey}. The authors of~\cite{civil}, for example, reported on the characteristics and requirements of UAV networks for multiple civilian applications. These include search and rescue, area coverage (e.g., monitoring and surveillance), network coverage (e.g., relays/base stations/data mules), delivery, and construction. In particular, the Quality-of-Service (QoS) requirements, network-relevant mission parameters, data requirements, connectivity, adaptability, safety, and privacy were discussed. Reference \cite{fotouhi2018survey} covered a variety of cellular-specific issues, such as Third Generation Partnership Project (3GPP) development, vendor prototypes, regulations, and cyber-security issues that affect cellular UAVs, as well as potential business models. The authors also proposed multiple future research directions, such as UAV simulators, advanced UAV mobility control based on image processing and deep learning, new antenna designs to achieve higher data rates, physical reliability, and mobile edge computing. Reference~\cite{router} discussed some important issues in UAV communication networks, such as the characteristics of UAV networks and the protocols in various layers that assist in greening the network. The authors compared the advantages and disadvantages of various network structures (e.g., star vs. mesh), different routing protocols (e.g., static, proactive, on-demand (reactive), and hybrid), and existing seamless handovers. Reference~\cite{surveychannel} presented a comprehensive and unified review of UAVs' air-to-ground channel models. Reference \cite{fanet} focused on applications of Flying Ad-hoc Networks (FANETs) based on UAVs, such as traffic monitoring, agricultural management, military defense, and relay networks.
Furthermore, the authors considered the communication challenges in FANET systems. These challenges include high mobility, frequent topology changes, and requirements for minimal delay and high reliability. Reference \cite{LAP_survey} offered an overall view of High Altitude Platform (HAP)-based and Low Altitude Platform (LAP)-based communication networks, as well as Airborne Communication Networks (ACN). Reference \cite{tutorialsofUAV} presented a description of the potential applications and benefits of UAVs in wireless communication networks. It briefly described using game theory, Machine Learning (ML), and optimization theory to solve certain challenges in U-WCNs, such as Three-Dimensional (3D) deployment and energy optimization. Earlier research has focused on connecting game theory and wireless communications. As an example, Reference~\cite{survey-UAV-2} presented a number of game-theoretic solutions for energy consumption optimization, network coverage enhancement, and connectivity improvement in wireless communication systems using UAVs. In particular, the authors proposed Mean-Field Games (MFG) to solve problems in \textit{massive} UAV networks. Reference \cite{GT_survey} utilized game-theoretic tools to model and analyze UAVs-assisted networks, where problems within the physical layer, the data link layer, the network layer, the transport layer, and the application layer were modeled and studied using game formulations such as potential games, Bayesian games, and mean field games. Likewise, there are many surveys describing the use of machine learning methods in conventional wireless communication systems and UAVs-assisted wireless communication networks. Reference \cite{DRL_wireless} provided a comprehensive review of Deep Reinforcement Learning (DRL) in communication and networking.
The authors reviewed recent DRL methods addressing issues such as dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation. That review, however, only briefly touched upon the recent applications of DRL in UAVs. In~\cite{ML-UAV-survey}, the authors provided an overview of ML techniques in U-WCNs, covering propagation channel modeling, resource management, security, and positioning. Other open issues for ML applications in UAV-based networks were also identified in both the networking and security areas. Reference \cite{ML_UAV_intro} listed several applications of ML techniques (e.g., supervised learning and reinforcement learning) in UAV-based Radio Access Networks (RAN). These applications include radio resource allocation, design of collectors and relays, choice of the type and number of UAVs, positioning of UAVs acting as BSs, and the design of a mobile cloud. We summarize the contributions of these surveys in Table \ref{table:relevantSurvey}. Note that, to the best of our knowledge, none of the previous surveys for U-WCNs has dealt with the intersection of machine learning and game theory. With the increasing interest in wireless communication applications requiring a large number of UAVs, ours appears to be the first survey that presents a unified view of the two fields. \subsection{Game theory and machine learning in UAVs-assisted wireless communication networks} Game theory and machine learning are two pillars that support applications in UAVs-assisted wireless communication networks.
Notable examples include resource management~\cite{zhanghanbook,jointaccessselect,coalition,Charging,Koulali2016AGS,POCA,UAV_offloading,QL,double_Qlearning,DDPG-RA}, positioning~\cite{zhang2021multiagent,QL-emergency,non-coop-coverage}, trajectory planning~\cite{zhanghanbook,zhang2021multiagent,MFG-movementcontrol}, interference management~\cite{SG_Antijamming_Bayesian,Attack-PT-Q}, channel modeling~\cite{ML-UAV-survey,channelmodeling} and security~\cite{jamming,BayesianGame,SG_Antijamming_Bayesian,Attack-PT-Q}. Fig.~\ref{fig:ML_GT_application} presents various applications of machine learning and game theory in U-WCNs. We next give a brief introduction to each of these applications and present a more detailed discussion in later sections. \textbf{Positioning~\cite{zhang2021multiagent,QL-emergency,non-coop-coverage}:} The height and elevation angle of a UAV impact its coverage performance and link reliability over a service area \cite{zhanghanbook}. Furthermore, the optimal density of UAVs in an area is subject to safety and interference constraints. Research related to UAV positioning focuses on maximizing the coverage of the system while minimizing interference. \textbf{Path/trajectory planning~\cite{zhanghanbook,zhang2021multiagent,MFG-movementcontrol}:} Subject to energy limitations, the trajectories of UAVs in a network need to be optimized, with link quality, interference and collision avoidance taken into consideration. \textbf{Security~\cite{jamming,BayesianGame,SG_Antijamming_Bayesian,Attack-PT-Q}:} Jamming and eavesdropping between UAVs and devices are two major security problems in U-WCNs. Both induce huge economic and political losses to companies and users. \textbf{Resource management~\cite{zhanghanbook,jointaccessselect,coalition,Charging,Koulali2016AGS,POCA,UAV_offloading,QL,double_Qlearning,DDPG-RA}:} Mobile devices and IoT devices have limited battery lifetime and constrained storage capability.
As a result, in a UAV-cellular network, the UAVs need to support data caching and content relaying. Each UAV may be assigned different tasks (caching or relaying) and may also select different users to serve. The objective of resource management is to maximize the revenue of the operator(s) by optimizing the task assignments and user selection. Furthermore, if the UAVs belong to different operators, competition among the operators also needs to be considered. \textbf{Interference management~\cite{SG_Antijamming_Bayesian,Attack-PT-Q}:} Interference exists in both traditional terrestrial networks and UAVs-assisted networks. For the latter, the interference comes from three sources: other communicating UAVs, mobile users, and ground control stations. \textbf{Channel modeling~\cite{ML-UAV-survey,channelmodeling}:} Operating in a dynamic 3D environment, UAVs require more complex channel models that account for weather, obstacles, and the Doppler shift effect. \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{ML_app.jpg}} \caption{Scenarios of machine learning and game theory in UAVs-assisted wireless communication networks.} \label{fig:ML_GT_application} \end{figure} \subsection{Our contribution} As described earlier, there are many surveys of the application of game theory and machine learning methods to vehicular networks \cite{survey-V2V}, smart grids \cite{survey-smartGrid} and wireless sensor networks \cite{WSN-survey}. To the best of our knowledge, such surveys focus on either machine learning~\cite{DRL_wireless, ML-UAV-survey,ML_UAV_intro} or game theory tools~\cite{survey-UAV-2, GT_survey}. The present survey attempts to provide the first unified treatment connecting these two well-studied areas with their applications in U-WCNs. Rather than simply combining existing surveys, we examine the intrinsic connections between game theory, machine learning, and their applications to U-WCNs.
\begin{sidewaystable}[htbp] \caption{Relevant surveys and magazines in UAVs-assisted wireless communication networks (N = No, Y = Yes, B = Brief introduction).} \vspace{+10pt} \resizebox{\textwidth}{!}{ \begin{tabular} {c|c|c|c|c } \hline \textbf{References} & \textbf{Topics} &\makecell[c]{\textbf{Game} \\[-2pt] \textbf{Theory}} & \makecell[c]{\textbf{Machine}\\[-2pt] \textbf{Learning}} & \makecell[c]{\textbf{Potential}\\[-2pt] \textbf{Challenges}} \\ \hline \cite{civil} & \makecell[c]{Characteristics and requirements\\[-2pt] of UAV networks} & N& N & Y\\ \hline \cite{router} & \makecell[c]{Characteristics, routing, \\[-2pt] handover scheme in UAV networks} & N & N & Y\\ \hline \cite{surveychannel} & Air-to-ground channel model & N & N & Y\\ \hline \cite{fanet} & Applications and challenges of FANETs & N & N &Y \\ \hline \cite{fotouhi2018survey} & \makecell[c]{Standardization, regulations,\\[-2pt] security, future direction} & N& N& Y \\ \hline \cite{LAP_survey} & LAP, HAP, ACN & N & N & Y\\ \hline \cite{tutorialsofUAV} &\makecell[c]{ Opportunities, challenges,\\[-2pt] open problems, and mathematical tools} & B & B & Y\\ \hline \cite{survey-UAV-2} & \makecell[c]{Game theoretic solutions for energy,\\[-2pt] coverage optimization, task allocation, etc.}& Y & N & Y\\ \hline \cite{GT_survey} & \makecell[c]{Game theoretic tools for modeling\\[-2pt] and analyzing UAV-assisted networks} & Y & N & Y\\ \hline \cite{DRL_wireless} & DRL in communications and networking & N & Y &Y \\ \hline \cite{ML-UAV-survey} & ML applications in UAV-based networks & N & Y &Y \\ \hline \cite{ML_UAV_intro} & ML in UAV-based RAN & N& Y & Y\\ \hline \makecell[c]{Our survey} & \makecell[c]{ Game theory and machine learning \\ techniques in UAVs-assisted wireless\\[-2pt] communication, challenges and solutions } & Y & Y & Y \\ \hline \end{tabular} } \label{table:relevantSurvey} \end{sidewaystable} \subsection{Organization} The remainder of this article is organized as follows.
In Section~\ref{sec:application}, we discuss the potential applications and challenges of UAVs for wireless communication. In Section~\ref{sec:GT}, we present some game theoretic techniques used to analyze wireless communication systems with UAVs. In Section~\ref{sec:ML}, we introduce machine learning algorithms for UAVs-assisted wireless communication systems. In Section \ref{sec:intersection}, we discuss the intersection of game theory and machine learning for U-WCNs, present open problems, and list several promising research directions. Section \ref{sec:conclusion} concludes this survey. \section{Wireless communication with UAVs: motivating applications and challenges}\label{sec:application} Depending on their flying altitude, UAVs are categorized into high-altitude platforms ($>17$ km) and low-altitude platforms. Low-altitude platforms have the advantages of higher flexibility, lower cost, lower latency, and easier maintenance, making them more suitable for Fifth-Generation wireless (5G) and IoT services. High-altitude platforms, on the other hand, provide more sustainable wireless network coverage for rural environments. This article focuses on low-altitude platforms, and more specifically on unmanned aerial drones. \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{Applications.png}} \caption{Applications of UAVs-assisted networks.} \label{fig:UAV-applications} \end{figure} UAVs play different roles in various wireless communication settings. Fig.~\ref{fig:UAV-applications} shows some of those roles in future 5G and IoT networks. On one hand, UAVs may be used as aerial base stations in the 5G and beyond eras. Such UAVs improve the reliability of wireless links in Device-to-Device (D2D) and Vehicle-to-Vehicle (V2V) communications. On the other hand, aerial platforms are suitable for maintaining fast and ubiquitous connectivity whenever ground wireless networks fail after natural disasters~\cite{naturaldisaster}.
UAVs can also serve as relays for communication among base stations and user devices. In addition, UAVs may themselves be flying users within a cellular network, for delivery applications and in Virtual Reality (VR) / Augmented Reality (AR) situations where UAVs capture desired information about a specific area and transmit it to remote users in real time \cite{tutorialsofUAV}. In summary, UAVs can boost the performance of existing ground wireless networks in terms of coverage, capacity, delay, and overall quality of service. Despite the ubiquity of their potential applications, many challenges remain for the wide deployment of UAVs. The first is the complexity of the UAV-user channel model. Air-to-Ground (A2G) channels are susceptible to blockage and affected by weather, altitude, elevation angle, type of UAV, and propagation environment. Taking A2G channel modeling as an example, there is still no established method for channel measurement in urban and rural areas under various weather conditions. In a dynamic UAV-to-UAV communication network, channel modeling is further complicated by the time-varying nature of the channel and the Doppler effect. The second challenge is the deployment and trajectory optimization problem. When integrating UAVs into communication systems, one would like to minimize the transmission latency and energy consumption of users while simultaneously maximizing the spectral efficiency and coverage performance. As a result, it is necessary to optimize the locations and trajectories of UAVs, as well as the bandwidth/power allocation among them. A framework is therefore needed that can dynamically manage these various resources while keeping the interference from UAVs to ground users at acceptable levels. In addition, UAVs that act as users within cellular networks require a dynamic handover mechanism design and new scheduling schemes.
Finally, as the use cases of UAVs increase (e.g., online video streaming, medical delivery), various security challenges may arise. For example, an attacker may disrupt a UAV's data transmission or send malicious data causing irregular movement and collisions, ultimately resulting in significant losses \cite{security}. To address the 3D location and trajectory design problem, Reference \cite{tutorialsofUAV} proposed using convex optimization and optimal transport theory. Reference \cite{UAV_op} presented a framework to jointly optimize the 3D placement and mobility of UAVs, device-UAV association, and uplink power control. This framework breaks the complicated optimization problem into two sub-problems and solves them in an iterative manner. Many similar problems are transformed into simpler but still challenging mixed integer programming problems, which either cannot be solved by conventional optimization methods due to their non-convexity, or still require substantial computational resources. Game Theory (GT) methods were introduced to assist in the modeling and solution of such optimization problems. Game theory provides a solid foundation for distributed decision making in UAVs-assisted wireless networks. In a game-theoretic framework, UAVs, BSs, and User Equipments (UEs) are regarded as players in a game, while the energy, spectrum, 3D positions, and flight times constitute the strategy spaces. This allows us to frame the optimization problem using existing machinery developed for stochastic differential games, coalitional games, mean-field games, contract theory, and others. With the development of high-performance computing hardware and the availability of large data sets, Machine Learning (ML) techniques have recently been applied to many fields due to their ability to ``learn'' from interaction with the environment.
For UAVs-assisted wireless communication systems, ML algorithms enable UAVs to promptly adjust their positions, trajectories, flight directions, and motion control to serve the ground users. Moreover, ML algorithms may also be used to build a 3D channel model for UAVs~\cite{tutorialsofUAV}. Further synergies with optimization theory and game theory enlarge the range of problems that machine learning can address in UAVs-assisted wireless communication systems. For example, Reference \cite{MILP_cluster} combined Mixed Integer Linear Programming (MILP) with clustering methods to maximize the weighted sum rate of UAV-served users and the total number of D2D-connected users. This method reduces the time complexity of solving such problems while maintaining performance comparable to classical MILP methods. In the following two sections, we present a detailed summary of game theory and machine learning techniques in the field of UAVs-assisted wireless communication networks, together with some state-of-the-art algorithms. \section{Game theory in UAVs-assisted wireless communication} \label{sec:GT} Game theory studies strategic interactions among rational players, i.e., problems where multiple rational players interact strategically, each aiming to maximize its own benefit. Unlike most traditional optimization methods, game theory often provides efficient and robust distributed algorithms and has thus found extensive applications in wireless networks for modeling, analyzing, and designing distributed schemes~\cite{GT_survey,GT-wireless}. For UAVs-assisted wireless communication systems, one needs to resolve the load balancing, offloading, and distributed resource management problems among UAVs, BSs, and UEs. In addition, trade-offs between energy, spectrum, and 3D locations also require attention. In this article, we focus on game theoretic concepts and methods to solve both classes of problems in U-WCNs, as described next.
In general, a game~\cite{GT1991} is composed of three elements: the set of players, denoted by ${\mathcal{N}=\{1,2,...,i,...,n\}}$; the strategy space of each player $i$, denoted by ${S_i=\{s_1,s_2,...,s_m\}}$; and the payoff function $u_i$, i.e., the reward that each player receives at the end of the game, contingent upon the actions of all other players. In a UAVs-assisted wireless communication network, the players may be UAVs, ground users, or base stations. The strategies may include beaconing-period scheduling, task servicing, UAV relocation, offloading, channel assignment, and intruder evasion. The payoff may be chosen as the throughput, Signal-to-Interference-plus-Noise Ratio (SINR), delay, or the number of nodes covered, depending on the application~\cite{GT_survey}. A game is static if all players make decisions simultaneously without knowledge of the other players' strategies. It is dynamic when the players make decisions sequentially or repeatedly. Based on whether the information structure is known or not, games may be divided into two categories: complete-information games and incomplete-information games. In addition, a game is characterized as a perfect-information or imperfect-information game based on whether all players know the historical actions of each other when they take their actions. Based on whether the players cooperate to optimize a common goal or not, games can also be divided into cooperative games and non-cooperative games. The following list defines additional terms from the game theory literature: \begin{itemize} \item Stochastic game~\cite{Stochasticgame}: The game moves to a new state governed by transition probabilities that depend on the previous state and the actions taken. The total payoff is defined as the discounted cumulative sum of the payoffs received during the course of the game.
\item Nash equilibrium: When all players operate at a Nash equilibrium, no unilateral deviation of an agent from this equilibrium point can improve that agent's total payoff. A formal definition is: \\ \begin{rmk} (Nash equilibrium \cite{zhanghanbook}): We denote an action profile of the players as $a=\{ a_1, a_2, ..., a_M\}$. An action profile $a^* = \{ a_1^*, a_2^*,..., a_M^* \}$ is a pure-strategy Nash Equilibrium (NE) if and only if no player could improve its utility $u_m$ by deviating unilaterally, i.e., \begin{equation} u_m (a^*_m, a^*_{-m}) \geq u_m(a_m, a_{-m}^*) \quad \text{for any action $a_m$}. \end{equation} \end{rmk} \end{itemize} In the following subsections, we introduce game theoretic concepts and their corresponding applications in U-WCNs. A more detailed description of game theory may be found in \cite{GT_survey} and \cite{GT_survey_simply}. Fig.~\ref{fig:GT_classification} presents a general classification of classical game-theoretic approaches used in U-WCNs. \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{classification.PNG}} \caption{Classification of current game theoretic approaches used in UAVs-assisted wireless communications~\cite{GT_survey_simply}.} \label{fig:GT_classification} \end{figure} \subsection{Cooperative games} A cooperative game, also known as a coalitional game, is a game in which the players form coalitions and take joint actions as groups. Players within a group cooperate with each other, while members of different groups compete. In disaster scenarios, for example, UAVs have an incentive to cooperatively provide alternative network access to users in order to reduce their losses. Similarly, UAVs from the same operator would like to cooperatively take on tasks (e.g., routing and data collection) in order to maximize the revenues of the operator.
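The unilateral-deviation check in the Nash-equilibrium definition above can be made concrete with a short sketch. This is our own brute-force illustration for small finite games (the payoff encoding and the Prisoner's Dilemma example in the usage are not taken from the cited works):

```python
def is_pure_nash(payoffs, profile):
    """Check whether `profile` is a pure-strategy Nash equilibrium.

    payoffs[m] maps a full action profile (a tuple, one action per player)
    to the utility u_m of player m.
    """
    n = len(profile)
    for m in range(n):
        current = payoffs[m][profile]
        # No unilateral deviation of player m may strictly improve u_m.
        for a_m in {p[m] for p in payoffs[m]}:
            deviated = profile[:m] + (a_m,) + profile[m + 1:]
            if payoffs[m][deviated] > current:
                return False
    return True


def pure_nash_equilibria(payoffs):
    """Enumerate all pure-strategy equilibria of a small finite game."""
    return [p for p in payoffs[0] if is_pure_nash(payoffs, p)]
```

Since the enumeration is exponential in the number of players, such a check is only practical for toy games; for the classic Prisoner's Dilemma it returns mutual defection as the unique pure equilibrium.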
Thus, cooperative game theory may be used to model problems of rate allocation, cooperative transmission, packet forwarding, and so on. A drawback of coalitional games, however, is that finding the optimal coalition structure is NP-complete, i.e., the time complexity grows rapidly with the size of the communication network. Thus, heuristic algorithms are usually used to find near-optimal solutions in large communication networks. The hedonic coalition formation game is a special class of coalition formation games in which the players are self-interested and only care about the identities of the players in their coalition, and each player has a preference ranking over different coalitions. In~\cite{coalition}, a number of UAVs are required to collect data from several arbitrarily-located tasks in a UAV-based flying ad-hoc network. A hedonic coalition formation game is used to model the interactions between UAVs and tasks in order to form disjoint coalitions. Both the tasks and the UAVs are players who decide to join or leave a coalition based on their payoffs. The total utility of every coalition is evaluated using a coalitional value function defined as the ratio of a power of the throughput to the delay. Each formed coalition is modeled as a polling system comprised of a number of UAVs that move between different tasks to collect and transmit packets to a common receiver. To limit the computational complexity, the UAVs visit tasks following a nearest-neighbor route. The partition keeps updating until a Nash-stable network partition is reached. The authors compared the performance of this algorithm with an algorithm that assigns the tasks equally among UAVs. The simulation results show that the proposed algorithm outperforms equal allocation in terms of the average payoff by at least 30\%, regardless of the number of tasks. In game theory, a normal-form game is a game represented by a payoff matrix, as opposed to the extensive-form (game-tree) representation.
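The join/leave dynamics of a hedonic coalition formation game can be sketched as a greedy loop that terminates at a Nash-stable partition. This is our own toy illustration with a generic preference function \texttt{value}; the actual scheme in \cite{coalition} additionally involves task players and a polling-system value function:

```python
def hedonic_partition(players, value, max_rounds=100):
    """Greedy hedonic coalition formation.

    value(p, c) is player p's payoff for being in coalition c (a frozenset
    containing p).  Each player repeatedly moves to its preferred coalition
    (or stays alone) until no one wants to deviate, i.e. the partition is
    Nash-stable, or max_rounds is exhausted.
    """
    coalition_of = {p: frozenset([p]) for p in players}
    for _ in range(max_rounds):
        moved = False
        for p in players:
            current = coalition_of[p]
            # Candidate coalitions: stay alone, or join any existing one.
            options = {frozenset([p])}
            for q in players:
                if q != p:
                    options.add(coalition_of[q] | {p})
            best = max(options, key=lambda c: value(p, c))
            if value(p, best) > value(p, current):
                remainder = current - {p}
                for q in remainder:
                    coalition_of[q] = remainder
                for q in best:
                    coalition_of[q] = best
                moved = True
        if not moved:
            break
    return set(coalition_of.values())
```

With a value function that rewards larger coalitions the loop ends in the grand coalition; with one that penalizes size, everyone stays alone.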
In a finite normal-form game, a pure-strategy Nash equilibrium may not exist, but a Nash equilibrium is guaranteed to exist in mixed strategies, where a mixed strategy is a probability distribution over a player's pure strategies. This formulation is useful for identifying strictly dominant strategies (a strictly dominant strategy is one that always provides greater utility to the player, independent of the other players' strategies) and Nash equilibrium strategies, and has gained popularity in wireless communication applications. A downside of the normal-form game formulation, however, is the potential loss of some information. Such information includes the sequencing of agents' probable moves, their possible strategies at every decision-making point, the (possibly incomplete) information each agent has about the other agents' moves when making a decision, and their payoffs for all possible game outcomes. A mixed-strategy normal-form game is used for recharging scheduling in~\cite{Charging}, where the authors proposed a joint coverage, connectivity, and charging strategy mechanism for a mesh of UAVs. The UAVs aim to maximize the stationary coverage of a target area, while simultaneously guaranteeing the continuity of service through the necessary recharging. In this formulation, the scheduling of recharging operations is considered as a set of consecutive and distinct static normal-form games at each time slot $t_i$. The players are all the UAVs; the action set contains the following elements:~\\ \{access the replenishment station ($G_{\text{OK}}$), remain in state $s_{\text{fly}}$ ($G_{\text{NO}}$), release the replenishment station and change state to $s_{\text{fly}}$ ($R_{\text{OK}}$), remain in the recharging state ($R_{\text{NO}}$)\}.~\\ The payoff is the energy, defined as a function of the state-action pair. The mixed strategy is then obtained from the indifference condition, i.e., the condition under which a player's expected utility is the same for each of its possible choices, e.g., $u(R_{\text{OK}}) = u(R_{\text{NO}})$.
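For a two-action game, this indifference condition pins down the opponent's mixing probability in closed form; a minimal sketch (the $2\times 2$ payoff encoding is our own illustration, not the payoff table of \cite{Charging}):

```python
def indifference_mix(u):
    """Column player's mixing probability q that makes the row player
    indifferent between its two pure actions.

    u[i][j] is the row player's payoff for (row action i, column action j);
    q solves  q*u[0][0] + (1-q)*u[0][1] == q*u[1][0] + (1-q)*u[1][1].
    """
    denom = (u[0][0] - u[0][1]) - (u[1][0] - u[1][1])
    if denom == 0:
        raise ValueError("row player is never strictly indifferent")
    return (u[1][1] - u[0][1]) / denom
```

For matching pennies this yields the familiar $q = 1/2$; any interior mixed equilibrium of a $2\times 2$ game can be recovered by applying the same computation to each player's payoff matrix in turn.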
In the experiments, the authors compared the system lifetime and the failed-recharge-attempt ratio in three cases, i.e., global knowledge (each UAV knows all other UAVs' residual energy), local knowledge (each UAV knows only the energy of UAVs at one-hop distance), and personal knowledge (each UAV knows only its own residual energy), against a centralized coordination algorithm and a probabilistic approach. The results show that the game theory-based solutions outperform the probabilistic approach but slightly underperform the centralized coordination solution. \subsection{Non-cooperative games} As opposed to cooperative games, non-cooperative game theory deals with scenarios in which individual players compete with each other to maximize their own payoffs. This type of game therefore assumes that all players are self-interested. There exist various kinds of non-cooperative games, such as differential games, Bayesian games, sub-modular games, and so on. Non-cooperative game theory is commonly used to model competing relationships in UAVs-assisted wireless communication for power control, resource allocation, positioning of UAVs, and security. For example, two UAVs belonging to different operators may compete for business \cite{non-coop-coverage,Koulali2016AGS,POCA,UAV_offloading}, and military UAVs may try to monitor, jam, or anti-jam an enemy's communication systems~\cite{jamming, BayesianGame}. Reference \cite{non-coop-coverage} studied the positioning problem of UAVs in order to maximize the coverage of mobile devices (i.e., the number of mobile devices connected). In this case, the mobile devices move randomly on the ground. Three UAVs choose to either circle in their current cell or move to circle the center of an adjacent cell, based on the number of mobile devices each supports. The payoff matrix contains the coverage values of each UAV for each action combination. All players then choose their strategies simultaneously by computing a Nash equilibrium of this payoff matrix.
The coverage of the UAVs is shown to improve by 11.9\%. This game theoretic scheme is also shown to be more energy-efficient than the scenario in which the three UAVs operate independently. However, only three UAVs are considered in that article, and it is well-known that the normal-form game has a scalability problem when the number of players increases. Furthermore, when comparing power efficiency, only communication power is considered; the power needed for movement is not taken into account. Reference~\cite{Koulali2016AGS} focused on the beaconing scheduling problem between two non-cooperative UAVs. The two UAVs belong to different operators and independently optimize their beaconing periods to provide coverage for the mobile users on the ground. This problem is formulated as a sub-modular game where the UAVs are the players and strategically choose their beaconing schedules. The payoff of UAV $i$ under the beaconing strategy profile $(\tau_i,\tau_j)$ is defined as a function of the encounter rate and energy consumption, namely \begin{align} u_i(\tau_i,\tau_j) = P_s^i(\tau_i,\tau_j)-\frac{(C_b\tau_i+C_s)l}{m}, \end{align} where $m=l\times T$ is the available time window for UAVs to contact mobile devices, $T$ is the beaconing period, $l$ is a constant, $P_s^i(\tau_i,\tau_j)$ is the successful encounter rate, and $C_b$ and $C_s$ are, respectively, the energy cost per slot for sending beacons and the energy cost for switching the transceiver state. Due to the special (sub-modular) property of this payoff function, a pure-strategy Nash equilibrium exists under the assumption of perfect rationality and complete knowledge. To relax these assumptions, the authors provided an adaptive distributed learning framework based on the ``Nash Seeking Algorithm (NSA)''~\cite{book-NSA} to find the Nash equilibrium.
The advantage of this distributed algorithm is that each UAV's strategy is based only on its own observations, and the exact formula of the payoff is not even needed. To verify the efficacy of the proposed NSA algorithm, simulation results are provided showing that the algorithm converges to the same value as the Best Response Dynamics (BRD) algorithm, though slightly more slowly. In \cite{POCA}, the authors used a non-cooperative model to explore the radio channel assignment problem in combined UAV and D2D-based networks. These assignment problems are generally challenging due to the limited availability of orthogonal channels, interference, dynamic topology, and the high mobility of nodes. The authors proposed a distributed anti-coordination game-based partially overlapping channels assignment (AC-POCA) scheme to minimize signal interference and maximize the communication capacity. In this game, the UAVs and devices are players and share the same channels. The strategies are the assignments of channels. An $I$-matrix is used to record the interference of each user and determine whether a chosen channel is available to a given communication link. Each player wants to be assigned a proper channel so as to maximize its throughput and minimize the interference from its neighbors. Thus the utility of player $i$ is a measure of its connectivity, denoted $M_i$, and the total utility of the network is defined as $U(\Psi) = \sum_{i\in A}M_i$. This utility function is found to be a potential function of the game and, using the properties of potential games, the authors were able to apply the best response technique to obtain the Nash equilibrium. The authors tested their algorithm in mixed-topology and dynamic-topology scenarios in which the network topology keeps changing.
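In a potential game such as the one above, iterated best response is guaranteed to terminate at a pure-strategy Nash equilibrium, since every profitable unilateral deviation strictly increases the potential. A minimal sketch (the collision-avoidance utility in the test is our own toy stand-in for the connectivity measure $M_i$ of \cite{POCA}):

```python
import random


def best_response_dynamics(players, actions, utility, max_rounds=1000, seed=0):
    """Iterated best response for a finite game.

    utility(i, profile) returns player i's payoff under the dict `profile`.
    In a potential game each profitable deviation increases the potential,
    so the loop terminates at a pure-strategy Nash equilibrium.
    """
    rng = random.Random(seed)
    profile = {i: rng.choice(actions[i]) for i in players}
    for _ in range(max_rounds):
        improved = False
        for i in players:
            best = max(actions[i], key=lambda a: utility(i, {**profile, i: a}))
            if utility(i, {**profile, i: best}) > utility(i, profile):
                profile[i] = best
                improved = True
        if not improved:  # no profitable unilateral deviation left
            return profile
    return profile
```

For a toy channel-selection game in which a player earns 1 only when no other player uses its channel, the dynamics settle on an assignment with all channels distinct.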
Simulation results demonstrate the advantages of AC-POCA in convergence speed, signaling overhead, and throughput compared to a cooperative channel assignment game with best response and smoothed better response. The algorithm proves to be very effective in a dynamic environment. Reference \cite{UAV_offloading} tackled the problem of offloading heavy computation tasks (e.g., pattern recognition and video reprocessing) to be completed by a fleet of UAVs. The problem was formulated as a non-cooperative game with $n$ players (i.e., the UAVs in the fleet) and three pure strategies for each player. The three strategies are to (1) perform the tasks locally, (2) offload them via a local wireless connection to a neighboring base station (BS), or (3) transfer them through a cellular connection to an edge server (ES). The utility is a linear combination of energy consumption, time delay, and computation cost, namely \begin{align} U = \alpha\sum_{i=1}^N T_i +\beta\sum_{i=1}^N E_i +\gamma \sum_{i=1}^N C_i, \end{align} where $\alpha+\beta+\gamma = 1$, $N$ is the number of tasks, and $T_i$, $E_i$, $C_i$ represent the time, energy overhead, and communication cost, respectively. Thus, the utility function of UAV $i$ depends on its chosen strategy and has the form: \begin{align} &U_i(s_i,s_{-i}) = \nonumber\\ &\left\{\begin{array}{lll} U_{\mathrm{Local}} &= \alpha T_{\mathrm{Local}} + \beta E_{\mathrm{Local}}+\gamma C_{\mathrm{Local}}, &\mathrm{if}~ s_i = \text{local computing}\\ U_{\mathrm{ES}} &= \alpha T_{\mathrm{ES}} + \beta E_{\mathrm{ES}}+\gamma C_{\mathrm{ES}}, &\mathrm{if}~ s_i = \text{offloading to ES}\\ U_{\mathrm{BS}} &= \alpha T_{\mathrm{BS}} + \beta E_{\mathrm{BS}}+\gamma C_{\mathrm{BS}}, &\mathrm{if}~ s_i = \text{offloading to BS}\\ \end{array}\right. \end{align} where $T_{(\cdot)}$, $E_{(\cdot)}$, and $C_{(\cdot)}$ are the time delay, energy consumption, and computation cost of the corresponding action, respectively.
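Since the overhead terms make this utility a cost, each UAV's best pure strategy is simply the action minimizing the weighted sum; a minimal sketch (the per-action $(T, E, C)$ numbers in the test are our own made-up values, not results from \cite{UAV_offloading}):

```python
def offload_choice(costs, alpha, beta, gamma):
    """Pick the strategy minimizing the weighted cost
    alpha*T + beta*E + gamma*C (the utility is treated as a cost).

    costs maps a strategy name to a (T, E, C) tuple of time delay,
    energy consumption and computation cost; alpha + beta + gamma == 1.
    """
    assert abs(alpha + beta + gamma - 1.0) < 1e-9

    def weighted(strategy):
        t, e, c = costs[strategy]
        return alpha * t + beta * e + gamma * c

    return min(costs, key=weighted)
```

In the game itself the $(T, E, C)$ values of the offloading actions depend on the other players' choices through congestion, which is what makes an equilibrium analysis necessary rather than a per-UAV minimization.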
This game is a potential game and the Nash equilibrium is found by a distributed offloading algorithm. The simulation results indicate that this approach achieves on average 19\%, 58\%, and 55\% better results compared with pure local computing, offloading to the edge server, and offloading to a base station, respectively. However, this algorithm faces a scaling problem if the network is very dense. Finally, UAVs face the challenge of malicious attacks such as jamming from aerial intruders. Reference \cite{jamming} studied the jamming problem between an aerial jammer UAV and two communication UAVs. The authors formulated this problem as a zero-sum pursuit-evasion game, in which the jammer UAV tries to maximize the jamming time, while the two communication UAVs aim to minimize it. The \textit{Isaacs} approach is then used to derive the optimal control of each UAV, which turns out to be a bang-bang control, as verified by both theoretical analysis and simulation. A drawback, however, is that each UAV needs complete knowledge of the state of the system. Reference \cite{BayesianGame} utilized a Bayesian game for intrusion detection and ejection in a UAV-aided vehicular network. A Bayesian game is a game in which each player knows only partial information about the payoff-relevant parameters, and the payoff is taken as the expectation over a distribution~\cite{Zamir2009}. The motivation of this application is to provide a safety-oriented vehicular network by ejecting suspected nodes, so that important information can be exchanged among vehicles and UAVs. The authors formulated two safety problems in UAV-aided communication systems. The first problem studies when the intrusion detection system should be activated, while the second focuses on the criterion for eliminating a seemingly malicious communication node.
To solve these two problems, the authors modeled the attacks and defenses in a UAV system as two Bayesian games, in which the players do not know each other's information. During the game, an intrusion detection node chooses among eight monitoring or waiting strategies, whereas a malicious node chooses among six strategies against the UAV, the cluster head, or the cluster members, which may be normal or malicious. Furthermore, both attackers and detectors can work in two modes: the attacker operates in a normal mode and an attacking mode, while the detector operates in a normal mode and a detection mode. During the game, the attacker and detector gain a pre-defined profit with each strategy, which depends on the attacker's false positive rate and the detector's expected detection rate. It is shown in the paper that this Bayesian game has at least one Nash equilibrium. At the equilibrium, the maximum profit $B$ gained by the attackers may be regarded as a threshold: a normal node performs malicious-looking behaviors at a frequency lower than $B$, whereas a malicious node misbehaves more frequently than $B$. If such a node is found, the intrusion detection system in the communication network should be activated in order to find the attacker. The ejection decision, studied in the second problem, follows a similar scheme. To decide whether a suspicious node should be cut off from the network, another Bayesian game is conducted. After the equilibrium is reached, the intrusion ejection system compares the rate of malicious behavior of a node with the profit at the equilibrium. If the former is larger than the latter, the node is probably performing attacks and should be ejected. Simulation results demonstrate that the proposed framework exhibits a high detection rate and a low false positive rate while requiring low communication overhead compared to existing frameworks.
\subsection{Stackelberg games} A Stackelberg game is a hierarchical game comprised of two types of players: leaders and followers. In most cases, the leaders act first, and the followers then respond to the leaders' decisions. Each leader must therefore consider how the followers might respond to its decisions, as well as to the other leaders' decisions. A Stackelberg game is a common framework for analyzing resource allocation between consumers and provider companies. More specifically, the companies decide the prices of their resources and the consumers decide the quantities they are going to purchase, with both sides aiming to maximize their own benefits. In wireless communications, Stackelberg games are used to study pricing and bandwidth/power allocation problems, where the two types of players (leaders and followers) interact but may follow different game mechanisms. For example, Reference \cite{stackelberg_pricing} studied the problem of downlink power allocation in a multi-UAV enabled wireless network by modeling it as a Stackelberg game. In this game, the UAVs are the leaders, choosing the optimal prices to maximize their revenues, defined as \begin{align} \max \; U_{\text{UAV}}^j = \sum_{n=1}^{N}c_{jn}p_{jn}, \quad j \in \mathcal{M}, n \in \mathcal{N}_j \end{align} where $c_{jn}$ is the price charged by the $j$th UAV to the $n$th user per unit power, $p_{jn}$ is the corresponding power, $\mathcal{M}$ denotes the set of UAVs, and $\mathcal{N}_j$ is the set of users served by the $j$th UAV. The users are the followers, selecting their optimal power strategies to maximize their revenues, given by \begin{align} \max \; U_{jn} = \log_2(1+\mathrm{SINR}_{jn}) - c_{jn} p_{jn} \end{align} subject to the constraint $\sum_{n=1}^Np_{jn} \leq P_{\text{max}}$. To reach the Stackelberg equilibrium, a distributed iterative algorithm is proposed. Simulation results show that the proposed scheme performs better than the uniform power allocation scheme.
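The leader-follower structure can be solved by backward induction: first derive the follower's best-response power for a given price, then optimize the leader's price against that response. A minimal single-user sketch (the interference-free SNR model $\mathrm{SINR} = g\,p/N_0$ and the grid search are our own simplifications of the distributed scheme in \cite{stackelberg_pricing}):

```python
import math


def follower_power(c, g, n0, p_max):
    """Follower's best response: maximize log2(1 + g*p/n0) - c*p over [0, p_max].

    The stationary point of this concave objective is
    p* = 1/(c*ln 2) - n0/g, clipped to the feasible interval.
    """
    p = 1.0 / (c * math.log(2)) - n0 / g
    return min(max(p, 0.0), p_max)


def leader_price(g, n0, p_max, price_grid):
    """Leader revenue is c * p*(c); a simple grid search over candidate
    prices stands in for the distributed iterative algorithm."""
    return max(price_grid, key=lambda c: c * follower_power(c, g, n0, p_max))
```

The closed-form best response makes the hierarchy explicit: the leader anticipates $p^*(c)$ and prices just at the point where the follower would start reducing its purchased power.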
Likewise, Reference \cite{jointaccessselect} considered the UAV access selection and base station bandwidth allocation problems in a UAVs-assisted IoT network. In that case, the BSs are modeled as leaders and the UAVs as followers: the access competition among UAVs is formulated as a dynamic evolutionary game, while the bandwidth allocation among BSs is modeled as a non-cooperative game. A Stackelberg game is also believed to be a promising formulation for the anti-jamming defense problem in wireless networks~\cite{Stackelberg_survey}. A typical anti-jamming communication cycle includes three steps: jamming cognition, anti-jamming decision-making, and waveform reconfiguration. Two common ways of realizing anti-jamming are power control and channel selection. Stackelberg games were proposed in several works~\cite{stackelberg-bayesian,stackelberg-comm,stackelberg-IEEE} to solve the jamming power control problem in conventional communication networks. In these works, the legitimate users are the leaders and the jammer is the follower. Both the legitimate users and the jammer choose their powers to maximize their payoffs based on SINR, throughput, or transmission cost. Anti-jamming power control in UAVs-assisted communication networks should additionally consider the channel model of UAVs, the mutual interference, incomplete-information constraints, and the dynamic 3D flying environment. Reference \cite{SG_Antijamming_Bayesian} proposed a Bayesian Stackelberg game to model the competitive relations between multiple UAVs and a jammer. To be more specific, the jammer acts as the leader while the UAVs are the followers. The UAVs and the jammer select their power controls respectively to maximize their own payoffs. Note that incomplete information and observation errors have been considered for the UAVs.
The payoff of UAV $i$ is defined as follows: \begin{align} \resizebox{0.9\hsize}{!}{ $U_i(P_i, P_{-i}, \Tilde{J}) =\sum_{g=1}^G \sigma_{\beta_i}(g)\log_2 \left( 1+\frac{\alpha_i P_i}{N_0+\beta_i(g)\Tilde{J} + \sum_{m\neq i}P_m\theta_{m,i}} \right)-C_u P_i$, } \end{align} where $J$ and $P_i$ are the transmission powers of the jammer and UAV $i$, respectively, $\Tilde{J}$ is the observed value of $J$, $\theta_{m,i}$ is the mutual interference gain, which has $W$ states with probability distribution $\sigma_{\theta_{m,i}}(w)$, $\beta_i$ is the jamming gain, which has $G$ states with probability distribution $\sigma_{\beta_i}(g)$, and $C_u$ is a constant. The payoff of the jammer is \begin{align} \resizebox{0.98\hsize}{!}{ $V(J, P_1,...,P_N) = -\sum_{i,k,w}\sigma_{\alpha_i}(k)\sigma_{\theta_{m,i}}(w)\log_2\left(1+ \frac{\alpha_i(k)P_i}{N_0+\beta_i J+\sum_{m\neq i} P_m\theta_{m,i}(w)}\right) -C_j J$ ,} \end{align} where $C_j$ is a constant and $\alpha_i$ is the transmission gain of UAV $i$, which has $K$ states with probability distribution $\sigma_{\alpha_i}(k)$. A sub-gradient-based Bayesian Stackelberg iterative algorithm is then proposed to obtain the Stackelberg equilibrium, the existence and uniqueness of which are theoretically proven. Simulation results illustrate the influence of incomplete information and observation errors. They show, for example, that if the observation error of the jammer increases, the utility of the UAVs decreases. At the same time, the algorithm has a fast convergence rate and each player reaches its optimal transmission power within five iterations. The main limitation of this work is that only one UAV jammer is considered. \subsection{Mean field game} The Mean Field Game (MFG) is a game-theoretic formulation suitable for dealing with a large number of agents. MFGs approximate the interaction between one agent and all other agents as that between the agent and the ``mean agent'' of the rest, which is commonly referred to as the mean field approximation.
The interaction of each individual player with the mean field effect of the rest of the population is generally captured through a Hamilton-Jacobi-Bellman (HJB) equation, where the mean field function evolves following a Fokker-Planck-Kolmogorov (FPK) equation. The goal of each player is then simplified to maximize its own utility over a pre-defined period of time considering the collective behavior of the rest of the population. Generally, MFGs are used when a large number of UAVs are involved. Indeed, the mean field approximation asymptotically achieves the $\epsilon$-Nash equilibrium of the original system when the number of agents goes to infinity~\cite{MFG_book_Prob}. Researchers have used mean field games to model UAV movement control problems in order to reduce energy consumption and maximize ground user coverage~\cite{MFG-movementcontrol,MFG_NN,MFG_NN_FL,AdaptiveCoverage}. Reference \cite{MFG-movementcontrol} proposed a real-time MFG-based swarm movement control algorithm to minimize the weighted sum of each UAV's energy consumption per unit downlink rate and flocking cost. In this way, both downlink transmission energy consumption and mechanical movement energy consumption are taken into account. In this game, an individual UAV's velocity is determined by solving an HJB equation, and then the resultant UAV movements are obtained by solving an FPK equation in a windy environment. Each UAV can thereby decide its velocity using only its own location and channel states. The dynamics of each UAV under a windy environment are defined as \begin{align} d z_i (t) = (v_i(t)+A) dt +\eta_A d W_i(t), \end{align} where $A$ is the average wind velocity, $\eta_A$ is the wind velocity variance, and $W_i$ is the standard Wiener process.
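As a toy numerical illustration (the parameter values and the constant-velocity control below are made up, not taken from \cite{MFG-movementcontrol}), the wind-perturbed dynamics above can be simulated with an Euler-Maruyama discretization:

```python
import numpy as np

def simulate_uav(z0, v, A, eta_A, dt=0.1, steps=100, seed=0):
    """Euler-Maruyama discretization of dz = (v + A) dt + eta_A dW."""
    rng = np.random.default_rng(seed)
    z = np.array(z0, dtype=float)
    path = [z.copy()]
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=z.shape)  # Wiener increment
        z = z + (v + A) * dt + eta_A * dW
        path.append(z.copy())
    return np.array(path)

# Hypothetical values: 1 m/s eastward control, light constant wind.
path = simulate_uav(z0=[0.0, 0.0], v=np.array([1.0, 0.0]),
                    A=np.array([0.2, -0.1]), eta_A=0.05)
```

Over a flight time $T = \mathrm{steps}\times dt$, the mean displacement is $(v+A)T$, with random fluctuations of order $\eta_A\sqrt{T}$ contributed by the wind term.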
The cost function of UAV $i$ is given by \begin{align} \label{eqn:HJB} J_i(t) = \frac{1}{T}\int_{t}^{T}\omega_e E_i(v_i(t),z_i(t))+\omega_fF_i(v_i(t),z_i(t)) dt, \end{align} where $E_i(v_i(t),z_i(t))$ is the energy cost, $F_i(v_i(t),z_i(t))$ is the flocking cost, and $\omega_e$ and $\omega_f$ are the weighting factors. Minimizing Equation \eqref{eqn:HJB}, an HJB equation is obtained. Since in an MFG each agent plays against the ``mean agent'', the flocking cost can be written as follows: \begin{align} F_i(v_i(t),z_i(t),m(z(t))) = \int_z \frac{m(z(t))\left \| v(z(t))-v_i(z_i(t)) \right \|^2}{(1/\gamma + \left \| z(t)-z_i(t) \right \|^2)^\beta}dz, \end{align} where $m(z(t))$ is the resultant UAV-position distribution. This distribution is the solution of an FPK equation coupled with the above HJB equation. By solving the coupled HJB-FPK equations, the optimal velocity is obtained. The efficacy of this algorithm is verified by simulation using the 3GPP air-to-ground channel model of UAVs. The proposed algorithm saves up to 55\% average energy consumption per downlink rate compared to a baseline flocking scheme that does not consider energy efficiency, under the same target collision probability. Even though the solution to this problem is well-understood through the lens of the mean field game formulation, it still incurs a large computational burden from solving these coupled partial differential equations (PDEs). In light of this difficulty, the authors of \cite{MFG_NN} utilized two separate neural networks (NNs) to approximate the solutions of the HJB and FPK equations, thus providing one of the first links between game theory and machine learning. Later in \cite{MFG_NN_FL}, the authors further combined federated learning with the neural network-based MFG method to help UAVs share parameters to achieve online path control and reduce the computational burden.
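To convey the function-approximation idea behind the NN-based solvers of \cite{MFG_NN} without the full PDE machinery, the sketch below trains a one-hidden-layer network by plain gradient descent to fit a scalar profile standing in for a value function; all dimensions, targets, and hyper-parameters are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64).reshape(-1, 1)   # 1D state grid
y = np.sin(2 * np.pi * x)                      # stand-in "value function" profile

H = 16                                          # hidden units
W1 = rng.normal(0.0, 3.0, (1, H)); b1 = rng.uniform(-1.5, 1.5, H)
W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(x)
loss0 = float(np.mean((pred0 - y) ** 2))       # loss before training

lr = 0.1
for _ in range(8000):                          # full-batch gradient descent
    h, pred = forward(x)
    err = pred - y                             # dL/dpred up to a constant factor
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)         # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
loss = float(np.mean((pred - y) ** 2))         # loss after training
```

The constant factor of 2 in the squared-error gradient is absorbed into the learning rate; the same fitting loop, applied to residuals of the HJB and FPK operators rather than to a fixed target, is the core of the PDE-approximation approach.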
Reference \cite{AdaptiveCoverage} proposed a discrete-time MFG framework where each UAV adjusts its velocity in order to increase the number of served users while simultaneously minimizing the flight energy consumption. The aim of each UAV is again to optimize its velocity control (i.e., flight direction policy). Unlike the above works, the flying model of the UAV is assumed to be a discrete-time linear dynamic system, and the UAVs are only allowed to fly in 9 directions (remain in place, move parallel to a coordinate axis, or move at a 45-degree angle to the axes). The cost function is defined as \begin{align} J_i (u_i,m(t)) = \lim_{T\rightarrow \infty} \frac{1}{T} E \sum_{t=0}^{T-1}\left(b\left \| x_i(t)-m(t) \right \|^2+u_i^T(t)Ru_i(t)\right), \end{align} where $R$ is a pre-defined weighting matrix. The optimal controller $u_i(t)$ is then obtained by solving this optimization problem analytically. \subsection{Evolutionary game theory} Evolutionary Game Theory (EGT) is a cross-field of evolutionary theory and game theory. The key idea behind EGT is a population of different phenotypes (strategies) evolving over time. One important concept in EGT is that of Evolutionarily Stable Strategies (ESS), defined as follows: \textbf{Definition (Evolutionarily stable strategy~\cite{ESS_defi}):} Strategy $p^* \in S_n$ is evolutionarily stable provided that for every other strategy $p \neq p^*$, there exists $\bar{\epsilon}(p) > 0$ such that the utility function satisfies \begin{equation} U(p^*,\epsilon p+(1-\epsilon)p^*) > U(p,\epsilon p+(1-\epsilon)p^*), \end{equation} for every $0<\epsilon<\bar{\epsilon}(p)$. EGT is used in wireless communication for access/mode selection and resource allocation when a population of players is involved. For example, Reference \cite{EGT-modeselec} proposed an EGT-based mode selection approach in a UAV-aided vehicular network.
In this application, three communication modes are available to the vehicles, namely, Vehicle to Base station (V2B), Vehicle to Vehicle (V2V), and Vehicle to UAV (V2U). The vehicles need to decide which communication mode to choose in order to optimize the transmission reliability and the cost of resource utilization. The payoff functions under the three choices are thus defined as \begin{align} \left\{\begin{matrix} \pi_{V2U} = k_u P_{UAV}(x)-q_u x_U \\ \pi_{V2B} = k_b P_{V2B}(x)-q_b x_B\\ \pi_{V2V} = k_v P_{V2V}(x) \end{matrix}\right., \end{align} where $P_{UAV}$, $P_{V2B}$, $P_{V2V}$ are respectively the transmission reliabilities of the three communication modes, $k_u$, $k_b$, $k_v$, $q_u$, $q_b$ are all constants, and $x_U$, $x_B$, $x_V$ are the proportions of players that choose the three strategies. Usually, replicator dynamics (described in Equation~\eqref{eqn:replicator}) are used to describe the evolution process and capture the variation of the population state. In this approach, each player switches to another strategy if its payoff is below the average payoff of the whole population. Thus, the replicator dynamics are given by \begin{align}\label{eqn:replicator} \dot x_i = \sigma x_i(t)(\pi_i[x(t)]-\pi[x(t)]), \quad \forall \; i\in S, \end{align} where $i$ represents the strategy, $\sigma$ is a constant representing the speed of dynamic evolution, and $\pi[x(t)]$ is the average payoff of the whole population. The authors then demonstrated the fast convergence of this evolutionary game under replicator dynamics, with higher transmission reliability and lower resource-utilization cost compared to selfish and random selection schemes. In \cite{jointaccessselect}, the authors studied the joint access selection and bandwidth allocation problem in an IoT system, where the access competition among groups of UAVs is formulated as a dynamic evolutionary game.
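The replicator dynamics of Equation~\eqref{eqn:replicator}, used in both of the works above, can be integrated with a simple Euler scheme. The payoff matrix below is made up for illustration and is not the mode-selection payoff of \cite{EGT-modeselec}:

```python
import numpy as np

# Made-up payoff matrix: pure strategy i earns (P x)_i against population state x.
P = np.array([[1.0, 0.2, 0.2],
              [0.2, 0.8, 0.2],
              [0.2, 0.2, 0.6]])

def replicator_step(x, sigma=1.0, dt=0.01):
    pi = P @ x                  # payoff of each pure strategy
    avg = x @ pi                # population-average payoff
    return x + dt * sigma * x * (pi - avg)

x = np.array([0.2, 0.3, 0.5])   # initial strategy proportions
for _ in range(5000):
    x = replicator_step(x)
```

With this payoff matrix the third strategy earns the highest payoff at the initial state, its advantage grows with its share, and the population fixates on it; note that the Euler step preserves $\sum_i x_i = 1$ exactly, since the growth terms cancel against the average payoff.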
In this game, the players are all the UAVs, and each UAV decides which BS to connect to based on the BS's bandwidth and price. If all players connect to the same BS, the bandwidth of this BS will be divided amongst them; in that case, some players would rather connect to another BS to get a better payoff. The payoff function is defined as \begin{align} \pi_n^g(x) = \log\left (1+ \frac{k_n B_n R_n^g}{p_n N^g x_n^g} \right), \end{align} where $k_n$ is a predefined coefficient of the linear pricing function, $B_n$ is the allocated bandwidth of BS $n$, $p_n$ is the service price of BS $n$, $x_n^g$ denotes the proportion of group $g$ connecting to BS $n$, and $R_n^g$ measures the ergodic rate performance of group $g$ choosing BS $n$. This evolutionary game is solved using replicator dynamics, and an ESS is obtained when the replicator dynamics reach an equilibrium. Simulation results verify the fast convergence of this algorithm under different initial states. \subsection{Summary and lessons learned} We summarize in Table \ref{table:GT} several game theoretic formulations and their applications in UAVs-assisted wireless communication networks, covering the problems of task allocation, coverage maximization, beaconing scheduling, energy optimization, and so on. Note that the ``drawbacks'' listed in the last column are characteristics of a specific game in a specific situation, rather than inherent weaknesses in all cases.
\begin{sidewaystable}[htbp] \caption{Types of game theoretic approaches used in UAVs-assisted wireless communication networks.} \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{Refs} & \textbf{Description} & \textbf{Game model} & \textbf{Players} & \textbf{Strategies} & \textbf{Utility} & \textbf{Drawbacks}\\ \hline \cite{coalition} & Task allocation & \makecell{Hedonic coalition\\[-2pt] formation game} & UAVs, tasks & Form coalition & \makecell{A function of\\[-2pt] throughput, delay} & \makecell{NP-complete,\\[-2pt] sub-optimal}\\ \hline \cite{Charging} & Recharging & Normal form game & UAVs & \makecell{Probability of\\[-2pt] $R_{\text{OK}},R_{\text{NO}}$} & Residual energy & \makecell{Matrix-based,\\[-2pt] information loss} \\ \hline \cite{non-coop-coverage} & Positioning, coverage & \makecell{Non-cooperative\\[-2pt] normal-form game} & UAVs & \makecell{Circle in current cell or \\[-2pt] move to adjacent cell} & \makecell{Number of mobiles \\[-2pt] each UAV supports} & \makecell{High time complexity with\\[-2pt] increasing number of players} \\ \hline \cite{Koulali2016AGS} & Beaconing schedule & Sub-modular & UAVs & Beaconing period duration & \makecell{Encounter rate,\\[-2pt] consumed energy} & \makecell{Perfect rationality and\\[-2pt] complete information}\\ \hline \cite{POCA} & Channels assignment & \makecell{Anti-coordination\\[-2pt] game} & UAVs, devices & Assignment of channel & \makecell{Maximize the network\\[-2pt] throughput} & -\\ \hline \cite{UAV_offloading} & Offloading & Non-cooperative & Drones & \makecell{Local computing,\\[-2pt] offloading to ES,\\[-2pt] offloading to BS} & \makecell{Utility function\\[-2pt] that takes into account \\[-2pt] energy consumption, delay\\[-2pt] and communication cost} & Scalability \\ \hline \cite{jamming} & Jamming attack & \makecell{Zero-sum pursuit\\[-2pt] evasion game} & UAVs & Optimal control & Termination time & \makecell{Complete knowledge\\[-2pt] of the state\\[-2pt] of the system}\\ \hline \cite{BayesianGame} & \makecell{Intrusion monitoring and\\[-2pt] attacker ejection} & Bayesian game & UAV and vehicles & \makecell{Monitor and eject\\[-2pt] malicious nodes} & \makecell{Protect communication\\[-2pt] network from attacks} & \makecell{Parameters are\\[-2pt] determined manually} \\ \hline \cite{stackelberg_pricing} & Pricing and power allocation & Stackelberg game & UAVs, ground users & Power price, power & Revenue & - \\ \hline \cite{jointaccessselect} & \makecell{Access selection,\\[-2pt] bandwidth allocation} & Stackelberg game & UAVs, BSs & \makecell{Access selection,\\[-2pt] bandwidth allocation} & \makecell{UAVs (maximize payoff), \\[-2pt] BSs (maximize bandwidth allocation)} & - \\ \hline \cite{SG_Antijamming_Bayesian} & \makecell{Anti-jamming \\[-2pt] power control} & \makecell{Bayesian \\[-2pt] Stackelberg game} & UAVs, jammer & Power control & \makecell{A function of\\[-2pt] throughput and\\[-2pt] transmission cost} & Only one jammer considered\\ \hline \cite{MFG-movementcontrol,MFG_NN,MFG_NN_FL} & Minimize energy consumption & MFG & Massive UAVs & Optimal velocity & Energy consumption & High computational cost\\ \hline \cite{AdaptiveCoverage} & Minimize energy consumption & MFG & UAVs & Velocity control & Energy consumption & Ideal environment \\ \hline \cite{EGT-modeselec} & Mode selection & Evolutionary game & Vehicles & \makecell{Selection of different\\[-2pt] communication modes} & \makecell{Transmission reliability\\[-2pt] and the cost of\\[-2pt] resource utilization} & Massive players\\ \hline \cite{jointaccessselect} & Access selection & Evolutionary game & UAVs & Connect to which BS & Payoff function of bandwidth and price & Massive players\\ \hline \end{tabular} } \label{table:GT} \end{center} \end{sidewaystable} The main lessons of this section include: \begin{itemize} \item Game theory is a widely-used tool in the wireless communication field for modeling specific problems.
\item The ultimate goal of a game is to find the (Nash) equilibrium. \item Different game types are appropriate for different problems in UAVs-assisted communication networks. \item The time complexity of conventional game theory solutions grows with the number of players. \item Mean field games and evolutionary games are potentially useful in massive UAV network communication problems. \end{itemize} \section{Machine learning in UAVs-assisted wireless communication networks} \label{sec:ML} Machine learning techniques were introduced into the wireless communication field due to their ability to predict future network states, generalize to new unseen network states, and scale to large-size networks~\cite{security}. Machine learning methods are generally divided into supervised learning, unsupervised learning, and reinforcement learning methods. With the improvements in parallel computing and graphics processing units (GPUs), neural networks (NN) became a powerful tool for machine learning. Notable NN structures include deep feed-forward networks (DFF), convolutional neural networks (CNN), and recurrent neural networks (RNN). ML tools have been applied in the U-WCN arena for modelling, predicting and monitoring traffic patterns~\cite{RLCoverage,DRLCoverage,ML-deployment}, device locations, network access and rate control~\cite{LSTM_app,CacheESN}, connectivity preservation, resource allocation and interference management~\cite{QL,DDPG-RA}. These applications have benefited from recent advances in both theory and computational tools such as TensorFlow, PyTorch, and MATLAB's machine learning toolbox. \subsection{Neural networks} Powerful ML techniques such as deep learning and reinforcement learning are now used in the UAVs-assisted wireless communication field \cite{ML_UAV_intro,security,ML-UAV-survey,ML-deployment}.
Compared to conventional model-based approaches, ML tools allow designers to take into account application-specific issues, such as the type of UAVs, Doppler effects, cache management, dynamic positioning, interference management, and load balancing \cite{ML_UAV_intro}. In \cite{ML-deployment}, the authors proposed an ML framework based on a Gaussian Mixture Model (GMM) and the Weighted Expectation Maximization (WEM) algorithm to predict potential network congestion. Based on the predicted traffic, the optimal deployment of UAVs is then obtained by minimizing the transmission and mobility powers. In that work, the authors used an actual dataset, a Chinese city cellular traffic map. The dataset is composed of the number of aerial users that are offloaded from a BS at location $(x,y)$ to a UAV during a time interval $[t,t+T]$, and the amount of cellular traffic that a UAV needs to provide for the aerial users from a BS at $(x,y)$ during $[t,t+T]$. The aim is to predict the total number of aerial users, the spatial distribution of aerial users, and the spatial distribution of aerial data traffic in a geographical area $\mathcal{A}$. Using GMM and WEM, the authors were able to predict the cellular traffic, allowing a constrained optimization problem to be solved in order to minimize the total power for downlink transmission and mobility. The simulation results show that the proposed algorithm reduces the power consumption for downlink transmission and mobility by over 20\% and 80\% respectively, compared to a more traditional optimization approach without machine learning. In the following subsections, different deep neural networks (e.g., convolutional neural networks, recurrent neural networks, spiking neural networks) and their applications in UAVs-assisted wireless communication networks are reviewed. \subsubsection{Convolutional neural networks} Convolutional Neural Networks (CNN) were initially proposed and used in computer vision.
A CNN consists of an input layer, several hidden layers and an output layer. The name ``convolutional'' originates from the use of the convolution operator. The hidden layers of a CNN typically comprise convolutional layers, activation layers, pooling layers, and fully connected layers. CNNs are useful because of their image processing ability, which can provide UAVs with vision-based sensing capabilities. By combining with reinforcement learning algorithms or recurrent neural networks, CNNs are playing an increasing role in UAVs-assisted wireless communication networks. For example, in a cellular-UAV network, CNNs help the UAVs identify the location of ground BSs, ground user equipment, and other UAVs in the network. Such information can then be fed into a recurrent neural network to help individual UAVs make decisions about their future movement in order to minimize the interference and latency at each time instant~\cite{security}. Another potential application of CNNs lies in UAV-enabled edge caching, where a CNN extracts and stores common features of the data files (videos, images, etc.) requested by different users, then uses these features to predict a user's video requests and preferences~\cite{security}. \subsubsection{Recurrent neural networks} Recurrent Neural Networks (RNN) are a class of artificial neural networks that make use of sequential information. Fig.~\ref{fig:RNNstructure} illustrates the RNN structure. Such a structure is able to capture long-term dependencies hidden in the dataset. \begin{figure*}[!htb] \includegraphics[width=\linewidth]{RNN.jpg} \caption{RNN structure.}\label{fig:RNNstructure} \end{figure*} Echo state networks and long short-term memory networks are two widely-used RNN structures. The Echo State Network (ESN) is a practical type of recurrent neural network with a sparsely connected hidden layer.
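A minimal ESN update, a random sparse reservoir rescaled to a spectral radius below one together with a ridge-regression readout, can be sketched as follows (the dimensions and the sine-prediction task are illustrative, not those of \cite{CacheESN}):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

# Sparse random reservoir, rescaled to spectral radius 0.9 (echo-state property).
W = rng.normal(0.0, 1.0, (n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence, collecting the states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction of a sine wave with a ridge-regression readout.
t = np.arange(400)
u = np.sin(0.2 * t)
X = run_reservoir(u[:-1])           # states driven by u[0..398]
y = u[1:]                           # targets: the next input sample
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out
```

Only the linear readout `W_out` is trained; the reservoir weights stay fixed, which is what makes ESN training cheap compared to backpropagation through time.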
ESN is characterized by its adaptive memory, which enables it to store previous state information in order to predict future states of UAVs. Reference~\cite{CacheESN} studied the problem of proactive deployment of cache-enabled UAVs for optimizing the Quality-of-Experience (QoE) of wireless devices in a cloud radio access network. In this model, a conceptor-based ESN is deployed to predict the content request distribution and mobility pattern of each user, leveraging the users' visited locations, requested contents, and other human-centric information. With these predictions, the authors then sought the user-UAV associations, the optimal UAV locations, and the contents to cache at the UAVs by formulating an optimization problem that maximizes the users' QoE while minimizing the UAVs' transmission power. The dataset, from BUPT and Youku, records real pedestrian mobility patterns and content transmissions. Simulation results show that the proposed algorithm achieves 33.3\% and 59.6\% gains in terms of the average transmit power and the percentage of users with satisfied QoE, compared to a benchmark algorithm without caching and a benchmark solution without UAVs, respectively. The advantage of using an ESN here is that the users' mobility patterns and content request distributions have time-dependent and spatial statistical characteristics. A Long Short-Term Memory (LSTM) network is another specific type of recurrent neural network that can learn long-term dependencies~\cite{LSTM-ori}. LSTM networks have been successfully used in classification, image recognition, and machine translation~\cite{sentimentclassfication,LSTMgesturerecog,LSTMtranslation}. With three gated units, an LSTM network mitigates the vanishing-gradient problem of traditional RNN structures. Recently, researchers proposed integrating LSTM into D2D communication systems.
For example, \cite{DL_UAV} designed an integrated LSTM and Multi-Layer Perceptron (MLP) architecture to determine the position of a UAV in order to maximize the A2G link access coverage performance, while minimizing the transmission power and maximizing the user's throughput. In this experiment, the authors considered three UAVs connected through wireless multi-hop backhauls to the core network. To collect data, the authors designed the data acquisition procedures and environment at the National Taipei University of Technology in the 900 MHz band. Data on the A2G link access coverage probability, Line of Sight/Non-Line of Sight (LoS/NLoS) conditions, elevation angle, Received Signal Strength (RSS), Signal-to-Noise Ratio (SNR), and user-to-user distance are collected. The target area is divided into grid points; the data collected at 722 reference points are used as training samples, while data collected at another 85 reference points are used for testing. The collected data is then fed to the MLP-LSTM neural network as input. Using the proposed MLP-LSTM structure, the algorithm finds the UAV position that maximizes the throughput. The authors compared the performance of this MLP-LSTM scheme with Support Vector Machines (SVM), LSTM, and MLP algorithms in three scenarios: using the original datasets; using reduced features only and estimating the throughput for each user at each grid point; and using reduced data collected on different days/times and finding the grid points at which users achieved maximum and total throughput. The experiments indicate that the UAV positioning provides accuracy levels of 94.73\%, 98.33\%, and 99.53\% respectively in the three scenarios, outperforming SVM, MLP, and LSTM. Reference~\cite{LSTM_app} proposed the use of LSTM to predict the classification of potential content providers so that the D2D communication system between the content provider and the content requester achieves a desired level of confidentiality.
In that article, LSTM selects the optimal D2D transmitter for the content requester based on experience and real-time information about the content requester, such as the amount of content requested, the mobile status of the content carriers, the distance between the content carriers and the content requesters, and the remaining energy of the UAV flying base station. Using simulation, the authors showed that the LSTM scheme improves the security capabilities of the system compared to a random-based scheme. \subsubsection{Spiking neural networks} Spiking Neural Networks (SNN) are novel artificial neural networks that mimic the operation of brain neurons. The Liquid State Machine (LSM) is a particular type of SNN with five components: agents, input, output, the liquid model, and the output function. LSM was proposed to handle continuous-time inputs and to compute at various time scales. It has two advantages over traditional artificial neural networks, namely, fast real-time decoding of signals and high information carriage capacity through the added temporal dimension~\cite{ANN-tutorial}, and has been used for optimizing resource allocation in wireless communication with UAVs \cite{SNN-LSM}. Reference~\cite{SNN-LSM} proposed a distributed algorithm based on LSM to jointly optimize the user association, spectrum allocation, and content caching. The LSM stores the users' behavior information and tracks the state of the network over time in order to predict the content request distribution, and automatically adapts the spectrum allocation to changes in the network state. In this algorithm, a cloud first predicts the content request distribution of each user using an LSM-based approach. Then, with this distribution, each UAV finds the optimal user association by using an $\epsilon$-greedy mechanism. In this way, the algorithm addresses the main challenge of the original problem, which is a nonlinear discrete optimization problem.
Simulation results show that it outperforms the Q-learning algorithm (introduced in Section~\ref{subsec:Ql}) in terms of the average number of stable-queue users. The machine learning algorithms described so far require that all data be sent to a central location. To address this shortcoming, federated learning emerged as an effective tool to implement machine learning in a distributed fashion. \subsection{Federated learning} Federated Learning (FL) is a concept proposed by Google researchers~\cite{googleFL2}. It involves training models locally on distributed devices and aggregating them at a central server, while keeping the data localized, thus realizing the goal of preserving privacy and safety. \textbf{Algorithm \ref{algo:FDL}} summarizes the FDL algorithm presented in~\cite{UAV_FDL}. In this algorithm, $N$ UAVs store their own data and each trains a separate model on its data. These model parameters are then aggregated by averaging to obtain a final model. With this mechanism, on the one hand, the loss of one UAV's data will not greatly affect the whole system performance. On the other hand, storing the data on each UAV reduces the energy cost of transmitting all the data to a central controller and protects privacy~\cite{FL-wireless,dynamicFL,incentiveFL}. Reference \cite{FL-wireless} formulated federated learning over a wireless network as an optimization problem, providing insight into the trade-off between energy consumption, learning accuracy, and time. Reference \cite{incentiveFL} adopted contract theory to design an effective incentive mechanism that stimulates mobile users with high-quality data to participate in federated learning, in order to address the heterogeneity problem. Because UAVs are resource-constrained devices while traditional ML-assisted schemes require UAVs' data to be sent to and stored in a centralized server, distributed ML is needed in the UAVs-assisted wireless communication setting.
Reference \cite{UAV_FDL} first introduced federated deep learning (FDL) concepts for UAV-enabled wireless applications, and the authors discussed the key technical challenges, open issues, and future directions of FDL-based approaches. Basically, the FDL training process of UAV-based networks comprises three steps. The first is training initialization: a server specifies the required data type and training hyper-parameters, together with an initial global model $G_0$, and broadcasts them to the UAVs. The second is the UAVs' local model training: each UAV collects data, keeps the data to itself, and updates the parameters of its local model $L_i^j$; the updated parameters are then sent to the server. The final step is global model aggregation: the server aggregates these local models and sends the updated model parameters back to the UAVs. Recently, researchers have started to examine decentralized federated learning, which eliminates the need for a centralized server. Such works can be found in~\citep{lalitha2019peer,savazzi2020federated,taya2021decentralized}; they provide fully decentralized frameworks for localized data and have great potential in future IoT applications. Despite the above advantages, FDL still faces challenges from heterogeneous data distributions in real applications and from the lack of theoretical convergence guarantees. Further problems arise when FDL is applied to UAVs-assisted wireless communication networks, given that UAVs operate in a highly dynamic environment.
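A minimal numerical sketch of the three-step loop just described, with synthetic linear-regression ``UAVs'' and plain parameter averaging (the setup is illustrative, not the exact scheme of \cite{UAV_FDL}):

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])     # ground truth shared by all clients

# Each "UAV" holds a private dataset that never leaves the device.
def make_client(n=50):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    return X, y

clients = [make_client() for _ in range(5)]

def client_update(w, data, lr=0.1, epochs=20):
    """Local full-batch gradient descent from the broadcast global model."""
    X, y = data
    w = w.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

w_global = np.zeros(2)                                            # step 1: initialization
for _ in range(10):                                               # server rounds
    local_models = [client_update(w_global, d) for d in clients]  # step 2: local training
    w_global = np.mean(local_models, axis=0)                      # step 3: aggregation
```

Only the model parameters travel between the clients and the server; the raw `(X, y)` pairs stay on their respective devices, which is the privacy property the FL formulation is built around.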
\begin{algorithm}[!htbp] \label{algo:FDL} \KwData{Number of UAVs $N$, number of local epochs $E$, batch size $B$, learning rate $\eta$, number of server rounds $R$} initial global model $G_0$\; \For{$j=1$ to $R$}{ $P=$ random subset of the $N$ UAVs\; \For{each UAV $i$ in $P$ in parallel }{$L_i^{j+1} \leftarrow \mathbf{ClientUpdate}(i, G^j)$\;} $G^{j+1}\leftarrow \frac{1}{|P|}\sum_{i\in P}L_{i}^{j+1}$\; } \Return{$G^{j+1}$.} \textbf{ClientUpdate($i$,$L$)}: \For{$e=1$ to $E$}{ batches $\leftarrow$ split dataset into batches of size $B$\; \For{each batch $b$}{ $L \leftarrow L-\eta \bigtriangledown f(L,b)$\;}} \Return{$L$ to server.} \caption{FDL for FL server} \end{algorithm} \subsection{Reinforcement learning} Reinforcement Learning (RL) is a sub-field of machine learning. Detailed introductions and examples of reinforcement learning may be found in \cite{RL_sutton}. There are four main elements for an agent in a reinforcement learning system: a policy, a reward, a value function, and a model of the environment. Compared with supervised and unsupervised learning methods, RL-based algorithms have the advantage of learning in an unknown environment with a pre-designed reward. In particular, RL algorithms are used in UAVs-assisted wireless communication services to solve deployment, resource allocation, navigation, and control problems. In the following subsections, two commonly-used reinforcement learning algorithms, Q-learning and deep deterministic policy gradient, as well as their corresponding applications, are reviewed. \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth]{RL_1.png}} \caption{DDPG-based UAV trajectory planning.} \label{figure:RL-AC} \end{figure} \subsubsection{Q-learning} \label{subsec:Ql} Q-Learning (QL) is a model-free reinforcement learning algorithm that guides an agent to take a specific action in a given environment. QL provides an optimal action selection policy for a given finite Markov decision process \cite{QL_intro}.
A typical Q-learning algorithm is shown in \textbf{Algorithm~\ref{algo:QL}}. To alleviate the space complexity of searching the Q-table, the deep Q-network, which uses a neural network to map input states to action values, was proposed. Q-learning was introduced to the study of UAVs-assisted wireless communication networks in order to solve trajectory planning, 3D deployment, security, and resource allocation problems. \begin{algorithm} \label{algo:QL} initial $Q_0$; discount factor $\gamma$; learning rate $\alpha$\; \For{$t$ in epoch}{ At time $t$ with state $s_t$, select an action $a_t$, observe a reward $r_t$, obtain next state $s_{t+1}$\; Update Q table: \begin{align} \label{eq:QL} Q^{new}(s_t,a_t)\leftarrow Q(s_t,a_t)+\alpha(r_t +\gamma \mathrm{max}_{a} Q(s_{t+1},a)-Q(s_t,a_t)). \end{align} } \caption{Q-Learning} \end{algorithm} As an example, Reference \cite{QL-emergency} proposed a Q-learning algorithm to find the best 3D positioning of multiple drone small cells in an emergency scenario. The main goal of this work is to maximize the number of users served by the drones under the constraints of both the backhaul and the radio access network. The state space is the position of the UAV, the action space is \{Up, Down, Left, Right, Forward, Backward, Keep still\}, and the reward is the total number of users allocated to the UAV. An $\epsilon$-greedy policy is then used to find the optimal solution. The proposed algorithm is shown to be robust to different network conditions, such as the position of other drones, interference between drones, as well as user movements and their constraints. Simulation results also show that the proposed algorithm has advantages over random-position, fixed-position, and circular-position schemes in terms of two measures: users' throughput dissatisfaction and the percentage of users in outage. Similarly, in \cite{QL-trajectory}, one UAV was chosen as a base station in order to provide network services to multiple users.
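The tabular update of Equation~\eqref{eq:QL} can be demonstrated on a toy chain MDP (a made-up environment, unrelated to the drone-positioning problems of the cited works):

```python
import numpy as np

# 5-state chain; actions: 0 = left, 1 = right; reward 1 on reaching the right end.
N_STATES, LEFT, RIGHT = 5, 0, 1
rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, 2))
alpha, gamma, eps = 0.1, 0.9, 0.3

for _ in range(300):                                  # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = min(s + 1, N_STATES - 1) if a == RIGHT else max(s - 1, 0)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # the tabular Q-learning update
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

greedy = [int(np.argmax(Q[s])) for s in range(N_STATES - 1)]
```

After training, the greedy policy moves right in every non-terminal state, and the rightward action values approach $\gamma^{3-s}$ at state $s$, illustrating how the discounted reward propagates backwards from the goal.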
The main goal of that work is to optimize the trajectory of the UAV in order to maximize the sum rate of transmission (i.e., the reward) during flying time. In this problem, the state space is composed of the position of the UAV, while the action space contains \textit{\{up, right, down, left\}} on the same plane. The authors compared table-based Q-learning and NN-approximator-based Q-learning approaches and showed that both converge to the desired trajectory. Reference \cite{zhanghanbook} presented Q-learning for a UAV trajectory design problem. In this problem, the finite state space contains all possible locations of the UAVs, and the algorithm selects from a corresponding finite set of actions (27 directions). The reward of a UAV is designed as the total number of successful valid sensory data transmissions for its task. A Q-learning algorithm is then used to find the best action of each UAV. Even though this single-agent Q-learning has many favorable properties due to its small state space and action set, it does not account for the states and strategies of other UAVs. To solve this problem, in the same book, the authors presented a multi-agent Q-learning algorithm called opponent-modeling Q-learning. Such multi-agent reinforcement learning can better model the cooperation or competition relations among agents. However, the challenge of multi-agent reinforcement learning is that convergence can only be guaranteed under restrictive assumptions. Moreover, the above three works restrict the UAV to a small discrete set of flight directions, which is limiting in real applications. Reference~\cite{Attack-PT-Q} applied prospect theory to formulate a subjective smart attack game for UAV transmission. In this game, an attacker UAV can choose from three attack types (jamming, spoofing, and eavesdropping), and the defender UAV chooses the transmit power on $B$ radio channels to resist the smart attack.
The prospect theory-based utility function of the defender UAV is defined as \hspace{-20pt} \resizebox{\linewidth}{!}{\parbox{1.1\linewidth}{ \begin{numcases}{U(x,y)=} \sum_{i=1}^B\left(h_{T,i}^{(k)}-h_{E,i}^{(k)}\right)x_i - \mu\sum_{i=1}^Bx_i, & if $y=-1$ \nonumber \\ \sum_{i=1}^B h_{T,i}^{(k)}x_i-\frac{C_m}{L}\sum_{l=0}^Llw_A(\beta_l)-\mu\sum_{i=1}^Bx_i, & if $y=-2$ \\ \sum_{i=1}^B h_{T,i}^{(k)}x_i-\mu\sum_{i=1}^Bx_i -\frac{1}{L}\sum_{l=0}^L lw_A(\eta_l)\sum_{i=1}^B\frac{h_{T,i}^{(k)}h_{J,i}^{(k)}x_iy_i}{\sigma+h_{J,i}^{(k)}y_i}, & if $y \geq 0$ \nonumber \end{numcases}}} where $x=\{x_i\}$, $y=\{y_j\}$ are the strategies of the defender UAV and the attacker, respectively, $h_{T,i}^{(k)}$ is the channel power gain between the defender UAV and its user, $h_{E,i}^{(k)}$ is the wiretap channel gain, $h_{J,i}^{(k)}$ is the jamming channel gain, and $w_A(\cdot)$ is the subjective probability as viewed by the defender UAV. Deep Q-learning algorithms (i.e., DQN) are then developed to achieve optimal power allocation against smart attacks. Simulation results reveal that the DQN-based strategy has the highest safe rate, secrecy capacity, and SINR compared to the pure Q-learning-based strategy and the WoLF-PHC (Win or Learn Fast-Policy Hill Climbing)-based strategy. However, this performance comes at the cost of the highest computational complexity, and the DQN takes a much longer time to make a decision. In \cite{QL}, the authors investigated the dynamic resource allocation of multiple UAVs within a Multi-Agent Reinforcement Learning (MARL) framework. The goal of each UAV $m$ is to jointly select the user ($a_m$), power level ($p_m$), and sub-channel ($c_m$) to ensure that the SINR provided by the UAVs is greater than a given threshold. The state of UAV $m$ at time $t$ is defined as \begin{align} s_m(t) = \left\{\begin{matrix} 1,\quad \gamma_m(t)\geq \bar{\gamma}\\ 0, \quad \gamma_m(t)< \bar{\gamma} \end{matrix}\right., \end{align} where $\bar{\gamma}$ is the threshold of satisfactory SINR.
The reward function is \begin{align} R_m(t) = \left\{\begin{matrix} \frac{W}{K}\log_2(1+\gamma_m(t))-\omega_m P_m(t), \quad \text{if} \;\gamma_m(t) \geq \bar{\gamma}\\ 0, \quad \text{else} \end{matrix}\right., \end{align} where $\gamma_m$ is the observed SINR of UAV $m$, $\omega_m$ is the cost per unit level of power, $P_m(t)$ is the transmit power of UAV $m$ at time slot $t$, and $\frac{W}{K}$ is the sub-channel bandwidth. Each UAV runs its decision algorithm independently, but all share a common structure based on Q-learning. The efficacy of the proposed MARL framework is shown via simulation: it achieves a higher average reward than matching theory-based resource allocation and random user selection algorithms. The above works are all rooted in the offline Q-learning framework, which suffers from the well-known curse of dimensionality when the state and action spaces are large. Considering this drawback, \cite{DQL_power} proposed an on-board (or online) deep Q-learning technique to minimize the overall data packet loss of sensing devices. In this problem, the battery levels of the ground devices, the queue lengths of the ground devices, the channel quality between the UAV and each device, and the location of the UAV are defined as the state. The selection of ground devices, the modulation of the device, and the instantaneous patrolling velocity of the UAV are the actions. A deep Q-network is then used to decide which device to charge and interrogate for data collection, as well as the instantaneous velocity of the UAV. This on-board deep Q-network has two separate Q-networks, with current weights and old weights, respectively. Simulation results indicate that this algorithm has lower network costs and packet loss rates compared to other on-board scheduling policies. Traditional Q-learning uses the same values both for selecting and for evaluating an action, and thus suffers from the overestimation of action values under certain conditions.
Double Q-learning was proposed to solve this problem \cite{double_Qlearning}. In double Q-learning, the selection and the evaluation of an action are decoupled by using two value functions. The two value functions are learned by randomly assigning each experience to update one of them, with weights $\theta$ and $\theta'$, respectively. During each update, one set of weights is used to determine the greedy policy and the other to determine its value. The difference between Q-learning and double Q-learning is captured by the following equations \cite{double_Qlearning}: \begin{align} \left\{\begin{matrix} Y_t^\mathrm{Q} = R_{t+1}+\gamma Q (S_{t+1},\argmax_{a} Q(S_{t+1},a;\theta_t);\theta_t)\\ Y_t^{\mathrm{DoubleQ}} = R_{t+1}+\gamma Q (S_{t+1},\argmax_{a} Q(S_{t+1},a;\theta_t);\theta'_t) \end{matrix}\right., \end{align} where $Y_t^{\mathrm{Q}}$ and $Y_t^{\mathrm{DoubleQ}}$ are the target values of Q-learning and double Q-learning, respectively. One recent application of double Q-learning is \cite{doubleQ_application}. In this article, the authors proposed an on-board double Q-learning scheduling algorithm for a UAV to select the IoT node for data collection and microwave power transfer along a predetermined flight trajectory. Similar to \cite{DQL_power}, the objective is to minimize data packet loss resulting from buffer overflow and channel fading. The action space is the selection of IoT nodes, while the state space contains the battery levels and queue lengths of the IoT nodes and the channel conditions between the IoT nodes and the UAV. A double Q-learning algorithm is then used to find the best selection of IoT nodes to reduce packet loss. To verify its efficacy, the authors compared their algorithm with the Q-learning algorithm. Simulation results show that double Q-learning outperforms Q-learning in both packet loss rate and learning error.
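The two target values above can be contrasted in a short sketch. This is a toy tabular setting with made-up sizes; the two Q-tables stand in for the weight sets $\theta$ and $\theta'$.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, gamma = 5, 3, 0.9

# Two independently learned value tables play the roles of theta and theta'.
Q_theta = rng.normal(size=(n_states, n_actions))
Q_theta_prime = rng.normal(size=(n_states, n_actions))

def q_target(r, s_next):
    """Standard Q-learning target: select AND evaluate with the same table."""
    a_star = int(np.argmax(Q_theta[s_next]))
    return r + gamma * Q_theta[s_next, a_star]

def double_q_target(r, s_next):
    """Double Q-learning target: select with theta, evaluate with theta'."""
    a_star = int(np.argmax(Q_theta[s_next]))
    return r + gamma * Q_theta_prime[s_next, a_star]
```

Because the evaluating table is not the one that picked the maximizing action, the positive bias of taking a max over noisy estimates is reduced; in the tabular variant the roles of the two tables are swapped at random on each update.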
Similarly, the proposed algorithm has an advantage over two other scheduling algorithms, namely the ``Longest Queue Scheduling Algorithm'' and the ``Longest Queue Lowest Battery Algorithm''. \subsubsection{Deep deterministic policy gradient} Deep Deterministic Policy Gradient (DDPG) is a model-free, off-policy actor-critic algorithm that concurrently learns the Q function and the policy with neural networks. Both the critic and the actor are parameterized using neural networks. DDPG learns policies in high-dimensional, continuous action spaces \cite{DDPG}. DDPG and its variants have been studied in robotics~\cite{DDPG_robotics}, self-driving~\cite{wang2019deep}, and physical control domains and games such as Atari, chess, and others~\cite{ddpg_Atari}. A typical DDPG algorithm \cite{DDPG} is shown in \textbf{Algorithm \ref{algo:DDPG}}. \begin{algorithm} \label{algo:DDPG} Randomly initialize critic network $Q(s,a|\theta^Q)$ and actor $\mu(s|\theta^\mu)$ with weights $\theta^Q$ and $\theta^\mu$\; Initialize target networks $Q'$ and $\mu'$ with weights $\theta^{Q'} \leftarrow \theta^Q$, $\theta^{\mu'} \leftarrow \theta^\mu$\; Initialize replay buffer $R$\; \For{episode =1, M}{ Initialize a random process $\mathcal{N}$ for action exploration\; Receive initial observation state $s_1$\; \For{t=1,T}{ Select action $a_t = \mu(s_t|\theta^\mu) + \mathcal{N}_t$ according to the current policy and exploration noise\; Execute action $a_t$ and observe reward $r_t$ and new state $s_{t+1}$\; Store transition $(s_t,a_t,r_t,s_{t+1})$ in $R$\; Sample a random minibatch of $N$ transitions $(s_i,a_i,r_i,s_{i+1})$ from $R$\; Set $y_i = r_i+\gamma Q'(s_{i+1}, \mu'(s_{i+1}|\theta^{\mu'})|\theta^{Q'})$\; Update the critic by minimizing the loss:~$L=\frac{1}{N}\sum_i(y_i-Q(s_i,a_i|\theta^Q))^2$\; Update the actor policy using the sampled policy gradient:\; \quad \quad$\bigtriangledown_{\theta^\mu}J\approx \frac{1}{N}\sum_i\bigtriangledown_a Q(s,a|\theta^Q)|_{s=s_i,a=\mu(s_i)}\bigtriangledown_{\theta^\mu}\mu(s|\theta^\mu)|_{s_i}$\; Update the target networks: \\ \quad \quad $\theta^{Q'} \leftarrow \tau\theta^Q +(1-\tau)\theta^{Q'}$\; \quad \quad $\theta^{\mu'} \leftarrow \tau\theta^\mu +(1-\tau)\theta^{\mu'}$\;} } \caption{DDPG Algorithm} \end{algorithm} DDPG extends the scope of Q-learning and has advantages over Q-learning in dealing with continuous action spaces and high-dimensional problems. It is thus used in UAVs-assisted wireless communication problems to help solve trajectory design, resource allocation, and deployment problems~\cite{RLCoverage,DRLCoverage,DDPG-RA}. Reference \cite{RLCoverage} proposed a DDPG-based algorithm for learning the optimal trajectories of a swarm of UAVs to efficiently maximize their coverage for vehicles on highways with poor cellular infrastructure and a highly dynamic environment. Fig.~\ref{figure:RL-AC} shows this application scenario, where DDPG is used in a dynamic UAV-vehicular environment to optimize the UAVs' trajectories. In this article, each UAV carries out a continuous control task to serve the vehicles on a highway. The inputs of the UAVs in the dynamic vehicular environment at time slot $n$ include: the remaining energy of each UAV, the number of vehicles residing within the considered highway segment, the instantaneous positions of vehicles, the ground-level position of each UAV, the status of the UAVs describing whether a UAV is deployed or not, and the coverage indicators of each vehicle. Each UAV takes an action which gives a traveling distance in a specific direction. The reward takes several quantities into consideration: the coverage penalty due to non-coverage, the deployment penalty due to the deployment of a new UAV, the energy penalty due to traveling, and the penalty incurred if the UAV flies outside the given segment.
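The critic target $y_i$ and the soft (Polyak) target-network update $\theta' \leftarrow \tau\theta + (1-\tau)\theta'$ at the end of Algorithm~\ref{algo:DDPG} can be sketched with plain numpy. For illustration only, the networks are replaced by toy linear maps, and all shapes and values are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
state_dim, action_dim, batch, gamma, tau = 4, 2, 8, 0.99, 0.005

# Stand-ins for the weights theta^Q, theta^mu and their target copies.
W_critic = rng.normal(size=(state_dim + action_dim,))
W_actor = rng.normal(size=(state_dim, action_dim))
W_critic_targ = W_critic.copy()
W_actor_targ = W_actor.copy()

def critic(W, s, a):
    """Toy linear critic Q(s, a | W)."""
    return np.concatenate([s, a], axis=-1) @ W

def actor(W, s):
    """Toy linear deterministic policy mu(s | W)."""
    return s @ W

def critic_targets(r, s_next):
    """y_i = r_i + gamma * Q'(s_{i+1}, mu'(s_{i+1}))."""
    a_next = actor(W_actor_targ, s_next)
    return r + gamma * critic(W_critic_targ, s_next, a_next)

def soft_update(target, online):
    """In-place Polyak averaging: theta' <- tau*theta + (1 - tau)*theta'."""
    target *= (1.0 - tau)
    target += tau * online

# One batch of fake transitions.
s_next = rng.normal(size=(batch, state_dim))
r = rng.normal(size=(batch,))
y = critic_targets(r, s_next)
soft_update(W_critic_targ, W_critic)
soft_update(W_actor_targ, W_actor)
```

The small $\tau$ makes the target networks trail the online networks slowly, which is what stabilizes the bootstrapped critic targets.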
By using an actor-critic algorithm, the UAVs learn their flying trajectories and achieve effective coverage with a minimum number of UAVs. The proposed algorithm is compared with three other approaches (namely, a random UAV dispatching approach, a fixed dispatching rate approach, and a fixed hovering UAVs approach). It is shown that the proposed algorithm improves upon all three in terms of the number of required UAVs, since it allows a UAV to dynamically predict and adapt its trajectory. The proposed algorithm also achieves the same coverage with less energy consumption. The drawback of this algorithm, however, is that it takes a long time (16 hours) of learning in the vehicular environment to obtain good performance. With the same aim, Reference~\cite{DRLCoverage} proposed a DDPG-based method to find a flying control policy for UAVs that jointly maximizes coverage and fairness while minimizing energy consumption. As opposed to \cite{RLCoverage}, the state space contains three quantities: the current coverage score of each Point-of-Interest (PoI), the current coverage state of each PoI, and the current energy consumption. The reward function is given by \begin{align} r_t = \frac{f_t(\sum_{k=1}^K \Delta c_k^t)}{\sum_{i=1}^N\Delta e_i^t}, \end{align} where $f_t$ is the fairness index, $\Delta c_k^t$ is the incremental coverage score, and $\Delta e_i^t$ is the incremental energy consumption. The proposed algorithm learns the UAVs' flying distance and flying direction. Simulation results show that the proposed algorithm outperforms two baselines (i.e., random and greedy policies) in terms of average coverage score and average energy consumption, regardless of the number of UAVs used and the coverage range. By the same token, online DDPG is needed for future U-WCNs~\cite{DDPG-RA}.
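The fairness-weighted reward above can be sketched in a few lines. Two assumptions are made for concreteness, both labeled here rather than taken from the cited work: $f_t$ is instantiated as Jain's fairness index, and $f_t(\cdot)$ is read as multiplication by the fairness index; the numeric inputs are toy values.

```python
import numpy as np

def jain_fairness(values):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2), in (0, 1]."""
    values = np.asarray(values, dtype=float)
    return values.sum() ** 2 / (len(values) * np.sum(values ** 2))

def coverage_reward(coverage_increments, energy_increments):
    """r_t = f_t * (sum_k dc_k) / (sum_i de_i), with f_t a fairness index."""
    f_t = jain_fairness(coverage_increments)
    return f_t * np.sum(coverage_increments) / np.sum(energy_increments)

# Toy numbers: 4 PoIs with equal coverage gains, 2 UAVs spending energy.
r = coverage_reward([0.2, 0.2, 0.2, 0.2], [1.0, 1.0])
```

With equal per-PoI coverage gains the fairness index is 1, so the reward reduces to coverage gain per unit of energy spent; skewed coverage is penalized through $f_t < 1$.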
To jointly optimize the flight control of the UAV and the data collection scheduling along the trajectory in real time, \cite{DDPG-RA} proposed a new online flight resource allocation scheme based on a DDPG algorithm. In particular, the flight resource allocation problem is formulated as a Markov decision process, where the network states consist of the battery level, the data queue length, the signal-to-noise ratio of the channel, and the location of the UAV. The action set is composed of the heading, the patrol velocity of the UAV, and the selection of ground nodes for data collection. The heading and patrol velocity lie in continuous action spaces, and the reward reflects the packet loss of the network. Simulation results show the convergence of this algorithm. Furthermore, the same problem was extended and studied in \cite{DDPG_MC} by considering realistic flying and channel models of UAVs. An on-board DDPG-based maneuver control scheme was then proposed to jointly optimize the online maneuver control and the communication schedule. \subsection{Summary and lessons learned} This section presented several popular machine learning algorithms and their applications to various problems in UAVs-assisted wireless communication networks. The machine learning frameworks discussed above and their corresponding applications in UAVs-assisted wireless communication networks are summarized in Table \ref{table:ML}.
\begin{sidewaystable}[htbp] \caption{Types of machine learning approaches used in UAV-assisted wireless communication networks.} \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Refs} & \textbf{Description} & \textbf{ML algorithms} & \textbf{Dataset/State} & \textbf{Outputs/Action}\\ \hline \cite{ML-deployment}& On-demand deployment of UAV & GMM & \makecell{Number of users offloaded, \\[-2pt] intensity of cellular traffic} & \makecell{Number of aerial users, spatial distributions \\[-2pt] of aerial users and data traffic} \\ \hline \cite{CacheESN} & Proactive deployment& ESN & \makecell{Users' visited locations, \\[-2pt] contents requested, etc.} & Request distribution, mobility pattern \\ \hline \cite{DL_UAV} & UAV positioning& LSTM, MLP& LOS, elevation angle, etc.& Position, throughput\\ \hline \cite{LSTM_app}& Security &LSTM & Information of the content requester & Optimal D2D transmitter\\ \hline \cite{SNN-LSM}& Resource allocation & LSM & Users' information, UAVs' actions & Content request distribution, UAVs' actions\\ \hline \cite{UAV_FDL} & Mechanisms, challenges & Deep federated learning & -&-\\ \hline \cite{zhanghanbook,QL-emergency,QL-trajectory} &Positioning, trajectory planning& Q-learning & Position of UAVs & Up, down, left, etc.\\ \hline \cite{QL} & Resource allocation & Multi-agent Q-learning & 1, 0& \makecell{Selection of communicating user,\\[-2pt] power level and sub-channel}\\ \hline \cite{Attack-PT-Q} & Security & DQN & Attack mode & Transmit power selection \\ \hline \cite{DQL_power} & Trajectory planning, power transfer & Online Q-learning & Battery level, queue length, etc. & Selection of devices, velocity of UAVs, etc. \\ \hline \cite{doubleQ_application} &Data capture & Online double Q-learning& Battery level, queue length, etc.
& Select IoT nodes\\ \hline \cite{RLCoverage} & Trajectory design, coverage problem & DDPG & \makecell{Remaining energy, \\[-2pt] instantaneous position, etc.} & Flying distance and direction\\ \hline \cite{DRLCoverage} & Coverage, fairness, energy efficiency & DDPG & \makecell{Current coverage score, \\[-2pt]energy consumption, etc.}& Flight distance and direction\\ \hline \cite{DDPG-RA,DDPG_MC} & Online flight resource allocation & DDPG& Battery level, queue length, etc.& Adjust heading and velocity, node selection \\ \hline \end{tabular} } \label{table:ML} \end{center} \end{sidewaystable} In summary, the main lessons learned from this section include: \begin{itemize} \item Machine learning tools such as supervised learning, CNNs, RNNs, and SNNs are being used for channel modeling, resource management, and positioning problems. \item ML tools make these problems model-free and make it easier to analyze consumer behavior and requirements. \item Machine learning methods are limited by their high computational requirements. \item Machine learning-based methods may be combined with traditional optimization methods to better serve users. \item Federated learning and distributed learning are used to protect the privacy of data. \item Reinforcement learning enables an agent to learn by interacting with a dynamic environment. However, it also suffers from high computational complexity. \end{itemize} \section{The Intersection of game theory and machine learning in U-WCNs}\label{sec:intersection} With the increased deployment of mobile Internet and IoT systems, there are growing communication requirements for ultra-Reliable Low Latency Communication (uRLLC), massive Machine-Type Communication (mMTC), and enhanced Mobile Broadband (eMBB) systems. UAVs have the potential to play a major role in such fields and are also called upon for the elastic and reliable operation of V2X and Wireless Sensor Networks (WSNs).
It is important to delineate the limitations and benefits of deploying UAVs where ML and game-theoretic approaches may find broader applications. To be able to support massive wireless traffic demands, future networks will be multilayered and very dense. Consequently, a large number of UAVs will be deployed to satisfy such increasing demands, necessitating adaptive and data-driven algorithms. Swarms of UAVs equipped with innovative wireless communication technologies will be deployed to relay data, replace damaged communication infrastructure, assist overloaded networks, provide network backhaul, and serve as flying base stations. Due to the large number and complexity of UAVs, as well as the dynamic nature of UAV-assisted networks, such systems must possess self-organizing capabilities. Self-organizing wireless networks will enhance network coverage, increase network capacity, improve quality of service, decrease operational costs by eliminating human involvement in performing tasks, and enhance network reliability. However, having a large number of UAVs induces interference in the network, which necessitates distributed techniques that suit the nature and features of these networks. The large size, complexity, and dynamic nature, as well as the need for self-organization, pose challenges for centralized algorithms. Centralized approaches hinder scalability and induce backhaul network congestion, which limits the downlink and uplink data rates. Thus, advanced distributed algorithms are needed to address the interference challenge in UAVs-assisted networks. Players in UAV-assisted networks must therefore rely on distributed interference management approaches. These distributed algorithms will have multi-objective schemes that include optimizing the transmit power, the 3D locations, the azimuth and elevation angles of the UAVs' antennas, the trajectory of the UAV, and the hover or flight times.
Two prominent examples are realizing Ultra-Reliable and Low-Latency Communications and Massive Machine Type Communications in the UAV ecosystem. \subsection{uRLLC and mMTC} Realizing Ultra-Reliable and Low-Latency Communications (uRLLC) in UAV ecosystems faces a serious challenge, namely, the restricted frequency spectrum accessible for concurrent Air-to-Air (A2A), Ground-to-Air (G2A), Air-to-Ground (A2G), and Ground-to-Ground (G2G) communications. This is due to the fact that the implementation of A2G, G2A, and A2A links typically depends on dedicated wireless communication channels. UAV-based communication systems have a rigorous demand for system resources in both the G2A and A2G links due to the essential provision of high-data-rate backhauling and the exchange of time-critical UAV control signals. Moreover, in order to achieve uRLLC, multiple UAVs should be deployed simultaneously, which places a substantial burden on the available frequency spectrum. Hence, classical multiple access schemes based on orthogonal spectrum partition would rapidly drain the available resources even with small numbers of UAVs and ground users. Furthermore, this will lead to long delays in achieving uRLLC and introduces severe safety issues in controlling UAVs~\cite{GT_survey}. In this case, conventional model-based methods with ideal assumptions cannot address these challenges. Moreover, the conventional optimization problems of resource management and schedule design are neither convex nor deterministic. Deep learning may be an option for these non-convex and non-deterministic problems~\cite{URLLC_review}, but conventional data-driven deep learning is data-dependent and has a long training phase, which limits its applicability in real systems. To make the best use of deep learning in uRLLC, well-established models may be combined with deep learning in order to reduce the latency of uRLLC.
Transfer learning (model transfer) and federated learning can also be applied to reduce the training cost and improve learning efficiency. On the other hand, a multi-level architecture which enables device intelligence, UAV intelligence, and BS intelligence may also be proposed. Massive Machine Type Communications (mMTC) is another provisioned service of 5G and Beyond 5G (B5G). This service provides connections to a large number of devices/machines that sporadically exchange small amounts of data. In many practical data collection applications in mMTC networks, e.g., distributed intelligence realized by pervasive sensors, a large number of devices may be distributed over a wide area while each has to transmit only small bursts of data. In such a setup, it is very costly in terms of energy consumption to have the UAVs get close to each of them in order to collect data. This in turn leads to an energy-consumption trade-off between the UAVs and the ground devices \cite{Polo}. To tackle this problem, inter-UAV cooperation among UAV swarms is used, where the UAVs organize into clusters and dynamically select a cluster head based on criteria such as the remaining energy and location-related physical parameters. The UAVs then collect data in a small area and transmit it to the UAV cluster head for further processing. Furthermore, in the case of large-scale deployment of UAVs in heterogeneous applications, the data exchange activities of UAVs with the same ground BS may be highly random, which requires more efficient random access protocols \cite{Hoefer}. The performance of UAV-based mMTC faces yet another challenge, namely the small batteries in UAVs due to their size, weight, and power limitations. In the following subsections, some potential solutions to the above problems based on a combination of game theory and machine learning methods, and their respective challenges in U-WCNs, are summarized.
Other challenges and open problems, such as the softwarization~\cite{softwarization} of U-WCNs, intelligent reflective surfaces for UAV communications~\cite{Agyapong}, and effective routing protocols~\cite{U2RV}, are also important but are out of the scope of this article. \subsection{Combining game theory and machine learning in U-WCNs} Based on the review of existing works on game theory in U-WCNs (Section \ref{sec:GT}) and machine learning in U-WCNs (Section \ref{sec:ML}), we identify three approaches for combining game theory and machine learning methods to solve problems in U-WCNs. \begin{itemize} \item One approach for combining game theory and machine learning in U-WCNs is to use a machine learning-based method to analyze the users' communication behavior and habits by collecting historical data, and then use game theory to optimize a specific objective (such as association, positioning, trajectory planning, etc.), as done in References \cite{security,CacheESN,SNN-LSM}. \item Another approach applies to tasks such as search and rescue and parcel delivery: machine learning (CNN, RNN) may be used by UAVs for object (victims, items, etc.) recognition, while game theory is used to make high-level decisions. \item Yet another unification of game theory and machine learning is found in Multi-Agent Reinforcement Learning (MARL)~\cite{zhang2021multiagent}. Multi-agent reinforcement learning involves the participation of more than one player in optimizing an objective. To be more specific, multiple players make decisions in a common environment and aim to maximize their own long-term return by interacting with the environment and the other players. Without the need for exact modeling, MARL allows deep neural networks to be combined with game theory, thus realizing high-level decision making, as done in \cite{QL,zhang2021multiagent}.
\end{itemize} In the following two subsections, we introduce the benefits of mean field games, evolutionary games, and MARL in solving problems in multi-UAV communication networks, together with their challenges; we believe these are three tools with great potential in U-WCNs. \subsubsection{Mean field game and Evolutionary game} Mean field games and evolutionary games are suitable for large-scale networks and are thus believed to be appropriate options for interference management of massive numbers of UAVs~\cite{survey-UAV-2}. As a special form of differential game, an MFG models each player's interaction with the collective behavior (mean field) of all the players instead of with each of them individually. Such a mean field approximation can thus be used to model the distribution of states (such as the aggregated interference from other setups), which significantly simplifies the original problem of analyzing the pairwise coupling and gaming between players, thus reducing the computational complexity. One recent example is \cite{RobustMFG}, which minimizes delay and energy consumption in a UAV-caching system. In this work, a distributed delay optimization algorithm based on mean field game theory is proposed to model the large-scale UAV caching and dynamic flight strategy problem. Simulation results show that the proposed algorithm achieves a larger delay reduction and higher average energy efficiency compared to two other strategies. Evolutionary game theory provides a solid basis for games among multiple agents in an uncertain environment, based on the intuition that, in the real world, players are not completely rational and knowledgeable. Recent applications of evolutionary game theory in U-WCNs are mainly on access selection~\cite{jointaccessselect,EGT-modeselec}. Moreover, the study of evolutionary games (i.e., population dynamics) has so far been limited to a single population.
However, in the foreseeable future, many problems related to the interactions among different populations in massive U-WCNs will appear. It is expected that multi-UAV applications in wireless communication will benefit from mean field game and evolutionary game perspectives. \subsubsection{Multi-agent reinforcement learning} Future U-WCNs are highly dense, dynamic, and non-deterministic communication networks. On the one hand, due to the complexity of such systems, generating an exact model of the network environment is impractical. Model-free learning algorithms may, however, be used. On the other hand, the introduction of multiple intelligent agents results in a non-stationary environment, which makes the optimization/learning hard~\cite{matignon2012independent}. Hence, game theory and its solution frameworks are necessary guidelines for creating stable algorithms. Multi-Agent Reinforcement Learning (MARL), at the intersection of game theory and machine learning, is a promising toolkit for solving problems in dynamic and stochastic U-WCN environments. Recent studies of MARL assume that each agent is an independent learner, which means that each agent tries to optimize its behavior by receiving feedback from the environment but without communicating with the other agents. For example, \cite{MADDPG_independent_1} proposed a multi-agent DDPG (MADDPG) framework for solving the UAVs' trajectory control problem in a UAV-aided mobile edge computing network. In this article, each UAV learns its offloading decision and flying control independently in order to maximize the geographical fairness among the covered user equipments and the fairness of the user-equipment load of each UAV, and to minimize the overall energy consumption. Reference \cite{MADDPG_independent_2} presented a MADDPG approach to jointly design the UAVs' trajectories and allocate the UAVs' transmission power, aiming to satisfy the user equipments' quality of service requirements.
Reference \cite{MADDPG_independent_3} proposed a multi-agent deep Q-learning method for multi-UAV trajectory design in a cellular Internet of UAVs. Similarly, each UAV needs to determine its movements at each cycle to optimize the reward function, which is a sum of the valid transmission probabilities for the UAV in the UAV-to-Device (U2D) and cellular modes. All the above works are similar in that the formulated optimization problem is either non-convex, highly coupled, or stochastic, and is therefore hard to solve using traditional optimization methods, while greedy search algorithms have high time and space complexities. In such cases, multi-agent reinforcement learning solves these problems without exact knowledge of the model of the system. However, the feedback from the environment depends on the joint actions taken by all the agents, which makes the problem non-stationary and state-dependent. Multi-agent communication and cooperation are necessary to deal with the uncertainty of the dynamic environment. Considering this point, \cite{Cooperative_MARL_1} proposed a centralized-offline-training and decentralized-online-decision-making MADDPG mechanism for vehicle association and resource allocation in a UAV-assisted vehicular network. In this mechanism, for the centralized offline training phase, the observations and actions of all the UAV agents are needed to train the network. Reference \cite{Cooperative_MARL_2} considered a cellular Internet of UAVs executing sensing tasks through cooperative sensing and transmission to minimize the Age of Information (AoI). By cooperatively selecting from a discrete set of tasks and a continuous set of locations for sensing and transmission, the UAVs are able to minimize the age of information. Similarly, the authors regarded the whole UAV-task-base station system as a dynamic environment, where the state includes the locations of all UAVs, the amount of sensing data, the AoI of each task, and so on.
Finally, a compound-action actor-critic algorithm, where a deep Q-network is used to learn the task selection decisions of the UAVs and a DDPG is used for the sensing location selection, is proposed for this optimization problem. In the above two examples, scalability turns out to be a problem when combining the action spaces of all agents without any effective mechanism. It has been argued that before each agent makes decisions, the agent needs to be able to decide when and with whom to communicate, and to distinguish between important and unimportant information~\cite{Chen2020,NiuYaru}. As a result, graph attention multi-agent reinforcement learning has been proposed as a potential solution to the scalability problem of classical MARL and has more practical relevance~\cite{Chen2020,NiuYaru}, by encoding the observation-action information into fixed-size features for each agent regardless of the number of neighbors. To the authors' best knowledge, few applications of these algorithms exist in U-WCNs. MARL and its variants enable agents to share information and learn from the environment to improve performance. It is envisioned that MARL will play an increasingly important part in uRLLC and mMTC for U-WCNs. \subsection{Summary} Conventional game theory methods consider the interaction of each player with the other players under some coupling of their cost functions. This coupling increases the computational complexity as the number of players grows. Machine learning-based methods depend on historical data. Federated learning, which allows model parameters instead of data to be shared, is also limited by heterogeneous data distributions. Mean field games and evolutionary games are useful tools for dealing with large numbers of agents in interference management and resource allocation, but fail to model the interaction between the environment and the players.
Multi-agent reinforcement learning, which allows each agent to learn without a model of the environment, focuses on independent learning for each agent. However, in practice, the action taken by one agent affects the rewards of opponent agents and the evolution of the state. Despite many successful empirical applications of MARL in U-WCNs, the theoretical understanding of MARL algorithms remains in its infancy. Nevertheless, the combination of two or more of these methods can significantly alleviate the shortcomings of each method and solve problems in U-WCNs more efficiently. One recent example in UAV coverage control is \cite{MFG_DRL_UAVcontrol}, which fused mean field games with multi-agent deep reinforcement learning, where the MFG is used to construct the HJB/FPK equations and the distribution of states is obtained through a neural network feature-embedding method. In this way, the authors addressed the difficulties of using MFGs (i.e., a complicated calculation process, limited sensing range, etc.) in real applications. With the development of a better theoretical understanding of these algorithms and more efficient computational tools, the combination of game theory and machine learning has a promising future for applications in U-WCNs. \section{Conclusion} \label{sec:conclusion} With the increased deployment of 5G, tele-medicine, IoT, AR/VR, smart cities, and intelligent transportation, there is an increased desire for reliable wireless communications and privacy protection. UAVs-assisted wireless communication systems are a potentially excellent candidate for providing such services. This article reviewed the state-of-the-art applications of game theory and machine learning-based algorithms in UAVs-assisted wireless communication systems. Several challenges and future research directions were also discussed. In addition, we discussed the combined use of game theory and machine learning.
In the near future, UAVs may deliver your parcel from an online store based on an order placed from your phone or any IoT device; you will enjoy fast internet surfing and share your videos while mountaineering; UAVs will monitor public safety, including during viral outbreaks and natural disasters. The technologies reviewed in this paper will help make such scenarios possible.
\section{Introduction} The past decade has witnessed the expeditious evolution of communication and computing technologies, and their innovative applications in many emerging fields such as the Internet of Vehicles and E-health, where massive amounts of data are generated, exchanged, and utilized. This development brings both technical challenges and great opportunities for a wide range of machine learning (ML)-based applications, since ML holds considerable promise for fast decisions and inference without human intervention \cite{MyNetwork, SL-Healthcare-2018}. Besides, device-to-device (D2D) communication-enabled multi-layer heterogeneous wireless networks are becoming one of the main components of 5G/6G networks, where the complicated network topologies impose great challenges on the implementation of ML\cite{FogLearning}. Securing abundant training data and computation resources is a fundamental requirement of ML. Traditional ML is generally centralized: massive data are collected and transmitted from local devices to centralized data centers associated with remote cloud servers. Despite its advantages, such as high accuracy and efficiency, centralized ML faces the following deficiencies: \begin{itemize} \item Frequent transmission of big training data is challenging even for wired links, let alone dynamic wireless links, and also incurs heavy energy consumption on local devices. \item Centralized ML is not conducive to rapid model deployment. Besides, it suffers from unsatisfactory scalability in large-scale networks, especially when the model requires frequent retraining. \item Centralized ML is privacy-unfriendly, since many applications may involve much private information, e.g., pathological pictures in E-health\cite{SL-Healthcare-2018}.
Under these circumstances, local devices (e.g., patients) may be unwilling to provide privacy-sensitive data due to ever-growing privacy concerns, which can result in a dilemma between model training and privacy protection. \end{itemize} \subsection{Preliminaries of Distributed ML} To reconcile the demand for ML model training with privacy protection, a straightforward idea is to let distributed data owners conduct the model training so as to avoid sharing raw data, which motivates distributed ML architectures. Federated learning (FL) \cite{FL-2016} and split learning (SL) \cite{SL-2018} represent two prominent examples. FL and SL implement distributed ML from different perspectives; the corresponding learning architectures are depicted in Fig. \ref{fig1}. \textbf{Training process of FL}: In a typical FL scheme, a set of smart devices termed clients participate in the iterative model training. At the beginning of each iteration, each client receives a global model from a parameter server, then conducts local training to update the model by performing stochastic gradient descent on its local training data. After the completion of local training, the clients upload their model parameters to the FL server in parallel (step 1). Then, the FL server aggregates (e.g., via FedAvg, $Avg(\cdot)$) all the received model parameters into a new global model, which is broadcast to the clients (step 2) for the next training round. Notably, each client only exchanges model parameters with the FL server, which can thus prevent privacy disclosure to some extent. \textbf{Training process of SL}: In a typical SL training process, an ML network is first split into two subnetworks by cutting at a middle layer of the network. Generally, the subnetwork containing the input layer is deployed on the clients' side, and the one containing the output layer is deployed at an SL server.
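The parameter-aggregation step of FL above (FedAvg-style averaging) can be sketched as follows. Representing a model as a flat list of floats is an illustrative simplification, not the actual tensor-based implementation.

```python
# Minimal FedAvg sketch: the server averages the model parameters
# uploaded by the clients into a new global model (step 1 -> step 2).
# Flat float lists stand in for real per-layer tensors.

def fed_avg(client_models, client_sizes=None):
    """Average client models, optionally weighted by local data size."""
    n = len(client_models)
    if client_sizes is None:
        client_sizes = [1] * n  # unweighted average
    total = sum(client_sizes)
    dim = len(client_models[0])
    global_model = [0.0] * dim
    for model, size in zip(client_models, client_sizes):
        for i, w in enumerate(model):
            global_model[i] += w * size / total
    return global_model

# One round: two clients upload updated parameters, the server
# aggregates them and would then broadcast the result back.
clients = [[1.0, 2.0], [3.0, 4.0]]
print(fed_avg(clients))  # -> [2.0, 3.0]
```

Weighting by local data size is the common FedAvg convention; the unweighted case corresponds to clients holding equal amounts of data.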
In each iteration, a client first starts training by performing forward propagation; then, the activation data, i.e., the output of the client's cut layer (with label data), are transmitted to the SL server (step 1). After receiving the activation data, the SL server continues forward propagation over its subnetwork to obtain the output results; it then starts back propagation by calculating the loss from the output results and the label data, based on label sharing\cite{SL-2018}. Similarly, the gradient at the cut layer is transmitted back to the client for the subsequent back propagation (step 2). Notably, if the client does not transmit label data to the server, i.e., without label sharing, the output layer should be deployed on the client's side for loss calculation. Finally, the subnetworks on both the client and the server are updated respectively. By offloading the model parameters of the current client to the next client (step 3) and repeating the above steps, the model can be trained sequentially over multiple clients. \begin{figure}[t] \centering \includegraphics[width=3.4in]{Fig1.pdf} \caption{An illustration of the learning architectures of federated learning (the left side) and split learning (the right side).} \label{fig1} \end{figure} \subsection{Motivations and Contributions} Interestingly, FL and SL share some common advantages, such as communication/computation cost reduction and privacy preservation. However, the implementation of FL and SL in heterogeneous wireless networks may also encounter the following challenges: \begin{itemize} \item \textit{Network heterogeneity}: Since the SL server can only interact with the clients sequentially, the low level of parallelization may hinder the convergence speed. For FL, owing to frequent model reporting from clients to the server, unstable wireless links may cause model performance degradation due to transmission failures.
Besides, conventional FL and SL are implemented over a star topology, where a central server coordinates the model updates of all the clients; thus, FL and SL may suffer from unsatisfactory scalability and performance degradation in large-scale D2D-enabled heterogeneous networks due to single points of failure\cite{FogLearning}. For example, under a space-air-ground-ocean integrated 6G network architecture, satellites, unmanned aerial vehicles, smart vehicles, and underwater unmanned vehicles can all be clients, where frequent D2D communications among them further complicate the network topology (e.g., a hierarchical tree topology). Such disadvantages advocate combining the FL and SL architectures to guarantee the training performance in heterogeneous wireless networks. \item \textit{Client heterogeneity}: Clients generally have heterogeneous computation/communication/energy capabilities. Each client in FL requires more computation/energy resources to support the complete model training. Besides, since stable wireless links are essential to guarantee successful model transmissions (e.g., the size of a large complete model can reach 1 GB\cite{BAcombo}), FL prefers clients with sufficient computation/energy/communication resources. The SL architecture can generally support clients with constrained on-board computation/energy resources, since each client only has to train a partial model, which, however, can incur heavy communication overhead. Moreover, imbalanced and non-independent and identically distributed (non-IID) data on clients may dramatically impact training performance. To this end, exploring the combination of the FL and SL architectures to make full use of clients' heterogeneous capabilities and resources is another major motivation. \item \textit{Optimization target heterogeneity}: Typically, model test accuracy and convergence speed represent the key optimization targets of distributed ML\cite{FogLearning}.
Besides, when distributed ML is implemented in heterogeneous networks, common indicators such as data rate, throughput, delay, and energy consumption also represent major concerns, which complicates problems such as client scheduling and resource allocation\cite{MultiFL-TMC}. Thus, it is significant to improve distributed ML architectures according to the characteristics of heterogeneous networks while optimizing multiple targets. \end{itemize} Given the above-discussed major challenges and limitations of conventional FL and SL in heterogeneous networks, this article proposes two comprehensive architectures by analyzing the combination of FL and SL. Our main contributions are highlighted below: \begin{itemize} \item A new hybrid split FL (HSFL) architecture is first proposed by integrating the split architecture into FL. Then, the hybrid federated SL (HFSL) architecture is introduced, which unifies the federated architecture with SL. The advantages and performance of the two novel learning architectures are analyzed and compared in detail. \item Open research directions are comprehensively discussed to identify the challenges and opportunities of our proposed architectures for future implementations. \item Preliminary simulations are conducted to verify the feasibility of our proposed architectures on three datasets under highly non-IID data settings. \end{itemize} \begin{figure*}[t] \centering \subfigure[] {\includegraphics[width=3.1in,angle=0]{Fig2.pdf}} \subfigure[] {\includegraphics[width=3.6in,angle=0]{Fig3.pdf}} \caption{The proposed hybrid ML architectures in a multi-layer D2D-aided heterogeneous network: a) a schematic of the proposed HSFL architecture; b) a schematic of the proposed HFSL architecture.} \label{fig2} \end{figure*} \section{Related work} Several initial works have been devoted to exploring the combination of FL and SL, as well as the corresponding performance improvements.
For HSFL, \cite{BAcombo} proposed a decentralized FL mechanism (i.e., gossip learning) based on FL model splitting in a D2D network, where each FL client only transmits model segments to neighboring clients. However, this work only considers D2D networks. For HFSL, \cite{PSL-2020} proposed a parallel SL framework, where all the clients' subnetworks are synchronized. In each training round, all the clients send their gradients back to the server, and the server averages the gradients and transmits them back to the clients. Although this method enables SL with parallelism during the clients' model update process, it still depends on a single server and thus results in poor scalability, especially in large-scale networks. \cite{J-SplitFed-20} proposed the SplitFed framework, where the clients' model parameters are averaged by a dedicated server. Besides, the subnetwork at the server side is updated by averaging the gradients of each client. However, unsatisfactory scalability represents one of the key drawbacks of SplitFed, upon considering an increasing number of clients\cite{C-CLOUD-FSL-21}. Therefore, \cite{C-CLOUD-FSL-21} deployed edge servers as coadjutants to alleviate the communication and computation load of the SL server; each edge server interacts with one or several clients to exchange gradients, while the SL server further calculates averaged gradients and updates the subnetworks at the edge servers. \cite{J-TC-SFLG-22} put forward similar ideas by deploying multiple FL servers to handle groups of clients, and additionally conducted comprehensive experiments on Raspberry Pi devices. Although these works have made certain contributions, none of them have comprehensively discussed the implementation of integrated FL and SL in heterogeneous networks while evaluating the performance associated with different architectures.
\section{Hybrid Architectures for Distributed ML} \subsection{Architecture of HSFL} According to the previous discussions, FL clients may incur heavy communication costs, since each of them has to transmit a complete model to the server, especially when considering large models over unstable wireless links. Besides, in typical FL, the model of a client is no longer useful for updating the global model when facing transmission failures, e.g., when only part of the corresponding model has been successfully transmitted to the server. Inspired by the basic idea of SL and to overcome the drawbacks of FL, we consider splitting the model from a different perspective, namely, by the number of model parameters. A comprehensive HSFL architecture, inspired by \cite{BAcombo}, is then proposed for a multi-layer heterogeneous network, as depicted in Fig. \ref{fig2}(a), which consists of D2D clients, cellular clients, edge servers, and the main server. Key modules of HSFL are detailed below. \textbf{Model splitting:} The model is first split into $M$ segments ($\{S_{1},\dots,S_{M}\}$) of equal data size, where each segment is identified by a unique identification number. $M$ is a hyperparameter that can differ across FL models. More importantly, the larger $M$ is, the finer the model granularity and the higher the transmission efficiency that can be reached. Considering different communication conditions, an appropriate value of $M$ can ensure a good trade-off between transmission capacity and communication efficiency. Although each client could in theory set $M$ by itself, all clients use the same value of $M$ to facilitate model aggregation/storage. \textbf{Model transmission and aggregation at clients:} Each client first evaluates the wireless channel quality and transmission capacity to determine the number of segments that can be transmitted successfully.
Any specific segments for transmission can be randomly chosen or specified by the receiver, while different clients can transmit the same segments. Specifically, for D2D clients, assuming that each client can communicate with its neighboring clients within one hop, it sends/receives at least one segment to/from each neighbor. Two paradigms for model transmission and model aggregation are applied for D2D clients and cellular clients, respectively, as shown in Fig. \ref{fig2}(a). On the right side, cellular clients transmit model segments to the edge server in parallel; for example, client 1 sends segment 1 to the edge server while client 2 sends segments 2 and 3. On the left side of Fig. \ref{fig2}(a), D2D clients transmit model segments to their neighboring clients sequentially in a decentralized manner, and the last D2D client sends the aggregated model to the edge server. Specifically, edge servers or D2D clients perform segment-wise model aggregation, where the model segments are aggregated individually. For example, edge servers aggregate segment 1 of $W_{E,B}$ by averaging all the received copies of segment 1. \textbf{Horizontal/Vertical model aggregation at edge servers:} Vertical aggregation and horizontal aggregation are considered in HSFL to improve communication efficiency. For vertical model aggregation, the model transmission and aggregation can be repeated for multiple rounds to obtain multiple model replicas, which reduces the communication cost between the edge servers and the main server\cite{FogLearning}. Then, the model is transmitted to the main server for a wide-range global aggregation. Horizontal aggregation between edge servers can be regarded as a special D2D communication process at the edge-server level, which can further reduce the total model size transmitted to the main server and thus alleviate communication costs.
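The model splitting and segment-wise aggregation described above can be sketched as follows. Flat float lists stand in for real parameter tensors, and equal-size segments are assumed for simplicity, as in the text.

```python
# HSFL sketch: split a flat model into M equal-size segments, then
# aggregate segment-wise -- each segment is averaged over whichever
# clients managed to transmit it, so a partially transmitted model
# still contributes to the aggregate.

def split_model(model, M):
    """Split a flat parameter list into M segments keyed by segment id."""
    size = len(model) // M
    return {s: model[s * size:(s + 1) * size] for s in range(M)}

def aggregate_segments(received, M):
    """received: list of dicts {segment_id: segment}, one per client.
    Returns the segment-wise average of every segment seen at least once."""
    aggregated = {}
    for s in range(M):
        contributions = [r[s] for r in received if s in r]
        if contributions:
            aggregated[s] = [sum(vals) / len(contributions)
                             for vals in zip(*contributions)]
    return aggregated

# Client 1 transmits segment 0 only (e.g., poor channel); client 2
# transmits both segments. Segment 0 is averaged over two clients,
# segment 1 comes from client 2 alone.
c1 = split_model([2.0, 2.0, 2.0, 2.0], M=2)
c2 = split_model([4.0, 4.0, 6.0, 6.0], M=2)
print(aggregate_segments([{0: c1[0]}, c2], M=2))
```

The per-segment averaging mirrors the text's example of the edge server aggregating segment 1 of $W_{E,B}$ from all received copies of that segment.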
Moreover, horizontal aggregation can be used for model parameter sharing, which greatly accelerates local model training while mitigating the influence of non-IID data distributions among clients. \subsection{Architecture of HFSL} Fig. \ref{fig2}(b) illustrates the proposed HFSL architecture, whose main principle is to parallelize SL training and average the model weights over multiple clients by applying FL. Similar to HSFL, both D2D clients and cellular clients are considered. When HFSL is implemented over multiple D2D clients, the clients can be grouped into several clusters, e.g., based on their communication/computation capabilities. Then, the ML model is split into multiple subnetworks and distributed to the clients within each cluster. As shown on the left side of Fig. \ref{fig2}(b), the model is split into 4 and 3 subnetworks for the two clusters, respectively. The forward propagation starts at the client holding the input layer (e.g., client 1) and ends at the client holding the output layer (e.g., client 4); the back propagation then proceeds in reverse order. Thus, each D2D cluster can be regarded as a hyper FL client that trains a complete model, i.e., ${W}_{1}$ or ${W}_{2}$. The models can be transmitted by the clients to the edge server/neighboring cluster for averaging aggregation. In the next training round, the training starts with different clients under a new model splitting setting. Apparently, the training process is sequential within each D2D client cluster and parallel across different clusters. For cellular clients, we borrow ideas from \cite{PSL-2020, J-SplitFed-20,J-TC-SFLG-22, C-CLOUD-FSL-21} and integrate them into the proposed HFSL. As shown on the right side of Fig. \ref{fig2}(b), the ML model is split into two subnetworks $C$ and $H$, where each client trains the same subnetwork $C$ in parallel while the subnetwork $H$ is deployed at edge server $B$.
When multiple clients forward the activation results to the server in parallel, edge server $B$ first replicates its subnetwork to obtain multiple model copies, so as to conduct forward propagation and back propagation for different clients in parallel. The number of model copies should equal the number of clients, e.g., two copies for clients $A$ and $B$. Then, the gradients are sent back to the clients for updating the corresponding subnetwork parameters, $C_{A}$ and $C_{B}$, while the server updates the parameters $H_{1}$ and $H_{2}$. Next, the clients send the updated model parameters to the edge server in parallel. Finally, the edge server aggregates its model copies into $H_{B}$ and the two clients' subnetworks into $C_{B}$; $C_{B}$ is then sent back to the clients for the next training round. Notably, the number of cellular clients associated with each edge server should be optimized so as to alleviate the model storage cost of the edge server. Similarly, edge server $B$ can further send the complete model parameters $W_{E,B}$ to the main server for wide-range global aggregation. Besides, horizontal model aggregation can also be performed between the edge servers in HFSL, which is omitted here. \subsection{Comparison of HSFL and HFSL} \textbf{Connections and differences:} Although HSFL and HFSL are both concrete implementations of the combination of the FL and SL architectures, they have both connections and differences. First, the core architecture of HSFL is FL, namely, all the clients train a complete ML model, while only having to transmit partial (or complete) model parameters. Model splitting based on the number of model parameters can be considered a special case of SL. Differently, the core architecture of HFSL is SL, that is, each client only trains a certain part of the model, while the federated architecture aims to realize parallel training and multi-layer model aggregation.
Therefore, HSFL and HFSL essentially represent two different architectures, but both can be implemented in heterogeneous networks. Generally, HSFL and HFSL are interrelated and can be organically combined to form a more complex architecture. \textbf{Performance discussion}: To better compare the different learning architectures, we quantify the communication (comm.)/computation (comp.) costs of the clients in a simple analysis. Without loss of generality, we mainly consider cellular clients, since it is challenging to compare the performance of different architectures under the same parameter settings for D2D clients; besides, it is hard to find general settings for decentralized FL and SL. Assume that there are $N$ clients and the total training data size is $D$, where each client has the same training data size $D/N$. The overall model data size is $|W|$. For SL and HFSL, the model is split into two subnetworks; the size of the cut layer is $b$, and the data size of forward or back propagation over the cut layer is $bD/N$\cite{MIT-FLSL-19}. The fraction of the model held by the clients is $\gamma$, and $(1-\gamma)$ is held by the server. For HSFL, the model is equally divided into $M$ segments, and each client transmits $m$ segments to the server in one training round. The computation cost, i.e., the number of floating point operations (FLOPs), of training a complete model is denoted by $F$, and the fraction of the computation load allocated to the client is $\lambda$. Thus, the total computation cost is $NF$ for FL and HSFL, while it is $N\lambda F$ for SL and HFSL. Correspondingly, HSFL reduces the total communication cost of FL from $2N|W|$ to $2Nm|W|/M$, while HFSL raises the communication cost of SL from $N(2bD/N+\gamma|W|)$ to $2N(bD/N+\gamma|W|)$, which, however, greatly reduces the training time through parallelization (as shown in the simulation).
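The closed-form communication costs above can be evaluated directly; the numerical values below are illustrative assumptions, not measurements from the paper's testbed.

```python
# Per-round total client communication cost for each architecture,
# following the closed-form expressions in the text (same units as W).

def comm_cost(arch, N, W, b=0, D=0, gamma=0.0, M=1, m=1):
    """N clients, model size W, cut-layer size b, total data size D,
    client model fraction gamma, m of M segments transmitted (HSFL)."""
    if arch == "FL":    # full model up and down per client: 2N|W|
        return 2 * N * W
    if arch == "HSFL":  # only m of M equal segments per direction
        return 2 * N * m * W / M
    if arch == "SL":    # cut-layer traffic plus client subnetwork handoff
        return N * (2 * b * D / N + gamma * W)
    if arch == "HFSL":  # parallel SL: cut-layer traffic + client model exchange
        return N * 2 * (b * D / N + gamma * W)
    raise ValueError(arch)

# Illustrative setting: N=4 clients, |W|=60 (MB), m=1 of M=4 segments.
print(comm_cost("FL",   N=4, W=60))            # 2*4*60 = 480
print(comm_cost("HSFL", N=4, W=60, M=4, m=1))  # a quarter of FL's cost
```

This makes the trade-off in the text concrete: HSFL scales FL's cost down by $m/M$, while HFSL roughly doubles the client-model portion of SL's cost in exchange for parallel training.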
\textbf{Key advantages:} According to the above discussions, the advantages of HSFL and HFSL are summarized as follows: \begin{itemize} \item Training cost reduction: Compared with FL/SL, HSFL/HFSL can greatly reduce the communication cost and training time, while HSFL works better than HFSL for training large models with a large number of clients. \item Communication efficiency improvement: Spectrum can be efficiently reused thanks to the in-band D2D communication mode with an appropriate resource-sharing scheme. \item Data-efficient training: HSFL can alleviate the loss caused by possible model transmission failures, while making good use of the local models trained by the clients. In addition, HFSL can facilitate more clients with limited computation resources to participate in each global training round, thanks to its parallelism. Besides, D2D communications and the multi-layer hybrid network architecture greatly expand the client coverage of the servers, so that more training data can be efficiently utilized. \item Good adaptation to time-varying network topologies: Based on the hybrid network architecture, HSFL and HFSL can adapt well to dynamic network environments (e.g., caused by the mobility of clients) and achieve good stability during the training process. \item High privacy protection: HSFL and HFSL enable clients to train or transmit partial models, which reduces the possibility of privacy disclosure incurred by malicious attacks or eavesdropping.
\end{itemize} \ \begin{figure*}[htbp] \centering \subfigure[] {\includegraphics[width=2.3in,angle=0]{Fig4.pdf}} \subfigure[] {\includegraphics[width=2.3in,angle=0]{Fig5.pdf}} \subfigure[] {\includegraphics[width=2.3in,angle=0]{Fig6.pdf}} \caption{Test accuracy of CL, FL, SL, HSFL, and HFSL over the training rounds, with four clients upon considering three datasets: a) MNIST; b) Fashion-MNIST; c) MedMNIST.} \label{fig3} \end{figure*} \begin{table*}[htbp] \footnotesize \begin{center}\renewcommand\arraystretch{1.2} \caption{Performance comparison of different architectures for different data sets in one global round.} \setlength{\tabcolsep}{0.3mm}{ \begin{tabular}{|ccccccc|} \hline \multicolumn{7}{|c|}{\cellcolor{gray!40}\textbf{MNIST}} \\ \hline \multicolumn{1}{|c|}{Architectures} & \multicolumn{1}{c|}{Client network} & \multicolumn{1}{c|}{Server network} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Total comm. cost \\ (MB)\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Total comp. 
cost \\ (FLOPs)\end{tabular}} & \multicolumn{1}{c|}{Test accuracy} & \begin{tabular}[c]{@{}c@{}}Training time\\ (Second)\end{tabular} \\ \hline \multicolumn{1}{|c|}{CL} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}cov2d(32,(5,5))+cov2d(64,(3,3))+\\ dense(30976,128)+dense(128,64)+\\ dense(64,10)\end{tabular}} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{99.05\%} & 5.02 \\ \hline \multicolumn{1}{|c|}{FL} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}cov2d(32,(5,5))+cov2d(64,(3,3))+\\ dense(30976,128)+dense(128,64)+\\ dense(64,10)\end{tabular}} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{60.94} & \multicolumn{1}{c|}{$1.07*10^8$} & \multicolumn{1}{c|}{96.83\%} & 50.83 \\ \hline \multicolumn{1}{|c|}{SL} & \multicolumn{1}{c|}{cov2d(32,(5,5))+cov2d(64,(3,3))} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}dense(30976,128)+dense(128,64)+\\ dense(64,10)\end{tabular}} & \multicolumn{1}{c|}{7090.1} & \multicolumn{1}{c|}{$7.52*10^7$} & \multicolumn{1}{c|}{96.20\%} & 891.02 \\ \hline \multicolumn{1}{|c|}{HSFL} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}cov2d(32,(5,5))+cov2d(64,(3,3))+\\ dense(30976,128)+dense(128,64)+\\ dense(64,10)\end{tabular}} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{30.47} & \multicolumn{1}{c|}{$1.07*10^8$} & \multicolumn{1}{c|}{96.92\%} & 49.91 \\ \hline \multicolumn{1}{|c|}{HFSL} & \multicolumn{1}{c|}{cov2d(32,(5,5))+cov2d(64,(3,3))} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}dense(30976,128)+dense(128,64)+\\ dense(64,10)\end{tabular}} & \multicolumn{1}{c|}{7090.4} & \multicolumn{1}{c|}{$7.52*10^7$} & \multicolumn{1}{c|}{96.33\%} & 222.75 \\ \hline \multicolumn{7}{|c|}{\cellcolor{gray!40}\textbf{Fashion-MNIST}} \\ \hline \multicolumn{1}{|c|}{CL} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}cov2d(32,(5,5))+cov2d(64,(3,3))+\\ dense(30976,128)+dense(128,64)+\\ dense(64,10)\end{tabular}} & \multicolumn{1}{c|}{N/A} & 
\multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{90.34\%} & 5.02 \\ \hline \multicolumn{1}{|c|}{FL} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}cov2d(32,(5,5))+cov2d(64,(3,3))+\\ dense(30976,128)+dense(128,64)+\\ dense(64,10)\end{tabular}} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{60.94} & \multicolumn{1}{c|}{$1.07*10^8$} & \multicolumn{1}{c|}{78.00\%} & 50.83 \\ \hline \multicolumn{1}{|c|}{SL} & \multicolumn{1}{c|}{cov2d(32,(5,5))+cov2d(64,(3,3))} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}dense(30976,128)+dense(128,64)+\\ dense(64,10)\end{tabular}} & \multicolumn{1}{c|}{7090.1} & \multicolumn{1}{c|}{$7.52*10^7$} & \multicolumn{1}{c|}{68.51\%} & 891.02 \\ \hline \multicolumn{1}{|c|}{HSFL} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}cov2d(32,(5,5))+cov2d(64,(3,3))+\\ dense(30976,128)+dense(128,64)+\\ dense(64,10)\end{tabular}} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{30.47} & \multicolumn{1}{c|}{$1.07*10^8$} & \multicolumn{1}{c|}{78.26\%} & 49.91 \\ \hline \multicolumn{1}{|c|}{HFSL} & \multicolumn{1}{c|}{cov2d(32,(5,5))+cov2d(64,(3,3))} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}dense(30976,128)+dense(128,64)+\\ dense(64,10)\end{tabular}} & \multicolumn{1}{c|}{7090.4} & \multicolumn{1}{c|}{$7.52*10^7$} & \multicolumn{1}{c|}{80.30\%} & 222.75 \\ \hline \multicolumn{7}{|c|}{\cellcolor{gray!40}\textbf{MedMNIST}} \\ \hline \multicolumn{1}{|c|}{CL} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{ResNet(32)} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{91.73\%} & 2.03 \\ \hline \multicolumn{1}{|c|}{FL} & \multicolumn{1}{c|}{ResNet(32)} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{7.18} & \multicolumn{1}{c|}{$4.27*10^8$} & \multicolumn{1}{c|}{80.21\%} & 52.77 \\ \hline \multicolumn{1}{|c|}{SL} & \multicolumn{1}{c|}{ResNet(2)} & \multicolumn{1}{c|}{ResNet(30)} & \multicolumn{1}{c|}{1400.02} & \multicolumn{1}{c|}{$3.2*10^7$} & \multicolumn{1}{c|}{78.21\%} & 180.18 \\ \hline \multicolumn{1}{|c|}{HSFL} & 
\multicolumn{1}{c|}{ResNet(32)} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{3.59} & \multicolumn{1}{c|}{$4.27*10^8$} & \multicolumn{1}{c|}{82.41\%} & 52.66 \\ \hline \multicolumn{1}{|c|}{HFSL} & \multicolumn{1}{c|}{ResNet(2)} & \multicolumn{1}{c|}{ResNet(30)} & \multicolumn{1}{c|}{1400.18} & \multicolumn{1}{c|}{$3.2*10^7$} & \multicolumn{1}{c|}{81.32\%} & 45.04 \\ \hline \end{tabular} } \end{center} \end{table*} \section{Open Research Directions} This section discusses interesting research directions for the proposed HSFL and HFSL architectures; some examples are given below. \textbf{Model splitting and resource allocation:} For HSFL, an applicable splitting scheme, e.g., the choice of $M$, can greatly improve the transmission efficiency and reduce the client dropout rate. Particularly, considering a D2D network where the wireless links among clients are random and dynamic, how to determine a reasonable $M$ is a noteworthy problem, e.g., via designing a dynamic model splitting scheme. For HFSL, the structural characteristics of the model impose higher complexity and additional challenges on the model splitting problem, especially in D2D networks. For example, the model of HFSL can be divided into multiple subnetworks and assigned to different D2D clients, where the subnetworks and the D2D clients are regarded as two directed graphs. In this case, the subnetwork allocation problem over a D2D network is formulated as a subgraph isomorphism problem\cite{Liwang-IOTJ}, which is generally NP-complete and greatly calls for low-complexity and responsive solution designs. Besides, different model segments (layers) may have various impacts on training performance, highly depending on the local dataset of each client; thus, the model splitting scheme for HSFL and HFSL should be client/segment/layer-wise, which remains an open problem.
Moreover, considering the heterogeneous resource conditions of clients, model splitting should be jointly optimized with feasible resource allocation strategies for better training performance and higher resource efficiency. \textbf{Privacy leakage and protection:} Although HFSL and HSFL provide a higher level of privacy protection compared with SL and FL, privacy leakage is still inevitable even when all the participants (i.e., clients or edge servers) are semi-honest (i.e., \textit{honest-but-curious})\cite{Sp-book}. For example, each participant can receive one or multiple model segments from other participants in HSFL. As the training proceeds, each participant eventually has access to a complete model, which exposes HSFL to the same risk of privacy leakage from model attacks (e.g., membership inference attacks, model inversion attacks) as FL. Although the complete model is not available to the clients in HFSL, data features and gradient information need to be exchanged frequently and directly, making the risk of privacy leakage during the communication process much higher. Existing techniques such as differential privacy and homomorphic encryption can alleviate the above-mentioned privacy leakage risk to a certain extent, but their high computational complexity and model performance degradation may fail to bring satisfactory results. Therefore, it is still challenging to design algorithms with both low computational complexity and guaranteed model accuracy that achieve a lower privacy leakage risk, by combining the features of HSFL/HFSL (e.g., dynamic model partitioning and aggregation strategies). \textbf{Incentive mechanism design:} Although many existing works have investigated incentive mechanisms for FL/SL-based services, the problem becomes much more complicated when considering HSFL and HFSL.
For example, the granularity of services in HSFL/HFSL (at the model segment/layer level) provided by each client can be smaller, while involving more cooperation among clients. Specifically, problems such as reward transfer among different clients and the secondary distribution of internal rewards within client clusters under HSFL/HFSL deserve careful consideration. \textbf{Multiple ML task scheduling:} Thanks to innovative and diverse on-board sensors, client devices can collect various types of data. Besides, benefiting from enhanced multi-core computing processors, a client can participate in multiple ML model training processes simultaneously\cite{MultiFL-TMC}. Therefore, when performing multiple ML training tasks at one time, it is significant to determine appropriate learning architectures (i.e., FL, SL, HSFL, or HFSL) and resource scheduling schemes according to the diverse learning tasks' requirements and the clients' status, so as to ensure the learning performance. This topic also offers a practical and interesting research direction. \textbf{Privacy-oblivious data sharing:} Although avoiding raw data sharing (e.g., in case of privacy disclosure) represents the basic intention of distributed ML, in many scenarios some clients are nevertheless allowed to offload training data to a trusted client. For example, clients are willing to share photos with their families and friends, rather than with strangers. This privacy-oblivious data sharing process can involve more training data; for example, clients with a limited power supply can offload training data to others to support model training. Besides, data sharing can also realize client dimensionality reduction in HSFL and HFSL, which enables better optimization. To avoid high-risk privacy disclosure, data sharing strategies need to be further studied, such as measuring the relationship between privacy disclosure and the amount of shared data, and establishing a reputation/credit-based evaluation system.
\section{Preliminary Simulations} This section evaluates the performance of our proposed hybrid architectures in comparison with centralized learning (CL), FL, and SL on three datasets, namely, MNIST, Fashion-MNIST, and MedMNIST. Each dataset contains multiple classes of different objects, and can thus be utilized to train classification models. Specifically, MNIST includes grayscale images of handwritten digits from `0' to `9'; Fashion-MNIST includes grayscale images of ten different clothing items; MedMNIST includes grayscale microscope images of eight blood cell types. Besides, four clients are assumed to participate in each architecture. To better evaluate the feasibility and stability of the proposed architectures, we adopt a highly non-IID dataset setting \cite{Non-IID-FL} for the clients and datasets. For example, we partition MNIST into four groups according to label spaces, while ensuring that any two groups have different label sets. Specifically, we assume client-1 has all the training data of classes `0' and `1'; client-2 has all the training data of classes `2' and `3'; client-3 has all the training data of classes `4', `5', and `6'; client-4 has all the training data of classes `7', `8', and `9'. A similar dataset setting is applied for Fashion-MNIST and MedMNIST. Besides, Table I presents the model settings for the different datasets and architectures. Note that a ResNet(32) model is applied for MedMNIST, where ResNet(2) and ResNet(30) mean that the first two hidden layers and the remaining 30 layers are deployed at the client side and the server side, respectively. We run simulations with AMD Ryzen 7-5800H@3201MHz processors as clients and an NVIDIA GeForce RTX3070-8G as the edge server. To better measure the training time, the average uplink/downlink data rate between the edge server and any client is set to 10/50 MB/s, and the average D2D data rate between any two clients is 5 MB/s. In addition, all the clients conduct one local training epoch for each global training round. Fig.
\ref{fig3} shows the test accuracy of the different architectures over training rounds, with four clients, on the three datasets. In general, CL converges rapidly and achieves the highest test accuracy, which embodies the advantages of CL. In contrast, SL converges the slowest, while FL, HSFL, and HFSL achieve relatively close convergence performance, which demonstrates the feasibility of our proposed architectures. Clearly, there exists a performance gap between distributed ML and CL, which grows with the complexity of the dataset and model and is mainly caused by the non-IID data distribution. {Besides, Table I presents a detailed performance comparison of the different learning architectures on the three datasets and models. Notably, MNIST and Fashion-MNIST yield the same performance except for test accuracy, owing to the fact that they share the same data format, data size, and network model. Specifically, compared with FL, HSFL significantly reduces the communication cost (by 50\%) and slightly reduces the training time, while raising the accuracy (e.g., by around 2.2\% on MedMNIST). Similarly, compared with SL, HFSL significantly reduces the training time (by around 75\%) and improves the accuracy (e.g., by about 10\% on Fashion-MNIST), while the communication cost of HFSL is slightly higher than that of SL. The above results demonstrate that our proposed HSFL/HFSL can significantly reduce the communication cost/training time without loss of accuracy, in comparison with conventional FL/SL. Besides, an additional comparison between HSFL and HFSL shows that HSFL generally achieves lower communication cost and training time, while HFSL achieves lower computation cost, and they attain similar test accuracy.
Therefore, the choice between HSFL and HFSL, based on the resource availability of clients and the performance requirements of ML services, calls for further attention.} \section{Conclusion} In this article, we propose two hybrid architectures for distributed ML in heterogeneous wireless networks, namely, HSFL and HFSL, which integrate federated and split learning. We first present the basic architectures and analyze the advantages of HSFL and HFSL as well as their comparative performance. Then, interesting research directions are discussed to point out potential challenges and opportunities for the future implementation of our proposed architectures. Finally, we conduct preliminary simulations to verify the feasibility of our proposed architectures on three datasets under highly non-IID data settings. \bibliographystyle{ieeetr}
\section{The affine root system}\label{chap:affine-root-system} \subsection{Group-theoretic setup}\label{sec:notation} We fix a non-archimedean local field $F$; the completion of its maximal unramified extension will be denoted $L = \breve F$. We write $\mathcal O_F$ and $\mathcal O_L$ for the respective rings of integers. Let $\varepsilon \in F$ be a uniformizer. The Galois group $\Gamma = \Gal(L/F)$ is generated by the Frobenius $\sigma$. Concretely, this means we have one of the following situations: \begin{itemize} \item Mixed characteristic case: $F/\mathbb Q_p$ is a finite extension for some prime $p$. Then $\mathcal O_F$ is the ring of integral elements of $F$. \item Equal characteristic case: $\mathcal O_F$ is a ring of formal power series $\mathbb F_q\doublebrack\varepsilon$, $F = \mathbb F_q\doubleparen\varepsilon$ is its fraction field, $\mathcal O_L = \overline{\mathbb F_q}\doublebrack\varepsilon$ and $L = \overline{\mathbb F_q}\doubleparen\varepsilon$. The Frobenius $\sigma$ acts on $L$ via \begin{align*} \sigma\left(\sum a_n \varepsilon^n\right) = \sum a_n^q \varepsilon^n. \end{align*} \end{itemize} We consider a connected reductive group $G$ over $F$. We construct its associated affine root system and affine Weyl group following Haines-Rapoport \cite{Haines2008} and Tits \cite{Tits1979}. Fix a maximal $L$-split torus $S\subseteq G_L$ and write $T$ for its centralizer in $G_L$, so $T$ is a maximal torus of $G_L$. Write $\mathcal A = \mathcal A(G_L,S)$ for the apartment of the Bruhat-Tits building of $G_L$ associated with $S$. We pick a $\sigma$-invariant alcove $\mathfrak a$ in $\mathcal A$. This yields a $\sigma$-stable Iwahori subgroup $I\subset G(L)$. Denote the normalizer of $T$ in $G_L$ by $N_G(T)$. Then the quotient \begin{align*}\widetilde W = N_G(T)(L) / (T(L)\cap I)\end{align*} is called the \emph{extended affine Weyl group}, and $W = N_G(T)(L)/T(L)$ is the \emph{(finite) Weyl group}. The Weyl group $W$ is naturally a quotient of $\widetilde W$.
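To keep a concrete case in mind (a standard split example, stated here only as an illustration and not needed for the general theory): for $G = \mathrm{GL}_n$ over $F = \mathbb Q_p$, we may take $S = T$ to be the diagonal torus, and the objects above become explicit:

```latex
\begin{align*}
W \cong S_n, \qquad X_\ast(T) \cong \mathbb Z^n, \qquad
\widetilde W = W \ltimes X_\ast(T) \cong S_n \ltimes \mathbb Z^n.
\end{align*}
```

Here one may take $\varepsilon = p$, and the Iwahori subgroup $I$ can be chosen as the preimage of the upper triangular matrices under the reduction map $\mathrm{GL}_n(\mathcal O_L)\rightarrow \mathrm{GL}_n(\overline{\mathbb F}_p)$.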
The affine roots as constructed in \cite[Section~1.6]{Tits1979} are denoted $\Phi_{\mathrm{af}}$. Each of these roots $a\in \Phi_{\mathrm{af}}$ defines an affine function $a:\mathcal A\rightarrow\mathbb R$. The vector part of this function is denoted $\cl(a) \in V^\ast$, where $V = X_\ast(S)\otimes\mathbb R = X_\ast(T)_{\Gamma_0}\otimes \mathbb R$. Here, $\Gamma_0 = \Gal(\overline L/L)$ is the absolute Galois group of $L$, i.e.\ the inertia subgroup of the absolute Galois group $\Gal(\overline F/F)$. The set of \emph{(finite) roots} is\footnote{This is different from the root system that \cite{Tits1979} and \cite{Haines2008} denote by $\Phi$; it coincides with the root system called $\Sigma$ in \cite{Haines2008}.} $\Phi := \cl(\Phi_{\mathrm{af}})$. The affine roots in $\Phi_{\mathrm{af}}$ whose associated hyperplane is adjacent to our fixed alcove $\mathfrak a$ are called \emph{simple affine roots} and denoted $\Delta_{\mathrm{af}}\subseteq \Phi_{\mathrm{af}}$. Writing $W_{\mathrm{af}}$ for the extended affine Weyl group of the simply connected cover of the derived group of $G$, we get a natural $\sigma$-equivariant short exact sequence (cf.\ \cite[Lemma~14]{Haines2008}) \begin{align*} 1\rightarrow W_{\mathrm{af}}\rightarrow\widetilde W\rightarrow \pi_1(G)_{\Gamma_0}\rightarrow 1. \end{align*} Here, $\pi_1(G) := X_\ast(T)/\mathbb Z\Phi^\vee$ denotes the Borovoi fundamental group. For each $x\in \widetilde W$, we denote by $\ell(x)\in \mathbb Z_{\geq 0}$ the length of a shortest alcove path from $\mathfrak a$ to $x\mathfrak a$. The set of elements of length zero is denoted $\Omega$. The above short exact sequence yields an isomorphism of $\Omega$ with $\pi_1(G)_{\Gamma_0}$, realizing $\widetilde W$ as a semidirect product $\widetilde W = \Omega\ltimes W_{\mathrm{af}}$. Each affine root $a\in \Phi_{\mathrm{af}}$ defines an affine reflection $r_a$ on $\mathcal A$.
The group generated by these reflections is naturally isomorphic to $W_{\mathrm{af}}$ (cf.\ \cite{Haines2008}), so by abuse of notation, we also write $r_a\in W_{\mathrm{af}}$ for the corresponding element. We define $S_{\mathrm{af}} := \{r_a\mid a\in \Delta_{\mathrm{af}}\}$, called the set of \emph{simple affine reflections}. The pair $(W_{\mathrm{af}}, S_{\mathrm{af}})$ is a Coxeter group with length function $\ell$ as defined above. We pick a special vertex $\mathfrak x\in \mathcal A$ that is adjacent to $\mathfrak a$. We identify $\mathcal A$ with $V$ via $\mathfrak x\mapsto 0$. This allows us to decompose $\Phi_{\mathrm{af}} = \Phi\times\mathbb Z$, where $a = (\alpha,k)$ corresponds to the function \begin{align*} V\rightarrow \mathbb R, v\mapsto \alpha(v)+k. \end{align*} From \cite[Proposition~13]{Haines2008}, we moreover get semi-direct product decompositions $\widetilde W = W\ltimes X_\ast(T)_{\Gamma_0}$ and $W_{\mathrm{af}} = W\ltimes \mathbb Z\Phi^\vee$. Using this decomposition, we write elements $x\in \widetilde W$ as $x = w\varepsilon^\mu$ with $w\in W$ and $\mu\in X_\ast(T)_{\Gamma_0}$. For $a = (\alpha,k)\in \Phi_{\mathrm{af}}$, we have $r_a = s_\alpha \varepsilon^{k\alpha^\vee}\in W_{\mathrm{af}}$, where $s_\alpha\in W$ is the reflection associated with $\alpha$. The natural action of $\widetilde W$ on $\Phi_{\mathrm{af}}$ can be expressed as \begin{align*} (w\varepsilon^\mu)(\alpha,k) = (w\alpha,k-\langle\mu,\alpha\rangle). \end{align*} We define the \emph{dominant chamber} $C\subseteq V$ to be the Weyl chamber containing our fixed alcove $\mathfrak a$. This gives a Borel subgroup $B\subseteq G$, and corresponding sets of positive/negative/simple roots $\Phi^+, \Phi^-, \Delta\subseteq \Phi$. By abuse of notation, we denote by $\Phi^+$ also the indicator function of the set of positive roots, i.e. 
\begin{align*} \Phi^+:\Phi\rightarrow\{0,1\},\quad \alpha\mapsto\begin{cases}1,&\alpha\in \Phi^+,\\ 0,&\alpha\in \Phi^-.\end{cases} \end{align*} The following easy facts will be used often, usually without further reference: \begin{lemma}\label{lem:phiPlusFacts} Let $\alpha\in \Phi$. \begin{enumerate}[(a)] \item $\Phi^+(\alpha) + \Phi^+(-\alpha)=1$. \item If $\beta\in \Phi$ and $k,\ell\geq 1$ are such that $k\alpha+\ell \beta\in \Phi$, we have \begin{align*} &0\leq \Phi^+(\alpha)+\Phi^+(\beta)-\Phi^+(k\alpha+\ell\beta)\leq 1.\pushQED{\qed}\qedhere\popQED \end{align*} \end{enumerate} \end{lemma} The sets of positive and negative affine roots can be defined as \begin{align*} \Phi_{\mathrm{af}}^+:=&(\Phi^+\times \mathbb Z_{\geq 0})\sqcup (\Phi^-\times \mathbb Z_{\geq 1}) = \{(\alpha,k)\in \Phi_{\mathrm{af}}\mid k\geq \Phi^+(-\alpha)\}, \\\Phi_{\mathrm{af}}^- :=&-\Phi_{\mathrm{af}}^+ = \Phi_{\mathrm{af}}\setminus \Phi_{\mathrm{af}}^+= \{(\alpha,k)\in \Phi_{\mathrm{af}}\mid k< \Phi^+(-\alpha)\}. \end{align*} One checks that $\Phi_{\mathrm{af}}^+$ consists precisely of the affine roots that are sums of simple affine roots. Decompose $\Phi = \Phi_1\sqcup\cdots\sqcup \Phi_r$ as a direct sum of irreducible root systems. Each irreducible factor contains a uniquely determined longest root $\theta_i\in \Phi_i^+$. Now the set of simple affine roots is \begin{align*} \Delta_{\mathrm{af}} = \{(\alpha,0)\mid \alpha\in \Delta\}\sqcup\{(-\theta_i,1)\mid i=1,\dotsc,r\}\subset \Phi_{\mathrm{af}}^+. \end{align*} The \emph{Bruhat order} on $W_{\mathrm{af}}$ is the usual Coxeter-theoretic notion. The Bruhat order on $\widetilde W$ can be defined as $\omega x\leq \omega'x'$ iff $\omega = \omega'$ and $x\leq x'$ for $\omega,\omega'\in \Omega$ and $x,x'\in W_{\mathrm{af}}$. We call an element $\mu\in X_\ast(T)_{\Gamma_0}\otimes \mathbb Q$ \emph{dominant} if $\langle \mu,\alpha\rangle\geq 0$ for all $\alpha\in \Phi^+$.
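As a small computational aside (our own encoding, not part of the formal development), the following sketch verifies Lemma~\ref{lem:phiPlusFacts} and the description of $\Phi_{\mathrm{af}}^+$ by brute force for the root system of type $A_2$, realized inside $\mathbb Z^3$:

```python
from itertools import product

# Type A_2 realized in Z^3: roots are e_i - e_j (i != j), positive iff i < j.
def e(i, j):
    v = [0, 0, 0]; v[i] += 1; v[j] -= 1
    return tuple(v)

roots = [e(i, j) for i in range(3) for j in range(3) if i != j]
positive = {e(i, j) for i in range(3) for j in range(3) if i < j}
neg = lambda a: tuple(-t for t in a)
Phi_plus = lambda a: 1 if a in positive else 0      # the indicator function Phi^+

def comb(a, b, k, l):                               # the root k*alpha + l*beta
    return tuple(k * x + l * y for x, y in zip(a, b))

# Lemma (a): Phi^+(alpha) + Phi^+(-alpha) = 1
assert all(Phi_plus(a) + Phi_plus(neg(a)) == 1 for a in roots)

# Lemma (b): 0 <= Phi^+(alpha) + Phi^+(beta) - Phi^+(k*alpha + l*beta) <= 1
for a, b, k, l in product(roots, roots, range(1, 3), range(1, 3)):
    if comb(a, b, k, l) in roots:
        assert 0 <= Phi_plus(a) + Phi_plus(b) - Phi_plus(comb(a, b, k, l)) <= 1

# Phi_af^+ = (Phi^+ x Z_{>=0}) u (Phi^- x Z_{>=1}) = {(alpha,k) : k >= Phi^+(-alpha)}
def aff_pos(a, k):
    return k >= Phi_plus(neg(a))

for a, k in product(roots, range(-3, 4)):
    expected = (a in positive and k >= 0) or (a not in positive and k >= 1)
    assert aff_pos(a, k) == expected
```

The same brute-force check can be run for any other small rank by replacing the list of roots accordingly.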
For elements $\mu,\mu'$ in $X_\ast(T)_{\Gamma_0}\otimes \mathbb Q$ (resp.\ $X_\ast(T)_{\Gamma_0}$ or $X_\ast(T)_{\Gamma}$), we write $\mu\leq \mu'$ if the difference $\mu'-\mu$ is a $\mathbb Q_{\geq 0}$-linear combination of positive coroots. The induced action of $\Gamma_0$ on $\mathcal A, \Phi_{\mathrm{af}}, \widetilde W, W_{\mathrm{af}}$ and $W$ is trivial by construction. The Frobenius action on $\mathcal A, X_\ast(T)_{\Gamma_0}, \Phi_{\mathrm{af}}$ and $\Phi$ will be denoted by $\sigma$. Note that $\sigma$ preserves the set of simple affine roots. The Frobenius action on $W, \widetilde W$ and $W_{\mathrm{af}}$ will be denoted by $x\mapsto \prescript\sigma{} x$. Then the action of $\prescript\sigma{} x$ on $X_\ast(T)_{\Gamma_0}$ is the same as the composed action $\sigma\circ x\circ\sigma^{-1}$ ($x\in W$ or $\widetilde W$). For the most part, we consider the case where $G$ is quasi-split over $F$. This is a convenient assumption that lightens the notational burden significantly. In Section~\ref{sec:gnpArbitraryGroups}, we return to the more general setting of connected reductive $G$ and generalize our main results via a reduction to the quasi-split case. If $G$ is quasi-split, we may and do choose the vertex $\mathfrak{x}$ to be $\sigma$-invariant. With this choice, the decompositions $\Phi_{\mathrm{af}} = \Phi\times\mathbb Z$ and $\widetilde W = W\ltimes X_\ast(T)_{\Gamma_0}$ are Frobenius equivariant. This means \begin{align*} \forall(\alpha,k)\in \Phi_{\mathrm{af}}:~&\sigma(\alpha,k) = (\sigma(\alpha),k),\\ \forall w\varepsilon^\mu\in \widetilde W:~&\prescript\sigma{}{}\!\left(w\varepsilon^\mu\right) = (\prescript\sigma{} w)\varepsilon^{\sigma(\mu)}. \end{align*} In particular, $\sigma$ preserves the set of simple roots $\Delta$. The case where $G$ is unramified has often been studied in the literature. In this case, $S$ is a maximal torus of $G_L$, so $S=T$ and $\Phi$ is the usual root system of $(G,T)$. 
Each root system $\Phi$ together with a Frobenius action comes from such an unramified group. However, care has to be taken when using results proved for unramified groups in the quasi-split setting, as $X_\ast(T)_{\Gamma_0}$ may have a torsion part if $G$ is not unramified. In particular, the map $X_\ast(T)_{\Gamma_0}\rightarrow X_\ast(T)_{\Gamma_0}\otimes\mathbb R=V\cong \mathcal A$ might fail to be injective. \subsection{Root functionals}\label{sec:root-functionals} For every coweight $\mu$, there exists a uniquely determined dominant coweight in the $W$-orbit of $\mu$. In other words, there exists some $w\in W$ such that $\langle \mu,w\alpha\rangle\geq 0$ for all $\alpha\in \Phi^+$. In this section, we introduce and study certain functions $\varphi:\Phi\rightarrow\mathbb Z$ which are more general than coweights, but still enjoy this property. \begin{definition}\label{def:rootFunctionals} \begin{enumerate}[(a)] \item A \emph{root functional} is a function $\varphi:\Phi\rightarrow\mathbb Z$ satisfying the following two conditions for all $\alpha,\beta\in \Phi$: \begin{enumerate}[(1)] \item $\abs{\varphi(\alpha)+\varphi(-\alpha)}\leq 1$. \item If $\alpha+\beta\in \Phi$, then \begin{align*} \abs{\varphi(\alpha+\beta)-\varphi(\alpha)-\varphi(\beta)}\leq 1. \end{align*} \end{enumerate} \item If $\varphi$ is a root functional, the \emph{dual root functional} $\varphi^\vee$ is defined for $\alpha\in \Phi$ by $\varphi^\vee(\alpha) = -\varphi(-\alpha)$. \item Let $v\in W$. The set of \emph{inversions} of $v$ with respect to $\varphi$ is \begin{align*} \inv_\varphi(v)=\{\alpha\in \Phi^+\mid \varphi(v\alpha)<0\}\cup \{\alpha\in \Phi^-\mid \varphi(v\alpha)>0\}. \end{align*} We call $v$ \emph{positive} for $\varphi$ if $\inv_\varphi(v)=\emptyset$. If $\alpha\in \inv_\varphi(v)$, we call $vs_\alpha\in W$ an \emph{adjustment} of $v$ for $\varphi$.
\end{enumerate} \end{definition} \begin{lemma}\label{lem:rootFunctionalAdjustment} Let $\varphi : \Phi\rightarrow\mathbb Z$ be a root functional and $v\in W$ be \emph{not} positive for $\varphi$. If $v'$ is an adjustment of $v$ for $\varphi$, then \begin{align*} \#\inv_\varphi(v')<\#\inv_\varphi(v). \end{align*} \end{lemma} \begin{proof} Let $\alpha\in \inv_\varphi(v)$ with $v' = vs_\alpha$. Up to replacing $(\alpha,\varphi)$ by $(-\alpha,\varphi^\vee)$, we may assume $\alpha\in \Phi^+$, so $\varphi(v\alpha)<0$. Define \begin{align*} I := \{\beta\in \Phi^+\setminus\{\alpha\}\mid s_\alpha(\beta)\in \Phi^-\}. \end{align*} We write \begin{align*} \#\inv_{\varphi}(v') =& \#\{\beta\in \Phi^+\setminus I\mid \varphi(v'\beta)<0\} + \#\{\beta \in I\mid \varphi(v'\beta)<0\} \\&+\#\{\beta\in \Phi^-\setminus (-I)\mid \varphi(v'\beta)>0\} + \#\{\beta \in -I\mid \varphi(v'\beta)>0\} \end{align*} Note that $\varphi(v'\alpha) = \varphi(-v\alpha) \geq -1-\varphi(v\alpha)\geq 0$ and $s_\alpha(\Phi^+\setminus (I\cup\{\alpha\})) = \Phi^+\setminus(I\cup\{\alpha\})$. Thus \begin{align*} \#\{\beta\in \Phi^+\setminus I\mid \varphi(v'\beta)<0\} =& \#\{\beta\in \Phi^+\setminus (I\cup\{\alpha\})\mid \varphi(v s_\alpha\beta)<0\} \\=&\#\{\beta\in \Phi^+\setminus (I\cup\{\alpha\})\mid \varphi(v \beta)<0\} \\=&\#\{\beta\in \Phi^+\setminus I\mid \varphi(v \beta)<0\}-1. \end{align*} Similarly, we have \begin{align*} \#\{\beta\in \Phi^-\setminus (-I)\mid \varphi(v'\beta)>0\}=&\#\{\beta\in \Phi^-\setminus(-I\cup\{-\alpha\})\mid \varphi(v'\beta)>0\} \\=&\#\{\beta\in \Phi^-\setminus(-I\cup\{-\alpha\})\mid \varphi(v\beta)>0\} \\\leq &\#\{\beta\in \Phi^-\setminus (-I)\mid \varphi(v \beta)>0\}. 
\end{align*} Therefore, it suffices to prove the following estimates: \begin{align*} &\#\{\beta \in I\mid \varphi(v'\beta)<0\} \leq \#\{\beta \in I\mid \varphi(v\beta)<0\},\tag{1} \\&\#\{\beta \in -I\mid \varphi(v'\beta)>0\} \leq \#\{\beta \in -I\mid \varphi(v\beta)>0\}.\tag{2} \end{align*} We only prove (1), as the proof of (2) is similar. In order to prove (1), we consider the involution $\beta\mapsto -s_\alpha(\beta)$, which acts freely on $I$. Let $o = \{\beta,-s_\alpha(\beta)\}\subseteq I$ be an orbit for this involution. It suffices to show \begin{align*} \#\{\beta \in o\mid \varphi(v'\beta)<0\} \leq \#\{\beta \in o\mid \varphi(v\beta)<0\}.\tag{$\ast$} \end{align*} In order to prove this, we calculate \begin{align*} \#\{\beta \in o\mid \varphi(v'\beta)<0\} =& \#\{\beta \in -s_{\alpha}(o)\mid \varphi(v'\beta)<0\} \\=&\#\{\beta \in o\mid \varphi(-v\beta)<0\} \\\leq&\#\{\beta\in o\mid \varphi(v\beta)\geq 0\} \\=&2-\#\{\beta \in o\mid \varphi(v\beta)<0\}. \end{align*} If $\#\{\beta \in o\mid \varphi(v\beta)<0\}\geq 1$, we immediately get $(\ast)$. Now suppose that $\varphi(v\beta)\geq 0$ for all $\beta\in o$. Fix an element $\beta\in o$ and write \begin{align*}\beta' := -s_\alpha(\beta) = \langle \alpha^\vee,\beta\rangle \alpha-\beta. \end{align*} Note that $k\alpha-\beta\in \Phi$ for $k=0,\dotsc,\langle \alpha^\vee,\beta\rangle$. Thus \begin{align*} &\abs{\varphi(v\beta') - \langle \alpha^\vee,\beta\rangle \varphi(v\alpha) - \varphi(-v\beta)} \\\leq&\sum_{k=1}^{\langle \alpha^\vee,\beta\rangle} \abs{\varphi(v(k\alpha-\beta)) - \varphi(v\alpha) - \varphi(v((k-1)\alpha-\beta))} \\\leq&\langle \alpha^\vee,\beta\rangle. \end{align*} In particular, we get \begin{align*} \varphi(v\beta') - \varphi(-v\beta) \leq \langle \alpha^\vee,\beta\rangle(1+\varphi(v\alpha)) \leq 0. \end{align*} Thus $\varphi(-v\beta)\geq \varphi(v\beta')\geq 0$. Since $\beta\in o$ was arbitrary, we get $\varphi(v'\beta) = \varphi(-v(-s_\alpha\beta))\geq 0$ for all $\beta\in o$.
This proves $(\ast)$, which finishes the proof of the lemma. \end{proof} \begin{corollary}\label{cor:rootFunctionalAdjustments} If $\varphi: \Phi\rightarrow\mathbb Z$ is a root functional and $v\in W$ is any element, there is a sequence \begin{align*} v = v_1,\dotsc,v_k\in W \end{align*} such that $v_{i+1}$ is an adjustment for $v_i$ for $\varphi$ (where $i=1,\dotsc,k-1$), and $v_k$ is positive for $\varphi$. In particular, positive elements exist for each root functional.\pushQED{\qed}\qedhere\popQED \end{corollary} The most important root functional for us will be the length functional associated to an element $x\in \widetilde W$, which we introduce now. \begin{definition} Let $x = w\varepsilon^\mu\in \widetilde W$ and $\alpha\in \Phi$. We define \begin{align*} \ell(x,\alpha) := \langle \mu,\alpha\rangle +\Phi^+(\alpha) - \Phi^+(w\alpha). \end{align*} \end{definition} The absolute value $\abs{\ell(x,\alpha)}$ can be understood as counting affine root hyperplanes between the base alcove and $x\mathfrak a$, while the sign accounts for the orientations (cf.\ Lemma~\ref{lem:lengthFunctionalAsCountingAffineRoots}). \begin{lemma} Let $x=w\varepsilon^\mu\in \widetilde W$. Then $\ell(x,\cdot)$ is a root functional. For each $\alpha\in \Phi$, we have \begin{align*} \ell(x,\alpha) + \ell(x,-\alpha)=0. \end{align*} \end{lemma} \begin{proof} Let $\alpha,\beta\in \Phi$. \begin{enumerate}[(1)] \item We have \begin{align*} &\ell(x,\alpha) + \ell(x,-\alpha) \\=& \langle \mu,\alpha\rangle +\Phi^+(\alpha) - \Phi^+(w\alpha) + \langle \mu,-\alpha\rangle + \Phi^+(-\alpha) - \Phi^+(-w\alpha) \\=&\Phi^+(\alpha) + \Phi^+(-\alpha) - (\Phi^+(w\alpha) + \Phi^+(-w\alpha)) = 1-1=0. \end{align*} \item Suppose $\alpha+\beta\in \Phi$. We know that \begin{align*} 0\leq \Phi^+(\alpha) + \Phi^+(\beta)-\Phi^+(\alpha+\beta) \leq 1. 
\end{align*} Thus, we obtain \begin{align*} &\abs{\ell(x,\alpha+\beta)-\ell(x,\alpha)-\ell(x,\beta)} \\=&\lvert\underbrace{\Phi^+(\alpha+\beta)-\Phi^+(\alpha)-\Phi^+(\beta)}_{\in \{-1,0\}} \underbrace{- \Phi^+(w(\alpha+\beta)) + \Phi^+(w\alpha)+\Phi^+(w\beta)}_{\in \{0,1\}}\rvert\leq 1. \end{align*} \end{enumerate} This finishes the proof. \end{proof} \begin{definition}Let $x\in \widetilde W$ and $v\in W$. We say that $v$ is \emph{length positive for $x$} and write $v\in \LP(x)$ if $v$ is positive for the length functional $\ell(x,\cdot)$. Explicitly, $v$ is length positive for $x$ if $\ell(x,v\alpha)\geq 0$ for all $\alpha\in \Phi^+$. \end{definition} \begin{example}\label{ex:usualLPelement} Let $x = w\varepsilon^\mu\in \widetilde W$. The $W$-orbit of $\mu$ contains a unique dominant element of $X_\ast(T)_{\Gamma_0}$, and there is a unique $v\in W$ of minimal length such that $v^{-1}\mu$ is dominant. The element $v$ is uniquely determined by the following condition for each positive root $\alpha$: \begin{align*} \langle v^{-1}\mu,\alpha\rangle \geq \Phi^+(-v\alpha). \end{align*} It follows that \begin{align*} \ell(x,v\alpha) = \langle v^{-1}\mu,\alpha\rangle - \Phi^+(-v\alpha) + \Phi^+(-wv\alpha)\geq 0. \end{align*} We see that this particular $v$ is length positive. This gives an alternative proof that length positive elements always exist. Recall the definition of the virtual dimension for $x\in \widetilde W$ and $b\in B(G)$. \begin{align*} d_x(b) = \frac 12\left(\ell(x) +\ell(\eta_\sigma(x))-\langle\nu(b),2\rho\rangle-\defect(b)\right). \end{align*} Here, $2\rho\in X_\ast(T)^{\Gamma}$ denotes the sum of positive roots. With $v\in W$ constructed as above, we have \begin{align*} \eta_\sigma(x) = \prescript{\sigma^{-1}}{}(v) ^{-1}wv\in W. \end{align*} Because of the importance of the virtual dimension, the specific $v$ constructed in this example is of particular interest. 
However, the construction of this $v\in W$ is not quite natural in terms of $x\in \widetilde W$, e.g.\ in view of certain automorphisms of $\widetilde W$ that preserve dimensions of affine Deligne-Lusztig varieties. Studying the group ${\mathrm{GL}}_3$ for example, there are three simple affine reflections $s_0, s_1, s_2\in \widetilde W$. Each of these satisfies $\ell(s_i)=\dim X_{s_i}(1)=1$. The two simple affine reflections $s_1$ and $s_2$ that come from $W$ also satisfy $\ell(\eta_\sigma(s_1)) = \ell(\eta_\sigma(s_2))=1$, so that \begin{align*} d_{s_i}([1]_\sigma) = \frac 12\left(1+1-0-0\right)=1 = \dim X_{s_i}(1),\qquad i=1,2. \end{align*} For the remaining affine simple reflection $s_0$, we have $\ell(\eta_\sigma(s_0))=3$. Thus $d_{s_0}(1) = 2 > \dim X_{s_0}(1)$. We see that $s_1, s_2$ satisfy $\dim X_{s_i}(1)= d_{s_i}(1)$ (so both are cordial), whereas $s_0$ does not have this property. This is problematic insofar as there exists an automorphism of the affine Dynkin diagram sending $s_1$ to $s_0$, hence naturally $X_{s_0}(1) \cong X_{s_1}(1)$. This natural isomorphism is not reflected in the corresponding virtual dimensions, which comes precisely from the term $\ell(\eta_\sigma(x))$. Searching for a replacement of this specific $v$ that is invariant under such automorphisms, we found the notion of length positive elements. The set of length positive elements is well-behaved under such automorphisms, as it allows the following root-theoretic interpretation. \end{example} \begin{lemma}[{cf. \cite[Lemma~3.12]{Lenart2015}}]\label{lem:lengthFunctionalAsCountingAffineRoots} Let $x = w\varepsilon^\mu \in \widetilde W$ and $\alpha\in \Phi$. Then \begin{align*} \#\{k\in\mathbb Z\mid (\alpha,k)\in \Phi^+_{\mathrm{af}}\text{ and }x(\alpha,k)\in \Phi^-_{\mathrm{af}}\} = \max(0,\ell(x,\alpha)). 
\end{align*} \end{lemma} \begin{proof} We have \begin{align*} &\{(\alpha,k)\in \Phi_{\mathrm{af}}^+ \mid x(\alpha,k) \in \Phi_{\mathrm{af}}^-\} \\=& \{(\alpha,k) \in \Phi_{\mathrm{af}} \mid k\geq \Phi^+(-\alpha)\text{ and }(w\alpha,k-\langle \mu,\alpha\rangle)\in \Phi_{\mathrm{af}}^-\} \\=&\{(\alpha,k) \in \Phi_{\mathrm{af}} \mid k\geq \Phi^+(-\alpha)\text{ and }k-\langle \mu,\alpha\rangle \leq -\Phi^+(w\alpha)\} \\\cong&\{k\in \mathbb Z\mid \Phi^+(-\alpha)\leq k \leq \langle \mu,\alpha\rangle -\Phi^+(w\alpha)\}. \end{align*} The cardinality of this set is given by \begin{align*} &\max(0, \langle \mu,\alpha\rangle +1-\Phi^+(w\alpha)-\Phi^+(-\alpha)) = \max(0,\ell(x,\alpha)).\qedhere \end{align*} \end{proof} \begin{corollary}[{\cite[Proposition~1.23]{Iwahori1965}}]\label{cor:IwahoriMatsumoto} Let $x = w\varepsilon^\mu\in \widetilde W$. Then \begin{align*} \ell(x) = \sum_{\alpha \in \Phi} \max(0,\ell(x,\alpha)). \end{align*} \end{corollary} \begin{proof} Use that \begin{align*} \ell(x) = \#\{(\alpha,k) \in \Phi_{\mathrm{af}}^+\mid x(\alpha,k) \in \Phi_{\mathrm{af}}^-\} \end{align*} and decompose the latter set depending on the $\alpha\in \Phi$. \end{proof} \begin{corollary}\label{cor:positiveLengthFormula} Let $x = w\varepsilon^\mu\in \widetilde W$ and $v\in W$. Then \begin{align*} \ell(x)\geq \langle v^{-1}\mu,2\rho\rangle - \ell(v) + \ell(wv). \end{align*} Equality holds if and only if $v$ is length positive for $x$. \end{corollary} \begin{proof} We calculate \begin{align*} \ell(x)\geq&\sum_{\alpha\in \Phi^+}\ell(x,v\alpha) \\=&\sum_{\alpha\in \Phi^+}\left(\langle \mu,v\alpha\rangle - \Phi^+(-v\alpha) + \Phi^+(-wv\alpha)\right) \\=&\langle v^{-1}\mu,2\rho\rangle - \ell(v) + \ell(wv).\qedhere \end{align*} \end{proof} \begin{lemma}\label{lem:lengthFunctionalForProducts} Let $x = w\varepsilon^\mu, x' = w'\varepsilon^{\mu'}\in \widetilde W$ and $\alpha \in \Phi$.
\begin{enumerate}[(a)] \item $\ell(xx',\alpha) = \ell(x,w'\alpha) + \ell(x',\alpha).$ \item $\ell(x^{-1},\alpha) = -\ell(x,w^{-1}\alpha)$ and $\LP(x^{-1}) = w \LP(x) w_0$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(a)] \item Note that $xx' = ww'\varepsilon^{(w')^{-1}\mu + \mu'}$ such that \begin{align*} &\ell(x,w'\alpha) + \ell(x',\alpha) \\=& \,\langle \mu,w'\alpha\rangle + \langle \mu',\alpha\rangle - \Phi^+(ww'\alpha) + \Phi^+(w'\alpha) - \Phi^+(w'\alpha) + \Phi^+(\alpha) \\=&\,\langle (w')^{-1}\mu + \mu',\alpha\rangle -\Phi^+(ww'\alpha) + \Phi^+(\alpha)=\ell(xx',\alpha). \end{align*} \item By (a), we have \begin{align*} 0 = \ell(1,\alpha) = \ell(x x^{-1},\alpha)= \ell(x,w^{-1}\alpha) + \ell(x^{-1},\alpha). \end{align*} Now observe that for $v\in W$, \begin{align*} v\in \LP(x^{-1})\iff&\forall \beta\in \Phi^+:~\ell(x^{-1},v\beta)\geq 0 \\\iff&\forall \beta\in \Phi^+:~\ell(x^{-1},v(-w_0\beta))\geq 0 \\\iff&\forall\beta\in \Phi^+:~\ell(x,w^{-1} v w_0\beta)\geq 0 \iff v\in w\LP(x) w_0. \qedhere \end{align*} \end{enumerate} \end{proof} \begin{lemma}\label{lem:lengthAdditivity} Let $x = w\varepsilon^\mu, x' = w'\varepsilon^{\mu'}\in \widetilde W$. The following are equivalent: \begin{enumerate}[(i)] \item $\ell(xx') = \ell(x) + \ell(x')$. \item For each root $\alpha\in \Phi$, the values $\ell(x,w'\alpha)$ and $\ell(x',\alpha)\in \mathbb Z$ never have opposite signs, i.e.\ \begin{align*} \ell(x,w'\alpha) \cdot \ell(x',\alpha)\geq 0. \end{align*} \item $\left((w')^{-1} \LP(x)\right)\cap \LP(x')\neq \emptyset$. \end{enumerate} In this case, $\LP(xx') = \left((w')^{-1} \LP(x)\right)\cap \LP(x')$.
\end{lemma} \begin{proof} (i) $\iff$ (ii): By Corollary~\ref{cor:IwahoriMatsumoto} and the equation $\ell(x,\alpha) = -\ell(x,-\alpha)$, we get \begin{align*} \ell(xx') =& \sum_{\alpha\in \Phi^+}\abs{\ell(xx',\alpha)} \\\underset{\text{L\ref{lem:lengthFunctionalForProducts}(a)}}=&\sum_{\alpha\in \Phi^+}\abs{\ell(x,w'\alpha) + \ell(x',\alpha)}\\\underset{(\ast)}\leq&\sum_{\alpha\in \Phi^+}\left(\abs{\ell(x,w'\alpha)}+\abs{\ell(x',\alpha)}\right) \\=&\ell(x) + \ell(x'). \end{align*} Equality holds at $(\ast)$ iff the values $\ell(x,w'\alpha)$ and $\ell(x',\alpha)$ never have opposite signs. We see that (i) $\iff$ (ii). (iii) $\Rightarrow$ (ii): Pick $v\in \left((w')^{-1} \LP(x)\right)\cap \LP(x')$. If $\alpha\in \Phi^+$, then both $\ell(x,w'v\alpha)$ and $\ell(x',v\alpha)$ must be non-negative by length positivity. If conversely $\alpha\in \Phi^-$, then both $\ell(x,w'v\alpha)$ and $\ell(x',v\alpha)$ must be non-positive. We see that (ii) holds true. Finally, let us assume that (ii) holds. It suffices to show that \begin{align*} \LP(xx') = \left((w')^{-1} \LP(x)\right)\cap \LP(x'), \end{align*} as (iii) follows from this identity. Now for $v\in W$, we have \begin{align*} v\in \LP(xx')\iff& \forall \alpha\in \Phi^+:~\ell(xx',v\alpha)\geq 0\\ \underset{\text{L\ref{lem:lengthFunctionalForProducts}(a)}}\iff&\forall \alpha\in \Phi^+:~\ell(x,w'v\alpha) + \ell(x',v\alpha)\geq 0\\ \underset{(ii)}\iff&\forall \alpha\in \Phi^+:~\ell(x,w'v\alpha)\geq 0\text{ and }\ell(x',v\alpha)\geq 0\\ \iff&v\in \left((w')^{-1} \LP(x)\right)\cap \LP(x').\qedhere \end{align*} \end{proof} Given one element $v\in \LP(x)$, one can use it to iteratively enumerate all length positive elements for $x$. \begin{lemma}\label{lem:LPEnumeration} Let $x =w\varepsilon^\mu\in \widetilde W$ and $v\in \LP(x)$. \begin{enumerate}[(a)] \item For every simple root $\alpha\in \Delta$, we have \begin{align*} \ell(x,v\alpha)=0\iff vs_\alpha\in \LP(x).
\end{align*} \item If the root $\alpha\in \Phi^+$ satisfies $\ell(x,v\alpha)=0$, then there also exists a simple root with this property. \item Consider the undirected graph $G_{\LP(x)}$ whose vertices are given by $\LP(x)$ and whose edges are of the form $(v,vs_\alpha)$ for $\alpha\in \Delta$ and $v,vs_\alpha\in \LP(x)$. Then $G_{\LP(x)}$ is connected. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(a)] \item If $vs_\alpha\in \LP(x)$, then $\ell(x,v\alpha)$ and $\ell(x,vs_\alpha \alpha) = -\ell(x,v\alpha)$ must both be non-negative. This is only possible if $\ell(x,v\alpha)=0$. If $\ell(x,v\alpha)=0$, one checks that $\ell(x,v\beta)\geq 0$ for all $\beta\in \Phi^+\cup\{-\alpha\}$. The latter set is preserved by $s_\alpha$, hence $vs_\alpha\in \LP(x)$. \item Suppose $\alpha \in \Phi^+\setminus\Delta$ satisfies $\ell(x,v\alpha)=0$. We can write $\alpha=\beta+\gamma$ for positive roots $\beta,\gamma\in \Phi^+$. By length positivity, $\ell(x,v\beta),\ell(x,v\gamma)\geq 0$. If both of these values are $\geq 1$, we get $\ell(x,v\alpha)\geq 1$ by the root functional property. Hence $\ell(x,v\beta)=0$ or $\ell(x,v\gamma)=0$. We can iterate this argument. \item Let $C\subseteq \LP(x)$ denote the connected component that contains $v$. Among all $v'\in C$, pick one such that $\ell(wv')$ is minimal. We claim that \begin{align*}\forall\alpha\in \Delta:~\langle \mu,v'\alpha\rangle+\Phi^+(v'\alpha)\geq 1.\tag{$\ast$}\end{align*} \begin{itemize} \item If $\ell(x,v'\alpha)=0$, then $v's_\alpha\in C$. The minimality of $\ell(wv')$ ensures that $\ell(wv's_\alpha)\geq \ell(wv')$, i.e.\ $wv'\alpha\in \Phi^+$. By definition, $\ell(x,v'\alpha)=0$ implies $\langle \mu,v'\alpha\rangle + \Phi^+(v'\alpha)=1$. \item If $\ell(x,v'\alpha)\geq 1$, we get \begin{align*} \langle \mu,v'\alpha\rangle+\Phi^+(v'\alpha)\geq \ell(x,v'\alpha)\geq 1.
\end{align*} \end{itemize} Let us re-read condition $(\ast)$: Not only is $(v')^{-1}\mu$ dominant, but we also have $v'\alpha\in \Phi^+$ for all $\alpha\in \Delta$ with $\langle (v')^{-1}\mu,\alpha\rangle=0$. This describes exactly the length positive element constructed in Example~\ref{ex:usualLPelement}. To summarize: No matter which connected component of $G_{\LP(x)}$ we consider, it will always contain the one length positive element from Example~\ref{ex:usualLPelement}. Hence $G_{\LP(x)}$ is connected.\qedhere \end{enumerate} \end{proof} We obtain the following description of the shrunken Weyl chambers: \begin{proposition} For $x \in \widetilde W$, the following are equivalent: \begin{enumerate}[(a)] \item $x$ lies in the lowest two-sided Kazhdan-Lusztig cell of $\widetilde W$. \item For all $\alpha\in \Phi$, $\ell(x,\alpha)\neq 0$. \item The set $\LP(x)$ contains only one element. \end{enumerate} In this case, we say that $x$ lies in a \emph{shrunken Weyl chamber}. \end{proposition} \begin{proof} The equivalence (a) $\iff$ (b) is well known, cf.\ \cite[Section~3.1]{He2021c}. The equivalence (b) $\iff$ (c) follows directly from Lemma~\ref{lem:LPEnumeration}. \end{proof} \begin{remark} The length functional presented here is related to the $k$-function from \cite{Shi1987a}. For $w\in W, \mu\in X^\ast(T)$ and $\alpha\in \Phi$, Shi proves \begin{align*} k(wt^\mu,\alpha) = \langle \mu,\alpha^\vee\rangle + \Phi^+((\alpha)(w^{-1}))-\Phi^+(\alpha). \end{align*} This result is a translation of \cite[Lemma~3.1]{Shi1987a} and \cite[Theorem~3.3]{Shi1987a} into our \enquote{$\Phi^+(\cdot)$}-notation. Up to a few changes of conventions, this recovers exactly our length functional. We will make these changes to express a few of Shi's ideas in terms of the length functional. Shi classifies the functions $\Phi\rightarrow\mathbb Z$ that are of the form $\ell(x,\cdot)$ in \cite[Proposition~5.1]{Shi1987a}.
Associated to each element $x\in \widetilde W$ and root $\alpha\in \Phi$, he defines the value $X(x,\alpha)\in \{+,\bigcirc,-\}$ as \begin{align*} X(x,\alpha) = \begin{cases}+,&\ell(x,\alpha)>0,\\ \bigcirc,&\ell(x,\alpha)=0,\\ -,&\ell(x,\alpha)<0. \end{cases} \end{align*} The \emph{sign type} of $x$ is defined as $\zeta(x) = (X(x,\alpha))_{\alpha\in \Phi}$. The set of \emph{admissible} sign types, i.e.\ the image of $\zeta:\widetilde W\rightarrow\{+,\bigcirc,-\}^\Phi$, is explicitly described in \cite[Theorem~2.1]{Shi1987b}. Shi also computes the number of sign types and canonical representatives in $W_a$ for each. For root systems of type $A_n$, the preimages $\zeta^{-1}(S)$ for the different admissible sign types $S$ form exactly the set of left Kazhdan-Lusztig cells for $W_a$ \cite{Shi1986}. An explicitly described equivalence relation of sign types then classifies the two-sided Kazhdan-Lusztig cells. The question of fully describing the Kazhdan-Lusztig cells for all affine Weyl groups seems to be open. The sign type $\zeta(x)$ determines the set of length positive elements for $x$. The converse is not true, i.e.\ it is possible to find groups $G$ and elements $x,y\in \widetilde W$ with $\LP(x) = \LP(y)$ but $\zeta(x)\neq \zeta(y)$. Computer searches have revealed such counterexamples for root systems of types $G_2$ and $B_2$, thus for every non-simply-laced root system. For simply-laced root systems, we can prove that the set $\LP(x)$ determines the sign type $\zeta(x)$. \end{remark} \begin{proposition} Assume that $\Phi$ is simply laced, $x \in \widetilde W$ and $\alpha\in \Phi$. Then the following are equivalent: \begin{enumerate}[(i)] \item $\ell(x,\alpha)>0$. \item For all $v\in \LP(x)$, we have $v^{-1}\alpha\in \Phi^+$. \end{enumerate} \end{proposition} \begin{proof} The implication (i) $\Rightarrow$ (ii) follows from the definition of length positivity. Now assume (ii).
The condition $v^{-1}\alpha\in \Phi^+$ for one $v\in \LP(x)$ already implies $\ell(x,\alpha)\geq 0$. Aiming for a contradiction, we thus assume that $\ell(x,\alpha)=0$. Recall from Example~\ref{ex:usualLPelement} that there exists an element $v\in \LP(x)$ such that \begin{align*} \forall \beta\in \Phi^+:~\langle \mu,v\beta\rangle + \Phi^+(v\beta)\geq 1. \end{align*} Considering the case $\beta = v^{-1}\alpha\in \Phi^+$ (by (ii)), we see \begin{align*} \ell(x,\alpha) = \langle \mu,v\beta\rangle + \Phi^+(v\beta) - \Phi^+(w\alpha)\geq 1-\Phi^+(w\alpha). \end{align*} So if $w\alpha \in \Phi^-$, we conclude (i). Considering the same situation for $x^{-1}$ by Lemma~\ref{lem:lengthFunctionalForProducts}, we find an element $v\in \LP(x)$ such that \begin{align*} \forall \beta\in \Phi^+:~\langle \mu,v\beta\rangle - \Phi^+(wv\beta)\geq 0. \end{align*} Considering the case $\beta = v^{-1}\alpha\in \Phi^+$, we see \begin{align*} \ell(x,\alpha) = \langle \mu,v\beta\rangle + \Phi^+(\alpha) - \Phi^+(wv\beta)\geq \Phi^+(\alpha). \end{align*} So if $\alpha\in \Phi^+$, we are done again. Let us thus assume from now on that $\alpha\in \Phi^-$ and $w\alpha\in \Phi^+$. In light of the assumption $\ell(x,\alpha)=0$, we can restate this as $\langle \mu,\alpha\rangle = -1$. For roots $\beta,\gamma\in \Phi$, we write $\beta\leq \gamma$ if the difference $\gamma-\beta$ is a sum of positive roots, and we write $\beta<\gamma$ if moreover $\beta\neq\gamma$. We define a \emph{root sequence} associated to an element $v\in \LP(x)$ to be a sequence \begin{align*} v^{-1}\alpha=\beta_1>\cdots>\beta_\ell\in \Phi^+ \end{align*} such that $\beta_i-\beta_{i+1}\in\Phi^+$ for $i=1,\dotsc,\ell-1$ and $\langle \mu,v\beta_i\rangle=-1$ for $i=1,\dotsc,\ell$. Certainly, we can find a root sequence for each $v\in \LP(x)$ of length $1$ by setting $\beta_1 =v^{-1}\alpha$. We order the set of root sequences lexicographically.
Explicitly, consider root sequences $(\beta_1,\dotsc,\beta_\ell)$ associated with $v\in \LP(x)$ and $(\beta_1',\dotsc,\beta'_{\ell'})$ associated with $v'\in \LP(x)$. We write $(\beta_1,\dotsc,\beta_\ell)<(\beta_1',\dotsc,\beta'_{\ell'})$ if one of the following conditions is satisfied: \begin{itemize} \item There is $i\in \{1,\dotsc,\min\{\ell,\ell'\}\}$ with $\beta_{i'} = \beta_{i'}'$ for $i'=1,\dotsc,i-1$ and $\beta_i<\beta_i'$. \item We have $\ell>\ell'$ and $\beta_i = \beta_i'$ for $i=1,\dotsc,\ell'$. \end{itemize} Among all possible $v\in \LP(x)$ and root sequences $(\beta_1,\dotsc,\beta_\ell)$ associated with them, we choose a pair such that the root sequence becomes minimal with respect to the above order. We first claim that $\beta_\ell$ is simple: Indeed, if we had $\beta_\ell = \gamma_1+\gamma_2$ for positive roots $\gamma_1,\gamma_2$, then $\ell(x,v\gamma_1),\ell(x,v\gamma_2)\geq 0$ by length positivity. Thus \begin{align*} \langle \mu,v\gamma_1\rangle\geq -1,\quad \langle \mu,v\gamma_2\rangle\geq -1,\quad \langle \mu,v\gamma_1+v\gamma_2\rangle = -1. \end{align*} Hence $\langle \mu,v\gamma_i\rangle=-1$ for one of the roots $\gamma_1,\gamma_2$. We see that we can extend the root sequence $(\beta_1,\dotsc,\beta_\ell)$, which contradicts minimality by definition. Note that $\langle \mu,v\beta_\ell\rangle=-1$ and $\ell(x,v\beta_\ell)\geq 0$ implies $\ell(x,v\beta_\ell)=0$. By Lemma~\ref{lem:LPEnumeration}, this means $v' = vs_{\beta_\ell}\in \LP(x)$. If $\ell=1$, then $(v')^{-1}\alpha = -v^{-1}\alpha$, so we get the desired contradiction to (ii). Therefore, $\ell>1$. We claim that $\langle \beta_\ell^\vee,\beta_i\rangle\geq 0$ for $i=1,\dotsc,\ell$: Indeed, if we had $\langle \beta_\ell^\vee,\beta_i\rangle<0$, then $\beta_i+\beta_\ell\in \Phi^+$. So we get \begin{align*} \ell(x,v(\beta_i+\beta_\ell))\geq 0\text{ and }\langle \mu,v\beta_i + v\beta_\ell\rangle=-2. \end{align*} This is impossible.
Note that $\langle \beta_\ell^\vee,\beta_{\ell-1}\rangle=1$, as $\beta_{\ell-1}$ is the sum of $\beta_\ell$ with another root, and $\Phi$ is simply laced. We thus may pick $\ell'\in\{1,\dotsc,\ell-1\}$ minimally such that $\langle \beta_\ell^\vee,\beta_{\ell'}\rangle>0$. Consider the root sequence \begin{align*} \beta_i' = s_{\beta_\ell}(\beta_i),\quad i=1,\dotsc,\ell'. \end{align*} This is a root sequence associated with $v' = vs_{\beta_\ell}\in \LP(x)$. Since $\beta_i' = \beta_i$ for $i=1,\dotsc,\ell'-1$ (by choice of $\ell'$), and $\beta_{\ell'}' < \beta_{\ell'}$, it is a smaller root sequence. This is finally a contradiction to minimality. \end{proof} The above proof encodes an algorithm, which finds for each root $\alpha\in \Phi$ with $\ell(x,\alpha)=0$ and each $v\in \LP(x)$ a sequence of elements in $\LP(x)$ as in Lemma~\ref{lem:LPEnumeration}, starting at $v$ and ending in an element $v'\in \LP(x)$ satisfying $(v')^{-1}\alpha\in \Phi^-$. As noted before, this proposition is false for every non-simply-laced root system. \subsection{Quantum Bruhat graph} Associated to the root system $\Phi$, we have the \emph{quantum Bruhat graph} $\QB(W)$ as introduced by Brenti-Fomin-Postnikov \cite{Brenti1998}. This graph and its associated weight function play a crucial role in Section~\ref{chap:generic-sigma-conjugation} when discussing generic $\sigma$-conjugacy classes. In the past, the quantum Bruhat graph was used as a technical tool in a number of different contexts \cite{Postnikov2005, Lam2010, Lenart2015, Milicevic2021, Milicevic2020, Sadhukhan2021, He2021c, He2021d}. \begin{definition} \begin{enumerate}[(a)] \item The \emph{quantum Bruhat graph} associated with $\Phi$, denoted $\QB(W)$, is a $\mathbb Z\Phi^\vee$-weighted directed graph with vertex set $W$.
Its edges are of the form $w\rightarrow ws_\alpha$ for $w\in W$ and $\alpha\in \Phi^+$ such that one of the following conditions is satisfied: \begin{itemize} \item[(B)] $\ell(ws_\alpha) = \ell(w)+1$ or \item[(Q)] $\ell(ws_\alpha) = \ell(w) +1-\langle \alpha^\vee,2\rho\rangle$. \end{itemize} \item Edges of type (B) are called \emph{Bruhat edges} and have weight $0\in \mathbb Z\Phi^\vee$. Edges of type (Q) are called \emph{quantum edges} and have weight $\alpha^\vee\in \mathbb Z\Phi^\vee$. \item If $w, w'\in W$, a \emph{path from $w$ to $w'$} is a sequence of adjacent edges in $\QB(W)$ \begin{align*} p:w = w_1\rightarrow w_2\rightarrow\cdots\rightarrow w_{\ell(p)+1} = w'. \end{align*} The \emph{length} of $p$ is the number of edges, denoted $\ell(p)$. The \emph{weight} of $p$ is the sum of its edges' weights, denoted $\wt(p)\in \mathbb Z\Phi^\vee$. \item A path $p$ from $w$ to $w'$ is \emph{shortest} if there is no path $p'$ from $w$ to $w'$ with $\ell(p')<\ell(p)$. In that case, we define $d(w\Rightarrow w') := \ell(p)$. \end{enumerate} \end{definition} \begin{lemma}[{\cite[Lemma~1]{Postnikov2005}}]Let $w, w'\in W$. \begin{enumerate}[(a)] \item There exists a path from $w$ to $w'$ in $\QB(W)$. \item Any two shortest paths from $w$ to $w'$ have the same weight, denoted $\wt(w\Rightarrow w')\in \mathbb Z\Phi^\vee$. \item Any path $p$ from $w$ to $w'$ has weight $\wt(p)\geq \wt(w\Rightarrow w')$.\pushQED{\qed}\qedhere\popQED \end{enumerate} \end{lemma} One interpretation of the weight function $\wt(w\Rightarrow w')$ is that it measures the failure of the inequality $w\leq w'$ in the Bruhat order on $W$. Indeed, $\wt(w\Rightarrow w')=0$ if and only if $w\leq w'$. We have the following converse to part (c) of the above lemma. \begin{lemma}[{\cite[Equation~(4.3)]{Milicevic2020}}]\label{lem:weight2rho}For any path $p$ from $w$ to $w'$, we have \begin{align*} \langle \wt(p),2\rho\rangle = \ell(w) - \ell(w') + \ell(p).
\end{align*} In particular, \begin{align*} &\langle \wt(w\Rightarrow w'),2\rho\rangle = \ell(w) - \ell(w') + d(w\Rightarrow w').\pushQED{\qed}\qedhere\popQED \end{align*} \end{lemma} \section{Generic $\sigma$-conjugacy class}\label{chap:generic-sigma-conjugation} For an element $x\in \widetilde W$, the \emph{generic} $\sigma$-conjugacy class $[b] = [b_x]\in B(G)$ is the uniquely determined $\sigma$-conjugacy class such that $IxI\cap [b]$ is dense in $IxI$. For each $y\in \widetilde W$, we write $[y]\in B(G)$ for the $\sigma$-conjugacy class of any representative of $y$ in $G(L)$. We have the following description due to Viehmann: \begin{theorem}[{\cite[Corollary~5.6]{Viehmann2014}}]\label{thm:truncations} Let $x \in \widetilde W$. Then $[b_x]$ is the largest $\sigma$-conjugacy class in $B(G)$ of the form $[y]$ where $y\leq x$ in the Bruhat order on $\widetilde W$.\pushQED{\qed}\qedhere\popQED \end{theorem} Viehmann's original proof makes the assumption that the group under consideration is unramified, but it is not hard to remove this assumption. Indeed, we saw in Lemma~\ref{lem:nonEmptynessBruhatCondition} that \cite[Proposition~5.5]{Viehmann2014} can be proved without this assumption, and then Viehmann's proof of \cite[Corollary~5.6]{Viehmann2014} works without further changes. We can now describe this generic $\sigma$-conjugacy class more explicitly: \begin{theorem}\label{thm:genericGKP}Assume that $G$ is quasi-split. Let $x = w\varepsilon^\mu\in \widetilde W$ and denote by $[b_x]$ its generic $\sigma$-conjugacy class. Writing $\lambda_x := \lambda_G(b_x)$, we have \begin{align*} \lambda_x = \max_{v\in W} \left(v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv))\right)\in X_\ast(T)_{\Gamma}. \end{align*} \end{theorem} We call $\lambda_x$ the \emph{generic $\lambda$-invariant} of $x$. We discuss previous works and some applications of this result now, before giving its proof in the next subsection.
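The following example is a straightforward sanity check of the formula in the simplest possible case; it is not needed in the sequel. \begin{example} Let $x = \varepsilon^\mu$ with $\mu\in X_\ast(T)$ dominant, so $w=1$. For every $v\in W$, we have $v^{-1}\mu\leq \mu$ and $\wt(v\Rightarrow\prescript\sigma{}(v))\geq 0$, while for $v=1$ we get $v^{-1}\mu = \mu$ and $\wt(1\Rightarrow 1)=0$. Hence the maximum in Theorem~\ref{thm:genericGKP} is attained at $v=1$, and $\lambda_x = \mu\in X_\ast(T)_\Gamma$. This is consistent with Theorem~\ref{thm:truncations}: for dominant $\mu$, the element $y = x$ itself realizes the largest class $[y]$ with $y\leq x$. \end{example}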
We begin with a more explicit way to calculate generic $\lambda$-invariants. The following lemma does not depend on the theorem, while the corollary does. \begin{lemma}\label{lem:genericGKPImprovements} Let $x=w\varepsilon^\mu\in \widetilde W$ and $v\in W$. \begin{enumerate}[(a)] \item If $v$ is not length positive for $x$, and $vs_\alpha$ is an adjustment, then \begin{align*} v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv))\leq^\sigma (vs_\alpha)^{-1}\mu-\wt(vs_\alpha\Rightarrow\prescript\sigma{}(wvs_\alpha)). \end{align*} \item We have \begin{align*} \langle v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv)),2\rho\rangle \leq \ell(x)-d(v\Rightarrow\prescript\sigma{}(wv)). \end{align*} Equality holds if and only if $v\in \LP(x)$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(a)] \item We compute \begin{align*} (vs_\alpha)^{-1}\mu - \wt(vs_\alpha\Rightarrow \prescript\sigma{}(wvs_\alpha))\geq&v^{-1}\mu - \langle \mu,v\alpha\rangle\alpha^\vee -\wt(vs_\alpha\Rightarrow v) \\&- \wt(v\Rightarrow \prescript\sigma{}(wv)) - \wt(\prescript\sigma{}(wv)\Rightarrow \prescript\sigma{}(wvs_\alpha)) \\\geq^\sigma&v^{-1}\mu - \langle \mu,v\alpha\rangle\alpha^\vee -\Phi^+(v\alpha)\alpha^\vee \\&- \wt(v\Rightarrow \prescript\sigma{}(wv)) - \Phi^+(-wv\alpha)\alpha^\vee \\=&v^{-1}\mu- \wt(v\Rightarrow \prescript\sigma{}(wv)) - (\ell(x,v\alpha)+1)\alpha^\vee \\\geq&v^{-1}\mu- \wt(v\Rightarrow \prescript\sigma{}(wv)), \end{align*} where the last inequality uses that $\ell(x,v\alpha)\leq -1$. \item Indeed, using Corollary~\ref{cor:positiveLengthFormula} and Lemma~\ref{lem:weight2rho}, we obtain \begin{align*} \langle v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv)),2\rho\rangle=& \langle v^{-1}\mu,2\rho\rangle - \ell(v) + \ell(wv) - d(v\Rightarrow\prescript\sigma{}(wv)) \\\leq&\ell(x) - d(v\Rightarrow\prescript\sigma{}(wv)), \end{align*} with equality iff $v\in \LP(x)$.\qedhere \end{enumerate} \end{proof} \begin{corollary}\label{cor:genericGKPMinDistance} Let $x = w\varepsilon^\mu\in \widetilde W$.
Among all elements $v\in \LP(x)$, pick one such that the distance $d(v\Rightarrow\prescript\sigma{}(wv))$ in the quantum Bruhat graph becomes minimal. Then \begin{align*} \lambda_x = v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv)) \in X_\ast(T)_{\Gamma}. \end{align*} In particular, the generic Newton point of $x$ is given by \begin{align*}\nu_x = \conv(v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv))).\end{align*} \end{corollary} \begin{proof} We know that $\lambda_x = (v')^{-1}\mu - \wt(v'\Rightarrow \prescript\sigma{}(wv'))$ for some $v'\in W$ by the theorem. Using the above lemma, we conclude that the same equality holds for some $v'\in \LP(x)$. Now $v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv))\leq (v')^{-1}\mu - \wt(v'\Rightarrow \prescript\sigma{}(wv'))$ by the theorem, and \begin{align*} \langle v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv)),2\rho\rangle\geq \langle (v')^{-1}\mu - \wt(v'\Rightarrow \prescript\sigma{}(wv')),2\rho\rangle \end{align*} by choice of $v$. The claim follows. \end{proof} The following lemma might be helpful for computing $\nu_x$. \begin{lemma}\label{lem:gnpJEstimate} Let $x = w\varepsilon^\mu\in \widetilde W$, $v\in \LP(x)$ and $J\subseteq \Delta$ such that $J=\sigma(J)$ and \begin{align*} \forall \alpha\in \Phi^+\setminus \Phi^+_J:~\ell(x,v\alpha)>0. \end{align*} Then there exists $J'\subseteq J$ with $\sigma(J') = J'$ and \begin{align*} \conv(v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv))) = \pi_{J'}(v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv))). \end{align*} \end{lemma} \begin{proof} In view of Lemma \ref{lem:convFacts} (e), it suffices to show for each $\alpha\in \Phi^+\setminus \Phi^+_J$ that \begin{align*} \langle \avg_\sigma(v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv))),\alpha\rangle\geq 0. \end{align*} Let $N>1$ such that the action of $\sigma^N$ on $X_\ast(T)_{\Gamma_0}$ becomes trivial. 
Then \begin{align*} &\langle \avg_\sigma(v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv))),\alpha\rangle\\=&\frac 1N\sum_{k=1}^N\langle v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv)),\sigma^k(\alpha)\rangle \\=&\frac 1N\sum_{k=1}^N\langle \mu,v\sigma^k(\alpha)\rangle -\langle \wt(v\Rightarrow\prescript\sigma{}(wv)),\sigma^k(\alpha)\rangle. \end{align*} By\footnote{The original formulation of this statement has a small typo, the version cited here is the correct one: Indeed, let $x,y\in W$ and define dominant coweights $\mu_1, \mu_2\in X_\ast(T)_{\Gamma_0}$ on each simple root $\alpha\in \Delta$ as follows: \begin{align*} \langle \mu_1,\alpha\rangle := \Phi^+(-y^{-1}\alpha),\quad \langle\mu_2,\alpha\rangle := \Phi^+(x\alpha). \end{align*} Then one checks easily that we are in the situation of \cite[Theorem~1.1]{He2021c}, and part (1) of this theorem yields $\langle\wt(y^{-1}\Rightarrow x),\alpha\rangle\leq \Phi^+(-y^{-1}\alpha) + \Phi^+(x\alpha)$.} \cite[Section~2.5]{He2021c}, we may estimate \begin{align*} \langle \wt(v\Rightarrow\prescript\sigma{}(wv)),\sigma^k(\alpha)\rangle\leq& \Phi^+(-v\sigma^k(\alpha)) + \Phi^+(\prescript\sigma{}(wv)\sigma^k(\alpha)) \\=& \Phi^+(-v\sigma^k(\alpha)) +\Phi^+(wv\sigma^{k-1}(\alpha)). \end{align*} Thus \begin{align*} &\frac 1N\sum_{k=1}^N\left(\langle \mu,v\sigma^k(\alpha)\rangle -\langle \wt(v\Rightarrow\prescript\sigma{}(wv)),\sigma^k(\alpha)\rangle\right) \\\geq&\frac 1N\sum_{k=1}^N\left(\langle \mu,v\sigma^k(\alpha)\rangle -\Phi^+(-v\sigma^k(\alpha)) -\Phi^+(wv\sigma^{k-1}(\alpha))\right) \\=&\frac 1N\sum_{k=1}^N\left(\langle \mu,v\sigma^k(\alpha)\rangle -\Phi^+(-v\sigma^k(\alpha)) -\Phi^+(wv\sigma^{k}(\alpha))\right) \\=&\frac 1N\sum_{k=1}^N\Bigl(\underbrace{\ell(x,v\sigma^k(\alpha))}_{\geq 1}-1\Bigr)\geq 0. \end{align*} This finishes the proof.
\end{proof} \begin{corollary}\label{cor:gnpShrunken} If $x = w\varepsilon^\mu$ lies in a shrunken Weyl chamber and $v\in W$ is the unique length positive element, then \begin{align*} \nu_x = v^{-1}\mu-\wt(v\Rightarrow\prescript\sigma{}(wv))\in X_\ast(T)_{\Gamma_0}\otimes\mathbb Q. \end{align*} \end{corollary} \begin{proof} Set $J:=\emptyset$ in the previous lemma. \end{proof} If $G$ is split and $\mu$ sufficiently regular, this corollary is the main result of \cite{Milicevic2021}, which was the first paper to derive an explicit formula for $\nu_x$ from Theorem~\ref{thm:truncations}. Mili\'cevi\'c's result has since been generalized by Sadhukhan \cite{Sadhukhan2021}, who proves the statement of Corollary~\ref{cor:gnpShrunken} if $G$ is split and $\mu$ satisfies a regularity condition that is weaker than Mili\'cevi\'c's. He and Nie \cite[Proposition~3.1]{He2021c} proved Corollary~\ref{cor:gnpShrunken} as stated here. \ifthesis\else These previous results allow for a short proof of the following essential statement on the quantum Bruhat graph. \begin{lemma}\label{lem:weightEstimate} Let $w\in W$ and $\alpha\in \Phi^+$. Then \begin{align*} \wt(ws_\alpha\Rightarrow w)\leq \alpha^\vee\Phi^+(w\alpha). \end{align*} \end{lemma} \begin{proof} If $w\alpha\in \Phi^-$, then $ws_\alpha\leq w$ in the Bruhat order of $W$, showing $\wt(ws_\alpha\Rightarrow w)=0$. Let us hence assume that $w\alpha\in \Phi^+$. We may assume that $G$ is split and pick a dominant and sufficiently regular coweight $\mu\in X_\ast(T)_{\Gamma_0}$. By Mili\'cevi\'c's result, the generic Newton point of $x := s_{w\alpha} \varepsilon^{ws_\alpha(\mu)}$ is given by \begin{align*} \nu_x = \mu - \wt(ws_\alpha\Rightarrow w). \end{align*} Observe that \begin{align*} \varepsilon^{ws_\alpha(\mu-\alpha^\vee)} = s_{w\alpha} \varepsilon^{-w\alpha^\vee}x<x \end{align*} in the Bruhat order. By Theorem~\ref{thm:truncations}, we get \begin{align*} \nu_x \geq \mu - \alpha^\vee, \end{align*} proving the claim.
\end{proof} \fi As an application of Theorem~\ref{thm:genericGKP}, we classify the \emph{cordial} elements from Mili\'cevi\'c-Viehmann \cite{Milicevic2020}. \begin{definition} Let $x =w\varepsilon^\mu\in \widetilde W$ and $v\in W$ be the specific length positive element constructed in Example~\ref{ex:usualLPelement}. Then $x$ is cordial if \begin{align*} \ell(x)-\ell(v^{-1}\prescript\sigma{}(wv))= \langle \nu_x,2\rho\rangle - \defect(b_x). \end{align*} \end{definition} \begin{proposition}\label{prop:cordial} Let $x = w\varepsilon^\mu\in \widetilde W$ and $v\in \LP(x)$. Then \begin{align*} \ell(x) - \ell(v^{-1}\prescript\sigma{}(wv)) \leq \langle \nu_x,2\rho\rangle - \defect(b_x). \end{align*} Equality holds if and only if both conditions (a) and (b) are satisfied. Moreover, the condition (a) is always equivalent to (a'). \begin{itemize} \item[(a)] The generic $\lambda$-invariant $\lambda_x$ is given by \begin{align*} \lambda_x = v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv))\in X_\ast(T)_{\Gamma}. \end{align*} \item[(a')] We have \begin{align*} d(v\Rightarrow\prescript\sigma{}(wv)) = \min_{v'\in \LP(x)} d(v'\Rightarrow\prescript\sigma{}(wv')). \end{align*} \item[(b)] We have $d(v\Rightarrow\prescript\sigma{}(wv)) = \ell(v^ {-1}\prescript\sigma{}(wv))$. \end{itemize} \end{proposition} \begin{proof} By Lemma~\ref{lem:genericGKPImprovements} and Theorem~\ref{thm:genericGKP}, (a) $\iff$ (a'). For the remaining claims, we calculate \begin{align*} \ell(x) - \ell(v^{-1}\prescript\sigma{}(wv)) \leq& \ell(x) - d(v\Rightarrow\prescript\sigma{}(wv)) \\=&\langle v^{-1}\mu-\wt(v\Rightarrow\prescript\sigma{}(wv)),2\rho\rangle \\\underset{\text{T\ref{thm:genericGKP}}}\leq&\langle \lambda_x,2\rho\rangle \underset{\text{P\ref{prop:defect}}}= \langle \nu_x,2\rho\rangle - \defect(b_x).\qedhere \end{align*} \end{proof} \begin{corollary}\label{cor:cordial} Let $x = w\varepsilon^\mu\in \widetilde W$ and $v\in W$ be of minimal length such that $v^{-1}\mu$ is dominant. 
Then $x$ is cordial if and only if the following two conditions are both satisfied: \begin{enumerate}[(1)] \item For each $v'\in \LP(x)$, $d(v\Rightarrow\prescript\sigma{}(wv))\leq d(v'\Rightarrow\prescript\sigma{}(wv'))$. \item $d(v\Rightarrow\prescript\sigma{}(wv)) = \ell(v^{-1}\prescript\sigma{}(wv))$.\pushQED{\qed}\qedhere\popQED \end{enumerate} \end{corollary} This corollary generalizes the description of superregular cordial elements for split $G$ due to Mili\'cevi\'c-Viehmann \cite[Proposition~4.2]{Milicevic2020} and the description of shrunken cordial elements due to He-Nie \cite[Remark~3.2]{He2021c}. One can generalize the statement and proof of \cite[Theorem~1.2~(b),(c)]{Milicevic2020} accordingly. \subsection{Proof of the Theorem} Fix $x=w\varepsilon^\mu\in \widetilde W$. We need to show the following two claims: \begin{itemize} \item There exists some $v\in W$ such that \begin{align*} \lambda_x \leq v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv))\in X_\ast(T)_{\Gamma}. \end{align*} \item For each $v\in W$, we have \begin{align*} v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv)) \leq \lambda_x\in X_\ast(T)_{\Gamma}. \end{align*} By definition of $\lambda_G(x)$, this is equivalent to \begin{align*} \avg_\sigma(v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv)))\leq \nu_x\in X_\ast(T)_{\Gamma_0}\otimes\mathbb Q. \end{align*} \end{itemize} Let us use the shorthand notation $\lambda \leq^\sigma \lambda'$ to say that the image of $\lambda$ in $X_\ast(T)_\Gamma$ is less than or equal to the image of $\lambda'$ in $X_\ast(T)_\Gamma$ ($\lambda, \lambda'$ being elements of $X_\ast(T), X_\ast(T)_{\Gamma_0}$ or $X_\ast(T)_\Gamma$). We write $\lambda\equiv^\sigma \lambda'$ to denote $\lambda\leq^\sigma\lambda'$ and $\lambda'\leq^\sigma \lambda$. Similarly, we write $\lambda<^\sigma\lambda'$ to denote $\lambda\leq^\sigma\lambda'$ but $\lambda'\not\leq^\sigma \lambda$.
For this section, call an element $v\in W$ \emph{maximal} if there exists no $v'\in W$ such that \begin{align*} v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv))<^\sigma(v')^{-1}\mu - \wt(v'\Rightarrow\prescript\sigma{}(wv')). \end{align*} \begin{lemma}\label{lem:maximalityConsequences} Let $v\in W$ be maximal. Moreover, fix a root $\alpha\in \Phi^+$ such that \begin{align*} \wt(v\Rightarrow\prescript\sigma{}(wv)) \equiv^\sigma \alpha^\vee\Phi^+(-v\alpha) + \wt(vs_\alpha\Rightarrow \prescript\sigma{}(wv)). \end{align*} Then precisely one of the following conditions is satisfied: \begin{enumerate}[(1)] \item $\ell(x,v\alpha)>0$, and the element \begin{align*} x' := w'\varepsilon^{\mu'} := xr_{v\alpha,\Phi^+(-v\alpha)}\in \widetilde W \end{align*} satisfies $x'<x$ and \begin{align*} (vs_\alpha)^{-1}\mu' - \wt(vs_\alpha\Rightarrow \prescript\sigma{}(w'vs_\alpha)) \equiv^\sigma v^{-1}\mu -\wt(v\Rightarrow \prescript\sigma{}(wv)). \end{align*} \item $\ell(x,v\alpha)=0$, $vs_\alpha\in W$ is maximal with \begin{align*} v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv))\equiv^\sigma (vs_\alpha)^{-1}\mu - \wt(vs_\alpha \Rightarrow\prescript\sigma{}(wvs_\alpha)) \end{align*} and \begin{align*} \wt(vs_\alpha\Rightarrow \prescript\sigma{}(wvs_\alpha)) \equiv^\sigma \wt(vs_\alpha\Rightarrow \prescript\sigma{}(wv)) + \alpha^\vee\Phi^+(-wv\alpha). \end{align*} \end{enumerate} \end{lemma} \begin{remark} If $v\neq \prescript\sigma{}(wv)$ and $v\rightarrow vs_\alpha$ is an edge in $\QB(W)$ that is part of a shortest path from $v$ to $\prescript\sigma{}(wv)$, then the root $\alpha\in \Phi^+$ will satisfy the condition of the Lemma. \end{remark} \begin{proof}[Proof of Lemma~\ref{lem:maximalityConsequences}] We use maximality of $v$ by comparing to $vs_\alpha$. 
Now calculate \begin{align*} &(vs_\alpha)^{-1}\mu - \wt(vs_\alpha\Rightarrow\prescript\sigma{}(wvs_\alpha)) \\\geq&(vs_\alpha)^{-1}\mu - \wt(vs_\alpha\Rightarrow \prescript\sigma{}(wv)) - \wt(\prescript\sigma{}(wv)\Rightarrow \prescript\sigma{}(wvs_\alpha)) \\\equiv^\sigma &(vs_\alpha)^{-1}\mu + \alpha^\vee \Phi^+(-v\alpha) - \wt(v\Rightarrow\prescript\sigma{}(wv)) - \underbrace{\wt(wv\Rightarrow wvs_\alpha)}_{\leq \alpha^\vee \Phi^+(-wv\alpha)\text{ by L\ref{lem:weightEstimate}}} \\\geq&v^{-1}\mu - \langle \mu,v\alpha\rangle \alpha^\vee + \alpha^\vee\Phi^+(-v\alpha) - \wt(v\Rightarrow\prescript\sigma{}(wv)) - \alpha^\vee\Phi^+(-wv\alpha) \\=&v^{-1}\mu -\wt(v\Rightarrow\prescript\sigma{}(wv)) -\ell(x,v\alpha)\alpha^\vee. \end{align*} If $\ell(x,v\alpha)<0$, we get a contradiction to the maximality of $v$. Next assume that $\ell(x,v\alpha)=0$. Then every inequality in the above computation must be an equality (up to $\sigma$-coinvariants), or we would again get a contradiction. In particular, $vs_\alpha$ must be maximal, as \begin{align*} v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv))\equiv^\sigma (vs_\alpha)^{-1}\mu - \wt(vs_\alpha \Rightarrow\prescript\sigma{}(wvs_\alpha)). \end{align*} Moreover, we obtain \begin{align*} \wt(vs_\alpha\Rightarrow\prescript\sigma{}(wvs_\alpha)) \equiv^\sigma \wt(vs_\alpha\Rightarrow \prescript\sigma{}(wv)) + \alpha^\vee\Phi^+(-wv\alpha). \end{align*} This shows all the claims in (2). Finally assume $\ell(x,v\alpha)>0$. Then $x'<x$, i.e.\ $x(v\alpha, \Phi^+(-v\alpha))\in \Phi^-_{{\mathrm{af}}}$, follows from Lemma~\ref{lem:lengthFunctionalAsCountingAffineRoots}. Calculating explicitly, we get \begin{align*} w' \varepsilon^{\mu'} = w\varepsilon^\mu s_{v\alpha} \varepsilon^{\Phi^+(-v\alpha)v\alpha^\vee} =ws_{v\alpha}\varepsilon^{s_{v\alpha}(\mu) + \Phi^+(-v\alpha)v\alpha^\vee}.
\end{align*} So indeed, \begin{align*} (vs_\alpha)^{-1}\mu' - \wt(vs_\alpha\Rightarrow \prescript\sigma{}(w'vs_\alpha)) =& v^{-1}\mu - \alpha^\vee\Phi^+(-v\alpha) - \wt(vs_\alpha\Rightarrow\prescript\sigma{}(wv)) \\\equiv^\sigma&v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv)).\qedhere \end{align*} \end{proof} \begin{corollary}\label{cor:genericGKPDichotomy} Let $v$ be maximal. Then at least one of the following conditions is satisfied: \begin{enumerate}[(1)] \item There exists $x' = w'\varepsilon^{\mu'}<x$ and $v'\in W$ such that \begin{align*} v^{-1}\mu - \wt(v\Rightarrow \prescript\sigma{}(wv)) \equiv^\sigma (v')^{-1}\mu' - \wt(v'\Rightarrow \prescript\sigma{}(w'v')). \end{align*} \item The element $\prescript\sigma{}(wv)\in W$ is maximal, and we have \begin{align*} v^{-1}\mu - \wt(v\Rightarrow \prescript\sigma{}(wv))~\equiv^\sigma~ \prescript\sigma{}(wv)^{-1}\mu - \wt(\prescript\sigma{}(wv)\Rightarrow\prescript\sigma{}(w\prescript\sigma{}(wv))). \end{align*} \end{enumerate} \end{corollary} \begin{proof} Choose a shortest path in $\QB(W)$ \begin{align*} p : v\rightarrow vs_{\alpha_1}\rightarrow vs_{\alpha_1}s_{\alpha_2}\rightarrow\cdots\rightarrow vs_{\alpha_1}s_{\alpha_2}\cdots s_{\alpha_k} = \prescript\sigma{}(wv). \end{align*} For $i=0,\dotsc,k$, write $v_i := vs_{\alpha_1}\cdots s_{\alpha_i}$, so that $v_0 = v$ and $v_k = \prescript\sigma{}(wv)$, and consider the roots \begin{align*} \beta_i = vs_{\alpha_1}\cdots s_{\alpha_{i-1}}(\alpha_i) = v_{i-1}(\alpha_i)\in \Phi,\qquad i=1,\dotsc,k. \end{align*} We fix $i^\ast \in \{0,\dotsc,k\}$ maximally such that $\ell(x,\beta_i)=0$ for $1\leq i\leq i^\ast$. We claim that each $v_i$ for $i=0,\dotsc,i^\ast$ satisfies the following conditions: \begin{enumerate}[(a)] \item $v_i$ is maximal, \item $d(v_i\Rightarrow\prescript\sigma{}(wv_i)) = d(v_i\Rightarrow\prescript\sigma{}(wv)) + d(\prescript\sigma{}(wv)\Rightarrow\prescript\sigma{}(wv_i))$. \item $v^{-1}\mu -\wt(v\Rightarrow\prescript\sigma{}(wv)) \equiv^\sigma v_i^{-1}\mu-\wt(v_i\Rightarrow\prescript\sigma{}(wv_i))$. \end{enumerate} Induction on $i$. Since $v_0=v$, the claim is clear for $i=0$.
Now in the inductive step, assume that $i<i^\ast$ and that the conditions (a)--(c) are true for $v_i$. We apply Lemma~\ref{lem:maximalityConsequences} to $(v_i,\alpha_{i+1})$. This is possible, as $v_i\rightarrow v_{i+1}$ is part of a shortest path from $v_i$ to $\prescript\sigma{}(wv)$ (by choice of the path $p$), hence part of a shortest path from $v_i$ to $\prescript\sigma{}(wv_i)$ by (b). Since $i<i^\ast$, we get $\ell(x,v_i\alpha_{i+1}) = \ell(x,\beta_{i+1})=0$, so condition (2) of Lemma~\ref{lem:maximalityConsequences} must be satisfied. Now (a) and (c) follow immediately for $v_{i+1}$. For condition (b), use condition (2) of the lemma to compute \begin{align*} &\wt(v_{i+1}\Rightarrow\prescript\sigma{}(wv_{i+1}))\\\equiv^\sigma &\wt(v_{i+1}\Rightarrow \prescript\sigma{}(wv_i))+\alpha_{i+1}^\vee\Phi^+(-wv_i\alpha_{i+1}) \\\underset{\text{(b)}}=&\wt(v_{i+1}\Rightarrow \prescript\sigma{}(wv)) + \wt(\prescript\sigma{}(wv)\Rightarrow \prescript\sigma{}(wv_i)) + \alpha_{i+1}^\vee\Phi^+(-wv_i\alpha_{i+1}) \\\underset{\text{L\ref{lem:weightEstimate}}}{\geq^\sigma}&\wt(v_{i+1}\Rightarrow \prescript\sigma{}(wv)) + \wt(\prescript\sigma{}(wv)\Rightarrow \prescript\sigma{}(wv_i)) +\wt(\prescript\sigma{}(wv_i)\Rightarrow\prescript\sigma{}(wv_{i+1})) \\\geq&\wt(v_{i+1}\Rightarrow \prescript\sigma{}(wv)) + \wt(\prescript\sigma{}(wv)\Rightarrow \prescript\sigma{}(wv_{i+1})) \\\geq&\wt(v_{i+1}\Rightarrow\prescript\sigma{}(wv_{i+1})). \end{align*} We see that equality must hold in every step (up to the $\sigma$-action). In light of Lemma~\ref{lem:weight2rho}, condition (b) for $v_{i+1}$ follows, finishing the induction. With the above claim proved for all $i\in\{0,\dotsc,i^\ast\}$, we distinguish two cases: \begin{enumerate}[(1)] \item Case $i^\ast<k$. Then $\ell(x, \beta_{i^\ast+1}) = \ell(x, v_{i^\ast}(\alpha_{i^\ast+1}))\neq 0$ by choice of $i^\ast$, so condition (1) of Lemma~\ref{lem:maximalityConsequences} must be satisfied. Applying the lemma to $v_{i^\ast}$ and $\alpha_{i^\ast+1}$, we immediately get the desired $x'$. \item Case $i^\ast =k$.
Then $\prescript\sigma{}(wv) = v_{i^\ast}$ and we obtain everything claimed.\qedhere \end{enumerate} \end{proof} \begin{lemma}\label{lem:genericKottwitzLowerBound} Let $v\in W$. Then there exists some $x'\leq x$ with \begin{align*} \nu(x')\geq \avg_{\sigma}(v^{-1}\mu - \wt(v\Rightarrow \prescript\sigma{}(wv))). \end{align*} In other words, $\lambda_x\geq^\sigma v^{-1}\mu - \wt(v\Rightarrow \prescript\sigma{}(wv))$. \end{lemma} \begin{proof} Induction on $\ell(x)$. We may certainly assume that $v$ is maximal. If there exists $x' = w'\varepsilon^{\mu'}<x$ and $v'\in W$ with \begin{align*} v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv))~\equiv^\sigma~ (v')^{-1}\mu'-\wt(v'\Rightarrow\prescript\sigma{}(w'v')), \end{align*} we may apply the inductive hypothesis to $x'$ and are done. Let us assume that this is not the case. By the above corollary, we see that $\prescript\sigma{}(wv)$ is maximal and \begin{align*} v^{-1}\mu - \wt(v\Rightarrow \prescript\sigma{}(wv))~\equiv^\sigma ~\prescript\sigma{}(wv)^{-1}\mu - \wt(\prescript\sigma{}(wv)\Rightarrow\prescript\sigma{}(w\prescript\sigma{}(wv))). \end{align*} For $n\geq 0$, we define the element $v_n\in W$ by $v_0 := v$ and $v_{n+1} := \prescript\sigma{}(wv_n)\in W$. A simple induction argument shows that each $v_n$ is maximal and \begin{align*} v^{-1}\mu - \wt(v\Rightarrow \prescript\sigma{}(wv))\equiv^\sigma v_n^{-1}\mu - \wt(v_n\Rightarrow \prescript\sigma{}(wv_n)). \end{align*} We calculate for $\lambda\in X_\ast(T)_{\Gamma_0}$: \begin{align*} v_{n}\lambda = \prescript\sigma{}(wv_{n-1})\lambda = \sigma\circ wv_{n-1}\left(\sigma^{-1}\lambda\right) = (\sigma\circ w)^n v(\sigma^{-n} \lambda). \end{align*} Thus \begin{align*} v_n^{-1}\lambda = \sigma^n v^{-1}(\sigma\circ w)^{-n}(\lambda). \end{align*} Let $N\geq 1$ such that the action of $(\sigma\circ w)^N$ on $X_\ast(T)$ becomes trivial. 
We see that \begin{align*} \avg_\sigma\left(v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv))\right) =& \frac 1N\sum_{n=1}^N \avg_\sigma\left(v_n^{-1}\mu - \wt(v_n\Rightarrow\prescript\sigma{}(wv_n))\right) \\\leq&\frac 1N\sum_{n=1}^N \avg_\sigma\left(v_n^{-1}\mu \right) \\=&\frac 1N\sum_{n=1}^N \avg_\sigma\left(v^{-1}(\sigma\circ w)^{-n}\mu \right) \\=&\avg_\sigma\left(v^{-1}\frac 1N\sum_{n=1}^N (\sigma\circ w)^{-n}\mu\right) \\\leq&\avg_\sigma \nu(x) = \nu(x). \end{align*} Thus we may choose $x'=x$, finishing the induction and the proof. \end{proof} \begin{lemma}\label{lem:genericKottwitzFundamental} Let $x=w\varepsilon^\mu\in \widetilde W$ be a fundamental element, and choose $v'\in \LP(x)$ with $\defect([x]_\sigma) = \ell((v')^{-1}\prescript\sigma{}(wv'))$ as in Lemma~\ref{lem:fundamentalDefect}. Then \begin{align*} \lambda_x \equiv^\sigma (v')^{-1}\mu - \wt(v'\Rightarrow\prescript\sigma{}(wv')). \end{align*} \end{lemma} \begin{proof} By Lemma~\ref{lem:genericKottwitzLowerBound}, we have \begin{align*} \lambda_x \geq^\sigma (v')^{-1}\mu - \wt(v'\Rightarrow\prescript\sigma{}(wv')). \end{align*} Now we calculate \begin{align*} &\langle \lambda_x- (v')^{-1}\mu + \wt(v'\Rightarrow\prescript\sigma{}(wv')),2\rho\rangle \\\underset{\text{L\ref{lem:genericGKPImprovements}}}=&\langle \lambda_x,2\rho\rangle - \ell(x) + d(v'\Rightarrow\prescript\sigma{}(wv')) \\\underset{\text{fund.}}=&\langle \lambda_G(x),2\rho\rangle - \langle \nu(x),2\rho\rangle+ d(v'\Rightarrow\prescript\sigma{}(wv')) \\\underset{\text{P\ref{prop:defect}}}=&-\defect([x]_\sigma) + d(v'\Rightarrow\prescript\sigma{}(wv')) \\\leq&-\defect([x]_\sigma) + \ell((v')^{-1}\prescript\sigma{}(wv')) \underset{\text{assump.}}=0. \end{align*} The inequality on the last line follows from \cite[Lemma~4.3]{Milicevic2020}. \end{proof} \begin{lemma}\label{lem:genericKottwitzUpperBound} There exists $v\in W$ such that \begin{align*} \lambda_x\leq^\sigma v^{-1}\mu - \wt(v\Rightarrow\prescript\sigma{}(wv)).
\end{align*} \end{lemma} \begin{proof} Induction on $\ell(x)$. Let us first consider the case that there exists an element $x' = w'\varepsilon^{\mu'}<x$ with $[b_{x'}] = [b_x]\in B(G)$. If this is the case, we may further assume by definition of the Bruhat order that $x' = xr_a$ for some affine root $a\in \Phi_{\mathrm{af}}^+$. Using the induction assumption, we find some $v'\in W$ such that \begin{align*} \lambda_{x'} = \lambda_x\leq^\sigma (v')^{-1}\mu'-\wt(v'\Rightarrow\prescript\sigma{}(w'v')). \end{align*} Write $a = (\alpha,k)$ such that $w' = ws_\alpha$ and $\mu' = s_\alpha(\mu)+k\alpha^\vee$. The condition $\ell(x')<\ell(x)$ means that $xa \in \Phi_{\mathrm{af}}^-$, which we can rewrite as \begin{align*} k-\langle \mu,\alpha\rangle<\Phi^+(w\alpha). \end{align*} We distinguish the following cases. \begin{itemize} \item Case $(v')^{-1}\alpha \in \Phi^-$. Define $v := s_\alpha v'$ and compute \begin{align*} \lambda_x\leq^\sigma~&(v')^{-1}\mu'-\wt(v'\Rightarrow\prescript\sigma{}(w'v')) \\=&v^{-1}(\mu - k\alpha^\vee) - \wt(s_\alpha v \Rightarrow \prescript\sigma{}(wv)) \\\leq&v^{-1}\mu - kv^{-1}\alpha^\vee - \wt(v\Rightarrow \prescript\sigma{}(wv)) + \wt(v\Rightarrow s_\alpha v) \\\underset{\text{L\ref{lem:weightEstimate}}}\leq&v^{-1}\mu - kv^{-1}\alpha^\vee - \wt(v\Rightarrow \prescript\sigma{}(wv)) + v^{-1}\alpha^\vee \Phi^+(-\alpha) \\=&v^{-1}\mu - \wt(v\Rightarrow \prescript\sigma{}(wv)) + (\Phi^+(-\alpha)-k)v^{-1}\alpha^\vee \\\leq&v^{-1}\mu - \wt(v\Rightarrow \prescript\sigma{}(wv)). \end{align*} The inequality on the last line follows since $\Phi^+(-\alpha)-k\leq 0$ (as $a\in \Phi_{\mathrm{af}}^+$) and $v^{-1}\alpha\in \Phi^+$ by assumption. \item Case $(v')^{-1}\alpha \in \Phi^+$. 
Define $v := v'$ and compute \begin{align*} \lambda_x\leq^\sigma~&(v')^{-1}\mu'-\wt(v'\Rightarrow\prescript\sigma{}(w'v')) \\=&v^{-1}(\mu - \langle\mu,\alpha\rangle \alpha^\vee+ k\alpha^\vee) - \wt(v \Rightarrow \prescript\sigma{}(ws_\alpha v)) \\\leq^\sigma &v^{-1}\mu + (-\langle \mu,\alpha\rangle +k)v^{-1}\alpha^\vee - \wt(v\Rightarrow \prescript\sigma{}(wv)) + \wt(\prescript\sigma{}(wv)\Rightarrow \prescript\sigma{}(ws_\alpha v)) \\\underset{\text{L\ref{lem:weightEstimate}}}\leq&v^{-1}\mu + (-\langle \mu,\alpha\rangle +k)v^{-1}\alpha^\vee - \wt(v\Rightarrow \prescript\sigma{}(wv)) + v^{-1}\alpha^\vee \Phi^+(-w\alpha) \\=&v^{-1}\mu + (-\langle \mu,\alpha\rangle +k+\Phi^+(-w\alpha))v^{-1}\alpha^\vee - \wt(v\Rightarrow \prescript\sigma{}(wv)) \\\leq&v^{-1}\mu-\wt(v\Rightarrow \prescript\sigma{}(wv)). \end{align*} The inequality on the last line follows since $-\langle \mu,\alpha\rangle +k+\Phi^+(-w\alpha)\leq 0$ (as $xa\in \Phi_{\mathrm{af}}^-$, i.e.\ $k-\langle\mu,\alpha\rangle<\Phi^+(w\alpha)$) and $v^{-1}\alpha\in \Phi^+$ by assumption. \end{itemize} In any case, we find an element $v\in W$ with the desired property, proving the claim for $x$. It remains to study the case where $[b_x]>[b_{x'}]$ for all $x'<x$. By Lemma~\ref{lem:nonEmptynessBruhatCondition}, $x$ must be fundamental. The result follows from Lemma~\ref{lem:genericKottwitzFundamental}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:genericGKP}] The theorem follows immediately from Lemmas \ref{lem:genericKottwitzLowerBound} and \ref{lem:genericKottwitzUpperBound}. \end{proof} \subsection{General groups}\label{sec:gnpArbitraryGroups} In this section, we drop the assumption that $G$ should be quasi-split. We keep the notation from Section~\ref{sec:notation}. As announced, we show how to compute generic $\sigma$-conjugacy classes and classify cordial elements in this case. The Frobenius action on the apartment $\mathcal A$ preserves the base alcove $\mathfrak a$, but no longer the chosen special vertex $\mathfrak x$.
We denote by $\mu_\sigma\in V$ the uniquely determined element such that $\sigma(\mathfrak x) = \mathfrak x + \mu_\sigma$. Moreover, there is a natural Frobenius action on $X_\ast(T)_{\Gamma_0}$. We denote the induced linear map by $\sigma_{\text{lin}}:V\rightarrow V$. Under the identification of $\mathcal A$ with $V$ by $\mathfrak x\mapsto 0$, the map $\sigma_{\text{lin}}$ is given by \begin{align*} \sigma_{\text{lin}}:V\rightarrow V,\quad v\mapsto \sigma(v) - \mu_\sigma. \end{align*} Since $\sigma_{\text{lin}}$ permutes the alcoves in $\mathcal A$, it permutes the Weyl chambers in $V$. We hence find a uniquely determined element $\sigma_1\in W$ with $\sigma_{\text{lin}}(C) = \sigma_1(C)$. Define $\sigma_2 := \sigma_1^{-1}\circ \sigma_{\text{lin}}$ such that $\sigma_2(C) = C$. Then the action of $\sigma$ on $V$ is given by the composed action \begin{align*} \sigma = t_{\mu_\sigma}\circ \sigma_1\circ \sigma_2, \end{align*} where $t_{\mu_\sigma}$ is the translation by $\mu_\sigma$. Note that $\sigma_2$ fixes both $0$ and $C$, hence it fixes $\mathfrak a$, the only alcove in $C$ adjacent to $0$. It follows that also $t_{\mu_\sigma}\circ \sigma_1$ fixes $\mathfrak a$. So the map $t_{\mu_\sigma}\circ \sigma_1:V\rightarrow V$ \enquote{looks like} the action of an element in $\Omega\subseteq\widetilde W$, except that a lift of $\mu_\sigma\in V$ to $X_\ast(T)_{\Gamma_0}$ might not exist; and if it exists, it might not be unique. For each $w_1, w_2\in W$, the difference $w_1\mu_\sigma-w_2\mu_\sigma$ lies in $\mathbb Z\Phi^\vee$, so we may consider $w_1\mu_\sigma-w_2 \mu_\sigma$ as a well-defined element of $X_\ast(T)_{\Gamma_0}$ even if neither $w_1\mu_\sigma$ nor $w_2 \mu_\sigma$ lies in $X_\ast(T)_{\Gamma_0}$.
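To see how the quasi-split case treated before fits into this description, we record the following special case.
\begin{example}
If $G$ is quasi-split, the Frobenius fixes the special vertex $\mathfrak x$, so that $\mu_\sigma = 0$ and $\sigma_1 = 1$. In this case, $\sigma_{\text{lin}} = \sigma_2$ preserves the dominant chamber $C$, and we recover the $\sigma$-action on $V$ used in the previous sections.
\end{example}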
We define maps \begin{align*} \avg_{\sigma_2} :& X_\ast(T)_{\Gamma_0}\otimes\mathbb Q\rightarrow X_\ast(T)_{\Gamma_0}\otimes\mathbb Q,\\ \avg_J:& X_\ast(T)_{\Gamma_0}\otimes\mathbb Q\rightarrow X_\ast(T)_{\Gamma_0}\otimes\mathbb Q\quad (J\subseteq \Delta) \end{align*} as in Section~\ref{sec:parabolic-averages}. If $J = \sigma_2(J)$, we define $\pi_J := \avg_J\circ\avg_{\sigma_2}$. For an element $\mu\in X_\ast(T)_{\Gamma_0}\otimes\mathbb Q$ or $\mu\in X_\ast(T)_{\Gamma}$, we define \begin{align*} \conv(\mu) := \max_{\substack{J\subseteq \Delta\\ J=\sigma_2(J)}}\avg_J\avg_{\sigma_2}(\mu)\in X_\ast(T)_{\Gamma_0}\otimes\mathbb Q. \end{align*} Then we can describe generic Newton points as follows: \begin{theorem}\label{thm:gnpGeneralGroups} Assume that $\mathrm{char}(F)$ does not divide the order of $\pi_1(G_{\mathrm{ad}})$, the Borovoi fundamental group of the adjoint quotient\footnote{It is conjectured in \cite[Section~2.2]{Goertz2015} that this assumption can be dropped; and in fact, it does not appear any more in \cite[Section~3.2]{He2021c}.}. Let $x = w\varepsilon^\mu\in \widetilde W$. The generic Newton point of $x$ is given by \begin{align*} \nu_x = \max_{v\in W} \conv\left(v^{-1}\mu - \wt(\sigma_1^{-1}v\Rightarrow\prescript{\sigma_2}{}(wv)) + \frac 1{\#W}\sum_{u\in W}(v^{-1}\mu_\sigma-u^{-1}\mu_\sigma)\right). \end{align*} In fact, the maximum is attained for some $v\in \LP(x)$. \end{theorem} We prove this theorem by reduction to the previously established results for quasi-split groups, following Görtz-He-Nie \cite[Section~2]{Goertz2015}. By \cite[Corollary~2.2.2]{Goertz2015}, it suffices to prove the theorem for adjoint groups, by comparing $B(G)_x$ with $B(G_{\mathrm{ad}})_x$. Let us now assume that $G$ is adjoint. Then $\gamma := \varepsilon^{\mu_\sigma}\circ \sigma_1$ is a well-defined element of $\widetilde W$, hence of $\Omega$. Following \cite[Proposition~2.5.1]{Goertz2015}, we can identify $B(G)_x$ with $B(\tilde G)_{x\gamma}\cdot \gamma^{-1}$.
Here, $\tilde G$ is a quasi-split inner form of $G$ with maximal torus $T$ and Frobenius given by $\sigma_2$. We see that \begin{align*} \nu_x = \nu^G\Bigl(\max_{[b]\in B(G)_x} [b]\Bigr) = \nu^G\Bigl(\max_{[b]\in B(\tilde G)_{x\gamma}} [b\gamma^{-1}]\Bigr). \end{align*} A quick calculation shows that for all $[b]\in B(G)$, we have \begin{align*} \nu^G([b]) = \nu^{\tilde G}([b\gamma]) - \frac 1{\# W}\sum_{u\in W} u \mu_\sigma. \end{align*} Thus \begin{align*} \nu_x = \nu^{\tilde G}([b_{x\gamma}]) - \frac 1{\# W}\sum_{u\in W}u\mu_\sigma. \end{align*} Calculating $\nu^{\tilde G}([b_{x\gamma}])$ using Corollary~\ref{cor:genericGKPMinDistance} proves Theorem~\ref{thm:gnpGeneralGroups}. Let us return to the general situation. Following Mili\'cevi\'c-Viehmann \cite[Remark~1.3]{Milicevic2020}, we define an element $x\in \widetilde W$ to be cordial if the corresponding element $\tilde x$ in the extended affine Weyl group of the quasi-split group $\tilde G$ under the above reduction is cordial. Then the results from \cite{Milicevic2020} on cordial elements guarantee that the affine Deligne-Lusztig varieties associated with $\tilde x$ satisfy the most desirable properties as discussed earlier. By the above reduction method of \cite{Goertz2015}, it follows that the affine Deligne-Lusztig varieties associated with $x$ also satisfy these properties. A straightforward calculation shows the following: \begin{proposition}\label{prop:cordialGeneralGroups} Assume that $\mathrm{char}(F)$ does not divide the order of $\pi_1(G_{\mathrm{ad}})$. Let $x = w\varepsilon^\mu\in \widetilde W$ and pick $v\in W$ of minimal length such that \begin{align*} v^{-1}\mu + v^{-1}\mu_\sigma\in V \end{align*} is dominant. Then $\sigma_1^{-1}v\in \LP(x)$.
The element $x$ is cordial if and only if the following two conditions are both satisfied: \begin{enumerate}[(1)] \item For any $\sigma_1^{-1}v'\in \LP(x)$, we have \begin{align*} d(\sigma_1^{-1}v'\Rightarrow\prescript{\sigma_2}{}(wv'))\geq d(\sigma_1^{-1}v\Rightarrow\prescript{\sigma_2}{}(wv)). \end{align*} \item We have \begin{align*} &d(\sigma_1^{-1}v\Rightarrow\prescript{\sigma_2}{}(wv)) = \ell\left(v^{-1}\sigma_1\prescript{\sigma_2}{}(wv)\right).\pushQED{\qed}\qedhere\popQED \end{align*} \end{enumerate} \end{proposition} \section{Introduction} We refer to Section~\ref{sec:notation} for a complete description of our setup and notation. For now, we summarize that $G$ denotes a reductive group over the local field $F$. There are two important decompositions of $G(\breve F)$, namely the Iwahori-Bruhat decomposition and the decomposition into $\sigma$-conjugacy classes. Two elements $x_1, x_2\in G(\breve F)$ are $\sigma$-conjugate if $x_1 = y^{-1} x_2 \sigma(y)$ for some $y\in G(\breve F)$. We denote the set of $\sigma$-conjugacy classes by $B(G)$. By Kottwitz \cite{Kottwitz1985, Kottwitz1997}, we know that the $\sigma$-conjugacy class $[g]\in B(G)$ of an element $g\in G(\breve F)$ is uniquely determined by two invariants, namely the Kottwitz point $\kappa(g)$ and the Newton point $\nu(g)$. The closure of a $\sigma$-conjugacy class in the topological space $G(\breve F)$ is a union of $\sigma$-conjugacy classes, defining a partial order on $B(G)$. This order has an accessible combinatorial description in terms of Kottwitz and Newton points, due to Rapoport-Richartz, Viehmann and He, \cite{Rapoport1996, Viehmann2013, He2016}. From Chai \cite{Chai2000}, we know a full description of the set of Kottwitz and Newton points, yielding an explicit description of $B(G)$. 
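To keep a concrete case in mind, we recall the classical example of the split group ${\mathrm{GL}}_n$.
\begin{example}
For $G = {\mathrm{GL}}_n$, the set $B(G)$ recovers the Dieudonn\'e-Manin classification of isocrystals: the Kottwitz point of a class $[b]$ is the valuation of the determinant, and the Newton point is the sequence of rational slopes $\nu_1\geq\cdots\geq\nu_n$ of the associated Newton polygon. The partial order on $B(G)$ then compares Newton polygons with common endpoints, $[b']\leq [b]$ meaning that the polygon of $[b']$ lies on or below that of $[b]$.
\end{example}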
The Iwahori-Bruhat decomposition expresses $G(\breve F)$ as \begin{align*} G(\breve F) = \bigsqcup_{x\in \widetilde W} IxI, \end{align*} where $I\subseteq G(\breve F)$ denotes the Iwahori subgroup and $\widetilde W$ is the extended affine Weyl group. This decomposition and its applications to the Bruhat-Tits building \cite[Section~4]{Bruhat1972} also have been studied intensively and are well-understood\footnote{Properties of the Iwahori-Bruhat decomposition are typically related to Coxeter-theoretic properties of the group $\widetilde W$, some of which are better understood than others. We would like to announce an upcoming paper providing new descriptions of the closures $\overline{IxI}$ and $\overline{IxIyI}$ for $x, y\in \widetilde W$, using the same methods as this paper.}. We are interested in the intersections $IxI\cap [b]$ for $x\in \widetilde W$ and $[b]\in B(G)$, called \emph{Newton strata}. It is an important open question which Newton strata are non-empty, i.e.\ to describe the set \begin{align*} B(G)_x := \{[b]\in B(G)\mid IxI\cap [b]\neq\emptyset\}. \end{align*} Related to these intersections are the \emph{affine Deligne-Lusztig varieties} (cf.\ \cite{Rapoport2002}), defined by \begin{align*} X_x(b)(\overline{\mathbb F_q}) =\{g\in G(\breve F)/I\mid g^{-1} b\sigma(g)\in IxI\}. \end{align*} The dimension and the question of equi-dimensionality of $X_x(b)$ have been intensively studied in the past, yet both problems remain largely open \cite{Goertz2006, Goertz2010, Goertz2010b, He2014, Milicevic2019}. Affine Deligne-Lusztig varieties for certain groups of small rank have been studied explicitly \cite{Reuman2002, Beazley2009, Yang2014}. Affine Deligne-Lusztig varieties have been introduced by Rapoport~\cite{Rapoport2002} to define Rapoport-Zink moduli spaces, which play an important role for the study of Shimura varieties. 
The construction of affine Deligne-Lusztig varieties resembles a classical construction of certain varieties due to Deligne-Lusztig \cite{Deligne1976}. They used the cohomology of these Deligne-Lusztig varieties to describe all complex representations of finite groups of Lie type. If one replaces the Iwahori subgroup by a hyperspecial subgroup, the resulting affine Deligne-Lusztig varieties are by now well understood, thanks to concentrated effort by many researchers, e.g.\ \cite{Kottwitz2006, Goertz2006, Viehmann2006, Hamacher2015}. For the affine Deligne-Lusztig varieties considered in this paper, there are a number of important partial results describing their geometry. It is proved by Görtz-He-Nie \cite{Goertz2015} and Viehmann \cite{Viehmann2021} that $B(G)_x$ always contains a uniquely determined smallest element, which is explicitly described. Moreover, $B(G)_x$ always contains a uniquely determined largest element. This follows from the specialization theorem of Rapoport-Richartz \cite[Theorem~3.6]{Rapoport1996}, as explained by Viehmann~\cite[Proof of Corollary~5.6]{Viehmann2014}. Rapoport-Richartz also prove a version of \emph{Mazur's inequality}, which states that for $[b]\in B(G)_x$ with $x=w\varepsilon^\mu$, we must have an identity of Kottwitz points $\kappa(b) = \kappa(x)$ and the inequality $\nu(b)\leq \mu^{{\mathrm{dom}}}\in X_\ast(T)_{\Gamma_0}\otimes\mathbb Q$. While the dimension $\dim X_x(b)$ is difficult to compute, the \emph{virtual dimension} $d_x(b)$ introduced by He \cite{He2014} is easy to evaluate and always an upper bound for $\dim X_x(b)$. Moreover, we have $\dim X_x(b) = d_x(b)$ in a number of cases, but not always; cf.\ \cite{He2014, Milicevic2020, He2021a}, affirming conjectures of Reuman and others \cite{Reuman2002, Goertz2006}. The virtual dimension is defined as \begin{align*} d_x(b) = \frac 12\left(\ell(x) + \ell(\eta_\sigma(x)) - \langle \nu(b),2\rho\rangle-\defect(b)\right).
\end{align*} Here, $\ell(x)$ denotes the length of $x$ in $\widetilde W$, as explained in Section~\ref{sec:notation}. By $\eta_\sigma(x)$, we denote a certain element in the finite Weyl group associated with $x$, as explained in Section~\ref{sec:root-functionals}. These two terms only depend on the element $x\in \widetilde W$. The \emph{defect} of a $\sigma$-conjugacy class is a non-negative integer that is bounded by the rank of the root system. We will focus on this invariant in Section~\ref{sec:defect}. While the results using the virtual dimension are promising, they have important shortcomings. First, one often has to impose certain regularity assumptions on $x$, which are not satisfied in the applications to Shimura varieties. Second, the properties of the virtual dimension are, in general, considerably simpler than those of the actual dimension $\dim X_x(b)$. One can find plenty of examples where the virtual dimension fails to capture the delicate interplay between the elements $x\in \widetilde W$ and $[b]\in B(G)$. The uniquely determined largest element of $B(G)_x$ is called the \emph{generic $\sigma$-conjugacy class} of $x$ and denoted $[b_x]$. It is the unique $\sigma$-conjugacy class such that $[b_x]\cap IxI$ is dense in $IxI$. The Kottwitz point of $b_x$ coincides with the Kottwitz point of $x$, which is easy to compute. The calculation of its Newton point, i.e.\ the \emph{generic Newton point} of $x$, is an important open problem. Our first main result fully solves it, generalizing earlier partial results \cite{Milicevic2021, He2021a, Sadhukhan2021, He2021c}. \begin{theorem}[Cf.\ Corollary~\ref{cor:genericGKPMinDistance} and Theorem~\ref{thm:gnpGeneralGroups}] Let $x = w\varepsilon^\mu\in \widetilde W$. We can give an explicit closed formula for the generic Newton point $\nu_x = \nu(b_x)$ in terms of $\mu$ and the weight function of the quantum Bruhat graph.
\end{theorem} This theorem may be seen as a refinement of the aforementioned Mazur inequality, as it gives a sharp upper bound for $\{\nu(b)\mid [b]\in B(G)_x\}$. We also give a concise formula for the \emph{$\lambda$-invariant} $\lambda_G([b_x])$ as introduced by Hamacher-Viehmann \cite{Hamacher2018}. This result is useful for proving our second main result. If the dimension coincides with the virtual dimension for the generic $\sigma$-conjugacy class, i.e.\ $\dim X_x(b_x) = d_x(b_x)$, the element $x$ is called \emph{cordial} following Mili\'cevi\'c-Viehmann \cite{Milicevic2020}. They prove in \cite[Corollary~3.17, Theorem~1.1]{Milicevic2020} that cordial elements satisfy the most desirable properties. In particular, the set $B(G)_x$ is explicitly described as a closed interval in $B(G)$, and for each $[b]\in B(G)_x$, the affine Deligne-Lusztig variety $X_x(b)$ is equi-dimensional of dimension $d_x(b)$. Using our result on generic Newton points, we are able to fully classify the cordial elements in $\widetilde W$. \begin{theorem}[Cf.\ Corollary~\ref{cor:cordial} and Proposition~\ref{prop:cordialGeneralGroups}] Let $x\in \widetilde W$. Then $x$ is cordial if and only if two conditions are satisfied, which we can summarize as a genericness condition on $x$ and an extremality condition on certain vertices in the quantum Bruhat graph. \end{theorem} The theory of cordial elements has been used by He \cite{He2021a} to compute the dimensions of many affine Deligne-Lusztig varieties, even for non-cordial elements $x\in \widetilde W$. In order to prove our main results, we introduce new methods and refine existing ones. We introduce the language of length functionals in Section~\ref{sec:root-functionals}, which is essential when studying elements $x\in \widetilde W$ without regularity assumptions (e.g.\ minuscule $x$). A few new insights on the quantum Bruhat graph complement the combinatorics needed to prove our results.
As a preparation for the more geometric aspects of our proofs, we review and refine a number of known results in Section~\ref{chap:sigma-conjugation}. Our main results hold true whenever $G$ is connected and reductive. Following Görtz-He-Nie \cite{Goertz2015}, we can prove this via a reduction to the case where $G$ is quasi-split. However, many important foundational results have been proved only under the somewhat stricter assumption that $G$ should be unramified. We show how to generalize these classical results to the quasi-split case, allowing us to prove our main results in this setting (Corollaries \ref{cor:genericGKPMinDistance} and \ref{cor:cordial}). This enables us to conclude them for arbitrary connected reductive groups (Theorem~\ref{thm:gnpGeneralGroups} and Proposition~\ref{prop:cordialGeneralGroups}). This paper covers parts of the author's PhD thesis. \subsection{Acknowledgements} First and foremost, I would like to thank my advisor Eva Viehmann for her constant support throughout my PhD time. I am deeply thankful for her invaluable help in both mathematical and administrative matters. I would like to thank Paul Hamacher and Xuhua He for inspiring discussions. The author was partially supported by the ERC Consolidator Grant 770936: \emph{NewtonStrat}, the German Academic Scholarship Foundation, the Marianne-Plehn-Program and the DFG Collaborative Research Centre 326: \emph{GAUS}. \section{$\sigma$-conjugacy classes}\label{chap:sigma-conjugation} In this section, we review various descriptions of the set $B(G)$ of $\sigma$-conjugacy classes in $G(L)$. This serves mostly as a preparation for the next section, which discusses the \emph{generic} $\sigma$-conjugacy class of an element $x\in \widetilde W$. Throughout this section, we assume that $G$ is quasi-split. We begin with the classical result of Kottwitz \cite{Kottwitz1985, Kottwitz1997} that describes the $\sigma$-conjugacy class of an element $g\in G(L)$ by two invariants. 
These are called the \emph{Kottwitz point} $\kappa(g)\in \pi_1(G)_{\Gamma}=(X_\ast(T)/\mathbb Z\Phi^\vee)_{\Gamma}$ and the \emph{(dominant) Newton point} $\nu(g) \in X_\ast(T)_{\Gamma_0}\otimes \mathbb Q$. If $g$ lies in the normalizer of the maximal torus, $g\in N_G(T)(L)$, then it corresponds to an element $w\varepsilon^\mu\in \widetilde W$. In this case, $\kappa(g)$ is the image of $\mu$ in $\pi_1(G)_{\Gamma}$. Viewing both $w$ and $\sigma$ as automorphisms of $X_\ast(T)_{\Gamma_0}$, we write $\sigma\circ w$ for their composition. Let $N\geq 1$ be such that $(\sigma\circ w)^N$ is the identity map. Then $\nu(g)\in X_\ast(T)_{\Gamma_0}\otimes\mathbb Q$ is the unique dominant element in the $W$-orbit of \begin{align*} \frac 1N\sum_{k=1}^N (\sigma\circ w)^k\mu. \end{align*} It is true, e.g.\ by \cite[Section~3.3]{He2014}, that each $\sigma$-conjugacy class $[b]\in B(G)$ contains an element of $N_G(T)(L)$, so that the above descriptions of $\kappa(g)$ and $\nu(g)$ actually cover all $\sigma$-conjugacy classes. In this section, we review a few important results related to these invariants. Our main concern is to bridge the gap between the unramified case, which is often studied in the relevant literature, and the quasi-split case, which we need for our final generalization. \subsection{Parabolic averages and convex hull}\label{sec:parabolic-averages} We start by formally defining some averaging functions and proving their basic properties. Neither our results nor our proofs in this section should be too surprising for the educated reader, especially if one keeps the example of ${\mathrm{GL}}_n$ and its Newton polygons in mind. Let $N\geq 1$ be an integer such that the action of $\sigma^N$ on $X_\ast(T)$ becomes trivial. Then we define the \emph{$\sigma$-average} of an element $\mu \in X_\ast(T)_{\Gamma_0}\otimes \mathbb Q$ by \begin{align*} \avg_\sigma(\mu) := \frac 1N\sum_{k=1}^N \sigma^k(\mu)\in (X_\ast(T)_{\Gamma_0}\otimes \mathbb Q)^{\langle \sigma\rangle}.
\end{align*} Since $\avg_{\sigma}$ vanishes on terms of the form $\mu -\sigma(\mu)$, it follows that we get a well-defined map $\avg_{\sigma} : X_\ast(T)_\Gamma \rightarrow (X_\ast(T)_{\Gamma_0}\otimes \mathbb Q)^{\langle \sigma\rangle}$. A similar notion of average is the following: For $J\subseteq \Delta$, denote by $W_J$ the Coxeter subgroup of $W$ generated by the reflections $\{s_\alpha\mid \alpha\in J\}$. For $\mu \in X_\ast(T)_{\Gamma_0}\otimes \mathbb Q$, we define \begin{align*} \avg_J(\mu) := \frac 1{\# W_J} \sum_{w\in W_J} w(\mu)\in X_\ast(T)_{\Gamma_0}\otimes \mathbb Q. \end{align*} Finally, if $J = \sigma(J)$, we define the function $\pi_J$ by \begin{align*} \pi_J := \avg_J \circ \avg_\sigma = \avg_\sigma \circ \avg_J : X_\ast(T)_{\Gamma_0}\otimes \mathbb Q\rightarrow (X_\ast(T)_{\Gamma_0}\otimes \mathbb Q)^{\langle \sigma\rangle}. \end{align*} This map was introduced by Chai \cite[Definition~3.2]{Chai2000}. Again, we get an induced map $\pi_J : X_\ast(T)_\Gamma\rightarrow (X_\ast(T)_{\Gamma_0}\otimes \mathbb Q)^{\langle \sigma\rangle}$. If $G$ is split, it can be identified with the \emph{slope map} as introduced by Schieder \cite[Section~2.1.3]{Schieder2015}. We start with a collection of easy facts on these averages. \begin{lemma}\label{lem:avgSimpleFacts} Let $\beta \in X_\ast(T)_{\Gamma}$ and $\mu \in X_\ast(T)_{\Gamma_0}\otimes {\mathbb Q}$. Let $J\subseteq \Delta$ be any subset. \begin{enumerate}[(a)] \item For any preimage $\beta'\in X_\ast(T)_{\Gamma_0}$ of $\beta$, we have \begin{align*} \langle \beta',2\rho\rangle = \langle \avg_\sigma(\beta),2\rho\rangle. \end{align*} In particular, it makes sense to write $\langle \beta,2\rho\rangle$. \item If $\langle \mu,\alpha\rangle=0$ for all $\alpha\in J$, then $\avg_J(\mu) = \mu$. \item For all $\alpha\in J$, we have $\langle \avg_J(\mu),\alpha\rangle=0$. \item If $\mu\geq 0$, then $\avg_J(\mu)\geq 0$. 
\item If $\langle \mu,\alpha\rangle \leq 0$ for all $\alpha\in J$, then $\mu\leq w\mu$ for all $w\in W_J$. In particular, $\mu\leq \avg_J(\mu)$. \end{enumerate} \end{lemma} \begin{proof} (a) follows since $\sigma(2\rho)= 2\rho$ and $\avg_\sigma(\beta) = \avg_\sigma(\beta')$. For (b) and (c), note that the following are equivalent: \begin{itemize} \item $\langle \mu,\alpha\rangle=0$ for all $\alpha\in J$, \item $w(\mu)=\mu$ for all $w\in W_J$. \end{itemize} Then both statements follow easily. For (d), it suffices to consider the case where $\mu$ is a simple coroot $\mu = \alpha^\vee$. If $\alpha\in J$, then $\avg_J(\mu)=0$. Otherwise $w(\alpha)\in \Phi^+$ for all $w\in W_J$, so that $\avg_J(\mu)>0$. We prove (e) via induction on $\ell(w)$, the inductive start being clear. If now $\ell(w)\geq 1$ and $w\alpha\in \Phi^-$ for some $\alpha\in J$, then \begin{align*} w\mu = (ws_\alpha)(\mu-\langle\mu,\alpha\rangle \alpha^\vee) = (ws_\alpha)\mu + \langle\mu,\alpha\rangle w\alpha^\vee\geq (ws_\alpha)\mu\underset{\text{ind.}}\geq \mu. \end{align*} This finishes the induction and the proof. \end{proof} \begin{definition} Let $\mu \in X_\ast(T)_{\Gamma_0}\otimes {\mathbb Q}$ and $J\subseteq \Delta$ be any subset. \begin{enumerate}[(a)] \item We say that $J$ is \emph{$\mu$-improving} if we can write $J = \{\alpha_1,\dotsc,\alpha_k\}$ such that \begin{align*} \langle \avg_{\{\alpha_1,\dotsc,\alpha_{i-1}\}}(\mu),\alpha_i\rangle \leq 0 \end{align*} for $i=1,\dotsc,k$. \item We say that $J$ is \emph{maximally $\mu$-improving} if it is $\mu$-improving, and any $\mu$-improving superset $J'\supseteq J$ satisfies $\avg_J(\mu) = \avg_{J'}(\mu)$. \end{enumerate} \end{definition} For example, any $\mu$-improving subset of maximal cardinality will be maximally $\mu$-improving. Since the empty set is $\mu$-improving, it follows that maximally $\mu$-improving subsets always exist.
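The following small worked example, formulated for the split group ${\mathrm{GL}}_3$ with the usual identification of coweights with integer vectors, illustrates the definition.
\begin{example}
Let $G = {\mathrm{GL}}_3$, so that $X_\ast(T)_{\Gamma_0} = \mathbb Z^3$ with simple roots $\alpha_1 = e_1-e_2$ and $\alpha_2 = e_2-e_3$, and take $\mu = (0,1,0)$. Since $\langle \mu,\alpha_1\rangle = -1\leq 0$, the set $J = \{\alpha_1\}$ is $\mu$-improving, with $\avg_J(\mu) = (\tfrac 12,\tfrac 12,0)\geq \mu$. The only proper superset $\{\alpha_1,\alpha_2\}$ is not $\mu$-improving, since $\langle \mu,\alpha_2\rangle = 1>0$ and $\langle \avg_J(\mu),\alpha_2\rangle = \tfrac 12>0$, so both possible orderings fail. Hence $J$ is maximally $\mu$-improving, and indeed $\avg_J(\mu)$ is dominant.
\end{example}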
We make the following immediate observations: \begin{lemma}\label{lem:convPreparation} Let $\mu \in X_\ast(T)_{\Gamma_0}\otimes {\mathbb Q}$ and $J\subseteq \Delta$. \begin{enumerate}[(a)] \item If $J$ is $\mu$-improving, then $\mu\leq \avg_J(\mu)$. \item If $J$ is maximally $\mu$-improving, then $\avg_J(\mu)$ is dominant. \item If $c\in X_\ast(T)_{\Gamma_0}\otimes \mathbb Q$ is dominant and $\mu\leq c$, then \begin{align*}&\avg_J(\mu)\leq \avg_J(c)\leq c.\pushQED{\qed}\qedhere\popQED\end{align*} \end{enumerate} \end{lemma} It follows that there is a uniquely determined maximum \begin{align*} \conv'(\mu) := \max_{J\subseteq \Delta} \avg_J(\mu), \end{align*} and that $\conv'(\mu) = \avg_J(\mu)$ for every maximally $\mu$-improving $J$. We define \begin{align*} \conv(\mu) := \conv' (\avg_\sigma(\mu)),\qquad \mu\in X_\ast(T)_{\Gamma_0}\otimes\mathbb Q\text{ or }\mu\in X_\ast(T)_{\Gamma}. \end{align*} \begin{example} For the split group $G = {\mathrm{GL}}_n$, the operations $\conv$ and $\conv'$ agree. Drawing elements of $X_\ast(T)\otimes {\mathbb Q}$ as polygons, the function $\conv$ corresponds to taking the upper convex hull (hence its name). \end{example} \begin{lemma}\label{lem:convFacts} Let $\mu \in X_\ast(T)_{\Gamma_0}\otimes {\mathbb Q}$. \begin{enumerate}[(a)] \item The value $\conv'(\mu)$ is the uniquely determined element $c\in X_\ast(T)_{\Gamma_0}\otimes\mathbb Q$ satisfying the following three conditions:\begin{itemize} \item $\mu\leq c$,\item $c$ is dominant and\item $c = \avg_J(\mu)$ for some $J\subseteq \Delta$. \end{itemize} \item If $\mu'\in X_\ast(T)_{\Gamma_0}\otimes \mathbb Q$ satisfies $\mu\leq \mu'$, then $\conv'(\mu)\leq \conv'(\mu')$. \item Write \begin{align*} \conv'(\mu) - \mu =& \sum_{\alpha\in \Delta} c_\alpha \alpha^\vee, \\J_1:=&\{\alpha\in \Delta\mid c_\alpha\neq 0\}, \\J_2 :=&\{\alpha\in \Delta\mid\langle \conv'(\mu),\alpha\rangle=0\}.
\end{align*} For any subset $J\subseteq \Delta$, we have \begin{align*} \conv'(\mu) = \avg_J(\mu)\iff J_1\subseteq J\subseteq J_2. \end{align*} \item There exists $J\subseteq \Delta$ with $\sigma(J) = J$ and $\conv(\mu) = \pi_J(\mu)$. In particular, \begin{align*} \conv(\mu) = \max_{\substack{J\subseteq \Delta\\\sigma(J) = J}} \pi_J(\mu). \end{align*} \item Let $J\subseteq \Delta$ such that \begin{align*} \forall \alpha \in \Phi^+\setminus \Phi_J^+:~\langle \mu,\alpha\rangle\geq 0. \end{align*} Then there exists $J'\subseteq J$ with $\conv'(\mu) = \avg_{J'}(\mu)$. In other words, the set $J_1$ from (c) is a subset of $J$. \end{enumerate} \end{lemma} \begin{proof} (a) and (b) are immediate. \begin{enumerate}[(a)] \addtocounter{enumi}{2} \item Let us first consider a subset $J\subseteq \Delta$ with $\conv'(\mu) = \avg_J(\mu)$. Then $\conv'(\mu) - \mu \in \mathbb Q\Phi_J^\vee$ by definition of $\avg_J(\mu)$. We see that $J_1\subseteq J$ must hold. Similarly, $\langle \conv'(\mu),\alpha\rangle=0$ for all $\alpha\in J$ by Lemma~\ref{lem:avgSimpleFacts}. Thus we must have $J_1\subseteq J\subseteq J_2$. We show that $\avg_{J_1}(\mu)$ is dominant. Let $\alpha\in \Delta$. If $\alpha\in J_1$, then $\langle \avg_{J_1}(\mu),\alpha\rangle=0$ by Lemma~\ref{lem:avgSimpleFacts}. So let us assume that $\alpha\in \Delta\setminus J_1$. Because $\avg_{J_1}(\mu)\leq \conv'(\mu)$ and $\avg_{J_1}(\mu)\equiv \mu\equiv \conv'(\mu)\pmod{\mathbb Q\Phi_{J_1}^\vee}$, we can write \begin{align*} \conv'(\mu) - \avg_{J_1}(\mu) = \sum_{\beta\in J_1} c_\beta' \beta^\vee,\quad c_\beta'\in \mathbb Q_{\geq 0}. \end{align*} Now we get \begin{align*} \langle \avg_{J_1}(\mu),\alpha\rangle=\underbrace{\langle \conv'(\mu),\alpha\rangle}_{\geq 0} +\sum_{\beta\in J_1} \underbrace{c_\beta'\langle -\beta^\vee,\alpha\rangle}_{\geq 0}\geq 0. \end{align*} This shows that $\avg_{J_1}(\mu)$ is dominant. 
If $J$ is chosen such that $\conv'(\mu) = \avg_J(\mu)$, then \begin{align*} \conv'(\mu)\geq \avg_{J_1}(\mu)\underset{\text{L\ref{lem:avgSimpleFacts}}}\geq \avg_J\avg_{J_1}(\mu)\underset{J_1\subseteq J}=\avg_J(\mu) = \conv'(\mu). \end{align*} Thus $\avg_{J_1}(\mu) = \conv'(\mu)$. So for any intermediate set $J_1\subseteq J\subseteq J_2$, we obtain \begin{align*} \avg_J(\mu) = \avg_J(\avg_{J_1}(\mu)) = \avg_J(\conv'(\mu))\underset{J\subseteq J_2}=\conv'(\mu). \end{align*} \item Replacing $\mu$ by $\avg_\sigma(\mu)$, we may certainly assume $\mu \in (X_\ast(T)_{\Gamma_0}\otimes \mathbb Q)^{\sigma}$. Since $\mu = \sigma(\mu)$, we conclude $\conv'(\mu) = \sigma(\conv'(\mu))$. Then we can choose $J$ to be either of the sets $J_1$ or $J_2$ from (c). Now the \enquote{in particular} part is easy to see. \item Let $J'\subseteq J$ be a $\mu$-improving subset such that there is no $\mu$-improving subset $J'\subsetneq J''\subseteq J$. By Lemma~\ref{lem:convPreparation}, $\mu\leq \avg_{J'}(\mu)$. It suffices to show that $\avg_{J'}(\mu)$ is dominant. Seeing $\mu$ as a coweight for the root system $\Phi_J$, the set $J'$ is maximally $\mu$-improving from this perspective, so $\langle \avg_{J'}(\mu),\alpha\rangle\geq 0$ for all $\alpha\in \Phi_J^+$. If $\alpha\in \Phi^+\setminus \Phi_J^+$, then $w\alpha\in \Phi^+\setminus \Phi_J^+$ for all $w\in W_J$, so that \begin{align*} \langle \avg_{J'}(\mu),\alpha\rangle = \frac 1{\# W_{J'}} \sum_{w\in W_{J'}}\underbrace{\langle \mu,w\alpha\rangle}_{\geq 0}\geq 0. \end{align*} Here, we use the assumption made on $\mu$ and $J$. As $\avg_{J'}(\mu)$ is dominant, we get the desired result by (a). \qedhere \end{enumerate} \end{proof} As an immediate application, let us describe Newton points of elements in $\widetilde W$ with this language: \begin{definition} For $w\in W$, we write $\supp(w)\subseteq \Delta$ for the set of all simple roots whose corresponding simple reflections occur in some/every reduced expression for $w$.
Define $\supp_\sigma(w) := \bigcup_{n\in \mathbb Z}\sigma^n(\supp(w))$. \end{definition} \begin{lemma}\label{lem:newtonPointsAsAverages} Let $x = w\varepsilon^\mu\in \widetilde W$ and $N>0$ such that $(\sigma\circ w)^N=\mathrm{id}$. Pick $v\in W$ such that \begin{align*} v^{-1}\frac 1N\sum_{k=1}^N (\sigma\circ w)^k(\mu)\in X_\ast(T)_{\Gamma_0}\otimes \mathbb Q \end{align*} becomes dominant. Let $J = \supp_{\sigma}(v^{-1}\prescript\sigma{}(wv))$. Then \begin{align*} \nu(x) = \pi_J(v^{-1}\mu). \end{align*} \end{lemma} \begin{proof} Straightforward calculation. For an alternative proof, cf.\ \cite[Proposition~4.1]{Chai2000}. \end{proof} \subsection{$\lambda$-invariant and defect}\label{sec:defect} For this section, we fix a $\sigma$-conjugacy class $[b]\in B(G)$. Following Hamacher-Viehmann \cite[Lemma/Definition~2.1]{Hamacher2018}, we define its \emph{$\lambda$-invariant} by \begin{align*} \lambda_G(b) := \max\{\tilde\lambda\in X_\ast(T)_{\Gamma}\mid \avg_\sigma(\tilde\lambda)\leq\nu(b)\text{ and }\kappa(b) = \tilde\lambda +\mathbb Z\Phi^\vee\text{ in }\pi_1(G)_{\Gamma}\}. \end{align*} While the article of Hamacher-Viehmann assumes the group to be unramified, the construction of $\lambda_G(b)$ works without changes for quasi-split $G$. Let us write \begin{align*} \nu(b) - \avg_\sigma(\lambda_G(b)) =& \sum_{\alpha\in \Delta}c_\alpha\alpha^\vee,\\ J_1:=&\{\alpha\in \Delta\mid c_\alpha\neq 0\},\\ J_2:=&\{\alpha\in \Delta\mid\langle \nu(b),\alpha\rangle=0\}. \end{align*} We have the following simple observations: \begin{lemma}\label{lem:lambdaSimpleFacts} \begin{enumerate}[(a)] \item Pick $\mu \in X_\ast(T)_{\Gamma}$ and $J\subseteq \Delta$ with $J = \sigma(J)$ such that $\nu(b) = \pi_J(\mu)$ and $\kappa(b) = \mu + \mathbb Z\Phi^\vee \in \pi_1(G)_{\Gamma}$. Then \begin{align*}\nu(b) = \pi_J(\lambda_G(b)) = \conv(\lambda_G(b)).\end{align*} \item We have $J_1\subseteq J_2$.
For $J\subseteq\Delta$ with $\sigma(J) = J$, \begin{align*} \nu(b) = \pi_{J}(\lambda_G(b))\iff J_1\subseteq J\subseteq J_2. \end{align*} \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(a)] \item Choose a lift $\tilde \mu\in X_\ast(T)_{\Gamma_0}$. Then \begin{align*} \pi_J(\mu) = \pi_J(\tilde \mu) = \avg_\sigma\Bigl(\frac 1{\# W_J}\sum_{w\in W_J} w\tilde \mu\Bigr). \end{align*} We can choose an element $w\in W_J$ such that $w\tilde\mu$ becomes anti-dominant with respect to the roots in $J$, i.e.\ $\langle w\tilde \mu,\alpha\rangle\leq 0$ for all $\alpha\in J$. Then $\pi_J(\tilde \mu)=\pi_J(w\tilde \mu)\geq w\tilde \mu$ by Lemma~\ref{lem:avgSimpleFacts}. In particular, the image of $w\tilde \mu$ in $X_\ast(T)_{\Gamma}$ is $\leq\lambda_G(b)$ by construction of $\lambda_G(b)$. Thus \begin{align*} \nu(b) = \pi_{J}(w\tilde \mu)\leq \pi_{J}(\lambda_G(b))\leq \conv(\lambda_G(b)). \end{align*} Since $\avg_\sigma(\lambda_G(b))\leq \nu(b)$ and $\nu(b)$ is dominant, we use Lemma~\ref{lem:convPreparation} to see that $\conv(\lambda_G(b))\leq \nu(b)$. Hence $\nu(b) = \conv(\lambda_G(b)) = \pi_{J}(\lambda_G(b))$. \item By \cite[Section~3.3]{He2014}, $[b] = [x]$ for some $x\in \widetilde W$. Applying Lemma~\ref{lem:newtonPointsAsAverages} to $x$, we see that $\mu$ and $J$ exist as in (a). In particular, $\nu(b) = \conv(\lambda_G(b))$. Now all claims follow from Lemma~\ref{lem:convFacts}.\qedhere \end{enumerate} \end{proof} Related to the notion of the $\lambda$-invariant is the notion of \emph{defect} of an element $[b]\in B(G)$. Following \cite[Proposition~6.2]{Kottwitz1985}, we fix an element $x = w\varepsilon^\mu$ of length zero in the extended affine Weyl group $\widetilde W_{J_2}$ of the Levi subgroup of $G$ associated with $J_2$ such that $[b] = [x]\in B(G)$. We denote by $J_b$ the $\sigma$-twisted centralizer of $b\in G(L)$, i.e.\ the reductive group over $F$ with $F$-valued points \begin{align*} J_b(F) = \{g\in G(L)\mid g^{-1} b\sigma(g) = b\}.
\end{align*} Then the defect of $[b]$ has the following equivalent descriptions: \begin{proposition}\label{prop:defect} The following non-negative integers all agree. The common value is called the \emph{defect} of $[b]$, denoted $\defect(b)$. \begin{enumerate}[(i)] \item $\dim (X_\ast(T)_{\Gamma_0}\otimes\mathbb Q)^{\sigma} -\dim (X_\ast(T)_{\Gamma_0}\otimes\mathbb Q)^{\sigma w}$, \item $\rk_F(G) - \rk_F(J_b)$, \item $\langle \nu(b),2\rho\rangle-\langle\lambda_G(b),2\rho\rangle$, \item $\#(J_1/\sigma)$, the number of $\sigma$-orbits in $J_1$, \item $\min_{v\in W}\ell(v^{-1}\prescript\sigma{}(w v))$, \item $\min_{v\in W_{J_1}}\ell(v^{-1}\prescript\sigma{} (wv))$. \end{enumerate} \end{proposition} The notion of defect was originally defined in \cite[Equation~1.9.1]{Kottwitz2006} for split groups, using the expression in (i). Kottwitz shows the equality with (ii) as \cite[Theorem~1.10.1]{Kottwitz2006} and the equality with (iii) as \cite[Theorem~1.9.2]{Kottwitz2006}. If $G$ is not split, the expression of (ii) is commonly used as the definition. In the unramified case, the equality of (ii) with (iii) is then known as \cite[Proposition~3.8]{Hamacher2015}, and Hamacher's proof shows the equality with (i) and (iv). For the remainder of this section, we sketch how to prove Proposition~\ref{prop:defect} for quasi-split groups $G$. The main idea is a reduction to the superbasic case. \begin{lemma} Assume that $[b]$ is superbasic. Denote by $n = \#(\Delta/\sigma)$ the number of $\sigma$-orbits in $\Delta$. \begin{enumerate}[(a)] \item We have \begin{align*} (X_\ast(T)_{\Gamma_0}\otimes\mathbb Q)^{\sigma w} = \{\mu\in X_\ast(T)_{\Gamma_0}\otimes\mathbb Q\mid \sigma(\mu)=\mu\text{ and }\langle \mu,\Phi\rangle = \{0\}\}. \end{align*} In particular, \begin{align*} n = \dim (X_\ast(T)_{\Gamma_0}\otimes\mathbb Q)^{\sigma}-\dim (X_\ast(T)_{\Gamma_0}\otimes\mathbb Q)^{\sigma w}. \end{align*} \item We have \begin{align*}n = \min_{v\in W}\ell(v^{-1}\prescript\sigma{}(wv)).
\end{align*} More precisely, we find $v\in W$ and a subset $\Delta'\subseteq \Delta$ such that $\# \Delta' = n$ and $v^{-1}\prescript\sigma{}(wv)$ is a Coxeter element for $\Delta'$. \item We have \begin{align*} n= \langle \nu(b) - \avg_\sigma(\lambda_G(b)),2\rho\rangle. \end{align*} \end{enumerate} \end{lemma} \begin{proof} Superbasic elements only exist if each irreducible component of $\Phi$ is a root system of type $A$. All claims may certainly be checked individually on each $\sigma$-connected component, so to lighten our notation, we will assume that $\Delta$ is $\sigma$-connected. \begin{enumerate}[(a)] \item If $\mu\in X_\ast(T)_{\Gamma_0}\otimes\mathbb Q$ is $\sigma$-stable and orthogonal to all roots, it is certainly fixed by $\sigma w$. Conversely, let $\mu\in X_\ast(T)_{\Gamma_0}\otimes\mathbb Q$ satisfy $\sigma w(\mu)=\mu$. Then we find $v\in W$ such that $v\mu\in X_\ast(T)_{\Gamma_0}\otimes\mathbb Q$ is dominant. Observe that \begin{align*} \left(v\prescript\sigma{}( w v^{-1})\right) \sigma v\mu = v\sigma w v^{-1} v\mu = v\mu. \end{align*} Since $\sigma v\mu$ is dominant and in the $W$-orbit of $v\mu$, we get $\sigma v\mu=v\mu$. In particular, the dominant coweight $v\mu$ gets stabilized by $v\prescript\sigma{}(wv^{-1})\in W$. Let $J:=\Stab(v\mu)$ denote the stabilizer of the dominant coweight $v\mu$. Then $J=\sigma(J)$, so $J$ defines a $\sigma$-stable Levi subgroup of $G$. Its extended affine Weyl group $\widetilde W_J$ contains $v^{-1}\prescript\sigma{}(xv)$, so $b$ comes from a $\sigma$-conjugacy class in this Levi subgroup. This is only possible if $J=\Delta$, i.e.\ $\langle v\mu,\Phi\rangle=\{0\}$. In particular, $v\mu$ is fixed by all of $W$, so $\mu = v^{-1}(v\mu)=v\mu$, proving the claim. \item Decompose the Dynkin diagram of $\Delta$ into connected components, written as $\Delta = C_1\sqcup \dotsc\sqcup C_k$, such that $\sigma(C_i) = C_{i+1}$ for $i=1,\dotsc,k-1$ and $\sigma(C_k) = C_1$. Let $W_C := W_{C_1}$ denote the Weyl group of $C:=C_1$.
Note that each $C_i$ is of type $A_n$ with $n$ as given. Write $C_{\mathrm{af}}$ for the affine Dynkin diagram associated with $C = C_1$. Then the action of $\sigma^k$ on $C_{\mathrm{af}}$ must fix the special node, and be either the identity or the unique involution on the complement, i.e.\ $C$. The element $x\prescript\sigma{} x\cdots\prescript{\sigma^{k-1}}{}x$, being an element of length zero in the affine Weyl group of $C$, acts on $C_{\mathrm{af}}$ by some cyclic permutation. The composition of these two maps, $(\sigma\circ x)^k$, must act transitively on $C_{\mathrm{af}}$. One quickly checks that this is only possible if $\sigma^k$ is the identity map on $C_{\mathrm{af}}$. Now write $w = w_1 \prescript\sigma{}(w_2)\cdots\prescript{\sigma^{k-1}}{}(w_k)$ with $w_1,\dotsc,w_k\in W_C$. Let $v_1\in W_C$ and define \begin{align*} v := v_1\prescript\sigma{}(v_2)\cdots\prescript{\sigma^{k-1}}{}(v_k)\in W,\qquad v_{i+1} = w_i v_i\text{ for }i=1,\dotsc,k-1. \end{align*} Then \begin{align*} v^{-1} \prescript\sigma{}(wv) =& v_1^{-1}\prescript\sigma{}(v_2^{-1})\cdots\prescript{\sigma^{k-1}}{}(v_k^{-1})\cdot (w_kv_k)\prescript\sigma{}(w_1v_1)\cdots\prescript{\sigma^{k-1}}{}(w_{k-1}v_{k-1}) \\=&v_1^{-1} w_k v_k = v_1^{-1} w_k\cdots w_1 v_1\in W_C. \end{align*} We know that $W_C$ is a Coxeter group of type $A_n$, so a symmetric group. It is a classical result that each element in a symmetric group is conjugate to a Coxeter element for a parabolic subgroup. In other words, we find $v_1$ and $\Delta'\subseteq C$ such that $v_1^{-1}w_k\cdots w_1 v_1$ is a Coxeter element of $\Delta'$. In particular, we get \begin{align*} n=\#C\geq\#\Delta' = \ell(v^{-1}\prescript\sigma{}(wv))\geq \#\supp(v^{-1}\prescript\sigma{}(wv))\underset{\text{superbasic}}\geq n. \end{align*} Thus $\#\Delta'=n$. \item It remains to evaluate \begin{align*} \langle \nu(b) - \avg_\sigma(\lambda_G(b)),2\rho\rangle = \sum_{\alpha\in \Delta}2c_\alpha.
\end{align*} This calculation is carried out by Hamacher \cite[Section~3]{Hamacher2015}, and we obtain the value $n$ as claimed. The equality only depends on the affine root system together with the $\sigma$-action, so the fact that Hamacher only considers unramified groups is irrelevant. While his argument using characters of finite group representations is very elegant, one can also obtain the same result in a more straightforward manner with explicit calculations of Newton polygons (as we are in the $A_n$ case).\qedhere\end{enumerate} \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:defect}] The equality of (i) with (ii) is a standard Bruhat-Tits theoretic argument, cf.\ \cite[Section~4.3]{Kottwitz2006} or \cite[Proof of Prop.~3.8]{Hamacher2015}. Observe that the values of (i), (iii), (iv) and (vi) do not change if we pass to the Levi subgroup of $G$ defined by $J_1$. If we do so, $[b]$ becomes a superbasic $\sigma$-conjugacy class. Then the equalities of (i), (iii), (iv) and (vi) follow immediately from the preceding lemma. It remains to show that, in the general case, (v) agrees with (vi). Suppose this was not the case. Then we would find some $v\in W$ such that \begin{align*} \ell(v^{-1} \prescript\sigma{}(wv))<\#(J_1/\sigma). \end{align*} Consider the element $y = v^{-1}\prescript\sigma{}(xv)\in \widetilde W$ and the subset $J\subseteq \Delta$ given by $J:=\supp_\sigma(v^{-1} \prescript\sigma{}(wv))$. Then $J$ defines a $\sigma$-stable Levi subgroup $M\subseteq G$ such that $[b]$ has a preimage in $B(M)$. This is only possible if $J_1\subseteq J$, so $J = J_1$. But we must have \begin{align*} \ell(v^{-1} \prescript\sigma{}(wv))\geq \#\supp(v^{-1}\prescript\sigma{}(wv)) \geq \#(J/\sigma) = \#(J_1/\sigma), \end{align*} contradiction! 
\end{proof} \subsection{Fundamental elements}\label{sec:fundamental-elements} Recall the equivalent characterizations of fundamental elements: \begin{proposition}\label{prop:fundamental}For $x = w\varepsilon^\mu\in\widetilde W$, the following are equivalent: \begin{enumerate}[(i)] \item $\ell(x) = \langle\nu(x),2\rho\rangle$. \item For all $n\geq 1$, we have \begin{align*} \ell(x\cdot \prescript\sigma{} x\cdots \prescript{\sigma^{n-1}}{}x) = n\ell(x). \end{align*} \item There exist $v\in \LP(x)$ and a $\sigma$-stable $J\subseteq \Delta$ such that $v^{-1}\prescript\sigma{}(wv)\in W_J$ and for all $\alpha\in \Phi_J$, we have $\ell(x,v\alpha)=0$. \item For every orbit $O\subseteq \Phi$ with respect to the action of $(\sigma\circ w)$ on $\Phi$, we have \begin{align*} \left(\forall \alpha \in O:~\ell(x,\alpha)\geq 0\right)\text{ or }\left(\forall \alpha\in O:~\ell(x,\alpha)\leq 0\right). \end{align*} \end{enumerate} If $G$ is defined over $\mathcal O_F$, this is moreover equivalent to \begin{enumerate}[(i)] \addtocounter{enumi}{4} \item Every element $y\in IxI$ is of the form $y = i^{-1} x\prescript\sigma{} i$ for some $i\in I$. \end{enumerate} If these equivalent conditions are satisfied, we call $x$ \emph{fundamental}.\pushQED{\qed}\qedhere\popQED \end{proposition} Let us first discuss the unramified case. In this case, the equivalence of (i) and (ii) is due to He \cite[Lemma~8.1]{He2010}. Elements satisfying these conditions are called \emph{good} in \cite{He2010} and \emph{$\sigma$-straight} in more recent literature. Condition (iii) is a reformulation of the notion of \emph{fundamental $(J,w,\delta)$-alcoves} from Goertz-He-Nie \cite[Section~3.3]{Goertz2015}. Condition (v) is the notion of fundamental elements from \cite{Goertz2010}. The equivalence of (i), (iii) and (v) is a result of Nie \cite{Nie2015}. Condition (iv) is new, but we will not need it in the sequel. 
If $G$ is quasi-split but not unramified, the cited proofs fail because the map $X_\ast(T)_{\Gamma_0}\rightarrow X_\ast(T)_{\Gamma_0}\otimes\mathbb Q$ might no longer be injective. It is conceivable that the proofs might be generalized with a bit of work. Instead, we sketch how to prove the equivalences of (i)--(iv) using our language of length functionals, where issues with the torsion part of $X_\ast(T)_{\Gamma_0}$ are non-existent. \begin{proof}[Proof of Proposition~\ref{prop:fundamental}] Lemma~\ref{lem:lengthAdditivity} implies the equivalence of (ii) and (iv). Moreover, the implication (iii) $\implies$ (iv) is immediate. Let $N>0$ such that the action of $(\sigma\circ w)^N$ on $X_\ast(T)_{\Gamma_0}$ becomes trivial. For any $v\in W$ and $\alpha\in \Phi$, we calculate \begin{align*} &\left\langle \frac 1Nv^{-1}\sum_{k=1}^N(\sigma\circ w)^k\mu,\alpha\right\rangle \\=&\frac 1N\sum_{k=1}^N \langle \mu,(\sigma\circ w)^k v\alpha\rangle \\=&\frac 1N\sum_{k=1}^N \langle \mu,(\sigma\circ w)^k v\alpha\rangle + \Phi^+((\sigma\circ w)^k v\alpha) - \Phi^+((\sigma\circ w)^{k+1} v\alpha) \\=&\frac 1N\sum_{k=1}^N \ell(x,(\sigma\circ w)^kv\alpha). \end{align*} Pick now $v\in W$ such that $\frac 1Nv^{-1}\sum_{k=1}^N (\sigma\circ w)^k\mu = \nu(x)$. Then \begin{align*} \langle \nu(x),2\rho\rangle = \sum_{\alpha\in \Phi^+}\frac 1N\sum_{k=1}^N \ell(x,(\sigma\circ w)^kv\alpha)\geq \ell(x). \end{align*} Equality holds if and only if $(\sigma\circ w)^kv\in \LP(x)$ for all $k\in \mathbb Z$. If we define $J := \supp_\sigma(v^{-1}\prescript\sigma{}(wv))$, we see that (i) implies (iii). It remains to show that (iv) implies (i). This follows directly from the above calculation. \end{proof} Fundamental elements play an important role for our description of generic $\sigma$-conjugacy classes. If $x$ is fundamental, the generic $\sigma$-conjugacy class $[b_x]$ coincides with the $\sigma$-conjugacy class of $x$, whose Newton and Kottwitz points are easily computed.
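In small cases, everything can be determined by hand. The following example (our illustration, in the simplest superbasic case) may serve as an anchor: \begin{example} Let $G = \mathrm{GL}_2$ be split and $x = \varepsilon^{(1,0)}s\in \widetilde W$, where $s$ denotes the unique simple reflection. Then $\ell(x) = 0$ and $\nu(x) = \tfrac 12\bigl((1,0)+(0,1)\bigr) = (\tfrac 12,\tfrac 12)$, so $\langle \nu(x),2\rho\rangle = 0 = \ell(x)$ and $x$ is fundamental. Its $\sigma$-conjugacy class $[x]$ has $\kappa(x) = (1,0)+\mathbb Z\alpha^\vee$; among all $\tilde\lambda\in \mathbb Z^2$ with $\tilde\lambda\equiv (1,0)\pmod{\mathbb Z\alpha^\vee}$ and $\tilde\lambda\leq (\tfrac 12,\tfrac 12)$, the maximal one is $\lambda_G([x]) = (0,1)$. Hence $\defect([x]) = \langle \nu(x),2\rho\rangle - \langle \lambda_G([x]),2\rho\rangle = 0-(-1) = 1$, matching Proposition~\ref{prop:defect}: $\min_{v\in W}\ell(v^{-1}\prescript\sigma{}(sv)) = \ell(s) = 1$. \end{example}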
The $\lambda$-invariant and the defect of $[x]$ however are less straightforward to see. For now, we compute the defect. This will later help to compute the $\lambda$-invariant, in view of \begin{align*} \defect([x]) = \langle \nu(x),2\rho\rangle - \langle \lambda_G([x]),2\rho\rangle\qquad \text{(Proposition~\ref{prop:defect})}. \end{align*} \begin{lemma}\label{lem:fundamentalDefect} Let $x$ be fundamental, and choose $v\in \LP(x)$ and $J\subseteq \Delta$ as in Proposition~\ref{prop:fundamental} (iii). \begin{enumerate}[(a)] \item Every $v'\in vW_J$ is length positive for $x$. Moreover, $(x, v', J)$ also satisfies condition (iii) of Proposition~\ref{prop:fundamental}. \item If $v\in W^J$, then $(\prescript{\sigma^{-1}}{}v)^{-1}xv$ coincides with an element of length zero in the extended affine Weyl group $\widetilde W_J = W_J\ltimes X_\ast(T)_{\Gamma_0}$. \item The defect of $x$ is given by \begin{align*} \defect([x]) = \min_{v'\in vW_J} \ell( (v')^{-1}\prescript\sigma{}(wv')) = \min_{v'\in W}\ell( (v')^{-1}\prescript\sigma{}(wv')). \end{align*} \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(a)] \item This is a very straightforward calculation. \item By definition, $(\prescript{\sigma^{-1}}{}v)^{-1}xv\in\widetilde W_J$. The length calculation is straightforward using Lemma~\ref{lem:lengthFunctionalForProducts}. For an alternative proof concept, cf.\ \cite[Proposition~3.2]{He2014b}. \item In view of (a), we may assume $v\in W^J$. Then \begin{align*} \defect([x])=\defect\left([(\prescript{\sigma^{-1}}{}v)^{-1}xv]\right). \end{align*} By (b), the element $(\prescript{\sigma^{-1}}{}v)^{-1}xv\in \widetilde W$ satisfies the conditions needed to compute its defect using Proposition \ref{prop:defect} (v) and (vi). The claim follows.\qedhere \end{enumerate} \end{proof} In order to reduce claims about arbitrary elements in $\widetilde W$ to fundamental ones, we need the following lemma.
If $G$ is unramified, this is a classical result of Viehmann \cite[Proposition~5.5]{Viehmann2014}. \begin{lemma}\label{lem:nonEmptynessBruhatCondition} Let $x\in \widetilde W$ and $[b]\in B(G)_x$, i.e.\ $[b]\in B(G)$ with $X_x(b)\neq\emptyset$. Then there exists a fundamental element $y\in \widetilde W$ such that $y\leq x$ in the Bruhat order and $[y] = [b]$ in $B(G)$. \end{lemma} \begin{proof} Induction on $\ell(x)$. We distinguish a number of cases. \begin{enumerate}[1.] \item Suppose that $x$ is of minimal length in its $\sigma$-conjugacy class in $\widetilde W$ and that $x = uy$ for some fundamental $y\in \widetilde W$ with $\ell(x) = \ell(u) + \ell(y)$ and $[x] = [y]$. By \cite[Theorem~3.5]{He2014}, $[b] = [x]$ so that $y\leq x$ satisfies the desired conditions. \item Suppose that there exists a simple affine reflection $s\in S_{\mathrm{af}}$ such that $\ell(sx\prescript\sigma{} s)<\ell(x)$. By the \enquote{Deligne-Lusztig reduction method} of Goertz-He \cite[Corollary~2.5.3]{Goertz2010b}, we must have $[b] \in B(G)_{x'}$ for $x'=sx\prescript\sigma{} s$ or $x'=sx$. By induction, we get an element $y\leq x'$ with the desired properties. Since $x'< x$, the claim follows. \item In general, we find by \cite[Theorem~3.4]{He2014b} a sequence of elements \begin{align*} x = x_1,\dotsc,x_n\in \widetilde W \end{align*} such that \begin{itemize} \item $x_{i+1} = s_i x_i \prescript\sigma{} s_i$ for some simple affine reflection $s_i\in S_{\mathrm{af}}$ ($i=1,\dotsc, n-1$), \item $\ell(x_{i}) = \ell(x)$ for $i=1,\dotsc,n$ and \item $x_n$ satisfies condition 1.\ or 2. \end{itemize} In particular, we find $y'\leq x_n$ fundamental with $[y'] = [b]$. By \cite[Lemma~2.3]{Nie2015}, there exists $y\leq x$ with $\ell(y)\leq \ell(y')$ and $y$ being $\sigma$-conjugate to $y'$ in $\widetilde W$. While Nie's proof only covers unramified groups, this statement is purely about combinatorics of root systems and affine Weyl groups, so the generalization to quasi-split groups is immediate.
Now observe that $[y] = [y'] = [b]\in B(G)$. In particular, \begin{align*} \langle \nu(b),2\rho\rangle\leq \ell(y) \leq \ell(y') = \langle \nu(b),2\rho\rangle. \end{align*} We see that $y$ must be fundamental as well. \end{enumerate} In any case, the claim follows, finishing the induction and the proof. \end{proof}
\section*{Introduction} Triangular fully packed loop configurations (TFPLs) first appeared in the study of ordinary fully packed loop configurations (FPLs). There they were used to show that the number of FPLs corresponding to a given link pattern with $m$ nested arches is a polynomial in $m$, see~\cite{CKLN}. It soon turned out that TFPLs possess a number of nice properties, which made them worthy objects of study by themselves. For instance, they can be seen as a generalized model for Littlewood--Richardson coefficients, thereby establishing an unexpected link to algebra. This was first proven in~\cite{Nadeau2} by a convoluted argument and later in~\cite{TFPL} in a direct combinatorial manner and in a more general setting. Other combinatorial aspects of TFPLs, many of them still conjectural, are studied in~\cite{Nadeau1,Thapper}. In 2000 Wieland~\cite{Wieland1} invented the operation on FPLs which bears his name. The \emph{Wieland gyration} was used to prove the rotational invariance of the numbers $A_\pi$ of FPLs corresponding to a given link pattern $\pi$. It was later heavily used by Cantini and Sportiello~\cite{CanSport} to prove the Razumov--Stroganov conjecture. It also came up in connection with TFPLs already in~\cite{Nadeau1} following work of~\cite{Thapper}. \medskip The main contribution of this article is the explicit definition of Wieland gyration for TFPLs together with a detailed study of some of its properties. While the usual Wieland gyration of FPLs is an involution, our \emph{left-Wieland gyration} $\operatorname{WL}$ acting on TFPLs is not. By a finiteness argument, the sequence \hbox{$(\operatorname{WL}^m(f))_{m\geq 0}$} is eventually periodic. In Theorem \ref{Thm:EventuallyStable}, it will be proven that the length of the period is always one, which means one always reaches a TFPL which is invariant under left-Wieland gyration. 
In fact, if $N$ is the size of $f$, then less than $2N$ iterations of $\operatorname{WL}$ will suffice to obtain such \emph{stable} configurations. A key step in the proof of Theorem \ref{Thm:EventuallyStable} is to classify these stable TFPLs. It turns out that this depends solely on the occurrence of a certain type of edges called \emph{drifters}: this is the content of Theorem \ref{Thm:StableTFPL}. These results also hold for \emph{right-Wieland gyration}. Now to each TFPL are assigned three binary words $u$, $v$ and $w$ that encode its boundary conditions. Such binary words $\sigma$ are naturally associated with Young diagrams $\lambda(\sigma)$, and by the results of \cite{Nadeau2,TFPL}, TFPLs with boundary $(u,v;w)$ such that \hbox{$|\lambda(u)|+|\lambda(v)|=|\lambda(w)|$} are enumerated by the Littlewood--Richardson coefficient $c_{u,v}^w$. We will show that such TFPLs are stable. In general, the boundary $(u,v;w)$ of a TFPL has to satisfy $|\lambda(u)|+|\lambda(v)|\leq |\lambda(w)|$: this was proven in~\cite{Thapper} using Wieland gyration and a certain degree argument, and later reproven in a combinatorial fashion in~\cite{TFPL}. Here we will use left- and right-Wieland gyrations to give a simple proof of this inequality. \medskip The paper is divided as follows. In Section~\ref{Section:definitions_results} we recall the definition of FPLs and TFPLs as well as elementary properties of binary words and Young diagrams. Section~\ref{Section:Wieland} contains the definition of our main construction, the left-Wieland gyration acting on TFPLs, based on Wieland's original definition. It is introduced in Definition~\ref{def:LeftWieland} and we give its first properties, culminating in Theorem~\ref{Thm:WielandBijectiveTFPL}. We can then state the theorems about stability of TFPLs, namely Theorems~\ref{Thm:EventuallyStable} and~\ref{Thm:StableTFPL}, which are proven in Section~\ref{Section:stable}.
Finally, Section~\ref{Section:applications} contains applications of our gyration to enumerative questions concerning TFPLs. \section{Definitions and elementary properties} \label{Section:definitions_results} In this section we recall the definitions of FPLs and TFPLs, and the binary words attached to the {\em boundary} of a TFPL with the necessary conditions they must satisfy. \subsection{Fully packed loop configurations}\label{Section:FPL} Fully packed loop configurations first came up in statistical physics; they are an alternative representation of \emph{six-vertex model} configurations which are in one-to-one correspondence with \emph{square-ice} configurations, see for example~\cite{FPL1} and~\cite{Wieland1}. Furthermore, they are in bijection with alternating sign matrices and other combinatorial configurations, cf.~\cite{Propp}. We start with the graph $G_n$, which is defined as the square grid with $n^2$ vertices together with $4n$ \emph{external edges}. The $(n+1)^2$ unit squares of this grid, including external cells that have two or three surrounding edges only, are said to be the \emph{cells} of $G_n$. They are partitioned into odd and even cells in a chessboard manner where by convention the cells on the Northwest-Southeast diagonal are odd. In Figure~\ref{G_8} the graph $G_8$ together with its odd and even cells is depicted. \begin{figure}[tbh] \centering \includegraphics[width=.4\textwidth]{G_8.pdf} \caption{The graph $G_8$ with its odd and even cells.} \label{G_8} \end{figure} \begin{Def} A \emph{fully packed loop configuration} (FPL) of size $n$ is a subgraph $F$ of $G_n$ satisfying that \begin{enumerate} \item each vertex of $G_n$ is incident to two edges of $F$, and \item precisely every other external edge belongs to $F$. \end{enumerate} \end{Def} Given an FPL $F$, a \emph{cell of $F$} is defined as a cell of $G_n$ together with those of its surrounding edges that belong to $F$. An example of an FPL is given in Figure~\ref{ExampleFPL}.
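As recalled above, FPLs of size $n$ are in bijection with $n\times n$ alternating sign matrices, so FPLs are counted by the ASM numbers $1, 2, 7, 42, \dots$ The following brute-force sketch (our illustration, not part of the original text) checks the first few of these values directly from the defining conditions of an alternating sign matrix:

```python
from itertools import product

def is_asm(mat):
    """Alternating sign matrix test: in every row and every column, all
    partial sums lie in {0, 1} and the total sum is 1 (equivalently,
    nonzero entries alternate in sign, starting and ending with +1)."""
    for line in list(mat) + list(zip(*mat)):  # rows, then columns
        partial = 0
        for entry in line:
            partial += entry
            if partial not in (0, 1):
                return False
        if partial != 1:
            return False
    return True

def count_asms(n):
    """Count n x n alternating sign matrices by exhaustive search
    over all matrices with entries in {-1, 0, 1}."""
    return sum(
        1
        for entries in product((-1, 0, 1), repeat=n * n)
        if is_asm([entries[i * n:(i + 1) * n] for i in range(n)])
    )

print([count_asms(n) for n in (1, 2, 3)])  # [1, 2, 7]
```

For larger $n$ one would of course enumerate more cleverly; the point is only that the defining conditions are easy to state and check.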
In a natural way, every FPL defines a non-crossing matching of the occupied external edges -- its so-called \emph{link pattern} -- by matching those which are joined by a path. \begin{figure}[tbh] \centering \includegraphics[width=.4\textwidth]{FPL_Example.pdf} \caption{An FPL of size 8.} \label{ExampleFPL} \end{figure} In the course of the study of FPLs corresponding to fixed link patterns with a sufficiently large number of \emph{nested arches}, TFPLs first occurred: such FPLs admit a combinatorial decomposition, in which TFPLs naturally arise. This combinatorial decomposition first came up in the course of the proof in~\cite{CKLN} of a conjecture in~\cite{Zuber} stating that the number of FPLs corresponding to a fixed link pattern $\pi$ with $m$ nested arches is a polynomial function in $m$. \subsection{Triangular fully packed loop configurations} \label{subsection:TFPL} To give the definition of triangular fully packed loop configurations, we need the following graph: \begin{Def}[The graph $G^N$] Let $N$ be a positive integer. The graph $G^N$ is defined as the induced subgraph of the square grid made up of $N$ consecutive centered rows of \hbox{$3,5,\dots, 2N+1$} vertices from top to bottom together with $2N+1$ vertical \emph{external} edges incident to the $2N+1$ bottom vertices. \end{Def} \begin{figure}[tbh] \centering \includegraphics[width=.7\textwidth]{G7_Triangle.pdf} \caption{The graph $G^7$ with its odd and even cells.} \label{Fig003} \end{figure} In the following, let \hbox{$\mathcal{L}^N=\{L_1,L_2,\dots,L_N\}$} (resp. \hbox{$\mathcal{R}^N=\{R_1,R_2,\dots,R_N\}$}) be the set made up of the leftmost (resp. rightmost) vertices of the $N$ rows of $G^N$, where the vertices are numbered from left to right. Furthermore, the $N(N+1)$ unit squares of $G^N$, including external unit squares that have three surrounding edges only, are said to be the \emph{cells} of $G^N$.
They are partitioned into \emph{odd} and \emph{even} cells in a chessboard manner where by convention the top left cell of $G^N$ is odd. In Figure~\ref{Fig003} the graph $G^7$ together with its odd and even cells is pictured. \begin{Def}[\cite{TFPL}] \label{defi:tfpl} Let $N$ be a positive integer. A \emph{triangular fully packed loop configuration} (TFPL) of size $N$ is a subgraph $f$ of $G^N$ such that: \begin{enumerate} \item[(i)] Every other external edge starting with the second one belongs to $f$. \item[(ii)] The $2N$ vertices in $\mathcal{L}^N\cup\mathcal{R}^N$ have degree 0 or 1. \item[(iii)] All other vertices of $G^N$ have degree 2. \item[(iv)] A path in $f$ neither connects two vertices of $\mathcal{L}^N$ nor two vertices of $\mathcal{R}^N$. \end{enumerate} \end{Def} \begin{figure}[tbh] \centering \includegraphics[width=.7\textwidth]{TFPLexample3.pdf} \caption{A TFPL of size 7.} \label{Fig002} \end{figure} An example of a TFPL is given in Figure~\ref{Fig002}. Similar to FPLs, a \emph{cell of $f$} is a cell of $G^N$ together with those of its surrounding edges that belong to $f$. A cell is called \emph{interior} if it does not contain a vertex in $\mathcal{L}^N \cup \mathcal{R}^N$.\\ By {\em binary words} we refer to words \mbox{$\sigma=\sigma_1\cdots\sigma_N$} where $\sigma_i\in\{0,1\}$ for each $1\leq i\leq N$. In the following, the number of occurrences of $1$ (resp. $0$) in a binary word $\sigma$ is denoted by $\vert\sigma\vert_1$ (resp. $\vert\sigma\vert_0$). To each TFPL of size $N$ a triple of binary words of length $N$ is assigned as follows: \begin{Def} Let $f$ be a TFPL of size $N$.
The \emph{boundary} of $f$ is a triple $(u,v;w)$ of binary words of length $N$ defined as follows: \begin{enumerate} \item $u_i=1$ if and only if $L_i\in\mathcal{L}^N$ has degree 1, \item $v_i=1$ if and only if $R_i\in\mathcal{R}^N$ has degree 0 and \item $w_i=1$ if and only if the $i$-th external edge in $f$ -- counted from left to right -- is connected by a path in $f$ either with a vertex in $\mathcal{L}^N$ or with an external edge to its left. \end{enumerate} \end{Def} The set of all TFPLs with boundary $(u,v;w)$ is denoted by $T_{u,v}^w$ and its cardinality by $t_{u,v}^w$. For example, the triple \hbox{$(0101111,0011111;1101101)$} is the boundary of the TFPL depicted in Figure~\ref{Fig002}. A triple $(u,v;w)$ that is the boundary of a TFPL has to fulfill certain necessary conditions. To formulate them, the following standard result is needed: \begin{Prop}[\cite{Nadeau2}]\label{Prop005} For given non-negative integers $k$ and $\ell$ the following two sets are in bijection: \begin{enumerate} \item[(i)] the set of binary words $\sigma$ satisfying $\vert\sigma\vert_0=k$ and $\vert\sigma\vert_1=\ell$ and \item[(ii)] the set of Young diagrams fitting in the rectangle with $k$ rows and $\ell$ columns. \end{enumerate} \end{Prop} In Figure~\ref{Fig4}, an example of the bijection between binary words and Young diagrams is given. The Young diagram corresponding to a binary word $\sigma$ is denoted by $\lambda(\sigma)$. Furthermore, $\lambda(\tau)\subseteq\lambda(\sigma)$ means that the Young diagram $\lambda(\tau)$ is included in the Young diagram $\lambda(\sigma)$, and $\vert \lambda(\sigma)\vert$ denotes the number of cells of the Young diagram $\lambda(\sigma)$. Note that $\vert \lambda(\sigma) \vert$ coincides with the number of inversions of the binary word $\sigma$.
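For concreteness, the bijection of Proposition~\ref{Prop005} and the identity between $\vert\lambda(\sigma)\vert$ and the number of inversions of $\sigma$ can be illustrated by a short Python sketch (an illustration of ours, not part of the original text; recording, for each zero of $\sigma$, the number of ones to its left is one standard realization of the bijection):

```python
def shape(sigma):
    # Young diagram lambda(sigma) of a binary word sigma, as a weakly
    # decreasing list of row lengths: every 0 of sigma contributes a row
    # whose length is the number of 1s to its left.
    rows = [sigma[:i].count('1') for i, s in enumerate(sigma) if s == '0']
    return [r for r in reversed(rows) if r > 0]

def inversions(sigma):
    # Number of pairs i < j with sigma_i = 1 and sigma_j = 0.
    n = len(sigma)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if sigma[i] == '1' and sigma[j] == '0')

# The boundary (0101111, 0011111; 1101101) of the TFPL in Figure 2:
u, v, w = '0101111', '0011111', '1101101'
print(shape(u), shape(v), shape(w))   # [1] [] [4, 2]
# |lambda(sigma)| coincides with the number of inversions of sigma:
assert all(sum(shape(s)) == inversions(s) for s in (u, v, w))
```

In particular, $\lambda(u)=(1)$, $\lambda(v)=\emptyset$ and $\lambda(w)=(4,2)$ for the boundary of the TFPL in Figure~\ref{Fig002}.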
\begin{Thm}[\cite{CKLN,Thapper,TFPL}] In order for a TFPL configuration with boundary $(u,v;w)$ to exist, the following must be satisfied: \begin{align} |u|_0 = &|v|_0 = |w|_0, \label{Necessary1}\\ \lambda(u)\subseteq\lambda(w) &\textnormal{ and }\lambda(v)\subseteq\lambda(w), \label{Necessary2}\\ \vert \lambda(u)\vert + &\vert \lambda(v)\vert\leq\vert \lambda(w)\vert. \label{Necessary3} \end{align} \end{Thm} Conditions~\eqref{Necessary1} and~\eqref{Necessary2} are reasonably easy to prove. In Section~\ref{Section:applications}, we will provide a new proof of Condition~\eqref{Necessary3} using Wieland gyration on TFPLs. \begin{figure}[tbh] \centering \includegraphics[width=1\textwidth]{boundary_Ferrer.pdf} \caption{The Young diagrams which correspond to the boundary $(0101111,0011111;1101101)$ of the TFPL in Figure~\ref{Fig002}.} \label{Fig4} \end{figure} To end this section, we introduce certain skew shapes which play an important role in the context of left- and right-Wieland gyration. A skew shape is said to be a \emph{horizontal strip} (\emph{resp.} a \emph{vertical strip}) if each of its columns (\emph{resp.} rows) contains at most one cell. Examples are given in Figure~\ref{Fig008}. Consider two binary words $\sigma$ and $\tau$ satisfying $\vert \sigma\vert_1=\vert \tau\vert_1$ and $\vert\sigma\vert_0=\vert \tau\vert_0$.
Then the skew shape \hbox{$\lambda(\tau)/\lambda(\sigma)$} is a horizontal strip (\emph{resp.} a vertical strip) if and only if for each \hbox{$j\in\{1,\dots,\vert \sigma\vert_1\}$} (\emph{resp.} \hbox{$j\in\{1,\dots,\vert \sigma\vert_0\}$}) the following holds: {\it If $\sigma_i$ is the \hbox{$j$-th} one (\emph{resp.} zero) in $\sigma$, then $\tau_{i-1}$ or $\tau_i$ (\emph{resp.} $\tau_{i}$ or $\tau_{i+1}$) is the \hbox{$j$-th} one (\emph{resp.} zero) in $\tau$.} \begin{figure}[tbh] \centering \includegraphics[width=.6\textwidth]{horizontalstrip.pdf} \caption{\label{Fig008} The horizontal strip $\lambda(1111001100) / \lambda(0111100110)$ and the vertical strip $\lambda(1100111100)/\lambda(1001111001)$.} \end{figure} In the following, if the skew shaped Young diagram \hbox{$\lambda(\tau)/\lambda(\sigma)$} is a horizontal strip (\emph{resp.} a vertical strip), we will write \hbox{$\sigma\stackrel{\mathrm{h}}{\longrightarrow}\tau$} (\emph{resp.} \hbox{$\sigma\stackrel{\mathrm{v}}{\longrightarrow}\tau$}). \section{Wieland gyration for TFPLs} \label{Section:Wieland} In this section the definitions of left- and right-Wieland gyration for TFPLs are given and some first properties are derived. The starting point is the definition of Wieland gyration for FPLs. It is composed of local operations on all \emph{active} cells of an FPL: the active cells of an FPL can be chosen to be either all its odd cells or all its even cells. Now, let $F$ be an FPL and $c$ be an active cell of $F$. Then we must distinguish two cases, namely whether $c$ contains precisely two edges of $F$ on opposite sides or not. If this is the case, Wieland gyration $\operatorname{W}$ leaves $c$ invariant. Otherwise, the effect of $\operatorname{W}$ on $c$ is that edges and non-edges of $F$ are exchanged. In Figure~\ref{Wieland}, the action of $\operatorname{W}$ on an active cell is illustrated. The result of applying $\operatorname{W}$ to each active cell of $F$ is said to be the image of $F$ under Wieland gyration and is denoted by $\operatorname{W}(F)$.
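The local rule of $\operatorname{W}$ admits a direct implementation; the following Python sketch (an illustration of ours, encoding a cell by the set of its occupied sides) also verifies that $\operatorname{W}$ is an involution on each of the 16 possible cells, which is the reason why Wieland gyration can be inverted:

```python
from itertools import combinations

SIDES = frozenset('NESW')  # the four sides of a unit cell

def W(cell):
    # Wieland gyration on a single active cell: a cell containing
    # precisely two edges on opposite sides is left invariant;
    # otherwise edges and non-edges are exchanged.
    cell = frozenset(cell)
    if cell in (frozenset('NS'), frozenset('EW')):
        return cell
    return SIDES - cell

# W is an involution on each of the 16 possible cells.
cells = [frozenset(c) for k in range(5) for c in combinations('NESW', k)]
assert len(cells) == 16
assert all(W(W(c)) == c for c in cells)
```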
\begin{figure}[tbh] \includegraphics[width=.4\textwidth]{WielandForCells.pdf} \caption{Up to rotation, the action of $\operatorname{W}$ on the active cells of an FPL.} \label{Wieland} \end{figure} In Figure~\ref{WielandFPLExample} the image of the FPL depicted in Figure~\ref{ExampleFPL} under Wieland gyration with the odd cells being active is pictured.\\ \begin{figure}[tbh] \includegraphics[width=.9\textwidth]{Wieland_FPL_Example.pdf} \caption{The image of the FPL depicted in Figure~\ref{ExampleFPL} under Wieland gyration with the odd cells being active.} \label{WielandFPLExample} \end{figure} Wieland gyration, as it will be defined for TFPLs, is based on the operation $\operatorname{W}$. As active cells of a TFPL, one can choose either all its odd cells or all its even cells. Choosing all odd cells as active cells will lead to what will be defined as left-Wieland gyration, whereas choosing all even cells as active cells will lead to what will be defined as right-Wieland gyration. \begin{Def}[Left-Wieland gyration] \label{def:LeftWieland} Let $f$ be a triangular fully packed loop configuration with left boundary word $u$, and let $u^-$ be a binary word such that $u^-\stackrel{\mathrm{h}}{\longrightarrow} u$. The \emph{image of $f$ under left-Wieland gyration with respect to $u^-$} is determined as follows: \begin{enumerate} \item Insert a vertex $L'_i$ to the left of $L_i$ for $1\leq i\leq N$. Then run through the occurrences of ones in $u^-$: Let $\{i_1 < i_2 < \ldots < i_{N_1}\} = \{ i | u^-_i=1\}$. \begin{enumerate} \item If $u_{i_j}$ is the $j$-th one in $u$, add a horizontal edge between $L'_{i_j}$ and $L_{i_j}$. \item If $u_{i_{j}-1}$ is the $j$-th one in $u$, add a vertical edge between $L'_{i_j}$ and $L_{{i_j}-1}$. \end{enumerate} \item Apply Wieland gyration to each odd cell of $f$. \item Delete all vertices in $\mathcal{R}^N$ and their incident edges.
\end{enumerate} After shifting the whole construction one unit to the right, one obtains the desired image $\operatorname{WL}_{u^-}(f)$. \end{Def} In the case $u^-=u$, we will simply write $\operatorname{WL}(f)$ and speak of the \emph{image of $f$ under left-Wieland gyration}. \begin{figure}[tbh] \begin{center} \includegraphics[width=.7\textwidth]{WielandexampleTFPL.pdf} \caption{The TFPL depicted in Figure~\ref{Fig002} and its image under left-Wieland gyration with respect to \hbox{$0011111$}.} \label{WielandexampleTFPL} \end{center} \end{figure} In the following, to distinguish between vertices in $f$ and in $\operatorname{WL}_{u^-}(f)$, we write $x'$ for the vertex of the image under left-Wieland gyration with respect to $u^-$ that corresponds to a vertex $x$ of $G^N$ (before the shifting is performed). In Figure~\ref{WielandexampleTFPL} the TFPL depicted in Figure~\ref{Fig002} with its odd cells marked by gray discs and its image under left-Wieland gyration with respect to \hbox{$0011111$} are pictured. It is a TFPL with boundary \hbox{$(0011111,0101111;1101101)$}. Note that the left boundary of the TFPL pictured in \hbox{Figure~\ref{Fig002}} is $0101111$ and \hbox{$0011111\stackrel{\mathrm{h}}{\longrightarrow}0101111$}. Also, the new right boundary $0101111$ and the right boundary $0011111$ of the preimage satisfy \hbox{$0011111\stackrel{\mathrm{v}}{\longrightarrow}0101111$}. This turns out to hold in general: \begin{Prop}\label{imageWielandTFPL} Let $f$ be a TFPL with boundary $(u,v;w)$ and let $u^-$ be a binary word satisfying $u^-\stackrel{\mathrm{h}}{\longrightarrow}u$. Then $\operatorname{WL}_{u^-}(f)$ is a TFPL with boundary $(u^-,v^+;w)$ where $v^+$ is a binary word satisfying $v\stackrel{\mathrm{v}}{\longrightarrow}v^+$. \end{Prop} \begin{proof} First, we have to check that $\operatorname{WL}_{u^-}(f)$ indeed is a TFPL, i.e.\ the four conditions in Definition~\ref{defi:tfpl} must be satisfied.
By definition, the vertices $L'_1,L'_2,\dots,L'_N$ have degree $0$ or $1$. For the degree of $R_i'$ to be $2$ in $\operatorname{WL}_{u^-}(f)$, the vertex to the left of $R_i$ would need to be adjacent both to $R_{i-1}$ and $R_i$ in $f$, which is excluded since no path in $f$ joins two vertices in $\mathcal{R}^N$ by Definition~\ref{defi:tfpl}(iv). Thus, the vertices $R_1',R_2',\dots,R_N'$ have degree $0$ or $1$ in $\operatorname{WL}_{u^-}(f)$. All other vertices have degree $2$ in $\operatorname{WL}_{u^-}(f)$ since they simply come from the application of $\operatorname{W}$ to cells of $f$. Finally, let $f'$ denote the configuration that is obtained before the vertices of $\mathcal{R}^N$ are deleted. Since Wieland gyration preserves the connectivity of path endpoints in each active cell, no path in $f'$ joins two vertices in $\mathcal{L}^N$ or two vertices in $\mathcal{R}^N$. Thus, a path in $\operatorname{WL}_{u^-}(f)$ neither joins two vertices in $\mathcal{L}^{N\prime}$ nor two vertices in $\mathcal{R}^{N\prime}$ and Definition~\ref{defi:tfpl}(iv) is satisfied. It remains to check the assertion on the boundary. The left boundary of $\operatorname{WL}_{u^-}(f)$ is $u^-$ by construction. The right boundary $v^+$ of $\operatorname{WL}_{u^-}(f)$ satisfies $v\stackrel{\mathrm{v}}{\longrightarrow}v^+$ by Proposition \ref{PropRightBoundary} below and the characterization of pairs $\sigma,\sigma^+$ of binary words satisfying $\sigma\stackrel{\mathrm{v}}{\longrightarrow}\sigma^+$ at the end of Section~\ref{Section:definitions_results}. Finally, the bottom boundary of $\operatorname{WL}_{u^-}(f)$ is $w$ because Wieland gyration preserves the connectivity of path endpoints in each active cell. \end{proof} The lemma below treats the effects of left-Wieland gyration along the right boundary of a TFPL. \begin{Lemma}\label{WielandRightBoundaryTFPL} Let $f,u^-,v^+$ be as in Proposition~\ref{imageWielandTFPL}. Then $v^+\neq v$ if and only if there exists a vertex in $\mathcal{R}^N$ which is incident to a vertical edge of $f$.
\end{Lemma} \begin{proof} We denote by $x_s$ the vertex to the left of $R_s$ for all $1\leq s\leq N$, and write $f'$ to denote $\operatorname{WL}_{u^-}(f)$. Let $f$ be a TFPL with a vertex $R_j$ incident to a vertical edge, and pick $j$ minimal. Then $x_j$ is necessarily adjacent both to the vertex to its left and to the vertex below, so by Wieland gyration $R'_j$ is of degree $0$ in $f'$. Since $R_j$ is of degree $1$ this shows $v\neq v^+$. Conversely, suppose that $v^+\neq v$. By Proposition~\ref{imageWielandTFPL} there necessarily exists $j\in\{1,2,\dots,N-1\}$ such that $v_{j}=0$ and $v^+_j=1$. $R_j'$ is of degree $0$ in $f'$, so $x_j$ is adjacent in $f$ both to the vertex to its left and to the vertex below it. Since $R_j$ is of degree $1$, it is necessarily incident to a vertical edge. \end{proof} As a byproduct of the previous proof, one can in fact precisely describe the right boundary $v^+$ as follows: \begin{Prop}\label{PropRightBoundary} Keep the hypotheses of Lemma~\ref{WielandRightBoundaryTFPL}. For each $i$ such that $R_i$ is adjacent to a horizontal edge (\emph{resp.} a vertical edge), we have $v^+_i=0$ (\emph{resp.} $v^+_{i+1}=0$). All other values $v^+_j$ are equal to $1$. \end{Prop} {\noindent \bf Right-Wieland gyration.} In the definition of left-Wieland gyration, the active cells are all odd cells of a TFPL. When selecting all even cells of a TFPL as active cells, \emph{right-Wieland gyration} is obtained. It depends on a binary word $v^-$ satisfying $v^-\stackrel{\mathrm{v}}{\longrightarrow}v$ that encodes what happens along the right boundary of a TFPL with right boundary $v$; it is denoted by $\operatorname{WR}_{v^-}$, or simply by $\operatorname{WR}$ if $v^-=v$. It is defined in an obvious way as the symmetric version of left gyration, and we shall simply illustrate it on an example in Figure~\ref{InverseWielandExampleTFPL}.
\begin{figure}[tbh] \includegraphics[width=.7\textwidth]{InverseWielandExampleTFPL.pdf} \caption{A TFPL and its image under right-Wieland gyration with respect to \hbox{$0011111$}.} \label{InverseWielandExampleTFPL} \end{figure} There are immediate symmetrical versions of Propositions~\ref{imageWielandTFPL} and~\ref{PropRightBoundary} for $\operatorname{WR}$ which we record: \begin{Prop}\label{ImageRightWieland} The image of a TFPL with boundary $(u,v;w)$ under right-Wieland gyration with respect to $v^-$ is a TFPL with boundary $(u^+,v^-;w)$ where $u^+$ is a binary word satisfying $u\stackrel{\mathrm{h}}{\longrightarrow}u^+$. \end{Prop} \begin{Prop}\label{PropLeftBoundary} Keep the notations of the previous proposition. For each index $i$ such that $L_i$ is adjacent to a horizontal edge (\emph{resp.} a vertical edge), there holds $u^+_i=1$ (\emph{resp.} $u^+_{i-1}=1$). All other values $u^+_j$'s are equal to $0$. \end{Prop} Given a TFPL with right boundary $v$, the effect of left-Wieland gyration along the right boundary of the TFPL is inverted by right-Wieland gyration with respect to $v$. On the other hand, given a TFPL with left boundary $u$ the effect of right-Wieland gyration along the left boundary is inverted by left-Wieland gyration with respect to $u$. Since Wieland gyration is an involution on each cell, it follows: \begin{Thm}\label{Thm:WielandBijectiveTFPL} \begin{enumerate} \item Let $f$ be a TFPL with boundary $(u^+,v;w)$ and $u$ be a binary word such that $u\stackrel{\mathrm{h}}{\longrightarrow}u^+$. Then \[ \operatorname{WR}_v(\operatorname{WL}_{u}(f))=f. \] \item Let $f$ be a TFPL with boundary $(u,v^+;w)$ and $v$ be a binary word such that $v\stackrel{\mathrm{v}}{\longrightarrow}v^+$. Then \begin{equation} \notag \operatorname{WL}_u(\operatorname{WR}_{v}(f))=f. \end{equation} \end{enumerate} \end{Thm} \begin{Rem} It is perhaps useful to point out that $\operatorname{WR}(\operatorname{WL}(f))\neq f$ in general. 
Indeed, by Lemma~\ref{WielandRightBoundaryTFPL}, equality will hold precisely when all vertices $R_i$ of degree one are adjacent to horizontal edges. \end{Rem} In Section~\ref{Section:stable}, we will study the behaviour of TFPLs under iterated applications of $\operatorname{WL}$. In Figure~\ref{Example:EventuallyStable}, an example of a TFPL to which left-Wieland gyration is repeatedly applied is depicted: one checks that the last TFPL in the sequence is invariant under left-Wieland gyration. In the following, a TFPL that is invariant under left-Wieland gyration is said to be \emph{stable}. \begin{figure}[tbh] \centering \includegraphics[width=1\textwidth]{Example_Eventually_Periodic.pdf} \caption{A TFPL to which left-Wieland gyration is repeatedly applied.} \label{Example:EventuallyStable} \end{figure} Given a TFPL $f$, the sequence $(\operatorname{WL}^m(f))_{m\geq 0}$ is eventually periodic since there are only finitely many TFPLs of a fixed size. The length of this period is in fact always 1. \begin{Thm}\label{Thm:EventuallyStable} Let $f$ be a TFPL of size $N$. Then $\operatorname{WL}^{2N-1}(f)$ is stable, so that the following holds for all $m\geq 2N-1$: \[ \operatorname{WL}^m(f)=\operatorname{WL}^{2N-1}(f). \] The same holds for right-Wieland gyration. \end{Thm} To prove this theorem, it is first necessary to characterize TFPLs that are invariant under left-Wieland gyration. Note that a TFPL is invariant under left-Wieland gyration if and only if it is invariant under right-Wieland gyration by Theorem~\ref{Thm:WielandBijectiveTFPL}. \section{Stable TFPLs} \label{Section:stable} From now on the vertices of $G^N$ are partitioned into odd and even vertices in a chessboard manner such that by convention the vertices in $\mathcal{L}^N$ are odd. In our pictures, the odd vertices are depicted by circles and the even vertices by squares.
A TFPL with this partition of its vertices indicated is depicted in \hbox{Figure~\ref{StableExcess1}}. \begin{figure}[tbh] \centering \includegraphics[width=.5\textwidth]{StableTFPL.pdf} \caption{The bottom right TFPL configuration of size 7 in Figure~\ref{Example:EventuallyStable} with its odd resp. even vertices illustrated by circles resp. squares.} \label{StableExcess1} \end{figure} It will be proven that stable TFPLs can be characterized as follows: \begin{Thm}\label{Thm:StableTFPL} A TFPL is stable if and only if it contains no edge of the form $\vcenter{\hbox{\includegraphics[width=.0125\textwidth]{drifter.pdf}}}$. \end{Thm} The TFPL depicted in Figure~\ref{StableExcess1} is stable by Theorem~\ref{Thm:StableTFPL}. \begin{Def} An edge of the form $\vcenter{\hbox{\includegraphics[width=.0125\textwidth]{drifter.pdf}}}$ is called a \emph{drifter}. \end{Def} In the following, the possible cells of a TFPL play an important role in the proofs. For convenience, we fix notations for the 16 odd and 16 even cells of a TFPL; they are shown in Figure~\ref{ListCells}. \begin{figure}[tbh] \includegraphics[width=0.8\textwidth]{ListCells.pdf} \caption{The notations for the 16 odd and 16 even cells of a TFPL, with emphasis on the subsets $\mathfrak{O}=\{\mathfrak{o}_1,\mathfrak{o}_2, \mathfrak{o}_3,\mathfrak{o}_4,\mathfrak{o}_5\}$ and $\mathfrak{E}=\{\mathfrak{e}_1,\mathfrak{e}_2, \mathfrak{e}_3,\mathfrak{e}_4,\mathfrak{e}_5\}$.} \label{ListCells} \end{figure} \subsection{Characterization of stable TFPLs} \label{Sub:CharacterizationStable} To prove Theorem~\ref{Thm:StableTFPL}, we will begin by showing that a TFPL containing a drifter is not stable. \begin{Prop}\label{Prop:InstableTFPLs} Let $f$ be a TFPL that contains a drifter. Then $\operatorname{WL}(f)\neq f$.
\end{Prop} \begin{proof} If $f$ contains a drifter incident to a vertex in $\mathcal{R}^N$, then by Lemma~\ref{WielandRightBoundaryTFPL} we know that the right boundaries of $f$ and $\operatorname{WL}(f)$ are different, so that necessarily $\operatorname{WL}(f)\neq f$. We can now assume that no vertex in $\mathcal{R}^N$ is incident to a drifter. Let $\iota$ be a drifter in $f$ with maximal $x$-coordinate, and consider the odd cell $o$ in $f$ that contains $\iota$. Let $x$ be the top right vertex of $o$ and $y$ be the bottom right vertex of $o$. By the choice of $\iota$ the vertices $x$ and $y$ are not incident to a drifter. \begin{center} \includegraphics[height=1cm]{odd_cell.pdf} \end{center} Therefore, $o\in\{\mathfrak{o}_8, \mathfrak{o}_9, \mathfrak{o}_{10}, \mathfrak{o}_{11}, \mathfrak{o}_{12}\}$. If $o$ is of the form $\mathfrak{o}_8$ or $\mathfrak{o}_{10}$ the vertex to the right of $x'$ is incident to a drifter in $\operatorname{WL}(f)$. In that case, $\operatorname{WL}(f)\neq f$ because the vertex to the right of $x$ in $f$ is not incident to a drifter by assumption. If $o$ is of the form $\mathfrak{o}_9$, $\mathfrak{o}_{11}$ or $\mathfrak{o}_{12}$, the vertices $x'$ and $y'$ are not adjacent in $\operatorname{WL}(f)$. Thus, $\operatorname{WL}(f)\neq f$ because $x$ and $y$ are adjacent in $f$. \end{proof} To prove that a TFPL without a drifter is indeed stable, we need to determine the types of cells which may occur. Define $\mathfrak{O}=\{\mathfrak{o}_1,\mathfrak{o}_2, \mathfrak{o}_3,\mathfrak{o}_4,\mathfrak{o}_5\}$ and $\mathfrak{E}=\{\mathfrak{e}_1,\mathfrak{e}_2,\mathfrak{e}_3,\mathfrak{e}_4,\mathfrak{e}_5\}$. \begin{Lemma}\label{Lemma:CellsStableTFPL} If $f$ is a TFPL without drifters, then all interior odd cells belong to $\mathfrak{O}$ while all of its interior even cells belong to $\mathfrak{E}$. \end{Lemma} \begin{proof} Let $f$ be a TFPL without a drifter, and $o$ be one of its interior odd cells. 
Since $o$ has no drifter, it can only belong to $\mathfrak{O}$ or have one of the types $\mathfrak{o}_6,\mathfrak{o}_7$ or $\mathfrak{o}_{13}$. But in types $\mathfrak{o}_6,\mathfrak{o}_{13}$ (\emph{resp.} $\mathfrak{o}_7,\mathfrak{o}_{13}$), there would exist an interior cell above $o$ (\emph{resp.} below $o$) that contains a drifter, which is excluded. The case of even cells is entirely analogous. \end{proof} Furthermore, in a TFPL with no drifter each odd cell has a uniquely determined even cell to its right. \begin{Lemma}\label{Lemma:PairsStableTFPL} Let $f$ be a TFPL without drifters, $o$ an odd cell of $f$ and $e$ the even cell of $f$ to the right of $o$. If $o$ and $e$ are interior, then they can only occur as part of one of the following pairs: \begin{center} \includegraphics[width=0.8\textwidth]{Cor001.pdf} \end{center} On the other hand, if $o$ or $e$ contains an external edge, then $o$ and $e$ can only occur as part of one of the following pairs: \begin{center} \includegraphics[width=.3\textwidth]{ExternalCells.pdf} \end{center} \end{Lemma} \begin{proof} Here, only the case when $o$ is an interior odd cell and $o=\mathfrak{o}_1$ is considered, the other cases being similar. Obviously, the cell $e$ cannot equal $\mathfrak{e}_4$. But it cannot equal $\mathfrak{e}_1$, $\mathfrak{e}_2$ or $\mathfrak{e}_3$ either, since otherwise one of the right vertices of $o$ would be incident to a drifter. The only remaining possibility is that $e$ is of type $\mathfrak{e}_5$ by Lemma~\ref{Lemma:CellsStableTFPL}. \end{proof} We can now complete the proof of Theorem~\ref{Thm:StableTFPL} by showing that a TFPL without drifters is invariant under left-Wieland gyration. \begin{Prop}\label{Prop007} If $f$ is a TFPL without drifters, then $\operatorname{WL}(f)=f$. \end{Prop} \begin{proof} Let $o$ be an odd cell of $f$ and $e$ be the even cell to its right. By \hbox{Lemma~\ref{Lemma:PairsStableTFPL}}, $e$ is uniquely determined by $o$.
The crucial observation is that $e$ coincides with the image of $o$ under Wieland gyration. Thus, each even cell of $f$ and its corresponding even cell of $\operatorname{WL}(f)$ coincide. By definition all edges and non-edges of $f$ incident to a vertex in $\mathcal{L}^N$ are preserved by left-Wieland gyration. In summary, $\operatorname{WL}(f)=f$. \end{proof} \subsection{TFPLs are eventually stable under Wieland gyration} In this subsection, we will prove Theorem~\ref{Thm:EventuallyStable}. The idea of the proof is the following: when applying left-Wieland gyration to a TFPL, the drifters of the TFPL are globally moved to the right. Thus, after a finite number of applications of left-Wieland gyration, all drifters eventually disappear through the right boundary. As a consequence of Theorem~\ref{Thm:StableTFPL} a stable TFPL is then obtained. In a TFPL of size $N$, there are $2N+1$ columns of vertices which we label from left to right from $1$ to $2N+1$. \begin{Prop}\label{Prop:EventuallyStable} Let $f$ be a TFPL of size $N$ that contains a drifter in the $n$-th column but no drifter in the columns $1,\ldots,n-1$ to its left. Then $\operatorname{WL}(f)$ contains no drifter in any of the columns $1,\ldots,n$. \end{Prop} \begin{proof} First of all, notice that by the definition of left-Wieland gyration, there is no vertex of $\mathcal{L}^{N\prime}$ incident to a drifter in $\operatorname{WL}(f)$. By definition of $\operatorname{WL}$, the occurrence of a drifter in an even cell $e'$ of $\operatorname{WL}(f)$ depends solely on the odd cell to the left of the corresponding even cell $e$ in $f$. By hypothesis, no odd cell of $f$ occurring to the left of the $(n-1)$-st column has a vertex incident to a drifter. It follows from the proof of Lemma~\ref{Lemma:CellsStableTFPL} that all these odd cells belong to $\mathfrak{O}$. This entails that all even cells of $\operatorname{WL}(f)$ to the left of the $n$-th column belong to $\mathfrak{E}$, and thus do not contain a drifter.
Since these even cells cover all vertical edges in the columns $1,\ldots,n$, the proof is complete. \end{proof} \begin{proof}[Proof of Theorem~\ref{Thm:EventuallyStable}] By immediate induction using Proposition~\ref{Prop:EventuallyStable}, we know that the configuration $\operatorname{WL}^{2N+1-n}(f)$ contains no drifter, and thus by Theorem~\ref{Thm:StableTFPL} it is stable under $\operatorname{WL}$, so that \begin{equation} \operatorname{WL}^m(f)=\operatorname{WL}^{2N+1-n}(f) \label{Equation:EventuallyStable}\end{equation} for all $m\geq 2N+1-n$. Since the first column of vertices of a TFPL consists only of the vertex $L_1$, we have $n\geq 2$, which proves the theorem. \end{proof} \section{Applications of Wieland gyration on TFPLs} \label{Section:applications} \subsection{Some linear relations} The following was conjectured for Dyck words in~\cite{Thapper} and proved in~\cite{Nadeau1} using Wieland gyration on FPLs. \begin{Prop}\label{Prop:LinearRelation} Let $u$, $v$ and $w$ be binary words. Then \begin{equation} \sum_{u^+: u\stackrel{\mathrm{h}}{\longrightarrow}u^+} t_{u^+,v}^w=\sum_{v^+:v\stackrel{\mathrm{v}}{\longrightarrow}v^+}t_{u,v^+}^w. \notag\end{equation} \end{Prop} \begin{proof} Indeed the map $\operatorname{WL}_u(\cdot)$ acts on all TFPLs with boundary $(u^+,v;w)$ where $u\stackrel{\mathrm{h}}{\longrightarrow}u^+$, while $\operatorname{WR}_v(\cdot)$ acts on all TFPLs with boundary $(u,v^+;w)$ where $v\stackrel{\mathrm{v}}{\longrightarrow}v^+$. By Theorem~\ref{Thm:WielandBijectiveTFPL}, these maps are inverse of one another, and the result is obtained by taking cardinalities. \end{proof} \subsection{The inequality~\eqref{Necessary3}} This states that $\vert \lambda(u)\vert + \vert \lambda(v)\vert\leq\vert \lambda(w)\vert$ always holds for the boundaries $(u,v;w)$ of TFPLs. It was proved in~\cite[Lemma 3.7]{Thapper} in the Dyck word case. Later, another proof in connection with TFPLs together with an orientation of the edges was given in~\cite{TFPL}.
More precisely, it was shown there that in an oriented TFPL with boundary $(u,v;w)$, the quantity \hbox{$\vert\lambda(w)\vert-\vert\lambda(u)\vert-\vert\lambda(v)\vert$} counts occurrences of certain local patterns in the TFPL. We now give an independent proof based on the properties of Wieland gyration; the idea for this proof comes from the original one by Thapper, which can be seen as relying on Wieland gyration on FPLs in an indirect way. \begin{proof}[Proof of~\eqref{Necessary3}] Let $f$ be a TFPL with boundary $(u,v;w)$. The proof is done by induction on $\vert\lambda(u)\vert$. In the case when $\vert\lambda(u)\vert=0$ we have \hbox{$\lambda(v)\subseteq\lambda(w)$} by Condition~\eqref{Necessary2}, which implies \hbox{$\vert\lambda(v)\vert\leq\vert\lambda(w)\vert$}. Assume now $\vert\lambda(u)\vert\geq 1$. By removing a corner of $\lambda(u)$, we obtain a Young diagram \hbox{$\lambda(u^-)\subseteq\lambda(u)$} with one cell less than $\lambda(u)$. In particular $\lambda(u)/\lambda(u^-)$ is a horizontal strip. We first want to prove that there exists $i>0$ such that $\operatorname{WL}_{u^-}^i(f)$ has right boundary $v^+\neq v$. Assume the contrary, that is, the right boundary of $\operatorname{WL}_{u^-}^i(f)$ is $v$ for all $i>0$. Since there are only a finite number of TFPLs with boundary $(u^-,v;w)$, there exist integers $i_0,p>0$ such that \[ \operatorname{WL}_{u^-}^{i_0+p}(f)=\operatorname{WL}_{u^-}^{i_0}(f). \] We can then apply $\operatorname{WR}^{i_0}_{v}$ to both sides of the identity, and by Theorem~\ref{Thm:WielandBijectiveTFPL} we obtain \hbox{$\operatorname{WL}_{u^-}^p(f)=f$}. But these configurations have left boundaries $u^-,u$ respectively and we assumed $u^-\neq u$, which is a contradiction. Hence, let $i$ be a positive integer such that $\operatorname{WL}_{u^-}^i(f)$ has boundary $(u^-,v^+;w)$ where $v^+\neq v$.
By Proposition~\ref{imageWielandTFPL} we have $\lambda(v)\subsetneq\lambda(v^+)$ and therefore \hbox{$\vert\lambda(v)\vert+1\leq\vert\lambda(v^+)\vert$}. Applying the induction hypothesis to $\operatorname{WL}_{u^-}^i(f)$ completes the proof: \begin{equation*} \vert\lambda(u)\vert+\vert\lambda(v)\vert=\vert\lambda(u^-)\vert+1+\vert\lambda(v)\vert\leq\vert\lambda(u^-)\vert+\vert\lambda(v^+)\vert\leq\vert\lambda(w)\vert. \end{equation*} \end{proof} \subsection{Excesses $0,1$ and beyond} For a TFPL $f$ with boundary $(u,v;w)$, the nonnegative integer \hbox{$\vert\lambda(w)\vert-\vert\lambda(u)\vert-\vert\lambda(v)\vert$} is called the \emph{excess} of $f$. \begin{Prop} \label{prop:Excess0Stable} If a TFPL has excess $0$, then it is stable. \end{Prop} \begin{proof} It is a consequence of~\cite[Proposition 5.2]{TFPL} that TFPLs of excess $0$ do not contain drifters, so we can conclude with Theorem~\ref{Thm:StableTFPL}. \end{proof} These TFPLs are known to be counted by Littlewood--Richardson coefficients~\cite{Nadeau2,TFPL} as recalled in the introduction. In~\cite{TFPL}, configurations of excess $1$ were also studied in some detail and enumerated. The authors defined a number of \emph{moves} on such (oriented) configurations in order to transform them, and ultimately reach a configuration of excess $0$. It turns out that these complicated moves are essentially equivalent to a simple application of $\operatorname{WL}$, at least when the configuration is not stable. Therefore stable configurations should first be studied and enumerated, and other configurations may then be related to them through Wieland gyration, in order to find for instance linear relations between their cardinalities. The feasibility of such an approach is in particular supported by Theorem~\ref{Thm:EventuallyStable}.
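As a small computational aid (ours, not from the original text), the excess of a boundary can be read off directly from the inversion numbers of the three words; for the boundary $(0101111,0011111;1101101)$ of the TFPL in Figure~\ref{Fig002} one obtains excess $6-1-0=5$:

```python
def inversions(sigma):
    # |lambda(sigma)|: number of pairs i < j with sigma_i = 1, sigma_j = 0.
    ones = inv = 0
    for s in sigma:
        if s == '1':
            ones += 1
        else:
            inv += ones
    return inv

def excess(u, v, w):
    # Excess |lambda(w)| - |lambda(u)| - |lambda(v)| of a boundary (u,v;w);
    # it is nonnegative whenever (u,v;w) is the boundary of a TFPL.
    return inversions(w) - inversions(u) - inversions(v)

print(excess('0101111', '0011111', '1101101'))  # 5
```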
\section{Introduction} The almost perfect isotropy of the cosmic microwave background (CMB) is among the pillars of the cosmological standard model according to which our universe can be described, at large scales, as a Friedmann-Lemaitre-Robertson-Walker (FLRW) universe with small perturbations. This isotropy comes at different levels (see \cite{Akrami:2018vks} for CMB data from Planck and \cite{Hoffman:2015waa} for the peculiar velocities, or \cite{Tanabashi:2018oca} for a useful summary): the actual observations (terrestrial or from satellites) show deviations in temperature of $\d T/T \approx 0.12 \%$, but once the dipole contribution is subtracted, this improves to a value of $\delta_\mathrm{df} \approx 10^{-5}$ (here and in the following, we abbreviate $\d T/T$ by $\d$ and use subscripts such as `df' for `dipole-free' to indicate which observer we are referring to). This means that an observer passing through our solar system at a velocity of $370$\,km/sec (in the right direction) will see the latter spectacularly small level; on the other hand, an observer comoving with our local galaxy group sees an anisotropy of $\delta_\mathrm{lg}\approx 0.2\%$. According to the Copernican principle, the situation should be similar at most locations in the present era. It is important to note the difference between $\delta_\mathrm{df}$ and $\delta_\mathrm{lg}$, not only in size ($\delta_\mathrm{df} \approx 10^{-5}\ll \delta_\mathrm{lg} \approx 2\times 10^{-3}$), but also in quality: whereas $\delta_\mathrm{df}$ is determined by a full celestial sphere's worth of observations, the value of $\delta_\mathrm{lg}$ comes from a single draw from a distribution with mean zero. For these reasons, we would very much prefer the use of $\delta_\mathrm{df}$ over that of $\delta_\mathrm{lg}$ in an analysis of the structure of the universe.
In other words, we want to work in a frame comoving with the CMB, not with the matter. The wavelength of a CMB photon is the product of its value at last scattering and the redshift factor picked up on the way to the observer. Unless one believes in strange nonlocal correlations between the two, one can only conclude that neither the original wavelength nor the redshift factor should feature deviations that are larger than the ones seen by the observer. In the present work we will be interested only in the extremely precise matching of the redshifts in the different directions. The celebrated Ehlers-Geren-Sachs (EGS) theorem \cite{Ehlers:1966ad} states that the existence of a perfectly isotropic radiation background combined with reasonable assumptions on the matter content of the universe implies FLRW. There are a number of generalizations to `almost EGS' theorems (e.g.\ \cite{Stoeger:1994qs,Clarkson:1999yj,Rasanen:2009mg,Clarkson:2010uz}) stating that small deviations from isotropy should lead only to small deviations from FLRW; see section 11.1 of Ref.\ \cite{emm} for a very clear summary. These works usually (with an exception in \cite{Rasanen:2009mg}) assume that the radiation 4-velocity (i.e.\ the velocity field $\udf$ of the dipole-free observers) is geodesic. This is an additional input which can be argued for only if one does not distinguish the CMB frame from the matter frame. Thus it holds only at the level of $\delta_\mathrm{lg}$, not at the level of $\delta_\mathrm{df}$. In the present work we are interested in precision at the level of $\delta_\mathrm{df} \approx 10^{-5}$, so we do \emph{not} take the radiation velocity to be geodesic. Our analysis will rely on redshift rather than distribution functions for the radiation, which simplifies matters considerably.
The timelike vector field $\udf$ that determines a preferred observer at every spacetime point can, in principle, be completed to an orthonormal frame $\{e_0 = \udf,\,e_1,\,e_2,\,e_3\}$ which we would call a CMB frame. In practice the requirement of a vanishing dipole is highly nonlocal and therefore analytically intractable. Instead, we are going to work with a locally well-defined quantity which, as we shall explicitly verify, comes very close to defining the level of anisotropy. It turns out that this quantity can be simplified by a conformal transformation, and that the most important contributions to it can be eliminated by a gauge choice. The physical observable $\delta_\mathrm{df}} \def\dlg{\delta_\mathrm{lg}$ is of course gauge invariant and can therefore be computed in any gauge. Choosing the one suggested here makes it particularly transparent why $\delta_\mathrm{df}} \def\dlg{\delta_\mathrm{lg}$ is so small despite the fact that the actual universe shows a considerable amount of inhomogeneity. Working in this gauge significantly improves the tractability of light propagation compared to the synchronous and the longitudinal gauge, which are the ones that are used most frequently. An explicit comparison in linear perturbation theory shows that the metric perturbations in the new gauge are not much larger than those in the longitudinal gauge, which is usually considered to be optimal in that respect. In the next section we introduce a quantity that vanishes if an isotropically redshifted CMB is observed everywhere, and show how it simplifies under a conformal transformation. In section 3 we formulate a gauge that eliminates two of three contributions to this quantity and thereby comes close to defining a CMB frame; we also give explicit conditions on a metric implementing this gauge. Section 4 contains an analysis of this metric in linear perturbation theory and comparisons with other gauges. 
In the final section we argue that other distance measures are also well behaved in the new gauge, make some remarks on the controversy about the impact of inhomogeneities on the expansion of the universe, and discuss open questions about our gauge. \section{Redshift and conformal transformation} We consider a photon emitted at some point $x_e$ by a source moving along a worldline with a tangent vector $u_e$ normalized to $u_e^2 = g_{\m\n}u_e^\m u_e^\n = -1$, where $g_{\m\n}$ is the pseudo-Riemannian spacetime metric of type $-+++$. This photon propagates along a lightlike geodesic which we describe by an affine parameter $\l$ such that the tangent vector to the geodesic is $k^\m = dx^\m / d\l $. The redshift $z_{e\to o}$, as seen by an observer at $x_o$ whose worldline has the tangent vector $u_o$ (normalized to $u_o^2 = -1$), is determined by the well-known formula \beq 1+z_{e\to o} = {(u\cdot k)_e\0 (u\cdot k)_o}.\eeql{redsh} In an idealized universe in which every spacetime point admits a distinguished observer who sees a perfectly isotropically redshifted last scattering surface, there would exist a global vector field $u$ characterizing such observers, as well as a globally well defined function \beq a(x) = 1 + z_{\mathrm{lss}\to x} = \frac{(u\cdot k)_\mathrm{lss}}{(u\cdot k)_x} \eeql{ax} that determines this redshift. We could then determine the redshifts between preferred observers via \beq 1+z_{e\to o} = \frac{a(x_o)}{a(x_e)}\eeql{redsha} as a direct consequence of Eqs.~(\ref{redsh}) and (\ref{ax}). Along any geodesic described with an affine parameter $\l$ and tangent vector $k$, the value of $a(x) (u\cdot k)(x)$ would remain constant and therefore the quantity \beq d(x,k) = \frac{d}{d\l }[a(x) (u\cdot k)(x)] \eeql{dkdef} would have to vanish at every spacetime point $x$ for every lightlike tangent vector $k$ at $x$.
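This conservation law is easy to check numerically in the idealized case. The following sketch (our own illustration, not part of the paper's formalism; the matter-dominated scale factor $a(\eta)=\eta^2$ in conformal coordinates is an arbitrary choice) integrates a radial null geodesic in a spatially flat FLRW universe and verifies that $a\,(u\cdot k)$ remains constant along it, so that the redshift indeed reduces to the form of Eq.\ (\ref{redsha}):

```python
# Toy check: in ds^2 = a(eta)^2 (-deta^2 + dx^2) with a(eta) = eta^2 (assumed),
# a*(u.k) is conserved along null geodesics, i.e. d(x,k) = 0 in Eq. (dkdef).

def a_of(eta):                  # scale factor (matter domination in conformal time)
    return eta ** 2

def conf_hubble(eta):           # a'/a for a = eta^2
    return 2.0 / eta

def rhs(state):
    """Radial null geodesic equations for state = (eta, x, k^0, k^1)."""
    eta, x, k0, k1 = state
    H = conf_hubble(eta)
    return (k0, k1,
            -H * (k0 ** 2 + k1 ** 2),   # dk^0/dlambda = -Gamma^0_ab k^a k^b
            -2.0 * H * k0 * k1)         # dk^1/dlambda = -2 Gamma^1_01 k^0 k^1

def rk4_step(state, h):
    def shift(s, d, f):
        return tuple(si + f * di for si, di in zip(s, d))
    d1 = rhs(state)
    d2 = rhs(shift(state, d1, h / 2))
    d3 = rhs(shift(state, d2, h / 2))
    d4 = rhs(shift(state, d3, h))
    return tuple(s + h / 6 * (p + 2 * q + 2 * r + t)
                 for s, p, q, r, t in zip(state, d1, d2, d3, d4))

def u_dot_k(eta, k0):
    # comoving observer u^mu = (1/a, 0, 0, 0); g_00 = -a^2, hence u.k = -a k^0
    return -a_of(eta) * k0

state = (1.0, 0.0, 1.0, 1.0)    # emission at eta_e = 1 with null vector k^0 = k^1 = 1
c_emit = a_of(state[0]) * u_dot_k(state[0], state[2])
for _ in range(20000):          # integrate over an affine-parameter interval of 2
    state = rk4_step(state, 1e-4)
eta_o, k0_o = state[0], state[2]
c_obs = a_of(eta_o) * u_dot_k(eta_o, k0_o)

z_direct = u_dot_k(1.0, 1.0) / u_dot_k(eta_o, k0_o) - 1.0   # Eq. (redsh)
z_scale = a_of(eta_o) / a_of(1.0) - 1.0                     # Eq. (redsha)
print(abs(c_obs / c_emit - 1.0) < 1e-8)
print(abs(z_direct - z_scale) < 1e-8)
```

Both checks pass at the accuracy of the integrator.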
For an arbitrary timelike vector field $u$ and non-vanishing scalar $a$, where $d(x,k)$ need not vanish, a redshift formula can still be obtained by noting that \beq \ln[- a(x) (u\cdot k)(x)]_e^o = \int_e^o \frac{d(x,k)}{a(x) (u\cdot k)(x)} d\l \eeq implies \beq 1+z_{e\to o} = {(u\cdot k)_e\0 (u\cdot k)_o} = \frac{a(x_o)}{a(x_e)} \exp\(-\int_e^o \frac{d(x,k)}{a(x) (u_\r k^\r)(x)} d\l \)\, . \eeql{redshfD} In the following we would like to treat the requirement \beq \< d(x,k) \> = 0, \quad \< d(x,k)^2 \> \;\mathrm{small}, \eeql{dkcond} where $\< ~\cdots ~ \>$ should represent the average over the celestial sphere, \beq \< ~\cdots ~\> = \frac{1}{4\pi} \int \cdots ~d\O , \eeql{celav} as a local proxy for the conditions defining the CMB frame. Using the facts that differentiation by $\l$ corresponds to covariant differentiation along $k$ and that $k^\n k_{\m;\n} = 0$ we get \beq d(x,k) = k^\n [a(x) (u\cdot k)(x)]_{,\n} = a_{,\n} k^\n (u\cdot k) + au_{\m;\n} k^\n k^\m . \eeql{ddlauk} Motivated by the FLRW case, we introduce the conformally transformed quantities \beq {\hat g}_{\m\n} = a^{-2} g_{\m\n}, ~~~\hat u_\m = a^{-1}u_\m , ~~~\hat u^\m = {\hat g}^{\m\n} \uh_\n = a u^\m \eeql{hatted} with $\hat u_\m \hat u_\n {\hat g}^{\m\n} = u_\m u_\n g^{\m\n} = -1$. Then a short calculation gives \beq a^2 \uh_{\m \,\hat ;\,\n} = a u_{\m;\n} + a_{,\m}u_\n - a_{,\r} u^\r g_{\m\n} , \eeql{cdu} where $\hat ;$ denotes covariant differentiation with respect to $\hat g$.
Contraction with $k^\m k^\n$ shows that \beq d(x,k) = \D_{\m\n}(x)k^\m k^\n = \hat\D_{\m\n}(x)\hat k^\m \hat k^\n\eeql{dk} with \beq \D_{\m\n} = a u_{(\m;\n)} + a_{,(\m} u_{\n)} - a_{,\r} u^\r g_{\m\n} = a^2 \hat\D_{\m\n}, \quad \hat\D_{\m\n} = \uh_{(\m \,\hat ;\,\n)}.\eeql{Dmn} Thus Killing's equation $\uh_{(\m\sch\n)}=0$ implies $d(x,k)=0$, and with a little work the converse can also be shown. This corresponds to the well-known result \cite{Tauber:1961lbq} that a spacetime admits a perfectly isotropic CMB background if and only if its metric is conformal to a metric with a timelike Killing vector; this fact is essential for the derivation of the EGS theorem \cite{Ehlers:1966ad}. The standard decomposition (see e.g.\ chapter 4 of \cite{emm}) of \beq g_{\m\n} = -u_\m u_\n + h_{\m\n} \eeq into projection operators $-u_\m u_\n$ (timelike) and $h_{\m\n}$ (spacelike), with $u^\m h_{\m\n}=0$ and $h^{\m\n}h_{\m\n}=3$, (or, equivalently, $ {\hat g}_{\m\n} = -\uh_\m \uh_\n + \hat h_{\m\n}$ etc.) affords a decomposition of any symmetric tensor $\D_{\m\n}$ as \beq \D_{\m\n} = u_\m u_\n \D^\mathrm{St} + h_{\m\n} \D^\mathrm{Ss} - u_\m \D^\mathrm{V}_\n - u_\n \D^\mathrm{V}_\m + \D^\mathrm{T}_{\m\n} \eeql{Ddecomp} in terms of scalars ${\D^\mathrm{St}}$ and $\D^\mathrm{Ss}$ (related to the time and space projections, respectively), a vector $\D^\mathrm{V}_\m$ satisfying $\D^\mathrm{V}_\m u^\m=0$ and a symmetric tensor $\D^\mathrm{T}_{\m\n}$ satisfying $\D^\mathrm{T}_{\m\n} u^\m = 0$ and $\D^\mathrm{T}_{\m\n} h^{\m\n} = 0$. Assuming that we have parametrized the geodesic in such a way that $u\cdot k = -1$ at the point $x$ where we compute $d(x,k)$, writing \beq k^\m = u^\m + e^\m, \eeq and using the conditions $u^2 = -1$ and $k^2 = 0$, we find that \beq u \cdot e = 0, \quad e^2 = 1\quad \hbox{and}\quad e^\m h_{\m\n} = e_\n, \eeq i.e.\ $e$ must be a spacelike unit vector orthogonal to $u$.
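The decomposition (\ref{Ddecomp}) can be implemented and tested directly. The sketch below (an illustration we add here; the boost parameter and the random tensor are arbitrary choices) extracts the scalar, vector and tensor pieces of a symmetric tensor with respect to a boosted observer in Minkowski space and verifies their defining properties:

```python
import numpy as np

rng = np.random.default_rng(0)
g = np.diag([-1.0, 1.0, 1.0, 1.0])       # Minkowski metric, signature -+++
ginv = np.linalg.inv(g)

# a boosted timelike unit vector u (u.u = -1); the boost is an arbitrary choice
chi = 0.7
u_up = np.array([np.cosh(chi), np.sinh(chi), 0.0, 0.0])
u_dn = g @ u_up

h_dn = g + np.outer(u_dn, u_dn)          # spatial projector h_{mu nu}
h_up = ginv @ h_dn @ ginv                # h^{mu nu}
h_mix = ginv @ h_dn                      # h^mu_nu

D = rng.normal(size=(4, 4))              # random symmetric Delta_{mu nu}
D = 0.5 * (D + D.T)

D_St = D @ u_up @ u_up                   # Delta^St = Delta_{mn} u^m u^n
D_Ss = np.tensordot(h_up, D) / 3.0       # Delta^Ss = h^{mn} Delta_{mn} / 3
D_V = h_mix.T @ (D @ u_up)               # Delta^V_n = h^m_n Delta_{ms} u^s
D_T = (D - np.outer(u_dn, u_dn) * D_St - h_dn * D_Ss
         + np.outer(u_dn, D_V) + np.outer(D_V, u_dn))

print(abs(D_V @ u_up) < 1e-12)                 # Delta^V is orthogonal to u
print(np.allclose(D_T @ u_up, 0.0))            # Delta^T is orthogonal to u
print(abs(np.tensordot(h_up, D_T)) < 1e-12)    # Delta^T is trace-free w.r.t. h
recon = (np.outer(u_dn, u_dn) * D_St + h_dn * D_Ss
         - np.outer(u_dn, D_V) - np.outer(D_V, u_dn) + D_T)
print(np.allclose(recon, D))                   # decomposition reproduces Delta
```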
Applying this to Eq.~(\ref{dk}) with the decomposition (\ref{Ddecomp}), we find \beq d(x,k) = \D^\mathrm{S} + 2 \D^\mathrm{V}_\n e^\n + \D^\mathrm{T}_{\m\n}e^\m e^\n \quad \hbox{with} \quad \D^\mathrm{S} = \D^\mathrm{St} + \D^\mathrm{Ss}. \eeql{dDe} In order to evaluate averages of the type (\ref{celav}) we introduce spacelike unit vectors $e_1^\m$, $e_2^\m$, $e_3^\m$ that form a tetrad together with $u^\m$, and define $e^\m(\O) = \cos\varphi\, \sin\vartheta \, e^\m_1 + \ldots$ through standard spherical coordinates $\O = (\varphi, \vartheta)$; these quantities satisfy \beq \< e^{\m_1}\, \cdots \, e^{\m_{2p+1}}\> =0,\quad \< e^{\m} e^{\n}\> = \frac{1}{3}h^{\m\n}, \quad \< e^{\m} e^{\n}e^{\r} e^{\s}\> = \frac{1}{15}(h^{\m\n}h^{\r\s} + h^{\m\r}h^{\n\s} + h^{\m\s}h^{\n\r}). \eeql{eforms} Note how Eq.\ (\ref{dk}) expresses the quantity $d(x,k)$, which depends both on the spacetime coordinates $x^\m$ and the tangent space coordinates $k^\m$, in terms of the tensor quantity $\D_{\m\n}$ (depending \emph{only} on the $x^\m$) and the bilinear $k^\m k^\n$. Therefore $\D^\mathrm{S}$, $\D^\mathrm{V}_\n$ and $\D^\mathrm{T}_{\m\n}$ do not depend on $e^\m$, and one can directly apply (\ref{eforms}) to find \beq \<d(x,k)\> = \D^\mathrm{S},\quad \<d(x,k)^2\> = (\D^\mathrm{S})^2 + \frac{4}{3}h^{\m\n}\D_\m^\mathrm{V}\D_\n^\mathrm{V} + \frac{2}{15}\D_{\m\n}^\mathrm{T}h^{\n\r}\D_{\r\s}^\mathrm{T}h^{\s\m}. 
\eeq Returning to the specific form (\ref{Dmn}) of $\D_{\m\n}$, application of the projection operators (in the `hatted' version) gives $\hat\D^\mathrm{St} = 0$ (so that $\hat\D^\mathrm{S} = \hat\D^\mathrm{Ss}$) and \bea \hat\D^\mathrm{S} &=& \frac{1}{3}{\hat g}^{\m\n}\uh_{\m\sch\n}, \\ \hat\D_\m^\mathrm{V} &=& \2 \uh_{\m\sch\r}\uh^\r, \\ \hat\D_{\m\n}^\mathrm{T} &=& \uh_{(\m\sch\n)} - \hat h_{\m\n} \hat\D^\mathrm{S} + \uh_\m \hat\D_\n^\mathrm{V} + \uh_\n \hat\D_\m^\mathrm{V}, \eea i.e.\ these quantities correspond to the expansion, the acceleration and the shear of the timelike vector field $\uh$ with respect to the metric ${\hat g}$. This has the following effects on the redshift. In the integral in Eq.\ (\ref{redshfD}) we can write $(\hat\D_{\m\n} / \hat u_\r)k^\m k^\n$ instead of $d(x,k) / (a\, u_\r)$. Furthermore, since $(k^\m k^\n / k^\r) d \l$ is invariant under arbitrary reparametrizations of the geodesic, we can replace it by $(\tilde k^\m \tilde k^\n / \tilde k^\r) d \tilde \l$ with $\tilde k^\m = \hat u^\m + \hat e^\m$ chosen such that $\hat u_\r \tilde k^\r = -1$ everywhere along the geodesic; the factor $\hat\D_{\m\n}$ is unaffected because it depends only on $x$, not on $k$. Thus the argument of the exponential in Eq.\ (\ref{redshfD}) becomes $\int_e^o \hat \D_{\m\n}\, \tilde k^\m \, \tilde k^\n d\tilde\l$. Then, using the analog of Eq.\ (\ref{dDe}) for the metric $\hat g$, we get \beq 1+z_{e\to o} = \frac{a(x_o)}{a(x_e)} \exp\(\int_e^o (\hat \D^\mathrm{S} + 2 \hat \D^\mathrm{V}_\n \hat e^\n + \hat \D_{\m\n}^\mathrm{T}\,\hat e^\m \, \hat e^\n ) d\tilde\l\) \eeql{redshgen} for our preferred sources and observers whose worldlines have tangent vectors $u^\m$. If the actual emitter (`ae') and actual observer (`ao') have different tangent vectors (but the same positions), we must of course correct this via \beq 1+z_{ae\to ao} = (1+z_{ae\to e}) (1+z_{e\to o}) (1+z_{o\to ao}) ,\eeql{Doppler} where $1+z_{ae\to e}$ and $1+z_{o\to ao}$ are just the standard special-relativistic Doppler factors coming from the relative velocities between the actual and preferred sources and observers, respectively. \section{Gauge choice and metric} The actual universe features deviations from homogeneity, so we do not expect all components of $\D_{\m\n}$ to vanish. Why can we nevertheless find a local frame in which the CMB has almost exactly the same temperature in all directions? We propose that this can be explained in the following manner. Eqs.\ (\ref{redshgen}), (\ref{Doppler}) give the correct redshift for arbitrary sources and observers and arbitrary functions $a(x)$ and vector fields $u(x)$. The result is of course independent of the choice of $a$ and $u$; for most choices, several of the factors occurring in Eqs.\ (\ref{redshgen}), (\ref{Doppler}) will get large or small, and the computation of the CMB redshift will involve cancellations between these factors.
If, however, we choose our setup such that $a(x)$ varies very little on the last scattering surface, the relative velocities of the CMB sources are very small, and the observer is the preferred one, then the only factor that can still exhibit a strong direction dependence is the exponential occurring in Eq.\ (\ref{redshgen}). If we want to interpret the average of $a(x_o)/a(x_e)$, with the source positions $x_e$ on the last scattering surface, as `the' redshift, and every other factor as providing at most a further small fluctuation, we need to ensure that the integral in Eq.\ (\ref{redshgen}) is small. We suggest achieving this by choosing $a$ and $u$ in such a way that \beq \D^\mathrm{S} = 0, \quad \D_\m^\mathrm{V} = 0, \eeql{gc} which is an admissible \emph{gauge choice}. Indeed, $\D^\mathrm{S}$ and $\D_\m^\mathrm{V}$ correspond to $1+3=4$ degrees of freedom, which is just the number of quantities that can be fixed by a gauge. This choice reduces the redshift formula (\ref{redshgen}) to \beq 1+z_{e\to o} = \frac{a(x_o)}{a(x_e)} \exp\(\int_e^o \hat \D_{\m\n}^\mathrm{T}\,\hat e^\m \, \hat e^\n d\tilde\l\). \eeql{redshfDn} The tracelessness of $\hat \D_{\m\n}^\mathrm{T}$ together with statistical isotropy ensures that the integrand $\hat \D_{\m\n}^\mathrm{T}\,\hat e^\m \, \hat e^\n$ has vanishing expectation value, and in the next section we shall also see that it vanishes in linear perturbation theory. Thus it is not so surprising that the integral is small.
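This smallness argument can be spot-checked by Monte Carlo averaging over directions. The sketch below (our own toy check, in an orthonormal spatial frame, with randomly chosen values for $\D^\mathrm{S}$, $\D^\mathrm{V}$ and a trace-free $\D^\mathrm{T}$) verifies the second identity of (\ref{eforms}), the vanishing average of the trace-free integrand, and the closed-form expressions for $\<d\>$ and $\<d^2\>$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400_000
e = rng.normal(size=(N, 3))
e /= np.linalg.norm(e, axis=1, keepdims=True)    # uniform unit vectors on the sphere

# <e^i e^j> = delta^ij / 3, Eq. (eforms)
second = np.einsum('ni,nj->ij', e, e) / N
print(np.allclose(second, np.eye(3) / 3.0, atol=5e-3))

# random scalar/vector/tensor data (sample values, orthonormal spatial frame)
DS = 0.3
DV = 0.4 * rng.normal(size=3)
DT = 0.4 * rng.normal(size=(3, 3))
DT = 0.5 * (DT + DT.T)
DT -= np.trace(DT) / 3.0 * np.eye(3)             # make the tensor part trace-free

d = DS + 2.0 * e @ DV + np.einsum('ni,ij,nj->n', e, DT, e)

mean_pred = DS                                   # <d> = Delta^S
msq_pred = DS**2 + 4.0/3.0 * DV @ DV + 2.0/15.0 * np.tensordot(DT, DT)

print(abs(np.einsum('ni,ij,nj->n', e, DT, e).mean()) < 2e-2)  # <D^T e e> ~ 0
print(abs(d.mean() - mean_pred) < 2e-2)
print(abs((d ** 2).mean() - msq_pred) / msq_pred < 5e-2)
```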
In terms of the original timelike field $u$, the effects of this choice on the expansion $\Theta = h^{\m\n}u_{\m ;\n}$, the acceleration $\dot u_\m = u_{\m;\r}u^ \r$, the shear $\s_{\m\n} = u_{\m;\n}^\mathrm{PSTF}$ (the projected symmetric tracefree part of $u_{\m;\n}$, i.e.\ what remains after symmetrizing, projecting with $h$ and removing the $h$-trace) and the vorticity $\o_{\m\n} = h_\m{}^\r h_\n{}^\s u_{[\r;\s]}$ are easily found with the help of Eq.~(\ref{cdu}): \beq \dot u_\m = h_\m{}^\n\frac{a_{,\n}}{a}, \quad \Theta = 3 u^\r \frac{a_{,\r}}{a}, \quad \s_{\m\n} = a \uh_{(\m\sch\n)}, \quad \o_{\m\n} = a \uh_{[\m\sch\n]}. \eeq In words, expansion and acceleration correspond to the timelike and spacelike components of $(\ln a)_{,\m}$, respectively; shear and vorticity are multiples of the corresponding quantities in the conformally transformed frame. Let us now find explicit coordinates that implement our gauge (\ref{gc}). Choosing $\uh$ to be the vector with components $\uh^0=1$ and $\uh^i=0$, we get ${\hat g}_{00}=-1$, $\uh^\m{}_{\sch\r} = \uh^\m{}_{,\r} + \Gh^\m{}_{\r\n}\uh^\n = \Gh^\m{}_{\r 0}$ and therefore \beq \uh_{\m\sch\r} = \Gh_{\m\r 0}. \eeql{uhGh} Upon demanding $ 0 = 2\hat\D^\mathrm{V}_\m = \uh_{\m\sch\r} \uh^\r = \Gh_{\m 0 0} = {\hat g}_{\m 0,0}$, the metric takes the form $ds^2 = a^2\,d\hat s^2$ with \beq d\hat s^2 = -(dx^0 - V_i\, dx^i)^2 + \g_{ij}dx^idx^j, \eeql{shat} where $a$ and $\g_{ij}$ can depend on all coordinates $x^\m$ whereas $V_i$ depends only on the spatial coordinates $x^j$. The inverse metric ${\hat g}^{\m\n}$ has the components \beq {\hat g}^{00} = -1 + V_i \g^{ij}V_j,\quad {\hat g}^{0j} = \g^{jk}V_k, \quad {\hat g}^{ij} = \g^{ij}, \eeq where $\g^{ij}$ is defined by the requirement $\g^{ij}\g_{jk} = \d^i_k$. In matrix notation, the original metric and its inverse are given by \beq g = a^2 \pmatrix{ -1 & V^T \cr V & \g - V V^T } ,\qquad g^{-1} = a^{-2}\pmatrix{ -1 + V^T \g^{-1} V & V^T \g^{-1}\cr \g^{-1} V & \g^{-1} } .
\eeql{matmet} Finally, $ 0 = 6\hat \D^\mathrm{S} = 2{\hat g}^{\m\n}\uh_{\m\sch\n} = 2{\hat g}^{\m\n}\Gh_{\m\n 0} = {\hat g}^{\m\n}({\hat g}_{\m\n,0} + {\hat g}_{\m 0,\n} - {\hat g}_{\n 0,\m}) = {\hat g}^{\m\n}{\hat g}_{\m\n,0} = {\hat g}^{ij}\g_{ij,0} = \mao{tr}(\g^{-1}\g_{,0}) = (\mao{tr}\ln\g)_{,0} = (\ln\det\g)_{,0}$ implies $x^0$-independence of $\det \g$. The conditions $V_{i,0} = 0$ and $(\det \g)_{,0} = 0$ do not completely fix the form of our metric (\ref{shat}). For example, they also hold in a transformed frame $\{\tilde x^\m\}$ with \beq \tilde x^0 = x^0 + f(x^j), \quad \tilde x^i = \tilde x^i(x^j). \eeql{resgi} We can use parts of this freedom to assign a single time coordinate to the initial singularity and to set $\det \g = 1$. \section{Linear perturbation theory} We would now like to consider the consequences of our gauge choice (\ref{gc}) in the context of linear perturbation theory \cite{Bardeen:1980kt}. Our notation will be similar to that of Refs.\ \cite{Mukhanov:1990me,emm} which we also recommend for further details. A metric corresponding to a small perturbation of the conformally flat case is given, before gauge fixing, by \beq ds^2 = \ah^2(x^0) \{- (1+2\phi) (dx^0)^2 + 2 (B_{,i}-S_i)dx^i dx^0 + [(1-2\psi) \d_{ij} + 2 E_{,ij} + 2 F_{(i,j)} + h_{ij}] dx^idx^j\}; \eeql{linmet} here $\ah(x^0)$ represents the scale factor for the corresponding homogeneous case $(g_{\mathrm{h}})_{\m\n} = \ah^2 \eta_{\m\n}$; $\phi$, $\psi$, $B$ and $E$ are scalars; $S_i$ and $F_i$ are transverse vectors (i.e.\ they satisfy $\d^{ij}S_{i,j} = 0$ and $\d^{ij}F_{i,j} = 0$); $h_{ij}$ is a symmetric traceless transverse tensor ($h_{ij} = h_{ji}$, $\d^{ij}h_{ij}=0$, $\d^{ik}h_{ij,k}=0$). The gauge freedom $x^\m\to \tilde x^\m(x^\n)$ can be expressed at the linearized level in terms of a transverse vector $\xi^i$ and scalars $\xi^0$ and $\xi$; the corresponding transformations \bea &\tilde \phi = \phi - \frac{\ah'}{\ah}\xi^0 -{\xi^0}_{,0}, \quad \tilde \psi = \psi + \frac{\ah'}{\ah}\xi^0, \quad \tilde B = B + \xi^0 - \xi_{,0}, \quad \tilde E = E - \xi,&\\ & \tilde F_i = F_i -\xi_i, \quad \tilde S_i = S_i + \xi_{i,0}, \quad \tilde h_{ij} = h_{ij} & \eea can then be used to eliminate two of the scalars and one of the transverse vectors. The two most popular gauge choices are longitudinal gauge with $B=E=0$ (usually accompanied by neglecting vector and tensor modes), and synchronous gauge, which manifests itself at the linearized level as $\phi = B = 0$, $S_i=0$.
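Returning briefly to the exact metric: the block form of $g$ and its inverse in Eq.\ (\ref{matmet}) can be verified directly. In the sketch below (the sample values for $a$, $V_i$ and the near-Euclidean $\g_{ij}$ are arbitrary, chosen only for illustration) the stated inverse multiplies the metric to the identity at machine precision:

```python
import numpy as np

rng = np.random.default_rng(2)
a = 1.7                                   # conformal factor (sample value)
V = 0.2 * rng.normal(size=3)              # V_i (sample values)
M = 0.1 * rng.normal(size=(3, 3))
gam = np.eye(3) + 0.5 * (M + M.T)         # spatial metric gamma_ij, near-Euclidean

# metric in block form, Eq. (matmet):  g = a^2 [[-1, V^T], [V, gamma - V V^T]]
g = np.empty((4, 4))
g[0, 0] = -1.0
g[0, 1:] = g[1:, 0] = V
g[1:, 1:] = gam - np.outer(V, V)
g *= a ** 2

gaminv = np.linalg.inv(gam)
ginv = np.empty((4, 4))                   # the inverse as stated in Eq. (matmet)
ginv[0, 0] = -1.0 + V @ gaminv @ V
ginv[0, 1:] = ginv[1:, 0] = gaminv @ V
ginv[1:, 1:] = gaminv
ginv /= a ** 2

print(np.allclose(g @ ginv, np.eye(4)))   # the stated inverse is exact
```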
A well-known solution to the Einstein equations for irrotational dust with $\L = 0$ (hence $\ah = \mathrm{const} \times (x^0)^2$), which is believed to give a good description of the early matter dominated era of our universe, relies on a single time-independent function $\phi_\mathrm{N}$ which is just the Newtonian potential. In the longitudinal gauge this solution is given by $\phi_\mathrm{long} = \psi_\mathrm{long} = \phi_\mathrm{N}$; it can be transformed to the synchronous gauge via $\xi^0 = x^0 \phi_\mathrm{N} /3$, $\xi = (x^0)^2 \phi_\mathrm{N} /6$, resulting in $E_\mathrm{sync} = -(1/6) (x^0)^2 \phi_\mathrm{N}$, $\psi_\mathrm{sync} = (5/3) \phi_\mathrm{N}$. In the latter case, second derivatives of $\phi_\mathrm{N}$ occur in the metric and tend to make the perturbations large for moderate $x^0$, which is often used as an argument against employing the synchronous gauge in situations other than the very early universe. What about the gauge (\ref{gc}) and the corresponding metric (\ref{matmet})? If we assume that we have used some of our residual gauge freedom to set $\det \g =1$, then in the linearized version $\g_{ij} - \d_{ij}$ must be traceless. Writing $a = (1+\phi )\ah$, this implies $\d^{ij}E_{,ij}=3(\phi + \psi)$. It turns out that without violating our gauge conditions we can set $B$ and $S_i$ to zero, so that the metric becomes (up to quadratic and higher terms) \beq ds^2 = \ah^2(x^0) (1+2\phi) \{- (dx^0)^2 + [\d_{ij} + 2 (E_{,ij} - \frac{1}{3}\d^{kl}E_{,kl}\d_{ij}) + 2 F_{(i,j)} + h_{ij}] dx^idx^j\}. \eeql{mymetp} For the special solution considered above we can get to this form by applying a transformation with $\xi^0 = 0$, $\xi^i = 0$ and $\xi$ satisfying $\xi_{,0} =0$ and $\d^{ij}\xi_{,ij} = - 6 \phi_\mathrm{N}$ to the metric in the longitudinal gauge. This results in $\phi = \phi_\mathrm{N}$ and $E$ chosen such that $\d^{ij}E_{,ij} = 6 \phi_\mathrm{N}$. Thus we can interpret $E$ as a gravitational prepotential. 
In particular, the expressions $E_{,ij}$ occurring in the metric should be roughly of the same order of magnitude as $\phi_\mathrm{N}$. It is instructive to apply our formalism to the metric (\ref{linmet}) that is not restricted by a gauge choice. Considering the preferred observer to be the comoving one, we get $a(x) = \ah(x^0)(1+\phi)$ and \beq d\hat s^2 = - (dx^0)^2 + 2 (B_{,i}-S_i)dx^i dx^0 + [(1-2\psi-2\phi) \d_{ij} + 2 E_{,ij} + 2 F_{(i,j)} + h_{ij}] dx^idx^j. \eeql{linmethat} Using Eq.\ (\ref{uhGh}), we find $\hat \D_{\m\n} = \uh_{(\m\sch\n)} = \Gh_{(\m\n) 0} = \2 {\hat g}_{\m\n,0}$ for a general ${\hat g}_{\m\n}$. It is straightforward to compute and decompose this expression for the metric (\ref{linmethat}), resulting in \bea \hat\D^\mathrm{S} &=& -\psi_{,0} -\phi_{,0} + \frac{1}{3}\d^{ij}E_{,ij0}, \\ \hat\D_i^\mathrm{V} &=& \2 (B_{,i} - S_i)_{,0}, \\ \hat\D_{ij}^\mathrm{T} &=& E_{,ij0} - \frac{1}{3}\d_{ij}\d^{kl}E_{,kl0} + F_{(i,j)0} + \2 h_{ij,0}. \eea We see again how the metric (\ref{matmet}) ensures the vanishing of $\D^\mathrm{S}$ and $\D^\mathrm{V}$. Expanding Eqs.\ (\ref{redshgen}), (\ref{Doppler}), with source and observer velocities of $v^i_e$ and $v^i_o$, respectively, to the linear level, results in \beq 1+z_{ae\to ao} = \frac{\ah(x_o)}{\ah(x_e)}\{1 + [\phi + v_i\hat e^i]_e^o + \int_e^o[-\psi_{,0} - \phi_{,0} + (B_{,i} - S_i)_{,0}\hat e^i + (E_{,ij0} + F_{(i,j)0} + \2 h_{ij,0})\hat e^i \hat e^j ] d\tilde \l \} . \eeql{linredsh} This expression is in full agreement with corresponding results in the literature. (To get, for example, Eq.\ (11) of Ref.\ \cite{Yoo:2009au}, one has to note several different naming and sign conventions including the directions of the unit vectors, and to partially integrate the $(B_{,i} - S_i)$-term.)
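The longitudinal-to-synchronous transformation quoted above for the dust solution can also be rechecked mechanically from the linearized transformation rules, since with $\ah \propto (x^0)^2$ all derivatives are elementary. A minimal sketch (the numerical value standing in for the constant $\phi_\mathrm{N}$ is arbitrary):

```python
# Check of the quoted longitudinal -> synchronous transformation for the
# Lambda = 0 dust solution: a_hat ~ (x^0)^2, phi_long = psi_long = phi_N
# (time-independent), with xi^0 = x^0 phi_N / 3 and xi = (x^0)^2 phi_N / 6.
pN = 1.3e-3                           # constant Newtonian potential (sample value)
ok = True
for x0 in (0.5, 1.0, 7.0):
    H = 2.0 / x0                      # a_hat'/a_hat for a_hat ~ (x^0)^2
    xi0 = x0 * pN / 3.0
    dxi0 = pN / 3.0                   # d(xi^0)/dx^0
    xi = x0 ** 2 * pN / 6.0
    dxi = x0 * pN / 3.0               # d(xi)/dx^0
    phi_s = pN - H * xi0 - dxi0       # tilde(phi): should vanish
    psi_s = pN + H * xi0              # tilde(psi): should equal (5/3) phi_N
    B_s = 0.0 + xi0 - dxi             # tilde(B): should vanish
    E_s = 0.0 - xi                    # tilde(E): should equal -(1/6)(x^0)^2 phi_N
    ok &= abs(phi_s) < 1e-15
    ok &= abs(B_s) < 1e-15
    ok &= abs(psi_s - 5.0 / 3.0 * pN) < 1e-15
    ok &= abs(E_s + x0 ** 2 * pN / 6.0) < 1e-15
print(ok)
```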
As explained in detail in Ref.\ \cite{Yoo:2009au}, Eq.\ (\ref{linredsh}) contains all the standard contributions to the redshift, such as, for example, the Sachs-Wolfe effect \cite{Sachs:1967er}. For the dust solution considered above, neither the longitudinal gauge nor the gauge advocated here leads to corrections at the linearized level since the linearized fields are $x^0$-independent in these gauges; in contrast to this, the synchronous gauge features corrections because $E_\mathrm{sync} = -(1/6) (x^0)^2 \phi_\mathrm{N}$, consistent with observations which show that the matter frame (the preferred frame in the synchronous gauge) substantially differs from the CMB frame. While linear perturbation theory provides an excellent description of the early universe, nonlinearities do play an important role in later eras, and this is where we expect differences between the gauge (\ref{gc}) and some nonlinearly consistent version of the longitudinal gauge such as the Poisson gauge to manifest themselves. In the simplified model mentioned above one could compute the source velocities as the matter velocities $v_i = T_{0i}/T_{00} = G_{0i}/G_{00}$ from the components of the energy-momentum tensor and therefore from the Einstein tensor, but this would neglect the different motions of visible and dark matter. A complete analysis of the CMB fluctuations would include an early, perturbative part in which these and many more details are taken into account; this would include the temperature variations, the actual source velocities taking into account the incomplete alignment of dark and hadronic matter, contributions of the radiation field to the energy-momentum tensor, etc. This can be done with the perturbative version (\ref{mymetp}) of the metric (\ref{matmet}), or by transforming results obtained in any other gauge to the present setup.
At a point in the history of the universe where linear perturbation theory is still a good approximation but radiation can already be neglected, one should then hand over to a fully relativistic $\L$CDM simulation in the gauge (\ref{gc}). Let us briefly summarize the results of this section. The present formalism passes the consistency check of providing the correct linearized redshift formula (\ref{linredsh}) in a general gauge. Our metric is well behaved: in contrast to the synchronous gauge, the linearized expressions do not exhibit a time dependence that would quickly lead to trouble. The integral occurring in the redshift formula (\ref{redshgen}), which represents those deviations from the uniform case that cannot be attributed to properties of the sources, vanishes at first order of perturbation theory in a simple matter-only model, both in longitudinal gauge and in the gauge (\ref{gc}), but only in the latter do the first two contributions $\hat\D^\mathrm{S}$ and $\hat\D^\mathrm{V}_\n\hat e^\n$ vanish at all orders. The remaining quantity $\hat\D^\mathrm{T}_{\m\n} e^\m e^\n$ has an expectation value of zero at all orders. These facts make our formalism particularly useful for understanding why we observe almost perfect isotropy of the CMB despite the existence of severe inhomogeneities in the non-linear era. \section{Concluding remarks} Observational cosmology relies not only on the redshift, but also on other distance measures such as the angular diameter distance and the luminosity distance. These quantities can be computed via arguments based on fluxes. For known redshift, one can use a comparison between the total number of photons emitted per unit of time in a specific frequency range, and the number of photons, in the appropriately transformed frequency range, arriving in a given area at the observer's location.
Because of the non-acceleration and non-expansion of the vector field $\uh$ with respect to ${\hat g}$, the number of photons arriving per unit of $x^0$ (the time coordinate related to $\uh$) on a suitable hypothetical screen enveloping the source must be identical to the number of photons emitted during the corresponding $x^0$-interval of the same duration (as measured with ${\hat g}$). Therefore, on average the photon count with respect to ${\hat g}$ behaves like the photon count in a static universe. Upon proper rescalings of the time and area values with the corresponding powers of $a$ one gets formulas for averaged fluxes that are identical in form with those for a homogeneous universe, but with $\ah$ replaced by $a$. Thus the overall expansion, as inferred from measured redshift-distance relations, is given straightforwardly by the values of $a$ at the sources and at our spacetime position. There have been suggestions (for a small subset, see e.g.\ \cite{Schwarz:2002ba,Wiltshire:2007jk,Buchert:1999er,Rasanen:2006kp,Buchert:2007ik,Skarke:2015yza}) that the perceived acceleration of the universe's expansion may not be due to a cosmological constant or dark energy, but to some effect stemming from the inhomogeneities of the actual universe. This possibility is rejected in papers such as \cite{Ishibashi:2005sj,Green:2014aga}, giving rise to further rounds of controversy \cite{1505.07800,1506.06452}. One of the main points of \cite{Ishibashi:2005sj,Green:2014aga} is an attack on the choice of synchronous gauge on which many attempts to explain the data without $\L$ are based; instead the use of the longitudinal gauge is advocated. From the present work it is clear that neither of these gauges is as directly related to observations as the one presented here in Eq.~(\ref{gc}).
This makes a thorough investigation of the properties and consequences of this gauge choice highly desirable. Open questions include the following. What residual gauge freedom is there beyond that indicated in (\ref{resgi})? Is the possibility of setting ${\hat g}_{0i} = V_i$ to zero general or specific to linear perturbation theory? What are the Einstein equations in linear and second order perturbation theory, for collisionless dust and more generally? Can we reproduce arguments along the lines of \cite{Ishibashi:2005sj,Green:2014aga}? What can we say beyond perturbation theory, either by analytic arguments or numerically? \noindent {\it Acknowledgements:} It is a pleasure to thank Dominik Schwarz for discussions.
\section{Introduction} \label{sec:introduction} Over the past few decades, quantum mechanical calculations based on Kohn-Sham Density Functional Theory (KS-DFT) have provided important insights into a variety of material systems \citep{Martin_ES, LeBris_ReviewBook, Saad_Chelikowsky_Shontz_review}. One of the most widely used and successful methods for the numerical solution of the equations of Kohn-Sham theory is the pseudopotential plane-wave method \citep{Kresse_abinitio_iterative, Kresse_metal_semiconductor, Hutter_abinitio_MD, Teter_Payne_Allan_2, Barnett_Landman}, currently available in a number of software packages \citep{Kresse_abinitio_iterative, Gonze_ABINIT_1, CASTEP_1, Quantum_Espresso_1}. The advantages of plane-waves include the fact that they are orthonormal and therefore result in simple discretized expressions. They also form a complete basis, allowing for systematic convergence with increasing basis set size, governed by a single parameter, the energy cutoff. The global nature of the plane-wave basis also results in minimal user intervention in terms of basis set choice. Being a Fourier basis, plane-waves allow for spectral convergence, leading to highly accurate numerical solutions \citep{Cances_planewave_numerical_analysis}. Further, the independence of the basis functions from atomic positions results in the absence of (the otherwise difficult to compute) Pulay forces \citep{CASTEP_1}. On the downside, while the plane-wave method is ideally suited for the study of periodic systems such as crystals, its application to non-periodic systems such as molecules and clusters is more limited due to the need to introduce artificial periodicity in the form of the supercell method \citep{Martin_ES, Hutter_abinitio_MD, Rappe_planewaves_molecules}. In addition, while studying such systems, plane-wave codes can only take advantage of symmetry groups which are compatible with translational symmetry (such as some of the crystallographic point groups).
Alternatives to the plane-wave approach include the use of atom centered basis functions such as Gaussians and atomic orbitals \citep{LCAO_3, LCAO_famous, SIESTA_1}, as well as real space discretization approaches such as finite differences and finite elements \citep{Chelikowsky_Saad_1, Octopus_1, Pask_FEM_review, Gavini_Kohn_Sham, Gavini_higher_order}. Atom centered basis functions generally require fewer basis functions per atom compared to plane-waves, but these basis sets are usually incomplete and suffer from basis set superposition errors \citep{LeBris_ReviewBook}. Thus, they have issues with systematic convergence. Finite element methods, in contrast, have systematic convergence properties, but both the quality of the solution and the efficiency of the method depend heavily on the quality of the mesh as well as on the type of element used for the calculation \citep{Gavini_higher_order}. With the use of higher order finite elements in pseudopotential calculations, simple uniform meshes usually suffice, but mesh-coarsening is generally required for obtaining high efficiency in the vacuum region while dealing with isolated systems \citep{Gavini_higher_order}. Readers may refer to \citep{Saad_Chelikowsky_Shontz_review} for a more general review of various basis sets and numerical methods that are in common use today for solution of the Kohn-Sham problem. From the above discussion, it is clear that it would be highly desirable to have methods which are very similar to the plane-wave method but designed for systems which are non-periodic. Accordingly, in this work, we develop a scheme that is in many respects an exact analog of the plane-wave method, but one designed with isolated systems such as clusters and molecules in mind.
Ab initio studies of clusters, including various fullerenes and nanostructures, have received and continue to receive considerable attention in different contexts \citep{Optical_magnetic_Boron_fullerene, Chelikowsky_silicon_nanostructures, PARSEC, Parallel_Chebyshev, Abinito_Fullerenes_Science, Small_Metal_Clusters, B80_abinitio, Gold_atomic_electronic_structure, Super_Atoms_1, Super_Atoms_2}. The methodology developed in this work is therefore likely to be useful for carrying out first principles studies of such systems in a consistent, systematic and efficient manner. In order to formulate the appropriate basis functions for our method, we first make the observation that plane-waves are eigenfunctions of the periodic Laplacian. Using eigenfunctions of the Laplacian as basis functions leads to numerous advantages, including the fact that the kinetic energy operator is diagonalized in such a basis. Accordingly, our method uses eigenfunctions of the Dirichlet Laplacian (i.e., the Laplacian operator with Dirichlet boundary conditions) in a spherical domain as the basis set. Our basis functions are expressible as the product of spherical harmonics with spherical Bessel functions.\footnote{For the purpose of clarity, we emphasize that our basis functions are centered on the origin; i.e., we are dealing with a molecule centered basis set as opposed to an atom centered basis set.} Let us remark that spherical (or near spherical) domains have been employed earlier for the study of cluster systems in finite difference and finite element methods \citep{PARSEC, Gavini_Kohn_Sham}. To the best of our knowledge however, this is the first work to make systematic use of Laplacian eigenfunction expansions in non-periodic domains for electronic structure calculations.
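The statement that a Laplacian eigenbasis diagonalizes the kinetic energy operator can be checked directly in a small numerical sketch. The following is our own illustration, using a hypothetical one-dimensional finite-difference Dirichlet Laplacian rather than the three-dimensional spherical operator of the paper:

```python
import numpy as np

# Discrete 1D Dirichlet Laplacian (finite differences), a stand-in for
# the continuous operator whose eigenfunctions serve as basis functions.
N = 200
h = 1.0 / (N + 1)
lap = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

# Orthonormal eigenbasis of the (negative) Laplacian.
evals, evecs = np.linalg.eigh(lap)

# The kinetic operator T = -(1/2) * Laplacian, expressed in this basis,
# is diagonal: its off-diagonal part is pure roundoff.
T = evecs.T @ (0.5 * lap) @ evecs
off = T - np.diag(np.diag(T))
print(np.max(np.abs(off)) / np.max(np.abs(T)))  # tiny (roundoff level)
```

The diagonal entries of `T` are just half the Laplacian eigenvalues, which is what makes the kinetic part of the Hamiltonian trivial to apply in reciprocal space.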
Spherical basis functions have been used in earlier work to compute electronic properties of small metallic clusters \citep{electronic_sodium_magnesium_clusters, spherical_averaged_jellium} as well as of $\textrm{C}_{60}$ \citep{C60_in_spherical_basis, Broglia_original_paper}. These basis functions have the distinct advantage that for many cluster systems, the Kohn-Sham eigenstates (molecular orbitals) and their symmetry properties are relatively easy to interpret using the quantum numbers associated with the basis functions\footnote{This allows systems such as super atoms \citep{Super_Atoms_1, Super_Atoms_2} to be studied conveniently.} themselves \citep{solid_state_finite}. As explained in \citep{Broglia_original_paper}, the choice of spherical basis functions is usually motivated by the fact that the systems under study are nearly spherical. We show in this work, however, that such a constraint on the system under study is unnecessary\footnote{This is owing to the fact that we have a complete orthonormal basis.} and that a wide variety of cluster systems, including ones which are far from spherical, can be studied efficiently with our method. In contrast to our use of spherical Bessel functions, the radial part of the spherical basis functions used in these aforementioned studies has typically been obtained by solving a one dimensional radial eigenvalue problem. In order to avoid computational complexity, many of the aforementioned studies use a simplified treatment of the electron-nucleus interaction in the form of simple-jellium or pseudo-jellium models \citep{spherical_averaged_jellium}. The use of these simplified models, however, can often lead to inaccuracies, even while studying simple metallic clusters \citep{Review_metal_clusters}.
In our view, one of the main reasons behind the computational difficulties encountered in these studies is the formulation of the methods, in which convolution sums are carried out in reciprocal space by means of coupling coefficients (e.g.\ \cite{spherical_averaged_jellium} and \cite{solid_state_finite}). This makes certain operations, such as computation of the electron density from the wavefunctions, unmanageable beyond relatively small system sizes, unless approximations are used. In addition, these studies also rely on setting up the full Hamiltonian matrix and then diagonalizing it using direct methods at each self-consistent field iteration. This is quite unlike the approach employed by modern plane-wave codes, where a dual representation of various quantities is employed for efficiency and the Fast Fourier Transform (FFT) is used to switch between real and reciprocal space \citep{Hutter_abinitio_MD}. In addition, instead of direct diagonalization methods, most plane-wave codes employ matrix free iterative diagonalization methods to compute the occupied eigenspace of the Hamiltonian \citep{Kresse_abinitio_iterative, Teter_Payne_Allan_2}. We adopt similar strategies in this work and show that this leads to a method in which accurate ground state electronic structure calculations for cluster systems containing many hundreds of electrons can be done routinely using our code. In particular, employing widely used, accurate ab initio norm conserving pseudopotentials for modelling the electron-nucleus interaction, without resorting to any form of spherical averaging of the potentials \citep{spherical_averaged_jellium}, poses no difficulty in our method. As mentioned earlier, one of the key aspects of the plane-wave method is the use of three dimensional FFTs to switch between quantities expressed in real and reciprocal space.
Analogously, we require efficient transforms to switch between quantities expressed on an appropriate grid used to discretize our spherical domain and the expansion coefficients of those quantities in our basis set (i.e., reciprocal space). Our strategy for obtaining efficient transforms is to use separation of variables into radial and angular parts and to handle each part individually through efficient transforms. Specifically, the radial part is computed through Gauss-Jacobi quadrature \citep{Gauss_Jacobi_Quad}, while the angular part is handled using high performance Spherical Harmonics Transforms (SHTs) \citep{shtns}. Another key requirement for carrying out accurate Kohn-Sham calculations is the ability to evaluate the electrostatic terms accurately. We accomplish this task by developing an expansion of the Green's function of the associated Poisson problem in terms of our basis functions, followed by computing the convolution of the Green's function with the electronic charge density. This is somewhat similar in spirit to the Green's function based methods developed in the context of plane-wave codes \citep{Hutter_abinitio_MD, Hockney_1970, Eastwood_Brownrig, Reciprocal_Hockney} and free-boundary problems \citep{genovese2006efficient}. The calculation of the Green's function (in terms of its expansion) can be done ahead of time and does not have to be repeated. As explained later, this means that during the SCF (Self-Consistent Field) cycle, our method allows for the computation of the Hartree potential from the electron density at the cost of a single forward and inverse transform pair. Also, the use of the Green's function ensures that the appropriate decay of the electrostatic potentials is correctly captured, without having to use large computational domains or non-trivial boundary conditions.
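To illustrate why a Laplacian-eigenfunction basis makes the Poisson solve inexpensive, consider a one-dimensional analog with homogeneous Dirichlet data. This is a sketch of the general principle only: in the actual method the Green's function expansion in the spherical basis plays this role and additionally enforces the correct free-space decay.

```python
import numpy as np
from scipy.fft import dst, idst

# Solve -u'' = f on (0,1) with u(0) = u(1) = 0 in the sine (Laplacian
# eigenfunction) basis: each coefficient is simply divided by the
# eigenvalue (k*pi)^2, i.e. the solve is diagonal in reciprocal space.
N = 255                                  # interior grid points
x = np.arange(1, N + 1) / (N + 1)
f = np.pi**2 * np.sin(np.pi * x)         # exact solution: sin(pi x)

k = np.arange(1, N + 1)
f_hat = dst(f, type=1)                   # forward sine transform
u_hat = f_hat / (k * np.pi)**2           # diagonal inverse Laplacian
u = idst(u_hat, type=1)                  # back to real space

print(np.max(np.abs(u - np.sin(np.pi * x))))   # error at roundoff level
```

The cost is one forward and one inverse transform plus a pointwise division, which mirrors the "single forward and inverse transform pair" cost of the Hartree solve described above.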
Computation of the occupied eigenspace of the discretized Kohn-Sham eigenvalue problem is the most computationally demanding step in a typical self consistent field calculation. Accordingly, a number of strategies have been devised over the years for an efficient solution of this problem through iterative diagonalization methods \citep{Teter_Payne_Allan_2, Kresse_abinitio_iterative, various_eigensolvers, Hutter_abinitio_MD}. We have adopted the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) algorithm \citep{LOBPCG_1} for this purpose in our code. This robust method has been implemented with success in other state-of-the-art Kohn-Sham codes \citep{ABINIT_LOBPCG, Octopus_LOBPCG}. With the aid of a simple diagonal preconditioner (described later), we have found it to work well for a variety of systems. For relatively large system sizes however, especially while running under distributed memory environments, LOBPCG-like methods suffer from the need to repeatedly orthogonalize the computed eigenstates. For dealing with such situations, we have adopted a highly efficient Chebyshev polynomial filtered subspace iteration algorithm \citep{Serial_Chebyshev, Parallel_Chebyshev}, which avoids explicit diagonalization and minimizes orthonormalizations. For these large scale calculations therefore, LOBPCG is used only in the first SCF step (to generate a good guess for the occupied eigenspace), while Chebyshev polynomial filtering is used exclusively on subsequent SCF steps. Spectral methods like the plane-wave method and the method presented here\footnote{More correctly, both the plane-wave method and the method presented here should be referred to as pseudospectral, since they rely on interpolatory transforms.} are susceptible to scalability issues while running under distributed memory environments, since the global nature of the basis functions involved tends to induce communication between the processing elements.
To ameliorate this difficulty, we have adopted a two-level parallelization scheme over electronic states as well as physical space, much in the spirit of some large scale plane-wave codes \citep{Gygi_2D_parallel}. This effectively reduces global communications to the order of the square root of the number of processes (instead of the total number of processes), and it has resulted in speed-critical portions of our code scaling well up to 512 processing units. These strategies combine to give an unusually efficient and accurate method, as seen in the examples presented in Section \ref{sec:examples}. We employ norm conserving ab initio pseudopotentials for most of our calculations. We first study the convergence properties of our basis set and demonstrate faster than polynomial rates of convergence (i.e., spectral convergence). Then, starting from light atoms, small molecules and clusters (metallic and non metallic), we visit various examples involving organic molecules, fullerenes and large face centered cubic (FCC) aluminum clusters. The largest example system considered here contains 1688 aluminum atoms (over 5000 electrons). Timing comparisons reveal that our method outperforms competing finite element and plane-wave codes in benchmark calculations involving aluminum clusters. Also, comparison against well converged plane-wave results allows us to show that extremely high accuracies in ground state energy calculations (of the order of $10^{-6}$ Hartrees per atom) are routinely achievable by our code. By visiting an example that involves the calculation of electrostatic multipole moments, we demonstrate that the systematic convergence properties of our basis set allow for easy and accurate calculation of important physical properties. The rest of the paper is organized as follows. Section \ref{sec:formulation} describes the formulation of our method, while Section \ref{sec:implementation_details} describes various implementation aspects.
Section \ref{sec:examples} presents the example problems solved using our method and compares our results with the literature to assess the efficacy of our method. Section \ref{sec:conclusions} concludes the paper with a summary and comments on future directions. \section{Formulation} \label{sec:formulation} We describe the KS-DFT energy functional and the associated Kohn-Sham equations in this section. We outline the key aspects of the discretization of the Kohn-Sham equations using our spectral basis and also describe our approach to the computation of the various terms that appear in these equations. The atomic unit system with $m_e=1,e=1,\hbar=1,\frac{1}{4\pi\epsilon_0}=1$ is chosen for the rest of the work, unless otherwise mentioned. \subsection{The Kohn-Sham eigenvalue problem} \label{subsec:KS_eigenvalue} We consider a system consisting of $N_e$ electrons moving in the fields produced by $M$ nuclei. We assume that the nuclei have charges $(z_1, \ldots, z_M)$ and that they are clamped to the positions $({\bf x}_1,\ldots,{\bf x}_M) \in \mathbb{R}^{3M}$. For the sake of simplicity, we consider a system in which spin polarization effects are absent and we take $N_e$ to be even. The extension of the present work to spin-polarized systems is straightforward.\footnote{Some of our example calculations presented in section \ref{sec:examples} do use spin-polarization.} In line with the Born-Oppenheimer approximation \citep{LeBris_ReviewBook}, the Kohn-Sham model \citep{KohnSham_DFT,LeBris_ReviewBook} computes the total energy of this system (denoted here as $E^{\text{KS}}_{N_e, M}$) at absolute zero by splitting it into an electronic part (denoted here as ${\cal E}^{\text{KS}}_{N_e}$) and a nucleus-nucleus interaction part: \begin{align} E^{\text{KS}}_{N_e, M} = {\cal E}^{\text{KS}}_{N_e} + \sum_{1\leq k < l \leq M} \frac{z_kz_l}{\abs{{\bf x}_k-{\bf x}_l}}\;. 
\label{KS_total_Energy} \end{align} The electronic part of the energy in the Kohn-Sham model is computed in terms of orbitals; i.e., an $N_e / 2$ tuple of complex valued scalar fields $\{\phi_i\}_{i = 1}^{N_e / 2}$, as follows:\footnote{In the equations that follow, $\mathsf{L}^2(\mathbb{R}^3)$ is used to denote the usual space of square integrable functions on $\mathbb{R}^3$ while $\mathsf{H}^1(\mathbb{R}^3)$ denotes the Sobolev space of functions in $\mathsf{L}^2(\mathbb{R}^3)$ whose first order weak derivatives also lie in $\mathsf{L}^2(\mathbb{R}^3)$.} \begin{align} \label{KS_model} {\cal E}^{\text{KS}}_{N_e} &=\,\displaystyle \inf_{\substack{{\phi_i \in \mathsf{H}^1(\mathbb{R}^3),}\\{\innprod{\phi_i}{\phi_j}{\mathsf{L}^2(\mathbb{R}^3)}=\delta_{ij}}}} {\biggl\{\frac{1}{2} \sum_{i=1}^{N_e/2} \int_{\mathbb{R}^3}\!f_i\abs{\nabla \phi_i}^2 +\int_{\mathbb{R}^3}\!\rho V_{\text{nu}} + \frac{1}{2}\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\!\frac{\rho({\bf x})\rho({\bf y})}{\abs{{\bf x}-{\bf y}}}\;d{\bf x}\,d{\bf y}} +E_{\text{xc}}(\rho)\biggr\}\\ \label{density_expression} &\textrm{with the electron density,}\quad\rho({\bf x})=2 \sum_{i=1}^{N_e / 2} f_i\abs{\phi_i({\bf x})}^2\;. \end{align} The scalars $f_i$ denote orbital occupations and have values $0 \leq f_i \leq 1$. These values need to be specified a priori, or they can be computed as part of the solution (using thermalization, for instance; see Section \ref{subsubsec:mixing_smearing}). The factor of two in eq.~\ref{density_expression} above is due to the assumption of a spin-unpolarized system, as a consequence of which each orbital is doubly occupied. The orthonormalization constraint on the orbitals implies that the electron density $\rho$ satisfies the normalization condition: \begin{align} \label{density_normalized} \int_{\mathbb{R}^3}\!\rho = N_e\;. \end{align} The first term in eq.~\ref{KS_model} (involving the gradient of the orbitals) models the kinetic energy of the electrons. 
The second term models the interaction of the nuclei with the electrons. The nuclear potential appearing in that term is given as: \begin{align} \label{Vnu_original} V_{\text{nu}}({\bf x}) = - \sum_{k=1}^{M}\frac{z_k}{\abs{{\bf x}-{\bf x}_k}}\;. \end{align} {In practice, the Coulombic singularities present in eq.~\ref{Vnu_original} cause problems for the efficient numerical solution of the equations, and spectral methods (like the plane-wave method and the present one) are particularly affected due to the appearance of the Gibbs phenomenon \citep{Folland_Real}. The pseudopotential approximation\footnote{In particular, ab initio pseudopotentials provide a well defined recipe for carrying out this smoothing procedure such that the energy and length scales of the problem are dictated by the chemically relevant electronic states. See \citep{Troullier_Martins_pseudo} for more details.} allows these issues to be mitigated by smoothing out these singularities \citep{LeBris_ReviewBook, Martin_ES, Troullier_Martins_pseudo}.} The bulk of the present work is devoted to pseudopotential calculations. {The third term in eq.~\ref{KS_model} represents the mutual electrostatic repulsion of the electrons (Hartree energy). Finally, the term $E_{\text{xc}}(\rho)$ is the exchange correlation energy. We adopt here the widely used Local Density Approximation (LDA) \citep{Parr_Yang, KohnSham_DFT} of this term by using the parametrization presented in \citep{perdew_zunger, ceperley_alder}. 
An extension of our method to density gradient corrected functionals \citep{GGA_made_simple_perdew} poses no particular difficulty.} The Euler-Lagrange equations of the minimization problem (eq.~\ref{KS_model}) are the celebrated Kohn-Sham equations \citep{KohnSham_DFT}, which, with the definition of the electron density introduced in eq.~\ref{density_expression}, can be written as follows: \begin{align} \label{KS_equations} K(\rho)\,\phi_i&=\lambda_i\,\phi_i\;;\;\innprod{\phi_i}{\phi_j}{\mathsf{L}^2(\mathbb{R}^3)}=\delta_{ij}\;,\\ \label{KS_explain1} \text{with}\quad K(\rho)&=-\frac{1}{2} \Delta + V_{\text{nu}} + \int_{\mathbb{R}^3}\!\frac{\rho({\bf y})}{\abs{{\bf x}-{\bf y}}}\;d{\bf y}+V_{\text{xc}}(\rho)\;,\\ \text{where}\quad V_{\text{xc}}(\rho)&=\pd{E_{\text{xc}}(\rho)}{\rho}\;. \label{KS_explain2} \end{align} The $\lambda_i$ that appear in eq.~\ref{KS_equations} are the Lagrange multipliers of the orthonormality constraints on the orbitals. They are taken to be the lowest $N_e / 2$ eigenvalues of the Kohn-Sham operator $K(\rho)$. The usual method of solution of the Kohn-Sham equations is the Self-Consistent Field (SCF) approach \citep{KohnSham_DFT, Martin_ES, LeBris_ReviewBook}. In practice, a variety of mixing schemes are employed to accelerate convergence of the SCF iterations \citep{Martin_ES, Dederichs_Zeller_SCF}. \subsection{Problem set-up and discretization} \label{subsec:discretization} Let ${\cal B}_R$ denote the sphere of radius $R$ centered at the origin. For the purpose of this work, we will restrict the physical domain to ${\cal B}_R$, and the cluster / molecular system of interest will be embedded within this spherical region. We will apply Dirichlet boundary conditions to the electron density on the surface of the sphere, in accordance with the well-known spatial exponential decay of the electron density \citep{wavefunc_decay1, wavefunc_decay2}. 
The relation between the electron density and the wavefunctions (eq.~\ref{density_expression}) automatically implies that the Dirichlet boundary conditions apply to the wavefunctions\footnote{Application of Dirichlet boundary conditions to the Kohn-Sham wavefunctions has been considered earlier in the context of real-space methods \citep{Gavini_Kohn_Sham, Chelikowsky_Saad_1, Chelikowsky_Saad_2}.} as well. We do not enforce specific boundary conditions on the various potential terms. These are applied implicitly based on the method of computation: in the case of the Hartree term, for instance, our method of calculation automatically ensures that the right kind of decay of that term is obtained (Section \ref{subsubsec:hartree_potential}). \subsubsection{Basis set} \label{subsubsec:basis_set} The particular choice of a spherical domain allows the Laplacian eigenfunctions in this domain to be represented analytically in spherical coordinates.\footnote{In our notation for spherical coordinates, we denote $r\in [0,R]$ as the radial coordinate, $\vartheta \in [0,\pi]$ as the polar angle and $\varphi\in[0,2\pi]$ as the azimuthal angle. The Cartesian coordinates $(x,y,z)$ are obtained as $x = r\sin{\vartheta}\cos\varphi,\;y = r\sin{\vartheta}\sin\varphi,\; z = r\cos{\vartheta}$.} Specifically, we consider the $\Lpspc{2}{}{{\cal B}_R}$ orthonormal eigenfunctions of the Laplacian operator in the spherical domain, and we impose Dirichlet boundary conditions on the surface of the domain. In this setup, a simple separation of variables calculation shows that the eigenfunctions of the Laplacian which are regular at the origin are expressible in terms of spherical Bessel functions of the first kind and spherical harmonics \citep[see e.g.][for details of this calculation]{My_MS_Thesis}. 
Letting $(l,m,n) \in \Gamma_{\infty}$ with: \begin{align} {\Gamma}_{\infty} = \bigg\{(l,m,n):l\in\{0,1,\ldots\},m\in\{-l,\ldots,l\},n\in\{0,1,\ldots\}\bigg\}\;, \end{align} the Laplacian eigenfunctions take the form: \begin{align} \label{eigfunction_1} {F}_{l,m,n}(r,\vartheta,\varphi) &= {\cal R}_{l,n}(r)\;{\cal Y}^{m}_l(\vartheta,\varphi)\;, \\ \intertext{with the radial part being the spherical Bessel functions of the first kind:} \label{spherical_bessel_1} {\cal R}_{l,n}(r) &= \displaystyle\frac{1}{R J_{l+\frac{3}{2}}\bigl(b^{n}_{l+\frac{1}{2}}\bigr)} \sqrt{\frac{2}{r}}\displaystyle J_{l+\frac{1}{2}}\biggl(\displaystyle\frac{b^{n}_{l+\frac{1}{2}}}{R}r\biggr)\;,\\ \intertext{and the angular part being the spherical harmonics:} {\cal Y}^{m}_l(\vartheta,\varphi) &= c_{l,m}\,{\cal P}_l^m (\cos{\vartheta}) \, e^{i m \varphi }\;, \label{spherical_harmonics_1} \text{with}\;c_{l,m} = \sqrt{\frac{(2l+1)}{4\pi}\frac{(l-m)!}{(l+m)!}}\,\;. \end{align} In eq.~\ref{spherical_bessel_1}, $\displaystyle J_{l+\frac{1}{2}}(\cdot)$ denotes the (ordinary) Bessel function of the first kind of order $(l+\frac{1}{2})$, while $\displaystyle b^{n}_{l+\frac{1}{2}}$ denotes its $(n + 1)^{\text{th}}$ root. Thus, ${\cal R}_{l,n}(r)$ attains a value of zero $(n + 1)$ times in the interval $[0,R]$. In eq.~\ref{spherical_harmonics_1}, ${\cal P}_l^m(\cdot)$ denotes the associated Legendre polynomial of degree $l$ and order $m$. The eigenvalue associated with the eigenfunction ${F}_{l,m,n}$ is given by: \begin{align} \label{laplacian_eigenvalue} {\Lambda}_{l,m,n} = \biggl(\frac{b^{n}_{l+\frac{1}{2}}}{R}\biggr)^2\;. \end{align} Since the Laplacian is a self-adjoint operator with a compact resolvent, the infinite collection of eigenfunctions ${\cal E}_{{\Gamma}_{\infty}} = \big\{{F}_{l,m,n}: {(l,m,n) \in {\Gamma_{\infty}}}\big\}$ form an orthonormal basis of $\Lpspc{2}{}{{\cal B}_R}$ \citep{Evans_PDE, Kato}. 
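The orthonormality of the radial functions in eq.~\ref{spherical_bessel_1} can be verified in a small numerical sketch (our illustration, not code from this work). It uses the identity $J_{l+1/2}(x) = \sqrt{2x/\pi}\, j_l(x)$ to rewrite ${\cal R}_{l,n}$ in terms of the spherical Bessel function $j_l$, giving ${\cal R}_{l,n}(r) = \sqrt{2/R^3}\, j_l(b\, r/R)/j_{l+1}(b)$ with $b$ the $(n+1)^{\text{th}}$ positive zero of $j_l$:

```python
import numpy as np
from scipy.special import spherical_jn
from scipy.optimize import brentq
from scipy.integrate import quad

def jl_zeros(l, nmax, xmax=60.0):
    """First nmax positive zeros of the spherical Bessel function j_l,
    bracketed by a sign-change scan and refined with brentq."""
    xs = np.linspace(1e-6, xmax, 4000)
    vals = spherical_jn(l, xs)
    zeros = []
    for a, b, va, vb in zip(xs[:-1], xs[1:], vals[:-1], vals[1:]):
        if va * vb < 0.0:
            zeros.append(brentq(lambda x: spherical_jn(l, x), a, b))
            if len(zeros) == nmax:
                break
    return np.array(zeros)

def R_ln(r, l, n, R, zeros):
    """Radial basis function R_{l,n}, normalized so that its square
    integrates to one against the weight r^2 on [0, R]."""
    b = zeros[n]
    return (np.sqrt(2.0 / R**3) * spherical_jn(l, b * r / R)
            / spherical_jn(l + 1, b))

R, l = 5.0, 2
zeros = jl_zeros(l, 4)
# Orthonormality with respect to the weight r^2 on [0, R]:
for n in range(3):
    for m in range(3):
        I, _ = quad(lambda r: R_ln(r, l, n, R, zeros)
                    * R_ln(r, l, m, R, zeros) * r**2, 0.0, R)
        assert abs(I - (1.0 if n == m else 0.0)) < 1e-6
print("radial functions are orthonormal")
```

The parameter choices (`R = 5.0`, `l = 2`) are arbitrary; any angular momentum and domain radius behave the same way.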
Further, elliptic-regularity results \citep{Evans_PDE} imply that each basis function ${F}_{l,m,n}$ is smooth. We now choose a finite subset of ${\cal E}_{{\Gamma}_{\infty}}$ as our basis set. We fix ${\cal L},{\cal N} \in \mathbb{N}$ (henceforth referred to as the angular and radial cutoff, respectively), and form $\Gamma \subset {\Gamma_{\infty}} $ by restricting\footnote{For each $l$, $m$ is allowed to vary in $\{-l,\ldots,l\}$ as before.}\footnote{For the purpose of this article, we have used a uniform basis set in which the maximum value of $n = ({\cal N} -1)$ for \emph{every} $l$. We are aware however, that a non-uniform basis set in which the maximum value of $n$ is allowed to vary with $l$ allows more flexibility and can lead to a number of desirable effects. For instance, this setting allows (as in plane-wave codes) the introduction of a kinetic energy cut-off in which the basis set only includes spherical waves whose kinetic energies lie below a specified threshold. This results in a basis set which is optimal in the sense of the Sobolev $\mathsf{H}^1$ norm. Also, we are aware that a uniform basis set can sometimes face a loss of approximation power for large radial distances, although we have not encountered any serious issues from this effect in this work. A non-uniform basis set however, can be made to correct for this issue automatically. A more detailed exploration of this methodology is the subject of future work.} $l\in\{0,\ldots,{\cal L}-1\}$ and $n\in\{0,\ldots,{\cal N}-1\}$. Given any function $f \in \Lpspc{2}{}{{\cal B}_R}$, for the purpose of numerical discretization, we approximate it using the functions in ${\cal E}_{{\Gamma}} = \big\{{F}_{l,m,n}: {(l,m,n) \in {\Gamma}}\big\}$ as: \begin{align} \label{basis_approx} f =\!\sum_{(l,m,n) \in {\Gamma}}\!\hat{f}_{l,m,n}\; {F}_{l,m,n}\;. 
\end{align} We may observe that the span of the functions in ${\cal E}_{{\Gamma}}$ forms a linear subspace of $\Lpspc{2}{}{{\cal B}_R}$ of dimension $\mathit{{d}} = {\cal L}^2 {\cal N}$. The expansion coefficients can be obtained from orthonormality of the basis functions by: \begin{align} \label{flmn_def} \hat{f}_{l,m,n} = \innprod{f}{{F}_{l,m,n}}{\Lpspc{2}{}{{\cal B}_R}} = \int_{{\cal B}_R}\!f\;\overline{{F}_{l,m,n}}\;, \end{align} and the collection of expansion coefficients $\big\{\hat{f}_{l,m,n}:(l,m,n) \in {\Gamma} \big\}$ will often be interpreted interchangeably with vectors in $\mathbb{C}^\mathit{d}$. If the function $f$ is real valued, as it is, for example, in the case of the electron density, the expansion coefficients obey the additional condition\footnote{If the Condon-Shortley phase \citep{CS_phase_encyclopedia} is included, this becomes $\hat{f}_{l,-m,n} = (-1)^m \overline{\hat{f}_{l,m,n}}$. We do not make use of the Condon-Shortley phase in this work.} $\hat{f}_{l,-m,n} = \overline{\hat{f}_{l,m,n}}$. \subsubsection{Basis transforms} \label{subsubsec:basis_transform} In order to perform the quadratures required for evaluation of the expansion coefficients via eq.~\ref{flmn_def} we introduce a discretization of the domain $B \subset {\cal B}_R$. Akin to the terminology used in the plane-wave literature, we will often refer to the representation of a given function in terms of its expansion coefficients as the \textit{reciprocal space representation} while the representation of the same function on the grid points in $B$ will be referred to as the \textit{real space representation}. The operations that allow us to switch between these two representations will be referred to as \textit{basis transforms}. The specific choice of the grid points is made as follows. Let $N_r, N_{\vartheta}$ and $N_{\varphi}$ denote the number of discretization points in the radial, polar and azimuthal directions respectively. 
These quantities are dependent on the radial and angular cutoffs and are chosen in accordance with constraints of the sampling theorem. We discretize the unit sphere by choosing $N_{\vartheta}$ Gauss quadrature points in $\cos(\vartheta)$ over the interval $[-1,1]$ and $N_{\varphi}$ equally spaced points in $\varphi$ over the interval $[0,2\pi]$. In the radial direction, we choose $N_r$ Gauss-Jacobi quadrature nodes \citep{Gauss_Jacobi_Quad} associated with the quadrature weight of $r^2$ over the interval $[0,R]$. The set $B$ is now taken to be a Cartesian product of the radial quadrature points and the unit sphere discretization points. This allows a separation of variables in the angular and radial directions while carrying out the basis transforms, thereby reducing computational complexity. Given the real space representation $\tilde{f}:B\to\mathbb{C}$, we obtain the reciprocal space representation by first computing spherical harmonic transforms holding the radial variable fixed: \begin{align} \nonumber A(r;\,l,m) &= \int_{0}^{2\pi}\int_{0}^{\pi}f(r;\vartheta, \varphi)\;\overline{{\cal Y}^{m}_l(\vartheta,\varphi)}\,\sin(\vartheta)\;d\vartheta\,d\varphi\;,\\ \label{Ar_lm} &= c_{l,m}\int_{-1}^{1}{\cal P}_l^m(t)\bigg[\int_{0}^{2\pi}\!f(r;\,\cos^{-1}(t), \varphi)\,e^{-i m \varphi }\,d\varphi\bigg]\,dt\,, \end{align} and then performing radial quadratures using the quadrature nodes $\{r_{k_r}\}_{k_r=1}^{N_r}$ and corresponding weights $\{w_{k_r}\}_{k_r=1}^{N_r}$: \begin{align} \label{Arlm_to_coeff} \hat{f}_{l,m,n} = \int_{0}^{R}\!A(r;\,l,m)\,{\cal R}_{l,n}(r)\,r^2\,dr \approx \sum_{k_r = 1}^{N_r}w_{k_r}A(r_{k_r};\,l,m)\,{\cal R}_{l,n}(r_{k_r})\;. \end{align} The spherical harmonic transform as expressed in eq.~\ref{Ar_lm} itself consists of two steps: first holding $\vartheta$ fixed, the Fast Fourier Transform (FFT)\citep{FFT_Cooley_Tukey} is used to evaluate the inner integral involving $\varphi$. 
Subsequently, a quadrature in $t = \cos(\vartheta)$ is carried out on the result to evaluate the outer integral. Similarly, given the reciprocal space representation $\hat{f}:\Gamma \to \mathbb{C}$, the inverse transform can be carried out by first computing: \begin{align} \label{eq:G_lmr} {G}(l,m;r_{k_r}) = \sum_{n = 0}^{{\cal N}-1}\hat{f}_{l,m,n}\,{\cal R}_{l,n}(r_{k_r})\,, \end{align} while holding $l$ and $m$ fixed and then, for each radial grid node $r_{k_r}$, performing inverse spherical harmonics transforms (using inverse FFTs and dot products as in \citep{shtns}). The basis transforms described above have a time complexity of $O({\cal L}^3{\cal N} + {\cal L}^2{\cal N}^2)$ in terms of the angular and radial cutoffs.\footnote{A naive implementation of the transforms, that is, one that does not employ the separation of variables structure, would have a time complexity of $O({\cal L}^4{\cal N}^2)$ in terms of the angular and radial cutoffs.}\footnote{Using more sophisticated techniques for carrying out the associated Legendre polynomial transforms \citep{Mohlenkamp_SHT,Driscoll_Healey_SHT}, this can be reduced to $O({\cal L}^2(\log{{\cal L}})^2{\cal N} + {\cal L}^2{\cal N}^2)$.} In practical implementations, the use of Gauss quadrature points, together with various numerical and implementation level optimizations \citep[see][for example]{shtns}, ensures that the prefactor for this asymptotic estimate is rather low. This allows our code to carry out basis transforms routinely and efficiently, even with basis sets containing millions of basis functions. 
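The radial quadrature described above can be sketched concretely (our illustration; the function and variable names are hypothetical). Nodes and weights for integrals against the weight $r^2$ on $[0,R]$ follow from the Jacobi weight $(1+x)^2$ on $[-1,1]$ under the affine map $r = R(1+x)/2$:

```python
import numpy as np
from scipy.special import roots_jacobi

def radial_quadrature(N_r, R):
    """Gauss-Jacobi nodes/weights for integrals of the form
    int_0^R f(r) r^2 dr, obtained by mapping the Jacobi weight
    (1-x)^0 (1+x)^2 on [-1, 1] to [0, R] via r = R(1+x)/2."""
    x, w = roots_jacobi(N_r, 0.0, 2.0)   # weight (1+x)^2 on [-1, 1]
    r = 0.5 * R * (x + 1.0)
    w = (0.5 * R)**3 * w                 # (R/2)^2 from r^2, (R/2) from dr
    return r, w

R = 4.0
r, w = radial_quadrature(8, R)
# The rule is exact for polynomial integrands f of degree <= 2*N_r - 1:
print(np.dot(w, np.ones_like(r)), R**3 / 3)   # int_0^R r^2 dr
print(np.dot(w, r**4), R**7 / 7)              # int_0^R r^6 dr
```

With `N_r` nodes the rule integrates exactly all polynomials up to degree $2 N_r - 1$ against the weight $r^2$, which is what makes a modest number of radial points sufficient in eq.~\ref{Arlm_to_coeff}.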
\subsubsection{Set up of matrix eigenvalue problem} \label{subsubsec:matrix_eigprob} Within the self-consistent field iterations, the governing equations (eq.~\ref{KS_equations} - \ref{KS_explain2}) posed in the spherical domain take the form of the following linearized eigenvalue problem with an effective potential $V^{\text{eff}}$: \begin{align} \label{lin_eigen} \Bigl(-\frac{1}{2} \Delta + V^{\text{eff}}\Bigr)\phi_i =& \;\lambda_i\,\phi_i\;\text{for} \;i=1,\ldots,N_e/2\quad,\\ \phi_i =& \;0\;\text{on}\;\partial {\cal B}_R\;, \label{eq:lin_eigen_BC} \end{align} and the effective potential at a point ${\bf x} \in {\cal B}_R$ is given as: \begin{align} V^{\text{eff}}({\bf x}) = V_{\text{xc}}(\rho({\bf x})) + \int_{{\cal B}_R}\!\frac{\rho({\bf y})}{\abs{{\bf x}-{\bf y}}}\,d{\bf y} + V_{\text{nu}}({\bf x})\quad. \end{align} We choose to ignore any non-local contributions to the ionic pseudopotentials at this point. The specific treatment of these non-local terms is discussed later in section \ref{susubbsec:pseudo_pot_terms}. To discretize eq.~\ref{lin_eigen} we set: \begin{align} \label{phi_expansion} \displaystyle\phi_i =& \sum_{(l,m,n)\in \Gamma}\!\hat{\phi}^{i}_{l,m,n}\;F_{l,m,n}\;, \intertext{noting that this ensures that the Dirichlet boundary conditions on the wavefunctions are satisfied automatically. This gives us:} \frac{1}{2}\sum_{\Gamma}\!\hat{\phi}^{i}_{l,m,n}\;\Lambda_{l,m,n}\;F_{l,m,n}\,+& \;V^{\text{eff}}\sum_{\Gamma}\hat{\phi}^{i}_{l,m,n}\; F_{l,m,n} = \;\lambda_i\;\sum_{\Gamma}\hat{\phi}^{i}_{l,m,n}\;F_{l,m,n}\;. 
\label{temp_gov_eqn_in_basis_set} \end{align} Now, if the expansion coefficients of $V^{\text{eff}}$ are known as $\{\hat{V}^{\text{eff}}_{\tilde{l},\tilde{m},\tilde{n}}: (\tilde{l},\tilde{m},\tilde{n}) \in \Gamma\}$, we may substitute the expansion of $V^{\text{eff}}$ into eq.~\ref{temp_gov_eqn_in_basis_set} to get: \begin{align} \nonumber \displaystyle\frac{1}{2}\sum_{\Gamma}\!\hat{\phi}^{i}_{l,m,n}\;\Lambda_{l,m,n}\;F_{l,m,n} &+\sum_{\Gamma}\sum_{\Gamma}\,\hat{\phi}^{i}_{l,m,n}\,\hat{V}^{\text{eff}}_{\tilde{l},\tilde{m},\tilde{n}}\; F_{\tilde{l},\tilde{m},\tilde{n}}\; F_{l,m,n}\\ &=\lambda_i \sum_{\Gamma}\;\hat{\phi}^{i}_{l,m,n}F_{l,m,n}\;. \label{gov_eqn_in_basis_set_with_potential} \end{align} We now take the inner product of this equation with $F_{l',m',n'}$ and use orthonormality of the basis functions to obtain the following system of linear equations for $\hat{\phi}^{i}_{l',m',n'}$, with $\,(l',m',n')\in \Gamma$ : \begin{align} \displaystyle\frac{1}{2}\Lambda_{l',m',n'}\,\hat{\phi}^{i}_{l',m',n'}+&\sum_{\Gamma}\sum_{\Gamma} {\cal W}^{(l',m',n')}_{(l,m,n)\,,\,(\tilde{l},\tilde{m},\tilde{n})}\,\hat{V}^{\text{eff}}_{\tilde{l},\tilde{m},\tilde{n}}\,\hat{\phi}^{i}_{l,m,n} =\lambda_i\,\hat{\phi}^{i}_{l',m',n'}\;, \label{discretized_eqn_with_W_coefficient} \end{align} where $\displaystyle {\cal W}^{(l',m',n')}_{(l,m,n)\,,\,(\tilde{l},\tilde{m},\tilde{n})}$ denote the coupling coefficients of the basis set, i.e., \begin{align} {\cal W}^{(l',m',n')}_{(l,m,n)\,,\,(\tilde{l},\tilde{m},\tilde{n})} =\biginnprod{F_{\tilde{l},\tilde{m},\tilde{n}}\,F_{l,m,n}\,}{F_{l',m',n'}}{\Lpspc{2}{}{{\cal B}_R}}\;. \label{what_is_W} \end{align} It is possible to express these coupling coefficients in terms of Wigner {3-j} symbols \citep{Messiah_3j_symbol} and the integral of the product of three spherical basis functions taken together \citep{My_MS_Thesis}. 
Such an expression allows us to see that the coupling coefficients are non-zero only when $\abs{l-\tilde{l}}\leq l'\leq l+\tilde{l}$, $m + m' + \tilde{m} = 0$ and $l+l'+\tilde{l}$ is even. To recast eq.~\ref{discretized_eqn_with_W_coefficient} as a finite dimensional matrix eigenvalue problem, we introduce an indexing map ${\cal I}:\Gamma\to\{1,2,\ldots,\mathit{d}\}$ and let ${\cal J}$ denote its inverse.\footnote{A simple indexing map is, for instance, $(l,m,n)\mapsto (l^2 + l + m)\,{\cal N} + (n + 1)$.} We rewrite eq.~\ref{discretized_eqn_with_W_coefficient} using the map ${\cal J}$ to obtain a matrix problem of the form: \begin{align} \label{hphi_lambdaphi} {\mathbf{H}\,\mathbf{X}} = {\mathbf{X}\,\mathfrak{D}}\;, \end{align} where $\mathbf{H} \in \mathbb{C}^{\mathit{d} \times \mathit{d}}, \mathbf{X}\in \mathbb{C}^{\mathit{d} \times (N_e/2)}$ and $\mathfrak{D} \in \mathbb{R}^{(N_e/2) \times (N_e/2)}$. {We note that the matrix $\mathbf{H}$ is Hermitian, while the matrix $\mathbf{X}$ has orthonormal columns, in the sense that $\mathbf{X}^{\dagger}\,\mathbf{X}$ is an identity matrix of dimension $(N_e/2)\times (N_e/2)$.} Denoting $\delta_{\alpha,\beta}$ as the Kronecker delta, we see that matrices $\mathbf{H}, \mathbf{X}$ and $\mathfrak{D}$ have entries of the following form (in terms of the indexing map): \begin{align} \label{Hij_def} \mathbf{H}_{\alpha,\beta} &= \frac{1}{2}\delta_{\alpha,\beta}\, \Lambda_{{\cal J}(\alpha)}+\sum_{(\tilde{l},\tilde{m},\tilde{n})\in \Gamma}\!\hat{V}^{\text{eff}}_{\tilde{l},\tilde{m},\tilde{n}}\;{\cal W}^{{\cal J}(\alpha)}_{{\cal J}(\beta)\,,\,(\tilde{l},\tilde{m},\tilde{n})}\,,\\ \mathbf{X}_{\alpha,\beta} &= \hat{\phi}^{\beta}_{{\cal J}(\alpha)}\;\text{and}\; \mathfrak{D}_{\alpha,\beta} = \delta_{\alpha,\beta}\,\lambda_{{\cal J}(\beta)}\,, \end{align} with $\alpha, \beta$ varying within relevant matrix dimensions.
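For illustration, the indexing map suggested in the footnote and its inverse may be sketched as follows (cutoff values hypothetical; the radial index $n$ is taken to start at zero):

```python
import math

# The map from the footnote: (l, m, n) -> (l^2 + l + m) * N + (n + 1),
# a bijection onto {1, ..., L^2 N} for 0 <= l < L, -l <= m <= l, 0 <= n < N.
L_CUT, N_CUT = 4, 5   # hypothetical angular and radial cutoffs

def idx(l, m, n):
    """Indexing map I: flatten (l, m, n) into a 1-based linear index."""
    return (l * l + l + m) * N_CUT + (n + 1)

def inv_idx(a):
    """Inverse map J: recover (l, m, n). Since q = l^2 + l + m lies in
    [l^2, l^2 + 2l], the degree is l = floor(sqrt(q))."""
    q, n = divmod(a - 1, N_CUT)
    l = math.isqrt(q)
    return l, q - l * l - l, n
```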
As we mentioned earlier in this article, setting up the matrix eigenvalue problem and directly diagonalizing it are both expensive operations, although this approach appears to have been adopted in earlier work involving spherical basis functions \citep[see e.g.][]{solid_state_finite}. From eq.~\ref{Hij_def}, for instance, we can see that the matrix $\mathbf{H}$ is dense and, therefore, the asymptotic computational complexity of the matrix setup is of cubic order in the total number of basis functions. Direct diagonalization of the Hamiltonian, even by the most efficient algorithms available today \citep[see e.g.][]{MRRR}, has the same cubic computational complexity in the number of basis functions due to the necessity of reducing the matrix to tridiagonal form. In addition, the memory required to store the full Hamiltonian matrix scales as the square of the number of basis functions, and this becomes an additional constraint when dealing with even moderate-sized systems. \subsubsection{Set up of matrix-vector products} \label{subsubsec:matrix_vector_prods} To avoid the computational difficulties associated with direct diagonalization, we choose to employ matrix-free iterative methods for computing the occupied eigenspace of the Hamiltonian matrix \citep[see][for a detailed discussion of this class of methods]{Saad_large_eigenvalue_book}. As the name suggests, these methods do not need access to the individual matrix entries but only require matrix-vector products to be specified. To see how the product of a given vector with the Hamiltonian matrix may be calculated efficiently, without explicit involvement of the coupling coefficients, we proceed as follows. Let $\mathbf{Y} \in \mathbb{C}^{\mathit{d}}$ be a given vector and let $\mathbf{Z} \in \mathbb{C}^{d}$ be the result of the matrix-vector product, that is, $\mathbf{Z} = {\mathbf{H}\,\mathbf{Y}}$.
In terms of components we have : \begin{align} \nonumber \mathbf{Z}_{\alpha} &= \sum_{\beta = 1}^{\mathit{d}} \mathbf{H}_{\alpha,\beta}\,\mathbf{Y}_{\beta}\, = \sum_{\beta = 1}^{\mathit{d}}\bigg(\frac{1}{2}\delta_{\alpha,\beta}\, \Lambda_{{\cal J}(\alpha)}+\sum_{(\tilde{l},\tilde{m},\tilde{n})\in \Gamma}\!\hat{V}^{\text{eff}}_{\tilde{l},\tilde{m},\tilde{n}}\;{\cal W}^{{\cal J}(\alpha)}_{{\cal J}(\beta)\,,\,(\tilde{l},\tilde{m},\tilde{n})}\bigg)\mathbf{Y}_{\beta}\\ &= \frac{1}{2}\Lambda_{{\cal J}(\alpha)}\mathbf{Y}_{\alpha} + \sum_{\beta = 1}^{\mathit{d}}\!\bigg(\sum_{(\tilde{l},\tilde{m},\tilde{n})\in \Gamma}\!\hat{V}^{\text{eff}}_{\tilde{l},\tilde{m},\tilde{n}}\;{\cal W}^{{\cal J}(\alpha)}_{{\cal J}(\beta)\,,\,(\tilde{l},\tilde{m},\tilde{n})}\bigg)\mathbf{Y}_{\beta}\,. \intertext{The second term, by making use of eq.~\ref{what_is_W} and the linearity of the inner product, can be written as:} \nonumber &=\sum_{\beta = 1}^{\mathit{d}}\sum_{(\tilde{l},\tilde{m},\tilde{n})\in \Gamma}\!\hat{V}^{\text{eff}}_{\tilde{l},\tilde{m},\tilde{n}}\,\mathbf{Y}_{\beta}\,\biginnprod{F_{\tilde{l},\tilde{m},\tilde{n}}\,F_{{\cal J}(\beta)}\,}{F_{{\cal J}(\alpha)}}{\Lpspc{2}{}{{\cal B}_R}} \\\nonumber &= \biginnprod{\bigg(\sum_{(\tilde{l},\tilde{m},\tilde{n})\in \Gamma}\!\hat{V}^{\text{eff}}_{\tilde{l},\tilde{m},\tilde{n}}F_{\tilde{l},\tilde{m},\tilde{n}}\bigg)\,\bigg(\sum_{\beta = 1}^{\mathit{d}}\mathbf{Y}_{\beta} F_{{\cal J}(\beta)}\bigg)\,}{F_{{\cal J}(\alpha)}}{\Lpspc{2}{}{{\cal B}_R}}\,. \intertext{We recognize the first term in parentheses as the expansion of the effective potential in our basis set, and therefore, the final expression for $\mathbf{Z}_{\alpha}$ becomes:} \label{matvec_final} \mathbf{Z}_{\alpha} &= \frac{1}{2}\Lambda_{{\cal J}(\alpha)}\mathbf{Y}_{\alpha} + \biginnprod{{V}^{\text{eff}}\,\bigg(\sum_{\beta = 1}^{\mathit{d}}\mathbf{Y}_{\beta} F_{{\cal J}(\beta)}\bigg)\,}{F_{{\cal J}(\alpha)}}{\Lpspc{2}{}{{\cal B}_R}}\,. 
\end{align} Eq.~\ref{matvec_final} shows that the Hamiltonian-times-vector product can be computed in two stages whose results are summed. First, the kinetic energy operator is applied in reciprocal space, where it is diagonal. In the second stage, the action of the effective potential operator, which is diagonal in real space, is computed. Thus, given the vector $\mathbf{Y}$, we regard its components $\{\mathbf{Y}_{\alpha}\}_{\alpha = 1}^{\mathit{d}}$ as expansion coefficients and perform an inverse basis transform to obtain a function $Y$ defined on the gridpoints in $B$. We then multiply $Y$ pointwise with the effective potential (also defined over $B$) and finally compute a forward basis transform of the product $({V}^{\text{eff}}\cdot Y)$ to obtain the result of the second stage. The principal computational cost of this process arises from a pair of basis transforms, and therefore the associated time and space complexities are $O({\cal L}^3{\cal N} + {\cal L}^2{\cal N}^2)$ and $O({\cal L}^2{\cal N})$ respectively. In contrast, a direct matrix-vector product, once the Hamiltonian matrix has been set up, would involve $O({\cal L}^4{\cal N}^2)$ complexity in both time and memory. Note that the discussion above does not take into account the role played by non-local pseudopotentials. When such non-local terms are present, the Hamiltonian described above has an additional projection operator term that acts on the given vector $\mathbf{Y}$. The action of this additional operator can be computed directly as a dense linear algebra operation.
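The two-stage structure can be illustrated by a minimal Python sketch in which a one-dimensional periodic Fourier basis (with FFTs) stands in for the spherical basis transforms; the grid size and potential are hypothetical:

```python
import numpy as np

# 1-D periodic Fourier analogue of the two-stage matrix-free product:
# the kinetic term is diagonal in reciprocal space, the potential term is
# diagonal in real space, and FFTs connect the two representations.
d = 64                                   # number of basis functions
x = 2.0 * np.pi * np.arange(d) / d       # real-space grid
k = np.fft.fftfreq(d, d=1.0 / d)         # integer wavenumbers
Lam = k ** 2                             # Laplacian eigenvalues
V_eff = 1.0 + 0.5 * np.cos(x)            # effective potential on the grid

def apply_H(Y):
    """Z = H Y without forming H: kinetic part applied in reciprocal
    space; potential part via inverse transform, pointwise multiply,
    and forward transform."""
    kinetic = 0.5 * Lam * Y
    Y_real = np.fft.ifft(Y)                  # inverse basis transform
    potential = np.fft.fft(V_eff * Y_real)   # pointwise product, forward transform
    return kinetic + potential
```

The potential $1 + \tfrac{1}{2}\cos x$ couples each coefficient only to its two neighbours, so the result of one product can be checked against the known convolution.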
\subsubsection{Computation of the electron density} \label{subsubsec:electronic_density} Using the expansion of the wavefunctions (eq.~\ref{phi_expansion}) as well as the expression in eq.~\ref{density_expression}, we see that the electron density admits expansion coefficients of the form: \begin{align} \label{tau_lmn_expression} \tau_{l',m',n'}= 2\sum_{j=1}^{N_e/2}\sum_{\Gamma}\sum_{\Gamma}{\cal W}^{(l',m',n')}_{(l,m,n)\,,\, (\tilde{l},\tilde{m},\tilde{n})}\,\hat{\phi}^{j}_{\tilde{l},\tilde{m},\tilde{n}}\, \overline{\hat{\phi}^{j}_{l,-m,n}}\;. \end{align} Two comments are in order at this stage. First, since the coupling coefficients are non-zero only when $\abs{l-\tilde{l}}\leq l'\leq l+\tilde{l}$ and we have $0 \leq l,\tilde{l} \leq ({\cal L} - 1)$, we see that $\tau_{l',m',n'}$ may have non-zero values for all $l'$ satisfying $0 \leq l' \leq 2({\cal L} - 1)$. Thus, due to the quadratic non-linearity in eq.~\ref{density_expression}, the electron density needs to be represented using a basis set that is larger than the one used to represent the wavefunctions. A similar situation arises in the context of the plane-wave method, where the electron density expansion sometimes employs a larger energy cutoff than the wavefunctions \citep{Large_scale_plane_wave}. Often, however, plane-wave codes allow the so-called \textit{dualing approximation} to be engaged, whereby the electron density is expanded using the same basis set as the wavefunctions \citep{Large_scale_plane_wave}. In the same vein, our implementation allows a larger basis set (as well as a correspondingly denser real space grid) to be employed for the electron density, based on user choice. The second comment is that, while eq.~\ref{tau_lmn_expression} illustrates the basis-set requirements for computing the electron density, a direct application of this expression to compute the expansion coefficients is prohibitively expensive.
In terms of the basis set size and the number of electrons involved, the time complexity of evaluating that expression is $O(N_e\mathit{d}^3)$. Instead, starting from the expansion coefficients of the wavefunctions, we may compute the real space representations of the wavefunctions using inverse basis transforms. We may then use eq.~\ref{density_expression} to compute the electron density at the grid points in $B$ and finally apply a forward basis transform to obtain the required expansion coefficients of the density. This method reduces the time complexity to $O(N_e({\cal L}^3{\cal N} + {\cal L}^2{\cal N}^2))$ and, in practice, turns out to be much more efficient. The methods for computation of the electron density (as described above) and matrix-vector products (described earlier in section \ref{subsubsec:matrix_vector_prods}) are both based on the general idea of evaluating convolution sums through efficient transforms \citep{Spectral_Methods_book}. While this technique has been used quite commonly in fluid dynamics simulations for both spherical and periodic domains \citep{orszag_1, orszag_2}, its application to Kohn-Sham density functional theory appears to have been confined to the plane-wave method \citep{Hutter_abinitio_MD}, and spherical basis function based methods seem to have ignored it \citep[see e.g.][]{solid_state_finite,spherical_averaged_jellium}. However, the two different methods of evaluating the convolution sums (i.e., direct application of eq.~\ref{tau_lmn_expression} vs.\ the transform method described above) can lead to very large differences in computation times. To illustrate this point, we carried out the computation of the electron density coefficients from randomly generated families of single electronic states using both methods.
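The transform-based route just described amounts, in a one-dimensional periodic Fourier analogue of the basis transforms, to the following sketch (all sizes hypothetical):

```python
import numpy as np

# 1-D Fourier analogue of the transform-based density evaluation:
# inverse-transform each occupied state to the grid, accumulate
# 2 * |psi|^2, and forward-transform the result.
d, n_occ = 64, 3
rng = np.random.default_rng(2)
phi_hat = rng.standard_normal((n_occ, d)) + 1j * rng.standard_normal((n_occ, d))

def density_coefficients(phi_hat):
    """Expansion coefficients of rho = 2 * sum_j |psi_j|^2, computed with
    one inverse and one forward transform per state block."""
    psi = np.fft.ifft(phi_hat, axis=1)            # states on the grid
    rho = 2.0 * np.sum(np.abs(psi) ** 2, axis=0)  # pointwise density
    return np.fft.fft(rho)                        # forward transform
```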
While using the direct method, we used hash functions \citep{rasch_wu_hash} for expedited computation of the coupling coefficients\footnote{Storage of all the coupling coefficients becomes memory intensive quite quickly.} that appear in eq.~\ref{tau_lmn_expression}. For both the direct method as well as the transform method, we varied the angular and radial cutoffs independently in order to directly observe the computational scalings of the density calculation routine.\footnote{For both routines, we investigated the angular cutoff range $10 \leq {\cal L} \leq 60$ and the radial cutoff range $10 \leq {\cal N} \leq 60$.} Figure~\ref{fig:density_scaling} shows that the transform routine has far better scaling behavior than the direct routine for both discretization parameters. In practice, the computation run times for both these routines can differ by many orders of magnitude\footnote{Similar conclusions can be drawn about the matrix-vector product computation routines described earlier in section \ref{subsubsec:matrix_vector_prods}.} as Table~\ref{table:direct_vs_transform} shows. The angular and radial cutoffs that were chosen for the comparison in that table are very typical for obtaining acceptable levels of convergence in the total energies of small sized cluster systems containing a few light metallic atoms. \begin{figure}[ht] \centering \includegraphics[scale=0.28]{./figures/Fig1.eps} \caption{Observed scaling of the density computation routine using the direct method (blue curves) and the transform method (red curves) with increasing discretization parameter. Both the angular cutoff variation (curves with triangles) as well as the radial cutoff variation (curves with circles) are shown. 
Slopes of the fitted lines (black) indicate scaling behavior.} \label{fig:density_scaling} \end{figure} \begin{table}[ht] \begin{center} \begin{tabular}{ | c | c | c |} \hline Transform method & Direct method: & Direct method:\\ & without hash functions & with hash functions\\\hline\hline $1.00$ & $2.18 \times 10^6$ & $1.71 \times 10^6$\\\hline \end{tabular} \caption{Relative timings of the density computation routines for ${\cal L} = 30, {\cal N} = 30$, normalized by the transform method timing.} \label{table:direct_vs_transform} \end{center} \end{table} \subsection{Computation of the potentials} \label{subsec:potential_terms} In this section, we describe the computation of the various potential terms, with particular attention to the Hartree and pseudopotential terms. The exchange-correlation terms (within the Local Density Approximation) are evaluated directly using the real space representation of the electron density and require no further comment. \subsubsection{Computation of the Hartree potential} \label{subsubsec:hartree_potential} The Hartree potential at a point ${\bf x} \in {\cal B}_R$ is given as: \begin{align} \label{Hartree_potential} V_H({\bf x}) = \int_{{\cal B}_R} \!\frac{\rho({\bf y})}{\abs{{\bf x}-{\bf y}}}\;d{\bf y}\;. \end{align} One of the most popular approaches to computing this quantity is to employ Poisson solvers \citep{Gavini_Kohn_Sham, BigDFT, Chelikowsky_Saad_1}. Our approach to the computation of $V_H$ is to deal directly with eq.~\ref{Hartree_potential} by exploiting the so-called Laplace expansion \citep{Jackson_ElectroDyn} of the Green's kernel: \begin{align} \frac{1}{\abs{{\bf x}-{\bf y}}}=\displaystyle\sum_{l=0}^{\infty}\frac{4\pi}{2l+1}\sum_{m=-l}^{m=l} \frac{r^l_{<}}{r^{l+1}_{>}}\,\overline{{\cal Y}_{l}^{m}(\vartheta_{{\bf x}},\varphi_{{\bf x}})}\, {\cal Y}_{l}^{m}(\vartheta_{{\bf y}},\varphi_{{\bf y}})\;.
\label{Laplace_expansion} \end{align} In the equation above, $r_{<}=\min{(r_{{\bf x}},r_{{\bf y}})}$, $r_{>}=\max{(r_{{\bf x}},r_{{\bf y}})}$, and $(r_{{\bf x}},\vartheta_{{\bf x}},\varphi_{{\bf x}})$ and $(r_{{\bf y}},\vartheta_{{\bf y}},\varphi_{{\bf y}})$ denote ${\bf x}$ and ${\bf y}$ in spherical coordinates respectively. For a typical point ${\bf y} \in {\cal B}_R$, the electron density $\rho$ is available through a basis expansion as: \begin{align} \label{rho_expansion_spherical} \rho(r_{{\bf y}},\vartheta_{{\bf y}},\varphi_{{\bf y}})&=\displaystyle\sum_{\hat{\Gamma}}\tau_{\hat{l},\hat{m},\hat{n}}\, {\cal R}_{\hat{l},\hat{n}}(r_{{\bf y}})\, {\cal Y}_{\hat{l}}^{\hat{m}}(\vartheta_{{\bf y}},\varphi_{{\bf y}})\;, \end{align} with $\hat{\Gamma}$ denoting the same basis set as $\Gamma$, or a larger one, depending on whether or not the dualing approximation has been used. Now, if $d\breve{{\bf y}}$ denotes the volume element in the sphere ${\cal B}_R$, that is, $d\breve{{\bf y}}=r_{{\bf y}}^2\,\sin\vartheta_{{\bf y}}\,dr_{{\bf y}}d\vartheta_{{\bf y}} d\varphi_{{\bf y}}$, then substituting eqs.~\ref{Laplace_expansion} and \ref{rho_expansion_spherical} into eq.~\ref{Hartree_potential} and using orthonormality of the spherical harmonics, we get: \begin{align} \nonumber {V}_H(r_{{\bf x}},\vartheta_{{\bf x}},\varphi_{{\bf x}})&=\sum_{l=0}^{\infty}\frac{4\pi}{2l+1}\sum_{m=-l}^{m=l} \sum_{\hat{\Gamma}} \tau_{\hat{l},\hat{m},\hat{n}} {\cal Y}_{l}^{m}(\vartheta_{{\bf x}},\varphi_{{\bf x}})\\\nonumber &\quad\times\;\int_{{\cal B}_R}\!
\frac{r^l_{<}}{r^{l+1}_{>}}\,\overline{{\cal Y}_{l}^{m}(\vartheta_{{\bf y}},\varphi_{{\bf y}})} {\cal R}_{\hat{l},\hat{n}}(r_{{\bf y}})\, {\cal Y}_{\hat{l}}^{\hat{m}}(\vartheta_{{\bf y}},\varphi_{{\bf y}})\,d\breve{{\bf y}}\,,\\\nonumber &=\sum_{l=0}^{\infty}\frac{4\pi}{2l+1}\sum_{m=-l}^{m=l} \sum_{\hat{\Gamma}} \tau_{\hat{l},\hat{m},\hat{n}}\, {\cal Y}_{l}^{m}(\vartheta_{{\bf x}},\varphi_{{\bf x}})\,\delta_{l,\hat{l}}\,\delta_{m,\hat{m}}\\\nonumber &\quad\quad\quad\times\; \int_{r_{{\bf y}}=0}^{r_{{\bf y}}=R}\!\frac{r^{\hat{l}}_{<}}{r^{\hat{l}+1}_{>}}\,{\cal R}_{\hat{l},\hat{n}}(r_{{\bf y}})\, r_{{\bf y}}^2\,dr_{{\bf y}}\,, \intertext{which we may rewrite as,} \label{hartree_calc1} &:=\displaystyle \sum_{\hat{\Gamma}}\frac{4\pi}{2\hat{l}+1}\,\tau_{\hat{l},\hat{m},\hat{n}}\, {\cal Y}_{\hat{l}}^{\hat{m}}(\vartheta_{{\bf x}},\varphi_{{\bf x}})\,\mathfrak{Z}_{\hat{l},\hat{n}}(r_{{\bf x}})\;. \end{align} This suggests that computing the Hartree potential from the electron density expansion coefficients is very much like performing an inverse basis transform. The key difference is that the functions $\mathfrak{Z}_{\hat{l},\hat{n}}(r)$ need to be used, instead of the usual radial basis functions ${\cal R}_{{l},{n}}(r)$, while carrying out the radial part of the calculation. If the $\mathfrak{Z}_{\hat{l},\hat{n}}(r)$ functions are pre-computed and stored, the method described here turns out to be extremely efficient: in our implementation, the entire calculation of obtaining the real space representation of ${V}_H$, starting from the real space representation of $\rho$, consumes less than $0.03$\% of the total time of a typical SCF cycle.\footnote{{This happens to be true even for the largest example systems considered later in this paper. From the discussion above, it is clear that the performance and scaling of the electrostatics terms will depend entirely on the efficiency and scalability of the basis transforms themselves. 
Various aspects of performance and scaling of the basis transforms are discussed throughout this paper. We do mention, however, that in order to obtain better scalability of the electrostatics computation routines, we adopted a two-level scheme that uses a process grid (the same grid discussed in Section \ref{subsec:two_level_scheme}) to obtain parallelization in the radial variable and in the radial basis function number (while using eqs.~\ref{hartree_calc1} and \ref{rho_expansion_spherical}), thus effectively reducing communication loads.}} The functions $\mathfrak{Z}_{\hat{l},\hat{n}}(r_{{\bf x}})$ may be written as follows: \begin{align} \nonumber \mathfrak{Z}_{\hat{l},\hat{n}}(r_{{\bf x}}) =&\;\frac{1}{RJ_{\hat{l}+\frac{3}{2}}(b^{\hat{n}}_{\hat{l}+\frac{1}{2}})}\, \int_{r_{{\bf y}}=0}^{r_{{\bf y}}=R} \!\frac{r^{\hat{l}}_{<}}{r^{\hat{l}+1}_{>}}\,\sqrt{\frac{2}{r_{{\bf y}}}}\, J_{\hat{l}+\frac{1}{2}}\bigg(\frac{b^{\hat{n}}_{\hat{l}+\frac{1}{2}}}{R}r_{{\bf y}}\bigg)\,r_{{\bf y}}^2\,dr_{{\bf y}}\,, \\\nonumber :=&\;\sqrt{2R}\,\,\tilde{\mathfrak{Z}}_{\hat{l},\hat{n}}(s)\,,\text{with}\, s = r_{{\bf x}}/R\;\text{and}\; s \in [0,1]\;, \\\nonumber \tilde{\mathfrak{Z}}_{\hat{l},\hat{n}}(s)=&\;\frac{1}{J_{\hat{l}+\frac{3}{2}}(b^{\hat{n}}_{\hat{l}+\frac{1}{2}})}\,\bigg[\frac{1}{s^{\hat{l}+1}} \int_{0}^{s}r_{1}^{\hat{l}+\frac{3}{2}} J_{\hat{l}+\frac{1}{2}}\bigg({b^{\hat{n}}_{\hat{l}+\frac{1}{2}}}r_{1}\bigg)\,dr_{1}\\ &\quad\quad\quad\quad\quad\quad +s^{\hat{l}} \int_{s}^{1}\frac{1}{r_{1}^{\hat{l}-\frac{1}{2}}} J_{\hat{l}+\frac{1}{2}}\bigg({b^{\hat{n}}_{\hat{l}+\frac{1}{2}}}r_{1}\bigg)\,dr_{1} \bigg]\,, \label{zeta_definition} \end{align} and $r_1$ simply denotes an integration variable. The integrals in eq.~\ref{zeta_definition} can be carried out numerically using Gauss quadrature.
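A minimal Python sketch of this quadrature is given below; the Bessel-function zero $b$ is assumed to be precomputed (for $\hat{l}=0$ the zeros of $J_{1/2}$ are simply $\hat{n}\pi$), and the inner term carries the $s^{-(\hat{l}+1)}$ prefactor that follows from $r_<^{\hat{l}}/r_>^{\hat{l}+1}$:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import jv

def zeta_tilde(l, b, s, n_quad=200):
    """Evaluate the scaled radial kernel tilde-Z_{l,n}(s) by Gauss-Legendre
    quadrature of its two integrals; b is the (precomputed) relevant
    positive zero of J_{l+1/2}, and 0 < s <= 1."""
    nu = l + 0.5
    t, w = leggauss(n_quad)

    def quad(a, c, f):
        # Gauss-Legendre rule mapped from [-1, 1] onto [a, c].
        r = 0.5 * (c - a) * t + 0.5 * (c + a)
        return 0.5 * (c - a) * np.sum(w * f(r))

    inner = quad(0.0, s, lambda r: r ** (l + 1.5) * jv(nu, b * r))
    outer = quad(s, 1.0, lambda r: jv(nu, b * r) / r ** (l - 0.5))
    return (inner / s ** (l + 1) + outer * s ** l) / jv(l + 1.5, b)
```

At $s=1$ the outer integral vanishes and the identity $\int_0^1 r^{\nu+1} J_{\nu}(b r)\,dr = J_{\nu+1}(b)/b$ gives $\tilde{\mathfrak{Z}}_{0,1}(1) = 1/\pi$ for $b=\pi$, which provides a convenient consistency check.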
In our implementation, we have computed $\tilde{\mathfrak{Z}}_{\hat{l},\hat{n}}(s)$ accurately for a large number of values of $\hat{l}$ and $\hat{n}$ over a fine grid of values over $[0,1]$ and stored the results. The values of $\tilde{\mathfrak{Z}}_{\hat{l},\hat{n}}(s)$ at other values of $s \in [0,1]$ are computed using cubic spline interpolation as and when required. During an actual simulation, this procedure is used to quickly set up the functions $\mathfrak{Z}_{\hat{l},\hat{n}}(r_{{\bf x}})$ at the different radial grid points before the first SCF step. \subsubsection{Computation of the pseudopotential terms} \label{susubbsec:pseudo_pot_terms} Modern pseudopotentials usually consist of local and non-local terms \citep{Martin_ES}. We now look at how each of these terms can be evaluated within our framework. The total local pseudopotential at a point ${\bf x} \in {\cal B}_R$ is a combination of terms of the form: \begin{align} \label{local_pseudo_def} {V}_{\text{nu}}({\bf x}) = \sum_{j = 1}^{M} {V}^{j}_{\text{nu}}(\abs{{\bf x} - {\bf x}_j})\;, \end{align} where each of the functions ${V}^{j}_{\text{nu}}$ is reasonably smooth.\footnote{These functions are actually in $\mathsf{C}^{\infty}(\mathbb{R})$ for the pseudopotentials considered in this work. They have a somewhat lower regularity for the popular Troullier-Martins pseudopotentials \citep{Troullier_Martins_pseudo}.} By observing that ${V}_{\text{nu}}$ consists of radially symmetric terms which are centered at the atoms while the basis functions are centered at the origin, it is possible to make use of L\"{o}wdin transformations \citep{Lowdin_1956_quantum} to directly arrive at the expansion coefficients of local pseudopotential terms \citep{solid_state_finite,Broglia_original_paper}. Our method for dealing with the local pseudopotential however, is to directly evaluate eq.~\ref{local_pseudo_def} at the gridpoints in $B$. 
This is because the local pseudopotential enters the Kohn-Sham calculation through the total effective potential which, as described earlier, is required in its real space representation during the computation of matrix-vector products. The reciprocal space representation of the local pseudopotential can be evaluated by carrying out forward basis transforms, if required. Non-local pseudopotentials are used in electronic structure methods in order to account for the effect of the inert core electrons on the chemically active valence electrons, without directly introducing these core states into the calculation \citep{Martin_ES,LeBris_ReviewBook}. From a computational point of view, the inclusion of a non-local pseudopotential means that a projection operator needs to be added to the Hamiltonian while performing matrix-vector products or while computing the total energies and forces. In general, this projection operator can be written as the sum of atom-centered rank-one operators. By definition, the action of a rank-one operator ${\cal O} = p_1\otimes p_2$ on a function $f \in \Lpspc{2}{}{{\cal B}_R}$ is simply given as ${\cal O}\,f = \innprod{p_2}{f}{\Lpspc{2}{}{{\cal B}_R}}\;p_1$. {In our implementation, we first evaluate each of the projector functions $p_1$ and $p_2$ on the underlying real space grid, following which we compute and store their expansion coefficients by means of basis transforms (ahead of the first SCF step). From these expansion coefficients, the action of the projector $p_1\otimes p_2$ on an electronic state can be carried out in reciprocal space as the action of a rank-one operator as described above.} The collective action of all the atom-centered projectors on all the electronic states can be conveniently described through a pair of matrix-matrix multiplications.
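As an illustration, the collective projector action on a block of states reduces to the following pair of dense products (all array names and sizes are hypothetical):

```python
import numpy as np

# Columns of P1, P2 hold the expansion coefficients of the projector
# functions; columns of X hold the electronic states.
rng = np.random.default_rng(0)
d, n_proj, n_states = 50, 4, 6
P1 = rng.standard_normal((d, n_proj)) + 1j * rng.standard_normal((d, n_proj))
P2 = rng.standard_normal((d, n_proj)) + 1j * rng.standard_normal((d, n_proj))
X = rng.standard_normal((d, n_states)) + 1j * rng.standard_normal((d, n_states))

def apply_projectors(X):
    """sum_k <p2_k, x> p1_k for every state x, as two dense products."""
    inner = P2.conj().T @ X      # (n_proj, n_states): all inner products at once
    return P1 @ inner            # (d, n_states): weighted sums of the p1_k
```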
Instead of this reciprocal space formulation of the non-local pseudopotential terms, it is possible to carry out this calculation more efficiently in real space by making use of the fact that the functions $p_1$ and $p_2$ are usually short-ranged in real space. Some additional care is required to ensure that aliasing errors are avoided in this approach \citep{king_smith_non_local_pseudo} and, therefore, we intend to explore this methodology in future work. \section{Implementation} \label{sec:implementation_details} We outline various implementation-related issues and solution strategies in this section. In particular, we discuss methods of obtaining the occupied eigenspace of the Hamiltonian, as well as parallelization aspects of some of the key routines and procedures employed in our method. \subsection{Diagonalization using LOBPCG} \label{subsec:LOBPCG} As remarked earlier, efficient eigensolvers for iterative diagonalization of the Hamiltonian matrix are necessary for dealing with large systems. Perhaps the most commonly used diagonalization method in \textit{ab initio} calculations is the band-by-band conjugate gradient algorithm for direct minimization of the total energy \citep{Teter_Payne_Allan_1, Teter_Payne_Allan_2}, later modified to fit the iterative diagonalization framework \citep{Kleinman_Bylander_band_by_band_CG}. In this work, we have adopted the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method \citep{LOBPCG_1}. The LOBPCG algorithm has much stronger theoretical support \citep{LOBPCG_support}, has been shown to outperform the traditional preconditioned conjugate gradient method \citep{ABINIT_LOBPCG}, and, owing to its robustness, has found application in numerous electronic structure methods \citep{Meza_Yang_DCM, E_Lin_LOBPCG_F, Octopus_LOBPCG, ABINIT_LOBPCG}.
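For illustration, SciPy's \texttt{lobpcg} can be driven in this matrix-free fashion; the stand-in Hamiltonian below (a diagonal kinetic part plus a small symmetric coupling, with a simple diagonal preconditioner) merely mimics the transform-based product of section \ref{subsubsec:matrix_vector_prods}:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lobpcg

rng = np.random.default_rng(1)
d, n_occ = 200, 5
lam = np.arange(1.0, d + 1.0)            # stand-in Laplacian eigenvalues
C = 0.01 * rng.standard_normal((d, d))
V = 0.5 * (C + C.T)                      # small symmetric "potential" coupling

def matvec(y):
    # Matrix-free product: diagonal kinetic term plus dense coupling.
    y = np.asarray(y).ravel()
    return 0.5 * lam * y + V @ y

H = LinearOperator((d, d), matvec=matvec, dtype=np.float64)
# Simple diagonal (Jacobi-style) preconditioner approximating H^{-1}.
M = LinearOperator((d, d), matvec=lambda y: np.asarray(y).ravel() / (0.5 * lam),
                   dtype=np.float64)

X0 = rng.standard_normal((d, n_occ))     # random initial block
vals, vecs = lobpcg(H, X0, M=M, largest=False, tol=1e-8, maxiter=200)
```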
When dealing with relatively small-sized example systems (approximately a couple of hundred electrons), we have used the LOBPCG method exclusively to carry out diagonalization in all SCF steps. For some of the larger example systems described later, we used the LOBPCG method only in the first SCF step so as to generate a good guess for the Chebyshev polynomial filtered subspace iteration algorithm (described later) that was used in the subsequent SCF steps. \subsubsection{Implementation details} \label{subsubsec:lobpcg_implementation_details} Our implementation of the LOBPCG method follows the algorithmic steps outlined in \cite{LOBPCG_3}. This allowed us to take advantage of \textit{soft locking} whenever some eigenvectors converged faster than others,\footnote{This can have a considerable impact on speeding up SCF iterations -- in many example systems, we found that diagonalization via LOBPCG in the first SCF step is about 1.5 -- 2 times slower than in SCF steps which are close to attaining convergence. The eigenvectors in the latter are already close to their converged values and therefore soft locking allows for faster progression of LOBPCG.} as well as \textit{hard locking}, which allowed us to carry out deflation against fixed orthonormal constraints. The latter proves to be particularly useful in calculations on large systems, since the electronic states may be too numerous to fit all required eigenstates into one LOBPCG block. This is primarily due to the large computational demands of the Rayleigh-Ritz step used by LOBPCG. A second detail is related to the use of Cholesky factorization for orthonormalization (of the residual vectors and conjugate directions) in the LOBPCG implementation. This technique is computationally more efficient but is also known to be less reliable than the traditional approach involving QR factorization \citep{LOBPCG_3}.
Computation of the Cholesky decomposition of poorly scaled matrices is often required by the LOBPCG method, and so it is crucial either to use a Cholesky decomposition that is numerically invariant with respect to matrix scaling, or to scale the columns of such matrices\footnote{We are grateful to Andrew Knyazev (Mitsubishi Electric Research Laboratories) for his consistent support and suggestions during our implementation of LOBPCG, and in particular, for pointing out the stability issues related to Cholesky factorization.} before performing the factorization \citep{Knyazev_email}. In addition, our experience has been that numerical noise or round-off errors (arising from the transform-based matrix-vector product computations, for instance) can sometimes cause the Cholesky factorization or the Rayleigh-Ritz procedure to become unstable. In these situations, we have always found it useful to restart the LOBPCG iterations (discarding the computed conjugate direction and residual vectors) by using the most recently computed block of eigenvectors as the initial guess of a fresh set of iterations. This simple strategy seems to result in a much more robust implementation and does not introduce any computational bottlenecks. \subsubsection{Use of the Teter-Payne-Allan preconditioner} \label{subsubsec:lobpcg_Teter preconditioner} The need for a good preconditioner for use with the LOBPCG method has been emphasized in \citep{LOBPCG_3, LOBPCG_support}. A majority of the generic preconditioners that have been developed over the years are aimed at sparse systems. These are unsuitable for our purposes because our method is matrix-free and, moreover, the underlying matrix involved is dense. Fortunately, the presence of the Laplacian operator in the Kohn-Sham eigenvalue problem suggests a viable preconditioner \citep{Teter_Payne_Allan_1, Teter_Payne_Allan_2}.
These authors introduced a preconditioner within the context of the plane-wave method that has the particular advantage of being diagonal in reciprocal space (and is therefore inexpensive to apply). The formal similarities of our spectral method with the plane-wave method allowed us to directly adopt the diagonal preconditioner introduced by these authors. Specifically, we used a preconditioning matrix $\textbf{T}_{p_{\alpha,\beta}}$ of the form: \begin{align} \textbf{T}_{p_{\alpha,\beta}} = \frac{27 + 18g + 12g^2 + 8g^3}{27 + 18g + 12g^2 + 8g^3 + 16g^4}\;\delta_{\alpha,\beta}\,, \end{align} where $g$ is the ratio of the Laplacian eigenvalue to the kinetic energy of the residual vector on which the preconditioner is being applied, i.e., denoting the residual vector as ${\bf Y} \in \mathbb{C}^{\mathit{d}}$, \begin{align} \displaystyle g =\Lambda_{{\cal J}(\alpha)} \bigg /\bigg( \frac{1}{2}\sum_{\beta = 1}^{\mathit{d}}\Lambda_{{\cal J}(\beta)}\abs{{\bf Y}_\beta}^2\bigg )\;. \end{align} As $g$ approaches zero, the preconditioner elements approach one with vanishing derivatives up to third order, and so $\textbf{T}_{p}$ leaves the low energy states unchanged. On the other hand, above $g = 1$, $\textbf{T}_{p}$ asymptotically approaches the inverse Laplacian, thus suitably damping out the high kinetic energy states that are responsible for ill-conditioning. As can be seen from Figure~\ref{fig:c_teter_prec}, this simple and inexpensive preconditioner makes a marked difference in the rate of convergence of the residuals in LOBPCG. The particular system used for the demonstration was an 18-atom Barium cluster for which 4000 basis functions were used and only the linear part of the Kohn-Sham equations was solved. This preconditioner was therefore adopted in all further calculations wherever the LOBPCG solver was employed. \begin{figure}[ht] \centering \includegraphics[scale=0.28]{./figures/Fig2.eps} \caption{Effect of the diagonal preconditioner on LOBPCG iterations.
The average residual for the first few eigenvectors has been plotted against the iteration number.} \label{fig:c_teter_prec} \end{figure} \subsubsection{Parallelization scheme and scaling} \label{subsubsec:lobpcg_parallel_elemental} Our strategy for parallelization of the LOBPCG method is to carry out the relevant linear algebra operations using a distributed memory dense linear algebra library. For this purpose, we have adopted the state-of-the-art numerical library\footnote{We are grateful to Jack Poulson (Georgia Tech.), the lead author of the Elemental package, for his suggestions and help with the package.} Elemental \citep{Elemental_Poulson}. This library has been designed to be a more scalable, easier-to-interface successor of the ScaLAPACK \citep{ScaLAPACK_1, ScaLAPACK_2} and PLAPACK \citep{PLAPACK_1, PLAPACK_2} libraries, which have already found widespread use in other electronic structure codes. Elemental uses an element-wise block-cyclic distribution of matrices over a two-dimensional grid of processors.\footnote{See Figure~\ref{fig:data_redistrib} for an example of how the data is distributed among processors.} The Message Passing Interface (MPI) is used for interprocess communication, while linear algebra operations that are local to each process are carried out by making calls to (serial) BLAS and LAPACK libraries.\footnote{To ensure maximum use of hardware resources, our code was linked to machine optimized BLAS and LAPACK libraries.} The dimensions of the process grid that underlies the linear algebra operations in Elemental can have an impact on the resulting parallel efficiency of the LOBPCG routine. We have used square process grids in most cases. In some cases, however, we found that rectangular process grids in which the grid height exceeded the grid width seemed to result in better performance.
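Returning briefly to the Teter-Payne-Allan preconditioner of the previous subsection: applying it to a residual vector expressed in the Laplacian eigenbasis amounts to the elementwise operation below. This is an illustrative NumPy sketch with our own variable names, not the production code:

```python
import numpy as np

def apply_tpa_preconditioner(Y, lam):
    """Damp the high-kinetic-energy components of a residual vector Y.
    `lam` holds the Laplacian eigenvalue associated with each basis
    function; `g` is the ratio of each eigenvalue to the kinetic energy
    of the residual, as in the text."""
    ke = 0.5 * np.sum(lam * np.abs(Y) ** 2)  # kinetic energy of Y
    g = lam / ke                             # one ratio per coefficient
    num = 27 + 18 * g + 12 * g**2 + 8 * g**3
    return (num / (num + 16 * g**4)) * Y     # diagonal preconditioner
```

Components with small $g$ pass through essentially unchanged, while components with large $g$ are damped roughly as $1/g$, mirroring the inverse Laplacian.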
In order to judge the parallel efficiency of our LOBPCG implementation, we studied the weak scaling of our routine in the following manner. We generated random dense Hermitian matrices of various sizes and computed the first few hundred eigenstates. We used between 16 and 512 CPU cores; the matrix size was increased in proportion to the number of cores used,\footnote{The computational platform details are described in a later section.} and the number of states computed was held constant. We measured the average time per LOBPCG step, and the results from this study are plotted in Figure~\ref{fig:lobpcg_scal_weak}. Keeping in mind that one of the most computationally expensive steps in the setting of dense matrices arises from parallel matrix-vector multiplications, in which the problem size grows quadratically\footnote{This is unlike the weak scaling studies presented in \citep{LOBPCG_3}, in which sparse matrices were used, resulting in linear growth of problem size with increasing matrix dimension.} with increasing matrix dimension \citep{poulson_parallel_matmul}, we have also plotted in Figure~\ref{fig:lobpcg_scal_weak} the computational complexity adjusted parallel efficiency. As is evident from the figure, the adjusted parallel efficiency remains above 90\% up to 256 CPU cores and drops to a little less than 80\% at 512 CPU cores.\footnote{{Here, as well as in Section \ref{subsubsec:scaling_perf_and_PG}, we have focused on weak scalability results. This is because we were primarily interested in ensuring that our code is able to handle even large materials systems within a reasonably constant wall-clock time. Our underlying assumption, of course, was that more computational resources would be allocated to the code, when necessary, in order to meet the wall-clock time requirements. We do mention, however, that the strong scaling of our LOBPCG routine is reasonably good, although it is not as encouraging as its weak scaling.
In a test involving the computation of a few hundred eigenstates of a randomly generated Hermitian matrix of dimension $40,960 \times 40,960$, the strong parallel efficiency was about $60$\% for 128 MPI processes and dropped to about $40$\% for 256 MPI processes. More details may be found in \citep{My_PhD_Thesis}.}} \begin{figure}[ht] \centering \includegraphics[scale=0.28]{./figures/Fig3.eps} \caption{Weak parallel scaling behavior of the LOBPCG implementation, measured by the time taken per LOBPCG step vs. the number of MPI processes employed, while keeping the problem size per process constant.} \label{fig:lobpcg_scal_weak} \end{figure} \subsection{Chebyshev polynomial filter accelerated subspace iterations} \label{subsec:Chebyshev_polynomial_filter} Subspace iteration algorithms constitute a generalization of the classical power iteration approach to the computation of eigenpairs \citep{Saad_large_eigenvalue_book, Online_Template_Eigenvalue_Problem_Book}. These methods allow the computation of multi-dimensional invariant subspaces rather than one eigenvector at a time. Since neither the electron density nor the total Kohn-Sham energy depends explicitly on the eigenvectors of the Hamiltonian, but only on the occupied subspace, subspace iterations have often been used for electronic structure calculations \citep{stephan1998improved, bekas2005computing, baroni1992towards}. The Chebyshev polynomial filtered SCF iteration technique for computing the occupied eigenspace of the Kohn-Sham operator was introduced in \cite{Serial_Chebyshev, Parallel_Chebyshev}, and was originally presented in the context of the finite difference method. However, this method has enjoyed success in conjunction with finite elements as well \citep{Gavini_higher_order}.
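The core filtering operation behind this technique can be sketched with the standard three-term Chebyshev recurrence. The minimal dense version below is our own simplification (it omits the scaling safeguards used in practice to prevent overflow); it damps eigencomponents lying in the unwanted interval $[a,b]$ while amplifying those below it:

```python
import numpy as np

def chebyshev_filter(H, X, degree, a, b):
    """Apply a degree-`degree` Chebyshev polynomial of H to the block X.
    The affine map t = (H - c I)/e sends the unwanted interval [a, b]
    to [-1, 1], where Chebyshev polynomials stay bounded; outside that
    interval they grow exponentially, amplifying the wanted components."""
    e = (b - a) / 2.0                  # half-width of [a, b]
    c = (b + a) / 2.0                  # center of [a, b]
    Y = (H @ X - c * X) / e            # degree-1 term: T_1(t) X
    X_prev = X                         # degree-0 term: T_0(t) X = X
    for _ in range(2, degree + 1):
        # three-term recurrence: T_{k+1}(t) = 2 t T_k(t) - T_{k-1}(t)
        Y_next = 2.0 * (H @ Y - c * Y) / e - X_prev
        X_prev, Y = Y, Y_next
    return Y
```

Only matrix-vector products with $H$ are needed, which is what makes the approach attractive whenever such products are cheap.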
The method can be thought of as a form of non-linear subspace iteration which takes advantage of the fact that the eigenvectors of the Hamiltonian need not be computed accurately at every SCF step, since the Hamiltonians involved are themselves approximate. This exploits the non-linear nature of the problem, in the sense that the technique removes the emphasis on accurately solving the intermediate linearized Kohn-Sham eigenvalue problems. By means of spectral mapping, the method employs the exponential growth of the Chebyshev polynomials outside the interval $[-1,1]$ to damp out the unwanted part of the spectrum of the Hamiltonian, thus accelerating the subspace iterations. Although the Chebyshev polynomial filtered SCF iteration technique does not seem to have been adopted in the plane-wave method literature so far, it was apparent to us that, as long as one has access to efficient matrix-vector product routines, the technique is likely to yield large savings compared to traditional diagonalization methods. This is primarily because the orthonormalization and other linear algebra operation costs that accompany traditional diagonalization methods are minimal in this method. \subsubsection{Implementation details} \label{subsusec:cheby_implementation} The Chebyshev polynomial filtered SCF iteration technique is presently the workhorse of most medium- to large-sized computations carried out using the ClusterES package. In our implementation of this method, we first {obtain a guess for the initial electron density by linearly superposing precomputed atomic electron densities.
Next, having computed the potentials, we use the LOBPCG method (on a collection of randomly generated vectors used as an initial guess\footnote{{This appears to be a fairly common practice in the literature -- see, for example, \citep{Teter_Payne_Allan_1, zhou_2014_chebyshev}.}}) to obtain a good eigenbasis of the Hamiltonian for the first SCF step.} This serves as a good guess for the occupied subspace of the Hamiltonian at self-consistency.\footnote{Typically, a few extra states (about 10 -- 20) are included from the LOBPCG calculation so that the Rayleigh-Ritz step (used in the Chebyshev filtering method) and finite-temperature Fermi-Dirac smearing (used for metallic systems) can be employed.} The Chebyshev polynomial filtered subspace iterations begin after this first SCF step and attempt to adaptively improve the initial guess subspace by polynomial filtering.\footnote{{We recently became aware of techniques which can replace the first diagonalization step with filtering alone \citep{zhou_2014_chebyshev}. We intend to adopt this methodology into our code in the near future.}} In the original presentation of the Chebyshev filtering method, the authors used the DGKS algorithm \citep{DGKS} for orthonormalization of the basis vectors of the occupied subspace. In the spirit of the LOBPCG method as well as the RMM-DIIS method \citep{Kresse_abinitio_iterative}, we have instead used the faster (but less stable) Cholesky factorization method. This sped up the orthonormalization calculation by a factor of 2--3 in most cases, and we have not witnessed any problematic side effects. As described in \cite{Serial_Chebyshev, Parallel_Chebyshev}, the bulk of the Chebyshev filtering algorithm consists of evaluating the polynomial filter using matrix-vector products. The additional linear algebra operations involved take the form of scaling and shifting, orthonormalization and the Rayleigh-Ritz step.
Therefore, as in our implementation of the LOBPCG method, we used the Elemental package and its underlying process grid structure for carrying out these dense linear algebra operations in parallel. Table~\ref{table:cheby_vs_lobpcg} shows the performance gains obtained through our use of the Chebyshev filtered subspace iteration method when compared to LOBPCG. For the two examples presented in that table, each typical Chebyshev filtered SCF step is about 10 -- 20 times faster than each typical LOBPCG based SCF step, while the total number of SCF steps used by both methods to reach similar levels of convergence is about the same. Thus, there are enormous savings in the total computation time for the examples presented. It seems likely that for larger material systems, the savings are even greater. \begin{table}[ht] \begin{center} \resizebox{13.5cm}{!}{ \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{} & No. of Basis & No. of & No. of & No. of & Ratio of LOBPCG \\ Material & functions & electronic & LOBPCG & Chebyshev & step time to\\ System & used & states used & SCF steps & SCF steps & Chebyshev step time\\\hline\hline 172 atom & & & & & \\ Aluminum & 512000 & 280 & 22 & 23 & 20.2\\ FCC cluster & & & & & \\ \hline $\text{C}_{60}$ & & & & & \\ Buckyball & 343000 & 136 & 11 & 13 & 12.3 \\ \hline \end{tabular} } \caption{Performance of Chebyshev Filtered SCF iterations compared against LOBPCG based SCF iterations.} \label{table:cheby_vs_lobpcg} \end{center} \end{table} \subsection{Parallelization of matrix-vector products and electron density computation: two-level scheme} \label{subsec:two_level_scheme} For systems containing up to a few thousand electronic states, the Hamiltonian matrix-vector product routine is one of the main computationally intensive steps in the LOBPCG method, and it is the principal one in the Chebyshev filtering method. These methods typically require the product of the Hamiltonian with a block of vectors to be computed.
Due to our use of the two-dimensional process grid for carrying out dense linear algebra operations (see Sections \ref{subsubsec:lobpcg_parallel_elemental} and \ref{subsusec:cheby_implementation}), the block of vectors that needs to be multiplied with the Hamiltonian already appears distributed over the process grid. Specifically, the states are distributed over the process grid columns, and each state is further distributed over the process grid rows. In this situation, it is natural to parallelize the matrix-vector product over the different Kohn-Sham states (i.e., band/state parallelization, as it is often called in the plane-wave literature), since this involves no communication between the processors that store the different states. However, we observe that the inverse basis transform requires access to all the expansion coefficients that constitute an entire state (eq.~\ref{eq:G_lmr}), while the forward basis transform requires access to function values at all the grid points (eqs.~\ref{Ar_lm}, \ref{Arlm_to_coeff}). Since the process grid for the linear algebra operations distributes each state over the process grid rows, the basis transforms induce communication within process grid columns. The data redistribution over the process grid that is required for these basis transforms is shown schematically in Figure~\ref{fig:data_redistrib}. A crucial detail is that the forward and inverse spherical harmonics transforms (which constitute the bulk of the operations involved in the basis transforms) can be performed independently over the various radial grid points. Thus, we may adopt a second level of parallelization by distributing real space quantities over the radial grid points, since this ensures that the basis transform routines are mostly communication free. Figure~\ref{fig:big_figure} shows a schematic outline of the individual steps of the matrix-vector computation procedure over the process grid.
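Stripped of the distribution and spherical-harmonic details, the per-state structure of the transform-based product is: inverse transform to the real space grid, pointwise multiplication by the local potential, forward transform back, plus the part that is diagonal in the basis. The NumPy stand-in below is schematic, with a generic unitary matrix `U` playing the role of the inverse basis transform; all names are ours:

```python
import numpy as np

def apply_hamiltonian(coeffs, lam, V, U):
    """Schematic H @ x for a single state.  `lam` holds the Laplacian
    eigenvalues (the kinetic term is diagonal in the basis), `V` is the
    local potential sampled on the real space grid, and `U` maps basis
    coefficients to grid values; U^H is the forward transform for an
    orthonormal basis."""
    psi = U @ coeffs                 # inverse transform: coefficients -> grid
    v_psi = V * psi                  # pointwise multiply by the potential
    v_coeffs = U.conj().T @ v_psi    # forward transform: grid -> coefficients
    return 0.5 * lam * coeffs + v_coeffs
```

In the two-level scheme, the columns of a block of such states are handled by different process grid columns, while the grid rows split the radial nodes of `psi` and `V`.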
\begin{figure}[ht] \centering \begin{subfigure}[b]{\textwidth} \centering \begin{tabular}{|c|c|} \hline 0 & 2 \\\hline 1 & 3 \\\hline \end{tabular} \caption{Numbering of processes in a $2\times2$ process grid. } \label{fig:proc_grid} \end{subfigure} \begin{subfigure}[b]{\textwidth} \footnotesize \centering $\left(\begin{array}{ccccc} 0 & 2 & 0 & 2\\ 1 & 3 & 1 & 3\\ 0 & 2 & 0 & 2\\ 1 & 3 & 1 & 3\\ 0 & 2 & 0 & 2\\ 1 & 3 & 1 & 3 \end{array}\right)$ \begin{picture}(60,60) \thicklines \put(0,15){\vector(1,0){60}} \put(0,20){Fwd. Trans.} \put(60,6){\vector(-1,0){60}} \put(8,-5){Inv. Trans.} \end{picture} $\left(\begin{array}{cccc} \{0,1\} & \{2,3\} & \{0,1\} & \{2,3\} \\ \{0,1\} & \{2,3\} & \{0,1\} & \{2,3\} \\ \{0,1\} & \{2,3\} & \{0,1\} & \{2,3\} \\ \{0,1\} & \{2,3\} & \{0,1\} & \{2,3\} \\ \{0,1\} & \{2,3\} & \{0,1\} & \{2,3\} \\ \{0,1\} & \{2,3\} & \{0,1\} & \{2,3\} \end{array}\right)$ \caption{Individual matrix entry ownership (by processes) during basis transforms.} \label{fig:data_ownership} \end{subfigure} \caption{Example of data redistribution during forward and inverse transforms. A process grid of dimension $2\times2$ has been used for storing 4 electronic states and the number of expansion coefficients for each state (i.e., basis set size) is 6. Inter process communication occurs only along process grid columns during the transforms.} \label{fig:data_redistrib} \end{figure} The two level parallelization scheme described above is in the spirit of similar schemes adopted by some large scale plane-wave codes \citep{Large_scale_plane_wave, Gygi_2D_parallel}. Due to this scheme, the only communication involved during the matrix vector product calculations is over individual process grid columns: one time during the broadcast of the different portions of a particular state while the inverse basis transform occurs and a second time during computation of the radial quadratures while the forward basis transform occurs. 
The important point, however, is that the communication load is reduced from the total number of processors involved to roughly the square root of the total number of processors (if a square process grid is in use). Also, due to the distribution of various real space quantities over the radial grid points, the memory overhead is reduced as well. The computation of the electron density from the Kohn-Sham states can also be made to follow this two-level parallelization scheme. In this case, while computing the electron density in real space (from the Kohn-Sham states in reciprocal space), communication occurs once along individual process grid columns during computation of the inverse basis transform, and a second time along individual process grid rows while summing the results from the different Kohn-Sham states according to eq.~\ref{density_expression}. Once again, this means that the communication load scales roughly as the square root of the number of processors. \begin{figure}[ht] \centering \begin{subfigure}[h]{0.45\textwidth} \centering \scalebox{0.73}{ \begin{picture}(235,170) \thicklines \multiput(55, 130)(120, 0 ){2}{\oval(100,70)} \multiput(55, 40)(120, 0 ){2}{\oval(100,70)} \put(38, 150){\color{red}{Proc. 0}} \put(38, 60){\color{red}{Proc. 1}} \put(158, 150){\color{red}{Proc. 2}} \put(158, 60){\color{red}{Proc.
3}} \put(24, 130){{${\bf X}^0_{0,2,4}\;,\;{\bf X}^2_{0,2,4}$}} \put(16, 110){{$V(r_0,\ldots,r_{5};\varpi)$}} \put(24, 40){{${\bf X}^0_{1,3,5}\;,\;{\bf X}^2_{1,3,5}$}} \put(14, 20){{$V(r_6,\ldots,r_{11};\varpi)$}} \put(144, 130){{${\bf X}^1_{0,2,4}\;,\;{\bf X}^3_{0,2,4}$}} \put(136, 110){{$V(r_0,\ldots,r_5;\varpi)$}} \put(144, 40){{${\bf X}^1_{1,3,5}\;,\;{\bf X}^3_{1,3,5}$}} \put(134, 20){{$V(r_6,\ldots,r_{11};\varpi)$}} \end{picture}} \caption{Initial distribution.} \label{fig1:big_fig_1} \end{subfigure} \quad \begin{subfigure}[h]{0.45\textwidth} \centering \scalebox{0.73}{ \begin{picture}(235,170) \thicklines \multiput(55, 130)(120, 0 ){2}{\oval(100,70)} \multiput(55, 40)(120, 0 ){2}{\oval(100,70)} \put(38, 150){\color{red}{Proc. 0}} \put(38, 60){\color{red}{Proc. 1}} \put(158, 150){\color{red}{Proc. 2}} \put(158, 60){\color{red}{Proc. 3}} \put(24, 130){{${\bf X}^0_{0,\ldots,5}\;,\;{\bf X}^2_{0,\ldots,5}$}} \put(16, 110){{$V(r_0,\ldots,r_{5};\varpi)$}} \put(24, 40){{${\bf X}^0_{0,\ldots,5}\;,\;{\bf X}^2_{0,\ldots,5}$}} \put(14, 20){{$V(r_6,\ldots,r_{11};\varpi)$}} \put(144, 130){{${\bf X}^1_{0,\ldots,5}\;,\;{\bf X}^3_{0,\ldots,5}$}} \put(136, 110){{$V(r_0,\ldots,r_5;\varpi)$}} \put(144, 40){{${\bf X}^1_{0,\ldots,5}\;,\;{\bf X}^3_{0,\ldots,5}$}} \put(136, 20){{$V(r_6,\ldots,r_{11};\varpi)$}} \put(50,95){\color{blue}\vector(0,-1){20}} \put(60,75){\color{blue}\vector(0,1){20}} \put(170,95){\color{blue}\vector(0,-1){20}} \put(180,75){\color{blue}\vector(0,1){20}} \end{picture}} \caption{Column-wise communication.} \label{fig:big_fig_2} \end{subfigure} \begin{subfigure}[h]{0.45\textwidth} \centering \scalebox{0.73}{ \begin{picture}(235,170) \thicklines \multiput(55, 130)(120, 0 ){2}{\oval(100,70)} \multiput(55, 40)(120, 0 ){2}{\oval(100,70)} \put(38, 150){\color{red}{Proc. 0}} \put(38, 60){\color{red}{Proc. 1}} \put(158, 150){\color{red}{Proc. 2}} \put(158, 60){\color{red}{Proc. 
3}} \put(12, 140){{${\bf X}^0(r_0,\ldots,r_5;\varpi)$}} \put(12, 125){{${\bf X}^2(r_0,\ldots,r_5;\varpi)$}} \put(14, 110){{$V(r_0,\ldots,r_{5};\varpi)$}} \put(10, 50){{${\bf X}^0(r_6,\ldots,r_{11};\varpi)$}} \put(10, 35){{${\bf X}^2(r_6,\ldots,r_{11};\varpi)$}} \put(12, 20){{$V(r_6,\ldots,r_{11};\varpi)$}} \put(132, 140){{${\bf X}^1(r_0,\ldots,r_5;\varpi)$}} \put(132, 125){{${\bf X}^3(r_0,\ldots,r_5;\varpi)$}} \put(134, 110){{$V(r_0,\ldots,r_{5};\varpi)$}} \put(130, 50){{${\bf X}^1(r_6,\ldots,r_{11};\varpi)$}} \put(130, 35){{${\bf X}^3(r_6,\ldots,r_{11};\varpi)$}} \put(132, 20){{$V(r_6,\ldots,r_{11};\varpi)$}} \end{picture}} \caption{Real space conversion at every radial node (eq.~\ref{eq:G_lmr} + inverse spherical harmonic transforms).} \label{fig1:big_fig_3} \end{subfigure} \quad \begin{subfigure}[h]{0.45\textwidth} \centering \scalebox{0.73}{ \begin{picture}(235,170) \thicklines \multiput(55, 130)(120, 0 ){2}{\oval(100,70)} \multiput(55, 40)(120, 0 ){2}{\oval(100,70)} \put(38, 150){\color{red}{Proc. 0}} \put(38, 60){\color{red}{Proc. 1}} \put(158, 150){\color{red}{Proc. 2}} \put(158, 60){\color{red}{Proc. 3}} \put(10, 135){{$V\!{\bf X}^0(r_0,\ldots,r_5;\varpi)$}} \put(10, 115){{$V\!{\bf X}^2(r_0,\ldots,r_5;\varpi)$}} \put(7, 45){{$V\!{\bf X}^0(r_6,\ldots,r_{11};\varpi)$}} \put(7, 25){{$V\!{\bf X}^2(r_6,\ldots,r_{11};\varpi)$}} \put(130, 135){{$V\!{\bf X}^1(r_0,\ldots,r_5;\varpi)$}} \put(130, 115){{$V\!{\bf X}^3(r_0,\ldots,r_5;\varpi)$}} \put(127, 45){{$V\!{\bf X}^1(r_6,\ldots,r_{11};\varpi)$}} \put(127, 25){{$V\!{\bf X}^3(r_6,\ldots,r_{11};\varpi)$}} \end{picture}} \caption{Point-wise multiplication of total local potential $V$ with each state in real space.} \label{fig:big_fig_4} \end{subfigure} \caption{Schematic of the steps involved in computing the Hamiltonian times block of vectors product using the $2\times2$ process grid. Only the local part of the total potential is considered here. 
The block of vectors ${\bf X}$ has 4 states (denoted with superscripts) with 6 expansion coefficients (labelled using subscripts) used for each state. The real space grid has 12 points $r_0,\ldots,r_{11}$ in the radial direction. The angular grid is left unspecified here and denoted as $\varpi = (\vartheta,\varphi)$. Real space quantities are shared along process grid columns by distributing the radial nodes.} \label{fig:big_figure} \end{figure} \begin{figure}[ht] \ContinuedFloat \centering \begin{subfigure}[h]{0.45\textwidth} \centering \scalebox{0.73}{ \begin{picture}(235,170) \thicklines \multiput(55, 130)(120, 0 ){2}{\oval(100,70)} \multiput(55, 40)(120, 0 ){2}{\oval(100,70)} \put(38, 150){\color{red}{Proc. 0}} \put(38, 60){\color{red}{Proc. 1}} \put(158, 150){\color{red}{Proc. 2}} \put(158, 60){\color{red}{Proc. 3}} \put(8, 132){{${\bf Y}^{0}(r_0,\ldots,r_5;l,m)$}} \put(8, 112){{${\bf Y}^{2}(r_0,\ldots,r_5;l,m)$}} \put(6, 42){{${\bf Y}^{0}(r_6,\ldots,r_{11};l,m)$}} \put(6, 22){{${\bf Y}^{2}(r_6,\ldots,r_{11};l,m)$}} \put(128, 132){{${\bf Y}^{1}(r_0,\ldots,r_5;l,m)$}} \put(128, 112){{${\bf Y}^{3}(r_0,\ldots,r_5;l,m)$}} \put(126, 42){{${\bf Y}^{1}(r_6,\ldots,r_{11};l,m)$}} \put(126, 22){{${\bf Y}^{3}(r_6,\ldots,r_{11};l,m)$}} \end{picture}} \caption{Spherical harmonic transforms at every radial node (eq.~\ref{Ar_lm}).} \label{fig1:big_fig_5} \end{subfigure} \quad \begin{subfigure}[h]{0.45\textwidth} \centering \scalebox{0.73}{ \begin{picture}(235,170) \thicklines \multiput(55, 130)(120, 0 ){2}{\oval(100,70)} \multiput(55, 40)(120, 0 ){2}{\oval(100,70)} \put(38, 150){\color{red}{Proc. 0}} \put(38, 60){\color{red}{Proc. 1}} \put(158, 150){\color{red}{Proc. 2}} \put(158, 60){\color{red}{Proc. 
3}} \put(15, 132){{$\mathfrak{Q}_{0}^{5}\big[{\bf Y}^{0}(r;l,m)\big]$}} \put(15, 112){{$\mathfrak{Q}_{0}^{5}\big[{\bf Y}^{2}(r;l,m)\big]$}} \put(15, 42){{$\mathfrak{Q}_{6}^{11}\big[{\bf Y}^{0}(r;l,m)\big]$}} \put(15, 22){{$\mathfrak{Q}_{6}^{11}\big[{\bf Y}^{2}(r;l,m)\big]$}} \put(137, 132){{$\mathfrak{Q}_{0}^{5}\big[{\bf Y}^{1}(r;l,m)\big]$}} \put(137, 112){{$\mathfrak{Q}_{0}^{5}\big[{\bf Y}^{3}(r;l,m)\big]$}} \put(137, 42){{$\mathfrak{Q}_{6}^{11}\big[{\bf Y}^{1}(r;l,m)\big]$}} \put(137, 22){{$\mathfrak{Q}_{6}^{11}\big[{\bf Y}^{3}(r;l,m)\big]$}} \end{picture}} \caption{Partial radial quadratures using local radial nodes.} \label{fig1:big_fig_6} \end{subfigure} \begin{subfigure}[h]{0.45\textwidth} \centering \scalebox{0.73}{ \begin{picture}(235,170) \thicklines \multiput(55, 130)(120, 0 ){2}{\oval(100,70)} \multiput(55, 40)(120, 0 ){2}{\oval(100,70)} \put(38, 150){\color{red}{Proc. 0}} \put(38, 60){\color{red}{Proc. 1}} \put(158, 150){\color{red}{Proc. 2}} \put(158, 60){\color{red}{Proc. 3}} \put(24, 124){{${\bf Y}^0_{0,2,4}\;,\;{\bf Y}^2_{0,2,4}$}} \put(24, 34){{${\bf Y}^0_{1,3,5}\;,\;{\bf Y}^2_{1,3,5}$}} \put(144, 124){{${\bf Y}^1_{0,2,4}\;,\;{\bf Y}^3_{0,2,4}$}} \put(144, 34){{${\bf Y}^1_{1,3,5}\;,\;{\bf Y}^3_{1,3,5}$}} \put(50,95){\color{blue}\vector(0,-1){20}} \put(60,75){\color{blue}\vector(0,1){20}} \put(170,95){\color{blue}\vector(0,-1){20}} \put(180,75){\color{blue}\vector(0,1){20}} \end{picture}} \caption{Column-wise communication to evaluate radial quadratures (eq.~\ref{Arlm_to_coeff}) from partial results.} \label{fig:big_fig_7} \end{subfigure} \caption{(Continued) Schematic of computation of the Hamiltonian times block of vectors product over $2\times2$ process grid. The product of the Hamiltonian with the block of vectors is denoted as ${\bf Y}$ here. 
The symbol $\mathfrak{Q}_{i}^{j}\big[t(r)\big]$ is used to denote a partial radial quadrature (i.e., evaluation of eq.~\ref{Arlm_to_coeff}) over the radial nodes $r_i,\ldots,r_j$.} \end{figure} \subsubsection{Scaling performance and process grid geometry choice} \label{subsubsec:scaling_perf_and_PG} We now discuss the scaling performance of the matrix-vector product routine. In order to properly assess and interpret the scaling performance, we need to be mindful of the two-dimensional nature of the underlying parallelization scheme. In particular, the choice of an optimal process grid geometry for a fixed basis set size may be made as follows. First, with the given basis set size, we observe the computation time for the matrix-vector product routine using only one electronic state. We increase the process grid height (keeping the process grid width fixed at 1) until optimum performance is reached. Due to increasing communication costs during basis transforms, the performance saturates beyond a sufficiently large process grid height. In Figure~\ref{subfig:single_band_saturation}, we used one million basis functions for our study, and for this basis set size saturation occurs\footnote{Note that while the number of basis functions used was one million, the number of radial points used was only $200$. This helps explain why the parallelization based on decomposition of the real space grid (which is based on the radial variable) saturates relatively quickly, at a grid height of 16.} after a grid height of 16. Now, keeping the process grid height at 16, the process grid width may be varied in proportion to the number of Kohn-Sham states that are required for the calculation.
The reason for this strategy is clear from Figure~\ref{subfig:grid_width_scaling} -- the two-level parallelization scheme is able to exploit the embarrassingly parallel nature of the problem with respect to the number of Kohn-Sham states in use, and this is reflected in the nearly perfect weak scaling performance of the code.\footnote{A weak scaling parallel efficiency of over 98\% is attained with a process grid width of 32, i.e., a total of 512 processes. We changed the process grid width (from 1 to 32, in multiples of 2) in proportion to the number of states for the weak scaling study, thus varying the total number of MPI processes between 16 and 512.} For this same reason, we have also been able to verify that the strong scaling behavior of our code remains highly favorable\footnote{The strong parallel efficiency remains well above 97\% at a process grid width of 32 (i.e., 512 total MPI processes). {For testing the strong scalability, we kept the basis set size as well as the number of Kohn-Sham states constant (at $10^6$ and $256$ respectively). The process grid height was kept constant, while the process} {grid width was allowed to vary with increase in the number of MPI processes employed. More details may be found in \citep{My_PhD_Thesis}.}} even at 512 total MPI processes. We anticipate that the scaling performance is likely to remain at such favorable levels for even larger numbers of processors. \begin{figure}[ht] \centering \begin{subfigure}[h]{\textwidth} \centering \includegraphics[scale=0.20]{./figures/Fig6a.eps} \caption{Normalized matrix-vector product computation time variation with increasing process grid height. A single Kohn-Sham state and $10^6$ basis functions were used for the computation.
Saturation occurs at a grid height of 16.\\} \label{subfig:single_band_saturation} \end{subfigure} \begin{subfigure}[h]{\textwidth} \centering \includegraphics[scale=0.19]{./figures/Fig6b.eps} \caption{Normalized matrix-vector product computation time variation with increasing process grid width and proportionate increase in the number of Kohn-Sham states ($10^6$ basis functions for each state). The grid height was kept fixed at 16 for the blue curve, and so a total of 16 to 512 MPI processes were used.} \label{subfig:grid_width_scaling} \end{subfigure} \caption{Parallel scaling efficiency of the matrix-vector product routine and its dependence on process grid geometry.}\label{fig:matvec_scaling} \end{figure} Although this straightforward method for obtaining optimum process grid geometries (based on insights into the scaling performance) is useful, in practice it is sometimes possible to get even better performance from individual process grid configurations, depending on the specific number of basis functions and states in use. Some examples of such cases are also shown\footnote{The problem sizes used for these individual examples were commensurate with the total number of processes in use, i.e., the $8\times8$ and $16\times4$ process grids were made to use the same number of processes, for example.} in Figure~\ref{subfig:grid_width_scaling}. The overall conclusion that we were able to draw from these studies is that, of the two levels of parallelism available in our implementation, state parallelism is more effective and scalable than physical domain parallelism. The scaling performance of the electron density computation routine can be understood along similar lines. We omit further details since, unlike the time spent on matrix-vector products, the time spent on computing the electron density is typically a relatively minor fraction of the total time spent in every SCF step.
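The geometry-selection heuristic above (grow the grid height until the transforms saturate, then give the remaining processes to the state-parallel width) can be written down directly. The toy sketch below is our own; the saturation height of 16 is specific to the $10^6$-basis-function study and would differ for other problem sizes:

```python
def choose_process_grid(n_procs, saturation_height=16):
    """Pick (height, width) for the process grid: cap the height at the
    empirically observed saturation point of the basis transforms, then
    assign all remaining processes to the state-parallel dimension."""
    height = min(saturation_height, n_procs)
    while n_procs % height:        # require an exact grid factorization
        height -= 1
    return height, n_procs // height
```

As noted in the text, individual configurations can beat this rule of thumb, so in practice it only provides a starting point.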
\subsection{Miscellaneous details} \label{subsec:misc_impl} We briefly outline miscellaneous implementation related details in this section. \subsubsection{Mixing and smearing schemes} \label{subsubsec:mixing_smearing} As mentioned earlier, SCF iterations typically employ mixing schemes in order to accelerate convergence towards the fixed point of the Kohn-Sham map \citep{Martin_ES}. The importance of mixing schemes in SCF iterations has been recognized both empirically and theoretically \citep{dederichs_zeller}, leading to the development of various methods over the years \citep{anderson_mixing, broyden_mixing, pulay_mixing, johnson_modified_broyden, cances_mixing, secant_mixing_saad}. We employed the multiple stage Anderson mixing scheme \citep{anderson_mixing, Kohanoff} in this work. Our implementation allows for mixing of the total effective potentials or of the electron density. We have found that potential mixing tends to result in faster convergence of the total energies in most systems. A complete mixing history was used in all the examples, and the associated linear mixing parameter was between 0.1 and 0.3. Regardless of the mixing procedure, materials systems which have small or no band gaps (metallic systems, for instance) tend to experience convergence issues in the SCF iterations. This occurs due to degenerate energy levels near the Fermi surface in these systems, and it usually manifests itself as \textit{charge sloshing} \citep{Kresse_abinitio_MD}. A common solution to this problem is to introduce \textit{smearing} of the Fermi surface by prescribing a distribution of occupation numbers for the Kohn-Sham states \citep{Kohanoff}. We implemented the widely used Fermi-Dirac distribution for this purpose.
This scheme introduces electronic-temperature-dependent orbital occupations as: \begin{align} f_{i} &= \frac{1}{1+\exp\big(\frac{\lambda_i - \epsilon_F}{K_B\,\Theta}\big)}\,, \intertext{in which the Fermi level $\epsilon_F$ can be determined by solving the constraint:} \label{eq:Fermi_constraint} \int_{\mathbb{R}^3}\!\rho &= N_e\quad \implies \sum_{i=1}^{N_e / 2}f_{i} = N_e / 2\,. \end{align} We solved eq.~\ref{eq:Fermi_constraint} using Brent's method \citep{Brent_Method_Book} and we set the electronic temperature $\Theta$ to 100--200 Kelvin for all simulations where Fermi-Dirac smearing was used. \subsubsection{The ClusterES package} \label{subsubsec:ClusterES_package} We have incorporated all the methods and algorithms discussed so far into an efficient and reliable package called ClusterES (Cluster Electronic Structure). Since our package makes heavy use of Spherical Harmonics Transforms, access to optimized and efficient routines for carrying out these transforms is essential for good performance of our code. We adopted the state-of-the-art SHTns\footnote{We are grateful to Nathana{\"e}l Schaeffer (CNRS, France) for his help and support with the SHTns library.} library \citep{shtns} for this purpose. In spite of using a traditional cubic-order algorithm for computation (as opposed to algorithms which are asymptotically faster, e.g. \cite{Mohlenkamp_SHT}) this library has been shown to far outperform other Spherical Harmonics Transform routines because of its use of various hardware-level optimizations \citep{shtns}. The spherical Bessel functions and the Associated Legendre polynomials required for various computations in our code were generated using routines from the GNU Scientific Library \citep{GSL_manual}. Evaluation of the Gauss quadrature weights and nodes was carried out using the algorithm presented in \citep{Golub_Gauss_Quadrature}. Computation of the roots of the Bessel functions was carried out by Halley's method \citep{CS_phase_encyclopedia}.
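The Fermi-level solve of eq.~\ref{eq:Fermi_constraint} can be illustrated with a short self-contained sketch. Plain bisection stands in here for the Brent's method used in the text, the clamping of the exponent is a numerical safeguard of our own, and the Boltzmann constant is quoted in Hartree per Kelvin.

```python
import math

K_B = 3.166811e-6  # Boltzmann constant in Hartree/Kelvin

def occupations(eigs, e_fermi, theta):
    """Fermi-Dirac occupations f_i for eigenvalues eigs at temperature theta."""
    occ = []
    for lam in eigs:
        x = (lam - e_fermi) / (K_B * theta)
        x = max(min(x, 700.0), -700.0)  # clamp to avoid overflow in exp
        occ.append(1.0 / (1.0 + math.exp(x)))
    return occ

def fermi_level(eigs, n_electrons, theta, tol=1e-12):
    """Solve sum_i f_i = n_electrons / 2 for the Fermi level by bisection."""
    lo, hi = min(eigs) - 1.0, max(eigs) + 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if sum(occupations(eigs, mid, theta)) > 0.5 * n_electrons:
            hi = mid  # too much charge: lower the Fermi level
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Example: four eigenvalues (Hartree), four electrons at 200 Kelvin.
eigs = [-0.5, -0.3, -0.1, 0.1]
ef = fermi_level(eigs, n_electrons=4, theta=200.0)
occ = occupations(eigs, ef, 200.0)
```

The occupation sum is monotone in $\epsilon_F$, so any bracketing root finder converges; Brent's method simply does so with fewer function evaluations than bisection.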
\subsubsection{Computational platform} \label{subsubsec:platform_details} All computations were carried out on the Itasca cluster of the Minnesota Supercomputing Institute. Itasca is an HP Linux cluster with 1,091 HP ProLiant BL280c G6 blade servers, each of which has two-socket, quad-core 2.8 GHz Intel Xeon X5560 ``Nehalem EP'' processors sharing 24 GB of system memory, with a 40 gigabit QDR InfiniBand (IB) interconnect. The GNU g++ compiler (ver. 4.8.1) along with Open MPI (ver. 1.7.1) was used, and all serial linear algebra and FFT operations were carried out using the hardware-optimized Intel Math Kernel Library (ver. 11.0). \section{Numerical results, example systems and applications} \label{sec:examples} In this section, we finally describe various numerical results obtained using our method. For all the computations described here, the radius of the spherical domain was chosen by following the procedure suggested in \citep{Chelikowsky_Saad_1}: We first center the cluster~/~molecular system under study at the origin and then ensure that the atom(s) in the system that are furthest from the origin are at least 8--16 atomic units away from the boundary of the sphere.\footnote{The specific choice is dictated by computing single-atom solutions and observing the decay rates of the electron density in such solutions. If multiple atom species are present, the atom with the slowest decay rate of the electron density is used to set the radius for the spherical domain in the case of the cluster system.} \subsection{Convergence properties} \label{subsec:convergence} We begin by studying the convergence properties of our method using numerical examples. First, we computed the ground state of the Hydrogen atom based on the Schr\"odinger equation (i.e., Kohn-Sham self-consistent iterations were not used).
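The hydrogen ground-state energy used below as an analytical reference can be recovered independently by the textbook variational argument (not part of our numerical method): for a trial wave function $\psi \propto e^{-ar}$ in Hartree units, $\langle H \rangle(a) = a^2/2 - a$, which is minimized at $a = 1$ with value $-0.5$ Hartrees.

```python
def hydrogen_variational_energy(a):
    """<H> for the trial wave function exp(-a r) in Hartree units:
    kinetic term a**2 / 2, potential (Coulomb) term -a."""
    return 0.5 * a * a - a

# Minimize over a coarse grid of exponents around a = 1.
exponents = [0.01 * k for k in range(1, 301)]
e_min = min(hydrogen_variational_energy(a) for a in exponents)
```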
This system has the particular advantage that the ground state energy is known analytically to be $-0.5$ Hartrees, and so it serves as an accurate reference for convergence studies. Since the ground state wave function is radially symmetric, we used only the $l = 0, m = 0$ spherical harmonic in the angular direction. Figure~\ref{fig:spectral_conv} shows the convergence of the numerical solution to the analytical one with increasing number of basis functions. Due to the Coulombic singularity in the nuclear potential, the plot is a straight line, indicating that the convergence rate is polynomial. Next, we replaced the Coulombic potential for Hydrogen with a smooth pseudopotential as parametrized in \cite{GTH_pseudoptential}. We computed the Kohn-Sham ground state of the Hydrogen (pseudo) atom with this pseudopotential for increasing values of ${\cal N}$, while using only the $l = 0, m = 0$ spherical harmonic in the angular direction in every case. We used the ${\cal N} = 50$ case as a reference\footnote{{We verified that this reference case agrees with results from a standard plane-wave code up to at least $10^{-5}$ Hartrees.}} and plotted the (logarithmic) relative errors with increasing basis set size (with respect to the reference) in Figure~\ref{fig:spectral_conv}. In this case, due to the smoothness of the potential used, the plot has an overall curvature, indicating a faster than polynomial rate of convergence (i.e., spectral convergence). \begin{figure}[ht] \centering \includegraphics[scale=0.28]{./figures/Fig7.eps} \caption{Convergence of numerical solutions with increasing number of basis functions.
The Coulombic nuclear potential shows a polynomial rate of convergence while smooth pseudopotential solutions show a faster than polynomial rate of convergence.} \label{fig:spectral_conv} \end{figure} Finally, in order to assess the convergence properties for a full-scale problem, we computed the ground state of a $2\times2\times2$ body centered cubic (BCC) cluster of Barium. This cluster system has 35 atoms. We employed the smooth `Evanescent Core' local pseudopotential \citep{EC_Fiolhais} for simulation of the Barium atoms. In order to be able to obtain results which can be compared to the literature, we followed \citep{Gavini_higher_order} in using a lattice constant of 9.5 a.u. and an electronic temperature of 200 K for our calculations. To make apparent the convergence rate of our method, we computed the ground state energy of this system using ${\cal L} = 100, {\cal N} = 100$ (i.e., one million basis functions) and used it as a reference value. Thereafter, starting from ${\cal L} = 7, {\cal N} = 7$ we computed the ground state energy for increasing values of ${\cal L}$ and ${\cal N}$ and plotted the (logarithmic) relative errors (compared to the reference value) as a function of the (logarithmic) basis set size, as shown in Figure~\ref{fig:spectral_conv}. We see from this figure that, because of our use of smooth pseudopotentials, once again the convergence is rapid: even on a logarithmic scale, there is an overall curvature in the plot, thus indicating a faster than polynomial rate of convergence (i.e., spectral convergence). Figure~\ref{fig:Ba2x2x2} shows contour plots of the electron density for the Barium cluster obtained using our code. For the Barium cluster, the ground state energy per atom obtained using our code is $-0.6386253$ Ha, which compares well with the value of $-0.6386277$ Ha obtained using a plane-wave code\footnote{The relative difference is of the order of $10^{-6}$.} in \citep{Gavini_higher_order}.
This indicates not only rapid convergence of our code but also convergence to the correct value. As a matter of further comparison, we mention that in order to reach these aforementioned numbers, the plane-wave code needed to use over two million plane-waves (mainly arising due to a large vacuum region that had to be used around the cluster) whereas our code used only $216,000$ basis functions. Even with approximately $55,000$ basis functions (a calculation that took only about 15 c.p.u. minutes on a laptop), we were able to reach convergence levels of about $2\times 10^{-4}$ eV/atom, which is an order of magnitude smaller than the usual levels of convergence demanded in accurate ground state energy calculations. Aside from the numerical observations presented here, we should point out that an application of the analysis presented in \citep{Zhou_finite_dimensional_numerical_analysis} rigorously establishes that our basis set correctly approximates the Kohn-Sham ground states. A full-scale mathematical investigation of the convergence rates of our basis set (similar to the results presented in \citep{Cances_planewave_numerical_analysis} in the context of the plane-wave method) is the subject of future work.\footnote{We are grateful to Eric Canc{\`e}s (Ecole des Ponts ParisTech, France) for providing us with useful suggestions in this direction.} \begin{figure}[ht] \centering \includegraphics[scale=0.32]{./figures/Fig8.eps} \caption{Electron density contour plot of the $2\times2\times2$ BCC Barium cluster computed using ClusterES.} \label{fig:Ba2x2x2} \end{figure} \subsection{Example calculations of various materials systems} \label{subsec:example_calculations} Having ascertained the convergence properties of our method, we now compute the ground state properties of various metallic and non-metallic materials systems using our code and compare our results with the literature.
Many of our examples related to metallic clusters (computed using local pseudopotentials) are motivated by recent work in finite element methods for the Kohn-Sham equations \citep{Gavini_Kohn_Sham, Gavini_higher_order}. These examples gave us a way of verifying and benchmarking our calculations, as well as helping to establish the relative ease and effectiveness with which such computations can be done routinely using our method. \subsubsection{All-electron calculations of light atoms} \label{subsubsec:light_atom_AE} We begin by computing the ground state electronic structures of the first few elements of the periodic table. This serves as a simple test of our implementation. No pseudopotential was used. That is, these are all-electron calculations. We used the parametrization of the Local Density Approximation as presented in \cite{perdew_zunger, ceperley_alder}. The results of our computations are shown in Table~\ref{table:light_atoms} and compared with values from the literature. \begin{table}[ht] \begin{center} \begin{tabular}{ | c || c | c | c |} \hline Element & ClusterES & Ref. \cite{Kotochigova_NIST} & Ref. \cite{Phanish_max_ent}\\\hline\hline Hydrogen & -0.445 & -0.445 & -0.445 \\\hline Helium & -2.833 & -2.834 & -2.830 \\\hline Lithium & -7.327* & -7.335 & -7.338 \\\hline Beryllium & -14.265* & -14.447 & -14.434 \\\hline \end{tabular} \caption{Ground state energies of a few light atoms (Hartree units used). Items marked with * indicate results where basis set convergence was not pursued due to the requirement of a large number of radial basis functions.} \label{table:light_atoms} \end{center} \end{table} While the results are largely positive, they also illustrate the difficulty that our code faces when dealing with all-electron calculations. In spite of the spherical symmetry of the ground state, the Coulombic singularity at the origin makes it necessary to use a large number of radial basis functions to converge towards expected results.
As the atomic number increases, so does the strength of the singularity, and hence the difficulty of the computation. Therefore, we did not pursue the Lithium and Beryllium atom calculations after we ascertained that our results were within about 1\% of the values from the literature. All subsequent calculations reported here employ pseudopotentials to mitigate this issue. \subsubsection{Local pseudopotential calculations} \label{subsubsec:local_pseudo} Having validated the basic correctness of our methodology and implementation using the all-electron atomic calculations, we now move to pseudopotential calculations. We first work with the smooth local `Evanescent Core' pseudopotential \citep{EC_Fiolhais}. This bulk-fitted pseudopotential has been designed to deal with various simple metallic systems, and because of the lack of non-local projectors, it is relatively computationally inexpensive. Due to the smoothness of the pseudopotential, we witnessed rapid convergence of our code with increasing basis set size in all the examples that follow. We first compute the ground state energies of various pseudo-atoms using the pseudopotential and compare with the values from the literature. The results displayed in Table~\ref{Table:ec_atoms} show perfect agreement. \begin{table}[ht] \begin{center} \begin{tabular}{ | c || c | c | c |} \hline Element & ClusterES & Ref. \citep{Nogueira_EC_transferability} & Ref. \cite{Gavini_Kohn_Sham}\\\hline\hline Lithium & -5.97 & -5.97 & -5.97 \\\hline Sodium & -5.21 & -5.21 & -5.21 \\\hline Magnesium & -23.06 & -23.06 & -23.05 \\\hline \end{tabular} \caption{Ground state energies of a few light pseudo-atoms (electron volt units used).} \label{Table:ec_atoms} \end{center} \end{table} Next, we computed the ground state properties of lithium and sodium dimers and octahedral clusters. We computed the binding energy (in electron volts per atom units) and the bond length (in atomic units) of these systems.
For the octahedral clusters, as in \citep{Nogueira_EC_transferability, Phanish_max_ent}, we did not perform any geometry optimization but only sought minima in terms of the nearest neighbour bond length. Also, following these authors, the cluster system ground states were computed without spin polarization while the individual atomic data used spin polarization. {For reference purposes, we also carried out well converged plane-wave calculations\footnote{{We employed an energy cutoff of $30$ Hartrees and a cell length of $30$ atomic units or more in ABINIT.}} on these cluster systems using the ABINIT \citep{Gonze_ABINIT_1} code.} The results are shown in Table~\ref{Table:Li_Na_clusters}. \begin{table}[ht] \begin{center} \begin{tabular}{ |c|c||c|c|c|c| } \hline {Cluster} & {Parameters} & {ClusterES} & {{Plane-wave}} & {Ref. \citep{Phanish_max_ent}} & {Ref.\citep{Nogueira_EC_transferability}} \\\hline\hline \multirow{2}{*}{Li$_2$} & Binding Energy & {-0.48} & {-0.48} & -0.49 & -0.52\\ & Bond Length & {4.75} & {4.75} & 4.86 & 4.92 \\\hline \multirow{2}{*}{Na$_2$} & Binding Energy & {-0.35} & {-0.35} & -0.36 & -0.46\\ & Bond Length & {5.71} & {5.72} & 5.72 & 5.77 \\\hline \multirow{2}{*}{Li$_6$} & Binding Energy & {-0.54} & {-0.54} & -0.50 & -0.72\\ & Bond Length & {5.72} & {5.72} & 5.69 & 5.79 \\\hline \multirow{2}{*}{Na$_6$} & Binding Energy & {-0.43} & {-0.43} & -0.42 & -0.53\\ & Bond Length & {6.79} & {6.79} & 6.80 & 6.87 \\\hline \end{tabular} \caption{Binding energy in electron volts per atom and bond length in atomic units for sodium and lithium dimers and octahedral clusters.} \label{Table:Li_Na_clusters} \end{center} \end{table} {We see that our results match the plane-wave results almost exactly. Additionally, the overall agreement with the values in the literature is also good.} The observable discrepancies with the results of \citep{Nogueira_EC_transferability} are probably due to the use of the LCAO method by those authors.
{We are not completely sure of the reasons behind the minor discrepancies with the results of \citep{Phanish_max_ent}. However, there seems to be some confusion in the literature about the correct values of the parameters used by the evanescent core pseudopotential \citep{EC_Fiolhais_erratum}, and this might have caused a slightly different set of parameters to have been used in \citep{Phanish_max_ent}. Also, as noted in more recent work \citep{Gavini_higher_order}, higher order finite elements are often necessary for well converged, reliable calculations, and these were not employed in \citep{Phanish_max_ent}. We believe, however, that the precise agreement between our results and the plane-wave code lends support to the credibility of our results.} Next, we study the properties of a few larger clusters of sodium consisting of $2\times2\times2$ and $3\times3\times3$ body centered cubic unit cells. We calculated the binding energy per atom and lattice constant for these clusters by computing the total energy for various values of the lattice parameter and then fitting this data to a cubic polynomial. {Our results compare essentially exactly with the results from well converged plane-wave calculations, as Table~\ref{Table:BCC_sodium} shows, assuring us of the efficacy of our method.} {{As a matter of further illustration, let us mention that for the $3\times3\times3$ sodium cluster, at the minimum energy bond length, the total ground state free energies from the plane-wave code and our code are $-20.010982$ Hartrees and $-20.011008$ Hartrees respectively. This corresponds to a difference of less than $0.5$ micro-Hartrees per atom, demonstrating the extremely high accuracies that are easily accessible with our code.}} {Other values from the literature are also shown in that table. The overall agreement with these values is also very good (the bond lengths agree to within 1\%, for example).
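The cubic-fit procedure just described (total energies at several lattice parameters, fitted to a cubic polynomial whose minimum yields the equilibrium lattice constant) can be sketched as follows. This is an illustrative implementation of our own, using synthetic data; it fits in a shifted variable for conditioning and assumes the cubic coefficient is nonzero and a true minimum exists.

```python
import math

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = s / M[r][r]
    return x

def cubic_fit_minimum(lattice_params, energies):
    """Least-squares cubic fit of E(a); returns the minimizing lattice parameter.

    Fits in the shifted variable s = a - mean(a) for conditioning, then picks
    the stationary point of the cubic with positive second derivative.
    """
    a0 = sum(lattice_params) / len(lattice_params)
    s = [a - a0 for a in lattice_params]
    # Normal equations for the basis {1, s, s^2, s^3}.
    A = [[sum(si ** (i + j) for si in s) for j in range(4)] for i in range(4)]
    b = [sum(e * si ** i for si, e in zip(s, energies)) for i in range(4)]
    _, c1, c2, c3 = solve_linear(A, b)
    disc = math.sqrt(4.0 * c2 * c2 - 12.0 * c1 * c3)
    roots = [(-2.0 * c2 + disc) / (6.0 * c3), (-2.0 * c2 - disc) / (6.0 * c3)]
    # Keep the root where E''(s) = 2 c2 + 6 c3 s > 0.
    return a0 + next(r for r in roots if 2.0 * c2 + 6.0 * c3 * r > 0.0)

# Synthetic binding curve with its minimum placed at a = 7.6 a.u.
a_vals = [7.0, 7.3, 7.6, 7.9, 8.2]
e_vals = [0.01 * (a - 7.6) ** 2 + 0.001 * (a - 7.6) ** 3 for a in a_vals]
a_eq = cubic_fit_minimum(a_vals, e_vals)
```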
The minor discrepancies from \citep{Gavini_Kohn_Sham} are likely to be explained by the factors mentioned above.}\footnote{{Also, it was not completely clear to us if the authors of \citep{Gavini_Kohn_Sham} used spin-polarization for this particular set of calculations. As before, we computed the cluster system ground states without spin polarization while the individual atomic data used spin polarization.}} \begin{table}[ht] \begin{center} \begin{tabular}{ |c|c||c|c|c| } \hline {Sodium cluster} & {Properties} & {ClusterES} & {{Plane-wave}} & Ref. \citep{Gavini_Kohn_Sham} \\\hline\hline \multirow{2}{*}{$2\times 2 \times 2$} & Binding Energy (eV/atom) & {-0.71} & {-0.71} & -0.71\\ & Bond Length (a.u.) & {7.61} & {7.61} & 7.55 \\\hline \multirow{2}{*}{$3 \times 3 \times 3$} & Binding Energy (eV/atom) & {-0.78} & {-0.78} & -0.80\\ & Bond Length (a.u.) & {7.78} & {7.78} & 7.75 \\\hline \end{tabular} \caption{Binding energy per atom and lattice constant of sodium BCC unit cells.} \label{Table:BCC_sodium} \end{center} \end{table} \subsubsection{Non-local pseudopotential calculations} \label{subsubsec:non_local_pseudo} In order to deal with a wider variety of materials systems, we now turn to calculations involving \textit{ab initio} norm-conserving non-local pseudopotentials. This class of pseudopotentials is attractive because the pseudopotentials are accurate and transferable and, at the same time, they are available for all elements in the periodic table (including ones which require relativistic treatment of the core electrons). Here, we look at the results obtained using the separable dual-space Gaussian pseudopotentials introduced in \cite{GTH_pseudoptential, GTH_relativistic}. This pseudopotential is available in analytical form with a small set of parameters for every element (thus allowing for easy implementation) and it satisfies an optimality criterion for the real-space integration of the nonlocal part.
While this pseudopotential is known to be harder than other norm-conserving pseudopotentials (i.e., it requires many more basis functions per atom for converged results), it is also known to be more accurate and transferable than other pseudopotentials \citep{GTH_pseudoptential}. We computed the bond lengths of a few small molecules using our spectral code and compared our results with values from the literature, as presented in Table~\ref{Table:GTH_molecules}. Our results all agree to within 0.2\% of the values obtained by the authors of \cite{GTH_pseudoptential}. \begin{table}[ht] \begin{center} \begin{tabular}{| c || c | c |} \hline {Molecule} & {Bond length: ClusterES} & {Bond length: Ref. \citep{GTH_pseudoptential}} \\\hline\hline CO & {2.128 a.u.} & 2.127 a.u.\\\hline CH$_4$ & {2.074 a.u.} & 2.072 a.u.\\\hline SiH$_4$ & {2.810 a.u.} & 2.810 a.u.\\\hline NH$_3$ & {1.928 a.u.} & 1.931 a.u.\\\hline H$_2$O & {1.833 a.u.} & 1.835 a.u. \\\hline \end{tabular} \caption{Bond lengths of a few small molecules computed using the Goedecker-Teter-Hutter pseudopotentials.} \label{Table:GTH_molecules} \end{center} \end{table} Next, we computed the ground state properties of a few larger systems consisting of organic molecules and fullerenes.\footnote{We are grateful to Qing-Bo Yan (UCAS, China) for making the coordinates of the Boron fullerene available to us.} {We compared our results with the literature as well as with plane-wave code\footnote{{The hardness of the pseudopotentials used often required energy cutoffs as large as $200$ Hartrees to be employed for the plane-wave code.}} calculations (using ABINIT \citep{Gonze_ABINIT_1}) and finite difference method calculations (using the Octopus code \citep{Octopus_1}). The results are presented in Table~\ref{table:buckyball_etc}. The agreement with the plane-wave and finite difference method results is excellent, thereby confirming the efficacy of our method.
The overall agreement with other independent sources from the literature is also very good.} The relatively minor differences with the results presented in \citep{zhou_hexahedral_fem} are most likely because of the use of a different pseudopotential by the authors of that work, while the difference from \citep{Boron_fullerenes} probably occurs because of the use of an LCAO basis with a gradient-corrected functional by those authors. Figure~\ref{fig:buckyball} shows the electron density iso-surfaces of the Buckyball cluster, while Figure~\ref{fig:azobenzene_contours} shows the electron density contour plots for the Azobenzene molecule. \begin{table}[ht] \begin{center} \resizebox{15.5cm}{!}{ \begin{tabular}{ |c|c||c|c|c|c| } \hline {System} & {Properties} & {ClusterES} & {Plane-wave} & F.D.M & Other sources \\\hline\hline \multirow{2}{*}{} \footnotesize{Benzene} & \footnotesize{Ground State Energy} & {-85.47} & {-85.47} & {-85.48} & {-85.65} \small{(Ref. \citep{zhou_hexahedral_fem})}\\ $\text{C}_{6}\text{H}_{6}$ & \footnotesize{HOMO-LUMO gap} & {5.15} & {5.15} & {5.15} & {5.22} \small{(Ref. \citep{zhou_hexahedral_fem})} \\\hline \multirow{2}{*}{} \footnotesize{Buckyball} & \footnotesize{Ground State Energy} & {-155.09} & {-155.09} & {-155.09} & -155.02 \small{(Ref. \citep{zhou_hexahedral_fem})}\\ $\text{C}_{60}$ & \footnotesize{HOMO-LUMO gap} & {1.64} & {1.64} & 1.64 & 1.64 \small{(Ref. \citep{C60_bandgap})} \\\hline \multirow{2}{*}{} \footnotesize{Azobenzene} & \footnotesize{Ground State Energy} & {-106.68} & {-106.68} & {-106.68} & {--}\\ $\text{C}_{12}\text{H}_{10}\text{N}_{2}$ & \footnotesize{HOMO-LUMO gap} & {1.39} & {1.39} & {1.39} & -- \\\hline \multirow{2}{*}{} \footnotesize{Boron fullerene} & \footnotesize{Ground State Energy} & {-76.94} & {-76.94} & {-76.95} & --\\ $\text{B}_{96}$ & \footnotesize{HOMO-LUMO gap} & {0.79} & {0.79} & 0.79 & 0.78 \small{(Ref.
\cite{Boron_fullerenes})} \\\hline \end{tabular}} \caption{Ground State Energy (eV/atom) and HOMO-LUMO gap (eV) of some organic molecules and fullerenes computed using our code and compared with results obtained from other sources. F.D.M denotes finite difference method calculations done using the Octopus \citep{Octopus_1} code. Plane-wave calculations were carried out using the ABINIT code \citep{Gonze_ABINIT_1}.} \label{table:buckyball_etc} \end{center} \end{table} \begin{figure}[ht] \centering \begin{subfigure}[h]{0.45\textwidth} \centering \includegraphics[scale=0.30]{./figures/Fig9a.eps} \caption{Electron density isosurface of $\text{C}_{60}$.} \label{fig:buckyball} \end{subfigure} \begin{subfigure}[h]{0.45\textwidth} \centering \includegraphics[scale=0.30]{./figures/Fig9b.eps} \caption{Electron density contours of azobenzene.} \label{fig:azobenzene_contours} \end{subfigure} \caption{Ground state electron density plots for the $\text{C}_{60}$ Buckyball and the azobenzene molecule.}\label{fig:C60_azobenzene} \end{figure} Some of the examples presented above (for both local and non-local pseudopotentials) highlight the fact that our code is easily able to handle arbitrary cluster / molecule shapes and geometries. Indeed, thanks to the convergence properties of our basis set and our efficient implementation, even linear or planar molecules, which are quite far from spherical shapes, present no issues.\footnote{{In order to further verify that our code does not face any difficulties in dealing with asymmetric or non-spherical systems, we carried out the following test (suggested to us by an anonymous reviewer): We computed the ground state energy of a single silicon atom placed at the origin of a domain of radius $20$ a.u.} {and then observed the change in energy of the system as this atom was moved outward radially. We observed that even at a radial distance of 8 a.u. from the origin, the energy change in the system remained less than $0.1$ milli-eV.
The basis set size used in these examples was not particularly large; in fact, even when we used a basis set which was $40\%$ smaller in size, the energy of the system changed only by $0.25$ milli-eV when the atom was placed at a distance of $8$ a.u. from the origin. Thus, the origin does not have a special status in our method and our code does not seem to require the use of very large basis sets in dealing with highly asymmetric systems. Of course, in most practical situations, the system under study is placed such that its center of mass coincides with the origin, at least approximately.}} The use of spherical harmonics in our basis set makes it very convenient to systematically compute several other quantities of interest, such as electrostatic multipole moments. Indeed, following the expressions presented in \citep[page 108]{dipole_electrostatic_book}, we see that quadrupole or dipole moments can be easily obtained in our method by carrying out computations similar to the computation of forward basis transforms. We carried out this exercise for obtaining the dipole moment of the carbon monoxide molecule. We chose this particular example since it appears to us that various authors seem to have obtained a wide range of values of this quantity due to systematic errors in their computations -- probably either through incomplete basis sets (see \citep{gunnarsson_diatomic} and \citep{dipole_electrostatic_book} where values of -0.01 D and -0.60 D are respectively mentioned) or possibly through the use of unconverged grids or inaccurate pseudopotentials (as apparently obtained in \citep{Chelikowsky_Saad_1}). The currently accepted value of this quantity at equilibrium bond length (using LDA calculations)\footnote{This differs from the experimental value by about a factor of two \citep[see e.g.][]{dipole_moments_1}.
This discrepancy is usually ascribed to correlation effects being insufficiently modelled in Kohn-Sham LDA calculations \citep{dipole_electrostatic_book}.} appears to be about -0.22 D \citep{dipole_electrostatic_book, dipole_moments_1, dipole_moments_2}, which agrees well with our results, as Table~\ref{table:CO_Dipole} shows. \begin{table}[ht] \begin{center} \begin{tabular}{ |c|c||c|c|c| } \hline {System} & {Property} & {ClusterES} & Ref. \citep{dipole_moments_1} & F.D.M \\\hline\hline \multirow{2}{*}{} CO & Dipole Moment (Debye) & {-0.23} & -0.22 & {-0.23} \\\hline \end{tabular} \caption{Dipole moment of the carbon monoxide molecule at equilibrium bond length. F.D.M stands for a calculation using the finite difference method carried out using the Octopus code \citep{Octopus_1}.} \label{table:CO_Dipole} \end{center} \end{table} \subsection{Benchmark calculations on large systems} \label{subsec:benchmark} Finally, in order to demonstrate the capabilities of our method in dealing with large systems efficiently, we carry out computations of the ground states of large aluminum clusters. We looked at $3\times3\times3$, $5\times5\times5$ and $7\times7\times7$ face centered cubic (FCC) clusters for this study. The lattice spacing was fixed at 7.45 a.u. for all the clusters and we used the `Evanescent Core' pseudopotential \citep{EC_Fiolhais} for these calculations. A thermalization temperature of 100 Kelvin was used. For the $3\times3\times3$ and $5\times5\times5$ clusters, in order to assess the efficacy of our method, we aimed to converge our ground state energies (per atom) to within one--two milli-electron volts of the plane-wave and higher order finite element method (FEM) results\footnote{This corresponds to relative errors of the order of $10^{-5}$.}\footnote{{It was pointed out to us by an anonymous reviewer that these levels of convergence are more demanding than the standards typically adhered to in many electronic structure calculations.
Often, it is sufficient for the ground state energy to reach convergence levels of about $1$ milli-Hartree per atom. This does not change the conclusions about the benchmark calculations laid out in this section, in the sense that the performance of our code remains very favourable when compared with a standard plane-wave code (ABINIT, \citep{Gonze_ABINIT_1}), if both codes are made to use the minimum number of basis functions that would allow them to reach convergence levels of $1$ milli-Hartree per atom in the ground state energies. In a cluster system involving 62 aluminum atoms (FCC structure with a single mono-vacancy), our code had a wall clock time which was about $1.8$} {times smaller than the wall clock time registered by the plane-wave code when both codes were made to attain this level of convergence. {Both codes were executed with the same computational resources -- specifically, each code was run on a single node of the Itasca cluster (hardware described at the beginning of Section \ref{sec:examples}) and a single computational thread on a single core of this node was used}. For reaching this convergence level, the plane-wave code used over $370,000$ basis functions, while our code used only about $43,000$. We systematically ensured that the minimum energy cutoff and cell size that would be required by the plane-wave code to reach the desired accuracy was used. We intend to present more details of these kinds of benchmarks in future work.}} presented in \cite{Gavini_higher_order}. For the 7$\times$7$\times$7 cluster, due to computational resource constraints, we used a somewhat smaller basis set than what would be required to achieve this same level of convergence. So we present here results in which the total energy was within 0.01 electron volts per atom of the higher order finite element method (FEM) results. The results are shown in Table~\ref{table:Al_clusters}. 
\begin{table}[ht] \small \begin{center} \begin{tabular}{ | c | c | c || c | c | c |} \hline System & No. atoms & No. electrons & ClusterES & Plane-wave & FEM\\\hline\hline $3\times3\times3$ & 172 & 516 & -56.01809 & -56.01814 & -56.01776\\\hline $5\times5\times5$ & 666 & 1998 & -56.05057 & -56.05068 & -56.04906\\\hline $7\times7\times7$ & 1688 & 5064 & -56.05812 & -- & -56.06826\\\hline \end{tabular} \caption{Ground state energy per atom of large aluminum clusters. Electron-volt units used. The plane wave and FEM results were obtained from reference \cite{Gavini_higher_order}.} \label{table:Al_clusters} \end{center} \end{table} To show that our methodology and its implementation are highly competitive with existing methods, we display in Table~\ref{table:run_times} timing results\footnote{{In order to enable comparison with the results in \citep{Gavini_higher_order}, we report here timings in total c.p.u. hours. This was estimated by multiplying the wall clock timings by the number of MPI processes used. The $3\times3\times3$} {and $5\times5\times5$ aluminum clusters used 16 MPI and 256 MPI processes, respectively.}} for the $3\times3\times3$ and $5\times5\times5$ systems and compare them with the results presented in \cite{Gavini_higher_order}. The computational platform used by the authors in \cite{Gavini_higher_order} was quite similar to, if not by some measures superior to, our own. Nevertheless, due to the fast convergence of our spectral basis set, the efficient basis transforms, and various other algorithmic methodologies adopted here, our code was able to significantly outperform the plane-wave and finite element codes. In particular, in spite of having access to highly efficient FFTs, the plane-wave code performance seems to have suffered due to the requirement of having large supercells (with large vacuum regions) for obtaining converged results with these clusters.
\begin{table}[ht] \begin{center} \begin{tabular}{ | c || c | c | c |} \hline System & ClusterES & Plane-wave & FEM\\\hline\hline $3\times3\times3$ FCC Aluminum cluster & 18 & 646 & 371\\\hline $5\times5\times5$ FCC Aluminum cluster & 1948 & 7307 & 6619\\\hline \end{tabular} \caption{Computational run times of ClusterES compared against existing plane-wave and FEM codes. All run times are presented in c.p.u.\ hours. The plane-wave and FEM results were obtained from reference \cite{Gavini_higher_order}.} \label{table:run_times} \end{center} \end{table} \subsection{Brief comments on symmetry adaptation} \label{subsec:symmetry_adaptation} Most plane-wave codes allow for some method of symmetry adaptation, usually in the form of special point sampling methods for the Brillouin zone \citep{chadi_cohen_special_points, evarestov_special_points}. Due to the formal similarities of our method with the plane-wave method, it is natural to investigate whether symmetry adaptation can be carried out in a straightforward way in our setting. There indeed appears to be a relatively simple way of doing so. The key point is that, as with plane waves, the basis functions in use arise as eigenfunctions of the Laplacian operator. This operator commutes with all relevant symmetry operations: with translations in a periodic setting and, similarly, with all point group operations in our setting. In our case, point group actions on the basis set can therefore be computed easily: because of the spherical harmonics, the symmetry group action on the basis set can be written down analytically in terms of Wigner D-matrices \citep{Edmonds_angular_quantum}. This gives us an efficient method of constructing Peter-Weyl projectors \citep{Folland_Harmonic, Barut_Reps} onto the symmetry invariant irreducible subspaces of the problem at hand.
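The projector construction just described can be illustrated in a heavily simplified setting. The sketch below is not our production code and all names are ours: it averages the group action of the smallest nontrivial point group $C_2 = \{e, \sigma\}$, with $\sigma$ acting by reflection on a sampled function, to project onto the symmetric irreducible subspace — the role played by the Wigner D-matrix machinery in the full method.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Illustrative sketch only: Peter-Weyl projector onto an irreducible
// component for the point group C_2 = {e, sigma}, where sigma acts on a
// function sampled at N points by the reflection x -> -x.
// P_even = (1/|G|) * sum_g chi_even(g) M(g) = (M(e) + M(sigma)) / 2.

constexpr int N = 4;
using Vec = std::array<double, N>;

Vec reflect(const Vec& f) {          // matrix of the group element sigma
    Vec r{};
    for (int i = 0; i < N; ++i) r[i] = f[N - 1 - i];
    return r;
}

Vec project_even(const Vec& f) {     // projector onto the symmetric irrep
    Vec s = reflect(f), p{};
    for (int i = 0; i < N; ++i) p[i] = 0.5 * (f[i] + s[i]);
    return p;
}
```

The output of the projector is invariant under the group action, and the projector is idempotent, as a Peter-Weyl projector must be.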
These projected subspaces can then be employed, in conjunction with subspace iteration methods (such as Chebyshev-filtered SCF iterations), to obtain a symmetry-adapted reduction of a given problem. A full-scale report on symmetry adaptation within our basis set, highlighting these points, is currently in preparation. \section{Conclusions and future directions} \label{sec:conclusions} In summary, we have proposed and implemented a method for the efficient and accurate solution of the Kohn-Sham equations for clusters. The method serves as an analog of the plane-wave method for periodic systems and, like that method, it shows rapid and systematic convergence properties. We have demonstrated that, with the adoption of various algorithmic strategies, our method produces reliable results for a vast array of materials systems. In terms of performance metrics, benchmark calculations on various cluster systems show that our method is highly competitive with other established basis sets and methods, both in accuracy and in speed. The formal analogies of our method with the plane-wave method allow us to adopt, mutatis mutandis, a multitude of numerical and algorithmic strategies commonly employed by the plane-wave method, which together lead to the efficient and reliable performance of our implementation. An additional outcome is that our method forms a systematic generalization of the approximate spherical-basis-function methods introduced earlier in the literature in a variety of contexts (most commonly for jellium calculations). Our method retains the basic simplicity of those methods (since the basis functions employed are of a similar nature), but it has far better performance and wider applicability. Our basis functions allow arbitrary point group symmetries to be exploited systematically, and obtaining leverage from this fact constitutes the subject of ongoing and future work.
A promising area of research is to use the ClusterES package for first principles materials discovery based on the ideas broadly outlined in \citep{James_OS}. In a separate contribution, we are currently following up on this work by demonstrating the use of our method in the accurate computation of quantum mechanical forces. The application of the Hellmann-Feynman force formula appears straightforward in our method: the global nature of the basis results in the absence of Pulay forces and, further, the spectral convergence properties inherent to our basis carry over to the forces. In the near future, we aim to carry out ab initio molecular dynamics simulations of various cluster systems of interest using our method. Currently, one of the main computational bottlenecks in carrying out basis transforms is the transform in the radial direction, which scales quadratically in the number of radial basis functions (see the discussion following eq.~\ref{eq:G_lmr}). So far, our use of Gauss quadrature and of machine-optimized libraries has enabled us to keep the constant in front of this asymptotic expression small, leading to the competitive run times of our code in practice. In the long run, however, an asymptotically faster algorithm should be employed, and we intend to explore various possibilities in this direction, since it has a direct bearing on our ability to successfully tackle even larger systems of interest. Finally, due to its use of Dirichlet boundary conditions, our proposed method allows charged systems to be studied easily, without the need to introduce an artificial background charge (as currently used in plane-wave codes).
Thus, the study of charged cluster systems\footnote{This possible avenue of research was suggested to us by an anonymous reviewer.} using our approach is likely to be a fruitful avenue of research in the near future. \section*{Acknowledgement} This work was primarily supported by Russell Penrose. It also benefited from the support of NSF-PIRE Grant No. OISE-0967140, ONR N00014-14-1-0714 and the MURI project FA9550-12-1-0458 (administered by AFOSR). We would like to thank the Minnesota Supercomputing Institute for making available the parallel computing resources used in this work. We would like to thank Phanish Suryanarayana (Georgia Tech.) for his many insightful comments and suggestions at various stages of this work. We would like to thank Vikram Gavini and Phani Motamarri (U. Michigan) for stimulating discussions, as well as for making available some of their Finite Element Method results, which helped us in carrying out validation studies. We would also like to thank Gero Friesecke (TU Munich, Germany) and Michael Ortiz (Caltech) for informative discussions. We gratefully acknowledge the comments of the anonymous reviewers, which helped us improve the presentation of our work. ASB and RDJ would like to acknowledge the hospitality of the Hausdorff Research Institute for Mathematics, Bonn, Germany, where this work was partially carried out. \bibliographystyle{elsarticle-num}
\section{Preliminaries} \paragraph{Model.} We use the work-span model to calculate the complexity of our algorithms. \vspace{-0.1cm} \paragraph{Related work.} Several works have presented parallel batched search trees. Paul, Vishkin and Wagener~\citep{paul1983parallel} studied 2-3 trees. Park and Park~\citep{park2001parallel} showed similar results for red-black trees. Blelloch and Reid-Miller~\citep{blelloch1998fast} presented a work-efficient parallel batched treap. Akhremtsev and Sanders~\citep{akhremtsev2016fast} studied work-efficient $(a,b)$-trees. Finally, Blelloch et al.~\citep{blelloch2016just} presented parallel work-efficient batched binary search trees built generically from a single \emph{Join} function. However, none of these works considers trees that can have lower than logarithmic height under a wide range of distributions. In this work, we consider a parallel batched version of the Interpolation Search Tree (henceforth, IST) introduced in~\citep{mehlhorn1993dynamic}. The main difference from the previously studied data structures is that insertions, deletions and searches with smoothly distributed arguments take $O(\log \log n)$ time, where $n$ is the size of the tree. \vspace{-0.1cm} \paragraph{Standard functions.} We use several standard \emph{parallelizable} functions. The \emph{scan} function calculates the prefix sums of an array of size $n$ in $O(n)$ work and $O(\log n)$ span. The \emph{filter} function removes all elements that do not satisfy a condition $C$; on an array of size $n$, it takes $O(n \cdot \mathrm{cost}(C))$ work and $O(\log n \cdot \mathrm{cost}(C))$ span. Note that in our data structure the condition functions are simple and run in $O(1)$ time, so the complexity of filter becomes $O(n)$ work and $O(\log n)$ span.
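As a point of reference, the sequential semantics of these two primitives can be sketched as follows; our implementation uses the parallel versions from the pctl library, and the function names here are illustrative.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Sequential reference semantics of the primitives used throughout.
// Scan (exclusive prefix sums): out[i] = a[0] + ... + a[i-1], out[0] = 0.
std::vector<long> scan(const std::vector<long>& a) {
    std::vector<long> out(a.size() + 1, 0);
    for (size_t i = 0; i < a.size(); ++i) out[i + 1] = out[i] + a[i];
    return out;
}

// Filter: keep the elements satisfying C, preserving their order.
std::vector<long> filter(const std::vector<long>& a,
                         const std::function<bool(long)>& C) {
    std::vector<long> out;
    for (long x : a) if (C(x)) out.push_back(x);
    return out;
}
```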
Also, we need the \emph{rank} function: given two sorted arrays $a$ and $b$ of sizes $n$ and $m$, respectively, it finds for each element $a[i]$ the position $k$ in $b$ such that $b[k - 1] \leq a[i] \leq b[k]$. It takes $O(n + m)$ work and $O(\log^2 (n + m))$ span. Moreover, we need the \emph{merge} algorithm: given two sorted arrays $a$ and $b$ of sizes $n$ and $m$, respectively, it merges the two arrays in $O(n + m)$ work and $O(\log^2 (n + m))$ span.\footnote{We can simply use \emph{rank} to achieve this merge complexity.} Finally, we need a \emph{parallel-for} loop over $n$ iterations that executes the loop steps in parallel. It is implemented using a binary-splitting technique and introduces additional $O(n)$ work and $O(\log n)$ span. \vspace{-0.1cm} \paragraph{Result.} Our data structure, the parallel batched IST, performs a batch of $M$ requests in expected $O(M \log \log (n + M))$ work and $\mathrm{polylog}(n + M)$ span. It outperforms the parallel batched treap of~\citep{blelloch2016just} in our experiments. \vspace{-0.2cm} \section{IST definition} \begin{definition} Let $a$ and $b$ be reals. An Interpolation Search Tree (IST) with boundaries $a$ and $b$ for a set of $n$ keys $\{x_1, \ldots, x_n\} \subseteq [a, b]$ consists of: \begin{enumerate} \item An array REP of representatives $x_{i_1}, \ldots, x_{i_k}$, $i_1 < i_2 < \ldots < i_k$, that is, $\text{REP}[j] = x_{i_j}$. Furthermore, $k$ satisfies $\sqrt{n} / 2 \leq k \leq 2\cdot\sqrt{n}$. \item Interpolation search trees $T_1, \ldots, T_{k+1}$ for the subarrays of keys $S_1, \ldots, S_{k + 1}$, where $S_j = \{x_{i_{j-1}+1}, \ldots, x_{i_j - 1}\}$ for $2 \leq j \leq k$, $S_1 = \{x_1, \ldots, x_{i_1-1}\}$, and $S_{k + 1} = \{x_{i_k+1}, \ldots, x_{n}\}$. Furthermore, tree $T_j$, $2 \leq j \leq k$, has boundaries $x_{i_{j-1}}$ and $x_{i_j}$, $T_1$ has boundaries $a$ and $x_{i_1}$, and $T_{k + 1}$ has boundaries $x_{i_k}$ and $b$.
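A sequential reference for \emph{rank} can be written with one binary search per element; the parallel version achieves the bounds stated above. The name \texttt{rank\_in} is ours.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sequential reference for the rank primitive: for each a[i], report the
// position k in the sorted array b with b[k-1] <= a[i] <= b[k].
// The parallel version runs in O(n + m) work and O(log^2 (n + m)) span.
std::vector<size_t> rank_in(const std::vector<int>& a,
                            const std::vector<int>& b) {
    std::vector<size_t> pos(a.size());
    for (size_t i = 0; i < a.size(); ++i)
        pos[i] = std::lower_bound(b.begin(), b.end(), a[i]) - b.begin();
    return pos;
}
```

Merging two sorted arrays with the stated bounds follows directly: rank each array in the other and write every element to its final position with a parallel-for loop.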
\item An array $\text{ID}[1, \ldots, m]$, where $m$ is some integer, with $\text{ID}[i] = j$ iff $\text{REP}[j] < a + i(b - a) / m \leq \text{REP}[j + 1]$. \end{enumerate} \end{definition} The array REP contains a sample of the keys, and the array ID helps to locate a value in REP: it gives a first approximation that is then corrected to the actual position using some search technique, for example, binary search. We require an \emph{ideal} IST to have equally spaced samples in REP. We rebuild subtrees in the insertion and deletion algorithms at \emph{appropriate} time intervals, i.e., when enough \texttt{insert} or \texttt{delete} operations have been executed on a subtree. \begin{definition} An IST with parameter $\alpha$, $\frac{1}{2} \leq \alpha < 1$, for a set $S$ of size $n$ is \emph{ideal} if $i_j = j [\sqrt{n}]$ for all $j \geq 1$, if $m = [n^\alpha]$, and if the subtrees are again ideal ISTs. \end{definition} In an ideal IST the root contains $\sqrt{n}$ keys, all nodes on the second level contain $\sqrt[4]{n}$ keys, and so on. \begin{theorem} \label{thm:ideal} Let $\frac{1}{2} \leq \alpha < 1$. An ideal IST for an ordered set $S$ of size $n$ can be built in $O(n)$ time and requires $O(n)$ space. It has depth $O(\log \log n)$. \end{theorem} Additionally, in each subtree $T$ we maintain the number of operations since the last rebuild, $C(T)$, the size of the subtree just after the last rebuild, $S_0(T)$, and the current size of the subtree, $S(T)$. An insertion or deletion works as follows. It traverses the IST recursively, visiting in each node the desired child and maintaining the counters $C(T)$ and $S(T)$, until it finds a node with $C(T) \geq S_0(T) / 4$. At that node it builds a new \emph{ideal} IST out of $T$, including the element being inserted. Deletions are performed by marking the element to be deleted; the element is physically removed only during a rebuild.
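The role of the ID array within a single node can be sketched as follows. This is a sequential illustration with hypothetical names and 0-based indexing; note that the hint from ID affects only the cost of the correction step, never its correctness, so any correction strategy (here a linear walk, in general a binary search) works.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch of the search step in one IST node (illustrative layout): the ID
// array gives a first approximation of the position of key x in REP, which
// is then corrected locally. Assumes x lies within the node's boundaries.
struct NodeSketch {
    double a, b;                  // boundaries of the node
    std::vector<double> rep;      // sorted representatives
    std::vector<size_t> id;       // hints: id[i] approximates the rank of a + i(b-a)/m
};

// Returns the number of representatives strictly below x, i.e., the index
// of the subtree that the search descends into.
size_t locate(const NodeSketch& t, double x) {
    size_t m = t.id.size();
    size_t i = std::min(m - 1, (size_t)((x - t.a) * m / (t.b - t.a)));
    size_t j = t.id[i];                               // first approximation
    while (j < t.rep.size() && t.rep[j] < x) ++j;     // correct to the right
    while (j > 0 && t.rep[j - 1] >= x) --j;           // correct to the left
    return j;
}
```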
\begin{theorem}[\citep{mehlhorn1993dynamic}] Let $\mu$ be a smooth density with finite support $[a, b]$ and parameter $\alpha$, $\frac{1}{2} \leq \alpha < 1$, and let $T$ be a $\mu$-random IST of size $n$. Then, the expected cost of processing a $\mu$-random search is $O(\log \log n)$, and the worst-case cost is $O(\log^2 n)$. The expected amortized cost of processing a $\mu$-random insertion or deletion is $O(\log \log n)$, and the worst-case amortized cost is $O(\log n)$. \end{theorem} \section{Parallel Construction of Ideal IST} We start the description of our algorithm with how to build an ideal IST from a sorted array $a$ of $n$ keys. We choose every $[\sqrt{n}]$-th element ($[\sqrt{n}], 2[\sqrt{n}], \ldots$) and form the array REP out of them. Then, we recursively build ideal ISTs from the subarrays $a[1, \ldots, [\sqrt{n}]-1]$, $a[[\sqrt{n}] + 1, \ldots, 2[\sqrt{n}]-1]$, and so on. To build them we use a parallel-for loop with $O(\sqrt{n})$ iterations, contributing $O(\sqrt{n})$ work and $O(\log n)$ span at this level. Finally, we have to compute the array ID of length $m = n^{\alpha}$, i.e., for each $k$ find the position of $a + k \cdot (b - a) / m$ in the array REP. For that we can use the standard \emph{rank} algorithm on REP (of size $O(\sqrt{n})$) and the $m$ grid points, with $O(\sqrt{n} + m) = O(m)$ work and $O(\log^2 n)$ span. The construction recurses over $O(\log \log n)$ levels, since the height of an ideal tree is $O(\log \log n)$. Thus, in total the work is $O(n)$ by Theorem~\ref{thm:ideal} and the span is $O(\log \log n \cdot \log^2 n)$. \section{Flatten IST into Array} We also need to flatten the tree, i.e., collect all the unmarked keys of the tree into a sorted array. For that we use the counters $S$ for the sizes of the subtrees. We create an array of the same length as the REP array and calculate the prefix sums of the subtree sizes using the \emph{scan} function, while \emph{filtering} out the marked keys in the REP array.
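A sequential stand-in for the recursive construction can be sketched as follows; the real code runs the recursive calls inside a parallel-for loop and builds the ID arrays with the \emph{rank} primitive (both omitted here), and the names are ours.

```cpp
#include <cassert>
#include <cmath>
#include <memory>
#include <vector>

// Sequential sketch of the recursive ideal-IST construction from a sorted
// array: every floor(sqrt(n))-th key becomes a representative, and the gaps
// between consecutive representatives are built recursively.
struct IdealNode {
    std::vector<double> rep;
    std::vector<std::unique_ptr<IdealNode>> sub;
};

std::unique_ptr<IdealNode> build(const std::vector<double>& keys,
                                 size_t lo, size_t hi) {
    auto t = std::make_unique<IdealNode>();
    size_t n = hi - lo;
    if (n == 0) return t;
    size_t step = std::max<size_t>(1, (size_t)std::sqrt((double)n));
    size_t start = lo;
    for (size_t i = lo + step - 1; i < hi; i += step) {
        t->rep.push_back(keys[i]);
        t->sub.push_back(build(keys, start, i));  // keys strictly between reps
        start = i + 1;
    }
    t->sub.push_back(build(keys, start, hi));     // trailing gap
    return t;
}

size_t depth(const IdealNode& t) {
    size_t d = 0;
    for (const auto& s : t.sub) if (s) d = std::max(d, depth(*s));
    return d + 1;
}
```

The doubly logarithmic depth is visible already at small sizes: for $n = 256$ the root holds $16$ representatives, its children hold $15$ keys each, and the recursion bottoms out after four levels.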
Then, we recursively descend into the subtrees, knowing the positions where their keys should be located in the array. The height of an IST is $O(\log n)$ in the worst case. Thus, this algorithm takes $O(n)$ work and $O(\log^2 n)$ span. \section{Batched operations} Now, suppose we are given a batch of $M$ operations to apply to a $\mu$-random IST $T$. At first, we compare the number of operations performed on $T$ since the last rebuild, $C(T)$, with a quarter of the size of $T$ after the last rebuild, $S_0(T) / 4$. If the number of operations is smaller, we find for each operation the subtree it should go to, using the standard search algorithm on the REP array of the IST, mark the necessary elements to be inserted or deleted, and then continue recursively in each subtree; we also update the size of the subtree. Otherwise, if the number of operations since the last rebuild is larger, we transform the tree into a sorted array using the flattening algorithm described above, merge it with the keys of the operations, apply the operations using a \emph{parallel-for} loop, and, finally, build the tree from the sorted array using the construction algorithm described above. Since the height of an IST is at most $O(\log n)$, the expected work is $O(M \log \log (n + M))$ and the span is $O(\mathrm{polylog} (n + M))$. \vspace{-0.2cm} \section{Experiments} We implemented the data structure in C++ and compiled it with the OpenCilk 1.0 compiler~\citep{opencilk} using the \texttt{-O2} flag. We wrote the code with the help of the pctl library presented in~\citep{acar2019provably}, which contains scalable implementations of the standard functions and provides an automatic solution to the granularity problem. To resolve issues with parallel memory allocation, we used \texttt{tcmalloc}~\citep{tcmalloc}. We ran our code on an Intel Xeon Gold 6230 machine with 16 cores.
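The dispatch between the two cases can be sketched as follows. This is a sequential stand-in in which the flattened tree is represented by a sorted vector (so the per-subtree recursion of the real structure is collapsed into a single merge); the names are ours.

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <vector>

// Skeleton of the batched-update policy: absorb the batch, and once the node
// has accumulated at least S_0(T)/4 operations since its last rebuild,
// flatten, merge, and rebuild an ideal IST from the sorted array.
struct TreeSketch {
    std::vector<int> keys;          // stand-in for the flattened tree
    size_t ops_since_rebuild = 0;   // C(T)
    size_t size_at_rebuild = 0;     // S_0(T)
};

void batched_insert(TreeSketch& t, const std::vector<int>& batch) {
    t.ops_since_rebuild += batch.size();
    std::vector<int> merged;        // parallel merge in the real code
    std::merge(t.keys.begin(), t.keys.end(),
               batch.begin(), batch.end(), std::back_inserter(merged));
    t.keys = std::move(merged);     // real code: rank batch vs. REP, recurse
    if (t.ops_since_rebuild >= t.size_at_rebuild / 4) {
        // Flatten + rebuild an ideal IST in the real structure.
        t.size_at_rebuild = t.keys.size();
        t.ops_since_rebuild = 0;
    }
}
```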
We compared our parallel batched IST with the parallel batched treap of~\citep{blelloch2016just} and the sequential std::set from the C++ standard library. For the experiments, we fill the data structures with approximately $10^7$ keys taken uniformly from $[1, 2 \cdot 10^7]$ (each key is included with probability $1/2$) and then apply a batch of $10^6$ insertions with keys taken uniformly from $[1, 2 \cdot 10^7]$. We run the data structures on $1$ and $16$ processes and calculate their speedup. The experimental results are presented in the following table; each value is averaged over $10$ runs. \begin{center} \begin{tabular}{c|c|c|c} & 1 proc, s & 16 proc, s & speedup \\\hline IST & 5.1 & 0.36 & 14.1 \\ Treap & 7.5 & 0.5 & 14.9 \\ std::set & 3.6 & --- & --- \\ \end{tabular} \end{center} As one can see, our data structure outperforms std::set on 16 processes and outperforms the parallel batched treap in both settings, although with a slightly worse speedup. \vspace{-0.2cm} \section{Conclusion} In this short paper, we presented a new parallel batched data structure based on the sequential IST~\citep{mehlhorn1993dynamic}. In our experiments, it performs better than the parallel batched treap presented in~\citep{blelloch2016just}. In the future, we plan to run more experiments on trees of different sizes, with different types of operations, and on machines with more cores.